4 Questions To Ask When Evaluating Training

What To Ask Before You Embark On Evaluating Training

The Power Of Evaluation

Evaluation is a critical stage in any training process. Without clear, actionable evaluation, there’s no way to know whether your training has been a success. This means that you can’t improve future training, you can’t explain the power and importance of your training, and you can’t tie your training to business outcomes.

By contrast, a great evaluation will show you exactly where your training is succeeding and where it is failing, and let you take steps to make it better. It will also allow you to show the rest of your business how powerful your training has been.

eBook Release: Establishing A Learner-First Training Model
This eBook aims to put in place the framework that Coassemble developed and refined over the years, providing a step-by-step guide to implementing your own learner-first training model.

There are 4 key questions to ask yourself when evaluating your training:

Question 1: How Will I Know If My Learners Learned Anything?

The first step in evaluating training is understanding whether your learners have actually taken on the knowledge that you’ve shared.

This may sound pretty obvious, but it’s not as simple as just looking at your learners’ completion data.

Look back at your learning goals (if you haven't yet defined your learning goals, then start thinking about How To Identify Clear Learning Goals [1]). What knowledge were you looking to share with your learners?

Now, tie this to a measurable outcome. If you had tested your training prior to rollout [2], then you would have done exactly this already.

For example, if your learning goals are to increase your sales team’s knowledge of your product, then your measurable outcome should be something along the lines of this: ‘After taking my training program, my sales team will increase their scores on a product-based test by X%’.

By attaching a quantitative target to your training and measuring it through pre- and post-training assessments, you'll be able to objectively determine whether you've hit your learning goals.
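The pre/post comparison above can be sketched in a few lines of code. This is a minimal, hypothetical example: the scores, the helper names, and the 20% target are all illustrative assumptions, not data from any real LMS.

```python
# Hypothetical sketch: checking a measurable learning goal against
# pre- and post-training test scores. All names and numbers here
# are illustrative assumptions.

def average(scores):
    return sum(scores) / len(scores)

def goal_met(pre_scores, post_scores, target_lift_pct):
    """Return (met, lift_pct): did average scores improve by at
    least target_lift_pct, and by how much did they actually lift?"""
    pre_avg = average(pre_scores)
    post_avg = average(post_scores)
    lift_pct = (post_avg - pre_avg) / pre_avg * 100
    return lift_pct >= target_lift_pct, lift_pct

# Example: a sales team's product-knowledge test, with a goal of
# 'increase scores by 20%' as the measurable outcome.
met, lift = goal_met([55, 60, 70, 65], [75, 80, 85, 78], target_lift_pct=20)
```

Here the team's average score rises from 62.5 to 79.5, a lift of 27.2%, so the 20% goal is met; the same check works for any learning goal you can express as a score delta.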

Unless you’re running accreditation or compliance-based training, completion data probably won’t cut it. Define your success metric, make it measurable, and keep it as your North Star for evaluation purposes.

Question 2: How Will I Know If My Learners Applied Their Learning?

Too many trainers complete the step above and end their evaluation process. After all, they’ve measured whether their learners have understood their training, and that’s what determines the success of training, right?

In most cases, successful training outcomes are a little more complex than that. Real-world training isn’t just about sharing the knowledge you need to share; it’s about having learners apply that knowledge. To revisit our example above, what’s the point of your sales team understanding more about your product if they don’t then apply that knowledge?

The second step in building a solid evaluation process is evaluating the application of knowledge. To do this, you’ll need to monitor the behaviour of your learners and analyse measurable data on their performance post-training. This will probably require stepping outside your Learning Management System and into other areas of the business.

For example, you may want to monitor the close rate (also known as the conversion rate) of your sales team: does it increase following your product training program? Or, you may want to monitor feedback from leads interacting with your sales team: are they reporting that your sales team has greater product knowledge following your program? If so, you can demonstrate that your training was successful.

Knowledge is often applied in ways you may not anticipate. For this reason, try to monitor as many data points on your learners’ performance as possible.

If you have the luxury of rolling out your training in stages, consider splitting your learner personas into control and test groups to better isolate the effect of your training on performance. For example, you could split your sales team into 2 similar groups, deliver training to one group and not the other, and measure the performance of both groups after your training has been delivered. Did the group that underwent training have a better close rate? You’ll be able to separate the effect of your training from other factors that could be affecting performance.
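The control/test split described above can be sketched as follows. Everything here is a hypothetical illustration under simple assumptions: a random 50/50 split of the team, and a close rate computed from a flat list of won/lost deals rather than from any particular CRM.

```python
# Hypothetical sketch of a staged-rollout comparison. The group
# assignment, deal records, and close-rate metric are illustrative
# assumptions, not part of any specific platform or CRM.
import random

def split_into_groups(learners, seed=42):
    """Randomly split learners into control and test halves.
    A fixed seed keeps the split reproducible."""
    shuffled = learners[:]
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # (control, test)

def close_rate(deals):
    """deals: list of (learner, won) tuples; returns the share won."""
    if not deals:
        return 0.0
    return sum(1 for _, won in deals if won) / len(deals)

team = ["Ana", "Ben", "Cho", "Dev", "Eli", "Fay"]
control, test = split_into_groups(team)
# ...deliver training to `test` only, then gather each group's
# post-training deals and compare close_rate(control_deals) with
# close_rate(test_deals)...
```

Because both groups face the same market conditions over the same period, a close-rate gap between them is much easier to attribute to the training itself.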

Question 3: How Will My Learners Know If They Were Successful?

Evaluation doesn’t just happen on the trainer side. Great trainers ensure that their learners have a clear understanding of how successfully they completed their training, and how they could improve in the future.

Ensuring your learners have access to their learning results is a good start, but also consider how you will provide context to help them evaluate their success, and how you will engage them to view their results.

Providing context will allow your learners to better understand how well they completed their training, how to apply it, and how to do better next time. You can provide context through mechanisms like trainer feedback alongside results, but also by more engaging methods, such as in-person sessions with a trainer or manager to run through results and how to apply the training, or a social chat room where learners can digest their performance. Giving learners access to their results in comparison to the rest of their cohort will also help them contextualise their success.

Making learning results available to learners is critical, but you’ll also need to engage them to view their results. Consider how you will notify your learners that their results are ready, and how you’ll display those results to them. If you’ve identified your learner personas, then referencing those personas will help you understand how your learners want to be engaged and what will motivate them to evaluate their success.

Question 4: How Will My Designers Know If Learning Was Successful?

Evaluation is most powerful when it acts as a feedback loop, positively impacting future training. One way to ensure that this feedback loop is happening is to include your learning designers in your evaluation process.

Consider what metrics will be important to your learning designers; for example, you may want to couple measurable training results with engagement metrics like time spent on content, and make them available to ensure that future training design is built off key learnings from your current content.
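Coupling assessment results with engagement metrics, as suggested above, can be sketched as a simple per-module join. The field names, modules, and numbers below are illustrative assumptions, not the output of any real reporting tool.

```python
# Hypothetical sketch: merging assessment results with engagement
# metrics so learning designers can spot weak modules. All field
# names and values are illustrative assumptions.

results = [  # per-module average assessment score (%)
    {"module": "Pricing", "avg_score": 62},
    {"module": "Features", "avg_score": 88},
]
engagement = [  # per-module average time on content (minutes)
    {"module": "Pricing", "avg_minutes": 4},
    {"module": "Features", "avg_minutes": 11},
]

def designer_report(results, engagement):
    """Join both metric sets by module for the design team."""
    minutes = {e["module"]: e["avg_minutes"] for e in engagement}
    return [{**r, "avg_minutes": minutes.get(r["module"])} for r in results]

report = designer_report(results, engagement)
# A low score paired with low time-on-content may suggest a module is
# being skimmed; a low score with high time-on-content may suggest the
# content itself is unclear.
```

Seeing both numbers side by side tells designers not just *which* module underperformed, but gives a clue as to *why*.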

Putting It All Together

Let’s recap. You can now:

  • Measure that your learners successfully took on the knowledge you needed to share with them.
  • Measure that your learners then applied that knowledge to deliver real-world business outcomes.
  • Share training results with your learners to boost engagement and future training performance.
  • Share training results with your learning designers to improve future training content.

What's Next?

We’ll take a look at how to set up your reporting to answer the questions above. Stay tuned for an article on setting up reporting for ongoing evaluation in the next few days.

Wondering how to implement your own learner-first training model? Download the eBook Establishing A Learner-First Training Model and discover the step-by-step process for putting learner success first. Also, discover how to incorporate career mapping into your digital training strategy using a learner-first approach in the webinar 6 Steps To Incorporate Career Mapping Into Your Digital Training Strategy.

Read more:

  1. How To Identify Clear Learning Goals
  2. How To Test Your Training Prior To Rollout
eBook Release: Coassemble
Train the easy way with Coassemble’s all-in-one online training platform. Create, deliver, train, and report, all from one place. Easy!