5 Questions To Ask When Testing Your Training Prior To Rollout
The learner-first training model is simple but radically different from the way a lot of organisations train. In a learner-first training approach, trainers always start with the learner. The learner is the basis for every aspect of a company's training strategy, from its learning goals to its learner pathways, to the tools it chooses for training and the way it designs that training.
Whilst this may seem like common sense, it's actually not what a lot of trainers do.
The Stages In Learner-First Training: When To Start Testing
The learner-first approach is a structured process that places the learner at the heart of the training strategy. The stages of this process are as follows:
- Evaluate: Who are my learners? What knowledge do I need to share with them?
- Build: How will my learners learn? What tools will I use for training?
- Design: What does my content look like?
- Test: Does my training process successfully satisfy my learning goals? Is my data clear and accurate?
- Disseminate: Can my learners access my content? Can I scale my training?
We’ve previously written articles on creating learner personas [1], identifying learning goals, structuring learner pathways, choosing tools, and planning and designing training content.
All of these stages are important prerequisites for testing your training design: learner personas tell you who to test your training on, learning goals define what will make your test successful, and learner pathways allow you to build a test that represents your real learning environment. Of course, planning and designing training content are also key, as you won't have anything to test if you haven't created any training yet.
Here are some questions to ask when testing your training:
1. What Am I Testing?
Like any good test, testing your training program should begin with a hypothesis: a testable statement of what will make your test a success. This hypothesis should map back to your learning goals. Make sure the hypothesis is falsifiable; in other words, your test must be able to show whether it is right or wrong.
If your learning goal is to increase your sales team’s knowledge of your product, then your hypothesis should be something along the lines of: ‘after taking my training program, my sales team will increase their scores on a product-based test by X%’. By attaching a quantitative target to your test and measuring it with pre- and post-training assessments, you’ll be able to objectively measure whether you’ve hit your learning goals.
This stage is also where you’ll define your success metric. Are you aiming for deeper knowledge as measured by improved test scores? Or, are you more focused on improving employee engagement levels during training, as measured by time spent on training content or successful adoption of content? Do you simply want to improve completion rates?
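To make this concrete, here is a minimal sketch (in Python, using hypothetical learner scores and a hypothetical 20% target, neither of which comes from the article) of how a hypothesis of the ‘improve test scores by X%’ kind could be checked once your pre- and post-training results are in:

```python
# Minimal sketch: checking a hypothesis of the form
# "after training, my sales team will improve their product-test scores by X%".
# All learner names, scores, and the 20% target below are hypothetical examples.

pre_scores = {"alice": 55, "ben": 62, "carla": 48}    # pre-training test scores (%)
post_scores = {"alice": 71, "ben": 70, "carla": 66}   # post-training test scores (%)
target_improvement = 20.0                             # the "X%" in the hypothesis

pre_avg = sum(pre_scores.values()) / len(pre_scores)
post_avg = sum(post_scores.values()) / len(post_scores)
improvement = (post_avg - pre_avg) / pre_avg * 100

print(f"Average improvement: {improvement:.1f}%")
print("Hypothesis supported" if improvement >= target_improvement
      else "Hypothesis not supported")
```

However you run the numbers, the point is the same: decide the target before the test, then let the data tell you whether you hit it.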
2. Who Am I Testing On?
In order to effectively test your training, you’ll need to pick test subjects that are representative of the learner base that will be taking your training. If there are multiple cohorts within your learner base, you’ll need representative subjects from each of these cohorts. The easiest way to select representative subjects across multiple cohorts is to refer to your learner personas (if you haven’t yet defined your learner personas, here’s an article on how to), and pick samples from each persona group.
3. What Data Am I Collecting?
The testing phase is not just about proving your hypothesis; it’s also about making sure you’re successfully gathering data on each step of your training program before rolling it out across your learner base. Consider the data points you’ll need to collect: for example, course completion rates, time spent on content, individual quiz scores, and dropout rates at each stage of your learner pathway [2]. Use your test to confirm that these data points are pulling through accurately for each learner. This is your chance to make sure your reporting is clean and there is no data leakage!
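As a rough illustration, the sketch below (Python, with hypothetical field names and learner records, not any particular LMS export) shows one way to flag learners whose test records are missing any of the expected data points before you roll out more widely:

```python
# Minimal sketch: confirming that every expected data point has been captured
# for every learner in the test cohort. Field names and records are hypothetical.

expected_fields = ["completion_rate", "time_on_content_mins", "quiz_score", "dropout_stage"]

test_records = [
    {"learner": "alice", "completion_rate": 1.0, "time_on_content_mins": 42,
     "quiz_score": 71, "dropout_stage": None},
    {"learner": "ben", "completion_rate": 0.8, "time_on_content_mins": 35,
     "quiz_score": 70},  # missing "dropout_stage" -- should be flagged
]

for record in test_records:
    missing = [field for field in expected_fields if field not in record]
    if missing:
        print(f"{record['learner']}: missing data points {missing}")
    else:
        print(f"{record['learner']}: all data points present")
```

Catching gaps like this during the test phase is far cheaper than discovering them after your whole learner base has completed the program.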
4. Putting It All Together
To recap the above in a step-by-step process, here’s an example of a structured testing process:
- Decide on your hypothesis and define a measurable metric for success.
- Gather a representative cohort (based on your learner personas).
- Conduct pre-testing to establish a baseline.
- Conduct testing and prove or disprove your hypothesis.
- Ensure data is clean and accurate, ready for wider dissemination of training.
5. My Program Is Ready To Roll Out. What's Next?
You’re ready for the dissemination phase! This means rolling out your training across your learner base, scaling your training process, and evaluating your data. Stay tuned for an article on how to disseminate training in the coming days.
Wondering how to implement your own learner-first training model? Download the eBook Establishing A Learner-First Training Model and discover the step-by-step process for putting learner success at the heart of your training. Also, discover how to incorporate career mapping into your digital training strategy using a learner-first approach with the webinar 6 Steps To Incorporate Career Mapping Into Your Digital Training Strategy.