Is Level 1 Training Evaluation Of No Use Or Can We Put It To Good Use?

Summary: Does level 1 in training evaluation really matter? Or is it a level that can be skipped by an organization? Find out more in this article.

Discussing Training Evaluation Level 1

‘Show me the money’ is not just a phrase from the movie ‘Jerry Maguire’. L&D managers face similar questions from functional heads: what happened to the money spent on training, and what impact did it have on business results in terms of revenue or costs? Return On Investment (ROI) on training has always been a topic of debate, because it is not easy to isolate the effect of training on business results when many other factors influence those results too.

Don Kirkpatrick gave us an invaluable framework for evaluating training at 4 different levels, starting from reaction and going all the way to the impact on business results. Jack Phillips later added a fifth level for calculating the ROI. But the problem with these levels is implementation. That's the reason most companies stick to the first level (the reaction of trainees), even though they never really believed the data thus collected could be of much use.

With the advent of eLearning, it has become easier to measure Level 2 (learning) as most eLearning programs have a final quiz. Coming back to Level 1 of measuring training effectiveness, it’s obvious that most of us use it because it is so easy to implement. Now, let’s see if we can make it better so that the data collected can be of practical use, not only to immediately improve the program but also to lay the foundation for a system to measure all 5 levels of training effectiveness.

What’s Level 1 Of Training Evaluation All About?

Research by ATD revealed that 91% of organizations evaluated training at Level 1, followed by Level 2 at 80% [1]. But when it came to ranking the usefulness of each level, only around 35% of organizations reported that Level 1 training evaluation offers value. Let’s see if we can change that.

To start with, Level 1 of training evaluation is all about measuring learners’ reaction to a training program. It is not intended to check the learning that has happened. It is the simplest form of training evaluation, where learners provide their feedback immediately after completing a training program. Level 1 in training evaluation captures the learners’ reaction to:

  • The course, its content, interactions, and assessments.
  • The instructor (in Instructor-Led Training), their mastery of the subject, communication skills, and ability to clear doubts.
  • The learning environment, such as classroom facilities (in ILT) or navigation and tech support (in eLearning).

You can check a sample form used to collect learner feedback here.

What Is Specifically Measured At Level 1 Evaluation?

Level 1 evaluation includes categories that are not directly related to learning but are capable of influencing learning.

1. Relevance Of The Training Program

  • Did you find the training relevant?
  • Did you like the training program? What specifically did you like?
  • Did you understand the learning objectives?
  • Would you recommend this training program to others?

2. Course Content

  • Will you be able to apply the training in your job?
  • Was the training program detailed enough?

3. Facilitator Knowledge (Applicable Only In ILT Programs)

  • Was your learning enhanced by the knowledge of the facilitator?
  • Were you comfortable with the pace of the delivery?

4. Training Delivery (Applicable Only In eLearning)

  • Did you find the course navigation easy?
  • Did you find the course engaging and interactive?

5. Facility (Applicable Only In ILT Programs)

  • Did you like the venue and the logistic arrangements?
  • Was the classroom too cold or too warm?
  • Were there distractions during the training?
  • Was the training time convenient for you?
  • Did you feel refreshed after the breaks?

By collecting such information, the training manager hopes to get an idea of how useful the course is. The flip side, of course, is that trainees’ reactions are not always objective, which makes them difficult to use to improve the program.

What Can The Utility Of Level 1 Training Evaluation Be?

Can organizations skip training evaluation at Level 1 simply because it is perceived as offering less value? No, and here’s why. Data for Level 1 training evaluation is easy to gather and not expensive to analyze. But that isn’t the only reason organizations carry out training evaluation at Level 1.

It’s essential to understand that data captured at Level 1 can form the basis for analyzing subsequent levels of training evaluation. Level 1 evaluation can provide a checkpoint should problems arise at Level 2, which measures learning. For example, if a Level 2 evaluation reveals that participants haven’t learned from the training program, the data analyzed at Level 1 can reveal the barriers to learning. The reason for ineffective learning doesn’t always lie with the content of the program or the competence of the instructor; it could be something as simple as difficult course navigation (in eLearning) or unsatisfactory logistics (in classroom training).

In this case, the data gathered at Level 1 (reaction) helps determine why learners haven’t been able to learn when you evaluate training at Level 2 (learning). Each level in training evaluation is linked to the one that follows it.

How Can You Ensure You Get The Right Results From Level 1 Training Evaluation?

Now that we have an understanding of what’s measured in Level 1 evaluation and its utility, do you think this level always gives you valid results? Not always. Richard Clark and Fred Estes researched this and published their results in the book Turning Research into Results: A Guide to Selecting the Right Performance Solutions. According to this research, results from Level 1 training evaluation have an inverse correlation with Level 3 results (on-the-job application and behavior) [2]. In other words, Level 1 evaluation often rates an excellent training program poorly, or an ineffective one highly.

Is there anything you can do to get the right results from Level 1 training evaluation? Here are a few tips to improve Level 1 evaluation forms:

1. Include Open-Ended Questions

Most Level 1 feedback forms fail when they include only closed-ended questions that elicit a ‘yes/no’ response.

Take a question such as “Did you find the training relevant?” If a learner answers ‘no’, does that mean the entire content of the training program is irrelevant, or is the learner referring only to some part of it? How would you know? It is therefore always a good idea to include open-ended questions, as they provide more detailed and specific information.

2. Include Questions That Give You Actionable Insight

Actionable insight refers to data that can be acted upon, or that helps decision makers make a decision. So, instead of asking whether there was adequate interaction, we could ask where in the program an interaction would have been helpful. Or, if the seating was cramped, there is no point asking “Was the seating arrangement comfortable?” because we already know it wasn’t. It is more useful to ask whether learners would prefer attending in smaller batches.

3. Include Both Qualitative And Quantitative Measurement

A Likert scale like the one below yields good quantitative data:

  • 1 – Strongly Agree
  • 2 – Agree
  • 3 – Neutral
  • 4 – Disagree
  • 5 – Strongly Disagree

You can also collect more specific information by adding a qualitative follow-up to a question, for example, asking learners to describe in a few lines the reason for their choice.

Take care not to include too many qualitative (open-ended) questions, as the chances of getting no response or an irrelevant one are higher. This is because Level 1 evaluation is done at the end of the training program, when participants are getting ready to leave.
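Once the forms come back, the quantitative responses are straightforward to summarize. Here is a minimal sketch, using hypothetical questions and scores, of how you might tally Likert-scale feedback per question (on the 1–5 scale above, a lower average means stronger agreement):

```python
# A minimal sketch of summarizing Level 1 Likert-scale feedback.
# The questions and scores below are hypothetical examples.
from collections import Counter
from statistics import mean

# Responses on the scale above: 1 = Strongly Agree ... 5 = Strongly Disagree
responses = {
    "The course content was relevant to my job": [1, 2, 1, 3, 2, 1, 4],
    "The course navigation was easy": [2, 2, 3, 5, 4, 4, 5],
}

for question, scores in responses.items():
    avg = mean(scores)
    distribution = dict(sorted(Counter(scores).items()))
    # A low average flags agreement; a spread-out distribution flags
    # a mixed reaction worth following up with qualitative questions.
    print(f"{question}: mean={avg:.2f}, distribution={distribution}")
```

A per-question mean and distribution like this makes mixed reactions visible at a glance, which a simple yes/no tally would hide.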

Here’s a list of criteria, along with changes you can make, to improve the results of Level 1 training evaluation.

1. Quality Of Source

  • Evaluating the results: Check the credibility of your information sources. For example, Level 1 data reveals that only new hires found the training program very useful. Are you sure the new hires understood the questions in the feedback form?
  • Ways to improve: Ensure that the questions included in the Level 1 reaction sheet are easy enough to be understood by beginners at the workplace.

2. Clarity Of Data

  • Evaluating the results: Is there a similar response from most learners, or a mixed reaction? Is the feedback consistent in the qualitative data, or is there a mismatch?
  • Ways to improve: Recheck the questions included in your reaction sheet and remove any ambiguous ones.

3. Chances Of Bias

  • Evaluating the results: Is there a chance of biased responses? Learners may not reveal their exact reaction.
  • Ways to improve: Check for problems in the data collection methods you use. Most often, giving participants the option to respond anonymously is a better idea.

4. Reliability Of Data

  • Evaluating the results: Check the consistency of the data. Have you received similar responses from different groups of participants? Have the responses been similar even when learners have varying levels of experience?
  • Ways to improve: Learners’ experience levels sometimes influence their reaction to training. For example, if your training program expects a certain level of proficiency from learners prior to attending, it is essential to communicate that to participants well in advance.
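The reliability check described above can be done mechanically: compare average ratings across learner groups and flag large gaps. A minimal sketch, with hypothetical group names and scores:

```python
# A minimal sketch of a reliability check on Level 1 data: compare
# average ratings across learner groups to spot inconsistent reactions.
# The group names and scores below are hypothetical examples.
from statistics import mean

# 1 = Strongly Agree ... 5 = Strongly Disagree
ratings_by_group = {
    "new hires": [1, 1, 2, 1, 2],
    "experienced staff": [4, 3, 5, 4, 4],
}

group_means = {group: mean(scores) for group, scores in ratings_by_group.items()}
spread = max(group_means.values()) - min(group_means.values())

# A large gap between group averages suggests the reaction depends on
# experience level, e.g. missing prerequisites for one of the groups.
if spread > 1.0:
    print("Inconsistent reactions across groups:", group_means)
```

The threshold of 1.0 here is an arbitrary illustration; in practice you would pick a cutoff that suits your rating scale and group sizes.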

If you carry out Level 1 training evaluation as described above, you should be able to understand how well your employees received the training, and also identify gaps you can address to improve the program.

References:

[1] The Value of Evaluation: Making Training Evaluations More Effective

[2] 8 Tips On Developing Valid Level 1 Evaluation Forms