What’s Missing In Your Online Training Evaluation - And How To Fix It


The Kirkpatrick evaluation model is the gold standard for assessing online training programs, but most organizations can’t use it to prove ROI.

eBook Release: Big Data For HR: How Predictive Analytics Can Deliver Business Value
An eBook covering the questions HR managers can answer with Big Data and predictive analytics.

The Kirkpatrick Model [1] was developed to help trainers measure the effectiveness of their training in an objective way. It’s the most recognized method of evaluating a training program’s ROI. But without the means to measure and analyze the right data, many organizations can’t make full use of the model.

The Kirkpatrick Evaluation Model

Developed by Donald Kirkpatrick in 1959, the Kirkpatrick Model has seen several updates and revisions. It identifies four levels of learning, with each level building on the one before it.

Level 1: Reaction

How much do participants find the training favorable, engaging, and relevant to their jobs? This includes the learner’s satisfaction with the training, their active engagement with learning exercises, and the perceived relevance to their actual work.

Level 2: Learning

To what degree do participants acquire the knowledge, skills, attitude, confidence, and commitment based on their participation in the training? Level 2 is measured by students’ acquisition of:

  • Knowledge (“I know it”).
  • Skill (“I can do it right now”).
  • Attitude (“I believe this will be worthwhile to do on the job”).
  • Confidence (“I think I can do it on the job”).
  • Commitment (“I intend to do it on the job”).

Level 3: Behavior

To what degree do participants apply the training once they are back on the job? This is measured by the processes and systems that reinforce and reward performing core behaviors on the job.

Level 4: Results

How well do targeted outcomes occur as a result of the training and the support and accountability package? Level 4 is measured by indications that core behaviors are on the right track for creating positive impact.

Using an evaluation model like the Kirkpatrick Model can help you objectively analyze the effectiveness and impact of your online training program. As you gather more data from each level, you can see where your learning modules are performing strongest, and identify opportunities for improvement.

For example, if you’re seeing a drop off in effectiveness between Level 2 and Level 3, you’ll know that your students are learning the material well, but they’re having trouble applying it to their job roles. This gives you some direction for investigating the disconnect and modifying your course design as needed.

Adding ROI To Kirkpatrick

The Kirkpatrick model is important, but the model itself doesn’t provide the means to measure the four levels. Without a measurement process in place to evaluate learning effectiveness, the model isn’t very practical.

Jack Phillips made a major contribution to the field of learning measurement by developing a process to measure Kirkpatrick’s four levels [2]. And perhaps just as significant, he added a fifth level—Return on Investment (ROI).

Phillips’ model makes it possible to collect Level 3 and 4 data and determine the impact of training itself on job performance. With the Phillips model, you can isolate the impact of online training, versus other factors, at Kirkpatrick’s Level 4. From there, it’s possible to calculate ROI by converting impact into monetary value and comparing it to the cost of your online courses.
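The Phillips Level 5 calculation can be sketched in a few lines. This is a minimal illustration of the standard ROI formula, ROI (%) = (net benefits / program costs) × 100; the dollar figures below are hypothetical, not drawn from any real program.

```python
# Minimal sketch of the Phillips ROI formula.
# All figures are hypothetical illustrations.

def training_roi(monetary_benefits: float, program_costs: float) -> float:
    """Return ROI as a percentage: (net benefits / costs) * 100."""
    net_benefits = monetary_benefits - program_costs
    return (net_benefits / program_costs) * 100

# Example: training isolated to produce $150,000 in benefits
# against $50,000 in course costs.
roi = training_roi(150_000, 50_000)
print(f"ROI: {roi:.0f}%")  # ROI: 200%
```

The hard part, of course, is not the arithmetic but isolating the training's contribution and converting impact into the monetary benefits figure in the first place.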

Maximizing ROI On Your Training Investment

Training is a critical part of the success of any organization. The problem is, determining the actual ROI of training is a perennial challenge.

To calculate the ROI of your training, you’ll need to measure organizational results [3]. That means selecting the right metrics based on actual job performance data, not just training performance. And the data should be able to demonstrate that the training is responsible for increasing revenue or decreasing costs.

For example, senior living centers rely heavily on proper training to both reduce costs and increase revenue. Nurses are trained in workplace safety, preventing patient falls, and administrative procedures. Avoiding accidents and improving administrative efficiency cuts costs (including lawsuits!) for the organization. At the same time, staff are also trained to care for residents’ families, which can have an indirect impact on increasing revenue—referrals are a major source of new business for nursing facilities.

But how do you identify if training is impacting your bottom line? To accurately measure the effectiveness of your training program, you’ll need tools that can help you answer the following questions:

  • Is the training program effective?
  • How can we improve the program?
  • Did the program achieve the desired results at the lowest possible cost?
  • What are the causes of ineffective training?
  • What are our course enrollments and rate of completion?
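The last question on the list is the most mechanical to answer. As a rough sketch, enrollment and completion rates can be computed from an LMS export; the record structure and field names below are assumptions for illustration, since export formats vary by LMS.

```python
# Hedged sketch: enrollment counts and completion rates from a
# hypothetical LMS export. Field names ("course", "status") are assumed.
from collections import Counter

enrollments = [
    {"course": "Safety 101", "status": "completed"},
    {"course": "Safety 101", "status": "in_progress"},
    {"course": "Safety 101", "status": "completed"},
    {"course": "Admin Procedures", "status": "completed"},
]

# Total enrollments and completions per course.
by_course = Counter(e["course"] for e in enrollments)
completed = Counter(e["course"] for e in enrollments if e["status"] == "completed")

for course, total in by_course.items():
    rate = completed[course] / total * 100
    print(f"{course}: {total} enrolled, {rate:.0f}% completed")
```

The other questions on the list require connecting this kind of training data to business outcomes, which is where most LMS reporting stops.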

The Gap In Evaluating Learning And Performance

Most Learning Management Systems can automatically track and report data at Kirkpatrick’s Level 1 and Level 2. Assessment tools can capture students’ reactions to the course, and templates can create reports (Level 1). Training programs can also easily administer pre- and post-tests that evaluate learning results (Level 2).

But beyond that, there’s a critical gap in analysis. When it comes to evaluating learners’ ability to apply the training (Level 3) or judging whether the targeted outcomes are achieved (Level 4), LMSs are incomplete—and gathering the data manually is expensive and time-consuming.

What makes Levels 3 and 4 so difficult to evaluate? At Levels 1 and 2, data is collected during the course, within the LMS. But Levels 3 and 4 require data from outside the LMS, after training is completed. Evaluations at these levels are inherently more complicated and costly. As a result, few companies bother to evaluate the business impact of an online course.

The data is usually already there in the organization: individual performance data exists in performance management systems, and organizational data exists in marketing, sales, and financial systems. But there’s no easy way to bridge the gap to collect and analyze the training and performance data together.

It’s imperative that organizations fully understand the ROI of their training programs, because ultimately their goal is to increase revenue and cut costs. If training courses aren’t achieving this goal, they’re negatively impacting the business.

Zoola™ Analytics can bridge the gaps. Zoola helps overcome the limitations in LMS reporting and analytics by addressing several challenges:

  • Limited access to data.
  • Limited capabilities to analyze data.
  • Limited options to present data.
  • Time spent creating reports.
  • Limited ability to demonstrate the business value of learning.

Zoola Analytics helps organizations make sense of their learning data quickly and easily, by providing deeper insight into the performance of learners, and learning programs — including how long learners are spending in courses, where learners are struggling or excelling, and how effectively they can apply what they’ve learned. Zoola lets you generate and share real, actionable insights in just minutes. In fact, Zoola can reduce the time and effort required to build reports by as much as 90-95%.

Closing The Gap In Course Evaluation

The data you need to bridge the gap between training and performance most likely already exists in your company. Individual performance data exists in performance management systems. Organizational data exists in marketing, sales, and financial systems.

Bridging this gap requires a technical infrastructure that minimizes the administrative effort needed to collect and analyze the training and performance data together. While most learning management systems don’t bridge the gap, some advanced analytics tools, such as Zoola, provide the connection that your organization needs for proving ROI.
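At its core, bridging the gap is a join between two systems of record on a shared employee identifier. The sketch below illustrates the idea with in-memory dictionaries; both datasets, the ID scheme, and the rating scale are hypothetical stand-ins for whatever your LMS and performance management system actually export.

```python
# Hedged sketch: joining LMS assessment scores (Level 2) with
# job performance ratings (Levels 3/4) on an employee ID.
# Both datasets and their values are hypothetical.

lms_scores = {"e01": 92, "e02": 78, "e03": 85}      # post-test scores from the LMS
performance = {"e01": 4.5, "e02": 3.1, "e04": 4.0}  # ratings from the HR system

# Inner join on employee ID: keep only employees present in both systems.
joined = {
    emp: {"score": lms_scores[emp], "rating": performance[emp]}
    for emp in lms_scores.keys() & performance.keys()
}

for emp in sorted(joined):
    row = joined[emp]
    print(f"{emp}: test score={row['score']}, job rating={row['rating']}")
```

In practice this join happens inside an analytics tool against live data sources rather than hand-built dictionaries, but the principle is the same: once training and performance records share a key, Level 3 and Level 4 questions become answerable queries.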

Ready to deepen your LMS mastery? Check out our upcoming webinars!

References:

  1. Four Levels of Evaluation
  2. Re-Evaluating Evaluation: Jack Phillips and ROI (Part 2)
  3. Evaluating Training ROI With a Learning Intelligence System

Related Articles:

  1. Free eBook – Big Data For HR: How Predictive Analytics Can Deliver Business Value
  2. 4 Ways Predictive Learning Analytics Decreases Ineffective Learning
  3. The Reason You Need Big Data To Improve Online Learning
  4. 7 Steps To Successfully Implement Learning Analytics In Your Company
  5. 4 Tips For Improving Online Course Design With Learning Analytics