7 Steps To Design The Perfect Training Evaluation Strategy For Learning Analytics

Summary: Evaluating your training is an inseparable part of learning analytics. Here is a 7-step plan to design the perfect evaluation strategy.

Designing The Best Training Evaluation Strategy For Learning Analytics

Globally, millions are spent on employee training every year. When all is said and done, the only questions that truly matter are:

  • Has the training delivered the impact the organization was seeking in the first place?
  • Has it generated the ROI needed to justify the budget spent on the training?
eBook Release: Leveraging Learning Analytics To Maximize Training Effectiveness - Practical Insights And Ideas
Learn how to establish the required KPIs, and more—this eBook is a how-to guide on learning analytics.

The Purpose Of Evaluation

L&D teams in every organization are required to prove to their stakeholders that a certain training program is bringing actual improvements to employee performance and, at large, monetary benefits to the organization. How do they determine if a training program is, well, working? How do they test the effectiveness and the impact (or the lack thereof) of employee development activities? The short answer is, by having a training evaluation strategy and program in place.

Now, evaluating a training program is not just about focusing on whether the learning objectives of the training were met or if the participants had an engaging learning experience. The ultimate purpose is to see how effectively learners are able to apply the knowledge gained to their job. Equally important is evaluating the effectiveness of the training program in regard to time, budget, and resources.

The Perfect Training Evaluation Strategy

The perfect training evaluation strategy is one that informs stakeholder decisions and results in action—decisions pertaining to whether the training was worth the investment along with the efficiency and effectiveness of the training itself. Perhaps the best way to illustrate this is through an example.

Business Stakeholder: "We've recently rolled out a new product. Now we're implementing a program where our contact center staff will make outbound calls to marketing-generated leads. With each lead, we expect them to learn more about the customer's potential uses of our product, explain our product's features and benefits, address any issues or concerns, and schedule an appointment for a demonstration by the sales team.

We're pilot-testing the program with one unit (about 50 people) for 6 months. No surprise, there's a lot of visibility on this program. We need to ensure our staff is well-prepared and able to generate results. If we can show a positive ROI with the pilot, I'm confident we'll get the support to roll out the program to the whole center. Can you handle it?"

You: "Absolutely!"

The goal here is to measure the efficiency and effectiveness of the contact center staff training; to see if they are trained properly, are well-prepared, and can generate results (i.e., funneling as many appointments as possible for the sales teams).

Let's assume you designed an effective blended training solution for the above scenario. The training is going to be divided into 3 stages: Before—During (Inform, Demo, Practice, Assess)—After (refresher).

Before
This stage involves spreading the word about the upcoming training using video trailers, posters, and the like, to get the staff excited and prepared.

During
This stage will involve the contact center staff:
  • Discovering and learning the potential uses of the product(s).
  • Practicing and demonstrating effective ways of explaining the product features and benefits to the leads.
  • Addressing their questions and concerns by means of 'role-playing' with a senior sales manager, or by using technology-enabled solutions such as scenario-based learning.
  • Scheduling appointments (via an online system) for demonstrations by the sales teams.
  • Getting assessed on the knowledge gained, using paper-based assessments and/or online scenario-based knowledge checks.

After
Offering post-training refresher and reinforcement using, for example, microlearning assets such as PDFs, mobile-compatible videos, and infographics.

To evaluate this blended learning program, here are 7 simple steps that will lead to an effective evaluation effort.

Decoding The 7 Steps

1. Identify The KPIs

Identify the associated key performance metrics and performance targets.

KPIs are used to measure the staff's performance in achieving goals. These metrics can be a number or a ratio. For example:

  • Number of outbound dials made per day by the staff
  • Number of actual outbound calls (conversations) made per day
  • Number of appointments scheduled
  • Appointment "close ratio"
  • Product sales revenue generated

The idea is to specify the conditions and criteria needed to reach an optimal performance rate. For example, if an employee is expected to make 20 calls a day and schedule at least 5 appointments, what time, budget, and resources are needed to make that possible? It is also critical to determine the target conversion rate, i.e., the appointment "close ratio": the number of calls needed to schedule an appointment.
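To make the target concrete, here is a minimal sketch (in Python) of the close-ratio arithmetic from the example above, showing both framings of the metric: the fraction of calls that end in an appointment, and the number of calls needed per appointment.

```python
# Worked example of the close-ratio target: 20 calls per day,
# with at least 5 appointments scheduled.

def close_ratio(appointments: int, calls: int) -> float:
    """Fraction of calls that end in a scheduled appointment."""
    return appointments / calls if calls else 0.0

calls_per_day = 20
appointments_per_day = 5

ratio = close_ratio(appointments_per_day, calls_per_day)
calls_per_appointment = calls_per_day / appointments_per_day

print(f"Close ratio: {ratio:.0%}")                       # 25%
print(f"Calls per appointment: {calls_per_appointment}")  # 4.0
```

Either framing works as a KPI target; what matters is that the target is stated numerically before training begins, so post-training performance can be compared against it.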

2. Enable Learning Analytics

Ensure the desired learning analytics are enabled within the components of the blended solution and confirm access to performance data.

Be it technology-based or classroom-based training, every activity has to be designed to enable the capture of learning data: data that provide insights into each learner's individual performance. Each of these activities needs to be measured using some combination of an LMS, focus groups, interviews, reports from survey software, feedback from supervisors, document/system analysis, and observation.

3. Collect Descriptive Learning Analytics

The basic evaluation is often called "descriptive analytics" because it describes what has happened: it summarizes the data collected to identify patterns in learners' performance and interpret what they mean.

Performance data provides insight into how well employees did in the course; for example, this information can be taken from classroom or online assessments. Data gathering begins with descriptive analytics, the first step toward further analysis.
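As a sketch of what descriptive analytics can look like at this step, the snippet below (Python, with hypothetical assessment scores and a hypothetical pass mark) summarizes course results into a few headline figures:

```python
from statistics import mean

# Hypothetical post-training assessment scores (out of 100).
scores = [78, 92, 65, 88, 71, 95, 60, 84]
pass_mark = 70  # illustrative threshold, not from the scenario

summary = {
    "learners": len(scores),
    "average_score": round(mean(scores), 1),
    "pass_rate": sum(s >= pass_mark for s in scores) / len(scores),
}
print(summary)  # {'learners': 8, 'average_score': 79.1, 'pass_rate': 0.75}
```

These summary figures describe what happened; they do not yet explain why, which is the job of the diagnostic step later on.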

4. Collect Performance Data

Collect the data that reveals the performance levels of the employees; i.e., performance indicators. Have employees been able to make the desired number of dials and calls per day? Have they managed to book appointments for the sales teams? What is their close ratio?

When it comes to collecting performance data, most companies use their internal client/customer processing software. In the context of our scenario, performance data can be obtained from the call center software, such as a Customer Relationship Management (CRM) system, which logs every detail about the call center staff: the number of calls they have made, the number of conversations they've had with clients, and so on.
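As an illustration only (the record fields below are hypothetical, not an actual CRM schema), per-agent performance indicators can be derived from raw call logs like this:

```python
from collections import defaultdict

# Hypothetical call-log records, shaped like a simple CRM export.
calls = [
    {"agent": "A", "connected": True,  "appointment": True},
    {"agent": "A", "connected": True,  "appointment": False},
    {"agent": "A", "connected": False, "appointment": False},
    {"agent": "B", "connected": True,  "appointment": True},
]

# Roll the raw records up into the KPIs from step 1.
stats = defaultdict(lambda: {"dials": 0, "conversations": 0, "appointments": 0})
for call in calls:
    agent = stats[call["agent"]]
    agent["dials"] += 1
    agent["conversations"] += call["connected"]
    agent["appointments"] += call["appointment"]

print(dict(stats))
```

In practice the roll-up would typically be done by the CRM's own reporting tools; the point is that the raw logs map directly onto the KPIs defined in step 1.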

5. Analyze The Findings (Diagnostic Learning Analytics)

This is where you build upon descriptive analytics to understand why something happened in the training program. Essentially, diagnostic analytics identifies anomalies that analysts can then investigate to determine causal relationships. As Das Vannes states in his article for IBM [1], "often, this step requires analysts to look for patterns outside the existing data sets, and it might require pulling in data from external sources to identify correlations and determine if any of them are causal in nature."

Diagnostic analytics answers questions such as: 'Why is the call "close ratio" so low when the number of calls is more than 50 per day?' or 'Why is the number of appointments scheduled significantly higher even though the number of calls made per day is below average?'

6. Calculate ROI

ROI calculations are often critical in establishing the value of training. If the training succeeds in achieving the learning objectives, this should be reflected in the employee's performance—because learner performance leads to business performance.

Calculate the company's pre-training performance, the cost to deliver the training, the performance after the training, and the net benefit to your company.

ROI (percentage) = ((Monetary benefits – Training Costs)/Training Costs) x 100
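Applied to hypothetical pilot figures (the benefit and cost numbers below are assumptions for illustration, not from the scenario), the formula works out as:

```python
def training_roi(monetary_benefits: float, training_costs: float) -> float:
    """ROI (%) = ((monetary benefits - training costs) / training costs) x 100."""
    return (monetary_benefits - training_costs) / training_costs * 100

benefits = 120_000  # e.g., product sales revenue attributed to the pilot
costs = 40_000      # e.g., design, delivery, and staff time

print(f"ROI: {training_roi(benefits, costs):.0f}%")  # ROI: 200%
```

A positive ROI means the monetary benefits exceeded the training costs; a result of 0% means the program only broke even.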

7. Present Findings And Recommendations

Present the data to relevant decision-makers. Some stakeholders may just want an overview of the findings, while others expect detailed reports, complete with recommendations as to how to proceed with rolling out the program to the whole contact center. Determine the appropriate kind of data needed to make a well-informed decision and include them in your evaluation report.

Summing It Up

There it is, then. Follow these 7 simple steps and you have a perfect training evaluation strategy in your hands: one that gives your stakeholders the information they need to make well-informed decisions about the training. Download the eBook Leveraging Learning Analytics To Maximize Training Effectiveness - Practical Insights And Ideas for an in-depth view of the topic. And for more insights, join the webinar and discover how to quickly plan for learning analytics to ensure utilization.


[1] Diagnostic analytics 101: Why did it happen?