Level 4 Training Evaluation

Summary: Prudence dictates that investment decisions for any new project, whether commercial or non-profit in nature, be predicated on a solid business case for value delivery. While some social programs may be justified without a monetary return, most initiatives in the business world are tied to demonstrable value, in either productivity or monetary terms. Corporate training initiatives fall within that category.

Level 4 Training Evaluation - The key to measuring training value

Deceptively impossible

Achieving "real" results is the ultimate objective of any company's training initiatives. Organizations typically won't invest in a training program unless management is convinced that the proposed training will lead to specific benefits. The challenge most corporate trainers face, however, is this: how does one measure "real" organizational impact as a result of training? The answer: through a Kirkpatrick Level 4 evaluation. When Level 4 (Results) can be substantiated, proponents of the training program can unequivocally proclaim success.

Kirkpatrick's four levels of evaluation offer a framework for the meaningful evaluation of learning in an organization. Envisioned by Donald L. Kirkpatrick in 1959 (and further refined in the 1998 publication "Evaluating Training Programs: The Four Levels"), the four levels of the Kirkpatrick model are:

  • Level 1: Reaction
  • Level 2: Learning
  • Level 3: Behavior
  • Level 4: Results

These levels represent progressively difficult metrics against which success is evaluated, with Level 4: Results being the most difficult of all. It is at this level that evaluators measure the final impact that training has had on the organization.

Because of its perceived difficulty, many training initiatives don't strive to complete a Level 4 evaluation. In reality, however, a Level 4 evaluation isn't that difficult, especially given that the impact of the training initiative will have been deconstructed significantly as the team progresses through the previous three levels. As a result, Level 4 evaluations will have a significant number of data points to work with.

Starting at the end

The best way to achieve success is to envision what success looks like. The same holds true for training programs. When a company's trainers start at Level 4, defining up front what training results should look like once the program has been delivered, closing the loop on the review process with a Level 4 evaluation becomes much easier. For instance, some Level 4 results a company may look forward to achieving might be:

  • 60% reduction in customer order cancellation in the next 2 quarters
  • Cutting staff absenteeism by half each year
  • Decreasing scrap/wastage quantity by 2000 lbs each month
  • Slashing high-school dropout rates by 25% annually

Having such Level 4 outcomes defined in advance gives training evaluators a solid basis for working backward to quantify which Level 3 metrics must be achieved in order to hit the Level 4 evaluation criteria. This process continues down through Levels 2 and 1. In every case, however, it is imperative that the proposed metrics be observable and measurable. For instance, a "poor" set of Level 4 evaluation metrics might be (a sketch contrasting the two forms follows this list):

  • Reducing customer order cancellation significantly
  • Considerable decline in staff absenteeism
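
To make the contrast concrete, here is a minimal sketch in Python. All names and figures are hypothetical; the point is simply that a well-defined Level 4 metric carries a baseline, a target, a unit, and a time frame, while a "poor" metric carries none of these:

    from dataclasses import dataclass

    @dataclass
    class Level4Metric:
        """A Level 4 (Results) target in observable, measurable terms."""
        name: str        # what is being measured
        baseline: float  # pre-training measurement
        target: float    # post-training goal
        unit: str        # unit of measurement
        deadline: str    # time frame for achieving the target

    # Well-defined targets: each has a number, a unit, and a time frame.
    metrics = [
        Level4Metric("customer order cancellations", 100, 40,
                     "orders/quarter", "next 2 quarters"),
        Level4Metric("scrap/wastage", 9000, 7000, "lbs/month", "each month"),
    ]

    # A "poor" metric ("reduce cancellations significantly") cannot even be
    # written in this form: it has no baseline, target, unit, or deadline,
    # which is precisely why it cannot be evaluated.
    for m in metrics:
        reduction = (m.baseline - m.target) / m.baseline * 100
        print(f"{m.name}: {m.baseline} -> {m.target} {m.unit} "
              f"({reduction:.0f}% reduction, {m.deadline})")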

By starting at the end and defining clear, measurable Level 4 evaluation criteria, training proponents can demonstrate to company management and training sponsors the precise value a training program can bring to the organization. It is that kind of precision that senior management likes to see before approving a program.

Making it happen

A Level 4 evaluation of training results requires a certain framework in order to be credible to company management. Merely publishing trainees' test scores won't cut it; management needs to see demonstrable change before acknowledging that a training exercise was a success. Here are some broad guidelines to consider when setting up a Level 4 evaluation:

  • Points of Reference
    Make sure to have before-training and after-training measurements of the evaluation metrics
  • Time frame
    Change takes time, so allow an acceptable time period for training to prove its worth
  • Validation
    A single evaluation might not be sufficient. Repeat the measurements at appropriate times in order to validate that results have actually taken root
  • Controlling the evaluation
    Where practical, the use of control groups produces much better evaluation outcomes than random or across-the-board evaluation (see the sketch after this list)
  • Weigh the cost
    Some training might be relatively straightforward, and results are immediately apparent even without a Level 4 assessment. Results of other programs might be extremely difficult (and costly) to validate using a Level 4 approach. Do a cost-benefit assessment first, before ploughing full steam ahead with a Level 4 evaluation
  • Have realistic expectations
    Some training results cannot be validated through empirical data because the outcomes might be too subjective in nature. In such cases, where conclusive proof of results isn't available, one should simply be content with evidence that the program was a success
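
As one illustration of the "Points of Reference" and "Controlling the evaluation" guidelines, the sketch below (Python, with hypothetical scrap figures) computes a simple difference-in-differences estimate: the trained group's improvement minus the control group's improvement, so that organization-wide trends are not mistaken for training results:

    def difference_in_differences(trained_before, trained_after,
                                  control_before, control_after):
        """Estimate the training effect by comparing the change in the
        trained group against the change in an untrained control group."""
        return (trained_after - trained_before) - (control_after - control_before)

    # Hypothetical monthly scrap figures (lbs). Both groups improved, but
    # the trained group improved by 1,500 lbs/month more than the control.
    effect = difference_in_differences(
        trained_before=9000, trained_after=7000,  # trained team: -2000
        control_before=9200, control_after=8700,  # control team:  -500
    )
    print(f"Estimated training effect: {effect} lbs/month")  # -1500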

Using these broad principles, trainers can put together a very effective framework to measure Level 4 evaluation outcomes.

Fingers on the pulse

Athletes need to practice their sport continually and be monitored by their coach, or they risk growing "stale". Similarly, trainees from a learning program must continually demonstrate that their training has taken root by exhibiting sustained, pre-determined behavioral changes in the workforce or community. Level 4 evaluations must therefore be designed to monitor those behavioral changes over the long term. Some ways to accomplish this include:

  • Sending out post-training surveys to capture feedback from trainees as well as others who are impacted by the training
  • Putting in place a long-term program of ongoing, sequenced training and coaching that reinforces key points of the training
  • Conducting follow-up needs assessments to determine if any gaps are still apparent between training delivered and actual results achieved
  • Verifying post-training metrics (scrap, absenteeism, quality, output, dropout rate) against pre-training metrics (a brief sketch follows this list)
  • Conducting interviews with trainees and others that may be impacted by or have influence upon the training outcomes (Supervisors, Managers, Customers, Teachers, Patients etc.)
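
The pre/post verification in particular lends itself to simple arithmetic. A minimal sketch, assuming hypothetical monthly absenteeism figures, shows the idea: repeat the baseline comparison at intervals rather than just once, so a fading result is caught early:

    # Hypothetical monthly absenteeism rates (%), measured before training
    # (baseline) and then repeatedly afterward.
    baseline = 8.0
    monthly_rates = {"Jan": 7.1, "Feb": 6.3, "Mar": 6.5, "Apr": 6.0}

    for month, rate in monthly_rates.items():
        change = (baseline - rate) / baseline * 100
        print(f"{month}: {rate:.1f}% absenteeism ({change:.0f}% below baseline)")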

Using these tools is a way of keeping a finger on the pulse of the trainees in order to determine whether, over a longer term, the training they received is being practiced as envisioned.

Delivering Proof and Managing Expectations

Financial professionals tend to ask for proof of success in terms of Return on Equity (ROE), while learning professionals focus on Return on Expectations (ROE). The financial ROE might be straightforward to calculate (e.g. "eliminated $50,000 worth of scrap each shift due to training"; a simple sketch of this calculation follows the list below), but quantifying a Return on Expectations is more challenging.

Therefore, to deliver proof that expectations of the training program have been met, it is imperative that designers and evaluators of training programs know, in as much granularity as possible, precisely what those expectations are. To accomplish that objective, learning professionals must:

  • Involve as broad a spectrum of stakeholders (Senior Managers, Line Managers, Supervisors, front-line workers, Community leaders, Teachers etc.) as possible when designing training programs and evaluation metrics
  • Ask probing questions about what each stakeholder group expects to achieve from the program
  • Arrive at consensus about how each expectation will manifest itself in the trainee upon successful conclusion of a training initiative
  • Where possible, agree upon appropriate and measurable metrics that will validate the manifestation of those expectations
  • Where empirical data points (proof) are not practical to measure expectations, agree upon alternative metrics (evidence)
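
As flagged above, the financial half of the proof is plain arithmetic. The sketch below uses entirely hypothetical figures (savings rate, evaluation window, program cost); the expectations half, by design, often yields evidence rather than numbers like these:

    # Hypothetical figures: savings attributable to training, weighed
    # against the full cost of designing and delivering the program.
    monthly_scrap_savings = 50_000   # $ saved per month after training
    evaluation_window = 12           # months over which results are tracked
    training_cost = 250_000          # design, delivery, and trainee time ($)

    total_savings = monthly_scrap_savings * evaluation_window
    net_benefit = total_savings - training_cost
    return_pct = net_benefit / training_cost * 100
    print(f"Net benefit: ${net_benefit:,} ({return_pct:.0f}% return on cost)")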

It is extremely difficult for a training program, in and of itself, to meet all the expectations set out by stakeholders; many factors outside the training environment also play a role in determining whether training was successful. Learning professionals should therefore manage those expectations by encouraging stakeholders to exert influence over other favorable factors, outside the training environment, that affect how a trainee delivers on those expectations. It is that kind of partnership among all stakeholders that will ultimately ensure that Returns on Expectations are met.

The Final Test

The best way to determine whether the financial ROE or the "expectations" ROE has been met is to ask those impacted by the training, including the trainees themselves. To that end, learning professionals, in consultation with other key stakeholders, should ask appropriate questions and analyze the feedback received. Questions to ask include:

  • To what extent did the training meet expectations?
  • How did training have a visible impact on the trainee's job (lower rejects, fewer cancelled orders, fewer days off)?
  • Was there a measurable improvement to the broader group (Department, Organization and Community)?
  • Is there a need for any change to the training program (curriculum, frequency, design process)?

Such questions yield significant insight into the central question: "Did training deliver the results it was supposed to?" Structured strategically, they can also shed light on other elements of the training program, such as the quality of inclusiveness (stakeholder engagement, partnerships with other groups, etc.) throughout the entire training initiative.