Measuring Learning: Asking The Right Questions

For workplace learning professionals, the Learning-Transfer Evaluation Model (LTEM) provides a framework to align their analytics approach with the depth of learning evaluation they wish to achieve.

Learning-Transfer Evaluation Model For Measuring Learning

The Learning-Transfer Evaluation Model, developed by Will Thalheimer, presents a nuanced approach to measuring and evaluating the effectiveness of workplace learning. In this eight-tier model, each tier can answer certain questions with confidence, while other questions can only be answered speculatively (with low confidence). LTEM can be instrumental in shaping analytics strategies, enabling analysts to pose relevant questions, measure the right outcomes, and acknowledge the limitations of their metrics.

Why Not Use Kirkpatrick Level 4 Evaluation?

If you're currently using the Kirkpatrick model and it's working for your organization, keep doing it. This article may be irrelevant for you. Often, the challenge I've seen is not the model itself but the implementation: namely, relying on level 1 data and assuming that a knowledge assessment at the end of a course counts as a level 2 evaluation.

Learning without transfer (that is, application on the job) is an investment with low returns. Therefore, from early on, we (learning business partners, learning designers, developers, SMEs, etc.) need to focus on what happens after any learning experience. Each role must understand what they can do within their own scope to reach more effective outcomes! That's why I've been using LTEM for measuring learning in the workplace: it helps learning designers understand the impact of every single design choice they make and the message it sends, while also serving as a baseline for stakeholders to understand what we can and cannot answer confidently at each tier (and what data we need to collect to do that).

What Questions Can We Answer Confidently At Each Tier?

Here’s how each tier can guide analytics for measuring learning, including the kinds of questions we can answer confidently and those we cannot.

Tier 1: Attendance

At the base of the LTEM is attendance, where analytics can only confirm registration, enrollment, participation, access, or completion. Showing up does not mean paying attention, learning anything, having any intent to apply it, or making any difference on the job.

Tier 2: Activity

Moving beyond mere attendance, the activity tier measures engagement. Engagement, however, has to be clearly defined in the data strategy document because it is one of the most misunderstood labels, and assumptions can be costly later in the project! In practice, we tend to focus on measuring three types of learning-related activity:

  1. Physical
    What users do (all user interface interactions fall here).
  2. Affective
    How users feel about the activity, and what emotions they experience.
  3. Cognitive
    How much they learn, reflect, and apply.
Ideally, you design a balance between these three domains. Otherwise, you may end up with "highly-interactive" clicky-clicky-drag-next activities that keep users awake, but they report frustration and no relevant learning. Or, you may end up with the most entertaining video script everyone talks about, yet they may not even remember what they were supposed to learn.
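As a minimal sketch of what capturing this balance could look like (the event names, fields, and values below are illustrative assumptions, not a standard schema), activity data can be tagged by domain so that Tier 2 reporting shows whether all three domains are actually being measured:

```typescript
// Illustrative sketch: tagging captured activity events by domain so that
// Tier 2 reporting can show whether all three domains are being measured.
// Event names and fields are assumptions, not a standard schema.
type ActivityDomain = "physical" | "affective" | "cognitive";

interface ActivityEvent {
  learnerId: string;
  timestamp: Date;
  domain: ActivityDomain;
  name: string;             // e.g., "drag-drop-completed", "confidence-rating"
  value?: number | string;  // optional payload: a score, a rating, or free text
}

const events: ActivityEvent[] = [
  { learnerId: "u42", timestamp: new Date(), domain: "physical", name: "video-paused" },
  { learnerId: "u42", timestamp: new Date(), domain: "affective", name: "confidence-rating", value: 2 },
  { learnerId: "u42", timestamp: new Date(), domain: "cognitive", name: "reflection-submitted", value: "I would apply this by..." },
];

// Quick coverage check: which domains are we actually capturing?
const coverage = new Set(events.map((e) => e.domain));
console.log([...coverage]); // ["physical", "affective", "cognitive"]
```

A coverage check like this makes it obvious when, say, affective data is missing entirely from a "highly interactive" course.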

We also use data from this tier in more sophisticated adaptive paths to determine the next steps or personalize the user's journey. The smallest unit of learning is not completing a course! Any captured data point can be part of a logic that determines the path of a user. We use xAPI for this level of advanced decision-making and predictive analytics.
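For illustration, here is a hedged sketch of such a decision point. The statement shape follows xAPI conventions, but the activity IDs, the threshold, and the routing rule are hypothetical assumptions for this example:

```typescript
// Sketch of adaptive branching on a single captured xAPI-style data point.
// The statement shape follows the xAPI spec; the routing rule is hypothetical.
interface XApiStatement {
  actor: { mbox: string; name: string };
  verb: { id: string; display: { "en-US": string } };
  object: { id: string; definition: { name: { "en-US": string } } };
  result?: { score?: { scaled: number }; success?: boolean };
}

const statement: XApiStatement = {
  actor: { mbox: "mailto:pat@example.com", name: "Pat" },
  verb: { id: "http://adlnet.gov/expapi/verbs/answered", display: { "en-US": "answered" } },
  object: {
    id: "https://example.com/activities/objection-handling-scenario-3",
    definition: { name: { "en-US": "Objection handling scenario 3" } },
  },
  result: { score: { scaled: 0.4 }, success: false },
};

// A single data point drives the next step, not course completion.
function nextStep(s: XApiStatement): string {
  const scaled = s.result?.score?.scaled ?? 0;
  if (scaled < 0.6) return "remediation-practice"; // hypothetical threshold
  return "advanced-scenario";
}

console.log(nextStep(statement)); // "remediation-practice"
```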

Tier 3: Learner Perceptions

The learner perceptions tier focuses on subjective feedback related to the learning experience. Surveys and interviews can provide insights into learner comprehension, motivation, and perceived support. There are two ways to collect data at this tier: the traditional Kirkpatrick level 1 approach, which is very common (we found it interesting, but not helpful for surfacing actionable insights early on), and the more performance-focused approach (which gives you specific data points on confidence, intent to use, self-efficacy, and anticipated support or barriers).

The performance-focused approach provides more practical insights about what to expect in terms of transfer, which we can take directly to stakeholders. I strongly suggest experimenting with your own version based on Will Thalheimer's book.
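As a minimal sketch of what a performance-focused response record could look like (the field names, item wording, and risk rule are my illustrative assumptions, inspired by that approach rather than copied from it):

```typescript
// Illustrative record for a performance-focused (Tier 3) survey response.
// Field names and item wording are assumptions for this sketch.
interface PerceptionResponse {
  learnerId: string;
  courseId: string;
  confidenceToApply: 1 | 2 | 3 | 4 | 5; // "How confident are you that you can apply this?"
  intentToUse: 1 | 2 | 3 | 4 | 5;       // "How likely are you to use this in the next 30 days?"
  anticipatedSupport: string;           // e.g., "My manager reviews cases with me weekly"
  anticipatedBarriers: string;          // e.g., "No time allocated for practice"
}

// Flag responses where low confidence or intent predicts weak transfer,
// so the conversation with stakeholders starts before Tier 7 data exists.
function transferRisk(r: PerceptionResponse): boolean {
  return r.confidenceToApply <= 2 || r.intentToUse <= 2;
}
```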

Tier 4: Knowledge

The knowledge tier metrics look at the learner’s ability to recite information. This is one of the most misunderstood concepts in workplace learning practice. Let me illustrate. When you tell the stakeholders that you will design a knowledge check at the end of the module, here's what they understand by that: "A knowledge check is great because otherwise, learners may not pay attention. It is also good to know that they will know what to do on the job after the training."

Now, a lot of "knowledge checks" I've encountered in my 15+ years working across many corporate learning teams are more about remembering what was said previously on a slide than about checking whether participants would be able to apply the knowledge using the proper skills later on the job. This is one of the reasons we decided to use LTEM.

Simple fact recall during, or at the end of, a learning event is not adequate for measuring learning. Using Cathy Moore's action mapping along with LTEM, we can explain to SMEs and stakeholders the dangers of the "illusion of learning." If the fact is crucial, we need to use it in an authentic scenario or task for practice. If it is important but there's support for recall on the job (like a checklist), then use the support.

Tiers 5 and 6 are where all learning design should be aiming for assessment! Sometimes, as a learning designer, you feel you don't have the opportunity to make huge changes to a solution already agreed with stakeholders. One thing you can always do: move from fact recall to decision-making competence. This is where a consistent measurement and evaluation framework can influence design before any learning happens. What gets measured, gets done. What gets measured, gets designed for.
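To make that shift from recall to decision-making concrete, here is a small sketch contrasting the two item types; the scenario content and field names are hypothetical:

```typescript
// Sketch: the same content assessed two ways. All item text is hypothetical.
interface RecallItem {
  kind: "recall";
  prompt: string;           // asks what was said on a slide
  correctAnswer: string;
}

interface ScenarioItem {
  kind: "scenario";
  situation: string;        // realistic context the learner must weigh
  decisionPrompt: string;   // "What do you do next?"
  options: { text: string; isDefensible: boolean }[];
}

const recall: RecallItem = {
  kind: "recall",
  prompt: "Which policy section covers refunds?",
  correctAnswer: "Section 4.2",
};

const scenario: ScenarioItem = {
  kind: "scenario",
  situation: "A customer demands a refund 45 days after purchase, outside the 30-day window.",
  decisionPrompt: "What do you do next?",
  options: [
    { text: "Refuse and cite Section 4.2", isDefensible: false },
    { text: "Check the exception path and offer store credit per policy", isDefensible: true },
  ],
};
```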

Tier 5: Decision-Making Competence

This tier examines the learner’s capability to make decisions based on scenarios reflecting real-life situations. Analytics can measure the accuracy and quality of these decisions.

Decision-making competence does not have to stop at the end of a learning event. You can run campaigns over time, with role-based, skill-targeted challenges to detect knowledge and skill gaps before they cause any performance issues.
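A minimal sketch of that kind of campaign analysis, assuming hypothetical skill tags and an illustrative 70% accuracy threshold:

```typescript
// Sketch: aggregating scenario decisions from an ongoing campaign
// to surface skill gaps before they become performance issues.
// Skill names and the 70% threshold are illustrative assumptions.
interface DecisionRecord {
  learnerId: string;
  skill: string;      // e.g., "pricing-objections"
  correct: boolean;
  answeredAt: Date;
}

function skillGaps(records: DecisionRecord[], threshold = 0.7): string[] {
  const bySkill = new Map<string, { correct: number; total: number }>();
  for (const r of records) {
    const s = bySkill.get(r.skill) ?? { correct: 0, total: 0 };
    s.total += 1;
    if (r.correct) s.correct += 1;
    bySkill.set(r.skill, s);
  }
  return [...bySkill.entries()]
    .filter(([, s]) => s.correct / s.total < threshold)
    .map(([skill]) => skill);
}
```

Run periodically over campaign data, a function like this flags which skills need reinforcement long before the gap shows up in performance reviews.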

Tier 6: Task Competence

Task competence goes beyond decision-making to evaluate task execution, either immediately or after a time lag to measure retention. You can think of it as a scale of authenticity, from role-play simulations to monitored task execution on the job.

Note that tiers 4-6 map to Kirkpatrick level 2 (learning). The granular breakdown ensures learning designers and SMEs can measure and evaluate learning, adjust, and iterate if needed. However, high-level stakeholders don't need that level of granularity, so reporting on "learning, behavior, and performance" would be more appropriate.

Tier 7: Transfer

Transfer assesses the application of learned skills to work tasks. It distinguishes between assisted transfer (with support) and full transfer (independent application).

Realistic transfer must be part of the learning design strategy. Assuming that successful completion of training equals behavior change and sustained, long-term growth in performance is simply wishful thinking.
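As a sketch of how assisted and full transfer can be kept separate in the data (field names and categories are illustrative assumptions, not a standard schema):

```typescript
// Sketch: recording on-the-job transfer observations with the level of
// support, so assisted and full transfer are reported separately.
type TransferLevel = "assisted" | "full";

interface TransferObservation {
  learnerId: string;
  task: string;            // e.g., "handled escalation call"
  observedAt: Date;
  level: TransferLevel;    // "assisted" = with job aid/coach, "full" = independent
  outcome: "successful" | "partial" | "unsuccessful";
}

// Share of observations showing full, independent, successful application.
function fullTransferRate(obs: TransferObservation[]): number {
  if (obs.length === 0) return 0;
  return obs.filter((o) => o.level === "full" && o.outcome === "successful").length / obs.length;
}
```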

Tier 8: Effects Of Transfer

The effects-of-transfer tier measures the broader impact of learning, including consequences for the organization, the community, and beyond. Analytics must employ rigorous methods to assess causal relationships, as well as both positive and negative effects.

Conclusion: Actionable Insights For Measuring Learning

Here are some actionable insights across LTEM tiers:

  1. Adopt multitiered analytics
    Implement analytics at multiple tiers to capture a comprehensive picture of learning effectiveness.
  2. Balance leading and lagging indicators
    Use early tiers for leading indicators of engagement and later tiers for lagging indicators of learning transfer and impact.
  3. Integrate qualitative and quantitative data
    Combine subjective learner perceptions with objective measures of knowledge and task competence.
  4. Measure transfer over time
    Track the application of skills beyond the immediate post-training period to assess long-term transfer. Make sure you also track whether the transfer is supported (assisted transfer) or unsupported (full transfer).
  5. Assess impact holistically
    Use sophisticated analytics to evaluate the wider effects of learning, considering the organizational and societal impact. For example, look at team dynamics, transfer rate, or other signs of team-level performance and engagement level changes, rather than just individual skills.
  6. Leverage technology for advanced analytics
    Employ learning analytics platforms that can measure and analyze data across all tiers of LTEM. Use real-time data analytics for adaptive learning or predictive needs.
  7. Communicate analytics clearly
    Present data to stakeholders in a way that reflects both the potential and the limitations of learning interventions and their impact on performance.
  8. Continuously improve analytics practices
    As analytics tools and methods evolve, so should the strategies for measuring learning effectiveness. If you're comfortable only with learning data that you have control over, start there but make sure you're clear about the limitations. Once your data literacy and analytical skills grow, you can expand and iterate to include pre- and post-assessment comparisons, effect size analysis, and learning versus performance change correlation/causation data stories.
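To make that last point more tangible, here is a minimal sketch of one such analysis, an effect size calculation (Cohen's d with a pooled standard deviation) over pre- and post-assessment scores; the sample numbers are fabricated purely for illustration:

```typescript
// Sketch: Cohen's d effect size for pre/post assessment scores.
// Sample data below is fabricated for illustration only.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function variance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
}

// Pooled-standard-deviation Cohen's d for two independent samples.
function cohensD(pre: number[], post: number[]): number {
  const pooledSd = Math.sqrt(
    ((pre.length - 1) * variance(pre) + (post.length - 1) * variance(post)) /
      (pre.length + post.length - 2)
  );
  return (mean(post) - mean(pre)) / pooledSd;
}

const preScores = [55, 60, 58, 62, 57];
const postScores = [72, 78, 70, 75, 74];
console.log(cohensD(preScores, postScores).toFixed(2)); // a large positive effect
```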

In the workplace, the ultimate goal is rarely learning, and not even learning transfer. It is doing, and doing it well under specific limitations. That is why showing the value of L&D starts with measuring the right things.
