Evaluation: Your Gateway To Content Engagement

Summary: Evaluation of any learning should not be an afterthought. It should be a key that opens doors to the creation of engaging content.

Evaluation As A Means To Improve Engagement

Oftentimes, evaluation in eLearning can seem an arduous or unnecessary task. As content generators, we tend to "believe" that our content is relevant and meets most specifications before it reaches our clients. Theirs is simply to consume it and become better, like taking medicine from the good doctor.

Evaluation In eLearning

Unfortunately, most of our learning systems and artefacts are born of the parent-child relationship: the teacher directs and the learner receives. This permeates the eLearning sector as well. Whether this is a sustainable model is a question for another day. In my estimation, one of the remedies to this relationship is evaluation. I prefer to look at evaluation as an opportunity for the learner to "assess" the presented content. My contention is that a learner's expectation of benefiting from content can only be enhanced through evaluation.

This view helps us create rubrics and evaluation models that look to determine the value of what we are working on. Our slight differentiation, however, is that evaluation should not begin at the end of a course; rather, it should be an ongoing process, carried out at all touchpoints. To give you an example, our very first evaluation is of a learner's prior grasp of the topic. We try as much as possible to capture this data point. It informs us of the learner's grasp of the content before and after the course, giving us a clearer picture of content engagement.
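
One way to turn that before-and-after data point into a single engagement signal is a normalised learning gain. Here is a minimal sketch, assuming pre- and post-assessment scores are stored per learner; the field names and figures are illustrative, not taken from a real LMS.

    # Minimal sketch: compare a learner's grasp of a topic before and after
    # a course. Field names and scores below are illustrative only.

    def learning_gain(pre_score: float, post_score: float, max_score: float = 100.0) -> float:
        """Normalised gain: how much of the available headroom the learner closed."""
        headroom = max_score - pre_score
        if headroom == 0:
            return 0.0  # learner was already at the ceiling before the course
        return (post_score - pre_score) / headroom

    learners = [
        {"name": "A", "pre": 40, "post": 75},
        {"name": "B", "pre": 65, "post": 70},
    ]

    for learner in learners:
        gain = learning_gain(learner["pre"], learner["post"])
        print(f"{learner['name']}: normalised gain {gain:.2f}")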

eLearning artefacts lend themselves well to evaluation, as the tools are relatively easy to manipulate. With the advancement of analytics in both authoring tools and Learning Management Systems (LMSs), learning outcomes can be captured efficiently.
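
If your authoring tool or LMS speaks the Experience API (xAPI), each outcome can be recorded as a statement in a Learning Record Store. The sketch below shows the general shape of such a statement; the endpoint, credentials, and course identifier are placeholders, not a real system.

    # Sketch of recording a learning outcome as an xAPI statement.
    # The LRS endpoint, credentials, and identifiers are placeholders.
    import requests

    statement = {
        "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {"id": "https://example.com/courses/evaluation-101"},
        "result": {"score": {"scaled": 0.85}, "completion": True, "success": True},
    }

    response = requests.post(
        "https://lrs.example.com/xapi/statements",  # placeholder LRS endpoint
        json=statement,
        headers={"X-Experience-API-Version": "1.0.3"},
        auth=("lrs_user", "lrs_password"),          # placeholder credentials
        timeout=10,
    )
    response.raise_for_status()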

I speak from experience when I say that it can sometimes be cumbersome to adapt a model, for example the Kirkpatrick evaluation model (which we use), to your own eLearning evaluation scenario. It is, however, a good headache to have.

Are You Familiar With Kirkpatrick's Evaluation Model?

Here is a simplified view of the model. There are four levels, each helping to evaluate learner artefacts at a different stage (a short sketch after the list shows one way to tag evaluation data by level).

  • Level 1: Reaction
    Measure your participants' initial reaction to the training to understand how engaging and relevant they found it.
  • Level 2: Learning
    Measure how much information was effectively absorbed during the training.
  • Level 3: Behavior
    Measure how much your training has influenced the behavior of the participants.
  • Level 4: Results
    Measure and analyze the impact your training has had at the business level. This feeds into the Return On Investment (ROI) calculation for the business.
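
To make the levels concrete, here is an illustrative way of tagging evaluation records with a Kirkpatrick level so they can be rolled up later; the metrics and values are examples, not a prescribed set.

    # Illustrative only: tagging evaluation data points with a Kirkpatrick level.
    from enum import IntEnum

    class KirkpatrickLevel(IntEnum):
        REACTION = 1  # how participants responded to the training
        LEARNING = 2  # how much information was absorbed
        BEHAVIOR = 3  # how much the training changed what participants do
        RESULTS = 4   # impact at the business level

    evaluations = [
        {"level": KirkpatrickLevel.REACTION, "metric": "post-session survey", "value": 4.3},
        {"level": KirkpatrickLevel.LEARNING, "metric": "quiz score delta", "value": 0.35},
        {"level": KirkpatrickLevel.BEHAVIOR, "metric": "on-the-job checklist", "value": 0.8},
        {"level": KirkpatrickLevel.RESULTS, "metric": "support tickets avoided", "value": 120},
    ]

    for record in evaluations:
        print(f"Level {record['level'].value} ({record['level'].name}): "
              f"{record['metric']} = {record['value']}")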

Our Perspective

Well, it is only lately, after much designing, that we agreed that, as a team, we are actually our own best critics. Different teams go through the eLearning, adding, reviewing, and so on, and it is only when we believe we can achieve our objectives that we proceed to release. We therefore decided that evaluating the level 1 and level 2 learning outcomes of the Kirkpatrick model should happen in-house, at the review stage of our eLearning. Learning outcomes for both these levels have remained stable over the short period we have been following this process. We can refer to this as a descriptive analysis of what should be learnt; it can easily be summed up as the hindsight of the training.

For level 3 evaluation, we have designed a specific cohort rubric covering skills, knowledge, application, and understanding. This analysis operates at both a descriptive and a diagnostic level. Our evaluation looks at completion rates and enrollments, among other items, and the rubric digs a bit deeper into the levels a given cohort completed, offering deeper insight into the training.
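
As a rough illustration of the descriptive side, the sketch below computes completion rates per cohort from enrollment records; the record structure is an assumption made for the example, not our actual rubric.

    # Minimal sketch: completion rate per cohort from enrollment records.
    # The record structure is illustrative; real LMS exports will differ.
    from collections import defaultdict

    enrollments = [
        {"cohort": "2024-Q1", "learner": "A", "completed": True},
        {"cohort": "2024-Q1", "learner": "B", "completed": False},
        {"cohort": "2024-Q2", "learner": "C", "completed": True},
        {"cohort": "2024-Q2", "learner": "D", "completed": True},
    ]

    totals = defaultdict(int)
    completions = defaultdict(int)
    for record in enrollments:
        totals[record["cohort"]] += 1
        if record["completed"]:
            completions[record["cohort"]] += 1

    for cohort in sorted(totals):
        rate = completions[cohort] / totals[cohort]
        print(f"{cohort}: {totals[cohort]} enrolled, {rate:.0%} completed")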

Level 4 evaluation, however, has been a constant iteration, and we hope to settle on the right rubric over time. We have invested considerable effort here. The predictive analysis of what will happen is a rigorous task, as it offers foresight, and foresight takes time and data. This level builds on sustained behavior change, so we recommend tools that reinforce those changes.

The more complex fifth level, an extension of the original model, speaks to the Return On Investment. Firstly, for this stage, both tangible and intangible benefits must be defined at the initial stages with the client's contribution. Intangible benefits, such as a change in culture, attitude, skills, and knowledge of employees, or lower turnover and higher retention rates, have to be paired with tools that can actually track them. Secondly, when designing your eLearning, keep in mind that both tangible and intangible benefits to the employees and the company take time. Do not be afraid to communicate this fact to your clients. You should, however, have a timeline of when you expect to see the desired results. Thereafter, evaluating the Return On Investment becomes a doable exercise within set parameters.
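
Once the benefits and the timeline are agreed, the arithmetic itself is straightforward. Below is a minimal sketch of the commonly used training ROI formula; the figures are invented purely for illustration.

    # Standard training ROI formula:
    #   ROI (%) = (net programme benefits / programme costs) * 100
    # All figures below are invented for illustration.

    def training_roi(total_benefits: float, total_costs: float) -> float:
        net_benefits = total_benefits - total_costs
        return (net_benefits / total_costs) * 100

    # Example: a course costing 20,000 that yields 35,000 in measurable benefits
    # (time saved, fewer support tickets) over the agreed timeline.
    print(f"ROI: {training_roi(35_000, 20_000):.0f}%")  # -> ROI: 75%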

Conclusion

Whichever model you use, remember that you can only create better content through the evaluation process. Engaging learners so that they tell you how, what, and why they want to learn should be your greatest achievement.

Which evaluation tools are you using? Share with us. In the meantime, happy evaluation!