Evaluating Your Online Learning Program (Part 2)

How Effective Is Your Online Professional Development Course? 3 Models For Evaluating Your Online Learning Program

In last month’s article, we discussed why it is important to evaluate online professional development programs for teachers specifically (and, in fact, for all learners) and provided a broad overview of evaluation in general.

There are numerous ways to evaluate online professional development programs for teachers. Programs can design their own evaluation or use previously developed models. This article examines 3 evaluation models that you can use to evaluate online learning programs for teachers.

1. Kirkpatrick’s 4 Levels

Internationally, one of the best-known frameworks for evaluating professional development has been Kirkpatrick’s model, developed by Donald Kirkpatrick and first published in 1959. This model comprises 4 levels, each of which builds stepwise on the previous level:

  • Level I evaluates learners’ reactions to the professional development.
  • Level II evaluates learners’ learning.
  • Level III evaluates learners’ behavior.
  • Level IV evaluates professional development results in the classroom.
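As a purely illustrative sketch, the 4 levels above can be recorded as a small data structure pairing each level with one example of the evidence an evaluator might collect for it. The class, field names, and evidence sources here are hypothetical assumptions, not part of the model itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Level:
    number: int
    focus: str             # what this level evaluates
    example_evidence: str  # one hypothetical way a program might gather data

# The 4 levels, each building stepwise on the previous one.
FOUR_LEVELS = [
    Level(1, "learners' reactions", "end-of-course satisfaction survey"),
    Level(2, "learners' learning", "pre/post knowledge assessment"),
    Level(3, "learners' behavior", "classroom observation rubric"),
    Level(4, "results in the classroom", "student achievement data"),
]

def evidence_for(level_number: int) -> str:
    """Return the example evidence source for a given level (1-4)."""
    return next(l.example_evidence for l in FOUR_LEVELS
                if l.number == level_number)
```

A structure like this can help an evaluation team check that every level, not just the easy-to-measure reaction level, has at least one planned data source.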

2. Guskey’s 5 Levels Of Evaluating Professional Development

A similar, but more comprehensive professional development evaluation framework is that of Thomas Guskey (2000), whose 5-level framework for evaluating professional development is outlined below. These levels range from the lowest level of evaluation—assessing learners’ reactions to the professional development—to the highest—determining whether the professional development for learners had any impact on student learning.

Figure 1: 5 Levels of evaluating professional development (Guskey, 2000)

  • Level 1 evaluates participants’ reactions to the professional development.
  • Level 2 evaluates participants’ learning.
  • Level 3 evaluates organization support and change.
  • Level 4 evaluates participants’ use of new knowledge and skills.
  • Level 5 evaluates the impact on student learning outcomes.

3. Scriven’s Evaluation Of Training

A final model is Scriven’s Evaluation of Training (2009), a training or professional development evaluation checklist that can be used for formative and summative evaluations, for monitoring professional development, and even for conducting meta-evaluations. As will be seen, it combines elements of Kirkpatrick’s 4 Levels and Guskey’s 5 Levels of evaluating professional development. The checklist consists of the 11 questions listed below:

  1. Need
    Is this professional development the best way to address this particular need?
  2. Design
    Does the design of the professional development target the particular need defined above? Does it target learners’ background and current knowledge, skills, attitudes, and values? Does it take into account existing resources?
  3. Delivery
    Was the professional development announced, participated in, supported, and presented as proposed?
  4. Reaction
    Was the professional development relevant, comprehensible, and comprehensive?
  5. Learning
    Did learners master intended content, acquire intended value, or modify their attitudes as a result of the professional development?
  6. Retention
    Did learners retain the learning for appropriate intervals?
  7. Application
    Did learners use and appropriately apply what they learned in the professional development?
  8. Extension
    Did learners use what they learned at other times, in other sites, or with other subjects?
  9. Value
    What was the value of the professional development for learners?
  10. Alternatives
    Which alternative approaches could be used to meet the same needs?
  11. Return on Investment
    What is the value of the professional development for the students, the school, the district, the region, and the educational environment?
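The checklist lends itself to a simple working instrument. Below is a minimal, purely hypothetical Python sketch (the function name, rating scale, and output fields are my own assumptions, not Scriven’s) in which an evaluation team records a 1–5 rating per item and flags items that are missing or weak:

```python
# Scriven's 11 checklist items, in order.
SCRIVEN_CHECKLIST = [
    "Need", "Design", "Delivery", "Reaction", "Learning", "Retention",
    "Application", "Extension", "Value", "Alternatives",
    "Return on Investment",
]

def score_program(ratings: dict[str, int]) -> dict:
    """Summarize a formative review of a professional development program.

    `ratings` maps checklist items to a 1-5 rating; items absent from
    the dict are treated as not yet evaluated.
    """
    missing = [item for item in SCRIVEN_CHECKLIST if item not in ratings]
    weak = [item for item in SCRIVEN_CHECKLIST
            if item in ratings and ratings[item] <= 2]
    return {"missing": missing, "weak": weak, "complete": not missing}
```

Used formatively, the "weak" and "missing" lists tell a program team where to focus revision before a summative evaluation.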

Conclusion

Evaluation is one of the most critical factors in the success of an online learning program. However, because of its highly technical and political nature, it is one of the field’s least understood and least practiced components—a reality that inevitably degrades the quality of any eLearning program. As we conclude, it is critical to keep 3 core ideas in mind as we evaluate our online programs (Guskey, 2000):

1. Evaluation Should Be Co-Designed With The Professional Development Program Itself

Evaluation should begin at the earliest stages of online learning planning and continue throughout the life of the program. One of the critical benefits of co-designing the evaluation along with the intervention itself (e.g., an online course) is that online educators can examine the soundness of their program theory and logic.

As Figure 2 illustrates, a successful program is one with both a sound theory or logic model and a successful implementation, that is, where A → B → C.

  • Theory failure means that the theory or logic underlying the program could never create the desired effect, even when implemented as planned, so A → B ≠ C.
  • Program failure means that the theory underlying the program was sound but that the program itself, for any number of implementation reasons, did not set in motion the desired causes or effects. In other words, A ≠ B ≠ C.

Figure 2: Example of program theory/program logic (Weiss, 1998)


By using a series of diagnostic and formative evaluations, evaluators can detect theory failure and program failure and recommend remedies or revisions to address these weaknesses.
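The A → B → C reasoning above can be sketched as a tiny, purely illustrative helper (the function name and return labels are my assumptions, not part of Weiss’s model). It classifies an evaluation finding based on whether the program produced its intermediate outcome (B) and whether that outcome produced the desired result (C):

```python
def diagnose(a_caused_b: bool, b_caused_c: bool) -> str:
    """Classify a program's outcome chain A -> B -> C.

    a_caused_b: the program activities (A) produced the intermediate
                outcome (B), i.e., implementation worked.
    b_caused_c: the intermediate outcome (B) produced the desired
                result (C), i.e., the underlying theory held.
    """
    if a_caused_b and b_caused_c:
        return "success"            # A -> B -> C
    if a_caused_b:
        return "theory failure"     # A -> B, but B did not yield C
    return "program failure"        # A never set B in motion
```

The point of the sketch is simply that formative evaluation data must distinguish the two failure modes, because the remedies differ: theory failure calls for redesigning the program logic, while program failure calls for fixing implementation.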

2. Evaluation Should Be Systemic

Because learners operate within an education system composed of various levels and actors, all components of the system should be evaluated to make sure they are working together to support teacher change.

3. Evaluation Should Be Informed By Multiple Sources Of Data

Evaluation serves multiple purposes at multiple levels. For this reason, even modest evaluations should include multiple measures (a variety of sources of information) and multiple methods (both quantitative and qualitative approaches). While clearly articulated goals offer direction in selecting the most important outcomes to measure, evaluators also need to be aware of intended and unintended consequences and find a way to capture these to gain a fuller understanding of what occurred. These multiple sources of data can also enhance the validity and reliability of the evaluation.

 

For all references in this article, see:

Burns, M. (2011). Evaluating distance programs. In Distance Education for Teacher Training: Modes, Models, and Methods (pp. 252–269).
