Improving Assessment In eLearning Programs

What gets measured gets taught. This is one of the great truisms of education.

Assessment drives instruction. Yet assessment is typically the weakest component of an eLearning program. Why is this so?

  • In many cases assessment is characterized by a number of practices that impede, rather than enable, learning.
  • Many online continuing education programs may be reluctant to assess summatively what, if anything, learners have learned as a result of the program.
  • Programs may measure only the product of student learning rather than the learner’s progress and process of learning.
  • Online programs may use standardized tests that measure out-of-date skills—focusing on declarative knowledge (facts) versus procedural, conceptual, and epistemological knowledge (application of skills, deep understanding, and methods of knowledge acquisition, respectively).
  • Assessments may be exclusively summative (occurring at the end of a learning module or at the end of the online learning course of study) and not formative (ongoing).
  • Assessments may be separate from the online technology employed. Financial and logistical constraints, together with a lack of well-trained personnel who understand assessment, often make it difficult to support more valid and realistic performance-based assessments, such as in-class observations or electronic portfolios of learner work.
  • Finally, many entities may not wish to assess student learning; their aim may simply be to get students in and out of the eLearning system as effortlessly as possible.

In the next series of articles, we discuss assessing online learners. Note that this series of articles is geared toward traditional online courses (semester length, for credit or continuing-education credit) rather than short corporate training courses.

Assessment Or Evaluation?

"Assessment" and "evaluation" are often used synonymously, but they are different. Assessment refers to individuals, whereas evaluation refers to programs (though that rule does not apply in real life—individuals can be evaluated and programs can be assessed).

Assessment refers to any of a variety of procedures used to obtain information. It includes numerous types of measures of knowledge, skills, and performance, usually in the service of learning. Assessment may have an evaluative component—a summative assessment, such as a final exam—that places a value or judgment on performance.

Evaluation is a set of procedures for determining the value or overall worth of a program. It essentially examines impact or outcomes based on predefined criteria.

Successful eLearning programs have overcome many of the above issues by using a range of formative and summative assessments as appropriate. They recognize that assessment is a process that is inextricably linked to teaching and learning (Heritage, 2010:1) and therefore use multiple and flexible types of assessments—quizzes, discussions, interviews—as part of learning. Such programs capitalize on the strengths of the online technology employed to administer and score assessments and to assess higher-order thinking skills. They use a multitude of measures—performance-based assessment, growth models, or value-added models—to assess teacher practice. Most critically, they realize that assessment, even when summative, should always have a "formative" component; that is, instructors should always use assessment results to further refine instruction within an online environment.

Strengthening Assessment Within An eLearning System

There are several strategies for strengthening both formative and summative assessment of learners within any eLearning model. We discuss some of the major ones here:

1. Know Why We Are Assessing

Assessment can generally be used for the following purposes:

  • Choosing/Sorting/Screening
    To assign learners to a particular slot, spot, seat, position, or level based on performance.
  • Certification
    To assure that the student has met/exceeded guidelines.
  • Instruction
    To inform the instructor how well, or poorly, learners understand the content. This allows the instructor to reteach information or change the course of instruction.
  • Learning
    To measure the student’s grasp of content on an ongoing basis (Partnership for 21st Century Skills, 2005).

The answer to the question of why we are assessing drives the type of assessments we design. For example, if we want to compare one learner’s performance on an assessment with that of another, we should design norm-referenced assessments. If we want to measure a learner’s performance against an empirically derived level of proficiency (such as a cut score that determines whether a learner has mastered a particular skill), we will want a criterion-referenced assessment. If we want to compare a learner’s current performance with his or her own prior performance, we need an ipsative assessment.
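To make the three reference frames concrete, here is a minimal Python sketch, assuming simple numeric scores. The function names, cohort, and cut score are illustrative examples, not part of any standard assessment tool:

```python
from bisect import bisect_right

def criterion_referenced(score, cut_score):
    """Compare a learner's score against an empirically derived cut score."""
    return score >= cut_score

def norm_referenced(score, cohort_scores):
    """Express a learner's score as a percentile rank within a cohort."""
    ranked = sorted(cohort_scores)
    return 100 * bisect_right(ranked, score) / len(ranked)

def ipsative(current_score, prior_score):
    """Compare a learner's current performance with his or her prior one."""
    return current_score - prior_score

# A score of 78 passes a criterion of 70, sits at the 80th percentile
# of this illustrative cohort, and shows a gain of 14 over a prior 64.
print(criterion_referenced(78, 70))                    # True
print(norm_referenced(78, [60, 65, 70, 78, 90]))       # 80.0
print(ipsative(78, 64))                                # 14
```

Note that the same raw score of 78 yields three different answers, which is exactly why the purpose of the assessment must be decided before the instrument is designed.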

2. Align Learning Outcomes With Assessment

The most critical component of assessment is its design. We have to design specific, measurable, objective, observable and clear (SMOOC) outcomes; instruct learners according to these outcomes; then measure learner performance against these outcomes. This approach allows educators to have a shared vision and language and, just as important, a shared definition of specific behaviors, which can be identified and measured.

3. Make Formative Assessment An Explicit Part Of The Instruction

Traditional instruction in an online program may involve organizing the curriculum into chronological units or modules of study and then assessing learners’ understanding of the material at the end of the learning unit (Guskey, 2010: 53). Yet assessment theory tells us that learners do best when the assessment is part of, not separate from, instruction. Rather than separating assessment from instruction and making assessment a purely summative exercise, eLearning courses should promote assessment as part of actual instruction.

4. Measure Learner Performance, Not Simply Knowledge

One way to do this is through a performance-based assessment, using a rubric or checklist to score performance. Checklists are binary and "low inference" in design—the scorer essentially assesses whether a behavior or desired indicator is "present" or "absent". While low-inference scoring guides are easy to complete and can be carried out by less experienced or less well-trained observers or instructors, they simply measure the presence of a behavior. They fail to capture the complexity, breadth, and depth of the performance itself.

In contrast, high-inference tools, or rating systems, incorporate descriptive information or "constructs" of the performance and rate these along with some sort of scoring scale (such as a Likert scale). With high-inference classroom observation tools, the observer must infer the constructs to be rated—such as enthusiasm, clarity of presentation, or empathy—recording the frequency through such scales as "consistently", "sometimes", or "always" (Rosenshine, 1970). Though they are more demanding to use, if used well, high-inference scoring guides, like rubrics, yield information that is both reliable and valid. Such information also better captures the quality, complexity, and intricacies of learning.
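The difference between the two scoring guides can be sketched in a few lines of Python. The indicator names, constructs, and frequency scale below are hypothetical examples for illustration, not a published observation instrument:

```python
def checklist_score(observations):
    """Low-inference checklist: each indicator is simply present or absent."""
    return sum(1 for present in observations.values() if present)

# Illustrative frequency scale mapped to points for a high-inference rubric.
RUBRIC_SCALE = {"rarely": 1, "sometimes": 2, "consistently": 3}

def rubric_score(ratings):
    """High-inference rubric: each construct is rated on a frequency scale,
    and the mean rating summarizes the quality of the performance."""
    points = [RUBRIC_SCALE[level] for level in ratings.values()]
    return sum(points) / len(points)

# The checklist only counts behaviors; the rubric rates constructs.
checklist = {"states objective": True,
             "checks understanding": False,
             "gives feedback": True}
ratings = {"enthusiasm": "consistently",
           "clarity of presentation": "sometimes",
           "empathy": "sometimes"}

print(checklist_score(checklist))       # 2
print(round(rubric_score(ratings), 2))  # 2.33
```

The checklist tells us two of three behaviors occurred; the rubric tells us something about how well each construct was enacted, which is the quality information the article argues low-inference tools fail to capture.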

Next article: We will look at specific technology options for assessment within an online program.

References:

The references for this article can be found in Burns, M. (2011, November). Distance Education for Teacher Training: Modes, Models and Methods.
