Yes, You Should Pilot Your Online Course: A Few Things To Consider As You Do

Whether, How, Why, And When To Pilot Your Online Course

To pilot or not to pilot? That is often the question in online course design. The answer sounds simple enough: of course you should! But, as with most things, the devil is in the details. What is a pilot, exactly? How do we pilot? When is the right time to do a pilot? And why do we pilot? As I’ve discovered over the years, the humble pilot is more complex than it seems, and there’s not a lot out there to guide us on how to do it. In that spirit, this article attempts to answer some basic questions around whether, how, why, and when to pilot your online course.

What Is A "Pilot"?

Good question. The answer, like so many things, depends on who is asking. In the world of teacher education, which I inhabit, a pilot is a "user test" or a "dry run" of the online course before it is fully launched. It is an opportunity to test the course under "Petri dish" conditions with a smaller cohort of users, gathering information on the technology, directions, content, activities, and the whole User Experience so that any problems can be fixed before the course is fully launched.

For others, a pilot may be "beta testing," a term commonly used in technology product development. Intrinsic to beta testing is the notion that an online course is essentially a piece of software. A pilot or "beta test" places the material in the same online platform in which it will ultimately be hosted so that any problems can be identified and fixed. Beta testing may be more narrowly, and technically, focused on the technology and design-related elements of the course, such as bugs, broken links, APIs that don’t work, browser incompatibility, and issues of functionality, navigability, and use. Unlike what I refer to above as "user testing," beta testing (sometimes called "usability testing," and even, confusingly, "user testing"... you see how messy this is!) may be carried out by a small group of people.
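Some of these narrower technical checks can even be scripted. As a minimal sketch, assuming nothing beyond the Python standard library (the URL and script below are placeholders I’ve invented for illustration, not a prescribed tool), here is one way a beta tester might scan a single course page for broken links:

```python
# Hypothetical sketch: automating one narrow slice of a beta test by
# scanning a single course page for broken links. Uses only the Python
# standard library; the course URL below is a placeholder.
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import Request, urlopen


class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags on one page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def check_course_page(page_url):
    """Fetch one course page and report links that fail to load."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    for href in collector.links:
        target = urljoin(page_url, href)  # resolve relative links
        if not target.startswith(("http://", "https://")):
            continue  # skip mailto:, in-page anchors, etc.
        try:
            # HEAD keeps the check lightweight; some servers reject it,
            # in which case a GET fallback would be a sensible addition.
            urlopen(Request(target, method="HEAD"), timeout=10)
        except (HTTPError, URLError) as err:
            print(f"BROKEN: {target} ({err})")


if __name__ == "__main__":
    # Placeholder: point this at a page in your own course.
    check_course_page("https://example.com/course/module-1")
```

A script like this complements, rather than replaces, human beta testers, who are the ones who catch the navigability and usability problems.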

In the world of government, bilateral aid agency, and corporate-funded large-scale international education projects, which I also inhabit, a "pilot" often has an ex-post meaning. Typically, the first iteration of an online program is the pilot. In other words, those of us in this world often roll out an online program with minimal user testing, or none at all. During the course of the online program, with our first (fairly large) cohort, we begin to identify and document technical, design, and teaching and learning issues. Hopefully, we then fix those issues in the second go-round, though unfortunately, because of funder priorities and timelines, this doesn’t always happen. There are many problems with this definition of a "pilot," the most critical of which is that we often don’t do the up-front work involved in an ex-ante pilot (developing surveys, guiding questions, "look-fors," etc.), so we miss out on potentially valuable information.
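To make that up-front work more concrete, here is a hypothetical sketch of what a set of pilot "look-fors" might look like as a simple survey instrument in Python. The categories echo the course elements mentioned earlier (technology, directions, content, activities, and the overall User Experience), but the specific prompts are my own illustrative assumptions, not a validated instrument:

```python
# Hypothetical sketch of pilot "look-fors" as a structured instrument.
# The categories and prompts below are illustrative assumptions, not a
# validated survey; adapt them to your own course and audience.
PILOT_LOOK_FORS = {
    "technology": [
        "Did every page, video, and link load on your device and browser?",
        "Did the platform ever get in the way of completing an activity?",
    ],
    "directions": [
        "Was it always clear what you were supposed to do next?",
        "Where did the instructions leave you guessing?",
    ],
    "content": [
        "Was the content pitched at the right level for you?",
        "Which sections felt least relevant to your work?",
    ],
    "activities_and_assessments": [
        "Could you complete each activity in the time suggested?",
    ],
    "overall_experience": [
        "Where did you get stuck, and what did you do about it?",
    ],
}


def print_survey(look_fors):
    """Render the instrument as a plain-text survey for participants."""
    for category, prompts in look_fors.items():
        print(category.replace("_", " ").title())
        for number, prompt in enumerate(prompts, start=1):
            print(f"  {number}. {prompt}")
        print()


if __name__ == "__main__":
    print_survey(PILOT_LOOK_FORS)
```

Writing the instrument down before the first cohort begins, in whatever form, is the point; the structure above is just one way to keep the "look-fors" organized and reusable across iterations.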

Additionally, in large-scale, externally funded education programs, a "pilot" may, in fact, be an "evaluation." I don’t think this is particularly in keeping with the spirit of a pilot (or fair to course designers), but a corporation, government agency, or foundation may, again, not have time, and may want to immediately assess and evaluate the fruits of its investment.

In other fields (say, academia), pilots may be part of field tests or of a larger research design.

And, in fact, pilots everywhere may be a mix of all of the above.

I’ll conclude these "variations" on the "what is a pilot?" theme by reiterating that probably the most basic and true thing about a pilot is that what it means depends on who is doing it.

Why Should We Pilot?

At the risk of being somewhat repetitive, there are numerous reasons to pilot your online course. Arguably, the most important is that piloting has a formative function, informing designers about which design and navigation elements work well, which work poorly, and which do not work at all, so problems can be fixed.

Pilots serve other purposes too. They serve as an "early warning system" about the technology. There are numerous technology-related questions to ask during a pilot, but two of the most critical are: Is this an appropriate platform or virtual learning environment? Does the technology facilitate or impede the kind of teaching and learning we want to see in the course?

Pilots also serve as an early warning system about the educational aspects of the course. Through pilots, we may discover that content, activities, and assessments are simply too complex (or too simplistic), not relevant or useful to our audience, or that directions are so unclear that learners don’t know what to do.

Pilots have numerous purposes and numerous beneficiaries. In addition to course designers, they can help funders and decision makers understand what additional resources may be necessary to ensure that an online course succeeds. They can help orient, prepare, and introduce online learners (especially novice ones) to the rigors, demands, and responsibilities of an online course, especially those of medium or long duration, as is common in many education programs. They also help online instructors self-assess (and be assessed) on their own performance so they can make adjustments in facilitation strategies, response time, presentation of content, directions, etc. And they can help online program designers see what sorts of offline supports are necessary to help (again, in my case) teachers transfer learning from the online course to their actual classrooms.

To summarize: Pilots help a range of actors in the online course design and delivery process, and they serve multiple purposes. Most critically, pilots allow us to "dipstick" the effectiveness, usability, and functionality of the course from a broad user (in this case, an online learner) perspective.

When Should We Pilot An Online Course?

Again, that depends on numerous factors: your course development timeline, what you want to know, and when you want to know it. Most of the time, I (try to) pilot my own courses when they are 100% complete. However, I’ve begun to think that this represents the triumph of my OCD, perfectionist personality over the timely receipt of demonstrably better information.

You can pilot a course when it is 80% to 100% complete, as long as the important content is included (Benjamin Martin of Learning Solutions suggests 90%) [1]. You can even pilot a course while it is still under development.

From my own unscientific online research, the best answer is that you pilot when two conditions are met. First, you have reached a point where you need information from potential users. Second, the course is built out "enough" that your usability or beta testers can give you the information you need.

How Should We Pilot?

By now you should know that the answer is, "Well, that depends," on a number of factors. There are no hard and fast rules on how to pilot, so I’ll share some of my own thoughts (and hope the eLearning Industry community weighs in here). How you conduct your pilot depends on its purpose, its beneficiaries, the audience for the pilot results (will they be used internally or disseminated externally?), what you want to know, and what you’ll do with the information.

A pilot has, or should have, two main characteristics. First, it should be done before the full launch of an online program, not after. Second, it should be formative in nature, not evaluative. The aim of a pilot is to identify what works and what doesn’t for the user, so designers can undertake evidence-based corrective actions, inputs, supports, and design adjustments to ensure a successful teaching and learning experience for the online instructor and learners.

With those points in mind, let the considerations above (purpose, beneficiaries, the audience for the results, and what you’ll do with the information) guide how you conduct your own online course pilot.

Piloting our online courses is one of the most important actions we can take, both for quality assurance and for ensuring a valuable experience for our online learners. Unfortunately, there’s not a lot "out there" on "best practices" in piloting online courses. One potential positive outcome of this scarcity is that we can explore different options to see what works best for our own courses. I hope this article gets us, as an eLearning community, started on more conversations around piloting online courses. I look forward to that conversation.

Reference:

[1] Martin, B. (2010). Beta testing an online course. Learning Solutions. Retrieved from https://bit.ly/2kQD33h
