
Yes, You Should Pilot Your Online Course: A Few Things To Consider As You Do

Summary: To pilot or not to pilot? That is often the question for online learning programs. However, the simple "pilot" is surprisingly complex, and there is much to consider as you pilot (or fail to). This article explores some of these considerations.

Whether, How, Why, And When To Pilot Your Online Course

To pilot or not to pilot? That is often the question in online course design. The answer sounds simple enough: of course you should! But, as with most things, the devil is in the details. What is a pilot exactly? How do we pilot? When is the right time to do a pilot? And why do we pilot? As I’ve discovered over the years, the humble pilot is more complex than it seems, and there’s not a lot out there to guide us on how to do it. In that spirit, this article attempts to answer some basic questions around whether, how, why, and when to pilot your online course.

What Is A "Pilot"?

Good question. The answer, like so many things, depends on who is asking. In the world of teacher education, which I inhabit, a pilot is a "user test" or a "dry run" of the online course before it is fully launched. It is an opportunity to "test out" the course in "Petri dish" conditions with a smaller cohort of users, gathering information on the technology, directions, content, activities, and the whole User Experience so that any problems can be fixed before the course is "fully launched".

For others, a pilot may be "beta testing", a term commonly used when developing technology products. Intrinsic to beta testing is the notion that an online course is essentially a piece of software. A pilot or "beta test" places the material in the same online platform in which it will be hosted so that any problems can be identified, fixed, and debugged. Beta testing may be more narrowly, and technically, focused on the technology and design-related elements of the course: bugs, broken links, APIs that don’t work, browser incompatibility, and issues of functionality, navigability, and use. Unlike what I refer to above as "user testing", beta testing (sometimes called "usability testing", and even, confusingly, "user testing"... You see how messy this is!) may be carried out by a small group of people.
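
To make the beta-testing idea concrete: some of the more mechanical checks, such as hunting for broken links, can even be scripted rather than done by hand. Below is a minimal sketch in Python, assuming you have already extracted a list of page and resource URLs from your course; the URLs shown are hypothetical placeholders, and the widely used requests library does the actual checking.

```python
# A minimal broken-link audit for a course beta test. The URLs below are
# hypothetical placeholders for links extracted from your course pages.
import requests

course_links = [
    "https://example.com/course/module-1",
    "https://example.com/course/quiz-1",
]

for url in course_links:
    try:
        # HEAD keeps the check lightweight; some servers reject HEAD
        # requests, in which case you can fall back to GET.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code >= 400:
            print(f"BROKEN ({response.status_code}): {url}")
    except requests.RequestException as exc:
        print(f"ERROR: {url} -> {exc}")
```

A script like this catches only the technical layer, of course; questions of navigability and use still need human testers.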

In the government, bilateral aid agency, and corporate-funded large-scale international education project world I also inhabit, a "pilot" often has an ex-post meaning. Typically, the first iteration of an online program is the pilot. In other words, those of us in this world often roll out an online program with minimal user testing, or none at all. During the course of the online program, with our first (fairly large) cohort, we begin to identify and document technical, design, and teaching and learning issues. Hopefully, we then fix those issues in the second go-round, though unfortunately, because of funder priorities and timelines, this doesn’t always happen. There are many problems with this definition of a "pilot", the most critical of which is that we often don’t do the up-front work involved in an ex-ante pilot (developing surveys, guiding questions, "look-fors", etc.), so we miss out on potentially valuable information.

Additionally, in large-scale, externally funded education programs, a "pilot" may, in fact, be an "evaluation". I don’t think this is particularly in keeping with the spirit of a pilot (or fair to course designers), but a corporation, government agency, or foundation may, again, not have time, and may want to immediately assess and evaluate the fruits of its investment.

In other fields (say, academia), pilots may be part of field tests or part of research design.

And, in fact, pilots everywhere may be a mix of all of the above.

I’ll conclude these "variations" on the "what is a pilot?" theme by reiterating that probably the most basic and true thing about a pilot is that what it is depends on who is doing it.

Why Should We Pilot?

At the risk of being somewhat repetitive, there are numerous reasons to pilot your online course. Arguably, the most important is that piloting has a formative function, informing designers about what design and navigation elements work well, work poorly, or do not work, so they can be fixed.

Pilots serve other purposes too. They serve as an "early warning system" about the technology. There are numerous technology-related questions to ask during a pilot, but two of the most critical are: Is this an appropriate platform or virtual learning environment? Does the technology facilitate or impede the kind of teaching and learning we want to see in the course?

Pilots also serve as an early warning system about the educational aspects of the course. Through pilots, we may discover that content, activities, and assessments are simply too complex (or too simplistic), not relevant or useful for our audience, or that directions are so unclear that learners don't know what to do.

Pilots have numerous purposes and numerous beneficiaries. In addition to course designers, they can help funders and decision makers understand what additional resources may be necessary to ensure that these online courses are a success. They can help orient, prepare, and introduce online learners (especially novice ones) to the rigors, demands, and responsibilities of an online course, especially medium- and long-duration ones, as we often have in many education programs. They also help online instructors self-assess (and be assessed on) their own performance so they can make adjustments in terms of facilitation strategies, response time, presentation of content, directions, etc. And they can help online program designers see what sorts of offline supports are necessary to help (again, in my case) teachers transfer learning from the online course to their actual classrooms.

To summarize: Pilots help a range of actors in the online course design and delivery process, and they serve multiple purposes. Most critically, pilots allow us to "dip stick" the effectiveness, usability, and functionality of the course from a broad user (in this case, an online learner) perspective.

When Should We Pilot An Online Course?

Again, that depends on numerous factors, e.g., your course development timeline, what you want to know, and when you want to know it. Most of the time, I (try to) pilot my own courses when they are 100% complete. However, I’ve really begun to think that this represents the triumph of my OCD, perfectionist personality over the receipt of demonstrably better information.

You can pilot a course that is 80% to 100% complete, as long as the important content is included (Benjamin Martin of Learning Solutions suggests 90%) [1]. You can also pilot the course while it is still under development.

From my own unscientific online research, the best answer is that you pilot when two conditions are met. First, you reach a point where you need information from potential users. Second, the course is built out "enough" that your usability or beta testers can give you the information you need.

How Should We Pilot?

By now you should know that the answer is, "Well, that depends on a number of factors." There are no hard and fast rules on how to pilot, so I’ll share some of my own thoughts (and hope the eLearning Industry community weighs in here). How you conduct your pilot depends on its purpose, its beneficiaries, the audience for pilot results (are they used internally or disseminated externally?), what you want to know, and what you’ll do with this information.

A pilot has, or should have, two main characteristics. First, it should be done before the full launch of an online program, not after. Second, it should be formative in nature, not evaluative. The aim of a pilot is to identify what works and what doesn’t for the user, so that designers can undertake evidence-based corrective actions, inputs, supports, and design changes to assure a successful teaching and learning experience for the online instructor and learners.

With those points in mind, here are a few considerations for conducting your online course pilot:

  • Developing Key Questions Or Criteria For Your Pilot
    What are you looking for exactly? It’s important to develop guiding or focusing questions and/or criteria to help you and your "user testers" know what to look for, even if these questions are, "What works? What doesn’t? What should we fix?".
  • Identifying And Creating Your Data Collection Methods And Instruments
    Do you have testers document issues (either in the course itself or via a form)? Do you observe them as they test drive the course, noting issues and questions? Do your "test drivers" just take the course, document issues, and leave? Will you use surveys, interviews, or focus groups to get a richer idea of the User Experience? I’ve used all of these approaches, alone and in combination; all are valid depending on what you want to know and how much time and energy you can devote to getting the information you need. (For one way to structure the feedback you collect, see the sketch after this list.)
  • Selecting Your Audience
    Who are your user testers, usability testers, pilot testers, beta testers (whatever term you want to use—I’m obviously struggling with terminology here!)? How will they be selected? I recommend drawing from as diverse a potential pool as possible so you have a sample that is representative of your ultimate, larger user cohort.
  • Size Of Your Usability Testing Pool
    There is no correct sample size. A large sample (say, 30) may get you a lot of diverse information, but I’ve seen redundancy and narrowness of information from large groups of usability testers. Conversely, a small sample may not get you the information you need (or it might). Users who are new to online learning may not know enough to give sufficiently critical and targeted feedback. On the other hand, experienced online learners may not be representative of your target audience. I think the rule of thumb here is to ask: What mix of users can give you the best information? How much time and resources can you devote to collecting information? How much time and resources can you devote to analyzing it?
  • Location
    Do users take the online course as they normally would, say, at home? Do you bring them into a lab? Or is it something blended? How is the online instructor involved in each scenario?
  • Length Of The Pilot
    If the course is four weeks long, do you give your "pilot testers" four weeks? Or do you brute-force it and have them do it over a few days? I’ve done both, and there are tradeoffs in both approaches, mainly concerning time, budget, and user energy levels. Again, if it’s a facilitator-led course, what’s the role of the facilitator here?
  • The Role Of Course Designers In The Pilot
    Do the same people who design the course pilot it, or is this done by a different person or group? Again, there are tradeoffs in both approaches, one of which is how honest user testers might be in sharing their assessment of the course (or courses), and how receptive designers are to hearing criticism. This is often more complex than we might think.
  • How Real Do You Make It?
    Do you help users when they struggle? Or, to simulate a real distance-based online experience in which users may be separated from their instructor and other learners, do you let them struggle in order to gather more information for your pilot?
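
To make the data collection consideration above a bit more concrete, here is a minimal sketch, in Python, of one way to structure a pilot feedback log. All of the field names and the sample entry are hypothetical; the three core fields simply mirror the guiding questions "What works? What doesn’t? What should we fix?".

```python
# A minimal sketch of a structured pilot-feedback log, written out as a
# CSV file for later analysis. Field names and the sample entry are
# hypothetical; adapt them to your own guiding questions and criteria.
import csv
from dataclasses import dataclass, asdict

@dataclass
class PilotFeedback:
    tester_id: str      # anonymized ID of the user tester
    module: str         # where in the course the observation occurred
    what_works: str     # "What works?"
    what_does_not: str  # "What doesn't?"
    what_to_fix: str    # "What should we fix?"

entries = [
    PilotFeedback(
        tester_id="T01",
        module="Module 1",
        what_works="Clear video introduction",
        what_does_not="Quiz directions are confusing",
        what_to_fix="Rewrite the quiz directions",
    ),
]

with open("pilot_feedback.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(entries[0]).keys()))
    writer.writeheader()
    for entry in entries:
        writer.writerow(asdict(entry))
```

However you collect the feedback (forms, observation notes, interviews, focus groups), keeping it in one consistent structure makes the analysis step far less painful.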

Piloting our online courses is one of the most important actions we can take in terms of both quality assurance and ensuring a valuable experience for our online learners. Unfortunately, there’s not a lot "out there" on "best practices" in piloting online courses. A positive potential outcome from such scarcity is that we can explore different options to see what works best for our online courses. I hope this article gets us, as an eLearning community, started on more conversations around piloting online courses. I look forward to that conversation.

Reference:

[1] Martin, B. (2010). Beta testing an online course. Retrieved from https://bit.ly/2kQD33h