Leveraging Learner And Course Evaluation Data

Summary: As an Instructional Designer in higher education, I’m often asked how to effectively use course and learner evaluation data. In this article, I’ll explain how to select and design questions for course evaluations, how to analyze the results, and share practices for implementing your findings.

Course Evaluations: Where Do I Start?

If possible, you want to design your course with your learners in mind. Several eLearning Industry articles cover what to do when designing your eLearning or online course (and a simple search will provide you with more resources).

While developing a course with your learners in mind is best practice, it is not always feasible. For example, a faculty member may start at a new university in the fall with only one or two weeks before the semester begins to design and develop a course. Or a trainer may start at a new company and be tasked with implementing a program they did not design. Time is a huge factor in developing an effective course, especially when taking your learners' needs into consideration. With the COVID-19 outbreak, many face-to-face courses need to be ready to move online quickly at any time, and in some cases this quick shift does not take online learning into consideration. Time is not always on our side. So, what do we do? We plan to collect data during and after the course to ensure it is meeting our learners' needs. These considerations apply to both online and on-ground courses.

Collecting In-Progress Data

There are many ways to ensure your learners are on track while teaching a course. The first, and probably the most obvious, is through the quality of work they are submitting via assignments, assessments, activities, discussions, etc. If learners are meeting the course objectives through formative assessments, chances are the content is being delivered to them effectively. If scores do not reflect learning, changes need to be made. To figure out where the disconnect is, you can use a few data-collection methods.
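As a rough illustration of spotting such a disconnect, class averages on formative assessments can be checked against a threshold you choose. This is a minimal sketch; all assessment names, scores, and the 75% threshold are hypothetical:

```python
from statistics import mean

# Hypothetical formative-assessment scores (percent) for a small class.
gradebook = {
    "Quiz 1 (Objective A)": [88, 92, 85, 90],
    "Quiz 2 (Objective B)": [62, 58, 70, 65],
    "Discussion 1":         [95, 90, 88, 93],
}

# Flag any assessment whose class average falls below the chosen threshold.
THRESHOLD = 75
flagged = [name for name, scores in gradebook.items()
           if mean(scores) < THRESHOLD]

for name in flagged:
    print(f"Review needed: {name} (class average {mean(gradebook[name]):.1f}%)")
```

A dip on one assessment points to a content- or delivery-level issue for that objective, while low scores for only a few individuals points to learner-level issues, which the one-on-one meetings and evaluations below can help untangle.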

Depending on your course size and the number of students struggling, you could hold one-on-one meetings with individual learners. I like meeting students individually for a variety of reasons. First, especially in online courses, these mini-meetings show that you are a real person behind the computer screen, that you care, and that you want to see them grow and develop in the course. Meeting virtually can help close the transactional distance, or space, learners sometimes feel when taking online courses. You can also hear tones, see facial expressions, and pick up other indicators that are hard to read in text-based formats. Second, meetings may expose whether the issue is at an individual level, content level, or instructor level.

In some cases, however, learners may not feel comfortable being completely honest with the instructor (they may think it will affect their grade, fear ramifications, feel anxious about meeting with the instructor, or simply communicate better in writing). In these cases, consider using a written module or mid-course evaluation. By giving learners an opportunity to provide feedback early on, you as the instructor can quickly identify any problem areas. When designing these types of evaluations, I suggest the following:

  • Make them anonymous
    Learners will feel more comfortable providing feedback in this type of environment.
  • Keep it short
    Do not bog learners down with a lot of questions. You want a quick snapshot of how they are learning and taking in content from the course. Try to ask open-ended questions with areas for specific notes to be taken into consideration.
  • Address the findings
    Collecting the data is nice, it provides you with the information you need. However, make sure to address the findings with your class. Which data pieces can you address in the course moving forward (e. g., shorten discussion posts requirements, reduce the number of weekly readings, increase peer activity)? Which comments are misunderstandings about the course (e. g., cannot work ahead because items are scaffolded)?
  • Set an end date
    If implementing it in class, give learners time in class to complete it. You also want to make sure you have enough time to evaluate the data once you have it. In online courses, I personally set aside about a week for students to complete the survey and then another two to three days to analyze the results. An easy way to estimate the time needed is to complete the survey yourself and add five minutes (you are already familiar with the questions you are asking; your learners are not).

Sample questions include:

General Instructor and Course Strengths

  • What about the class best helps you learn?
  • What would you like to see more of between now and the end of the course?
  • What does the instructor do well?

General Instructor and Course Weaknesses

  • What changes could the instructor make to improve your learning?
  • What do you think the course could cut down on?

Personal Learning—Student Snapshot

  • How many hours per week, outside of regularly scheduled class meetings, do you spend on this class?
  • What could you as a student do to improve the class?

For those who have not used a mid-course evaluation, the resources below can help guide you. These sites offer different evaluation formats as well as questions to consider.

After you have collected the data, now what? First, read through the results. In the initial review, do not make any rash decisions that will affect the course. Every person takes feedback differently, so make sure you are aware of what works for you. In some cases, reading the results and letting them stew for a day may help you come back with a clearer mind. In other cases, having a peer read through the results provides a third-party view. Whatever works for you, make sure you give yourself enough time to analyze the learner feedback.

Second, group all answers by question to identify themes in the learner responses. You will always have a few outliers; that is, there will be answers on both ends of the spectrum (extremely positive and extremely critical). Try not to get too bent out of shape if one student is overly critical of your class; there is always one. Now, if 70% of your responses show that there is too much reading for the week, you need to consider what you can do to address the issue. Are there one or two readings that are essential while others can be moved to “additional” or “supplementary”? Can you order the readings from most to least important for the week? Can a video cover the topics of a reading? Can learners skim some articles or readings? From the identified themes, make a game plan of the changes you will implement and the items you do not want to change in your course.
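Once open-ended responses have been coded into short theme labels, tallying them and surfacing the ones raised by a large share of respondents (like the 70% example above) is straightforward. This is a minimal sketch; the theme labels and responses are hypothetical:

```python
from collections import Counter

# Hypothetical theme codes assigned to each learner's open-ended response
# for one question ("What do you think the course could cut down on?").
coded_responses = [
    ["too much reading"],
    ["too much reading", "long discussion posts"],
    ["too much reading"],
    ["unclear rubric"],
    ["too much reading", "long discussion posts"],
]

# Tally how many learners mentioned each theme.
counts = Counter(theme for response in coded_responses for theme in response)
n = len(coded_responses)

# Keep only themes mentioned by at least 70% of respondents.
major_themes = {t: c for t, c in counts.items() if c / n >= 0.7}
print(major_themes)  # {'too much reading': 4}
```

The single mention of "unclear rubric" is the kind of outlier described above: worth a glance, but not grounds for restructuring the course.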

Third, address your students. I recommend doing this within a week of receiving and analyzing the results. First thank them for their feedback and the time they took to complete the evaluation, and remind them that the results are anonymous. Let them know your process: you read through the comments, thought about the changes you can make or adjust, and identified the items that are non-negotiable. Then, share the findings. Not every response needs to be shared with the group, but you should address the major themes you found. Some instructors do this in an email, others in a video or a PowerPoint presentation; use whatever method works for you.

For non-negotiable items, make sure you explain to your students why these elements are required. For example, in an online course I teach each semester, students want me to allow all discussion posts and replies to be made on the same day (they are required to post throughout the week). I always address this by explaining that online discussions are different from in-person discussions: the direction of the conversation in an online format can change in a matter of hours, let alone days. Reading posts and responding to peers a few times a week, on separate days, therefore lets learners take full advantage of the online conversations. It helps them retain the information and consider it throughout the week. Explaining why the requirement is in place helps learners understand why my discussions are structured that way and how that structure benefits them.

Lastly, retain the mid-course evaluation data. Specific items can be considered at the end of the course to help clarify course concepts, make changes or modifications, or add in additional pieces of content for the next time the course runs. Additionally, if this is a course that you teach repeatedly, feedback themes across different offerings of the course can help to illuminate course changes that may need to be taken into consideration.

Collecting End Of Course Data

In addition to collecting mid-course evaluation data, you can (and should) be gathering end of course data from your learners. End of course data allows you to collect feedback on the full course. It also provides you with a list of items to potentially help update your course before it is offered again. Many universities and companies collect end of course data, but I’m always shocked to find how many do nothing with the data once collected. Instructors normally review the data once it is available but then push it aside after reading the comments. If this is you, stop. This data is crucial for improving your course after it runs and ensuring your learners are connecting with the course content. Rather than push it aside, use the data to make improvements. I previously wrote an article on "7 Keys For Successfully Updating Online Courses"; one of the seven keys is using your student data to update your course. Before you do that, however, you need to ask the right questions.

Many universities have a repository of questions to consider for your end of course evaluation. Make sure to select questions that will help you, as the instructor, make changes and elicit the feedback you need to gain a full picture of the learner experience. That is, consider using open-ended questions so that students can describe which specific elements did or did not work in the course. In addition, if you tried a new tool in the class, ask a direct question about its usage (e.g., Did using Padlet increase your knowledge? What are your thoughts on using VoiceThread in the classroom? How effective was Kahoot! in your own learning?).

Check whether your company, college, department, or university has a core set of questions required of all courses (see Penn State’s link below for an example of university-mandated questions). In most cases, these questions are used for promotion or evaluation of instructors and must be the same across all university, college, or company courses. If a set of questions is required, make sure the additional questions you select do not overlap with the required questions already being asked. Many organizations have a bank of questions to select from for your individual course evaluation. Below are a few popular banks of questions.

Unlike the mid-course evaluation, you will not need to share the results of the end of course evaluation with your students (typically because the data is shared after the course has closed, ensuring student grades are not impacted by the feedback given). You can, and should, still analyze the data. Once you have identified themes from your learner feedback, try to update your course based on these findings. If you have evaluation data from different times the course has been offered (e.g., spring and summer), you can combine the data to see whether any themes appear across different student populations. If a finding appears across multiple sessions, take it seriously: the root of the problem is not within a specific learner population but spans learners at different points in time.
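The cross-offering check described above amounts to intersecting the theme lists from each session. This is a minimal sketch; the term names and themes are hypothetical:

```python
# Hypothetical major themes identified in separate offerings of the same course.
themes_by_term = {
    "Spring": {"too much reading", "unclear rubric", "late feedback"},
    "Summer": {"too much reading", "late feedback"},
    "Fall":   {"too much reading", "group work logistics"},
}

# Themes that recur in every offering point to the course design itself,
# not to a particular cohort of learners.
recurring = set.intersection(*themes_by_term.values())
print(recurring)  # {'too much reading'}
```

Themes that appear in some offerings but not others (like "late feedback" here) may reflect a specific cohort or a particular semester's circumstances rather than the course design.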

Lastly, it is important to note that not all course evaluation data will apply directly to the course design. In some cases, the data may show that the instructor needs to improve their teaching method or approach. For example, an instructor may create videos each week to cover topics, but the quality of the videos is poor (blurry or hard-to-see images, sound or background-noise issues, etc.). These videos are something the instructor needs to improve, and they have no bearing on the course design itself. Another example is the feedback learners are (or are not) receiving: learners may be getting confusing or unclear feedback, or no feedback at all. Again, this element of the course evaluations has more to do with instructor style and teaching methods than with the course design or content alone.


Learner and course evaluation data is a critical resource that can help you know and understand your learners. I hope this article provides you with new insights into how to leverage this data to improve your courses and teaching practices. As mentioned earlier, I suggest sharing results with a third party to help explore different strategies for resolving the issues the results surface. If you have any other practices for using course and learner feedback, feel free to drop them in the comments section below!