Bringing Light To The Smile Sheet: The Danger Of Complacency

Summary: Hello, learning professional! And I use the term “professional” with great intent. Terminology matters! At least here it does, as you will see… In this article, I will discuss learning measurement, and specifically smile sheets.

How To Bring Light To The Smile Sheet

My name is Will Thalheimer. I’m a learning consultant and research translator from the Greater Boston area (Massachusetts, in the United States). I’ve also just published a book on learning measurement, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form.

Christopher Pappas has asked me to write a series of articles on this topic. I’m going to make them fun. I will try, anyhow. Since Christopher is an innovative young gentleman, we’ve agreed to experiment with some embedded questions. We hope you like them. Indeed, we hope you like us! Either way, you’ll get to give us feedback through our brilliantly integrated questions.

Here Goes The First Article 

Do you hate smile sheets or love them? Tolerate or ignore them? Do you stash your results under your pillow at night to bring you good dreams or do you shred them without even a peek?

Wait! You might not know smile sheets by that name. You see, smile sheets lurk in the shadows of the learning field, stealth operators, often intent on blending in, hiding in plain sight as a chameleon might. Smile sheets are also known as:

  • Happy sheets.
  • Reaction forms.
  • Response forms.
  • Level 1’s.
  • Evaluations.
  • Feedback forms.
  • Et Cetera…

A smile sheet is a set of questions presented to learners during or after a learning event, used to gather learner perspectives on the learning experience or its effects. Smile sheets can be used for classroom training, eLearning, conference sessions, and more.

Because smile sheets go by so many names, confusion abounds, making it especially hard for busy professionals to search for valid information. Worse than that, smile sheets are so ubiquitous in our field that we’ve stopped examining them. We’ve stopped looking for improvements. Most of us have, anyway.

By the end of this series of articles, I hope to help you see smile sheets in a whole new light — to see new possibilities. That is my hope… Enjoy the read!

First Set Of Questions 

Let me first ask you three questions.

(An embedded set of questions appears here in the interactive version of this article.)


Thank you for answering the questions. In a later article in this series, we’ll reveal the results.

In the next section, you’ll be presented with a situation and will be asked to provide your wisdom. The future of the learning enterprise may just rest on your shoulders!

What’s A Chief Talent Officer To Do? 

Denise has worked in the workplace learning field for 25 years. She’s been a technical trainer, an Instructional Designer, an eLearning developer, a project manager, and a senior functional manager. Recently promoted to Chief Talent Officer (CTO), she now finds herself responsible for reporting on the success of the whole learning enterprise. Indeed, she has a meeting in two weeks with the CEO and his senior managers to review the previous year’s results.

As you might expect, Denise is a bit nervous, knowing the importance of first impressions. In her first meeting with the senior managers, Denise wants to establish a reputation for competence and business savvy. To prepare for the meeting, she gets input from her senior managers and reviews the reports from the previous five years.

Denise learns from her managers that last year’s report was very well received by the company’s senior management. What she sees in the five previous annual reports is a progression toward more and more visually appealing reports, with dozens of charts and tables documenting the activity of the training function. In the two most recent reports, she notes a growing emphasis on benchmarking the company’s results against others in the industry. The results appear to be revealing: on a five-point scale, her company’s most recent average is 4.15 against an industry average of 3.86 for their Level 1 results, and 3.94 against an industry average of 3.62 for their Level 3 results. While all the key questions are reported, the two questions averaged for each course have the following stems and use the same five-item Likert-like scale (strongly agree, agree, neutral, disagree, strongly disagree):

  • Overall I am satisfied with the quality of the learning experience.
  • The learning experience will help me be effective in my current role.
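For concreteness, the roll-up behind averages like 4.15 is simple arithmetic: map each Likert label to a number from 1 to 5, average within each course, then average across courses. Here is a minimal sketch in Python (the course names and responses are hypothetical, invented purely for illustration):

```python
# Hypothetical illustration of how Likert-style smile-sheet
# responses are typically rolled up into report averages.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def course_average(responses):
    """Average the 1-5 scores for one course's responses."""
    scores = [LIKERT[r] for r in responses]
    return sum(scores) / len(scores)

# Hypothetical responses to the two question stems above.
courses = {
    "Onboarding 101": ["agree", "strongly agree", "neutral", "agree"],
    "Safety Refresher": ["agree", "agree", "disagree", "strongly agree"],
}

per_course = {name: course_average(r) for name, r in courses.items()}
overall = sum(per_course.values()) / len(per_course)
print(per_course)   # per-course averages on the 1-5 scale
print(overall)      # the single number a report would headline
```

Note that this averaging treats the labels as equally spaced numbers, which is itself a questionable assumption about fuzzy Likert-like data.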

Denise figures that she won’t be held accountable for the results that are reflected in the report, but will be judged for the completeness and clarity of the report itself — and for her interpretation of the results. Given these considerations, Denise’s first inclination for this year’s report is to utilize the format of the previous year’s report, but bolster it with additional background data. Most importantly, she knows she needs to understand the report at a deep level.

(An embedded set of questions appears here in the interactive version of this article.)


Can you handle the truth? The report, as described, is largely meaningless from a learning-effectiveness standpoint.

Here’s what we know from the description:

  1. The company relies on smile sheet questions (aka response forms, reaction forms, evaluation forms, level 1’s) as their main source of feedback.
  2. The company uses delayed smile sheets, which they see as providing Level 3 information about employee performance.

Here’s the problem: Smile sheets have been found, in two meta-analyses covering over 150 scientific studies, to be virtually uncorrelated with learning results (Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008). What this means is that if we rely on smile sheet results as traditionally captured, we’re using bad information to make decisions. In particular, relying on traditional smile sheets tells us nothing about (1) learning effectiveness, nor (2) the real worth of the learning function. Moreover, comparing one’s results to industry averages is equally meaningless, because you’re comparing your meaningless smile-sheet data to other companies’ meaningless smile-sheet data. Finally, while beautiful visualizations may add credibility to your report, they are just as likely to hide bad data from those looking at the report.
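To make the “virtually uncorrelated” finding concrete, here is a toy simulation (entirely synthetic numbers, not data from either meta-analysis): when satisfaction ratings and learning outcomes are generated independently, the Pearson correlation between them sits near zero, so ranking courses by smile-sheet score tells you nothing about which courses actually taught more.

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Synthetic data: satisfaction drawn independently of learning gain,
# mimicking the "virtually uncorrelated" meta-analytic finding.
satisfaction = [random.uniform(3.0, 5.0) for _ in range(500)]
learning_gain = [random.uniform(0.0, 1.0) for _ in range(500)]

r = pearson(satisfaction, learning_gain)
print(round(r, 3))  # hovers near zero: high ratings do not imply more learning
```

In real evaluations you would compute this correlation between actual smile-sheet scores and measured learning outcomes; the meta-analyses cited above did essentially that across many studies and found correlations close to zero.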

Let me be specific in regard to the answer choices from the last question:

A. Last year’s report was at least moderately successful in documenting the worth of the learning function.

  • NOT TRUE. The worth of the learning function is related to its ability to produce successful on-the-job results. Giving learners poorly conceived delayed smile sheets is NOT good enough, even if you call them Level 3’s.

B. Last year’s report did a reasonably good job in gauging learning results.

  • NOT TRUE. They used Likert-like scales and traditional smile sheet questions, which research has shown to be uncorrelated with learning results.

C. Last year’s results show that her company produced more effective learning than the industry average.

  • NOT REALLY TRUE. Last year’s results showed that her company produced somewhat higher scores on meaningless smile sheets, but since traditional smile sheets are NOT correlated with learning results, the report’s claim cannot be trusted.

D. The visuals from last year’s reports are likely to have been very helpful in clarifying the organization’s learning results.

  • While well-conceived visuals can help clarify, what’s most critical is that visuals are based on good data. Bad questions do not make good data. Instead of clarifying good data, the beautiful visualizations here probably hid bad data.

E. Last year’s results are largely meaningless but should be reported anyway.

  • Unconscionable!

F. Last year’s results are largely meaningless but should be reported anyway, with the caveat that major improvements will be incorporated into the following year’s report.

  • This is an acceptable answer. If Denise chooses this answer, she can explain the need for a better evaluation process going forward.

G. Last year’s results are largely meaningless and should not be reported. Instead, a critique should be offered along with a plan for improvement of the learning-measurement approach.

  • This is acceptable, but dangerous. Denise’s stakeholders will be suspicious.

As with any case, there is a lot of information left unsaid here. Still, the scenario offers a good starting point for our discussions of smile sheets. It hints at the following:

  1. Traditional smile sheets are flawed.
  2. Too often, our organizations don’t see these flaws and take our smile-sheet results as a clear barometer of our effectiveness.
  3. Likert-like scales and numeric data are too fuzzy to support smile-sheet decision making.
  4. Benchmarking our company’s results against meaningless data from other companies is a fool’s game.
  5. Beautiful visualizations of our data are a double-edged sword. Too often they hide the fact that the data is based on poor data-gathering approaches.

The Next Article In This Series 

In the next article in this series, we’ll dig deeper into the reasons that traditional smile sheets have proven to be so ineffective.

This Article’s Smile-Sheet Questions

(An embedded set of questions appears here in the interactive version of this article.)



  • Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341–358.
  • Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93(2), 280–295.