What Good Learning Design Looks Like
There is a particular kind of eLearning module that most of us have sat through. It opens with a regulation summary. It progresses through a series of bullet-pointed obligations. It ends with a ten-question quiz that tests recall of what was just displayed on screen. And then it marks you as compliant. This approach has always been a poor substitute for learning. For the EU AI Act, it is also a liability.
The problem is not effort or intention; it is design. Most compliance eLearning is built around information transfer, not behavior change. These are different problems requiring different solutions, and the learning science on this has been consistent for decades.
Transfer—the ability to apply learning in a new context—does not happen automatically after exposure to content. Research on context-dependent memory shows that retrieval is cued by the environment in which learning occurred. If someone learns what the AI Act requires by reading slides, they are most likely to recall that information when sitting in front of slides. They are least likely to recall it when they are in a meeting, under pressure, about to make a decision on whether to flag an AI tool to their compliance team.
Spaced retrieval—returning to material over time, rather than covering it once—consistently outperforms single-session training for long-term retention. Yet the vast majority of compliance programs are built as one-and-done events, often timed to coincide with a regulatory deadline rather than a learning curve. The result is training that produces completion certificates, not competence. For a regulation that explicitly requires workers to demonstrate appropriate AI literacy, that distinction matters enormously.
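To make that concrete, here is a minimal sketch of what building a spacing schedule into a program might look like. The interval values are illustrative assumptions, not figures drawn from the research; the design point is simply that scheduled review moments exist at all.

```typescript
// A minimal sketch of a spaced-retrieval schedule. The intervals
// (1, 7, 21, and 60 days) are illustrative placeholders, not a
// research-mandated sequence; the point is that the program has a
// schedule rather than a single session.

const REVIEW_INTERVALS_DAYS = [1, 7, 21, 60];

function buildReviewSchedule(firstSession: Date): Date[] {
  return REVIEW_INTERVALS_DAYS.map((days) => {
    const review = new Date(firstSession);
    review.setDate(review.getDate() + days);
    return review;
  });
}

// A learner trained on 1 March 2025 would be re-prompted with short
// retrieval activities on 2 March, 8 March, 22 March, and 30 April.
console.log(buildReviewSchedule(new Date(2025, 2, 1))); // months are 0-indexed
```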
What Article 4 Actually Demands From A Learning Design Perspective
Article 4 of the EU AI Act requires providers and deployers of AI systems to take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons operating AI systems on their behalf. The regulation does not specify hours of training, module formats, or assessment methods. It specifies outcomes. This is worth sitting with, because most L&D teams read regulatory language as a constraint when it is actually an invitation.
The regulation asks: do your people have sufficient literacy to interact with AI systems appropriately within their role? That question is entirely answerable through Instructional Design. The question of what "appropriate literacy" looks like for a procurement manager who reviews AI-generated supplier risk scores is different from what it looks like for an HR administrator using an AI-assisted CV screening tool. These are not the same learning problem, and a single generic module cannot address both.
The instructional implication is a shift from program-level thinking to role-level thinking. Before a single slide is designed, the learning design question is: what decisions does this person need to make, and what do they need to understand in order to make them correctly?
This is standard task analysis, applied to AI literacy. The AI Act does not require a compliance course. It requires that people can do something—specifically, that they can engage with AI systems with enough understanding to recognize risk, ask appropriate questions, and escalate when necessary. Instructional Designers know how to design for that. The regulatory framing should not distract from the craft.
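One way to operationalize that task analysis is a simple role-to-decision map, drafted before any content is built. The sketch below uses the roles already mentioned above; the structure and every field value are hypothetical examples, and a real map would come from interviewing the people in those roles.

```typescript
// A hypothetical role-to-decision map for AI literacy task analysis.
// Role names, decisions, and literacy requirements are illustrative.

interface RoleLiteracyProfile {
  role: string;
  aiSystemsUsed: string[];         // the AI tools this role actually touches
  keyDecisions: string[];          // decisions the role makes involving AI output
  requiredUnderstanding: string[]; // what they must grasp to decide correctly
}

const profiles: RoleLiteracyProfile[] = [
  {
    role: "Procurement manager",
    aiSystemsUsed: ["AI-generated supplier risk scores"],
    keyDecisions: ["Accept, challenge, or override a supplier risk score"],
    requiredUnderstanding: [
      "What data the scoring model draws on",
      "Known failure modes, and when to escalate",
    ],
  },
  {
    role: "HR administrator",
    aiSystemsUsed: ["AI-assisted CV screening tool"],
    keyDecisions: ["Whether to follow a ranked shortlist as produced"],
    requiredUnderstanding: [
      "Why CV screening is treated as a high-risk use case",
      "How to document a human review of the ranking",
    ],
  },
];
```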
Scenario Design: Putting Learners In The Decision, Not The Lecture
If Article 4 is an outcomes specification, then scenario-based design is the obvious delivery mechanism. The goal is not to teach the regulation; it is to build the judgment to act correctly under conditions the learner will actually encounter.
Effective scenario design for AI Act compliance starts with realistic workplace contexts. Not abstract descriptions of "a company using AI," but the specific situations your target learners face: the hiring manager who receives a ranked shortlist from an AI screening tool and has to decide whether to follow it; the customer service team leader whose AI system flags a customer interaction for review; the analyst who is asked to present AI-generated forecasts to a board without the model documentation to hand. Each of these is a decision point, not an information point. The scenario's job is to place the learner inside the decision—with enough context pressure that the choice feels real—and then reveal the consequences of different paths.
Branching is essential here, but branching done poorly is just multiple routes to the same end screen. The branches need to reflect the actual range of reasoning your learners bring to a situation. One branch for the learner who follows the AI output uncritically. One for the learner who escalates appropriately. One for the learner who recognizes a problem but handles it incorrectly—the most educationally valuable path, and the one most often omitted.
The error path is where learning happens. If a learner takes the wrong branch, they need to experience why it was wrong—not be told immediately, but experience the downstream consequence. A realistic follow-up: the complaint, the audit question, the moment a colleague pushes back. Then the reflection, tied directly to the decision they made.
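Here is a rough sketch of how those three branches and their consequence scenes might be modeled in a scenario engine. The structure and field names are hypothetical; the design point is that every choice carries its own downstream consequence and reflection, including the right-instinct-wrong-handling path.

```typescript
// A minimal sketch of a branching scenario node. The shape is
// hypothetical; what matters is that each choice has its own
// consequence scene and reflection prompt, including the
// "right instinct, wrong handling" path generic modules omit.

type BranchType =
  | "uncritical"
  | "escalates-correctly"
  | "right-instinct-wrong-handling";

interface Choice {
  label: string;            // what the learner chooses
  branch: BranchType;
  consequenceScene: string; // the downstream follow-up the learner experiences
  reflectionPrompt: string; // tied directly to the decision just made
}

interface ScenarioNode {
  context: string; // the realistic workplace situation
  choices: Choice[];
}

const shortlistScenario: ScenarioNode = {
  context:
    "An AI screening tool hands you a ranked shortlist of candidates. " +
    "The hiring deadline is tomorrow.",
  choices: [
    {
      label: "Forward the shortlist as-is",
      branch: "uncritical",
      consequenceScene: "A rejected candidate asks how the ranking was produced.",
      reflectionPrompt: "What did you assume about the tool's output?",
    },
    {
      label: "Review the ranking and flag the tool's use to compliance",
      branch: "escalates-correctly",
      consequenceScene: "Compliance confirms the tool's sanctioned purpose and records your review.",
      reflectionPrompt: "What made this the defensible path?",
    },
    {
      label: "Quietly re-rank the list yourself, without recording why",
      branch: "right-instinct-wrong-handling",
      consequenceScene: "An audit later finds no record of why the AI output was overridden.",
      reflectionPrompt: "You saw the problem. What did your handling leave exposed?",
    },
  ],
};
```

Even a structure this simple forces the design conversation a slide deck avoids: what actually happens after each choice?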
This requires more production time than a slide-based module. It also produces meaningfully different outcomes. Learners who practice decision-making in context are more likely to make correct decisions in context. That is not a design philosophy; it is what the transfer research predicts.
For AI Act programs specifically, the most productive scenario themes tend to cluster around a few core decision types: when to trust AI output and when to override; how to identify whether an AI system is being used within its sanctioned purpose; and how to escalate a concern without knowing the full technical picture. These are not knowledge questions. They are judgment questions, and they require judgment practice.
Measuring What The Regulation Actually Cares About
Completion rates are not a learning outcome. They are a participation metric. For many compliance programs, this has not mattered; the regulatory requirement was demonstrably met by evidence that an employee completed a module. Article 4 complicates this, because the outcome the regulation points toward is not completion. It is capability.
Assessment design for AI Act programs should therefore test application, not recall. A question that asks "what is the definition of a high-risk AI system?" tests memory. A question that presents a scenario—"Your procurement team wants to use an AI tool to score supplier contracts; what should you do before approving this?"—tests judgment. These are not equivalent, and assessments built from the first type will not produce evidence of the second.
From a design perspective, this means building assessment scenarios that are distinct from learning scenarios but parallel in structure. The learner should not recognize the assessment as a repeat of content they have already seen; they should encounter a situation they have not practiced specifically, and demonstrate that they can reason through it correctly.
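One practical way to enforce that separation during build is to tag learning and assessment scenarios against the same underlying competency while keeping the surface situations distinct, then check coverage automatically. The tagging scheme below is a hypothetical sketch; the competency IDs and scenario text are illustrative.

```typescript
// A hypothetical tagging scheme keeping assessment scenarios parallel
// in structure to learning scenarios but distinct in surface content.

interface ScenarioItem {
  competency: string; // the judgment being practiced or tested
  purpose: "learning" | "assessment";
  situation: string;  // the surface context, deliberately different per purpose
}

const items: ScenarioItem[] = [
  {
    competency: "escalate-unsanctioned-ai-use",
    purpose: "learning",
    situation: "A colleague pastes customer data into an unapproved chatbot.",
  },
  {
    competency: "escalate-unsanctioned-ai-use",
    purpose: "assessment",
    // Same judgment, unfamiliar surface: the learner must transfer, not recall.
    situation: "Your procurement team wants an AI tool to score supplier contracts.",
  },
];

// A simple check that every competency taught is also assessed on a
// situation the learner has not practiced specifically.
const taught = new Set(items.filter((i) => i.purpose === "learning").map((i) => i.competency));
const assessed = new Set(items.filter((i) => i.purpose === "assessment").map((i) => i.competency));
const gaps = [...taught].filter((c) => !assessed.has(c));
console.log(gaps.length === 0 ? "All competencies assessed" : `Missing assessments: ${gaps}`);
```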
For programs that need to demonstrate compliance, performance data on scenario-based assessments is substantially more defensible than a completion certificate. A record showing that a learner correctly identified and escalated a high-risk AI use case, under assessment conditions, is evidence of capability. A record showing they clicked through 12 slides and scored 80% on a recall quiz is evidence of attendance.
Instructional Designers should make this argument to their compliance and legal colleagues early. The evidence standard that L&D can produce, if the program is designed correctly, is actually stronger than what most organizations are currently generating.
The Documentation Layer L&D Keeps Ignoring
There is a design problem embedded in AI Act compliance programs that most L&D teams have not yet confronted: the audit trail. Regulatory compliance requires not just that training happened, but that the appropriate training happened for the appropriate people, and that there is a record of it. For programs built in standard LMS environments, this is often treated as an automatic output: the system logs completions, therefore the documentation exists.
This is insufficient for a few reasons. First, a completion log does not capture what was completed, only that something was. If the program is later questioned—by a regulator, an auditor, or an internal review—the documentation needs to show that the learning content was appropriate to the learner's role and the AI systems they work with. Generic modules logged in a generic LMS do not demonstrate this.
Second, if the program uses branching scenarios, the most valuable documentation is not just completion—it is pathway data. Which decisions did learners make? How many attempts did a learner require to pass assessment? Was a remedial pathway triggered? This information is evidence of genuine engagement with the learning, and it is almost never captured by default.
Designing for documentation is not a legal task. It is a design task. It means specifying, at the outset, what data the LMS or learning platform needs to capture, and ensuring the program architecture produces it. This is a conversation between Instructional Designers and LMS administrators that needs to happen before build, not after launch.
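If the platform supports xAPI, pathway data of this kind can be captured as statements. The sketch below shows one possible shape: the statement structure (actor, verb, object, result) follows the xAPI specification, but the activity IDs and extension keys are hypothetical placeholders that a real program would define in its own documented profile before build.

```typescript
// A sketch of an xAPI statement capturing a single scenario decision.
// The activity IDs and extension keys are hypothetical placeholders;
// only the statement shape follows the xAPI specification.

const decisionStatement = {
  actor: {
    objectType: "Agent",
    name: "Example Learner",
    mbox: "mailto:learner@example.com",
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/responded", // standard ADL verb
    display: { "en-US": "responded" },
  },
  object: {
    id: "https://example.com/xapi/activities/ai-act/shortlist-scenario/decision-1",
    definition: {
      name: { "en-US": "AI shortlist decision point" },
    },
  },
  result: {
    success: true,
    response: "escalated-to-compliance",
    extensions: {
      // Hypothetical extensions recording which branch was taken and
      // the attempt number — the pathway data an auditor can rely on.
      "https://example.com/xapi/ext/branch": "escalates-correctly",
      "https://example.com/xapi/ext/attempt": 1,
    },
  },
  timestamp: new Date().toISOString(),
};

// Sent to the LRS via a standard POST to /statements (omitted here).
console.log(JSON.stringify(decisionStatement, null, 2));
```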
What "Appropriate" Actually Means For Instructional Designers
The EU AI Act reaches for the word "appropriate" again and again. For legal teams, this ambiguity is a headache. For Instructional Designers, it is working space.
"Appropriate" AI literacy is not defined centrally because it cannot be. What is appropriate for a radiologist using an AI diagnostic tool is not appropriate for a warehouse operative whose shift scheduling is managed by an algorithm. The regulation is asking organizations to make a contextual judgment, and that judgment is fundamentally an Instructional Design problem: who needs to know what, in order to act how?
Organizations that treat Article 4 as a box to tick will build the cheapest module that satisfies the narrowest reading of the requirement. Organizations that read it as a design brief will build role-differentiated programs, grounded in realistic scenarios, assessed on demonstrated judgment, and documented in a way that holds up to scrutiny. The second approach takes more skill. It also produces training that actually works—which, in the long run, is the point.
The ambiguity in the regulation is not a reason to wait for clearer guidance. It is a reason to apply good Instructional Design practice and document the rationale. If the learning objective is clearly tied to a specific role, a specific set of AI interactions, and a specific standard of judgment (and if the assessment evidence demonstrates that learners can meet that standard), then the compliance case is strong. That is what Instructional Designers are trained to build. The AI Act just made it mandatory.