Moving From Static eLearning And AI-Generated Content To Competency-Driven Learning Experiences
With more than 25 years of experience in Learning and Development, Dimitris Tolis is the Founder and CEO of Human Asset, where he has led the design of custom eLearning, learning academies, and AI-powered learning solutions for European agencies such as EUAA, CEPOL, and EUDA, and for international organizations such as the Council of Europe, the ESM, and the United Nations ITU. As a Senior Instructional Designer, Certified Executive Coach, and AI Researcher at the University of Turku, Finland, he brings together Instructional Design, neuroscience, and educational technology to create learning experiences that are more human-centered, adaptive, and practice-based. Through initiatives such as gAImify Hub, he is helping shift the conversation from faster content production to more meaningful learning design. Today, he speaks with us about the opportunities, risks, and future of AI in workplace learning.
Based on your experience, what are the risks of current AI use in learning, and how can they hinder meaningful L&D journeys?
One of the biggest risks is that AI is solving the wrong problem in learning. It helps us create content faster, but speed alone does not improve learning. Instead, it can lead to content mediocrity at scale: more slides, quizzes, and modules, but with weaker instructional depth, less originality, and a poorer learner experience. It can also create what I call a "little God" effect: the illusion that because content can be generated instantly, meaningful learning has also been designed. Without strong Instructional Design, this quickly leads to content inflation and lower quality.
A second risk is cognitive offloading combined with overdependence on AI. When learners receive instant answers, simplified summaries, and predictable feedback, they may engage less deeply. Critical thinking, reflection, and judgment can weaken over time, a pattern we are already beginning to observe.
Another serious risk is AI hallucination. Large language models can produce outputs that sound fluent, confident, and credible, even when they are inaccurate, misleading, or completely false. In a learning context, that is especially dangerous, because learners may trust the answer simply because it is well written. If this is combined with weak review processes, poor prompts, or no instructional guardrails, AI can spread confusion rather than support understanding.
So meaningful L&D journeys can be hindered when AI makes learning faster but also flatter.
My view is optimistic, though: these are not reasons to step back from AI. They are reasons to design it better.
What are some of the most overlooked opportunities for AI in learning, and why should organizations shift from content generation to meaningful learning experience design when implementing this emerging technology?
One of the most overlooked opportunities is that AI can help us move from information delivery to capability building. Most organisations still use AI mainly to generate content faster. However, the real value lies in designing learning experiences that are more adaptive, more contextual, and more practice-based.
A good example is the role of adaptive quizzes. Too often, quizzes simply check recall. With AI, they can become part of the learning process itself. The level of challenge can shift dynamically, weaker areas can be reinforced, and custom feedback can guide the learner forward. That makes quiz practice more developmental and much closer to real learning.
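To make the adaptivity described above concrete, here is a minimal, illustrative sketch of the underlying logic: difficulty shifts with performance and weak topics are queued for reinforcement. The class and method names are assumptions for illustration, not the implementation of any specific platform.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptiveQuiz:
    """Toy adaptive quiz: difficulty rises after correct answers,
    drops after mistakes, and weak topics are queued for review."""
    difficulty: int = 1                  # 1 (easy) .. 5 (hard)
    weak_topics: list = field(default_factory=list)

    def record_answer(self, topic: str, correct: bool) -> None:
        if correct:
            self.difficulty = min(5, self.difficulty + 1)
            if topic in self.weak_topics:
                self.weak_topics.remove(topic)   # topic recovered
        else:
            self.difficulty = max(1, self.difficulty - 1)
            if topic not in self.weak_topics:
                self.weak_topics.append(topic)   # reinforce later

    def next_topic(self, syllabus: list) -> str:
        # Prioritise reinforcing weak areas before introducing new material.
        return self.weak_topics[0] if self.weak_topics else syllabus[0]
```

Real systems would of course use richer learner models, but even this simple loop shows how a quiz can become part of the learning process rather than a recall check.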
Another major opportunity is open-ended practice with personalised feedback. Many important workplace skills, such as interviewing, giving feedback, coaching, and handling conflict, cannot be developed through multiple-choice questions alone. Learners need to respond in their own words, make judgements, and reflect on their choices. AI can support this through AI coaching personas that provide more targeted feedback on clarity, reasoning, empathy, tone, and intent.
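One hypothetical way to ground such a coaching persona is to build its prompt around explicit criteria rather than asking for generic feedback. The function below is a sketch under that assumption; the names and structure are illustrative, not a product API.

```python
# Criteria taken from the discussion above; everything else is illustrative.
CRITERIA = ["clarity", "reasoning", "empathy", "tone", "intent"]

def coaching_prompt(persona: str, scenario: str, learner_response: str) -> str:
    """Assemble a persona-grounded prompt that asks a language model for
    criterion-by-criterion feedback instead of a vague summary."""
    rubric = "\n".join(f"- {c}: one concrete observation and one suggestion"
                       for c in CRITERIA)
    return (f"You are {persona}.\n"
            f"Scenario: {scenario}\n"
            f"Learner response: {learner_response}\n"
            f"Give feedback on each criterion:\n{rubric}")
```

Framing the request this way nudges the model toward targeted, developmental feedback on each named skill.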
This matters because meaningful learning is not created by making things easier. It is created by offering the right challenge with the right support. Aristotle's insight still holds true: learning requires effort. Real learning and development happen when learners are challenged. And Bloom's 2 Sigma research reminds us of the value of personalised guidance. AI gives us a chance to bring both together at scale for the first time in human history.
Finally, AI creates an important opportunity for customisation. Instead of one-size-fits-all training, learning can be shaped around the organisation, the role, the competencies, and the context. That is why organisations should shift from content generation to meaningful learning experience design.
What is the importance of human-centered AI and human-in-the-loop approaches when building competency-driven learning experiences?
Hallucinations, the black-box nature of LLMs, and what I often call the "prompt and pray" approach are exactly what make AI risky in learning. If we simply ask a model to generate content, feedback, or assessment without strong structure, we may get outputs that sound fluent and convincing, but are not necessarily accurate, relevant, or pedagogically sound.
That is why human-centred AI and human-in-the-loop are so important, especially in competency-driven learning. They help move AI from improvisation to disciplined design.
With the right architecture, we can keep AI focused through specific competency frameworks, grading rubrics, clear instructional goals, guardrails, and moderation logic, and of course, human review and approval. This makes a major difference. Instead of letting AI wander, we guide it toward what matters: the skills, behaviours, and standards we actually want learners to develop.
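The review flow described above, automated guardrails followed by human approval, can be sketched in a few lines. This is a deliberately simplified illustration of the pattern, assuming hypothetical names; real moderation logic would be far more sophisticated.

```python
# Illustrative sketch of a human-in-the-loop review pipeline.
# BANNED_PHRASES, moderate, and review_pipeline are assumed names.

BANNED_PHRASES = ("guaranteed", "always correct")

def moderate(draft_feedback: str) -> bool:
    """Automated guardrail: reject drafts containing overconfident claims."""
    return not any(p in draft_feedback.lower() for p in BANNED_PHRASES)

def review_pipeline(draft_feedback, human_approve):
    """AI draft -> automated moderation -> human review and approval.
    Returns the approved text, or None if it was blocked at either stage."""
    if not moderate(draft_feedback):
        return None                      # blocked by the guardrail
    return draft_feedback if human_approve(draft_feedback) else None
```

The point of the pattern is that nothing reaches the learner without passing both the automated checks and a human reviewer.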
In practical terms, that means AI can support the experience by generating practice, feedback, and adaptation, while humans remain responsible for quality, alignment, and trust. The result is a learning environment that is more reliable, more transparent, and more developmentally meaningful.
For me, this is the real value of a human-centred approach: it makes AI more trustworthy, but also more useful. It allows us to benefit from speed, responsiveness, and personalisation without losing pedagogical integrity. In competency-driven learning, that balance is essential.
Can you describe a representative AI-powered learning transformation use case from your work?
Yes. A representative example from our work involves a major law enforcement academy in Europe, where we are co-designing an AI-powered Train-the-Trainer capacity-building program focused on helping trainers strengthen their instructional design and delivery skills.
What makes this case especially meaningful is that the course is designed around a dual purpose: to reduce AI risks, such as hallucinations, overreliance, weak judgment, and poor instructional use, and at the same time to unlock AI opportunities for more personalised, adaptive, and practice-based learning.
The transformation is not about adding AI on top of a traditional course. It is about redesigning the learning experience itself. We are using AI-assisted course design with structured templates, customisation to the academy's context and trainer roles, adaptive quizzes that support practice rather than simple recall, open-ended scenarios with coaching-style feedback, and AI avatar simulations that allow trainers to rehearse realistic conversations and facilitation moments. We also use competency frameworks, rubrics, and human-in-the-loop review to keep the experience trustworthy and aligned with the academy's standards.
What I find most exciting is that this kind of project moves AI from content generation to capability building. For me, that is a very strong example of AI-powered learning transformation: not faster content, but better learning design.
Is there a recent development project, product launch, or another initiative you'd like to share with our readers?
Yes, I would be very glad to share gAImify Hub, one of our most important recent initiatives at Human Asset.
gAImify Hub is our AI-powered, gamified learning platform designed to help organisations create learning that is more adaptive, more practice-based, and more closely connected to real workplace performance. What makes it especially important to us is that it reflects a very deliberate philosophy: AI should not simply help us produce content faster. It should help us design better learning experiences.
The platform brings together AI-assisted course design, contextual customisation around the organisation and the role, adaptive quizzes, open-ended scenarios with coaching-style feedback, real-time AI avatar simulations, and gamified learning journeys. So instead of relying on static eLearning alone, organisations can create experiences where learners think, respond, practise, reflect, and improve.
A key part of the innovation is also the human-in-the-loop approach. AI supports the design and the learner experience, but learning professionals remain in control of review, refinement, and approval. For us, that is essential. It keeps the experience more trustworthy, more relevant, and more aligned with real learning goals.
Just as importantly, gAImify Hub has been designed with a strong emphasis on ethical AI and compliance. That includes responsible use of AI, clear human oversight, and attention to requirements around data protection, trust, and governance, including GDPR and broader legal readiness. We see this as a necessary foundation for innovation in learning, not as an afterthought.
These innovations can be applied in two ways: build new adaptive learning experiences with gAImify Hub or upgrade existing SCORM courses with inSCORM AI.
What do you think the future holds for AI in adaptive learning academies?
I believe the future of AI in adaptive learning academies is extremely promising, but it will depend on the choices we make now. The future of AI in education will not be decided by who produces the most content, but by who designs the most meaningful learning.
The strongest academies will use AI to move beyond static courses and create learning ecosystems that are more adaptive, more practice-based, and more connected to real capability development. They will not simply deliver information. They will help learners think, practise, reflect, receive feedback, and improve over time.
For me, one principle is essential: AI should make learning more challenging and engaging, not easier in the wrong way. It should not reduce effort or encourage passive dependence. It should help create the right kind of challenge, with the right support, at the right moment. That is where adaptive learning becomes truly powerful.
I also believe academies will become much more intelligent in how they respond to learners. We will see stronger use of adaptive assessment, open-ended scenarios, simulation-based practice, and feedback loops that make development more visible and more personalised.
At the same time, the best academies will remain deeply human-centred. They will combine AI with strong pedagogical design, ethical guardrails, and human judgment.
So, I am optimistic. I think AI gives academies a real opportunity to evolve from content libraries into living environments for growth, reflection, and performance. That, to me, is the more inspiring future.
Wrapping Up
Thanks so much to Dimitris Tolis for sharing his insights on the potential risks and opportunities of using AI to create personalized, adaptive learning experiences. If you'd like to delve deeper into this topic, check out Human Asset's guide, AI in Workplace Learning: From Content Generation to Meaningful Learning Design.