Onboarding Ends. The Development Need Doesn't.
There is a document that almost every new hire receives in their first week. It goes by different names, but the structure is always the same: 30 days to learn the product, 60 days to manage your first accounts, 90 days to be operating independently. The 30-60-90 plan is one of the most widely used onboarding tools in professional environments, and it is not a bad framework. The problem is not the document. The problem is what organizations assume once it is complete. They assume the person is onboarded.
I come from an L&D background, and I now lead a customer success (CS) function. That combination gives me an uncomfortable view of the same problem from both sides. On the L&D side, I understand why the event model persists: it is measurable, it is deliverable, and it gives the business something tangible to point to. On the CS side, I see what it costs. New team members who complete their 30-60-90 and are then left to navigate promotions, product changes, and increasingly complex accounts without any equivalent support structure. The onboarding ends. The development need does not.
In this article...
- The Metric That Does Not Stop At Day 90
- Why the Right Model Has Always Been Difficult To Deliver
- What AI Changes, Specifically
- Building The Proof Of Concept
- The Provocation For L&D
The Metric That Does Not Stop At Day 90
The pressure on L&D teams right now is significant. Executives want time-to-productivity and time-to-proficiency to come down. They want new hires contributing faster, ramping more smoothly, and staying longer. Those are legitimate commercial imperatives, and they are exactly the right things to measure.
But time-to-proficiency is not a day-90 metric. It recurs at every transition point in a professional's career. When someone is promoted, they enter a new level with new expectations and a new proficiency gap. When the product changes significantly, the whole team faces a version of the same gap. When a professional takes on a new type of account, a new market, or a new leadership responsibility, they are, functionally, onboarding again. The organization has a commercial interest in closing that gap every single time, not just in the first quarter of someone's employment.
The event model of onboarding is not just a learning design problem. It produces a sustained loss of performance that nobody formally measures, because we stopped counting after day 90.
Why the Right Model Has Always Been Difficult To Deliver
The alternative is what I would call perpetual onboarding: the recognition that development is a continuous cycle, and that the support infrastructure built for new hires should, in principle, apply to every meaningful transition a professional makes throughout their tenure.
Most L&D practitioners instinctively understand this. The reason it has not become the default model is not intellectual; it is operational. Delivering personalized, context-sensitive development support to every person on a team, at every stage of their career, at the moment they need it, is a human capacity problem. A manager cannot be a continuous coach for six people simultaneously, each at a different level, each facing different challenges. So organizations design programs for the average person at the average stage, deliver them on a schedule, and measure completion because completion is what can be counted.
The result is exactly what Josh Bersin's research has consistently shown: completion rates go up, performance outcomes do not follow. The learning infrastructure gets optimized for the metric that can be captured rather than the outcome the business actually cares about. I saw this from the L&D side for years. Sitting in a CS leadership role, I feel it differently. The gap between what the onboarding program promised and what my team actually needed was not a content problem or a budget problem. It was a model problem.
What AI Changes, Specifically
Artificial Intelligence (AI) does not fix the underinvestment problem in L&D. Anyone telling you it does is selling something. What AI does is remove the human bottleneck that has made the perpetual onboarding model operationally impossible at scale.
A well-designed AI coaching system can be present at the moment a professional is preparing for a high-stakes conversation with a client or senior stakeholder. It can respond differently to a question from a new starter and to the same question from a senior practitioner, because the support those two people need is fundamentally different. It can recognize when someone is navigating a context outside their previous experience and increase its scaffolding accordingly, without requiring a manager to notice and intervene. It can do all of this simultaneously, for an entire team, at any hour.
That is not AI replacing human development. It is AI making the right model operationally viable for the first time.
Building The Proof Of Concept
Earlier this year, my team and I put this to the test. During a company hackathon, we built an AI coaching agent called CSM 360: a perpetual onboarding system designed for customer success managers, from their first day in the role through to senior leadership.
The framework is grounded in Charles Jennings' 70-20-10 model and Bersin's capability academy research, but the design decision that matters most is simpler than any theoretical framework: the coach treats every significant transition as a new onboarding moment. A promotion is an onboarding moment. A major product release is an onboarding moment. A new enterprise account after years of mid-market experience is an onboarding moment. The 30-60-90 structure covers the first loop of the cycle, but the cycle does not end.
The coach differentiates by level, drawing on our internal CS skills matrix to adjust not just the depth of its responses but the type of support it offers. A new starter asking about an at-risk account receives scaffolding, process guidance, and reassurance that escalating is the right call. A senior CSM asking the same question gets challenged to diagnose the root cause before any framework is offered. Same question, entirely different response, because the development need is entirely different.
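The level-differentiation idea can be sketched in a few lines. The following is a minimal illustration, not the CSM 360 implementation: the names `SKILLS_MATRIX` and `coaching_response`, the level labels, and the response text are all hypothetical stand-ins for whatever an internal skills matrix would actually contain.

```python
# Hypothetical sketch: the same question routes to a different support style
# depending on the practitioner's level. All names and labels are illustrative.

SKILLS_MATRIX = {
    "new_starter": {
        "support": "scaffold",
        "opener": "Escalating is the right call. Here is the process, step by step.",
    },
    "mid_level": {
        "support": "guide",
        "opener": "Here is a framework; adapt it to the account's context.",
    },
    "senior": {
        "support": "challenge",
        "opener": "Before any framework: what do you think the root cause is?",
    },
}

def coaching_response(level: str, question: str) -> str:
    """Return a level-appropriate coaching opener for the same question."""
    # Unknown levels fall back to mid-level support rather than failing.
    profile = SKILLS_MATRIX.get(level, SKILLS_MATRIX["mid_level"])
    return f"[{profile['support']}] {profile['opener']}"

# Same question, entirely different development need, entirely different response:
print(coaching_response("new_starter", "My account looks at risk. What do I do?"))
print(coaching_response("senior", "My account looks at risk. What do I do?"))
```

In a real agent the routing would be done by the model against a richer skills matrix, but the design decision is the same one described above: the level, not the question, determines the type of support.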
We built this in a hackathon, with a small team. The point is not the specific agent; it is that the concept is operationally viable, and a small team with a deadline was enough to prove it.
The Provocation For L&D
The conversation about AI in L&D has spent too long focused on content generation and course automation. Those are real applications, but they are optimizations of the existing model, making the event-based approach slightly faster and slightly cheaper. They do not change what the model is capable of.
Perpetual onboarding, supported by intelligent performance tools embedded in the flow of work, is a different model entirely. It is one that finally aligns what L&D builds with what the business actually measures: not completion, but capability, and not at onboarding, but continuously. The professionals I manage do not stop developing at day 90. The executives I report to do not stop caring about time-to-proficiency at day 90. The question L&D needs to sit with is why the support infrastructure stops there.
If you work in L&D and own any part of the onboarding experience, or if you are a leader who genuinely cares about enablement rather than just its optics, the starting point is simpler than building an agent from scratch. Take the AI tool your organization already has access to. Stop using it to polish emails. Start using it to close the proficiency gaps that open up every time someone on your team transitions, gets promoted, or faces a challenge their onboarding program never prepared them for. The infrastructure for perpetual onboarding is already in your hands. The only thing missing is the decision to use it that way.