Bright L&D Future: From Streetlights To Spotlights



Find Your L&D Measurement Spotlights: Start With The Business Objective

In the previous article in this series, we explored the streetlight effect through the old story of a drunkard searching for his keys under the streetlight instead of where he lost them. L&D measurement often suffers from its own streetlight effect: measuring where it can instead of where it should.

It is important to keep in mind that we measure and evaluate learning for different reasons. You may want to continuously improve your programs, prove compliance, or measure the effect (including ROI). Know your reason before you start measuring!

So, how do we escape the streetlight’s spell in L&D? The first step is to change where we start our search. Instead of designing a training and then later asking, “Okay, how do we measure its impact?”, flip the script: begin with the end in mind. Identify what business outcome you’re trying to achieve, and let that drive both the training design and the measurement plan.

Building A Data Strategy Backward

Starting with the business goal and working backward might sound obvious to some, but it represents a major shift. Astonishingly, less than 4% of companies say they design learning programs based on specific, defined metrics up front [1]. The remaining 96%? Many create programs based on perceived needs or requests, deliver the training, and only then think about evaluation (if at all). By not baking measurement into the design phase, L&D teams “have no way to measure their efforts other than the very basics,” hence the overreliance on those easy post-hoc metrics [1].

Starting with the business objective means clarifying what success looks like in organizational terms. For example, if the business aims to reduce safety incidents by 20%, that is your north star. With that in focus, you can work backward:

  1. Who can reduce safety incidents, directly or indirectly? (You may need to pick the target audience with the most significant impact, as you can’t serve everyone.)
  2. What behaviors need to change to reach that 20% reduction?
  3. Which employees (audience) need to adopt those behaviors?
  4. What’s preventing them currently (skill gaps, knowledge, motivation, process issues)?
  5. Only then decide if training is part of the solution, and if so, design the intervention to target those behaviors.
  6. Crucially, you also pinpoint Key Performance Indicators (KPIs) up front (in this case, the safety incident rate) and plan to track them. Your measurement approach might involve gathering baseline safety data, then comparing it after training (and perhaps against a control group or a trend line) to see if the needle moved; a minimal sketch of that comparison follows this list. You might also plan for on-the-job observations or assessments to see whether employees follow the new safety procedures (a direct behavior measure).
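As a minimal sketch of that baseline-versus-post comparison, the KPI tracking could look like the snippet below. All incident counts and hours worked are hypothetical placeholders, and the normalization (incidents per 200,000 hours) is one common convention, not a requirement:

```python
# Minimal sketch: tracking a safety-incident KPI against a 20% reduction target.
# All figures below are hypothetical placeholders.

def incident_rate(incidents: int, hours_worked: float) -> float:
    """Incidents per 200,000 hours worked (a common OSHA-style normalization)."""
    return incidents / hours_worked * 200_000

baseline_rate = incident_rate(incidents=48, hours_worked=1_200_000)
post_training_rate = incident_rate(incidents=35, hours_worked=1_150_000)

reduction = (baseline_rate - post_training_rate) / baseline_rate
target = 0.20  # the business objective: a 20% reduction

print(f"Baseline rate:      {baseline_rate:.2f}")
print(f"Post-training rate: {post_training_rate:.2f}")
print(f"Reduction achieved: {reduction:.1%} (target: {target:.0%})")
print("Target met." if reduction >= target else "Target not yet met.")
```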

This approach is sometimes called “backward design.” It ensures that training is not a shot in the dark. In fact, it might reveal that training isn’t the right solution at all. Perhaps the root cause of the problem is a broken process, a lack of proper tools, or an incentive system that rewards the wrong behaviors. In those cases, the solution might be something outside traditional training (e.g., fixing the process or providing job aids). By starting with the business goal and a thorough needs analysis, L&D can avoid wasting effort on training programs that shine light in the wrong place.

Alignment With The Business

Recent research from the Association for Talent Development (ATD) found that only 43% of talent development professionals say their business and learning goals are aligned [2].

When L&D does this business-aligned design, measurement becomes much more straightforward. You set out clear targets (the KPIs or behavior changes) and gather data on those targets. You’re not searching aimlessly; you have a map that points you to the park where the keys were lost, even if it’s dark at first.

Over time, this practice also builds credibility. Business leaders see L&D focusing on outcomes that leaders care about (for example, sales growth, quality improvement, turnover reduction) rather than reporting about how many employees attended a course or viewed a resource. And when a training doesn’t achieve the desired outcome, it’s an opportunity to learn and adjust, rather than a reason to hide behind vanity metrics.

Measurement should be about learning what works and what doesn’t, not just proving success. When L&D focuses on what happens after the learning event to ensure the desired outcome, it shifts from a cost center under the streetlamp to a strategic partner, illuminating data-driven insights the business can use to make decisions.

Frameworks And Models To Guide L&D Measurement: Kirkpatrick, ROI, And LTEM

Fortunately, L&D professionals aren’t entirely navigating in the dark. There are established models and frameworks for training evaluation that act like signposts (or maybe different kinds of lanterns) to guide our measurement efforts [3]. Three of the major ones are Kirkpatrick’s four levels, the Phillips ROI model, and the Learning-Transfer Evaluation Model (LTEM). Each offers a lens on what to measure, and together they push us to go beyond the easy metrics.

Kirkpatrick’s four-level model of evaluation is the most well-known and well-documented, so I’m not going to spend time on it here. The challenge I’ve seen with the model is in its practical implementation in workplace learning: L&D starts with level 1 evaluation and often gets stuck there. Even when it reaches level 2 (learning), measurement is often about short-term recall (or worse, rote memorization during a course).

Jack Phillips, through the ROI Institute, added a fifth level, ROI, on top of Kirkpatrick’s model. ROI (Return On Investment) essentially asks: was the training worth it financially? The Phillips model involves calculating the monetary benefits of the training and comparing them to the costs, yielding an ROI percentage or ratio [4]. For example, if a leadership development program cost $100,000 and led to an estimated $300,000 in improved productivity or sales, the ROI would be 200%. This appeals to executives because it speaks the language of finance.

Calculating ROI for every project can be tricky and sometimes contentious: isolating the training’s effect in dollar terms involves some assumptions. Phillips advocates techniques like converting improvement metrics to money and even asking participants to estimate how much of the improvement was due to training (and then discounting that estimate for optimism). The most important takeaway for me is the emphasis that we ultimately care about outcomes, not just activity. The ROI Institute now also offers TDRp (Talent Development Reporting principles), a standard library of measures. Check it out [5]!
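To make the arithmetic concrete, here is a minimal sketch of a Phillips-style calculation that includes the participant-estimate adjustment described above. The dollar figures, attribution, and confidence values are hypothetical:

```python
# Sketch of a Phillips-style ROI calculation with participant-estimate isolation.
# All inputs are hypothetical illustrations, not real program data.

program_cost = 100_000       # fully loaded cost of the program
measured_benefit = 300_000   # estimated monetary value of the improvement

# Isolation step: participants estimate how much of the improvement was due
# to training, and that estimate is then discounted by their confidence.
attribution = 0.60           # "60% of the improvement came from training"
confidence = 0.80            # "we are 80% confident in that estimate"

adjusted_benefit = measured_benefit * attribution * confidence

roi_percent = (adjusted_benefit - program_cost) / program_cost * 100
print(f"Adjusted benefit: ${adjusted_benefit:,.0f}")
print(f"ROI: {roi_percent:.0f}%")
```

Without the attribution and confidence discounts, the same formula reproduces the 200% figure from the leadership program example above.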

Both Kirkpatrick and Phillips highlight a key point: training evaluation isn’t complete until you’ve looked at the impact on the job and the organization. Or put another way, did it change behavior, and did that matter to the business?

The Learning-Transfer Evaluation Model

In the last five years, I’ve been implementing a newer model: the Learning-Transfer Evaluation Model [6]. LTEM was developed by Will Thalheimer in response to the shortcomings he saw in common measurement practices. It’s an eight-tier model that explicitly focuses on learning transfer, meaning: are people actually using what they learned?

The lowest tiers of LTEM (tiers 1 and 2) cover things like attendance and participation: basically, did people show up or complete the learning activity? For example, we’ve been measuring engagement (defined as extended focus on a task) at tier 2 through three components: physical (what learners do), emotional (how they feel or connect), and cognitive (how much they are challenged and reflect). Tier 3 is learner perceptions, just like Kirkpatrick level 1, but with LTEM we implemented a new set of questions that are performance-focused and revolve around behavior drivers (MOJO: motivation, opportunity, job capabilities, and outcome).
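As an illustration only (the items and scores below are invented, not our actual instrument), rolling survey responses up into those three engagement components might look like this:

```python
# Sketch: rolling up hypothetical survey items into the three engagement
# components described above. Item groupings and scores are made up.

from statistics import mean

responses = {
    "physical":  [4, 5, 4],   # e.g., items about actions taken during the course
    "emotional": [3, 4, 4],   # e.g., items about connection to the content
    "cognitive": [5, 4, 3],   # e.g., items about challenge and reflection
}

component_scores = {name: mean(scores) for name, scores in responses.items()}
overall = mean(component_scores.values())

for name, score in component_scores.items():
    print(f"{name:>9}: {score:.2f} / 5")
print(f"  overall: {overall:.2f} / 5")
```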

Tiers 4-6 examine what was learned in a more substantive way, from simple retention of facts up to skill demonstration in realistic scenarios (task execution). Still, these are often measured in a training context (quizzes, simulations): important, but not yet the real world. Tier 7 is where the magic happens: it measures learning transfer. Are learners performing correctly on the job [7]?

Behavior Change Does Not Happen By Chance

LTEM tier 7 corresponds to behavior change on the job, similar to Kirkpatrick’s level 3, but with an emphasis on directly assessing performance in the work environment. Finally, tier 8 looks at the effects of that improved performance on broader results—basically the organizational impact, akin to Kirkpatrick’s level 4 (and even beyond, to ripple effects on colleagues or customers).
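For quick reference, the eight tiers can be summarized as a simple lookup. The labels below paraphrase Thalheimer’s published model; see reference [6] for the full definitions:

```python
# The eight LTEM tiers, summarized as a lookup table (paraphrased labels).

LTEM_TIERS = {
    1: "Attendance",
    2: "Activity (including engagement)",
    3: "Learner perceptions",
    4: "Knowledge (facts and terminology)",
    5: "Decision-making competence (realistic scenarios)",
    6: "Task competence (realistic task execution)",
    7: "Transfer (performance on the job)",
    8: "Effects of transfer (impact on the organization and beyond)",
}

for tier, label in LTEM_TIERS.items():
    print(f"Tier {tier}: {label}")
```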

One of the reasons we chose LTEM is its nuanced view of, and messaging about, what matters: it puts a spotlight on the fact that training value comes from what happens after the training. Along with the backward design approach mentioned earlier, this model provides practical guidance for all L&D roles to make a difference. More on that in the next article.

Isolating The Training Impact: L&D Measurement

One of the top barriers mentioned in the ATD survey is that L&D professionals feel it is too difficult to isolate the impact of training. They are not wrong. And this is why I strongly recommend not only measuring but also designing solutions backward: start with the business goal and desired payoff (or other effects indirectly tied to key metrics), then the supporting performance goals, then the audience who can make it happen, and finally the behaviors. If there is no behavior change, there is no impact.
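One practical way to approximate that isolation, when a comparable untrained group exists, is a difference-in-differences style comparison. Below is a minimal sketch with hypothetical KPI values; it is one option among several (control groups, trend lines, participant estimates), not the definitive method:

```python
# Sketch: difference-in-differences on a business KPI to estimate the portion
# of change attributable to training. All values are hypothetical.

# KPI measured before and after the intervention, for a trained group
# and a comparable untrained group.
trained_before, trained_after = 72.0, 85.0
control_before, control_after = 71.0, 76.0

trained_change = trained_after - trained_before   # 13.0
control_change = control_after - control_before   # 5.0 (background trend)

# Subtracting the control group's change removes the shared background trend.
estimated_training_effect = trained_change - control_change
print(f"Estimated training effect on the KPI: {estimated_training_effect:+.1f}")
```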

No matter what measurement model or framework you use, applying this backward chain from the business goal will make it easier to isolate learning impact. But what about the lack of time, resources, and expertise to do this at scale? In the next and final article, we’ll look at how AI can help and how different L&D roles can benefit.

References:

[1] Measuring Learning’s Impact

[2] ATD Research: Organizations Struggle With Measuring the Impact of Training

[3] Model vs Framework: Understand How Each of Them Work

[4] ROI Methodology

[5] What Role Does TDRp Play in the Measurement Space?

[6] Beyond Kirkpatrick: 3 Approaches to Evaluating eLearning

[7] Measuring Learning: Asking The Right Questions


