How do you define “learning impact” beyond completion and satisfaction scores?
Completion and satisfaction scores are not part of my definition of learning impact. They are supportive data points that can help when measuring trainer skill, engagement, and learning operations systems and costs. But real learning impact can only be measured against the success drivers outlined by the business. Typically, these come from the department that generates revenue; that department should have clear data points and benchmarks for which behaviors create return. Partnering with the business is essential to defining learning impact. If the business can't give you those success drivers in the form of data points, then your automatic reply is that you can't deliver a learning solution that doesn't have a benchmark. It is irresponsible to spend company time and money delivering a learning experience that has no clear target.
Which business metrics matter most when tying learning to performance?
This is a tricky question, because every business has many nuanced ways of measuring performance, and it's not easy to figure out L+D's "measurable place" within those metrics.
We know which business metrics matter most: the ones that generate revenue! L+D doesn't generate revenue directly and isn't typically billable to a client, so for L+D it is about finding how learning solutions fit into those metrics. I recommend establishing small learning solution systems that, when layered, can be seen as the sum of their parts. That sum then displays the whole picture, and it should align with revenue-generating business objectives. For example, if your business has a quality metric and a production metric that are the truest indicators of performance, then one or both of those are the end points L+D should be expected to contribute to, not your starting point.
There is a reason I highlight that this is not your starting point. One L+D team can't be expected to boil the ocean, that is, to move the entire performance outcomes of an organization, especially when they aren't the true operators behind those metrics. As an L+D team, it's unreasonable for us, or the wider organization, to expect that we can have an impact on a performance metric that outweighs the impact a revenue-generating role can have on that same metric.
Going back to the example: if quality is the key metric L+D is going to help move, start by creating a partnership with the teams who contribute to that quality score and understand which small, key behaviors could be improved to increase the overall score. Again, don't go in trying to "overhaul quality." L+D teams should be using quality analytics to find where errors occur, no matter how small, and then create learning interventions to reduce those errors.
Typically, revenue-generating operators are moving fast and focused on the BIG error categories. L+D has an opportunity to sweep up the small stuff, turn it around quickly to make it better, and collectively that can add up big.
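The approach above, mining quality analytics for small-but-frequent errors that the operators are too busy to chase, can be sketched in a few lines. This is a minimal illustration only: the error categories, severity labels, and log format are hypothetical, not from any particular quality system.

```python
from collections import Counter

# Hypothetical quality-error log: (error_category, severity) pairs
# pulled from whatever analytics the business already collects.
error_log = [
    ("missing_signature", "minor"), ("wrong_template", "minor"),
    ("missing_signature", "minor"), ("data_breach", "major"),
    ("missing_signature", "minor"), ("wrong_template", "minor"),
]

# Operators focus on the "major" categories; L+D can target the
# frequent "minor" ones with quick learning interventions.
minor_counts = Counter(cat for cat, sev in error_log if sev == "minor")

# Rank minor errors by frequency to choose intervention targets.
targets = [cat for cat, _ in minor_counts.most_common()]
```

The point of the sketch is the filter-then-rank step: each individual minor error looks too small to matter, but counting them across the whole log shows where a short learning intervention would pay off collectively.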
How can AI help surface patterns between learning activity and employee behavior?
Let's look at the example from the last question: I pointed out how narrowing your search to multiple small quality errors can provide real, measurable opportunities for L+D. There are AI tools that can locate those error categories and identify patterns your L+D teams may not have visibility into. AI agents that understand these error patterns can then be employed as a "guide by your side," helping employees see their small errors before submitting their work. A workforce that receives objective feedback in real time upskills faster, changes behavior for long-term results, and creates visibility for performance analytics.
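The "guide by your side" idea, surfacing small errors before work is submitted, can be illustrated with a toy pre-submission checker. Everything here is hypothetical: the check names, the rules, and the `presubmit_feedback` function are illustrative stand-ins for checks a real system would derive from the error patterns found in quality analytics.

```python
import re

# Hypothetical checks derived from recurring small-error patterns;
# a real AI agent would learn these from quality analytics.
CHECKS = {
    "missing_signature": lambda doc: "Signature:" not in doc,
    "placeholder_left_in": lambda doc: bool(re.search(r"\[TODO[^\]]*\]", doc)),
}

def presubmit_feedback(doc: str) -> list[str]:
    """Return the names of checks the document fails, so the
    employee sees small errors before submitting the work."""
    return [name for name, failed in CHECKS.items() if failed(doc)]
```

The design choice worth noting is that the feedback is objective and immediate: the employee sees a named, specific gap at the moment of submission, which is exactly the moment a small behavior change is cheapest to make.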
What are the biggest challenges in proving ROI for leadership development today?
That is a loaded question! Ultimately, leadership development is a broad category, and I truly believe we can't measure its ROI in total. Said more simply, the biggest challenge is that it's too big to measure. However, going back to the theme, you can measure leadership development in smaller categories. The challenge then is getting your organization to decide which of those categories matter and how to prioritize them.
How can learning teams use AI insights without losing the human side of development?
I think AI insights actually promote the human side of development. AI will be used in learning to provide objective feedback, quickly. Taking on this task removes one of the greatest burdens of learning professionals: informing employees that there is a performance gap. I can't tell you how often I've witnessed employees being sent to L+D for development with no idea why. Having AI handle the objective feedback on performance opens up space for L+D professionals to do more of what they are good at: coaching, connecting, reframing problems, and leading people on a path to solutions.