Simulation Learning: Optimizing Behavior Change at Scale

Written by ETU | January 11, 2022

In a recent survey of 400+ L&D leaders, 95 percent of those using simulation learning reported positive learner impact and behavior change, and success in achieving organizational outcomes.

Executive summary

While online simulations have been around for over two decades, an increasing number of training professionals have been rediscovering learning simulations as an efficient, learner-embraced L&D delivery tool. However, as today’s learning leaders evaluate these innovations in training, they still have the same core questions that came up early on:

  • Are behavioral simulations effective in delivering against business goals?
  • Can they be easily and cost efficiently adapted for new uses and applications?
  • What are the considerations and components necessary for simulations to be measurably effective?

A 2021 survey conducted by Chief Learning Officer, combined with research from LinkedIn Learning and Accenture, suggests it’s time to take a second look: there is now ample evidence of learning simulations’ ability to deliver meaningful, engaging, impactful learning content to virtually any targeted audience. The survey also made one thing clear: to be successful, the right measurement and reporting tools must be incorporated into the simulation structure from the beginning. This summary of the survey results revisits the original core questions and asks whether the new generation of learning simulations has what it takes to address them. Let’s loop back to the beginning.

Are learning simulations measurably effective?

Learning simulations appear to be highly effective — in fact, among the most effective of all training delivery modalities — but only when engineered from the beginning with robust measurement components that are aligned with business objectives.

Effectiveness has always been a major concern for L&D initiatives. In the survey conducted by Chief Learning Officer, which tapped professionals who direct their organization’s learning and development initiatives, 95% of respondents using learning simulations reported positive learner impact and success in achieving organizational outcomes. What makes them so effective? Three things: users can see the results of their choices and actions in real-life situations; feedback is given in real time on actual behaviors measured in specific applications; and learners take more direct control of their training environment while being immersed in realistic scenarios that are emotional, memorable and engaging.

Historically, simulation training has been used most regularly in mid- to large-size companies, public and private, in education, health care and technology. In those environments simulations were found to generate stronger retention rates and were principally used to improve skills application. The practicality of letting learners walk through scenarios free of the real-world consequences of failure has been simulation’s inherent benefit for learners and instructors for years.

Among current organizations using simulations to deliver learning, 55% use them when employees need to learn and retain a critical skill or knowledge and 95% believe simulations have a strong positive impact on their company’s business outcomes.

However, learning simulations’ traditional downside has been in metrics and measurement, with nearly three-quarters (71%) of organizations using soft, qualitative metrics (e.g. employee satisfaction) as key measures of effectiveness. In addition, fewer than 10% of organizations using simulation training have implemented quantitative or hard metrics to measure their effectiveness. For learning simulations to lift L&D’s impact, a conscious effort will be needed to ensure they measure the same key workforce behaviors and KPIs that the organization’s C-suite executives evaluate.

Traditional high-consequence, mission-critical skills have always required the kind of rich interactivity available through learning simulations, and simulation learning has always been a highly effective content delivery method when done correctly. The reason? Effective programs incorporate the right metrics and measurement into the design and build a solid reporting procedure that both encourages use and boosts effectiveness. However, connecting learning simulations to business impact goes beyond simply measuring platform usage and learner satisfaction scores. Rigorous measures such as skill gaps, mistake trends and behavior application all serve to raise the level of quality on mission-critical programs. Higher-order metrics make a difference in high-consequence environments, and they can make a difference in almost any environment.

Can learning simulations be easily adapted for new uses and applications?

It seems some of today’s managers still hold a 10- to 20-year-old perception of simulations as a narrowly focused learning delivery tool that is expensive and difficult to produce. But, according to ATD, “One trend to watch among organizations that include simulations in a high percentage of TD programs is the use of quicker, simpler simulations.” And Clark Aldrich, author of Short Sims: A Game Changer, recently observed that, “we’re getting away from the idea that every simulation has to be a massive production ... and refocusing simulations on putting people in compelling situations with compelling questions in a sustainable way.”

In 2015, 76% of organizations used non-tech simulations and 48% used tech-based simulations in some or all of their training development programs. In 2020 the percentages grew to 87% and 75%, and by 2025 nearly every organization with training and development needs is expected to be using simulations (non-tech 94% and tech-based 95%). Supporting those growth trends, 44% of the organizations responding to the 2021 CLO survey had more than 2,000 employees, and over 85% of them rated simulation learning either effective or extremely effective. Only “coaching or mentoring” came in higher (89% to 95%), though with a much lower ability to scale.

Those who have recently rediscovered learning simulations have done so for three reasons:

  • They’ve become much simpler to build: new authoring technologies allow designers with limited experience to create sims rapidly and with ease.
  • They’re no longer restricted to niche targets: learning simulations run from hire to retire and can be cost-effectively used for onboarding, risk, DE&I, sales, leadership, customer service, and just about any activity in any industry.
  • They’re cost competitive: for a comparable media type, they’re on par with any other learning modality.

But challenges continue to stem from the fact that business leaders typically do not understand the measures available to L&D, and L&D leaders struggle to communicate the effectiveness of L&D initiatives in business terms.

Of those respondents using simulation learning, 71% report measuring learner impact primarily through participant satisfaction surveys and anecdotal evidence. This qualitative feedback can give organizations an idea of success, but many business leaders prefer more concrete metrics. As for measurable organizational impact, a little more than a third of survey respondents reported measuring against business KPIs such as increased productivity, reduced mistakes and other business management measures. This means that, in most organizations, the impact of simulation learning is still measured primarily through learner satisfaction. Understanding which metrics matter most to which stakeholders is the key to collaboration and adoption, and the principal issue L&D must get its arms around.

What considerations and components are needed for simulations to be effective?

Given their history, asking what it takes to make simulations effective is not unexpected. The reality, though, is that simulations have always been effective; they’ve just had difficulty validating it. The question really isn’t “What’s needed for simulations to be effective?” but “What’s needed for simulations to prove they’re effective?” And here there are two issues:

  • what KPIs do executive managers use to assess simulations — or any training tool; and
  • which of those metrics can be consistently and reliably captured?

In a report by Action Learning Associates, metrics critical to upper management assessment were found to have the lowest reporting occurrence of any of the 19 metrics measured. The quantitative metrics that were more aggressively reported, like courses completed, had nominal value, and to broaden the disconnect even further, the highly reported qualitative metrics like learner satisfaction and instructor quality ratings, while important, had marginal impact on core business KPIs and are considered secondary evaluative metrics.

The problem over the years hasn’t been that L&D didn’t want to provide more actionable metrics; it’s been that they haven’t been able to. Reporting a metric like ROI requires a visible, reliable and consistent association of the dollars spent developing and delivering simulations with the revenue generated. According to the ROI Institute, the information that business leaders want focuses on three main areas — application, results and value — and the good news is that things have moved in that direction.
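
For reference, the calculation behind that kind of reporting is usually the ROI Institute’s basic formula: ROI (%) = (net program benefits ÷ fully loaded program costs) × 100, where net program benefits are the monetized benefits of the program minus its costs.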

In assessing learning simulations, it has become important to collect data throughout every stage of a program and measure the impact on learning. This allows the tracking of behavioral data and the identification of skills gaps, and connects the data back to mission-critical KPIs, which then gives organizations an accurate picture of the true ROI. A Learning Simulation Platform with measurement incorporated into its systems is a solution that benefits employee learners, L&D teams and C-suite decision-makers. Learning Simulation Platform companies like ETU have developed systems to make this connection with impact measures, including behavior analytics, based on real-time data flows.
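
To make the idea of rolling behavioral data up into skill-gap metrics concrete, here is a minimal, hypothetical sketch in Python. It is not ETU’s platform or API; the event structure, skill labels and target rate are assumptions for illustration only.

    # Hypothetical sketch: aggregate simulation decision events into per-skill
    # success rates and gaps against an assumed target rate.
    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class DecisionEvent:
        learner_id: str
        skill: str        # e.g. "risk_assessment" (illustrative label)
        correct: bool     # did the learner choose the target behavior?

    def skill_gap_report(events, target_rate=0.8):
        """Return the observed success rate and the gap to target, per skill."""
        totals = defaultdict(lambda: [0, 0])  # skill -> [correct, attempts]
        for e in events:
            totals[e.skill][0] += int(e.correct)
            totals[e.skill][1] += 1
        return {
            skill: {
                "success_rate": correct / attempts,
                "gap_to_target": max(0.0, target_rate - correct / attempts),
            }
            for skill, (correct, attempts) in totals.items()
        }

    # Example: three recorded decisions from two learners
    events = [
        DecisionEvent("a1", "risk_assessment", True),
        DecisionEvent("a1", "risk_assessment", False),
        DecisionEvent("b2", "active_listening", True),
    ]
    print(skill_gap_report(events))

Metrics like these can then be mapped to the business KPIs the organization already tracks, which is the connection the survey identifies as missing in most programs.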

Understanding the impact of a learning method like simulations on people’s ability to apply what they learned, and then use their new skills and capabilities to deliver desired business results, is crucial. There is a more than 75% improvement in learning quality and retention when learning is delivered through an immersive learning simulation experience. While that’s compelling, if you can’t prove that a platform is producing results, then for many business leaders there is no justification for investment.

Getting started with virtual learning simulations

Learning Simulation Platforms can increase the application of new skills on the job, with a special focus on the critical soft skills that power business success. Measurements taken throughout the learning process can connect learning programs to specific business outcomes. As executives and external stakeholders look to L&D leaders to produce measurable behavior change and relatable results, it will be helpful for those leaders to start measuring from the beginning of an initiative. Incorporating key metrics from the start — and at each step along the way — will build both success and trust.

Questions to consider with simulation learning

  • Are we clear on our end goal, including which behaviors we want to change?
  • Are we incorporating key metrics aligned with those goals?
  • Are we using a platform that helps measure key behavior metrics from the start?
  • How often will we review key behavior metrics and how will we use the data to inform future talent decisions?