The Recognition-Primed Decision Model: How Experts Make Decisions And Why Your Scenarios Aren’t Teaching It
Think about the last time you watched someone experienced handle a crisis at work. A senior manager walking into a meeting that’s gone sideways, a veteran clinician spotting something in a patient’s chart that nobody else noticed, or a seasoned customer service lead reading the room and changing approach before a complaint escalates. What you’re watching, in those moments, isn’t someone carefully weighing options; it’s someone recognising the situation and acting on that recognition almost immediately.
This is the subject of one of the most interesting bodies of research in cognitive science: the Recognition-Primed Decision model. It sits within a broader field called Naturalistic Decision Making, which studies how experienced professionals perform in messy, time-pressured, high-stakes environments. If you design training for any role that involves judgment, this research should be shaping how you build scenarios, run debriefs, and think about expertise development.
The firefighter study
In 1985, the U.S. Army Research Institute gave Gary Klein’s team a modest grant to study how fireground commanders made decisions under pressure. The research team interviewed 26 experienced commanders, with an average of 23 years of service, probing 156 decision points across a range of critical incidents. Klein and his colleagues expected to find that under time pressure, these professionals would at least narrow their options down to two or three before choosing. The prevailing models of decision-making at the time assumed that good decisions came from comparing alternatives, weighting criteria, and selecting the optimal option (Klein, Calderwood and Clinton-Cirocco, 2010).
In over 80% of the decision points, the commanders didn’t compare options at all. They assessed the situation, recognised it as similar to something they’d encountered before, and implemented a course of action without ever generating alternatives. In fewer than 12% of decisions was there any evidence of side-by-side comparison. The commanders weren’t being reckless or lazy; they were drawing on deep experience that allowed them to see the situation as an instance of a familiar pattern, complete with a viable response already attached (Klein, Calderwood and Clinton-Cirocco, 2010).
This runs counter to how most of us have been taught to think about decision-making. The rational model says you list your options, evaluate them against criteria, and pick the best one. That’s what we tend to teach in leadership programmes, and it’s what we build into most of our scenario-based learning. But Klein’s research, replicated across firefighters, critical care nurses, pilots, military commanders, and chess masters, suggests that experienced people in complex, dynamic environments rarely work that way (Klein, 1998).
The three ways recognition works
Klein’s Recognition-Primed Decision model, or RPD, describes three variations of how this recognition process operates, and they represent a useful gradient of cognitive difficulty.
The first variation is a simple match.
The expert recognises the situation as typical, and that recognition generates four things simultaneously:
the relevant cues to pay attention to,
expectations about how the situation will unfold,
plausible goals,
and a typical course of action.
In familiar situations, this can happen in as little as eight to sixteen seconds. If you’ve ever watched a skilled professional make a call that seemed instantaneous, this is likely what was happening beneath the surface.
The second variation kicks in when the situation isn’t immediately clear.
The expert knows what actions are available but can’t quite place what they’re looking at. Something doesn’t fit the initial pattern; expectations are violated. At this point, the expert has to construct a story, a mental model of what might be going on, before they can settle on a response. This diagnostic process adds time and cognitive effort, but it’s still fundamentally different from comparing options against criteria.
The third variation occurs when the expert recognises the situation, but no standard response seems adequate.
Here, they mentally simulate a candidate course of action, projecting forward to see whether it would work. If the mental simulation reveals a problem, they modify the plan or move on to the next option. Critically, this is serial evaluation: options are tested one at a time until something workable emerges. Klein found that these mental simulations typically involve around three variables and six transitions, constrained by the limits of working memory (Klein, 1998).
For those of us designing training, these three variations suggest a natural progression. You can start with clear pattern-matching situations, introduce diagnostic ambiguity as people develop, and then present situations requiring novel action planning. That’s a very different approach from the typical scenario design, where every exercise looks roughly the same regardless of the user’s experience level.
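To make that progression concrete, here’s a minimal sketch in Python of how a scenario bank might be tagged by RPD variation and filtered by learner experience. The scenario titles, level labels, and mapping are entirely hypothetical; treat this as one way of structuring the idea rather than a prescription.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    title: str
    rpd_variation: str  # "simple_match", "diagnosis", or "mental_simulation"

# Hypothetical mapping from learner experience to the RPD variations they should practise.
PROGRESSION = {
    "novice": {"simple_match"},
    "intermediate": {"simple_match", "diagnosis"},
    "experienced": {"diagnosis", "mental_simulation"},
}

def select_scenarios(bank: list[Scenario], learner_level: str) -> list[Scenario]:
    """Return only the scenarios whose RPD variation suits the learner's level."""
    allowed = PROGRESSION[learner_level]
    return [s for s in bank if s.rpd_variation in allowed]

# Example usage with made-up scenarios.
bank = [
    Scenario("Routine escalation call", "simple_match"),
    Scenario("Complaint that doesn't fit the usual pattern", "diagnosis"),
    Scenario("No standard response seems adequate", "mental_simulation"),
]
print([s.title for s in select_scenarios(bank, "intermediate")])
```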
Naturalistic Decision Making: studying expertise where it happens
The RPD model is the most prominent framework within a broader field called Naturalistic Decision Making, which emerged in the late 80s.
Lipshitz, Klein, Orasanu, and Salas (2001) defined the conditions that characterise naturalistic decision-making research: time pressure, high stakes, ambiguous or incomplete information, dynamic conditions that change rapidly, experienced decision makers, ill-defined or shifting goals, and organisational constraints. If that list sounds like a description of most managerial and professional work, that’s rather the point. These are the conditions under which most workplace decisions of consequence are made, and they’re the conditions that traditional decision-making models handle least well.
Two related frameworks are worth knowing about.
Mica Endsley’s situation awareness model (Endsley, 1995) describes three levels of how professionals understand what’s happening around them:
perception (identifying key elements in the environment),
comprehension (integrating those elements into a holistic understanding of their meaning),
and projection (forecasting future states based on that understanding).
Or, what’s happening, what does it mean, and what’s going to happen next.
This maps directly onto the RPD model’s situation assessment component and helps explain why expert advantage lies in cue recognition, rather than in option selection.
Jens Rasmussen’s skill-rule-knowledge framework (Rasmussen, 1983), developed in the context of nuclear power plant safety, describes three levels of cognitive control:
skill-based behaviour (automated, sensory-motor patterns),
rule-based behaviour (following stored procedures),
and knowledge-based behaviour (reasoning from first principles in novel situations).
This is essential for understanding what kind of training intervention is appropriate for a given performer. Someone operating at the skill level needs practice and fluency; someone at the rule level needs clear procedures and decision aids; someone at the knowledge level needs conceptual understanding and mental models. Most training treats all three levels the same, which is a significant part of why so much of it doesn’t work.
What experts see that the rest of us don’t
If the RPD model tells us that expert decision-making is driven by recognition, the obvious question for us is: recognition of what? Klein identified eight categories of perception that are available to experts but invisible to novices, and this list is worth understanding because it reframes what we should be training for (Klein, 1998).
Experts notice patterns that novices don’t see.
They detect anomalies, including events that didn’t happen but should have, violations of expectation that signal something is wrong.
They grasp the big picture and understand how different elements of a situation relate to each other.
They understand how things work, the causal mechanisms at play in their domain.
They can see opportunities and potential for improvisation beyond existing procedures.
They can perceive events that happened in the past or are likely to happen in the future.
They detect differences too small for novices to register.
And they recognise their own limitations, knowing when they’re out of their depth.
So, expertise is primarily a perceptual achievement. Experts don’t just think differently; they see differently. Training that focuses on teaching people decision rules while neglecting perceptual development is missing the primary source of expert advantage. So when creating scenario-based training, rather than solely asking “What would you do?”, we should also include questions that focus on “What do you notice?”
Chase and Simon (1973) demonstrated that chess masters could reconstruct board positions from memory after only five seconds of viewing, but performed no better than novices when the pieces were arranged randomly. The expert advantage was domain-specific pattern recognition, not superior general memory. Masters store an estimated 50,000 meaningful clusters, or “chunks,” in long-term memory, enabling rapid encoding of familiar configurations.
Schema theory, developed by Bartlett (1932) and extended by Rumelhart (1977), explains how experts organise domain knowledge into structured mental representations that guide perception, categorisation, and action. Chi, Feltovich, and Glaser (1981) showed that physics experts categorised problems based on deep principles such as conservation of energy, while novices categorised them by surface features such as whether the problem involved an inclined plane. This deep-structure organisation is what allows experienced professionals to recognise a situation’s underlying dynamics rather than being distracted by its surface appearance.
Why “just trust your gut” is the wrong takeaway
Before we go any further, I want to address the most dangerous misreading of this research. The RPD model does not say that intuition is always reliable. It describes a specific cognitive process, pattern matching combined with mental simulation, that requires extensive domain experience in environments with valid, learnable regularities. It does not endorse blind gut feeling.
This distinction was worked out in detail through a collaboration between Klein and Daniel Kahneman, who spent approximately six years exploring where they agreed and disagreed about the reliability of expert intuition. Their joint paper, published in 2009, identified two conditions that must both be met for intuitive expertise to be trustworthy:
the environment must be sufficiently regular, providing valid and learnable cues to the nature of situations;
and the decision maker must have had adequate opportunity to learn those regularities through prolonged practice with accurate and timely feedback (Kahneman and Klein, 2009).
Both researchers agreed that subjective confidence is not a reliable indicator of whether someone’s intuition is sound. People can feel very certain about judgments that are completely wrong, particularly in environments that don’t provide consistent feedback. Stock picking, long-range political forecasting, and clinical judgement in domains without clear outcome data are all examples where confidence regularly outstrips accuracy.
In our work, the Kahneman-Klein framework provides a diagnostic question to ask before investing in expertise-building training:
Does this role operate in an environment with learnable regularities, and do the people in it receive adequate feedback to learn from?
If the answer to either question is no, then building pattern recognition through experience may not be the right approach, and more analytical frameworks, checklists, and structured decision aids might serve people better. If the answer to both is yes, then the RPD model should be shaping your training design.
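Expressed as code (my own framing, not something from the Kahneman-Klein paper), the diagnostic is a simple gate:

```python
def training_approach(regular_environment: bool, adequate_feedback: bool) -> str:
    """Apply the Kahneman-Klein two-condition check to pick a broad training strategy."""
    if regular_environment and adequate_feedback:
        return "Build pattern recognition: scenario practice, cue inventories, RPD-style debriefs."
    return "Favour analytical supports: checklists, structured decision aids, explicit frameworks."

# Example: learnable regularities exist, but feedback is slow or absent.
print(training_approach(regular_environment=True, adequate_feedback=False))
```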
How this applies to learning design
The first and most important consideration is what we design scenarios to practise. Most scenario-based learning is structured as a series of decision points where the user selects from predetermined options, receives feedback on whether they chose correctly, and moves on. The RPD model suggests this is optimising for the wrong thing. Experts spend most of their cognitive effort on situation assessment, on understanding what they’re looking at, and the appropriate action tends to follow from that assessment. Novices spend more time deliberating about what to do because they haven’t yet developed the situation assessment capability that would make the answer apparent (Klein, 1998).
This means our scenarios should be asking different questions. Instead of “which of these four options would you choose?”, we should be asking:
“what cues do you notice in this situation?”,
“what do you think is going on here?”,
“what would you expect to happen next?”,
and “what would make you change your assessment?”
These questions develop the perceptual and diagnostic capabilities that underpin expert performance, rather than simply testing whether someone can identify the correct response from a list.
Note: I’m not saying we should never ask people what they would do, but asking only that question won’t accelerate the development of expertise in the way this broader range of questions will.
The second is in how we design debriefs. Post-exercise debriefs in most organisations focus on outcomes: did the team achieve the objective, did the individual select the right answer, what went well and what didn’t? But there is a huge opportunity for learning when debriefs focus on the reasoning process. The U.S. Air Force Weapons School uses a framework that categorises errors into three types:
perception errors, where the person didn’t notice the right cues;
decision errors, where they noticed the cues but misinterpreted the situation;
and execution errors, where they assessed correctly but implemented poorly.
This categorisation maps directly onto the RPD model and produces targeted instructional fixes rather than generic feedback.
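If you wanted to operationalise that coding scheme, a sketch might look like the following. The three error categories come from the framework above; the specific fixes attached to each are my own illustrative assumptions.

```python
# Debrief error taxonomy mapped to the kind of instructional fix each points towards.
# Categories follow the perception/decision/execution framework; the fixes are illustrative.
ERROR_TAXONOMY = {
    "perception": "Develop cue recognition: more varied exposure to the critical cues and anomalies.",
    "decision": "Develop situation assessment: practise building and testing a story of what's going on.",
    "execution": "Develop procedural fluency: drill implementation of an already-correct assessment.",
}

def recommend_fix(error_type: str) -> str:
    """Return the targeted fix for a coded debrief error."""
    return ERROR_TAXONOMY.get(error_type, "Unrecognised error type; re-code the observation.")

print(recommend_fix("perception"))
```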
The debrief questions that matter most are:
What did you notice first?
What did that tell you?
What were you expecting to happen?
When did your assessment change?
And, what cues did you miss or dismiss?
These questions make the invisible cognitive work visible, and they give learners access to the reasoning patterns that experts use but rarely articulate.
Cognitive Task Analysis
If expertise is primarily perceptual and much of expert knowledge is tacit, how do you extract it in order to design training around it? This is the problem that Cognitive Task Analysis was developed to solve. Cognitive Task Analysis is a family of research methods for uncovering the mental processes that underlie observable expert behaviour: the decision-making, pattern recognition, mental models, and tacit knowledge that traditional task analysis, which focuses on observable actions and procedures, simply cannot capture (Crandall, Klein and Hoffman, 2006).
Probably the most useful method for us is the Critical Decision Method, or CDM.
First, an expert tells the complete story of a challenging, non-routine incident without interruption.
Second, the interviewer and expert construct a detailed timeline of key events, decisions, and turning points.
Third, the interviewer returns to key decision points with cognitive probes: what cues did you notice, what were your goals, what options did you consider, what knowledge guided your decision, what would a novice have missed?
Fourth, the interviewer tests the expert’s mental models with “what if” hypotheticals (Klein, Calderwood and MacGregor, 1989).
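For anyone wanting to run this themselves, here’s a minimal sketch of the four sweeps as a reusable interview guide. The sweep structure follows the protocol above; the probe wording is a paraphrase rather than the published instrument.

```python
# Critical Decision Method sweeps expressed as a reusable interview guide.
# Structure follows Klein, Calderwood and MacGregor (1989); wording is a paraphrase.
CDM_GUIDE = [
    ("Incident account", [
        "Tell me the whole story of the toughest recent case, uninterrupted.",
    ]),
    ("Timeline", [
        "Let's reconstruct the key events, decisions, and turning points in order.",
    ]),
    ("Deepening probes", [
        "What cues did you notice at this point?",
        "What were your goals?",
        "What options did you consider?",
        "What knowledge guided your decision?",
        "What would a novice have missed here?",
    ]),
    ("What-if queries", [
        "If that cue had been absent, what would you have done differently?",
    ]),
]

for sweep, probes in CDM_GUIDE:
    print(f"\n{sweep}:")
    for probe in probes:
        print(f"  - {probe}")
```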
Note: You’ll notice that this aligns with a number of different ways that instructional designers and L&D professionals already work. Elements of the critical decision method appear in action mapping and backwards design, as well as various approaches in the product, design, and development world.
Interestingly, Klein’s team learned early on that when they asked firefighters about “decisions,” the commanders insisted they didn’t make decisions. They just did what needed doing. When the researchers started asking about “tough cases” instead, the details poured out.
If you’re going to interview subject-matter experts in your organisation, ask them to tell you about the hardest situations they’ve faced, not about their decision-making process.
So let’s look at an example of the critical decision method being used. Crandall and Getchell-Reiter (1993) used CDM with neonatal intensive care nurses to elicit concrete assessment indicators for early sepsis detection in newborns. The expert nurses could detect life-threatening infections before blood tests confirmed them, using cues that weren’t documented in the nursing or medical literature, and some of which were opposite to adult infection indicators. The resulting instructional guide was rated useful by every evaluator. That knowledge existed inside those nurses’ heads, invisible to traditional training design approaches, and CDM extracted it.
The evidence for CTA-informed training design is strong. A meta-analysis by Tofel-Grehl and Feldon (2013) found a large effect size (Hedges’ g = 0.871) for training designed using CTA methods compared to training designed without them. Richard Clark estimated that failing to conduct CTA results in a 70% performance deficit after training (Clark et al., 2008). These are significant numbers, and they suggest that most L&D programmes are leaving enormous value on the table by skipping this step.
For those who want a more accessible entry point than the full CDM, Militello and Hutton (1998) developed Applied Cognitive Task Analysis, or ACTA, as a streamlined toolkit that instructional designers can use without specialised research training. It consists of a task diagram interview to break work into cognitive subtasks, a knowledge audit probing aspects of expertise, including pattern recognition and anomaly detection, a simulation interview walking through a scenario with the expert, and a cognitive demands table synthesising all findings. Evaluation studies showed that graduate students trained in ACTA produced training modifications rated as accurate and important by domain experts, confirming that you don’t need a PhD to do this well.
ShadowBox
One of the frustrations with academic research, from a practitioner’s perspective, is that it often describes what experts do without offering a practical training method to develop those capabilities in others.
ShadowBox was originally developed in 2008 by Neil Hintze, a Battalion Chief with the Fire Department of New York. The core mechanism is elegant: trainees work through a realistic, progressive scenario, and at several predetermined decision points, they rank a set of options and write a rationale explaining their ranking. Their rankings and reasoning are then compared against those of a panel of five to seven subject-matter experts who completed the same scenario independently; the discrepancy reveals flaws in the trainee’s mental models, cue recognition, and priority setting (Klein and Borders, 2016).
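The published studies don’t spell out the exact scoring, but alignment can be thought of as something like the share of option pairs a trainee orders the same way as the expert panel. A minimal sketch, assuming that pairwise-agreement metric and made-up option labels:

```python
from itertools import combinations

def pairwise_agreement(trainee_ranking, expert_ranking):
    """Share of option pairs the trainee orders the same way as the expert (1.0 = identical order)."""
    pairs = list(combinations(expert_ranking, 2))
    agree = sum(
        (trainee_ranking.index(a) < trainee_ranking.index(b))
        == (expert_ranking.index(a) < expert_ranking.index(b))
        for a, b in pairs
    )
    return agree / len(pairs)

def panel_alignment(trainee_ranking, panel_rankings):
    """Average agreement with each expert on the panel."""
    scores = [pairwise_agreement(trainee_ranking, expert) for expert in panel_rankings]
    return sum(scores) / len(scores)

# Hypothetical decision point with four options, each ranking listed best-first.
trainee = ["contain", "escalate", "evacuate", "wait"]
panel = [
    ["evacuate", "contain", "escalate", "wait"],
    ["contain", "evacuate", "escalate", "wait"],
    ["evacuate", "escalate", "contain", "wait"],
]
print(f"Alignment with expert panel: {panel_alignment(trainee, panel):.0%}")
```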
The evidence, while based on modest sample sizes, is consistently positive. In a DARPA-funded study with Marines, 59 participants received three hours of non-facilitated, paper-based ShadowBox training. The group that received expert feedback improved their alignment with expert rankings by 28% compared to the control group. In a second study with soldiers at Fort Benning, one hour of tablet-delivered ShadowBox training produced a 21% improvement (Klein, Hintze and Saab, 2013). Perhaps most interesting for those of us concerned about scalability: Hintze’s original study found that a facilitated discussion condition produced only about 6% higher alignment than expert feedback alone, and that difference wasn’t statistically significant. In other words, the self-paced format without a facilitator is nearly as effective as the facilitated version (Klein and Borders, 2016).
ShadowBox doesn’t require expert facilitators for every session; it requires expert input during the design phase. You capture the expertise once, structure it into scenarios, and deploy it repeatedly.
The expert-novice gap
There’s a tension in all of this: the same research that explains expert performance also constrains what we can achieve through training. The RPD model is explicitly a model of experienced decision-making. Klein himself noted that before the firefighter study, the prevailing assumption was that novices impulsively jumped at the first option they could think of, while experts carefully deliberated. The research revealed the opposite: it was the experts who could generate a single, effective course of action, while novices needed to compare different approaches because they lacked the pattern library to recognise the situation immediately (Klein, 1998).
This means we can’t simply teach expert heuristics to novices and expect them to perform like experts. The pattern recognition that drives recognition-primed decision-making is built on thousands of hours of varied experience with quality feedback. You can accelerate this process through well-designed training, but you can’t eliminate the need for experience altogether.
The practical implication is one that most L&D programmes still ignore: training must be differentiated by expertise level. A one-size-fits-all scenario exercise is simultaneously too complex for your novices and too constraining for your experienced people. Novices need structured guidance, clear decision aids, and procedural scaffolding. Intermediate performers need varied practice with diagnostic feedback. Experienced professionals need challenging, ambiguous situations that stretch their pattern libraries and expose the limits of their mental models.
Where to start
If you’re interested in applying this research, the practical entry points are more accessible than you might expect.
Start by conducting a lightweight Cognitive Task Analysis with the subject-matter experts in your organisation. You don’t need to follow the full four-step CDM protocol; even asking two or three experienced people to walk you through their toughest recent situations, probing specifically for what they noticed, what they expected, and what a less experienced person would have missed, will surface insights that your current training design almost certainly lacks.
Build critical cue inventories from those conversations. Document the specific cues, patterns, and anomalies that experts attend to and use them as the foundation for scenario design.
Redesign your scenarios around decision points rather than content delivery. At each decision point, ask users what they notice before asking what they’d do. Embed realistic ambiguity, and use the three RPD variations as a progression framework: clear pattern matching for less experienced learners, diagnostic ambiguity for intermediate learners, and situations requiring novel action planning for experienced practitioners.
Restructure your debriefs around the reasoning process. Ask what was noticed first, what it meant, what was expected, and what would change the assessment. These questions make cognitive work visible and give people access to expert reasoning patterns that would otherwise be locked inside experts’ heads.
If you want to go further, Klein’s Sources of Power (1998) remains the most accessible introduction to this entire field, and Crandall, Klein, and Hoffman’s Working Minds (2006) provides the practitioner’s guide to Cognitive Task Analysis. Both are well worth the investment.
The broader point is this: for any role that involves judgment under uncertainty, sharpening cue recognition and training the ability to read situations will produce better outcomes than teaching people to select from predetermined options.
References
Chase, W.G. and Simon, H.A. (1973) ‘Perception in chess’, Cognitive Psychology, 4(1), pp. 55-81.
Chi, M.T.H., Feltovich, P.J. and Glaser, R. (1981) ‘Categorization and representation of physics problems by experts and novices’, Cognitive Science, 5(2), pp. 121-152.
Clark, R.E., Feldon, D., Van Merriënboer, J.J.G., Yates, K.A. and Early, S. (2008) ‘Cognitive task analysis’, in Spector, J.M., Merrill, M.D., Van Merriënboer, J.J.G. and Driscoll, M.P. (eds.) Handbook of Research on Educational Communications and Technology. 3rd edn. New York: Routledge, pp. 577-593.
Crandall, B. and Getchell-Reiter, K. (1993) ‘Critical decision method: A technique for eliciting concrete assessment indicators from the intuition of NICU nurses’, Advances in Nursing Science, 16(1), pp. 42-51.
Crandall, B., Klein, G. and Hoffman, R.R. (2006) Working Minds: A Practitioner’s Guide to Cognitive Task Analysis. Cambridge, MA: MIT Press.
Endsley, M.R. (1995) ‘Toward a theory of situation awareness in dynamic systems’, Human Factors, 37(1), pp. 32-64.
Kahneman, D. and Klein, G. (2009) ‘Conditions for intuitive expertise: A failure to disagree’, American Psychologist, 64(6), pp. 515-526.
Kalyuga, S., Ayres, P., Chandler, P. and Sweller, J. (2003) ‘The expertise reversal effect’, Educational Psychologist, 38(1), pp. 23-31.
Klein, G. (1998) Sources of Power: How People Make Decisions. Cambridge, MA: MIT Press.
Klein, G. and Borders, J. (2016) ‘The ShadowBox approach to cognitive skills training’, Journal of Cognitive Engineering and Decision Making, 10(3), pp. 268-280.
Klein, G., Calderwood, R. and Clinton-Cirocco, A. (2010) ‘Rapid decision making on the fire ground: The original study plus a postscript’, Journal of Cognitive Engineering and Decision Making, 4(3), pp. 186-209.
Klein, G., Calderwood, R. and MacGregor, D. (1989) ‘Critical decision method for eliciting knowledge’, IEEE Transactions on Systems, Man, and Cybernetics, 19(3), pp. 462-472.
Klein, G., Hintze, N. and Saab, D. (2013) ‘Thinking inside the box: The ShadowBox method for cognitive skill development’, in Proceedings of the 11th International Conference on Naturalistic Decision Making. Marseille, France.
Lipshitz, R., Klein, G., Orasanu, J. and Salas, E. (2001) ‘Taking stock of naturalistic decision making’, Journal of Behavioral Decision Making, 14(5), pp. 331-352.
Militello, L.G. and Hutton, R.J.B. (1998) ‘Applied cognitive task analysis (ACTA): A practitioner’s toolkit for understanding cognitive task demands’, Ergonomics, 41(11), pp. 1618-1641.
Rasmussen, J. (1983) ‘Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models’, IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(3), pp. 257-266.
Tofel-Grehl, C. and Feldon, D.F. (2013) ‘Cognitive task analysis-based training: A meta-analysis of studies’, Journal of Cognitive Engineering and Decision Making, 7(3), pp. 293-304.