The Decision Tax
A training intervention can be rich with content, thoughtfully designed, grounded in solid instructional principles, and loaded with options and pathways and branching scenarios that give the learner all the freedom they could possibly want, and still not work. People get stuck before they even reach the material that matters, because the sheer volume of choices they face on the way to it has become a task in its own right.
This is the decision tax: a cost that people pay every time they have to choose between options that aren’t the point of the exercise. The more options we give them, the higher that tax becomes.
This is the second article in our series exploring how user experience principles apply to the design of training interventions, whether digital, in-person, or blended. Last week, we looked at the UX Honeycomb and the six essential considerations that every training experience should address. This week: how the number and complexity of choices within our interventions affects the people we’re trying to help.
The Psychology of Choice
More choices make decisions harder. Of course twenty options take longer to process than three. The relationship, though, isn’t linear. Decision time follows a logarithmic curve, rising steeply at first and then flattening out as options pile up. The jump from two options to four has a proportionally much larger effect on decision time than the jump from twenty to forty.
This relationship was formalised in 1952 by William Edmund Hick, a British psychologist working at the Medical Research Council’s Applied Psychology Unit in Cambridge, and subsequently confirmed by the American psychologist Ray Hyman in 1953. Hick’s original experiment was simple: ten lamps arranged in a circle, each corresponding to one of ten Morse code keys operated by the participant’s fingers, with a pre-punched tape activating a random lamp every five seconds (Hick, 1952). The participant had to press the correct key as quickly as possible after a lamp lit up, and Hick measured how long this took across varying numbers of active lamps. A clear logarithmic relationship emerged between the number of possible choices and the time required to respond correctly.
Hyman extended this the following year using a different experimental setup, eight lights in a matrix with verbal responses, confirming Hick’s findings while also establishing the relationship between reaction time and the amount of information transmitted (Hyman, 1953). Together, their work produced the Hick-Hyman Law, often referred to as Hick’s Law in the UX world: RT = a + b log₂(n), where RT is reaction time, n is the number of equally probable alternatives, and a and b are constants that depend on the task and the individual.
You’ll often see this illustrated as a characteristic logarithmic curve, a line that rises sharply on the left and levels off to the right. Later research by Longstreth (1988) found the logarithmic model becomes less accurate beyond about eight choices, with a power function providing a better fit across wider ranges, and exceptions have been documented for verbal responses to familiar stimuli and for unequal probability distributions (Proctor and Schneider, 2018). For the range of choices we typically present in training interventions, somewhere between two and a few dozen, the principle holds: more options demand more cognitive processing, and the greatest cost comes from the initial increase in choice complexity.
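A quick sketch makes the shape of the law concrete. The constants below are illustrative placeholders, not fitted values; in practice, a and b are estimated empirically for a given task and individual.

```python
import math

def hick_rt(n, a=0.2, b=0.15):
    """Predicted response time in seconds under the Hick-Hyman Law.

    a and b are illustrative placeholders, not empirical values;
    real constants are fitted per task and per individual.
    """
    return a + b * math.log2(n)

# Doubling the number of options always adds the same b seconds,
# so going from 2 to 4 options costs as much extra decision time
# as going from 20 to 40:
print(round(hick_rt(4) - hick_rt(2), 3))    # 0.15
print(round(hick_rt(40) - hick_rt(20), 3))  # 0.15
```

Adding two options at the cheap end of the curve costs the learner as much as adding twenty at the expensive end, which is why trimming a small menu pays off far more than trimming a sprawling one.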
The Cognitive Load Connection
The family resemblance between Hick’s Law and cognitive load theory is obvious. Both concern the finite processing capacity of working memory and what happens when we exceed it. Working memory can process perhaps four chunks of information at any given time (Sweller, 1988; Cowan, 2001), and extraneous load, the unnecessary demands imposed by poor instructional design, directly competes with the cognitive resources people need for learning.
Every unnecessary decision point in a training intervention is extraneous cognitive load. A participant trying to figure out which of twelve menu options leads to the module they need, or spending five minutes deciding which of six possible activities to start with in a workshop, is burning through working memory capacity on navigation and selection rather than on the material designed for them. Decision time increases with the number of options; unnecessary cognitive demands undermine learning. The decision-making experience within our training interventions deserves as much design attention as the content itself.
If you’d like to explore how you can use research from psychology, neuroscience, the cognitive sciences and behavioural sciences in your work in L&D, HR, performance enablement or leadership, consider attending this year’s Evidence Informed Practice Conference.
As a reader of this Substack, you can get 25% off your ticket using code CPDW25 at checkout.
Outside L&D
Other fields have been responding to this challenge for longer than we have. The television remote control is a surprisingly good case study. The old approach gave users a button for everything: input selection, picture settings, audio modes, channel favourites, teletext (for those of us old enough to remember it), all visible and available at all times. Modern remotes have stripped this back to a directional pad, a few navigation buttons, and perhaps three or four quick-access shortcuts. Everything else lives behind a screen-based menu that appears only when you need it.
This is progressive disclosure, one of the most effective strategies for managing choice complexity (Nielsen, 2006). Show users only the most important options initially, and reveal additional options when they’re needed. Create layers of choice, each containing only what’s relevant at that stage. Well-designed mobile apps follow the same principle: four or five primary actions on the home screen, everything else tucked behind secondary menus or contextual triggers, revealed when you’re most likely to want them.
Inside L&D
A learner logs into their organisation’s LMS, and what greets them is a catalogue page containing thirty-two course categories, each with sub-categories, each populated with anything from three to a hundred and five individual modules. Every second spent navigating that catalogue is extraneous load, cognitive effort spent on finding the training rather than doing it. If the information architecture doesn’t match the learner’s mental model of how the content should be organised (which, given that it was probably designed by an L&D team rather than tested with actual users, it likely doesn’t), the search becomes even more effortful.
The same problem shows up in training rooms. Consider a facilitator who, with the very best of intentions, opens a session by laying out thirteen possible topics and asking the group to decide which ones they’d like to tackle first. The thinking is sound, but the execution creates a twenty-minute quagmire in which a room full of professionals is engaged in a group decision-making exercise that nobody asked for and nobody is enjoying. By the time a decision is reached, the energy in the room has dropped and a chunk of the session time has evaporated.
Within the training material itself, the same dynamic appears whenever we present too many options simultaneously. An e-learning module that opens with a menu of six topics arranged in a grid. A scenario-based exercise that offers eight possible actions at a single decision point. A workbook with fourteen reflection questions on a single page. Each forces the user to pause, process all the available options, evaluate them against each other, and select one, before they can get on with the thing we want them to focus on.
Card Sorting for Information Architecture
How do we figure out what the right organisation looks like? Card sorting, a staple of UX research for decades, remains underused in L&D.
In a card sort, participants organise individually labelled cards, each representing a topic, task, module, or concept, into groups that make sense to them, and then name those groups (Nielsen Norman Group, 2024). The method reveals the mental models that real users bring to your content: the categories they naturally create, the groupings they expect, the hierarchies that feel intuitive.
The most important distinction is between open and closed sorts. In an open card sort, participants create their own groups and labels from scratch, with no predefined categories, surfacing how people naturally think about the content without being influenced by whatever structure we’ve already imposed. In a closed card sort, participants sort cards into categories you’ve already defined, which is better suited to validating an existing structure than discovering a new one (Maze, 2025).
Here’s how to run a useful open card sort for a training intervention.
Identify all the individual topics, modules, tasks, or concepts that your intervention needs to cover. Write each one on a separate card, whether physical index cards or digital equivalents. Keep the labels clear and concise; avoid jargon that participants might interpret differently. Aim for somewhere between thirty and sixty cards. Fewer than that and you won’t generate enough complexity to learn anything useful; more than eighty and participants will find the task overwhelming.
Recruit participants from your actual target audience, the people who will use whatever you’re designing. Do not run this exercise solely with other learning designers, your L&D team, or subject matter experts. The people who do the work day-to-day think about these topics differently from the people who design training about them.
Run the session with an observer who watches but does not guide. The observer’s job is to note the process, not just the outcome: which cards caused hesitation, where people changed their minds, what reasoning they articulated as they sorted. This running commentary is often more valuable than the final groupings, because it reveals the connections and tensions that participants are negotiating.
Have participants name their groups after they’ve finished sorting, not before. Providing category names in advance, even as suggestions, introduces a frame that shapes how people think about the sorting task. That’s a form of coercion, however gentle, and it will compromise your results.
Run the exercise with multiple groups drawn from different parts of your audience, people with different experience levels, from different teams, with different perspectives on the work. Where groups converge on similar structures, you’ve found robust patterns you can design around with confidence. Where they diverge, you’ve found the areas that need more investigation or that might require different navigation paths for different user groups.
Debrief thoroughly. Ask participants which cards felt like they could belong in more than one group and whether any important topics seemed to be missing. Record everything, not just the final arrangement, so that you can revisit the reasoning when making design decisions later.
A well-run card sort gives you a user-tested information architecture for your intervention: which topics belong together, what language to use for categories and navigation, where the natural boundaries lie. You reduce choice complexity by organising options in a way that matches how your audience already thinks about the subject.
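If your card sorts are digital, or you transcribe the physical ones, the convergence check across groups can be automated with a simple co-occurrence count: how often did each pair of cards land in the same group? The card labels below are hypothetical, and this is a minimal sketch rather than a full similarity analysis:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sorts):
    """Count how often each pair of cards was placed in the same group.

    sorts: one entry per participant; each entry is a list of groups,
    and each group is a list of card labels. Pairs are stored in
    sorted order so (a, b) and (b, a) count as the same pair.
    """
    counts = Counter()
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return counts

# Hypothetical results from three participants:
sorts = [
    [["Expenses", "Travel"], ["Onboarding", "IT setup"]],
    [["Expenses", "Travel", "IT setup"], ["Onboarding"]],
    [["Expenses", "Travel"], ["Onboarding", "IT setup"]],
]
counts = cooccurrence(sorts)
print(counts[("Expenses", "Travel")])       # 3 -- every participant agreed
print(counts[("IT setup", "Onboarding")])   # 2 -- a point of divergence
```

Pairs that score at the maximum (every participant grouped them together) are the robust patterns worth designing around; pairs with middling scores mark the boundaries that need the further investigation described above.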
The Danger of Over-Simplification
There’s a trap here. The opposite of too many choices isn’t no choices; it’s the right choices, presented in the right structure, at the right time.
Over-simplification in training design creates an abstraction gap between the training environment and the reality people are trying to perform in. If your sales team encounters twelve different types of customer objection in their daily work, and your training covers three of them because you wanted to keep things simple, you’ve reduced instructional validity. If real-world decision-making involves weighing five or six competing factors simultaneously, and your training scenarios present tidy two-option choices to avoid analysis paralysis, you’ve designed something that feels efficient but doesn’t transfer.
Bjork’s work on desirable difficulty shows us that certain kinds of challenge during training produce better long-term retention and transfer even though they feel harder in the moment (Bjork, 1994). Difficulty that comes from processes supporting learning is valuable; difficulty that comes from poor design is waste. Place challenge where it serves performance, and strip it away from everything else: information architecture, navigation, the mechanics of participation, and the overhead of making decisions that don’t contribute to anyone’s development.
Designing for Decisions
Audit every point in a training intervention where someone has to make a choice. Does that choice serve the learning purpose, or is it adding cognitive cost?
Reducing options from ten to five saves more decision time than reducing from fifty to forty-five. Simplifying a few key decision points, the landing page of your LMS, the opening navigation of your e-learning module, the first five minutes of your workshop, will have a disproportionate impact on the overall experience.
Layer your choices so that people encounter only what they need at each stage, with deeper options available when they’re relevant. Discover how your users think about the content before you decide how to organise it. And remember that every unnecessary decision consumes working memory resources that would be better spent on learning.
The real world is complex, and effective training must prepare people for that complexity. We owe it to the people we’re designing for to make the experience of engaging with our work as straightforward as it can be, so that their effort goes where it should: into developing the knowledge and skills they came here to build.
References
Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe and A. Shimamura (Eds.), Metacognition: Knowing about Knowing (pp. 185-205). MIT Press.
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87-114.
Hick, W. E. (1952). On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1), 11-26.
Hyman, R. (1953). Stimulus information as a determinant of reaction time. Journal of Experimental Psychology, 45(3), 188-196.
Longstreth, L. E. (1988). Hick’s law: Its limit is 3 bits. Bulletin of the Psychonomic Society, 26(1), 8-10.
Maze (2025). Card sorting: How to improve IA and uncover mental models.
Nielsen, J. (2006). Progressive disclosure. Nielsen Norman Group.
Nielsen Norman Group (2024). Card sorting: Uncover users’ mental models.
Proctor, R. W. and Schneider, D. W. (2018). Hick’s law for choice reaction time: A review. Quarterly Journal of Experimental Psychology, 71(6), 1281-1299.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.


