
Cognitive Errors in Clinical Decision Making

by Douglas L. McGee, DO

Although quantitative mathematical models can guide clinical decision making, clinicians rarely use formal computations to make patient care decisions in day-to-day practice. Rather, an intuitive understanding of probabilities is combined with cognitive processes called heuristics to guide clinical judgment. Heuristics are often referred to as rules of thumb, educated guesses, or mental shortcuts. Heuristics usually involve pattern recognition and rely on a subconscious integration of somewhat haphazardly gathered patient data with prior experience rather than on a conscious generation of a rigorous differential diagnosis that is formally evaluated using specific data from the literature.
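For illustration only, below is a minimal sketch of one such formal computation, a Bayesian update of disease probability using a likelihood ratio; the probability and likelihood-ratio values are hypothetical, not clinical guidance:

```python
# A minimal sketch of a formal Bayesian update using a likelihood ratio.
# All numbers are hypothetical illustrations, not clinical guidance.

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to a post-test probability via odds."""
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

# Example: a positive result on a test with a likelihood ratio of 10,
# applied to a patient with an estimated 5% pre-test probability.
print(post_test_probability(0.05, 10))  # ~0.34: more likely, but far from certain
```

In day-to-day practice, clinicians rarely compute these numbers explicitly; instead, an intuitive sense of them is folded into heuristic judgment.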

Such informal reasoning is often fallible because heuristics may cause several types of unconscious errors (cognitive errors). Studies suggest that more medical errors involve cognitive error than lack of knowledge or information.

Types of cognitive error

There are many types of cognitive errors. Although avoiding errors is obviously more important than classifying them after the fact, awareness of the common types can help clinicians recognize and avoid them.

Cognitive errors may roughly be classified as those involving

  • Faulty assessment of pre-test probability (overestimating or underestimating disease likelihood)

  • Failure to seriously consider all relevant possibilities

Both types of error can easily lead to improper testing (too much or too little) and missed diagnoses.

Availability error occurs when clinicians misestimate the prior probability of disease because of recent experience. Overestimation is common when a memorable case was dramatic, involved a patient who fared poorly, or resulted in a lawsuit. For example, a clinician who recently missed the diagnosis of pulmonary embolism in a healthy young woman who had vague chest discomfort but no other findings or apparent risk factors might then overestimate the risk in similar patients and become more likely to order chest CT angiography for them despite the very small probability of disease. Experience can also lead to underestimation. For example, a junior resident who has seen only a few patients with chest pain, all of whom turned out to have benign causes, may begin to do cursory evaluations of that complaint even among populations in which disease prevalence is high.

Representation error occurs when clinicians judge the probability of disease based on how closely the patient’s findings fit classic manifestations of a disease without taking into account disease prevalence. For example, although several hours of vague chest discomfort in a thin, athletic, healthy-appearing 60-yr-old man who has no known medical problems and who now looks and feels well does not match the typical profile of a myocardial infarction (MI), it would be unwise to dismiss that possibility because MI is common among men of that age and has highly variable manifestations. Conversely, a healthy 20-yr-old man with sudden onset of severe, sharp chest pain and back pain may be suspected of having a dissecting thoracic aortic aneurysm because those clinical features are common in aortic dissection. The cognitive error is failing to take into account that aortic dissection is exceptionally rare in a 20-yr-old, otherwise healthy patient; that disorder can be dismissed out of hand, and other, more likely causes (eg, pneumothorax, pleuritis) should be considered. Representation error is also involved when clinicians fail to recognize that positive test results in a population in which the tested disease is rare are more likely to be false positive than true positive.
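A hypothetical calculation of positive predictive value illustrates this last point; all numbers below are invented for illustration, not drawn from any particular test:

```python
# Hypothetical illustration: even an accurate test yields mostly false
# positives when the tested disease is rare. All numbers are invented.

prevalence = 0.001   # 1 in 1000 tested patients actually has the disease
sensitivity = 0.99   # probability of a positive test in a patient with disease
specificity = 0.95   # probability of a negative test in a patient without disease

true_positive_rate = prevalence * sensitivity
false_positive_rate = (1 - prevalence) * (1 - specificity)

# Positive predictive value: the chance a positive result is a true positive
ppv = true_positive_rate / (true_positive_rate + false_positive_rate)
print(f"PPV = {ppv:.1%}")  # ~1.9%: most positive results are false positives
```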

Premature closure is one of the most common errors; clinicians make a quick diagnosis (often based on pattern recognition), fail to consider other possible diagnoses, and stop collecting data (ie, they jump to conclusions); often, even the suspected diagnosis is not confirmed by appropriate testing. Premature closure errors may occur in any case but are particularly common when patients seem to be having an exacerbation of a known disorder—eg, if a woman with a long history of migraine presents with a severe headache (and actually has a new subarachnoid hemorrhage), the headache may be mistakenly assumed to be another attack of migraine. A variation of premature closure occurs when subsequent clinicians (eg, consultants on a complicated case) unquestioningly accept a previous working diagnosis without independently collecting and reviewing relevant data.

Anchoring errors occur when clinicians steadfastly cling to an initial impression even as conflicting and contradictory data accumulate. For example, a working diagnosis of acute pancreatitis is quite reasonable in a 60-yr-old man who has epigastric pain and nausea, who is sitting forward clutching his abdomen, and who has a history of several bouts of alcoholic pancreatitis that he states have felt similar to what he is currently feeling. However, if the patient states that he has had no alcohol in many years and has normal blood levels of pancreatic enzymes, clinicians who simply dismiss or excuse these conflicting data (eg, the patient is lying, his pancreas is burned out, the laboratory made a mistake) are committing an anchoring error. Clinicians should regard conflicting data as evidence of the need to continue seeking the true diagnosis (in this example, acute MI) rather than as anomalies to be disregarded. In some cases of anchoring error, there is little or no evidence supporting the anchored diagnosis in the first place.

Confirmation bias occurs when clinicians selectively accept clinical data that support a desired hypothesis and ignore data that do not (cherry-picking). Confirmation bias often compounds an anchoring error when the clinician uses confirmatory data to support the anchored hypothesis even when clearly contradictory evidence is also available. For example, a clinician may steadfastly cling to elements of the patient history that suggest acute coronary syndrome (ACS) to confirm the original suspicion of ACS even when serial ECGs and cardiac enzymes are normal.

Attribution errors involve negative stereotypes that lead clinicians to ignore or minimize the possibility of serious disease. For example, clinicians might assume that an unconscious patient with an odor of alcohol is “just another drunk” and miss hypoglycemia or intracranial injury, or they might assume that a known drug abuser with back pain is simply seeking drugs and miss an epidural abscess caused by use of dirty needles. Psychiatric patients who develop a physical disorder are particularly likely to be subject to attribution errors because not only may they be subject to negative stereotyping but they often describe their symptoms in unclear, inconsistent, or confusing ways, leading unwary clinicians to assume their complaints are of mental origin.

Affective error involves avoiding unpleasant but necessary tests or examinations because of fondness or sympathy for the patient (eg, avoiding a pelvic examination on a modest patient or blood cultures on a seriously ill patient who has poor veins).

Minimizing cognitive errors

Some specific strategies can help minimize cognitive errors. Typically, after the history and physical examination, clinicians form a working diagnosis based on heuristics. At this point, it is relatively easy to insert a formal pause for reflection and ask several questions:

  • If it is not the working diagnosis, what else could it be?

  • What are the most dangerous things it could be?

  • Is there any evidence that is at odds with the working diagnosis?

These questions can help expand the differential diagnosis to include possibilities that cognitive errors may have excluded and thus prompt clinicians to obtain further necessary information.
