Even when in possession of good and useful data, humans behave irrationally. City government is itself given to irrationality, particularly when politics is involved. Given that the purpose of this series (here and here, so far) is to explore how best to gather, analyze, and apply data to city government, what to do about this persistent irrationality?
Perhaps the best start is simply to understand the sources and nature of the irrationality in question. In a prior note, I referred to Nalbandian’s proposed “constellations of logic” for elected officials and city staff. In brief, the currency of elected officials is narrative and the currency of city staff is data. I refer you to the fuller description in the prior note; I have found Nalbandian’s discussion helpful.
But there is more to the story. Like many, I have been reading Kahneman and Tversky’s work on decision-making, by way of Thinking, Fast and Slow and Michael Lewis’s The Undoing Project. Recently I pulled their classic 1974 article, “Judgment Under Uncertainty: Heuristics and Biases.” There, Kahneman and Tversky introduced three heuristics and their associated biases: representativeness, availability, and anchoring. In this note, I sketch how those biases apply to city government.
If you are not familiar with Kahneman and Tversky, an initial note. A “heuristic” is a mental shortcut. When faced with a complex problem, humans apply such a shortcut to make a decision rather than fully unravel the complexity: “When you come to a fork in the road, take it.” Such shortcuts can be quite useful. But they can also lead to costly errors. The heuristic is the possibly effective shortcut; the bias is the possibly resulting error.
The really interesting thing, as Kahneman and Tversky tell it, is that we use these heuristics whether we want to or not. They are an embedded part of our thinking. As such we are inevitably led to error. We are predictably irrational.
Representativeness
The first heuristic is representativeness. I have struggled with a concise and clear definition, but it eludes me. How about this: we exhibit the representativeness bias when we judge the probability that something belongs to a category by the degree to which it resembles, or is “representative” of, that category. It becomes much clearer with an example:
[C]onsider an individual who has been described by a former neighbor as follows: “Steve is very shy and withdrawn, invariably helpful, but with little interest in people, or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” How do people assess the probability that Steve is engaged in a particular occupation from a list of possibilities (for example, farmer, salesman, airline pilot, librarian, or physician)?
Steve is representative of the occupation of librarian, and we therefore find it more likely that Steve is a librarian. But there are many more farmers than librarians, so it is actually more likely that he is a farmer, despite the charm of matching the description to the prediction.
A manifestation of the representativeness bias applicable to local government is insensitivity to sample size. Kahneman and Tversky describe a bag of balls, 2/3 of one color and 1/3 of another. In sample one, the subject draws 5 balls and finds 4 red and 1 white. In sample two, the subject draws 20 balls and finds 12 red and 8 white. In which case is it more probable that the bag contains 2/3 red balls and 1/3 white balls? According to the math, the posterior odds in favor of a mostly-red bag are 8:1 for the first sample and 16:1 for the second. But almost invariably we guess the first sample to be the stronger evidence of a mostly-red bag. The error is in undervaluing the sample size; a sample of 20 is much more informative than a sample of 5. Why do we make the error? Because an 80% draw of red looks like a better bet than a 60% draw; the first sample is more representative.
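If you want to see where the 8:1 and 16:1 figures come from, they fall out of a simple likelihood-ratio calculation. Here is a minimal sketch in Python (the function and variable names are my own illustration, not anything from Kahneman and Tversky): with a 1:1 prior over the two possible bags, the posterior odds equal the ratio of the likelihoods of the observed draw.

```python
from fractions import Fraction

def posterior_odds(red: int, white: int) -> Fraction:
    """Odds that the bag is 2/3 red (vs. 2/3 white), given an observed draw.

    With a 1:1 prior, the posterior odds equal the likelihood ratio:
    (2/3)^red * (1/3)^white over (1/3)^red * (2/3)^white, i.e. 2^(red - white).
    """
    likelihood_red_bag = Fraction(2, 3) ** red * Fraction(1, 3) ** white
    likelihood_white_bag = Fraction(1, 3) ** red * Fraction(2, 3) ** white
    return likelihood_red_bag / likelihood_white_bag

print(posterior_odds(4, 1))   # 8  -> 8:1 for the first sample
print(posterior_odds(12, 8))  # 16 -> 16:1 for the second sample
```

The arithmetic makes the point plainly: the larger sample is the stronger evidence, even though its proportion of red is lower.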
Imagine, then, an elected official in a city of 100,000 who reasons as follows: “I talked to 10 people today, and 7 of them have experienced a missed garbage pickup in the past six months.” Surely that is compelling evidence of a problem with the sanitation department, right? Actually, no. The sample size is far too small to draw any conclusions. But good luck overcoming the story with facts.
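To put a rough number on how little those ten conversations establish, here is a small sketch using a standard Wilson score interval (my own illustration; the survey numbers are the hypothetical ones above, and the helper function is mine):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion (Wilson score)."""
    p_hat = successes / n
    denom = 1 + z ** 2 / n
    center = (p_hat + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return center - half_width, center + half_width

low, high = wilson_interval(7, 10)
print(f"Plausible true rate of missed pickups: roughly {low:.0%} to {high:.0%}")
# roughly 40% to 89% -- consistent with a crisis or with a fairly ordinary service record
```

An interval that wide is compatible with nearly any state of affairs, which is the precise sense in which the sample is too small to support the story.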
Availability
The second heuristic is availability, which is easier to define. We assess “the probability of an event by the ease with which instances or occurrences can be brought to mind.” Kahneman and Tversky give this example: sample a random text in the English language. Will it contain more words that have the letter k in the first position (kitchen) or in the third position (awkward)? We guess the first position, because the set of words beginning with the letter k is more readily called to mind, more available. But in fact English text contains many more words with k in the third position.
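If you want to probe the claim yourself, a few lines of Python will count letter positions in whatever word list you have at hand. Two caveats: the path below is an assumption (a word list commonly shipped on Unix systems), and counting distinct dictionary entries is only a rough proxy for Kahneman and Tversky’s actual claim, which concerns word frequency in running text.

```python
# Count words with "k" in the first versus third position in a word list.
WORDLIST = "/usr/share/dict/words"  # assumed location; substitute any word list

first = 0
third = 0
with open(WORDLIST) as f:
    for line in f:
        word = line.strip().lower()
        if len(word) < 3:
            continue
        if word[0] == "k":
            first += 1
        if word[2] == "k":
            third += 1

print(f"k in first position: {first}")
print(f"k in third position: {third}")
```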
How might this play out in local government? Consider a city that has been hit by a severe snow storm in the past few years. When weather patterns suggest the possibility of another snow storm, city officials may grossly overestimate the likelihood or severity of the storm. Rather than consulting the meteorologically predicted probability and severity, officials will reach for the readily available story.
Availability particularly attaches to public engagement. Ask a city leader how many problems with trash pickup exist in the city. In the absence of systematic study, two anecdotal datasets exist. First, the vast number of conversations the city leader has had over the past weeks and months in which trash pickup was not discussed. Second, the six people who attended a public hearing on trash pickup and complained. The second set is more vivid and thus more available.
Anchoring
The third heuristic is anchoring, which observes that when arriving at an estimate involves adjusting from an initial value, the adjustment is typically insufficient, so the estimate stays close to that starting point, even if the initial value has absolutely no connection to the quantity being estimated:
In a demonstration of the anchoring effect, subjects were asked to estimate various quantities, stated in percentages (for example, the percentage of African countries in the United Nations). For each quantity, a number was determined by spinning a wheel of fortune in the subjects’ presence. The subjects were instructed to indicate whether the number was higher or lower than the value of the quantity, and then estimate the value of the quantity by moving upward or downward from the given number. Different groups were given different numbers for each quantity, and these arbitrary numbers had a marked effect on estimates. For example, the median estimates of the percentage of African countries in the United Nations were 25 and 45 for groups that received 10 and 65, respectively, as starting points.
The initial value, even a random value, creates inertia in the adjustment.
Anchoring might arise in city government in a variety of contexts. The most obvious is budgeting. The current budget creates a strong anchor for the next year. Zero-based budgeting is an attempt to counter the anchor, but pretending to ignore the anchor doesn’t make it go away. Existing tax rates and fees are also anchors. And there are far less rational anchors as well: employee head count, utilization of sick and vacation time, workplace injury rates, and so on. In each case, the existing figure largely determines what comes to be treated as normal.
What is Going on Here?
Why do we fall for such biases? It turns out that Nalbandian is onto something. We understand stories and symbols far better than we understand statistics. When faced with a complex question (how efficient and effective is our youth recreation program, say), we grow bewildered and overwhelmed. Then we tell a story to make sense of it all: “Well, I remember little Max, whose grades went way up when he joined the city after-school basketball league. Surely the program works.”
In that light, we just need to finesse Nalbandian’s analysis. Elected officials explicitly rely on stories. City staff members also rely on stories, perhaps to a lesser extent, but they say (or believe) that they don’t. The lesson, I think, is that becoming a data-driven city is harder than it sounds. Even when presented with lots of very good data, we default to the storytelling mode. And it appears that the best we can do is to be aware of this tendency.
But wait. The entire mechanism of governing and reporting on government exacerbates these biases. To gauge public sentiment, we hold public hearings and workshops, we monitor social media, we listen to the loudest voices. Even putting aside self-selection: small sample sizes, vivid descriptions, and suggested starting points all conspire to mislead us. Then the media reports on the issue by telling a human interest story. Which story yields more read-throughs: (1) “xx% of veterans receiving care at the local VA hospital reported timely and conscientious care,” or (2) “Dan thought that, after fighting for his country’s freedom, he would at least be able to see a doctor for his persistent headaches”? The storytelling method of decision analysis and decision making reinforces itself.
The expected next step in this note would be to say, hey, watch out for irrationality and story-telling, follow the data, thanks for reading. But I want to push back, at least a little. Data-based decision making is not necessarily always the best approach. In the snow storm example above, the recent storm not only offers an availability model, it also means that failure to prepare for the next storm would be politically disastrous. The storm might not be any more likely, but if it does happen and the city is not prepared, there will be hell to pay. A similar dynamic attaches to the six voices at the public hearing on trash pickup. What might be said about a city that ignores the concerns of six citizens who made the civic investment of attending a public hearing?
This is a narrow but important point. City government operates not just in the data-driven, rationally determined world. Cities are symbols. They have stories. Their citizens respond not only to effectiveness and efficiency, but also to meaningful displays of responsiveness and concern. Most of all, city leadership at its best has a vision, a story for the goals and aspirations of the future. Data might provide insight into the more effective methods of realizing that vision, but leaders must conceive the goal and inspire staff and citizens to pursue it. That, it seems, requires stories and symbols.
Thus I suggest that data-driven analysis should be valued but not venerated. The work of city government can be shaped, guided, and improved by using data and recognizing our biases. But we will still have stories to hear and tell.