Humanitarian and Non Governmental Organization Safety & Security
- Do no harm, do know harm - Since 2005 -
Risk assessors in various fields — from health and safety to insurance to business continuity — always end up with some variation of the “impact times probability” formula. Instead of “impact,” other formulations say “magnitude” or “consequence”; instead of “probability,” they sometimes say “frequency.” But everyone agrees that the technical and financial calculation of risk (as opposed to the cultural or psychological calculation) means multiplying two factors: how bad it is and how likely it is.
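The multiplication itself is trivial, which is part of the point: all the difficulty lives in the two inputs. A minimal sketch, with invented numbers, shows how a frequent minor loss and a rare major loss can come out identical:

```python
# Risk as magnitude times probability, the calculation common to
# health-and-safety, insurance, and business-continuity practice.
# All figures are invented for illustration.

def risk(magnitude: float, probability: float) -> float:
    """Technical/financial risk: expected magnitude of loss."""
    return magnitude * probability

# A frequent minor loss and a rare major loss can score identically.
frequent_minor = risk(magnitude=200.0, probability=0.5)      # 100.0
rare_major = risk(magnitude=800.0, probability=0.125)        # 100.0

print(frequent_minor == rare_major)  # True
```

The formula treats these two risks as interchangeable; the rest of the discussion is about why people, often reasonably, do not.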
As you know, this is easier said than done. Measuring probability is a methodological can of worms, especially for awful things that have never happened. (How do you even list all the awful things that have never happened, much less calculate their probability?) Even for chronic risks, there is often very high uncertainty surrounding a probability estimate — consider, for example, the probability that low-level emissions of dimethylmeatloaf contribute to the incidence of birth defects. But at least we know exactly what we’re trying to measure. Measuring risk magnitude (“impact” in your terms) is a conceptual and moral can of worms. What is the relative magnitude of the death of a human versus the extinction of an animal species? The death of a healthy child versus the death of a virtuoso violinist versus the death of a terrorist versus the death of an Alzheimer’s patient?
Nor is there universal agreement that risk magnitude and risk probability deserve equal weighting, as the formula suggests. Most people intuitively think that really horrific outcomes (the end of life on earth, say) are unacceptable even if they have commensurately low probabilities. We prefer a lower-magnitude higher-probability risk, even though the magnitude-times-probability product is the same. But at the other end of the distribution, we are also profoundly uncomfortable with sacrificial lambs: If we just let them kill this one person, the odds of a good outcome will improve markedly for the rest of us.
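One hypothetical way to express that intuition, which is an assumption of mine and not a formula from the text, is to raise magnitude to an exponent greater than one before multiplying, so catastrophic outcomes are penalized more than the plain product allows:

```python
# Illustrative only: alpha > 1 on magnitude is a hypothetical
# weighting, not a formula anyone in the text proposes. It breaks
# the tie between two risks whose plain products are equal.

def weighted_risk(magnitude: float, probability: float,
                  alpha: float = 1.5) -> float:
    """Score a risk with extra weight on magnitude (alpha is assumed)."""
    return probability * magnitude ** alpha

low_mag = weighted_risk(magnitude=100.0, probability=0.01)
high_mag = weighted_risk(magnitude=10_000.0, probability=0.0001)

# Plain magnitude-times-probability products are equal, but the
# weighted scores rank the high-magnitude risk as worse, matching
# the intuition that catastrophic outcomes feel unacceptable even
# at commensurately low probabilities.
print(low_mag < high_mag)  # True
```

Whether such a weighting is the right fix, and what exponent to use, is exactly the kind of moral question the plain formula cannot settle.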
Even when both magnitude and probability are well-established, people have very different responses to mathematically equivalent risks. Some of this is attributable to perceptual heuristics — some risks are more vivid and emotionally resonant than others, for example. But some of it is real judgment that risks deserve to be assessed by more standards than just their magnitude and probability. Most people know that they are likelier to die in a car accident than in a terrorist attack. Most people know that a million dollars spent on highway safety will reduce mortality more than a million dollars spent on homeland security. But compared to highway safety, terrorism isn’t just more vivid, more emotionally resonant, less familiar, more dreaded, and the rest. It is also more important — morally, politically, culturally. We are willing to spend more on it per life saved.
All these complexities, discontinuities, and inconsistencies are the venue of what I usually call outrage. In my “Risk = Hazard + Outrage” formula, my “hazard” is what you mean by “risk” — that is, magnitude times probability. “Outrage” is all the other considerations that make us assess some risks differently than others, including the considerations that make us right to do so. When I first coined the formula, I was working mostly on environmental controversies, and “outrage” nicely captured the mix of anger, righteous indignation, and worry that people felt about industrial pollution. It isn’t really the right word for some other sorts of risks. The same “outrage factors” explain why people are more attentive to West Nile Virus than to flu, notwithstanding the fact that flu is by far the bigger risk technically. But people are more fearful than angry about West Nile Virus. It feels off to call it “outrage.” Years ago, Sheldon Krimsky and Alonzo Plough wrote about “technical rationality” versus “cultural rationality.” That captures the distinction in a more universal way than “hazard” versus “outrage.” I tend to stick to my own formulation; it’s shorter and it’s mine.