I’d like to begin by thanking the Knight Foundation for the invitation to speak.
I’ve been asked to give a talk about decision-making. I’m going to focus today on bad decisions, on the causes and repercussions of failure. The failure I’ll be talking about is my own.
For those who do not know who I am, let me give you a brief summary. I am the author of a book on creativity that contained several fabricated Bob Dylan quotes. I committed plagiarism on my blog, taking, without credit or citation, an entire paragraph from the blog of Christian Jarrett. I also plagiarized from myself. I lied to a journalist named Michael Moynihan to cover up the Dylan fabrications.
My mistakes have caused deep pain to those I care about. I am constantly remembering all those people I’ve hurt and let down – friends, family, colleagues. My wife, my parents, my editors. I think about all the readers I’ve disappointed, people who paid good money for my book and now don’t want it on their shelves.
I have broken their trust. For that, I am profoundly sorry. It is my hope that, someday, my transgressions might be forgiven.
I could stop here. But I am convinced that unless I talk openly about what I’ve learned so far – unless I hold myself accountable in public – then the lessons will not last. I will lose the only consolation of my failure, which is the promise that I will not fail like this again. That I might, one day, find a way to fail better.
The lessons have arrived in phases. The first phase involved a literal reconstruction of my mistakes. I wanted to have an accounting, in my head, of how I fabricated those Dylan quotes. I wanted to understand the mechanics of every lapse, to relive all those errors that led to my disgrace. I wanted to understand so that I could explain it to people, so that I could explain it in a talk like this. So that I could say that I found the broken part and that part has a name. My arrogance. My desire for attention. My willingness to take shortcuts, provided I don’t think anyone else will notice. My carelessness, matched with an ability to excuse my carelessness away. My tendency to believe my own excuses.
But then, once I came up with this list of flaws, and once I began to understand how these flaws led to each of my mistakes, I realized that all of my explanations changed nothing. They cannot undo what I’ve done, not even a little. A confession is not a solution. It does not restore trust. Not the trust of others and not the trust of myself. What’s more, I came to see that my explanations were distracting me from the more important reality I need to deal with.
Because my flaws – these flaws that led to my failure – they are a basic part of me. They are as fundamental to my self as those other parts I’m not ashamed of. This is the phase that comes next, the phase I’m in now. It is the slow realization that all the apologies and regrets are just the beginning. That my harshest words will not fix me, that I cannot quickly become the person I need to be. It is finally understanding how hard it is to change.
Character, Joan Didion wrote, is the willingness to accept responsibility for one’s own life. For too long, I did not accept responsibility. And by not accepting responsibility – by pretending that all of my errors were accidents, that my carelessness was not a choice – I kept myself from getting better. I postponed the reckoning that was needed.
There is no secret to good decision-making. There is only the obvious truth: We either confront our mistakes and gain a little wisdom, or we don’t and remain fools.
What I’d like to talk about today is how I’m attempting to confront my mistakes in the future. I don’t have any wisdom, just a story that gives me a small measure of comfort. It’s a hard story to tell, because it’s not about me and my mistakes, at least not directly. But I want to tell it anyway, because it has helped me understand what I need to do next.
My story is about forensic science. At the time my career fell apart, I was working on an article about the mental flaws that plague forensic researchers. These are flaws that, if left unchecked, can lead to false matches and wrongful convictions. My article focused largely on the research of Dr. Itiel Dror, a cognitive neuroscientist at University College London. His most widely cited work involves fingerprint analysis. Dror has repeatedly shown that when fingerprints are unclear – and prints lifted from crime scenes are often full of ambiguity – it’s possible to get forensic scientists to alter their conclusions by telling them a different story about the prints. In one of his experiments, Dror showed that four out of six experienced forensic examiners reversed their verdicts when presented with different contextual information.[i] The evidence remained the same, but that didn’t matter. Dror could trick them into changing their minds.
A similar logic even applies to so-called “complex genetic evidence” – DNA samples that are badly degraded or that contain biological material from multiple individuals. (It’s estimated that about a quarter of DNA samples from crime scenes are complex.) In a recent study, Dror presented genetic data from a 2008 felony case that involved a gang rape in Georgia. The two forensic scientists assigned to the case concluded that, based on the DNA evidence, the three suspects could not be excluded from the crime scene.
Here’s where things get unsettling. Dror then sent the same DNA sample to seventeen additional forensic scientists. The only difference was that these scientists did the analysis blind – they weren’t told about the brutal gang rape or the prior criminal history of the defendants or that one of the suspects had agreed to a plea deal. Of these seventeen additional experts, only one concurred with the original conclusion. Meanwhile, twelve directly contradicted the finding presented during the trial, and four said the sample itself was insufficient.
It’s important to note, of course, that Dror isn’t alleging conscious bias. He doesn’t believe that these forensic scientists are intentionally switching their verdicts to bolster the prosecution. Rather, he argues that they are victims of their hidden brain, undone by flaws so deep-seated they don’t even notice their existence. The examiners think they are seeing it straight, observing the print as it is. But we see nothing straight.
It’s easy to brush these shortcomings aside, to insist, quite accurately, that the overwhelming majority of forensic testimony is valid and true.[ii]
But you know how it goes: close is not good enough. If we are not prepared to deal with our mistakes – if we try to hide them away, as I did – then even minor errors can become catastrophes.
This is what happened to the FBI forensics lab. Three days after the terrorist bombings in Madrid on March 11, 2004, the FBI received a fingerprint lifted from a plastic bag full of detonators found in a stolen van near one of the bombing sites. This print was immediately entered into the FBI fingerprint database, the largest biometric database in the world. A few hours later, the computer generated an initial list of 20 possible matches. After looking at all of the possibilities, an FBI forensic scientist concluded that one of these prints was an exact match. A second FBI examiner confirmed this conclusion. The print in question belonged to Brandon Mayfield, a lawyer living in Portland, Oregon.
The FBI then opened an intensive investigation of Mayfield, obtaining warrants for electronic surveillance and physical searches of his property. The detectives soon discovered that Mayfield was a Muslim, married to an Egyptian immigrant, and had represented a convicted terrorist in a child custody dispute. On May 6, the FBI arrested Mayfield as a “material witness” in the Madrid bombings. Based solely on this fingerprint match – there was no other corroborating evidence – Mayfield was sent to the Multnomah County Detention Center, where he was placed in solitary confinement and kept in his cell for 22 hours a day.[iii]
But the scientists were wrong. Mayfield’s print was not a match. In fact, it wasn’t even close. In retrospect, the certainty of the FBI examiners seems hard to understand. For one thing, the entire upper left quadrant of the original print failed to match Mayfield’s fingerprint. What’s worse, the print from the crime scene was taken from a right middle finger, even though it was matched to a print from Mayfield’s left index finger. It’s for these reasons that the Spanish National Police advised the FBI, nearly six weeks before Mayfield was formally exonerated, that he could not be linked to the crime scene, that their match was a mistake. Needless to say, the FBI ignored this warning.
In the wake of the Mayfield failure, the FBI had several options. They could have denied the systematic nature of the problem, insisting that the Mayfield case was a mere aberration.
Or perhaps the FBI could have acknowledged the remote possibility of error and decided that a little education was more than enough. Maybe Dror could have given a lecture on cognitive bias to the Bureau scientists.
This is a tempting solution. I have been tempted by this solution, by the possibility that if I simply research the psychology of deceit, that if I investigate the neuroscience of broken trust, then I can find a way to fix myself, that the abstract knowledge will be some kind of cure. But such knowledge is not enough. I know this from personal experience.
The month before I resigned from my job, I conducted an interview with Dan Ariely, a behavioral economist at Duke. We were talking about his excellent new book on dishonesty. The essential premise of the book is that tiny falsehoods are everywhere; the human mind is a confabulation machine.
Here’s a typical experiment that Ariely conducted on unsuspecting MIT students. He gave the students a series of puzzles and told them that they would get 50 cents for each correct answer. One group of students was randomly assigned to the control condition, in which it was impossible to cheat. The second group, in contrast, was allowed to check their own work and then, after shredding the answer sheet, report their results.
What did Ariely find? While those in the control condition solved, on average, four of the problems, those who graded their own work reported six correct solutions. Furthermore, this spike in scoring did not result from a few bad apples. Instead, the improved performance was the result, he writes, of “lots of people who cheated by just a little bit.”
I believed in Ariely’s hypothesis, which is that it’s all too easy to shade the truth, to report the wrong answers, to find yourself engaged in the dirty business of rationalization. During our conversation, I asked Ariely several questions about his dishonest subjects, always using the reassuring detachment of the third person. It never occurred to me that the mistakes he was describing were about to become my own.
The self-blindness on display here is emblematic of a larger pattern, a consistent asymmetry in the ways in which I noticed error. I never had a problem contemplating the flaws and bad decisions of others. Oh, the comedy of human folly! What ridiculous people these other people are.
But I was totally incapable of applying this same standard to my own life.
My failures were my fault alone. But I’ve come to believe that, if I’m going to regain some semblance of self-respect, then I need the help of others. I need my critics to tell me what I’ve gotten wrong, if only so that I can show myself I’m able to listen. That is the test that matters – not the absence of error, but a willingness to deal with it.
And this leads me back to the FBI. After being prodded by the Justice Department, the FBI recognized that apologies, re-training, even a big legal settlement, were not enough. If the Bureau was going to prevent another Mayfield-type failure from occurring, then it needed to fundamentally reform its scientific culture.

These reforms – the reckoning with failure – are why I’m still interested in this story. The changes give me a little hope.
The most important reforms at the FBI have to do with the ways in which disagreements and contradictions – those troubling signals that someone, somewhere, might have made a mistake – are dealt with by the lab. While such disagreements were routinely dismissed before Mayfield – two forensic scientists within the FBI had doubts about the original Mayfield match, but their doubts were never taken seriously – the new standard operating procedures require that all “technical disagreements” among forensic scientists, and between the FBI and other law-enforcement agencies, be fully acknowledged. The scientists are supposed to sit down and calmly discuss the inconsistencies in their verdicts, going over each whorl of skin and what it might mean. Their differences are documented in writing and then passed along to the Unit Chief. If the disagreement cannot be resolved, it triggers yet another analysis, done this time by an independent examiner. While the pre-Mayfield culture at the FBI rarely encountered disagreements – verifiers almost always verified the original conclusion – the reforms have done a lot to change that. According to a 2011 report from the Office of the Inspector General in the Justice Department, forensic scientists are no longer afraid to contradict each other. They know that the only way to avoid big failures is to consider every little one.[iv]
There’s a phrase that appears again and again in the Justice Department report: standard operating procedure. The investigators spend many pages reviewing these procedures in tedious detail: how forensic scientists are supposed to go over each ridge and in what order, what the procedures are for dealing with uncertainty or disagreement, even how analysts are supposed to document their evidence in the case files. When I first read this report nearly a year ago, all this talk of standard operating procedures struck me as bureaucratic nonsense. The failures of forensics reflect deep-seated biases, fundamental cognitive weaknesses. How could a rewritten manual possibly help?
But you know what? I’ve come to appreciate this fixation on standard operating procedures.
That is how, one day, I will restore a measure of the trust that I have lost. Not with the arrangement of words, not with the apology, but with the commitment to a set of decent rules. To not have these procedures and processes in place is to expose myself to the possibility of indifference. It is to slip down a slope and not even notice.
There is a wonderful section in Charles Darwin’s autobiography where he writes about his “golden rule.” The rule is simple: Whenever Darwin encountered a “published fact” or “new observation” that contradicted one of his beliefs, he forced himself to “make a memorandum of it without fail and at once.” Why did Darwin do this? Because he had “found by experience that such facts and thoughts” – those inconvenient ideas – “were far more apt to escape from the memory than favorable ones.”
This really is the golden rule. It begins with a recognition of inherent weakness, but contains this weakness with a conscious habit, something that Darwin learned to do “without fail.” It is the recognition that character requires constant vigilance, that the moment we take our good decisions for granted is also the moment we expose ourselves to the possibility of making some very bad ones.
What I clearly need is a new list of rules, a stricter set of standard operating procedures. If I’m lucky enough to write again, then whatever I write will be fact-checked and fully footnoted. It doesn’t matter if it’s a book or an article or the text for a speech like this one. Every conversation with a subject will be tape-recorded and transcribed. If the subject would like a copy of their transcript, I will provide it. There is, of course, nothing innovative about these procedures. The vast majority of journalists don’t need to be shamed into following them. But I did, which is why I also need to say them out loud.
Such a writing process will take a discipline I don’t yet have, but that’s why there are standard operating procedures. They are there for those days when I’m not strong enough to look at my errors, when I’d rather pretend they don’t exist. I need the rules because I know that simply knowing is not enough. That listing my failures or hearing about the failures of others won’t prevent me from doing the same thing. These procedures are my fail-safe.
Designers refer to this sort of rule as a forcing function. These functions are everywhere and they keep us from doing all sorts of stupid things. Just think of your car. There is, for instance, the reverse lockout, which prevents us from throwing a moving car into reverse and accidentally ripping apart the transmission. Or the annoying chime that reminds us to put on our seatbelt or take the keys out of the ignition. By forcing drivers to notice their mistakes and bad decisions, or by simply making these mistakes impossible, forcing functions shrink the scope of error. We still might crash the car. But at least we’ll be wearing a seat belt.
In the past, I have quoted a line from the physicist Niels Bohr. An expert, he said, is a man who has made all the mistakes which can be made in a very narrow field. Bohr is right: we learn how to get it right by getting it wrong again and again.
But here’s the crucial addendum, which I failed to appreciate: screwing up is not enough. Because I certainly made lots of mistakes – I just tried not to pay attention to them. I did my best to look the other way.
And that’s why I need my new standard procedures. They are a forcing function, forcing me to deal with my own bad decisions. Writing about science, about the hard struggle for truth, has always been a profound privilege. When I lost the trust of readers, I lost that privilege.
These rules are my attempt to make sure that never happens again. If nothing else, my new procedures are a mark of my mistakes, a reminder that whatever I do next will be shadowed by what I’ve done.
There are days when that strikes me as a tragic fact. And then there are days when I see it as a necessary burden, when I can imagine my failure as a kind of protection, like an antibody in the bloodstream after a disease.
I’d like to end with a quote from Bob Dylan, one he actually said: “She knows there’s no success like failure and that failure’s no success at all.” I used to believe that this verse was inherently inscrutable, just a hollow riddle. But now I see that the words are literal; there is no riddle at all. Because success does require failure. It requires that we struggle and screw up and keep going. That we learn from what we cannot do well.
But that is a cliché. The poetry of Dylan’s line exists in the inversion, for even as he insists on the necessity of failure, he acknowledges that every failure is still a failing. It is a lapse, a shortcoming, a regret that wakes us up in the darkest parts of the night. The power of the verse is inseparable from this honesty, for Dylan manages to compress a brutal fact of life – failure is both necessary and terrible – into a few seconds of mournful singing. He is speaking a difficult truth, which is really the only kind.
I have learned a difficult truth about myself. I have learned about parts of me that I tried for too long not to see. But entangled with that truth is the possibility of improvement. Not redemption, not forgiveness. Just the mere possibility of improvement. The hope that, one day, when I tell my young daughter the same story I’ve just told you, I will be a better person because of it. More humble. More careful. Less tempted by shortcuts and my own excuses.
What I will tell my daughter is that my failure was painful, but that the pain had a purpose. The pain showed me who I was, and how I needed to change.