I’ve had one true crank on this blog. He jumped into the comments on this post with mathematical gibberish he claimed disproved relativity. Another time I saw a crank letter written to a researcher at JPL who worked on dark matter. This crank even provided a little mechanical apparatus intended to demonstrate the existence of dark matter. It consisted of a rubber or nylon sheet stretched over a wire frame; you were supposed to roll a marble around on it.
It’s kind of surprising that these cranks fit so well with the descriptions of many others in Martin Gardner’s Fads and Fallacies in the Name of Science. Half a century after Gardner wrote his book, neither cranks nor the belief in what they have to say has changed much.
I picked up this book after Douglas Hofstadter mentioned it in an article reprinted in Scientific American after Gardner’s recent death. It’s essentially descriptive, spending surprisingly (and refreshingly) little time refuting crank theories of physics and medicine, and instead mostly detailing them. Gardner does, of course, refute each crank theory, but his most important contribution is to collect enough of them that cranks begin to look similar. (You can read Gardner’s generalizations about cranks in the Hofstadter article, or in chapter 1 of the book.)
Another surprise was that cranks are not just weirdos shouting loudly on obscure corners of the internet (ahem). Many cranks were fairly normal, even learned and respected, outside of their crankery. A surprising array of famous, respected people bought into and campaigned for crank theories. Upton Sinclair recurs throughout the book, advocating a number of useless medical and dietary systems. Some other delusional supporters or even creators of crank ideas include Aldous Huxley, Clifton Fadiman, Oliver Heaviside, Walt Whitman, Arthur Conan Doyle, William James, H. G. Wells, and Jesus (the last one added by me; the others are from Gardner. However, many of Gardner’s crank theories are motivated by proving or justifying religious claims).
It seems that as you cross over into the realm of crankery, you begin to believe your discovery has more and more power and wider and wider applicability. Medical cranks, for example, rarely believe they have a cure for cervical cancer. They think they have a cure for everything. Sometimes they even branch out and extend their theory of physiology to explain physics.
Crankery is dangerous, because in some ways it’s difficult for a layman to see the difference between crank science and real science. In crank science, the observations frequently go against the crank’s theory. The crank then comes up with excuses for why this is so (read Gardner’s chapter on Dr. Joseph Banks Rhine’s work on ESP for an especially clear example). But you can find scientists doing the same thing! A chemist’s reaction doesn’t come out right, so he assumes it was contaminated. A particle physicist doesn’t see the effect he was looking for, so he assumes it occurs at just slightly higher energy. How can we tell the difference between honest excuses – those that truly identify mistakes in the experimental conditions – and dishonest ones – those made by a researcher who would find an excuse under any circumstances? In recent years I’ve heard from time to time about new attempts to publish scientists’ negative results and to make their complete lab processes and all data openly available. Both efforts should help distinguish honest scientists from cranks.
But another problem with the crank mindset is that there’s no sharp dividing line. Science aside, I’ve read a bit about training distance runners, so I’ll use that here. One clear crank is Percy Cerutty, a coach who demanded his runners carry spears and “run like the primitive man”, advocated strange diets, and in general believed, as cranks do, that he had stumbled onto secrets no one else knew. Eventually, his runners left him. A more marginal case is Arthur Lydiard, a coach who created a fairly rigid, systematized training system and then advocated it as the best possible. His system was based on trial and error in his early days of coaching: he tried a few different things and then stuck with what seemed to work best. But he began to believe that all his advice was better, stronger, and more iron-clad than it was. He also began to think his general ideas applied not just to running, but to all athletic endeavors (shot put, rugby, and rowing come to mind). He’s an in-between crank: he did hold himself accountable to the results of his methods, and he did coach Olympic champions, but he also lost touch with reality. (Lydiard still has a large following of distance runners today, many of whom would be incensed if they read this summary.)
Modern coaches, too, tend to believe in their methods beyond the level their results support, and babble on endlessly about aspects of human physiology that are not as well-understood as they indicate. But the point is that they do this to varying degrees, with coaches ranging widely from true cranks to rational, down-to-earth people with a healthy dose of skepticism towards even their own practices and a realistic viewpoint on the success and failure of their athletes.
I have frequently found myself buying into crank athletic ideas, believing, for example, that all my injuries were due exclusively to running on hard roads (as opposed to trails or grass), though I had no data to support the belief. After reading scores of books and hundreds of articles, I now mostly believe that I’m not very sure about anything regarding training distance runners.
Surely, there is a crank continuum in science as well. On one end is an ideal scientist who (perhaps) evaluates all new evidence with a perfectly rational Bayesian approach, drawing conclusions only to the extent warranted by the evidence (and their prior beliefs). But scientists, even good ones, don’t all do that. Once in a while they begin to believe in their own theories even as the evidence starts to pile up against them. The outcomes they want to see affect the results of their experiments, or they choose not to publish results they don’t like. Their error bars grow just large enough that the data stays consistent.
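The ideal Bayesian update mentioned above is easy to make concrete. Here’s a toy sketch (the function and all the numbers are mine, purely for illustration, not anything from Gardner’s book): a single weakly favorable experiment barely budges a skeptical prior, which is roughly how an evidence-warranted conclusion ought to behave.

```python
# Toy Bayesian update: how much should one experiment shift belief in a theory?
# All priors and likelihoods below are invented for illustration.

def update(prior, p_data_if_true, p_data_if_false):
    """Return the posterior P(theory | data) via Bayes' rule."""
    evidence = prior * p_data_if_true + (1 - prior) * p_data_if_false
    return prior * p_data_if_true / evidence

# A skeptical 1% prior in a crank theory, and an experiment twice as likely
# under the theory as not (0.8 vs. 0.4).
belief = update(0.01, p_data_if_true=0.8, p_data_if_false=0.4)
print(round(belief, 3))  # the posterior stays small: one result isn't enough
```

The point of the toy numbers is that a rational updater needs many consistent results to overcome a low prior, whereas the crank pattern is to jump straight to certainty from one.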
Usually it’s not hard to spot a crank. And, as Gardner points out in his book, the existence of some intermediate cases doesn’t mean that most cases aren’t clear-cut. But I’m glad I read about what cranks do and how they justify their delusions, because I don’t have to look long or hard to see hints of the same behavior in myself.