## Posts Tagged ‘rationality’

### On the Height of a Field

January 1, 2013

This is a short story about belief and evidence, and it starts with the GPS watch I use when I go for a run. Here’s the plot of my elevation today:

It looks a little odd until I show you this map of the run:

Each bump on the elevation plot is one lap of the field. In the middle, I changed directions, giving the elevation chart an approximate mirror-image symmetry. (I don’t know what causes the aberrant spikes, but my friend reports seeing the same thing on his watch.)

According to the GPS data, the field is sloped, reading 260 feet near the center field wall and 245 feet near home plate. It’s insistent on this point, reiterating these numbers each time I do the run (except once, when the tracking data was clearly off, showing me running across parking lots and through nearby buildings). I disagreed, though. The field looked flat, not sloped at 3 degrees. I was disappointed to have found a systematic bias in the GPS data.

But every so often I thought of some minor consideration that affected my belief. I remembered that when I went biking, I often found that roads that look flat are actually uphill, as can be verified by changing directions and feeling how much easier it becomes to go a given pace. I Googled for the accuracy of GPS elevation data, and found that it’s only good to about 10 meters. But I didn’t care about absolute elevation, only change across the field, and I couldn’t find any answers on the accuracy of that. (Quora failed me.) I checked Google Earth, and it corroborated the GPS, saying the ground was 241 ft behind home plate and 259 in deep center field. But then I read that the GPS calibrated its elevation reading by comparing latitude/longitude coordinates with a database, and so may have been drawing from the same source as Google Earth.

People wouldn’t make a sloped baseball field, would they? That would dramatically change the way it plays, since with a 15-foot gain, what was once a solid home run becomes a catch on the warning track. Googling some more, I found that baseball fields can be pretty sloped; the requirements are fairly lax, and in fact they are typically sloped to allow drainage.

I was starting to doubt my initial judgment, and with this in mind, when I looked at the field, it made more and more sense that it’s sloped. Along the right field fence, there’s a short, steep hill leading up to the street. It’s about five feet high and at least a 30-degree slope. It’s completely unnatural, as if it exists because the field as a whole used to be considerably more sloped, but was dug out and flattened. The high edge of the field was then below street level, so there’s that short, steep hill leading up. And if the field was dug out and flattened, maybe they didn’t flatten it all the way. The entire campus is certainly sloped the same general direction as the GPS claimed for the field. It drops about 70 feet from north to south, and it’s frequently noticeable as you walk or bike around. There’s another field I run on with essentially the same deal, and I found that when I knew what to look for, I could indeed see the slope there.

Eventually, the speculation built up enough to warrant a little effort to make a measurement. I asked a wise man what to do, and he suggested I find a protractor, hang a string down to detect gravity, and sight from one side of the field to the other. I did so, expecting to feel the boldness of an impartial, truth-seeking scientific investigator as I strode across the grass. That wasn’t what I got at all.

First, I felt continuous fluctuations in my confidence. “I’m 60% confident I’ll find the field is sloped,” I told myself, then immediately changed it to 75, not wanting to be timid, then felt afraid of being wrong, and went back to 50. I’ve played The Calibration Game and learned what beliefs mean, and mostly what it’s done is give me the ability to not only be uncertain about things, but to be meta-uncertain as well – not sure just how uncertain I am, since I don’t want to be wrong about that!

Second, I felt conflicting desires. I couldn’t decide what I wanted the result to be. I wanted the field to be flat to validate my initial intuition, not the stupid GPS, but I also wanted the field to be sloped so I could prove to myself my ability to change my beliefs when the evidence comes in, even if it goes against my ego. (A strange side-effect of wanting to believe true things is that you find yourself wanting to do things not because they help you believe the truth, but because you perceive them to be the sort of things that truth-seekers would do.) I recalled a video I had seen years ago about Gravity Probe B, and the main thing I remembered from it was a scientist with long, gray hair and huge unblinking eyeballs explaining in perfect monotone that he didn’t have a desire for the experiment to confirm or refute general relativity; he only wanted it to show what reality was like.

On top of all this, there was the sense of irony at so much mental gymnastics over a triviality like the slope of a baseball field, and the self-consciousness at the absurdity of standing around in the cold pointing jerry-rigged protractors at things. So at last I crossed the field and lined up my protractor for the moment of truth.

It didn’t work. I had placed my shoes down on the grass as a target to sight, but from center field they were hidden behind the pitcher’s mound. I recrossed the field and adjusted them, and went back. I still couldn’t see the shoes; they were too small and hidden in the grass. I could see my backpack, though, so I sighted off that. But it still didn’t really work. I didn’t have a protractor on hand, so I had printed out the image of one from Wikipedia and stapled it to a piece of cardboard, but the cardboard wasn’t very flat, making sighting along it to good accuracy essentially impossible.

I scrapped that, and after a few days went to Walgreens and found a cheap plastic protractor and some twine that I used to tie on my water bottle as a plumb bob. Returning to the field, I finally found the device to be, well, marginal. Holding it up to my eye, it was impossible to focus along the entire top of the protractor at once, and difficult to establish unambiguous criteria for when the protractor was accurately aimed. I was also holding the entire thing up with my hands, and trying to keep the string in place between sighting along the protractor and moving my head around to get the reading.

Nonetheless, my reading came to 87 degrees from center field to home plate and 90 degrees from home plate back to center field. This three-degree difference seemed like pretty good confirmation of the GPS data. In a final attempt to confirm my readings, I repeated the experiment in a hallway outside my office, which I hope is essentially flat. It’s 90 strides long (and I’m about two strides tall), and I found 88 degrees from each side, roughly confirming that the protractor readings matched my expectations. (I’d have used the swimming pool, which I know is flat, but it’s closed at the moment.)
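As a sanity check on the numbers, the slope the GPS implies is just the arctangent of rise over run. The 15-foot rise comes from the GPS readings above; the 300-foot horizontal distance is my assumption about a typical home-plate-to-fence span, not a measurement:

```python
import math

# GPS readings from the run: 245 ft behind home plate,
# 260 ft near the center field wall.
rise_ft = 260 - 245

# Assumed home-plate-to-center-field distance; not measured.
run_ft = 300

slope_deg = math.degrees(math.atan(rise_ft / run_ft))
print(f"implied slope: {slope_deg:.1f} degrees")  # implied slope: 2.9 degrees
```

With a longer assumed distance the angle shrinks (400 ft gives about 2.1 degrees), so "sloped at 3 degrees" is roughly the upper end of what the GPS numbers support.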

I’m now strongly confident that the baseball field is sloped – something around 95% after considering all the points in this post. That’s enough that I don’t care to keep investigating further with better devices, unless maybe someone I know turns out to have one sitting around.

Still, there is some doubt. Couldn’t I have subconsciously adjusted my protractor to find what I expected? There were plenty of ways to mess it up. What if I had found no slope with the protractor? Would I have accepted it as settling the issue, or would I have been more likely to doubt my readings? It’s perfectly rational to doubt an instrument more when it gives results you don’t expect – you certainly shouldn’t trust a thermometer that says your temperature is 130 degrees – but it still feels intuitively a bit wrong to say the protractor is more likely to be a good tool when it confirms what I already suspected.

The story of how belief is supposed to work is that for each bit of evidence, you consider its likelihood under each of the various hypotheses; multiplying these likelihoods together gives your final result, which tells you exactly how confident you should be. If I can estimate how likely it is for Google Earth and my GPS to corroborate each other given that they are wrong, and how likely it is given that they are right, and then answer the same question for every other bit of evidence available to me, I don’t need to estimate my final beliefs – I calculate them. But even in this simple testbed of a sloped baseball field, I could feel my biases coming to bear on what evidence I considered, and on how strong and relevant that evidence seemed to me. The more I believed the baseball field was sloped, the more relevant (higher likelihood ratio) the short steep hill on the side seemed, and the less relevant my intuition’s claim that the field was flat. The field even began looking more sloped to me as time went on, and I sometimes thought I could feel the slope as I ran, even though I never had before.
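The multiplication described above fits in a few lines of code. The likelihood numbers here are invented for illustration – they're the kind of estimates I'd have had to make, not anything I measured:

```python
# A minimal sketch of the updating rule, with made-up likelihoods.
# Two hypotheses: the field is sloped, or it is flat.
prior = {"sloped": 0.5, "flat": 0.5}

# For each piece of evidence, an assumed probability of seeing it
# under each hypothesis. These numbers are illustrative, not measured.
evidence_likelihoods = [
    {"sloped": 0.9, "flat": 0.3},  # GPS reports a 15 ft rise
    {"sloped": 0.9, "flat": 0.4},  # Google Earth corroborates it
    {"sloped": 0.2, "flat": 0.6},  # the field looks flat to my eye
    {"sloped": 0.8, "flat": 0.1},  # protractor readings differ by 3 degrees
]

posterior = dict(prior)
for lik in evidence_likelihoods:
    # Multiply in the likelihood of this evidence under each hypothesis...
    for h in posterior:
        posterior[h] *= lik[h]
    # ...and renormalize so the probabilities sum to 1.
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total

print(posterior)  # P(sloped) comes out around 0.95 with these numbers
```

The point of the paragraph above is exactly that the hard part isn't this loop – it's that the likelihood numbers you feed into it shift under you as your belief changes.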

That’s what I was interested in here. I wanted to know more about the way my feelings and beliefs interacted with the evidence and with my methods of collecting it. It is common knowledge that people are likely to find what they’re looking for whatever the facts, but what does it feel like when you’re in the middle of doing this, and can recognizing that feeling lead you to stop?

May 14, 2010

Do you think rationally about all the opinions you read, carefully considering why you agree or disagree with any given viewpoint, or is your method for discourse more like the way you sift through a hundred crappy photos of yourself to find the kinda-hot-but-not-too-slutty one that will be your Facebook profile picture? Oh yes, I like this one. All the others can go now.

It’s been a long time since I last read the internet with you, so it’s time to do that again. Hopefully you’ll be entertained, and also question the way you think about facts and reality. Although this is a links dump, incredibly none of it involves cats or pornography.

Via Swans on Tea, Feynman discusses, in a tangential manner, what magnetism is.

When I launch into an explanation, my goal is something along the lines of, “I’m going to say something to you, and when I’m done, you’ll understand it the way I do.” My guess is that most people implicitly think about explanation the same way. An explainer says some words, possibly along with drawing pictures or doing a demonstration, and the explainee watches, listens, and understands.

We expect some confusion and some back-and-forth questions. Also, the scope of what is explained may be very small, so that the explainer perhaps knows a lot more details, but despite these caveats I think this “I will give you my knowledge” approach is the subtext for most of our explanations.

The strange thing is that if you ask people directly what explanation is, they do not believe this. They believe that explanations are highly context-dependent, imperfect, and limited in scope. (“I don’t expect the explainee to get everything. The explanation just gives the general idea, and they’ll work out the details in due time…”) But when I watch two people engaged in an explainer/explainee interaction, I get the feeling that they will consider the exchange a failure (or at least not wholly successful) if the explainee ultimately does not understand the subject the way the explainer does. Even the drastically different approaches people take when explaining something to an adult versus a child seem based on the principle that an effective explanation must be worded to suit the audience, but the explainer still hopes to be completely understood. They just need to find the right way to say things.

Feynman points out that this sort of explanation is impossible because knowledge doesn’t consist of tidbits. Feynman cannot take his knowledge of magnetism and “dumb it down” in any sort of accurate way, because that knowledge is couched in the context of everything else he knows about nature. Feynman’s understanding of magnetic forces was much more thorough than the interviewer’s because Feynman understood the fundamental forces involved; he knew all about quantum theory and the interaction of light with matter, and had a feeling for what things were and were not already known and explained by physical models. He also had practical experience with magnets, and had taught students about magnetism and investigated all sorts of magnetic phenomena. But in addition to this knowledge of the theories and models of magnetism, Feynman’s understanding was tempered by his abilities. What separates the scientist from the layperson is not their knowledge of science, but their ability to mathematically manipulate the model, or even create a new one, to derive understanding.

If Feynman were still around and he sat down to tutor me in all aspects of electromagnetism, we could probably make a lot of progress. With enough time, he could teach me everything he knew. But I still wouldn’t understand it the way he did.

With that, let’s look at an explanation I particularly liked:

We Recommend a Singular Value Decomposition
David Austin at the American Mathematical Society.

This is an explanation of the singular value decomposition, a basic tool in linear algebra. I remember learning about it while studying linear algebra, but I didn’t understand it very clearly. I thought about it only formally, and I kept confusing the idea of what it was with the proof that it exists. As a result, if I were asked to explain singular value decompositions to someone else, I’d have first gone back to my linear algebra book to review, then pretty much repeated what it said there, trying desperately to do things just differently enough that I wasn’t copying.

I got the feeling that Austin did the opposite in writing this article. He did not sit down and say, “Okay, what are all the things I know about SVD and all the good examples of it, and then how can I condense them all and make it appropriate to the audience?”

Instead, it seemed like he said, “I happen to know a couple of good pictures that make this clear in the case of a 2×2 matrix. Based on that, what sort of presentation of the SVD makes sense? What level of detail would muddy the presentation? If I change the order I present the ideas, how will that change the reader’s perception of the SVD’s theoretical and practical importance? What can be left out, and how can I get straight to the heart of the matter and communicate that first?”

Very quickly in the essay, Austin gets to this picture:

which illustrates the singular value decomposition of

$\left[ \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right]$.

There are only a few short paragraphs before that, but already we’ve walked through a story that motivates it. Austin gives three examples showing how we can understand linear transformations visually, and by the time we finish the third, it was apparent to me that a singular value decomposition is a logical extension of the linear algebra I was already familiar with. He had me hooked for the rest of the article.

After giving his example, Austin builds directly to the equation

$M = U \Sigma V^T$

which illustrates why it’s a “decomposition”, and what each part of the decomposition means. Only after giving a fairly complete explanation of what a singular value decomposition is did he start to go into how to find it and how to apply it.
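If you want to see the decomposition concretely for Austin's example matrix, it can be checked in a few lines with numpy (assuming you have numpy handy):

```python
import numpy as np

# Austin's example matrix.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# numpy returns U, the singular values, and V^T directly.
U, S, Vt = np.linalg.svd(M)

print("singular values:", S)  # approximately [1.618, 0.618]

# Reassembling U * Sigma * V^T recovers M.
print(np.allclose(U @ np.diag(S) @ Vt, M))  # True
```

(Pleasingly, the singular values of this particular matrix turn out to be the golden ratio and its reciprocal.)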

Lots of math or physics writing I see doesn’t take this approach. Instead, the first time I see a particular equation is at the end of its derivation, which means the whole derivation leading up to it seems unmotivated. Austin doesn’t even include the derivations. There’s enough detail that I could work through the missing parts by myself, ultimately understanding them better than I would if each step were spelled out for me. For example, he writes

In other words, the function $|M x|$ on the unit circle has a maximum at $v_1$ and a minimum at $v_2$. This reduces the problem to a rather standard calculus problem in which we wish to optimize a function over the unit circle. It turns out that the critical points of this function occur at the eigenvectors of the matrix $M^TM$.

That’s more effective for me than actually going through the details of the calculus problem. It points me in the right direction to go over it when I’m interested, but in the meantime lets me continue on to the rest of the good stuff.
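The detail Austin omits in that quoted passage is also easy to check numerically: the right singular vectors of $M$ are eigenvectors of $M^TM$, and the singular values are the square roots of its eigenvalues. A quick numpy sketch, again using his 2×2 example:

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [0.0, 1.0]])
U, S, Vt = np.linalg.svd(M)

# Eigendecomposition of the symmetric matrix M^T M.
eigvals, eigvecs = np.linalg.eigh(M.T @ M)

# The singular values are the square roots of M^T M's eigenvalues.
# (eigh returns them in ascending order, svd in descending order.)
print(np.allclose(np.sqrt(eigvals[::-1]), S))  # True

# Each right singular vector v satisfies (M^T M) v = sigma^2 v,
# i.e. it is an eigenvector of M^T M.
for sigma, v in zip(S, Vt):
    print(np.allclose(M.T @ M @ v, sigma**2 * v))  # True, twice
```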

By reorganizing the material, omitting details, and (literally) illustrating his concepts, Austin finally got me to pay attention to something I ostensibly learned years ago.

Next, I’d like to illustrate my lack of creativity by returning to Feynman, this time his Caltech commencement address from 1974.

Cargo Cult Science

Feynman identifies a problem:

In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he’s the controller—and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.

and suggests a solution:

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

For an example of awful science, take a look at a story that made it to Slashdot a little while ago, Scientists Postulate Extinct Hominid with 150 IQ.

The Slashdot summary says,

Neuroscientists Gary Lynch and Richard Granger have an interesting article in Discover Magazine about the Boskops, an extinct hominid that had big eyes, child-like faces, and forebrains roughly 50% larger than modern man indicating they may have had an average intelligence of around 150, making them geniuses among Homo sapiens. The combination of a large cranium and immature face would look decidedly unusual to modern eyes, but not entirely unfamiliar. Such faces peer out from the covers of countless science fiction books and are often attached to ‘alien abductors’ in movies.

Slashdot is known for being strong on computer news, not for its science coverage, but it still surprises me that such a ridiculous bit of claptrap got so much attention. A few commenters point out how absurd it is to conclude that an entire race of people had an average IQ of 150, but there is so much white noise in the comments of any large online community that most people don’t read them, probably including the people who write the comments in the first place.

And even if Slashdot will publish sensational cargo cult stories like this, what business does it have in Discover Magazine, which I don’t read, but had assumed was fairly reputable? Discover published this quote about the Boskops:

Where your memory of a walk down a Parisian street may include the mental visual image of the street vendor, the bistro, and the charming little church, the Boskop may also have had the music coming from the bistro, the conversations from other strollers, and the peculiar window over the door of the church. Alas, if only the Boskop had had the chance to stroll a Parisian boulevard!

First, that doesn’t sound like high intelligence to me. It sounds like autism. Second, how the fuck would you know that from looking at some skulls? Such conclusions obviously have no place in the science-with-integrity Feynman described.

20 years ago, if I had read that story I would not have gone to the effort to follow up on it. (For one thing, I’d have been five years old, and so instead of doing some research I would have drunk a juice box, gone outside to play, and pooped myself.) Now we have the internet, and follow-up is very easy. Fortunately, high up on the Google results is John Hawks’ article, The “Amazing” Boskops. Hawks, summarizing his review of the literature on the Boskops, writes,

…in fact, what happened is that a small set of large crania were taken from a much larger sample of varied crania, and given the name, “Boskopoid.” This selection was initially done almost without any regard for archaeological or cultural associations — any old, large skull was a “Boskop”. Later, when a more systematic inventory of archaeological associations was entered into evidence, it became clear that the “Boskop race” was entirely a figment of anthropologists’ imaginations. Instead, the MSA-to-LSA population of South Africa had a varied array of features, within the last 20,000 years trending toward those present in historic southern African peoples.

Hawks then followed up with more detail later.

The good news is that the Boskop nonsense will die out because it’s wrong, and our system works well enough that things that are wrong do eventually die out.

In that little vignette, a big magazine and a published book peddled nonsense, and a blog debunked it. It’s not always easy to determine the credibility of a source, and its reputation can be misleading. Blogs have a terrible reputation in general, while some people seem to believe that if it’s in a book, it must be true. (Unfortunately people take this to the extreme with one particularly poorly-documented and self-contradictory bestselling book!)

A more difficult, stickier issue is anthropogenic global warming. There is little doubt in my mind that anthropogenic global warming is real, but unlike with evolution, I do not believe that because I have looked at the scientific evidence and thought about the arguments for and against. I haven’t examined the methods of collecting raw data or the factors accounted for in climate models. I don’t even know how accurate those models’ predictions are. I take it all on the word of climate scientists and a cursory review of their reports. I do not see this as a problem or a failure of my rationality. I do withhold judgment on whether global warming is as important an issue as, say, pollution or direct destruction of natural resources, but I do not hesitate to state that I think it is very likely that if humans continue on the way they’ve been going, the Earth will warm, with severe consequences.

What does this have to do with cargo cult science? Cargo cult science is the reason I believe the climate scientists rather than the climate skeptics. My goal here isn’t to convince you one way or another about climate science, or to link to the best-reasoned discussions about it, or to give an accurate cross-section of the blogosphere’s thinking process. What follows are various opinions on anthropogenic global warming, and my hope is that reading them for the underlying decision-making process is an instructive exercise.

Here is Lord Monckton, a prominent global warming critic:

Here he is interviewing a Greenpeace supporter about why she believes in anthropogenic global warming:

Here is the UN group Monckton criticizes, the
Intergovernmental Panel on Climate Change
In particular, their Climate Change 2007 Synthesis Report, a 52-page summary of all things climate science. For more detail, their Publications and Data are available.

Here is a recent letter published in Science. It discusses the process scientists use to create reports on the climate, the uncertainty in scientific results, the fallibility of scientific findings, and the role of integrity in science.
Climate Change and the Integrity of Science

Here is statistician and blogger Andrew Gelman talking about expert opinion and scientific consensus:
How do I form my attitudes and opinions about scientific questions?

Here is famous skeptic James Randi on the pressure for scientific consensus, the fallibility of scientists, the uncertainty in models of complicated phenomena, and his skepticism of anthropogenic global warming:
AGW Revisited

Here is the petition Randi describes, the
Petition Project

Here is a reply to Randi and the Petition Project from PZ Myers, a biologist and well-known angry internet scientist.
Say it ain’t so, Randi!

Here is a graphic by David McCandless. Its goal is to present an example of the arguments one would uncover in an attempt to self-educate about climate science using only the internet.
Global Warming Skeptics vs. The Scientific Consensus

Greg Laden writes about skepticism, rationality, and groupthink in a lengthy post.
Are you a real skeptic? I doubt it.

Here is the Wikipedia Article on anthropogenic global warming, along with tabs to the discussion page for the article and the article history. This is a featured article on Wikipedia.
Global Warming

My focus on the process people are using to come to terms with global warming isn’t meant to deemphasize the importance of this issue and of other aspects of the relationship between humanity and our biome. Our Earth is a fantastically diverse and endlessly beautiful home. Of course I want to understand it better.

Also here is a physics blog story about a mathematical model of cows.