Equating mystery with meaning, awe with ignorance.
I’ve got this problem. It’s a small problem, fer sure, but an interesting one.
It hinges on the fact that, apparently, many people equate “mystery” with “meaning.” Or feel that “awe” is spoiled by “knowledge.” You hear people say something like “there are some things it’s better not to know,” or they argue that something’s beauty is diminished if you understand it “too much.”
Such sentiments are likely familiar to you, and people more eloquent than I have addressed the issue, notably Richard Feynman talking about the beauty of a flower, which I’ll link to.
=So what’s your problem?
It’s this: I kinda used to be one of those people, and I can’t remember *why* I was.
I’m a middle-aged musician, and though I’ve always been interested in science and philosophy, when I was younger I surrounded myself with intellectual walls. There were many directions that thinking or reading wanted to take me, and I would slam my fist down and say no, going there is a mistake.
And I would like to remember why I felt that way because I think it might help me better relate to people who are still in that place.
Because now, I just don’t get it. I see beauty everywhere and my life is awash in meaning. And regarding awe, I honestly think I experience that most wonderful of feelings more since I embraced rationality, since I relinquished my illusory control of facts and started noticing reality for what it, uh…
=for what it really is?
I can only think of one way I still relate to that stance. When I read a really good novel, my mind is blown. I can’t for the life of me wrap my head around how any human, with just a handful of symbols, can evoke such extraordinary images, narratives, and personalities in my head. C’mon, if that isn’t a superpower I don’t know what is. And I kinda don’t want anyone to explain it to me. I know there are probably many tricks of the trade, shortcuts and methods for eliciting certain reactions (I know many of them for writing music), but, yeah, no thanks, don’t tell me, I don’t want to know. I enjoy thinking of Murakami as a superhero.
But of course I know that’s just a personal quirk. I would never, ever argue that because I feel that way, *no one* should look under the hood. I would never say “how to write fiction is just something it’s better not to know.” The equivalent of which I hear people arguing all the time. And besides, if I did decide to write a novel, I wouldn’t hesitate to learn all I could about the skill.
=I mean, duh. It’s a skill.
So why did I use to put up intellectual walls? I figure the answer isn’t simple, of course; studies of religion continue to show us how crazily complicated the reasons for people’s belief systems can be. But I also suspect that one element is pretty straightforward: I didn’t really have a reason. It wasn’t a stance I had arrived at consciously. It was a default that, somehow, I had just taken, maybe influenced by other artists around me. (That attitude seems pretty common among artists.) And it’s the kind of stance that is self-perpetuating, closing the door to any idea that might shake it loose. Which I think makes it kinda dangerous. And I tell ya, I do not miss it.
What is the truth about X? I’ve heard a lot of people fear mongering about X, but I also know a lot of people who fully support X. They can’t all be right, can they?
I have a feeling the truth about X will piss off both the supporters of X as well as the anti-Xers. Because this time, X is just X.
The pro-X folks say X=X+1. The haters say X=X-1. But maybe X just equals X.
You hear all these people shouting X’s possibilities, and you also hear others warning of X’s dangers. I looked into it, dug into X as deeply as I could. And it looks like X=X, no more no less. So the other day I said hey, I don’t know, I think X=X. And everyone freaked out!
Okay I know I’m being annoyingly vague, but isn’t this familiar? After all, if X=X+1 then Whoa! think of the possibilities! And if X=X-1 then oooh beware.
The thing is, the +1 side and the -1 side have something in common. Both feel more important, more dramatic, more consequential than, meh, maybe X just equals X.
And this makes the meh of X crucial.
Taking a stand for the meh of X matters. Meh can be important. As dull as it might seem, as much as it might annoy pretty much everyone, sometimes getting up on that soap box and saying “meh” is just what the debate needs.
I’ve been thinking a lot about the role of the rank and file skeptics, us regular, non-pro skeptical schlubs. How do we spend our skeptical time?
Here’s a question: If we generalize and say that skepticism was refined in the late 20th century by Sagan, Randi, Nickell, et al., what has changed since then? (Don’t just say ‘the internet.’)
Of the no doubt many answers to that question, I’m going to focus on just one, which I’ll generalize as the psychology of belief. In the last couple of decades we’ve seen tons of research into why and how people believe what they believe. And how people change their minds. And how people, well, kinda rarely actually change their minds.
Without doing a dull layman’s recap of all this research, let’s stick to the backfire effect. Essentially, the backfire effect is this: when we believe something, being confronted with evidence that contradicts that belief can actually strengthen it rather than weaken it.
Understandably there’s been a fair amount of discussion among skeptics as this and similar findings have appeared, developed, been replicated, and as far as I can tell, become consensus.
So should this change our approach? I’m not going to pretend I have the answer, but I think it does give credibility to something a lot of us do in our day to day lives as skeptics: stepping on flames to try and stop the larger fires.
Posting a Snopes link, pointing out opposing arguments, even just saying “hmm, I’m not so sure about that.” These little things can barely be called activism, but my guess is they do slow down wacky ideas, keep them from becoming quickly entrenched. If a new woo-meme’s appearance is accompanied by some good old ‘Um, Uh, Yeah that sounds fishy…’ maybe we can keep it from fossilizing to the point where the backfire effect and its friends reign supreme.
What’s new today is woo tomorrow. Likewise, today’s incomplete science or downright bad science is tomorrow’s pseudoscience. Fighting back against poor reporting, poor press releases, conclusion-jumping, unreplicated results… While we can’t always stem the tide of false belief, we can slow down its progress, get people to think twice before incorporating preliminary findings into their worldview.
Which brings me back to traditional skepticism and new skepticism. The traditional tool kit is great, but the psychological evidence that’s accumulated suggests that doing what we can to stop false ideas from taking root in the first place is important work too. Just being the doubting voice in a conversation has value.
When it comes to the cultural incorporation of new beliefs, one of the most important messages for the rank and file skeptics to spread is a simple one, a classic: slow down.
My partner in crime and I came up with a useful new term the other day: straw skeets. Or maybe straw skeet shooting.
Either way. You’ve probably guessed, it’s a handy combination of the straw man fallacy and skeet shooting. (I don’t know anything about real skeet shooting, so I’m talking about the cartoon version, where clay discs are tossed in the air one by one, and a shooter takes aim at each and fires.)
You may be familiar with the gish gallop, the term coined by Eugenie Scott of the National Center for Science Education to describe the debate technique of the late creationist Duane Gish. He would barrage his opponent with statement after statement, false facts, straw men, misdirection, one after the other so there was no way anyone could respond to them all. Also called “spreading,” this can leave the impression that the gish galloper has a mountain of evidence, so even if a few points are addressed by the opponent there’s not even a dent in the pile.
So straw skeet shooting is trying to deal with this. Statement, bang that’s a lie; statement, bang that’s out of context; statement, bang that’s a red herring; statement… even this caricature of it is exhausting.
"I’ve heard her debate before, so I’m prepared to be doing a lot of straw skeet shooting."
"Shooting straw skeets is a waste of time. No more debates for me."
"Wow, you’ve got a whole sack of straw skeets there, don’t you."
"Nah, it’ll be great. I’ll gish gallop, you straw skeet shoot, then we’ll go down the slippery slope and lunch on red herrings by the poisoned well."
Straw skeet shooting. You should adopt the term! Here’s why: it’s memorable, it’s meaningful, it fills a vast void in our language, it’s the most clever term ever, all the kids are saying it, it tastes like armadillo flesh, it smells like snoring…
A while back, in an online argument, my opponent wrote that the person who had recently set fire to a papier-mâché animal during a protest march invalidated the protest—that it proved the political claims of the marchers couldn’t be taken seriously.
I responded: really? If, in a peaceful group of twenty thousand people, one person does something stupid, the whole group’s arguments can be ignored? Their response: yup.
That’s one version of something we all see every day. The shrug-off.
You know the shrug-off. It’s when someone says, directly or indirectly, “I don’t have to take them seriously because…”, and here insert any number of things: I don’t have to take them seriously because they got that fact wrong, their grammar and spelling are terrible, they used a logical fallacy, they’re in a cult, they believe in god, this website has been wrong before, they smoke pot, they…
There are so many. I could go on…
It’s kind of a combination of an ad hominem and denying the antecedent, but generalized: it’s more of an attitude than an argument. The version that skeptics are, unfortunately, in danger of falling into is the argument from fallacy, or the fallacy fallacy.
So you hear someone say “water is a juice, and you can mix juice with whiskey; therefore water can be mixed with whiskey.” And you jump in and you’re like nuh-uh you shark turd water isn’t a juice! The argument is fallacious, so clearly water can NOT be mixed with whiskey.
And you not only look like a jerk, you are in fact a fallacious jerk.
You know, it’s a little difficult to come up with a good example, because a fallacy, out of context and highlighted, is easily recognized. The shrug-off happens quickly, in the heat of discussion, or when scrolling through piles of information. The filters and shortcuts you use to stay focused can easily devolve into the shrug-off.
And of course, even just using words that mean slightly different things to different people can make an argument seem fallacious, when perhaps it’s just a misunderstanding.
Shrug-offs are going to happen. People are going to shrug off what you’re saying without actually wrapping their head around what you’re saying. And yes, you are going to shrug off what someone else says for reasons that have nothing to do with the point they’re trying to make. We’re all bombarded by people saying things. And I don’t care how skilled you are in the skeptical arts, it’s gonna happen.
So, what to do for those who aspire to intellectual honesty? Perhaps an alternative is: shrug-off, shoot the moon. This would be where you treat every statement, by everyone, as fallacious. Of course if you really did that, consciously, you’d be miserable and everyone would hate you. So instead, be skeptical. And this is one reason we’re skeptics: everything anyone says, regardless of their status, low or high, their fame, their intelligence or lack thereof, their menial job, their Nobel Prizes, every statement might be wrong.
This skeptical stance seems negative to some, though not to me (obviously). But one good thing it does is slow you down; it makes you remember that intellectual honesty is really hard.
A series of studies was recently published in the journal Psychological Science. The summary, which I admit I’m lifting directly from the press release, is:
"Liberals tend to underestimate the amount of actual agreement among those who share their ideology, while conservatives tend to overestimate intra-group agreement"
Briefly, the researchers surveyed groups of people with a combination of questions designed to establish political leanings as well as neutral preferences (such as taste in coffee). They also asked respondents to what degree they felt their peers agreed with their answers to the various questions. And basically, liberals perceive themselves as unique, even when they’re not, and conservatives perceive themselves as the norm, even when they’re not.
Don’t worry. I’m not going to talk politics. I’m not even saying this study is right; as far as I can tell it hasn’t been replicated yet, though the results certainly make intuitive sense.
Personally I’m a fan of skepticism’s attempts to be non-partisan. While it’s important to remember that any empirical political claim is fair game, the fact that I regularly meet fellow free thinkers who vote completely differently from me is such a relief. And the environment we often create that welcomes disagreement is one of our traditions I hope will trickle out to everyone else.
But whenever I read about studies like this, I always wish the questions had included a skeptical option, like a box you could check that says “I haven’t seen enough relevant data to answer this question with sufficient confidence.”
And this got me wondering: how do skeptics know what they agree on? You certainly see people celebrating, and bemoaning, topics of apparent skeptical consensus. So when studies such as these show, as usual, how many different ways there are to be wrong about what you think you know, perhaps they should serve as a reminder to free thinkers: whatever consensus you think exists, or doesn’t exist, in our community… you might be wrong.
I would love to see a study done on us, we who love to think about how we know what we know. For the rational community, I’d be curious to learn: how do we know what we think we know about what we know?
Here’s a haiku:
Plum falls from a tree
And rolls on dew-glistened grass
The skeptic wants proof
I have a new album! Day Trips. It’s a collection of my audio collages. They’re field-recording-mash-up-portraits I’ve made over the years.
(For those accustomed to my chamber music or songs, this is something different: they’re more like sound sculpture than music. Actually, a friend called them “Almostmusic” and I think I’m going to run with that term.)
You can read some more about it at my website, where there are also links to iTunes and CDbaby and such. (This is a download-only release; it should be available from any of the familiar online music services.)
Let’s start with a quote from journalist Tania Lombrozo: “The striking implication, for which the researchers find support, is that getting people to appreciate their own ignorance can be enough to rein in strong opinions.”
It’s been widely covered in skeptical circles: the evidence has been piling up that many people, maybe even most, when confronted with evidence that challenges their beliefs, actually end up feeling more strongly that they are right. It’s frustrating, to say the least. You can be diligent, do your homework, arm yourself with citations and papers, and ironically, the more evidence you have, the less likely you are to convince them.
Now, in the spring of 2013 a paper was published titled “Political Extremism Is Supported by an Illusion of Understanding,” by Fernbach, Rogers, Fox, and Sloman. The basic idea is that people with extreme political views tend to be imagining a caricature of something that is actually complex. (This held for people on both the right and the left.) As the authors say, “polarized attitudes are enabled by simplistic causal models.”
I doubt this is news to you. But their experiments have also provided data on what worked to *moderate* those extreme views. And the winner is: asking the person to, in detail, explain the issue, and to lay out, step by step, how the policy in question would actually work. When made to slowly walk through the causal chain they are imagining, people tended to realize they weren’t quite as sure as they thought.
This study got some decent attention, but I think all skeptics should be aware of it. After all, it provides some real (if preliminary) data that is relevant to what we have been calling the ‘tone wars.’
The authors called their technique ‘Explanation generation.’ To me it also smells a bit like the Socratic method.
If I can be so bold as to speculate a little: this study, which is of course far more detailed than I’ve represented here (we’ll link to it, and you should check it out), is evidence that supports listening, and asking clarifying questions, as opposed to making statements of fact or arguing over details. Keep it cool. Listen. Discuss.
As a skeptic and naturalist, I kind of revel in my comfort with the fact that we live in an uncaring universe. So many people are so freaked out by the idea, dislike even considering it, that I’m happy it doesn’t bother me in any way. At least there’s one anxiety I don’t wrestle with.
I know I don’t need to convince you guys, but just for fun, let’s make a comparison. I live in Queens, so I’ll replace “universe” with “island.” (Yes, Queens and Brooklyn are both, physically, part of Long Island.)
If one of my neighbors said to me, “But if the island doesn’t care about us, if all this dirt and rock has no feeling, then our lives are meaningless,” I’d just… blink. I wouldn’t know what to say. That would be an utterly absurd thing to believe. Millions of people believe the equivalent, I know, but it’s just weird.
Nevertheless, I want to let you know why I’m saying buh-bye to the uncaring universe. I mean the phrase, the phrase “uncaring universe.”
Allow me to quote Matt the Poet from the JREF forum a few years back. He sums up nicely why I’m undertaking this linguistic endeavor:
"By asserting that the universe ‘doesn’t care’ about you, you’re admitting to the idea that it might, and thus already tacitly anthropomorphizing it." —Matt the Poet
Matt’s right, and I’m hereby pledging to abandon the phrases “uncaring universe,” “unfeeling universe,” or any related anthropomuffinizations of the cosmos.
It’s not a big thing, of course, just a form of false dichotomy, really. But it does represent one of the greatest divides between freethinkers and everyone else. Many of the beliefs of theists, new agers and fuzzy thinkers can be boiled down to imbuing the universe with agency, and if I disagree and say that, in fact, I think the universe doesn’t care, I’m putting myself in opposition to them. But I’m not in opposition. It’s not that I think the universe has no feelings, it’s that I think it doesn’t even qualify as a topic of discussion: it’s absurd.
Put another way, what if my neighbor insists that 2+2=fish?
Once again, I just blink. It’s not that I think 2+2 doesn’t equal fish. It’s that, as far as I can tell, 2+2=fish doesn’t mean anything in the first place. (Of course, my neighbor also pities me and worries that, without believing that 2+2=fish, I am immoral and eternally doomed, which is just… what? How does that follow?)
The reputation of freethinkers is plenty bad already. It doesn’t need someone like me, using lazy language that implies a belief I don’t hold, giving bales of hay to those that would build a straw man of us.
The universe is… neither caring nor uncaring. The universe… is.