I’m generally skeptical when someone proclaims that “rationality” itself should get us to throw out 90%+ of philosophy. So I was a bit puzzled when someone on our Facebook group pointed at some articles by Luke Muehlhauser, host of the excellent Conversations from the Pale Blue Dot podcast, that take this hardcore stance (specifically “Philosophy: A Diseased Discipline” and “Train Philosophers with Pearl and Kahneman, not Plato and Kant”):
Large swaths of philosophy (e.g. continental and postmodern philosophy) often don’t even try to be clear, rigorous, or scientifically respectable. This is philosophy of the “Uncle Joe’s musings on the meaning of life” sort, except that it’s dressed up in big words and long footnotes… Analytic philosophy is clearer, more rigorous, and better with math and science, but only does a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, and this displays a near-total ignorance of how brains work. …a few naturalistic philosophers are doing some useful work. But the signal-to-noise ratio is much lower even in naturalistic philosophy than it is in, say, behavioral economics or cognitive neuroscience or artificial intelligence or statistics…
This might appear to be garden-variety scientism, but it’s not a rejection of philosophy. Like Pat Churchland, Luke acknowledges that many fundamental problems are philosophical, that scientific studies do not in themselves settle conceptual issues, and that without good philosophical analysis, science can waste a lot of time investigating ill-formed questions. But still, the accusation is that most philosophizing is useless unless it is explicitly based on scientific knowledge of how the brain works, and in particular of where intuitions come from.
This blanket dismissal seemed strange to me given that Luke spent so much time talking to so many different, religiously inclined folks on his podcast, but the answer became clear when I listened to one of his final episodes (the podcast mostly stopped production with this episode in Feb 2011, with only a couple more later that year): his interview with Eliezer Yudkowsky. Eliezer is responsible for lesswrong.com, the site where Luke posted the above essays. He’s an AI researcher, and his big cause is promoting “friendly artificial intelligence,” which he sees as the pressing need of our age. If AI becomes a reality soon (which, contra Dreyfus, Eliezer thinks it surely will), then unless measures are taken to ensure that these systems are designed so as not to alter themselves to have values at variance with human needs regarding our environment, we’re in big trouble.
So here’s one alleged reason for hostility toward any kind of philosophy that isn’t obviously “useful”: there are pressing problems that intellectuals are needed for, and the rest of us are slacking on our responsibilities. Of course, the same accusation could be and has been made about reducing human poverty, solving environmental crises, and many other goals that are not so speculative as this concern of Eliezer’s (which I’m not in a position to judge here). There’s some discussion of this on our forthcoming Plato episode: that philosophy, even if “useless” in terms of technological application and not obviously even helpful in making us morally better people (there are plenty of jerky philosophers), is worthwhile in itself, just like appreciation of art or many other joys of life. Of course, as philosophy types, we don’t rest comfortably in this answer, and always feel a little weird about spending time on this (or anything else)… we may well be wrong.
As was made clear on our 2011 episodes about proofs for the existence of God and the new atheists, even when we consider a particular philosophic issue, reason itself doesn’t tend to necessitate a definitive solution: evaluating these arguments is not a purely subjective matter where everyone’s equally right, but neither is it so cut and dried that we can dismiss all theists or atheists as unreasonable, confused, or stupid (even if many in both camps undoubtedly are).
Much less does reason itself necessitate a whole world view, a way of determining what’s truly important and worth spending time on and what isn’t. Many Chinese folks who mix Confucianism, Taoism, and Buddhism in their personal philosophies understand this. Though these views all have different practical upshots (e.g., whether you respect state authority and tradition), they all contain insights that have worked well in various contexts, and they are (in some portion of the cultural traditions relevant to this example, anyway) based more on different, non-empirically-verifiable attitudes toward life than (as in some Western creeds) on alleged matters of fact (which makes it much harder for someone to be a Christian-Jewish-Muslim hybrid, though it can be done).
So being a scientist, even one highly tuned into the latest developments in cognitive science, statistics, and the like, does not actually dictate a single overall attitude toward life, a mission, a set of core beliefs. And yet this is what we see in Eliezer’s attitude as exemplified in this podcast and on Less Wrong, which contains numerous articles on mistakes in reasoning that come from ignorance of such advances as Bayes’s theorem. Now, the site is interesting, and the points about reasoning are well taken: we often, for instance, misjudge probabilities when making intuitive judgments about how we think the world is (a quick worked example of the kind of error they have in mind appears at the end of this post). If your version of “doing philosophy,” then, is making a lot of bullshit generalizations about things, yes, the Less Wrong approach will be useful for you. But to then throw out the mass of the philosophical tradition because it has been ignorant of these tidbits is to fundamentally miss the boat, to badly oversimplify perennial problems, in short, to “cheat” at philosophy in exactly the same kind of way that’s been attempted time and again by, e.g., logical positivism, pragmatism, and others from Hume to the present.

Eliezer’s version of the rational life is less silly than Ayn Rand’s (at least it doesn’t seem so explicit in endorsing, say, a particular and most unempirical political agenda), and I don’t think it suffers from mistakes comparable to those of self-proclaimed champions of Reason throughout history (e.g., Kant’s view that morality comes from Reason, or Locke’s view that Natural Law is determinable through Reason itself). Still, I think it’s instructive to contrast Eliezer with David Chalmers as he appeared on our interview with him. Chalmers is a guy who is very much on top of the science in his field (including the AI business about the singularity), and yet he is not on board with any of this “commit X% of past philosophy to the flames” nonsense, doesn’t think metaphysical arguments are meaningless or that difficult philosophical problems need to be defined away, and, most provocatively, sees in consciousness a challenge to a physicalist world view, even as his own theory of consciousness allows for rigorous investigation of mind-body correlations and buys into functionalism (i.e., it probably doesn’t dismiss anything that Eliezer and his AI mates are cooking up).

I respectfully suggest that while reading more in contemporary science is surely a good idea (and I’m sure we’ll have some more episodes on physics, philosophy of mind, evolution, etc.), an approach to philosophy that is actually schooled in philosophy, à la Chalmers, is more worthy of emulation than Eliezer’s dismissive anti-philosophy take. By all means, though, listen to Eliezer’s side of this, take a look at his site and at Less Wrong (he’s also made lots of appearances on YouTube), and let us know what you think.
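As a postscript, here’s a minimal sketch of the kind of probability error alluded to above: base-rate neglect, and how Bayes’s theorem corrects it. The numbers and the little Python helper are my own made-up illustration, not an example drawn from Less Wrong itself.

```python
# A minimal illustration of base-rate neglect, one of the probability errors
# Less Wrong likes to point out. All numbers here are hypothetical.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), computed via Bayes's theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Intuitively, a positive result from a "90% accurate" test feels like ~90% odds,
# but if the condition is rare (1% base rate), the actual posterior is far lower.
print(posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.1))  # ~0.083
```

The intuitive answer (“about 90%”) and the Bayesian answer (“about 8%”) come apart precisely because intuition tends to ignore how rare the condition is to begin with.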