On Arthur Schopenhauer’s The World As Will and Representation (1818), book 2.
Sure, we know from our senses and science what the world looks like to creatures like us, but if you buy Kant’s view that this “world as appearance” is a construct of our minds, what’s the reality behind the appearance?
Schopenhauer thinks that we can know this: The world is what he calls “Will,” because just like you know what it’s like to be you from the inside (you don’t just see your body move but know what it is to will that body to move), the rest of the world has a comparable inside, and it’s that inside that explains all the striving that we see around us, whether it’s plants growing toward the sun, animals acting on instinct, or physical objects following laws of motion and gravity.
So in a sense, everything is alive, and moreover, it’s all the SAME living thing: a massive, singular Will that exists outside space, time, and the causal chain: It’s a whole different thing than anything we perceive through the senses, yet all those perceived things are somehow manifestations of this Will. It’s like the Force, except blind and futile.
That amounts to some highly chewy metaphysics for the regular four to slobber over. Read more about the topic and get the book.
Support the podcast by becoming a PEL Citizen or making a donation. Citizens can sign up to remotely attend the Aftershow on 5/3 at 5pm Eastern.
End song: “Sinking” from Mark Lint & the Simulacra, recorded in 2000, mixed just now. Descend into the Will itself!
Sponsor: Visit thegreatcourses.com/PEL, for massive discounts on great lectures.
Schopenhauer picture by Genevieve Arnold.
If you haven’t seen it, there is a scene in Curb Your Enthusiasm when Larry David is casually walking across a parking lot having just parked his car. An African-American man walks past him in the same direction as Larry’s vehicle. Larry suddenly turns and clicks the button on his remote keyless system, locking his car doors with a swift electronic bleep that breaks the silence like a well-timed punch line. The man turns and glowers. “Think I’m gonna steal your car?” he challenges. “No . . . it’s not you”, Larry awkwardly bumbles, “it’s not a race thing”.
Whether Larry’s choice of moment to lock his vehicle was merely inopportune or whether he was really being racist is left to our imagination. What is interesting is the peculiar comic potential in this moment. If Larry’s locking his car door was a manifestation of a subconscious prejudice, then it’s almost as if we don’t know whether to protest or to laugh at him for his bad luck.
This inherent tension in our reactions brings up an interesting philosophical question – one that is taken up by philosopher Neil Levy in a couple of postings this month (here and here) on The Brains Blog. Science shows that though a person accused of discrimination may sincerely deny harboring any kind of prejudice, their choices and actions may have been modulated by ‘implicit biases’ operating below the level of their conscious intention. Are people morally responsible, and so legitimate objects of blame, for discrimination resulting from their implicit biases?
Levy argues that, on any plausible version of the most widely held forms of compatibilism about moral responsibility, we are not morally responsible for actions resulting from our implicit attitudes. In his blog post, he focuses on what he calls “control-based” theories. On control-based accounts, whether a person is morally responsible for some action is a matter of whether they possess the right kind of control over their performance of it.
Levy dedicates considerable space to articulating exactly what the right kind of control is, but the general conclusion is that this is what is crucially missing in the types of cases in question. When an employer chooses a less qualified male candidate for a job role over a better qualified female candidate, due to implicit sexist attitudes biasing the selection process, she does not have the sort of control over her acting discriminatorily necessary for her to be responsible for it.
This is only the gist of a much more detailed and complex argument. Rather than wading into the fine details, I’ll just make some general remarks about Levy’s overall approach.
One thing that strikes me as interesting about the type of cases Levy is concerned with – cases where a person behaves discriminatorily due to implicit biases – is that they are instances of what the philosophers Thomas Nagel and Bernard Williams referred to, in two classic papers from the 70s*, as “moral luck.” Moral luck occurs when a person can legitimately be treated as the object of moral assessment despite the fact that what she is being assessed for is partly due to factors beyond her control. As Nagel points out, moral luck can enter into our moral assessments in a variety of different ways. Of particular relevance here is what Nagel calls “constitutive luck,” or luck in the traits and dispositions that make us up, and “circumstantial luck,” which is luck in the circumstances that we find ourselves in.
Take an example. Suppose a police officer fatally shoots someone in a situation that the officer perceived to be volatile and dangerous. At the time the officer sincerely believed her actions to be appropriate and proportionate to the threat posed. However, she is confronted with evidence that retrospectively forces her to realize that in fact she acted with disproportionate aggression. Her judgment of the situation was skewed by implicit biases about the victim’s race.
In certain respects, the officer’s actions were due to factors beyond her control. Her implicit biases are the product of conditioning by an environment that was, to a considerable extent, not of her choosing. Equally, that she found herself in a situation that made manifest those implicit attitudes in such a drastic way can also be attributed partly to factors beyond her control. Nonetheless, we would still expect someone in her position to feel a terrible form of regret over her actions.
In his paper on the topic, Williams talks about “agent-regret.” Agent-regret is different from what he calls “bystander regret” – the regret that someone with a spectator’s perspective might feel about some event – in that it is constituted by the subject’s first-person thought that it would have been much better had she chosen or acted otherwise. Agent-regret is also distinctive in the ways it is expressed. For example, it might include a desire to compensate a person for the harm done to them. Speaking of a lorry driver who, through no fault of his own, hits and kills a child, Williams writes, “we feel sorry for the driver, but that sentiment co-exists with, indeed presupposes, that there is something special about his relation to this happening, something which cannot merely be eliminated by the consideration that it was not his fault.”
In a section of his later work, Shame and Necessity, where Williams is discussing Oedipus’s discovery, in Sophocles’ Oedipus Rex, that he has unwittingly killed his father and married his mother, Williams explains this special relation between the agent and the event thus – “in the story of one’s life there is an authority exercised by what one has done, and not merely by what one has intentionally done.”
Returning to the example of the officer, while we can acknowledge the tragedy of what happened, we would also expect her to experience profound agent-regret over what she did. Such an attitude cannot, and should not, be alleviated, cancelled out, or swept aside as irrational in light of the sorts of considerations adduced by Levy. Were the officer to merely feel a bystander’s regret because she was persuaded that she lacked the correct sort of control necessary to make her morally responsible for what happened, we would probably think that something was badly wrong.
One of the things that Williams thought to be especially important about examples of moral luck is that they show that there is something seriously out of sync between what he called the “peculiar institution” of modern morality and human moral psychology. That institution is founded on a related cluster of concepts – voluntariness, intention, control, blame and blameworthiness, amongst others – which it is the object of a great deal of contemporary moral philosophy to analyze and define. The consequence is that, for all its rigor and ingenuity, a great deal of contemporary moral philosophy does not really resonate with us on a personal and psychological level. It simply does not speak to the reality and particularity of our inner ethical lives.
*Bernard Williams’s article “Moral Luck” and Thomas Nagel’s response to it under the same name both appeared in the Proceedings of the Aristotelian Society in 1976. Williams’s article was reprinted in the 1981 collection Moral Luck (available as a PDF here), and the Nagel article appears in the 1979 collection Mortal Questions. A video of a lecture summarizing Nagel’s article is available here and here.
Gil Percival is the producer of UK philosophy/comedy podcast, The Philosofa.
On 4/8 and 4/19 we took two stabs at one of the biggie tomes of philosophical history, Arthur Schopenhauer’s The World as Will and Representation, covering first metaphysics, and then aesthetics (focusing to a good degree on music).
For Ep. #114, we read all of Book II (of Volume 1, which is really the book proper published in 1818, the other volumes having come out far after that, because he couldn’t leave well enough alone), which focuses on the ultimate metaphysical stuff of the universe according to Schopenhauer: will.
This episode will be out very soon (Monday), and the Aftershow for it will take place on Sunday, May 3 at 5pm Eastern time, featuring host Danny Lobell. Go sign up to attend!
What is will? Well, you already know one species of it, which is what it feels like to be you. For everything else we experience, we only get to see the outside, the appearance. Even if we break something open, or dissolve it chemically, or use a particle accelerator to really get in there, that’s all just part of the world of appearance for Schopenhauer. This, of course, he got from Kant, and you can hear all about how Schopenhauer modified Kant’s picture of how our minds create the world of experience by listening to our Episode 30 on his previous book On the Fourfold Root of the Principle of Sufficient Reason.
The difference in emphasis is that for Kant, yes, there’s a Thing-In-Itself lurking behind all appearances, and what we call reality and study through science is a matter of how this unknown raw material gets processed through our perceptual apparatus, where we apply categories of space, time, number, causality, etc. to it, but Kant wouldn’t want to say that this experienced world isn’t in some quite ultimate sense real on this account. Schopenhauer, following Eastern philosophy (recall our Buddhism episode), says that this world of phenomena is a veil of illusion, which gives rise to desire and suffering.
So what’s behind the veil? Well, again, you already know that in the case of yourself. You encounter your body as a thing in the world among things, as one more appearance in this veil, but you are also this body, and know it in a more intimate way, through a feeling of its wants, its striving, its pleasure, its pain, and especially through the feeling of you actually doing something: the subjective correlate of a movement of your body is an act of will.
Schopenhauer argues that it’s very unlikely that you’d be the only creature in the universe to have such an “inside” in addition to the visible outside. While you can’t use the ordinary tools of knowledge (e.g. scientific induction) to prove that there is will in the world beyond just you, it’s not something, Schopenhauer thinks, that any wise person is going to miss. Not only would I be obtuse in denying YOU and other people this inner will, but look around at all the striving, all the activity: animals running around acting on instinct, animals and plants alike growing more or less according to some predefined blueprint, even inorganic matter seemingly striving in ways characterized by Newton’s laws of motion, the law of gravity, the laws of electromagnetism, etc.
So even though this force that’s ultimately behind everything isn’t going to be exactly the same as your or my individual will (given that those beasts and plants and rocks can’t use concepts or in many cases even have representations), Schopenhauer wants us to call it Will, because he thinks that image evokes the ever-striving, conflicted character of existence, as all these forces battle each other.
What is Will and how exactly does it relate to what we see? Well, Schopenhauer thinks that since the mind (qua Kant) adds things like number, space, and time to create the world of experience, therefore the world BEFORE experience, the Thing-In-Itself, must NOT have any of those characteristics (note that this is not a conclusion that Kant thinks we can reach). The Will is a unity, outside space and time. Really (in a way that should sound familiar to listeners of our Maimonides episode), there’s nothing positive we can even say about Will itself.
The Will can’t cause phenomena, because causality is a relation within the realm of phenomena, not a relation linking phenomena to the Will. Instead, things of experience come about because the world is “objectified,” i.e. put in front of a subject and so made into an object. This means literally that before there was any being that could have representations, there was no “world,” in the sense of flowing space and time. Yet nonetheless, as soon as the first perceiving creature showed up, the whole infinite temporal series was in a sense created, flowing back and out in all directions indefinitely. If this sounds bizarre, well, cosmology always sounds bizarre when you’re talking about some unitary, non-temporal entity somehow “giving rise to” time and space itself, so this is not a problem unique to Schopenhauer, though S. embraces idealism a la Berkeley more directly and fervently than Kant ever did; and idealism is always embarrassed by the problem of what the world was like before any minds were around to perceive it.
There’s more to this “objectification” though: it has gradations. Inorganic matter is a lower form of objectification, asserting a blind striving predictably according to physical laws (note that the laws don’t cause the striving; they merely describe its patterns). Higher grades of objectification assert more apparent teleology: plants and animals grow according to their genetic blueprints, and with people, things get even more complex, because we can act on motives, which have to do with conceptual thought that abstracts from particular representations. Each person has his or her own character that gets acted out through the particular circumstances we meet up with. We are the most vivid objectification of the will, making the strife inherent in all creation explicit with our unceasing desires and the actions we take to fulfill them, which invariably run up against the strivings of other people and the natural forces that surround us, until inevitably entropy gets the better of us and death represents the victory of the lower forms of will over the momentary emergence of a particular higher organism.
Another way of thinking about this stratification, which Schopenhauer discusses in the first sections of Book 3, is by bringing in Platonic forms. What guides organisms in growing the way they do? Well, Schopenhauer didn’t have the notion of DNA at his disposal, nor Darwinian thinking in general, so he posited that there must be some underlying unity to animals or plants of a given kind. The telos towards which the thing grows is a function of the kind of will that underlies the thing, of the ultimate unity of all those individuals as pale reflections (shadows on the wall of the cave) of the Form of that thing. So Forms, for Schopenhauer, are just the Will barely objectified. Since the Form for dog and the Form for cat are different, they evidently have number (i.e. there are many different Forms), but the Form still exists outside space and time. Schopenhauer clarifies that on his view (whatever Plato might think, which is not really clear), there would be Forms only for natural objects, and not a Form for “chair” or other artifacts. Moreover, Schopenhauer thinks that as the most vivid objectification of the will, human beings are associated not only with the general Form of human (and for that matter, we’re also subject to gravity, etc., i.e. have all that in common with the lower levels of objectification of Will), but each of us has our own “intelligible character,” i.e. our own private Form. (Without buying into this as a metaphysical claim, Nietzsche could then take this as an inspirational call for us to “become ourselves.”)
I’ll wait for the sequel to this post to outline how Schopenhauer thinks that we can use art (and most directly music) to get in touch with these Forms, to understand ourselves truly as Will and so ironically to become less willful, to relax and become a Subject of Pure Knowing, to disown our own desires and really our own individuality in favor of ascetic renunciation of the whole world of Representation and the lies it tells us.
Back on March 22, host Stephen West was joined by Wes, Michael Burgess, Ken Presting (calling in from the center of a vacuum cleaner, apparently), and (after 15 min or so) Law Ware to reflect further on whether hermeneutics can really give us a reading of the Bible that gives us modern folks something that’s not morally backward or otherwise crazy.
This last Sunday, April 19, Stephen was AWOL, and so I jumped in (immediately after recording for three hours on Schopenhauer… I’ll post about that soon) to host and more or less turned over the reins to Michael Burgess, who wrestled with the Parables with Ken Presting (with much improved audio over the previous Aftershow, though the first 5 minutes or so from him are rough on the ears), joined eventually by Nik Burlew. I know the SNAFU in getting started on time caused some folks who planned to come to miss it, and we apologize.
What? You think Burgess is a British swashbuckling genius and want him to take over as host for the next ones? Anything’s possible. Be on the lookout for a wandering Stephen! Is he living under that bridge? We don’t know!
Puzzled why there are only men here, or only people who are WRONG? Why, fill the lack with your own presence when next we do one of these! (To Be Announced)
Here it is copied from Wikipedia:
Then the Kingdom of Heaven will be like ten virgins, who took their lamps, and went out to meet the bridegroom. Five of them were foolish, and five were wise. Those who were foolish, when they took their lamps, took no oil with them, but the wise took oil in their vessels with their lamps. Now while the bridegroom delayed, they all slumbered and slept. But at midnight there was a cry, “Behold! The bridegroom is coming! Come out to meet him!” Then all those virgins arose, and trimmed their lamps. The foolish said to the wise, “Give us some of your oil, for our lamps are going out.” But the wise answered, saying, “What if there isn’t enough for us and you? You go rather to those who sell, and buy for yourselves.” While they went away to buy, the bridegroom came, and those who were ready went in with him to the marriage feast, and the door was shut. Afterward the other virgins also came, saying, “Lord, Lord, open to us.” But he answered, “Most certainly I tell you, I don’t know you.” Watch therefore, for you don’t know the day nor the hour in which the Son of Man is coming.
— Matthew 25:1-13, World English Bible
Clearly the story here is about improper tooth brushing technique. Now you have to keep in mind that in the olden days, what with all the people getting smashed in the face by things and bad oral hygiene in general, it was about average for people to have only 10 teeth, such that if you had more than that, you were considered a heretic and “encouraged” to donate some of your “extra” teeth to the military for use in fashioning arrowheads. Ignorance of these basic historical facts has left many unable to properly interpret this parable.
What still speaks to today’s society is that often people brushing their teeth will just lightly rub the front teeth, forgetting to reach the brush into their mouths to reach the molars and even the back of the incisors. The brusher fails to “oil” the back teeth (the term “oil” is a dead giveaway if you understand that ancient brushers employed petroleum, routinely referred to in the scriptures as “black gold” or “Texas tea”), and so when the “bridegroom” comes (this referring to The Doctor, who with his strange blue box can be seen throughout history, albeit often with a glamour to distort historical records of him, though in this case his presence in Jesus’s time is clearly indicated by the lack of a Doctor Who episode about Jesus that would settle the “historical Jesus” questions once and for all… Clearly the Doctor’s presence has only been revealed by the modern historical record in less controversial settings such as when he discovered the moon to actually be a giant hatching egg), then he will “not know,” i.e. not want to smell, those back teeth, since they are decayed and smelly. Don’t just wait for gingivitis! Be prepared, says Jesus! Sound advice, and it’s good that Jesus used the parable presentation so that everyone could understand, except of course only those in the know get to understand, as was Jesus’s intention.
While my account above clearly leaves no room for other interpretations, I encourage one and all to reply to this post on partiallyexaminedlife.com with your take on the matter. The one with the objectively best interpretation will gain the Kingdom of Heaven.
Well, as readers of this blog are no doubt aware, there’s an increasingly generous buffet of other philosophical podcasts to try. So how does a new podcast like ‘The Philosofa’ compete?
The premise of the show is to take two stand-up comedians as hosts, and let them chat with philosophers and other intellectuals about a philosophical topic. This is a good idea. As I’ve found from running Stand-up Philosophy for the last few years, letting comedians and philosophers talk to each other can have fascinating results.
Philosophers spend years – decades, even – on a single thought; comedians are quick, sharp, and keen to learn. When it works well, a good comedian can process and summarize philosophical arguments into crystals of intelligence that make for perfect podcast material.
‘The Philosofa’ may be a work-in-progress (the first two episodes are now available on iTunes) but it’s certainly on its way to creating that kind of chemistry. Comedian-hosts Helen Arney and Omar Hamdi are charming and there’s a clear rapport between them. Arney is calm and reasoned – as well as being very witty – and Hamdi brings a Tigger-like bounciness and clear enthusiasm to engage the guests.
And the guests seem to be of real quality. In the second episode, they were Nietzsche scholar Ken Gemes, Steve Keen of ‘Debunking Economics’ fame, and pop science author David Bodanis.
Ken Gemes in particular is an outstanding guest, developing arguments with a clarity and verve that makes him eminently listenable.
In fact, the best parts of the show were when the hosts took a step back and let the guests chat with each other. The exchanges between Keen and Gemes on the status of economics and medicine as sciences or pseudosciences were particularly fascinating, and it’s unusual to hear thinkers from such different fields talking so productively.
With this in mind, it’s also good that Hamdi seems to have reined in his first-episode habit of interrupting the guests – a sign of the effort the show is putting into getting really good. He’s still quick to push his own quips and mini-rants, but in a more disciplined style. The secret to being great on any panel show is that less is often more – and when the hosts ask questions and listen to the answers, both are superb at directing discussion.
If I had one complaint, it would be with the running time: at about an hour per episode, it’s a bit too long for the average commute, and a bit too short to go into anything in much depth. But this is a minor quibble, and it usually takes a few episodes before a podcast like this really settles into its format – so we can suspend these kinds of criticisms until it really gets going.
Overall, The Philosofa is a very promising addition to the world of philosophy podcasts, and if the quality of guests keeps up, it will be worth sticking in your subscriptions list next to PEL.
This post is the seventh in a series on Science, Technology, and Society. The previous post in the series is here. All posts in the series have previously appeared on the Partially Examined Life group page on Facebook.
“Science, it would seem, is not sexless: he is a man, a father and infected too.” – Virginia Woolf
“This is not about women doing science differently than men. It is about everyone doing science differently when the gender ideology shifts.” – Evelyn Fox Keller
Although it might seem a bit silly at first, if we (those of us who are men, at any rate) can step outside our gender for a moment and reflect on the language of science, it’s not hard to see that it is permeated with a great deal of oddly sexual and masculine language. Scientists choose subjects that are “wide open” for exploration, “probe” “fertile territory,” ask the “hard” questions, arrive at “penetrating” insights, and reach conclusions that are “pregnant with meaning.” Hopefully, if their arguments are “strong,” they will “erect” a “dominant” theory, which will “expose” the “laws” that “govern” nature. A reasonable observer might well wonder whether science is some kind of oedipal rape fantasy directed against “mother nature.”
According to Evelyn Fox Keller (1936 – ), an American bio-mathematician and historian of science who pioneered the study of gender and science, this language is no accident. In Reflections on Gender and Science (1985), she argued that, to an astonishing degree, the origins of science are steeped in weirdly gendered language. Francis Bacon, for instance, wrote an unfinished essay called “The Masculine Birth of Time” (c. 1605), frequently spoke of “dominating nature,” and famously declared that “knowledge is power.” In this same essay he said “I am come in very truth to lead you to Nature with all her children, to bind her to your service and to make her your slave.” This imagery was hardly atypical. For generations the Baconian conception of science competed against another, quite different tradition, which had its origins in the alchemical writings of Paracelsus and Cornelius Agrippa. Agrippa wrote a Declamation on the Nobility and Preeminence of the Female Sex in which he frankly praised women’s superiority, while Paracelsus thought of nature as a combination of masculine and feminine elements, and held up the image of a pregnant mother as the appropriate metaphor of discovery. What kind of science, she wondered, might have emerged from the tradition of Agrippa and Paracelsus, if it had been pursued with as much vigor as the Baconian program?
She insisted, from her own laboratory experience, that male colleagues frequently held to male-gendered biological theories (such as the “pacemaker” explanation for the growth of slime molds, or the “central controller” model of cellular growth), despite persistent failure to locate the theorized structures. On the other hand, explanations offered by female scientists, such as Barbara McClintock (one of the 20th century’s most distinguished cellular biologists), which laid more emphasis on holistic interactions, were routinely ignored. If problems of gender and its influence on research were squarely faced, she argued, science as a whole would benefit. The main thing, after all, is supposed to be the quality of the ideas – not the qualities of the people who propose them.
In The Mirage of a Space between Nature and Nurture (2010), Keller argued that this seemingly eternal dispute is actually an invention of nineteenth century biologists like Charles Darwin and Francis Galton on the one hand, and sociologists like Auguste Comte and Karl Marx on the other. Both sides had definite political motivations for drawing the distinction. Darwin and Galton were imperialist, conservative millionaires, who wanted to believe that traits were inherited in order to refute reformers who thought social reform could improve living conditions for the working poor. Comte and Marx were revolutionaries committed to the violent overthrow of existing governments, on the assumption that the only thing holding the people back was the intransigence of conservatives who profited from their systematic exploitation.
According to Keller, recent discoveries, such as epigenetics and a diet-based treatment for the IQ-lowering genetic protein deficiency called PKU, make it clear that the terms of the debate make no sense. A gene cannot and does not act independently of an environment, and neither is there a human environment independent of genes. Further, she argued that many studies of heritability and environment fail to clearly distinguish a multitude of meanings that can hide beneath a single, over-stretched term. As a result, they tend to assume too much on the basis of the evidence they actually discuss. It is not, therefore, a question of deciding one way or another between nature and nurture, or even finding a synthesis between two opposed approaches. Rather, the debate should simply be abolished as intrinsically absurd.
Evelyn Fox Keller is a leader among a generation of feminist scholars interested in questions of gender and science. Some of the other leaders of this movement, which emerged after the Port Huron Statement (1962), include Sandra Harding (1935 – ), who linked feminism to postcolonialism and the study of various oppressed (or “subaltern”) social groups; Lorraine Code (1935 – ), who argued for the validity of a uniquely feminine heuristic; and Elizabeth Anderson (1959 – ), who has integrated feminist critiques of science with moral and political theories of democracy. Although feminist philosophy (etc.) of science is a complex and controversial field, and these scholars frequently disagree among themselves as to what changes are desirable or realistically attainable, in general they share a commitment to broadening the scope of science so that it does not (as they argue it has and does) devalue feminine perspectives as a kind of structural principle. There is nothing particularly masculine, they argue, about advancing the cause of human knowledge – it is an activity that should be open to anyone with the intelligence and dedication to contribute.
Daniel Halverson is a graduate student studying the history of Science, Technology, and Society in nineteenth-century Germany. He is also a regular contributor to the PEL Facebook page.
Philosophical Fiction read Virginia Woolf’s novel To the Lighthouse for our conversation in April. You can hear us discuss the plot, characters, and themes, with quotes and passages, so… beware of spoilers. PEL members Dan, Cezary, Daniel, Laura, and I talk about Time, Beauty, and Life; we bring up Ulysses by James Joyce and The Map and the Territory by Michel Houellebecq – I also make one point by saying “Sympathy through the boots.”
You can hear Philosophical Fiction’s full conversation on To the Lighthouse on the Free Stuff for Citizens page (click the “Not School and Aftershow Discussions” tab and scroll down a bit).
‘And that’s the end,’ she said, and she saw in his eyes, as the interest in the story died away in them, something else take its place; something wondering, pale, like the reflection of a light, which at once made him gaze and marvel. Turning, she looked across the bay, and there, sure enough, coming regularly across the waves first two quick strokes and then one long steady stroke, was the light of the Lighthouse. It had been lit.
In a moment he would ask her, ‘Are we going to the Lighthouse?’ And she would have to say, ‘No: not tomorrow, your father says not.’ Happily, Mildred came in to fetch them, and the bustle distracted them. But he kept looking back over his shoulder as Mildred carried him out, and she was certain that he was thinking, we are not going to the Lighthouse tomorrow; and she thought, he will remember that all his life. (p.68, HBJ)
We are currently reading Kafka on the Shore by Haruki Murakami. You can join our conversations, give a suggestion or send us a great quote, by visiting the Partially Examined Life’s Not School at Philosophical Fiction.
Illustration: “Virginia Woolf” by Roger Fry – http://artprints.leeds.gov.uk/index.cfm?event=catalogue.product&productID=110410. Licensed under Public Domain via Wikimedia Commons
This post is the sixth in a series on Science, Technology, and Society. The previous post in the series is here, and the next post is here. All posts in the series have previously appeared on the Partially Examined Life group page on Facebook.
“The aim of physiology is to explain the organism in health and disease; the aim of mechanics is to understand machines which work and machines which fail; bridges which stand as well as those which fall. Similarly the sociologist seeks theories which explain the beliefs which are in fact found, regardless of how the investigator evaluates them.”
David Bloor (1942 – ) is a sociologist at the University of Edinburgh and founder of the Edinburgh School in the field of Science, Technology, and Society (STS). In Knowledge and Social Imagery (1976) he continued the Scottish tradition of philosophical skepticism and defined a new and controversial approach to the Sociology of Knowledge, called the Strong Programme.
According to David Bloor, sociologists in the past confined their investigations to the explanation of false beliefs. True beliefs were either self-evident or arose out of a rational process of discovery, and therefore required no explanation. Or, put another way, their truth was their explanation. This habit of deference amounted to a structural lack of nerve within sociology, which he called the Weak Programme. Bloor challenged his colleagues to explain true as well as false beliefs in terms of the same social processes. Just as natural scientists investigate all corners of the physical world, and do not admit that any kind of physical phenomenon is beyond their powers of investigation, so too the scientists of society (sociologists) should be willing to investigate the origins of all kinds of beliefs, whether true or false – especially the hitherto forbidden realm of scientific knowledge.
Bloor defined knowledge as beliefs which people confidently hold to and live by, and the goal of Sociology of Knowledge as the discovery of the causes which produce it. Scientific knowledge, like any other kind, has a social origin. That is to say, it arises out of a particular context, and that context is demonstrably a product of contingent historical and social factors. It follows, then, that scientific theories cannot be understood in separation from that context, any more than a work of literature, art, or music can be.
In consequence, both scientists and philosophers of science misunderstand the nature of scientific knowledge when they attribute the success (or failure) of a theory to its value (or lack thereof) as a descriptor of objective reality. This practice amounts to a teleology of nature (i.e., a faith that it contains a special purpose for man, and that this purpose can be treated as a causal factor in history). Because teleological explanations are always invalid in science (if not necessarily in philosophy), these kinds of explanations can and should be rejected out of hand.
Having dispensed with “The Weak Programme,” Bloor went on to outline his own, “Strong” alternative. A scientific (that is, sociological) theory of knowledge should adhere to four principles:
- Causality: the sociologist is concerned to explain how and why beliefs gain currency;
- Impartiality: he does not attempt to distinguish between true and false beliefs – instead of arbitrating truth claims, he investigates their origins and function in society;
- Symmetry: he applies the same causes to true and false beliefs – the same causal mechanisms will explain the success of true and false beliefs alike, just as the same mathematical principles explain the stability or instability of a bridge, and the same physiological principles explain the health and disease of the organism;
- Reflexivity: he applies the same analytical tools to himself and to his work.
These tenets were cast in the manner of methodological assumptions which did not have to be proven before they could be used. Rather, just as in physics, the validity of the approach would be demonstrated retroactively, by evaluating the practical results it produced.
Whereas Thomas Kuhn had suggested that science might not be an entirely rational activity, and Paul Feyerabend had drawn certain philosophical and political conclusions from a rather more strident belief, David Bloor was the first scholar to offer a methodological approach for systematic study. Rather than taking the claims of science for granted, and adopting an internalist view of science that either sought to justify its results or prescribe better practices, Bloor argued for an approach that ignores the truth status of scientific theories and instead concentrates on their social context of production. Needless to say, the idea that truth claims arising out of science can be ignored at all, let alone as a systematic methodological principle, was and is controversial. Between Kuhn, Feyerabend, and Bloor, the anti-realist and critical posture known to its opponents as “relativism” emerged as a serious position within the academy. Attempts to hash out the issues raised by these three, and to incorporate them in some way with the philosophy and actual practice of science, led to the establishment of Science, Technology, and Society departments across the United States in the 1960s and ’70s. It remains a vibrant, multidisciplinary field of inquiry today.
Daniel Halverson is a graduate student studying the history of Science, Technology, and Society in nineteenth-century Germany. He is also a regular contributor to the PEL Facebook page.
In contemporary analytic philosophy the word ‘exists’ has a very limited meaning: the things which exist are, more or less, the referents of our referential language. If I say ‘over there is a skyscraper’ then I take there to be such things as skyscrapers. And I take my sentence to pick out, in all that is, that which is a skyscraper (i.e. that my sentence refers to a skyscraper). This kind of existence does not presuppose any global nature to ‘Being as such’, nor does it connote any global understanding of what makes such existence possible. To say ‘there are skyscrapers’ is to say there are things of the skyscraper kind, but it is not to settle larger metaphysical questions about whether skyscrapers are ‘dream-skyscrapers’ or ‘just-skyscrapers’ (to distinguish between the case where we are in a dream or not).
This first-order existential language then is metaphysical in the sense that it provides the explanandum for metaphysics: it is that which is to be explained. The vast majority of metaphysical systems (varied enough to include Plato, Hume, Hegel, Heidegger, Quine, etc.) thus pay special attention to referential language or ‘first-order existential language’ as I have described it. The key questions about our ‘world-talk’ are: what makes such talk possible? What is the nature of the things it entails? How does our world-talk refer to the world? Or more generally, what is the relationship between world-talk and the world? Throughout the history of metaphysics then, philosophers have been motivated to provide a metaphysical framework in which to situate talk of this kind.
The key opening gambit in metaphysics here is in Plato (of course) whose parable of the cave begins by framing the duality (there being two) of world and world-talk as a dualism (consisting of two parts). For Plato, life is very much like being trapped in a cinema only seeing images of the world rather than the world itself. We’re to suppose it’s possible to leave the cinema itself, to ‘visit reality’ and to be amazed and astonished that we ever thought movies were ‘the real thing’.
It’s first important to note that this parable outlines a second-order framework for analysing first-order language. It says that language of the kind ‘there is a flower’ should be read as referring to movie-flowers (shadows), not Real flowers (forms). So there are two senses of ‘Real’ here: the first-order sense (there are, of course, flowers) and the second-order sense (but these flowers aren’t Real flowers). This notion of ‘Real’ then, of there being a second-order real, can create a lot of confusion. Plato’s flowers are merely real, but not really Real.
Claims of the kind ‘the table before me, and all tables, are just illusions’ should then be decomposed into two kinds of claim: 1. there are tables; 2. all the things which are tables are illusions. Thus, for example, old scepticisms should be understood as positions which buy into a dualism of the Real and wish to deprive many or all objects of Reality in the second-order kind: objects could be dreams, matrix-code, or anything else.
However, contemporary scepticisms in analytic philosophy are usually presented as anti-realisms which deny the first-order kind of reality (they don’t care about, or don’t presuppose, a Really real and a merely real). So an anti-realism about, say, electrons isn’t saying ‘yes there are electrons, but like everything else, they are shadows on the wall’; it’s saying, ‘no there are no electrons; electron just refers to a piece of mathematics’.
It seems plausible initially, at least, that we should be satisfied with first-order reality – so where do second-order notions of Real come from? It is my claim here that they arise due to a confusion in first-order talk (and, I think, must necessarily arise this way). Sometimes language refers: ‘over there is a cat on the mat’ is referring if there’s a cat on the mat – if I say it now, however, sat in a chair in a room with no cat, such a statement is not referring. There is no cat (and indeed, no mat). So we have a duality in our world-talk: there is talk that ends up being referential and talk that ends up being non-referential. This duality, I believe, has become metaphysically deified: the possibility of non-referential world-talk is elevated to the level of an unReality of world-talk altogether. Those instances where we do not find “what our words tell us to look for” are contrasted to those instances where we do find “what our words tell us to look for”, and this contrast is unduly given grand status as a split in the world itself rather than as a liability of language and world-talk.
Let’s focus this problem: how could a contrast between Real and unReal (e.g. Appearance) ever even be formulated? All we ever have is first-order talk: when we’re given examples of Appearances, these very examples are set within the language in which Reality is contrasted. The question ‘could everything be a mirage?’ can be immediately answered: no. A mirage is something which is set in contrast to something that isn’t a mirage; to be able to ask the question presupposes that some world-talk refers to mirages and some world-talk doesn’t. Appearance and Reality are in reality just the conditions ‘refers’ and ‘does-not-refer’. Appearance is that situation where you point to something and make a mistake, lie, or pretend (e.g. ‘there is no beer in the fridge’): you use language about the world that does not refer to it. Reality is that situation where you use referential language (‘there is beer in the fridge’) and it does refer. There are no examples to give ‘outside of’ first-order talk to evidence a second-order notion of Real.
Notice here that ‘mistake, lie, pretend’ are epistemological terms: they are about whether we know that our language refers. When I walk into a bathroom, just before I get into my filled bathtub, I believe that the water I’ve been running is hot: I think, in effect, ‘the water should be hot enough now’, but I might be wrong – the water heating system might be broken. Thus this duality concerning language is actually an epistemic duality: we are sometimes wrong in how we describe the world, and we do not know, ahead of time, when that will be. We may have correct beliefs (knowledge) but we don’t also know when we know (our belief that our beliefs are correct isn’t always correct). Sometimes our language refers, sometimes we’re right. Sometimes our language does not refer, sometimes we’re wrong.
Thus there is something deeply suspect when we’re asked to transpose these conditions into metaphysical divisions or dualism. We can phrase this suspicion as a question to Kant: Kant, you claim there is a noumenal and a phenomenal, a metaphysical split between what Really is and what really is, between what our words can never refer to – what we can never talk about – and what we always talk about – but where is this split? If we assume Kant is right and we’re stuck in the phenomenal (the cave), then how ever could we discriminate between phenomenal and noumenal? Where in the cave is the noumenal? It is nowhere. It is a presupposition that can never be evidenced. Kant transposes the duality of reference and non-reference into a dualism of phenomena and noumena.
If we’re in the cave, we cannot justify that it’s the cave (rather than the sunlight), so why even presuppose it? Thus we should say that there are no Appearances in the sense dualism supposes, no second-order appearance. It can appear that there’s beer in the fridge, and I might be wrong. But the fridge itself does not Appear – nor is it Real. So what are ‘appearances’, if not ghostly objects floating on top of the noumena? Let’s take the classic example of an appearance: the case where a stick appears bent because it is in a glass of water. Notice here everything is, globally, exactly how it should be – the way the world is, the layout of stuff we’re referring to with our language, is correct: how else could we have discovered refraction, if upon further investigation we had found an Appearance of bentness and not refraction?
Appearance enters when we interpret the world, produce world-talk – appearance is in the form of the proposition ‘the stick is bent’ – we take the situation to be one of bentness but we’re wrong; it’s one of refraction. We then contrast refraction and bentness and call one ‘Real’ and the other ‘Appearance’, but this isn’t evidence of a dualism but of a duality – our language doesn’t always refer. The bent stick isn’t a lens which gets us outside reality and into Reality to see that there is Real and Appearance. The world is always exactly as it should be; it never Appears to be. Appearance is a feature of our descriptions of the world.
Interpretation of the world, production of world-talk, is the act of describing what there is (correctly or incorrectly). If we are inside a cave, then world-talk is a description of what there is in a first-order sense only – it might be that outside the cave our first-order objects end up being something else (‘interpreting inside a cave’ might be called phenomenology). However, if we realise that it doesn’t even make sense to presuppose this dualism, to presuppose we’re in a cave, then first-order interpretation is the final interpretation (‘interpreting in the sunlight’ might be called hermeneutics). There is no ‘Real’ to speculate about and thus no science of appearance to suffer – our world-talk is talk of the world simpliciter.
Michael Burgess participates in and organizes several philosophical groups in the UK and has recently completed graduate work in physics. He is presently writing on issues in metametaphysics with a view to graduate study in this area, and has written and presented on a variety of other topics: from academic articles on Leibnizian historical interpretation to machine learning in a quantum computing context.
Illustration by John Holbo at examinedlife.typepad.com.
Drawing on John Dominic Crossan’s The Power of Parable, mainly ch. 3 (2012), Paul Ricoeur’s “Listening to the Parables of Jesus” (1974), and Paul Tillich’s The New Being, ch. 1: “To Whom Much Is Forgiven…” (1955).
Are these weird stories riddles, where if you figure out what they mean you get salvation? Are they homilies, telling you to go be like the Good Samaritan? Crossan says they’re protests, meant to challenge the moral and social status quo. So what do philosophers have to gain by studying these? What do they add to moral theory, and do you have to have any religious commitments to get something out of them? Mark, Wes, and returning guest Law Ware talk through the various ways of interpreting the Good Samaritan, The Sower, The Two Debtors, The Talents, the Hidden Treasure, the Ten Virgins, the Three Amigos, and more. Read more about the topic and get the books.
Sponsor: Visit thegreatcourses.com/PEL, for massive discounts on great lectures.
Jesus picture by Corey Mohler
I’m really not sure when the beginning of this song came into being: a couple of years ago, I think, but that could have been when I reminded myself of it, and it could have been from 2000 or before. I know it sort of made me roll my eyes: Who needs another song name checking Jesus, much less a snarky one? But with this episode coming upon me, I decided to finish it, and changed the simple strumming to a nice fingerpicking thing, added the chorus, wrote a bunch of verses, and tried to actually put some of myself into it, in that it’s (sort of) about my relation to atheist bluster. I grew up going to church, and was in the choir, and my early love of music was very much connected to sitting in church listening to that massive, awesome organ. I learned both how to sing out and how to sit quietly in reverence, i.e. how to be the ideal audience member for a concert, which despite how many bars I’ve since played does not, for me, involve drinking and gyrating vaguely around and having a shouted conversation.
As with my last attempt at one of these religious songs, the lyrics here lean on that old standby, the problem of evil, which would be sufficient to motivate atheism even if all the rest of the concepts involved made sense to me. The “noise” in question is the recurrent refrain of Jesus in the culture, which has a similar tone and source and negative disposition to me as commercial jingles. And just as with the advent of DVRs and streaming video, I pretty much don’t have to deal with commercial earworms any more, so has religion retreated to a safe distance, so that now I feel more eager about exploring it via podcast discussions just as I like to poke into other historical and cultural pockets to see what the fuss is all about. However, I wrote this prior to recording the current episode, and my indulgence is not so on display here. Despite our coming down on the new atheists as philosophers (and my appreciation of perspectives like Law Ware’s), my heart is still with them; the whole thing seems a disastrous historical mistake at this point, and this song is a lament of that fact.
About the recording: There’s no way in a band I could convince all the other guys to just stay out until the second or third verse on song after song, but continuing to run my solo recording gig here (I started laying this down a month back) means that I can indulge both my love of stripped-down guitar and vocal and, as the song progresses, add in some more layers, including my most elaborate cello part yet, some newly competent snare rolls (I’ve been playing drums pretty much constantly for the past few months), some guiro (typically associated with Latin music, but I just used it in lieu of hi-hat), a little falsetto choir at the end (me four times singing “oooo”), and I’m very excited about the return here of lead guitarist Geoff Esty from my college days with The MayTricks (if you like our interaction here, try these old gems: “Love Song #1” and “Bring You Down.”) While I layered a bunch of short, harmonized acoustic lines myself, any fast and fancy playing you hear here is him, as are those tasteful licks in the early verses. He did this cool reversed part 2.5 minutes in that I liked so much I electronically repeated it a bunch of times to make it into the bulk of the guitar solo (I did something similar over the part when the drums come in), and then he recorded this repeating/syncopated thing under much of the bridge that directly inspired my adding the percussion, and then I just harmonized some of the lines he came up with for the last chorus.
Those last layers came together more or less at the last minute; I had an oboe and a violin session both scheduled that got canceled for various reasons, and received Geoff’s part (as a single track) only a few days ago, so I had to scramble to fill all the gaps and make this seem finished to me. As a friend of mine pointed out, 20 years ago I might well have kept a song like this under three minutes, but I hope that the arrangement keeps things captivating enough to deserve the longer running time.
Our Not School mainstays are all continuing this April, and we have one new group running so far. There will also be more Aftershow discussions, with the one for #113 scheduled for Sun., 4/19 at 5:30pm Eastern. Go join the group to signal that you’ll be there. If you like the podcast but haven’t signed up to be a PEL Citizen, take a look here at what it offers.
First up, Mark Linsenmayer has started a group to listen to Thomas Sheehan’s Stanford lecture series entitled “Historical Jesus.” This ties into PEL episode #113, looking at Jesus Christ through a philosophical lens, so it should be an interesting one. The group will discuss the series over a Google Hangout later this month.
The Philosophical Fiction Group is planning on reading a Haruki Murakami novel, but we’re still voting on which one. There’s time for anyone who joins up soon to have their say in the matter, so don’t delay if you’re interested. A live discussion will follow whenever the group finishes the book.
The Philosophy and Theater Group is reading part one of Philip Auslander’s From Acting to Performance: Essays in Modernism and Postmodernism, which includes the first three essays in the book (around 30 pages). New members could easily catch up in time to discuss it with us, which we plan to do over a live call in mid-April.
Also, don’t forget to check the Citizens’ Forum for new group proposals. If nothing stands out to you, propose something new.
- Daniel Cole
The New Statesman is a ‘British political and cultural magazine’ – it’s mostly a place for budding writers to attempt journalism, or sarcastic British public intellectuals to write editorials. It’s the kind of place where articles begin with a tidbit of academic general knowledge, a theoretical curio, which is immediately dispensed with when the underequipped authors drop their crowbars. Though if they’re ambitious enough, they’ll pick them back up for the conclusion, to give the piece the illusion that it’s a coherent whole.
However clichéd, such sandboxes are vital places for writers to practice in public, a journalistic busking to save our revered stages from accidental dissonance. It must then be very intimidating when attempting a review of Zizek to have him pop up with a reply. And especially here, when there is no alternative: every review of Zizek is an attempt. It would be a rather mean thing for an A-list singer to walk past a street performance of their song and set up opposite to prove that the performer has it all wrong.
Nevertheless Zizek occasionally writes for the New Statesman, and we might read some calculation into a dismissive review of his books published there (‘perhaps if I sing terribly outside his house, he might come out!’).
I’m not going to be too critical of the reviewer however, he’s done nothing wrong in his misreading. Zizek knows very well how mischievous he is: with a deliberate relish (enjoyment, perhaps?) he produces statements designed to be misread.
Yes yes, we shouldn’t be idiots. We should be good readers of the text and understand what he meant – the Zizek religion is of course Jewish: the text has to be interrogated, not quoted verbatim!
We, however, either need to incorporate some eye-rolling into our reading of his replies like this; or Zizek needs to be more honest. Like a theoretical Loki, he’ll just keep on going: “oh Nazism Good? I never said that! I said it was GOoD!”; “Violence? Why never of course! Only vIolencE!”…
No doubt he sees himself as a Lacanian figure, kissing the cheek of culture at the right moment so as to disturb its psychical neuroses – each polemical word a calculated cure. From the other side, however, it feels very much like an old man has spat all over our faces. And when we decide to avoid the next session, he’ll call us to remind us that this moist therapy is essential.
Well, Yes, OK, Zizek: I’ll smack my son at his bar ziztka when he reads too much into Less than Nothing when you stop setting drool-ful traps for the teenage mind.
Nicholas Humphrey, professor of psychology at the London School of Economics, is a leading investigator of what philosopher David Chalmers dubbed the “hard problem” of consciousness. His recent book Soul Dust approaches the second part of the hard problem: why human beings have consciousness, and why consciousness should have evolved at all. It is an excellent read for anyone interested in philosophy of mind and the evolution of the brain.
While there have been many attempts to get at what consciousness is (or what consciousness is like; see PEL episode 21), the goal of Soul Dust is describing why consciousness is evolutionarily advantageous–or, more exactly, why natural selection has led to creatures with the remarkable quality of being phenomenally conscious. It is an interesting question, given that being conscious (not to be confused with being intelligent) does not seem to grant any survival skills. As Jerry Fodor famously asked, “What mental processes can be performed only because the mind is conscious, and what does consciousness contribute to their performance? Why then did God bother to make consciousness?”
Humphrey’s solution is not terribly complex. Consciousness, he claims, does not add or enhance some survival ability (as, say, wings allow birds to fly). Consciousness improves the chance of survival because it makes life worth living. Being phenomenally conscious grants import, meaning, and ego, essentially fooling us into striving towards fulfillment. Humphrey makes this point by quoting several artists, writers, and theologians, such as Oscar Wilde: “The aim of life is self development. To realize one’s nature perfectly – that is what each of us is here for. To love oneself is the beginning of a lifelong romance.” Quoting Thomas Nagel, Humphrey points out the strange fact that “There are elements which, if added to one’s experience, make life better; there are other elements which, if added to one’s experience make life worse. But what remains when these are set aside is not merely neutral: it is emphatically positive. . . The additional positive weight is supplied by experience itself, rather than by any of its contents.” To put it simply, “we accept that nature made sex pleasurable to encourage us to have more sex. Then why not make living magically delightful in order to encourage us to engage in life?”
Humphrey further argues that the desire to “experience life” has “increased our fear of death.” To quote Philip Roth, “I’m afraid of dying. . . I’m 72. What am I afraid of? . . . Oblivion. Of not being alive, quite simply, of not feeling life, not smelling it.” Consciousness fools us into believing that we are somehow individually unique and makes the experience of life, on balance, emphatically positive. It also allows us to extrapolate from the deaths of others that we are going to die and that death is the loss of all experience. This knowledge, something that nonconscious and less acutely conscious animals seem to lack, makes being alive precious and something that needs to be actively defended and improved upon, even in the absence of immediate threats.
Humphrey asserts that it is this love of being alive and omnipresent fear of death that motivates us to fight for our “honor,” “legacy,” and other abstract goals that just so happen to involve a lot of resource gathering, competition, and constructive activity that will positively affect our progeny. As per Humphrey, the more beautiful and meaningful life becomes, the harder the conscious creature will fight to improve and extend its own existence, and the harder it will work to improve the likelihood of the success of its offspring. Thus, being conscious is a huge evolutionary advantage and is explainable in a standard Darwinian framework.
Elsewhere in the book, Humphrey attempts to explain the how as well as the why–how consciousness developed and how it works. Much of this material is drawn from his previous work Seeing Red. But Humphrey stresses that this is the least of his goals with Soul Dust. His focus and attention is on the why. Humphrey confirms that he is on the side of Dan Dennett, Owen Flanagan, and other antidualistic thinkers who feel that neuroscience will probably answer the how in due time and that consciousness is entirely physical. His theory on the what and how is included mostly to avoid the criticism that he has totally ignored it, going so far as to say at the onset of the chapter on math and brain function that the reader can feel free to skip it entirely.
All in all, Soul Dust is a great read. It is a comparatively easy to understand text and is a convincing argument (if you are willing to be convinced that you are an animal and that there is no “soul”). For my own part, I have to say it makes a lot of sense. There are parts of life that make me want more life, and it seems this is also true for people much less fortunate than myself. If my love of say, very early mornings on Lake Mendota is a survival mechanism, it’s working.
You can find a lecture by Nicholas Humphrey summarizing the ideas of Soul Dust here:
If you were curious and confused as I was when Law started talking about the “second naïveté” on our Ricoeur episode, check out this page for a quick explanation. We start out (with the “first naïveté”) taking all these religious fairy stories at face value. We then grow up and acquire critical distance, which not only involves applying what we’ve learned by actually dealing with the world (e.g. you wouldn’t believe your spouse was cheating on you without adequate evidence, so why would you believe in this stuff?) and from science, but also applying the insights of Marx, Nietzsche, and Freud to look at your beliefs from these different points of view. But the agnosticism (at best) that typically results from this isn’t the end of the story.
Fundamental to authentic religion (for any religious existentialist) is spiritual, i.e. emotional engagement, which when it comes to Christianity at least has to do with “being called.” While the hermeneutic strategies to “open up the text” that Ricoeur presents are not simple or childlike, they’re only the first step in engaging with the ideas. If you understand “the meek shall inherit the earth” as a radical idea, what do you do with that? How do you apply it? How do you let it change you? Following Gadamer, we’re supposed to put ourselves at risk, allowing the possibility that the text could be life-changing.
I got to try this in preparing for episode 113 on Jesus’s Parables: Could I, by learning about the original circumstances (so far as they can be reconstructed) of their delivery and then their recording, really listen to them? Well, you’ll have to wait for that episode to see whether you think I got the message. For the most part, I’ll confess that I’m not sure how one is really supposed to institute such a “naïveté.” As Gadamer said, the only handle we have on a text is our own foreunderstandings, and though we can try to improve these by learning more about history, and literary tropes, and other topics, the end result of this is not in any sense a dropping of one’s guard. Surely this can’t be referring simply to the very first necessary step of not writing off the text out of hand, of admitting that there is some value in listening to the text at all.
According to the summary of Ricoeur linked to above, the naïveté involves being able to approach the symbols in the appropriate manner, keeping in mind (as per the Jaspers bonus discussion) that they don’t actually point to any specific thing. This is what the summary refers to as the “mystic stage,” so it sounds like it’s a matter of being open to a certain kind of religious/mystical experience: putting your mind at peace, clearing it of thoughts, focusing on your senses, your breathing, all that jazz, as one would have to do perhaps to enjoy certain types of jazz or other music that doesn’t obtrusively jump out and grab you the way “catchy” music does. But doing this only works if you’ve already done the prior work to understand (whether or not you can articulate it) the musical language involved (see our Goodman episode). The system of symbols for a given religion is still conventional, and not merely a matter of seeing the infinite in another person or in nature as Buber describes. But what system of religious symbols am I equipped to understand? I went to church throughout my youth, waving palms and lighting candles and singing in front of a massive, room-filling organ, eating the croutons and drinking the grape juice, listening to the stories and the homilies and the calls to be closer to God and follow the example of Jesus. That’s the language I learned, and its symbols are simple and, to my adult eyes, basically unhelpful; they’ve long since been replaced by music and love and certain movies in putting me in touch with that feeling that I used to call religious but seems well enough described nowadays as ASMR.
It’s easy to accuse anyone defending a mature version of religion of still just trying to covertly defend a set of much less mature, less admirable sentiments; the suspicion that Ricoeur brings to bear is more than welcome. When I hear the call to put one’s presuppositions at risk in reading the Bible, I hear “suspend your disbelief.” I hear “well, how do you know that miracles don’t really occur? Scientific laws are just generalizations of the observed, and can’t disprove divine exceptions.” I hear “lower your arrogant countenance in the presence of the divine!” These are the fore-understandings with which I approach such apparently enlightened approaches to Christianity, and the call for me to become impossibly naïve does not allay my suspicions. Will nothing short of a full seminary education, or a Ph.D. in religion, actually qualify me to read the symbols and so allow me to get the desired result out of my open heart?
Image credit: Ratna Sari
Quassim Cassam wants you to know that conspiracy theorists have bad character. In other words, bad thinking is not just bad thinking; it’s also a vice. Maybe Cassam is right. Intellectual character, or the lack thereof, is often overlooked, at least in general conversation. It’s not as though we have an overabundance of trust and tolerance in our public discourse, obviously, but we tend to see people who hold a multitude of unjustified beliefs as ill-informed, dim-witted, or maybe even insane. And while we easily find the nerve to accuse our ideological opponents of bad faith or insincerity, that’s mostly a purely moral accusation. Cassam’s idea is more interesting than that. The real problem with people who believe fervently in climate-change denialism or 9/11 “trutherism” is not that they’re dumb, crazy, or motivated by greed or power. Rather, it’s that they lack the intellectual character needed to form sound beliefs.
Think of an accountant who embezzles from their clients, a journalist who can’t find the courage to report the truth, or the employee who coasts on the job. These people are dishonest, cowardly, lazy. It may even be that these morally malformed creatures just can’t help themselves; such is the nature of their bad character. It’s the same with intellectual character. The belief-formation process doesn’t go wrong on account of low intelligence or a merely cognitive failure of rationality; rather, what’s to blame is a bad intellectual quality in the person. It’s not clear whether this approach takes the terms psychologists and philosophers already use to name rational shortcomings and simply recasts them as character terms, or whether it proposes alternative (morally laden) terms and concepts with allegedly more fundamental causal power, but never mind. I actually don’t doubt that a non-trivially large portion of conspiracy theorists have at least temperamental characteristics that lead them astray – characteristics that can’t be reduced to or equated with bad faith, mental illness, or purely cognitive failings.
My worry is instead over how poisonous this approach could be to our discourse. How about this for a possibility: suppose, every once in a while, our views turn out to be wrong (heaven forbid), but instead of merely expressing our disagreement, we accuse our interlocutors of possessing deep character flaws for disagreeing with us? Let’s take one example that’s particularly frustrating to me: the so-called “Hot Hand Fallacy” that basketball players, coaches, and fans are said to commit. (That this is a topic I would cite as frustrating, as opposed to something important like, say, climate change, may reveal something about my character, but that’s a topic for another day.)
In basketball, a player who hits many shots in a row is often said to have “the hot hand.” But statisticians have painstakingly pointed out that there’s no such thing. People just have a tendency to see patterns in randomness. Basketball coaches who, in the face of this evidence, still cling to their belief in the Hot Hand are accused by Cassam of having bad intellectual character; it’s not only that they hold a mistaken view about their sport. As for the Hot Hand Fallacy itself, there really is something compelling in the research of psychologists like Gilovich, Vallone, and Tversky, (hereafter GVT) who first identified the belief as a fallacy in the 1985 paper “The Hot Hand in Basketball – On the Misperception of Random Sequences.”
GVT cleverly got a bunch of survey participants to agree that after a player hits a shot or two, the player’s next shot has a higher-than-normal chance of going in, and this belief in rising probability was debunked by their research. Further, it’s demonstrable that random sequences of numbers are indistinguishable from what appear to us to be hot streaks. In other words, random numbers sometimes streak as well: 10101111111111 might look like a pattern, but random sequences also behave this way at times. You could even call each term H or M (for hit or miss) rather than 1 or 0 for the sake of maximizing the sense of similarity to shooting. Basketball fans have a hard time accepting all this. It’s not an exaggeration to say that conversations on the Hot Hand are often contentious. Tversky says, “I’ve been in a thousand arguments over this topic… and I’ve convinced no one.”
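The point that random sequences streak is easy to check for yourself. Here’s a minimal sketch (my own illustration in Python, not GVT’s actual method) that simulates 100 independent shots from a 50% shooter and reports the longest run of hits:

```python
import random

random.seed(7)  # fixed seed so the printed run is reproducible

# 100 shots from a 50% shooter whose attempts are independent:
# there is no "hot hand" here by construction.
shots = ["H" if random.random() < 0.5 else "M" for _ in range(100)]

# Find the longest run of consecutive hits.
longest, current = 0, 0
for s in shots:
    current = current + 1 if s == "H" else 0
    longest = max(longest, current)

print("".join(shots[:30]))          # first 30 shots, e.g. HMMHH...
print("longest hit streak:", longest)
```

Remove the seed and run it a few times: a streak of five or more consecutive hits shows up in most runs, even though every shot is independent, which is exactly the pattern fans would read as a hot hand.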
The topic even found its way into our pop culture when former Clinton Treasury Secretary, Harvard President, and Winklevii naysayer Larry Summers paid a visit to the Harvard Crimson basketball team for a pep talk. Summers asked the players if they believed in the hot hand. The players all nodded. Summers then informed them that they were wrong, firmly explaining the fallacy, the content of which must imply that years of practice cannot build the muscle memory that gets a player into the kind of groove that can lead to a genuine hot streak. How inspiring.
Lo and behold, it turns out there’s new data. It purportedly shows that there really is such a thing as a groove (and therefore the Hot Hand), and that the phenomenon can be teased out empirically. The data are apparently persuasive enough that even Summers has given a nod: “Better data plus better statistical techniques means we’re going to understand the world much better.” Openness to new data is certainly the kind of virtue we should all aspire to, but before we break our arms patting ourselves on the back, new data don’t resolve the problem of ambiguity that’s lingered all along.
Ludwig Wittgenstein taught us to pay close attention to the way words are used. GVT got people to agree that a player with the Hot Hand who hits a shot would display a rising probability of success on subsequent shots, but again it turned out this phenomenon couldn’t be justified statistically, so, fair enough. But to this rabid basketball fan of over 30 years, that definition, while not altogether incorrect, sounds a bit tortured when presented as exhaustive, perhaps something the torturers needed to get on record in order to proceed. But again, never mind; I don’t want to quibble with what some portion of people out there think the hot hand is.
What no fan worth his vintage jersey would believe is that GVT’s definition represents the only common way the concept is used. Indeed, if the researchers had been as curious about the subtleties in the way people use words as they were about statistical implications, we probably wouldn’t be having this conversation, as thoroughly clarified theses that narrow their headline claims tend to get less attention than sweeping ones. To be fair, GVT did allow that all the things people mean by “hot streak” might not be captured by their analysis. But over the years, the general notion of a hot hand has been said to be disproven by clever statistical techniques, and in any case GVT then pivoted back to their exhaustive definition, asserting without authority that however the terms “Hot Hand” and “hot streak” are employed, the common ways all imply that “the probability of a hit should be greater following a hit than following a miss,” and that “the number of streaks of successive hits or misses should exceed the number produced by a chance process.” In other words, the common notion as defined by GVT happened to line up perfectly with what their research claimed to disprove. But in the end, the researchers are psychologists, not ethnographers, linguists, or Wittgensteinian philosophers, so maybe we can give them a pass.
It’s just that the way the issue has been presented over the years has produced a messy conversation. So per Cassam’s accusation of bad intellectual character, we have little idea what the coaches have in mind when they dismiss the assertion that a fallacy is at play, because too little time has been spent clarifying the issue, too little attention paid to sifting through what might be meant by the key terms allegedly being debunked. Let’s be clear that it won’t do to simply retreat back to GVT’s exhaustive definition of the hot hand, because the issue gains its popular traction from the incredulous reactions of basketball fans (and apparently now, coaches) who are all entitled to their untutored uses of the concept, but whose views on the sport they love are dismissed by the statistical cognoscenti. Sure, the data tell us that admonitions from fans such as “Give it to Toney, he’s got the Hot Hand!” turn out to be bits of advice no more reliable than following random chance, even if Toney appeared to be on a hot streak.
Everyone should admit that this is an interesting result, and certainly the average fan isn’t entitled to just any belief. But the claim is sweeping: in order to demonstrate that the Hot Hand doesn’t exist, we would need to show not only that we do a poor job of spotting hot streaks taking place in real time, or that our odds of predicting whether the next shot will go in aren’t better than random chance. No, we’d also have to show that our retrospective looks at, say, Jamal Crawford’s 16 consecutive made shots in a 2007 NBA game are not memories of a hot streak in the relevant sense, and that in fact each shot was either random or due to skill only on a case-by-case basis: that Crawford was in no groove. That, no study has done, and it’s indeed hard to imagine how any could. That might sound like an unreasonable standard to have to meet, but after all, that would be the implication of showing that the Hot Hand doesn’t exist.
If I show you the results of a player’s shots over time (H’s and M’s), say, a string of shots like Crawford’s cited above, and you show me a randomly generated sequence (with each term likewise labeled H or M, for effect), you do not win the argument simply by showing me that random sequences also sometimes cluster in streaks, not without begging the question. The most you’ve done is to raise a case of underdetermination. In other words, one activity is admittedly random (the number generator) and the other, basketball, is at least allegedly influenced by human practice and skill, in a way that allegedly sometimes leads to grooves or “in the zone” experiences. That randomly generated sequences can look just like streaks doesn’t mean “in the zone” experiences aren’t occurring in one of the data sets. Just from the sequences laid out before us (the shots and the random sequence), we simply can’t tell whether both sets are random or one contains genuine hot streaks; the data can bear both interpretations.
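The underdetermination point can be made concrete with a toy simulation (my own sketch in Python; the “streaky” transition probabilities are invented purely for illustration). One shooter’s makes are independent; the other’s genuinely depend on the previous shot. Yet short printed sequences from the two are hard to tell apart by eye:

```python
import random

random.seed(0)  # reproducible output

def iid_shooter(n, p=0.5):
    # Independent shots: no groove by construction.
    return ["H" if random.random() < p else "M" for _ in range(n)]

def streaky_shooter(n, p_after_hit=0.6, p_after_miss=0.4):
    # State-dependent shots: a hit raises the chance of the next
    # hit -- a crude model of a genuine "groove."
    shots, p = [], 0.5
    for _ in range(n):
        hit = random.random() < p
        shots.append("H" if hit else "M")
        p = p_after_hit if hit else p_after_miss
    return shots

def longest_hit_run(shots):
    # Length of the longest run of consecutive hits.
    best = cur = 0
    for s in shots:
        cur = cur + 1 if s == "H" else 0
        best = max(best, cur)
    return best

print("iid:    ", "".join(iid_shooter(40)))
print("streaky:", "".join(streaky_shooter(40)))
```

Both printed strings contain clusters of H’s; without knowing which generating process produced which, a short sequence alone can’t settle whether a groove was at work, which is just the underdetermination described above.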
But before we get too enamored with data, maybe we can just think about it like normal people, which is an avenue that’s been open all along. Have you ever swung a hammer all day? If so, did you get better as the day progressed? Were there parts of the day when you somehow managed to hit the nail right on the head, rather than hitting your thumb, for a long stretch? The question is whether each swing was independent or whether at least some of the swings bore a relationship to one another, that is, whether they were related by virtue of being part of the pattern of movement you somehow achieved through all those swings. To be fair, the researchers were focused on basketball, not hammering, but both are physical activities, and the Hot Hand skeptics would have us believe that physical grooves cannot exist in shooting a basketball: that the influence of an “in the zone” experience could not be felt throughout successive shooting attempts.
After all, grooves are what Hot Hands are often made of. It’s not necessarily that basketball fans believe that some mystical quality has befallen an athlete, or that this power is such that we must throw the athlete the ball and the next shot will go in. That may be what some fans mean some of the time, but that’s not an exhaustive list of the concept’s uses. A retrospective look at a practiced athlete with muscle memory operating in a groove will do, so far as fair use of the concept goes (notice that this look-back doesn’t necessitate the general belief that each time a player hits a shot or two, the probability of subsequent shots going in rises). No one needs to consult statisticians or philosophers to confirm how obvious this is; just talk to some folks at a sports bar or ballgame and see if that use passes (“wow, Crawford really had the Hot Hand that night”). Our own reports of our naive experience shouldn’t be a trump card, but to doubt the most obvious forms of our experience in the face of ill-fitting data is to be constitutionally fickle.
So what should I say about Cassam’s mention of the hot hand fallacy when employing his suggested method of assessing intellectual character rather than just sticking to evaluating reasons? Given the irony that the so-called fallacy was dragged out by Cassam to demonstrate the bad character of coaches who believe in the Hot Hand, not to mention that the data are now divided on the topic? If there are any character assessments that apply, I’ll leave them to others, and we can imagine the back and forth that could ensue from there. In the wake of that thought, I wonder if our public discourse has the capacity for many negative assessments of one another’s intellectual character – how long we can go before our conversations become fit for cable news rather than reasoned discussion.
We should distinguish between two traditions within phenomenology: realist phenomenology and idealist phenomenology (fathered by Heidegger and Husserl, respectively). The distinguishing feature is how they treat their ‘pre-bracketed’ and ‘post-bracketed’ states. In the realist case, when we interpret (describe) the world we can bracket the truth of the claims epistemologically: we can say ‘there is a table there’ regardless of whether there really is one. In the idealist case we can metaphysically bracket claims: we say the description ‘there is a table’ carries no commitments at all about what is real or not, and indeed we can come up with a system of such ‘impoverished descriptions’ with no metaphysical commitments – and, in Husserl’s case, use these statements as a foundation for science.
So the realist phenomenologist says ‘there is a table there’ is a useful interpretation: it does certain things (coordinates our behaviour, engagement with the world, etc.), and it can do this while being true or false – but it nevertheless is about the world; it says that a table exists. The idealist phenomenologist says, rather, that it cannot be false because it has no metaphysical content; it is a mysterious kind of description that does not posit the existence of anything.
How does idealist phenomenology accomplish this? In The Idea of Phenomenology, Husserl outlines the key idealist move:
“Every intellectual experience, indeed every experience ever, can be made into an object of pure seeing and apprehension while it is occurring… [therefore,] it would make no sense at all to doubt its being.” (Lecture II).
The metaphysical trick here is to assert that the world we encounter is just a series of partial objects within consciousness (i.e., pure phenomena) rather than aspects of total objects which consciousness encounters. Thus, our global picture is this: consciousness is a container of objects of a special kind – phenomenal objects – and when we believe we have seen a bent stick (cf. refraction) we are “really” just encountering a series of bent-sticky phenomena within consciousness, and hence make no mistake at all.
Phenomenology thus transposes the problem of failed judgement (“there is a bent stick before me”) into metaphysical theatre: it isn’t a failure of judgement, it’s that you’re only in contact with phenomenal objects, not real ones – so you are certain, correct, and not failing in any way when you say “I saw a bent stick,” if we interpret this as phenomenological puppetry on the stage of The Mind.
Idealist phenomenology thus believes there is a privileged class of descriptions (interpretations of the world) that are immune from failures of judgement, true in virtue of being about phenomenal objects. I cannot myself, even granting idealist metaphysics, think of any description of the world that, in its particular ontological carving-up, makes itself immune from hermeneutic critique: there is, at least, no “privileged” way to carve up the world, and there are many terrible ones (such as those immediate ones that come to mind sitting in an armchair).
However, the foundational problem here is that consciousness is not a container for objects. This assertion mostly derives from another: that the world itself seems to be one way but is another, and thus in its initial state of “seeming to be” it cannot itself be real (that illusion is metaphysical). The world, however, never seems to be other than it is. When there’s a “bent stick” in some water, the world isn’t in a state of “seeming” where it has changed its nature – the world is exactly as it should be: light is being refracted through water. Our judgement that this effect is not refraction but a bending of the stick is the error. “Perception” (and seeming, illusion) is a property of judgement, not the world: the world only seems to seem. We perceive only in the sense that our encountering objects is not epistemologically determinative: we don’t always know what it is we’re encountering. That does not imply, whatsoever, that the world is thus itself “seeming to be” and so, in effect, is “really” inside consciousness.
Husserl’s reply to the metaphysical sceptic is thus to assert that we can have non-metaphysical interpretations of the world, by asserting that what we are interpreting isn’t the world but an ideal theatre – and thus just either eliminates reality or turns it into ideality, and in either case, neither bypasses metaphysics nor successfully replies to the sceptic.
So, what guarantees knowledge? Given that the response to this question in the idealist-phenomenal tradition has been to perform metametaphysical acrobatics (transforming consciousness into a container, objects into phenomenal spectres therein, and our encountering of objects into what constitutes them), can we do better? What exactly is the comparison here? Scepticism about knowledge led the idealists to do what, exactly? Merely to assert that knowledge was certain. Rather than say our judgements about the world could be foundationally in error, we assert that judgement is foundationally certain, and that the world thus bends to its abilities – so we bring the world under the umbrella of knowledge, within consciousness, and say “there is no problem of scepticism, because knowing makes things the case!”
The idealist tradition has, in each stage of its developmental reply to scepticism just asserted new things about the power of judgement, which incidentally only ever prompted a reformulation of scepticism in higher-order terms – rather than “that stick might not be bent!”, the sceptic says “there’s no way of deciding whether your knowledge is determined by your judgement or of the world itself or of some corrosive mixture!”, and here too we might just offer Nietzsche’s rebuttal to Kant:
“‘How are synthetic a priori judgements possible?’ Kant asks himself – and what is really his answer? ‘By means of a means’ – but unfortunately not in five words.”
The problem for those who take knowledge to require higher-order justification is that there’s always a higher order the sceptic can go to: idealist epistemology says that you can only claim “I know there is a table there” if you can also explain how you’ve come to know it, so the idealist builds up a metaphysical picture which guarantees the how, but then the sceptic replies: and how do we know your metaphysics is true? The realist, however, will simply say: go ahead and claim to know things; knowledge only collapses in toto if the radical sceptic is right, and in all other cases it doesn’t – and since we’ve no reason to believe the radical sceptic, we’ve no reason to throw away knowledge claims.
Thus the realist phenomenologist says only that the world is, in the general case, itself, and only itself. If it “seems” to be a delusion, the world isn’t changing; our judgements are. The world is not “external,” as though it could ever have been “internal”: the mind is not a container, but a process (of encountering). We are embedded in the world; as water runs through soil, the light of bent sticks runs through us. We don’t have any “absolute, creative, ideal freedom” to run it a different way. The stick isn’t made by our heads, the light isn’t a spectre of our mind – it’s running through us. The mind, insofar as it does anything, casts its own shadow on the world – a shadow which aids our judgement but is identical neither to it nor to the world.
Thus knowledge is provided contingently: whether we are actually correctly describing the world is determined by how things are, not as a necessary property, artificially guaranteed by judgement. If a person looks out over a desert and sees a mirage then his judgement is wrong about the world, and if another person looks out and sees an oasis, then his judgement is right. Neither person can alone and a priori give any certainty to their knowledge. There is “certainty” but it’s a property of states of affairs (of my body, the world, our relationship), not of the mental. The mirage-man does not have certainty, but the oasis-man does – as a feature of their relationship to the world, not as a feature of their knowledge: oasis-light is flowing through the oasis-man, but there is no mirage-light flowing through the mirage-man.
There can be no reply to the encircling sceptic, other than to turn his scenarios around on him, and ask, “but what would have to be the case for your alternative possibility to be true?”. And we always find a great mess which cannot be justified. In a field of alternatives where none can win out against the others on certain grounds, we’re compelled to the one which invents the least. And as far as metaphysical gymnastics goes, the least assertive is that which says, “my hand here, is my hand, in the world in which I am in, and I know this if, and only if, it is the case”.
The observation on which all scepticism rests is that judgement never comes with a judgement of itself, that to know a thing is not to know that you are knowing it or how you are knowing it. And scepticism only forces us to concede this minor point, and none other: not that their Rube Goldberg metaphysics is worth consideration, and especially not that we have to become Goldbergs ourselves.
This post is the fifth in a series on Science, Technology, and Society. The previous post in the series is here, and the next post is here. All posts in the series have previously appeared on the Partially Examined Life group page on Facebook.
“Einstein’s results again turned the tables and now very few philosophers or scientists still think that scientific knowledge is, or can be, proven knowledge.”
Imre Lakatos (1922–1974) was a Jewish-Hungarian philosopher of science and mathematics, and a long-time colleague of Karl Popper at the London School of Economics. For most of the war Hungary was ruled by the nationalist Horthy regime, which cooperated militarily with the Nazis but refused to cooperate in the Holocaust. When the Horthy regime was forced from power in 1944, Hungary’s Jews were exposed. Imre Lipschitz changed his last name to Molnar and joined a unit of Marxist guerrillas; his mother and grandmother were murdered at Auschwitz. After the war he changed his last name again (to Lakatos) and accepted a university position in the Soviet-backed Rakosi regime. He was jailed for several years, but released on Stalin’s death in 1953. He participated in the 1956 uprising, and had to flee the country when it was suppressed by the Red Army. He finally settled in Great Britain, where he earned his doctorate and spent the rest of his career.
Like Feyerabend, he worked to reconcile the traditional, prescriptive approach (Popperian philosophy of science) with the comparatively recent descriptive approach (Kuhnian history of science). Unlike Feyerabend, he thought an orderly process of discovery survived the merger – namely, the Methodology of Scientific Research Programs. According to Lakatos, these programs are composed of a “hard core” of assumptions which are not normally open to questioning, and a “protective belt” of claims that can be modified as needed in order to accommodate new and potentially problematic observations. When observations do not match predictions, the usual assumption is that the fault lies in the peripheral and not the core elements – however, if enough of these mismatches pile up, eventually the core itself could be threatened, and the Research Program jeopardized.
Unlike Kuhn, who held that a single Paradigm dominates all science at once, Lakatos argued that multiple Programs compete within or across fields simultaneously. The value of an individual Program might then be assessed by comparing the results it generates with those of its competitors. A progressive Program modifies its protective belt in a way that incorporates new information and generates new predictions; a stagnant Program incorporates new information but does not generate new predictions. The validity of a Program is a matter of its short term history of (and thus prospects for) generating useful information. However, a stagnating program is not necessarily a dead one – just as progressive theories can break down, worn out theories can be revitalized. Put another way, it is the dynamics, rather than the statics, of a program that determine its validity.
To take an example from the history of science, it was known in the 19th century that light had certain wave-like properties. Because a wave is a disturbance in a medium, and light clearly traveled through space, it was reasonable to suppose that space contained such a medium, which the physicists called “luminiferous ether.” Lakatos would say that the protective belt of the theory was modified in order to reconcile the hard-core assumptions with observations. However, as problems began to pile up for the Newtonian view of particle physics, and repeated experiments failed to discover any trace of the supposed ether, the core itself was jeopardized. This (among other things) led to what Kuhn would describe as a period of “revolutionary science” in the first decades of the 20th century, in which the new programs of Relativity and Quantum Mechanics displaced the old Newtonian program.
In the 1930s, however, physicists realized that the predicted behavior of galaxies under Einstein’s model of gravity did not square with observations. Reasoning that the observations (not the theory) were in some way faulty, physicists decided that a significant portion of these galaxies must be composed of a non-luminous (and hence invisible, or “dark”) matter. In the 1990s physicists also realized that galaxies were not slowing down (as one would expect if gravity is acting as a drag on their momentum) but speeding up. A new (and also invisible, or “dark”) form of energy was hypothesized in order to account for this. In both cases, the hard core of the program was preserved by modifying the protective belt. According to Lakatos, there is nothing wrong with this, because the validity of a program is a question of its momentum, not its internal consistency. However, because the confidence which scientists place in a theory is ultimately dependent on the success of its predictions, dark matter and dark energy cannot stay invisible forever. Eventually they will either be observed, or Relativity will be discarded in favor of some new explanatory model. In other words, the protective belt can only be rearranged so many times. The intense interest of physicists in the questions of dark energy and dark matter is propelled by this sense of suspense, for it is clearly a question of fundamental importance for their work whether Relativity will be confirmed by the discovery of dark matter and dark energy, replaced by some new Program, or perhaps challenged by a revived Newtonian Program (MOND).
Imre Lakatos died unexpectedly in 1974, of a brain hemorrhage. His papers, which had previously been scattered throughout various scholarly journals, were published posthumously in 1978, under the title Philosophical Papers. His discussion of Research Programs appears in Volume One. Perhaps also of interest is his philosophy of mathematics, published in Proofs and Refutations (1976), which held that mathematical proofs are fallible(!), and advocated an experiment-driven approach to their verification.
Daniel Halverson is a graduate student studying the history of Science, Technology, and Society in nineteenth-century Germany. He is also a regular contributor to the PEL Facebook page.
Should the social sciences be like the natural sciences? Wilhelm Dilthey didn’t think so. This late 19th- and early 20th-century figure, who went on to influence Martin Heidegger, Hans-Georg Gadamer, and Paul Ricoeur, contended that the concept of Verstehen is crucial in our interpretation of human thought and behavior. Verstehen literally means “understanding,” and Dilthey believed that whereas we look for explanations of phenomena in the natural sciences, Understanding or Verstehen in Dilthey’s technical use, as applied to the social sciences, means interpreting human behavior in view of generalizations made from descriptions of past or ongoing behavior and whatever judgments or practical rules we can derive from such behavior. Dilthey fundamentally believed that human beings are both historical creatures and creatures with complex agency, and both these assumptions make us treat what count as empirical data much differently in the social sciences than in the natural sciences.
Whereas the natural sciences look for laws that govern phenomena, Dilthey did not think we could be so lucky in understanding human beings. Think of laws in terms of counterfactuals. In physics, say, if you know a particle’s position and velocity, you can determine the particle’s behavior; you can even account for changes in that behavior. Not so with humans, beyond whatever general principles it’s possible to derive. You can wonder why Sam went to the company picnic and give some reasons why he decided to go instead of staying home, but you can’t know, apart from taking in a whole host of other countless factors, whether he would still go if he got a stomachache. Maybe he would, maybe he wouldn’t. Part of what we think about whether Sam will go under different conditions has to do with what we think is appropriate or inappropriate in the circumstance, and these conditions of appropriateness range from the most local possible to the most global. As Dilthey explained in his book Introduction to the Human Sciences, this tells us that our understanding of human behavior, unlike our understanding of other kinds of behavior, is inherently normative: it concerns what people ought to do in such-and-such a circumstance.
Yet we need not conclude from Dilthey’s characterization of Understanding that it is fundamentally antithetical to naturalistic explanation, and he himself did not seem to believe as much. Pace the way in which the concept of Understanding may have been taken up post-Dilthey, it is possible to construe his treatise on the social sciences as continuous with the rest of naturalistic inquiry. The biologist-turned-philosopher Massimo Pigliucci, probably unknowingly, agrees with Dilthey. He has argued that the way we interpret human behavior is irreducible to the way we investigate the natural sciences because, in addition to making use of normativity in the social sciences, we more particularly believe that there is such a thing as agency, that is, that people and other animals (and maybe robots too) act from thoughts and values. Pigliucci writes in his book Nonsense on Stilts, “Suffice to say that free will is a way to label the complex decision-making processes in which the human brain engages, consciously as well as probably subconsciously,” and he suggests that this is as naturalistic a way to do rational inquiry as any other.
Another consideration worth remembering, lest we make too much of the possible discontinuity between explanation and Understanding, is that what makes the social sciences real and reasonable endeavors, just like the natural sciences, is that people within particular fields generate hypotheses and check them against empirical data, which, Pigliucci reminds us, does not require experiments alone, since science “can be done with an intelligent use of observational evidence.”
The common thread in all science is the ability to produce and test hypotheses based on systematically collected empirical data (via experiments or observations). How these hypotheses are generated, how exactly scientists go about testing them, and the degree of success we can expect from different sciences varies from science to science and from problem to problem.
Dilthey argues that in the social sciences generally, the two considerations of paramount importance when thinking of human beings are their historical conditions and the degree to which human agency confounds our potential understanding. Pigliucci, however, pushes further, arguing that when we consider the sciences in general, we can easily see that historical considerations permeate several scientific fields and that the issue of human agency is one instance of the more general problem of complexity. What counts as empirical data, Pigliucci informs us, has a lot to do with the kinds of problems people are interested in solving within a field and with the degree to which that field is both historical and complex.
[O]n the one hand, we have a continuum from completely historical (paleontology, astronomy) to partially historical (evolutionary biology, geology) to essentially ahistorical sciences (physics, chemistry)… On the other hand, we have a second continuum, from sciences that deal with simple, highly tractable systems where one can apply strong inferential methods (physics, chemistry) to sciences dealing with extremely complex objects, where statistical treatment is necessary and where the ability to explain and predict phenomena is much reduced (evolutionary biology, psychology).
This point about the Two Continua generalizes beyond fields that are generally accepted as social sciences, such as Psychology, Economics, Political Science, and Sociology, to fields that have sometimes been considered separate from the social sciences and classed among the Humanities, such as Literary Studies or Religious Studies. If what has preceded is true, then to the extent that these fields make use of empirical data and test hypotheses, they are sciences. Once we bear in mind a couple of the common features of the sciences, and how the Two Continua bear on the kind of empirical data the fields can collect, we have less reason to think that the social sciences, or for that matter the Humanities, are radically discontinuous from the natural sciences. Human beings are both historical and rational creatures, and our empirical data follow from those bedrock assumptions; but then that’s no different from any other science, in which research programs are built on their respective assumptions, and in which those assumptions help determine which questions constitute interesting problems in particular fields.