The History of Literary Criticism

I recently read Joseph North’s Literary Criticism: A Concise Political History (Harvard, 2017). North is very good at characterizing what the field has settled into since the beginning of the 2000s, once the glory days of postmodernism were over. The main result of that settlement was to play down concern with the aesthetic dimension of literature in favor of teaching students to think about literature from a “historicist” and “contextualist” – code for “political” and often crudely ideological – perspective.

That much is clear, but the question is, how did it happen?

According to North, the Baby Boom generation that accomplished the turn from aesthetics to politics relied on a misrepresentation of the so-called “practical criticism” of I.A. Richards, which they encountered in the form of the American school of New Criticism. This, the younger generation believed, was an essentially conservative enterprise that encouraged political passivity by isolating literary value from the wider world. The rebellion against it culminated in the New Historicism, which paved the way for post-colonialism, queer theory, disability studies, and the rest.

A Modest Proposal

UC Berkeley’s law school recently changed its (unofficial) name from “Boalt Hall” to “Berkeley Law” because John Boalt, after whom the building that houses the law school was named, had “said racist things,” in the words of Dean Erwin Chemerinsky.

But how does referring to the school by the name of someone who not only said racist things but also owned slaves solve the problem?

The city of Berkeley is named for the notorious slave-owner Bishop George Berkeley (1685–1753), an Irish philosopher who took the opportunity provided by some years spent in Rhode Island to buy some of his fellow human beings and force them to labor on his plantation.

The Case Against Democracy

In his book Against Democracy (2017), Jason Brennan presents a formidable amount of empirical evidence to the effect that the more someone is involved in politics, the worse he or she becomes as a person.

  • The more you are involved in political debate (especially as the representative of a group or ideology), the less likely you are to reach reasonable conclusions. Participation increases people’s tendency to ignore facts that don’t support their position, to argue in manipulative and deceptive ways, and to adopt extreme views; in general, it makes people more biased and less reasonable.
  • The more active you are in politics, the less likely you are to talk with people whose views run contrary to your own. In fact, you may reach the point where you are unable even to imagine a point of view other than your own. As a result, the more active you are in politics, the less good you will be at doing what politicians are supposed to do: see enough sides of an issue to craft and sell a compromise.

Progressivism and Disciplinary Power

Over the last decade or so, “progressive” activists have exhibited a desire to regulate the personal behavior and values of their fellow citizens. Language, attitudes, expressions, gestures, feelings, and even thoughts are to be policed, with the aim of enforcing principles of conduct established by self-appointed “experts” in the workings of racism, sexism, classism, ableism, and other injustices.

Foucault’s concept of disciplinary power might conceivably help us think about the rise of illiberalism on the progressive left. There are at least as many differences as there are similarities, however, between disciplinary power and the regulation of personal behavior pursued by activists today.

What is disciplinary power? Foucault’s view was that after the Enlightenment had undermined the moral authority of religion, modern societies developed professional and academic disciplines that purported to use scientific methods to acquire empirical knowledge of human behavior. These sciences – psychology, sociology, economics, anthropology, criminology, medicine – established how human beings normally behaved under various circumstances.

Theoretically, “normal” meant “average.” But in practice, “normal” was implicitly taken to mean “good” or “ideal.” This, Foucault argued, made possible a form of oppression that was characteristic of liberal democratic societies: individuals “internalized” the norms established by the disciplines and regulated themselves accordingly. In this way, social scientific “experts” in human behavior played the role of the earlier religious and moral authorities.

What do Horkheimer and Adorno mean when they say: “myth is already enlightenment”?

Myth is “already” enlightenment because myth and enlightenment have something in common: the desire to control nature, rooted in fear of nature and in aggression toward it.

In myth and religion, human beings tried to control nature (at least to the extent of sustaining the right amounts of sun, rain, and fertility) by propitiating the gods, offering them sacrifices and other signs of devotion. Myth and religion understood nature in personal terms, seeing in forces such as sun, rain, and wind the recognizably human qualities of purpose and desire. Knowledge of reality was acquired by means of inspired mystical experiences, and passed down through the generations as authorized by tradition.

The central principle of the Enlightenment, on the other hand, was the sovereignty of reason. Reason is the highest source of intellectual authority, and its findings trump religion and tradition. Reason establishes the value of religion and tradition, but neither religion nor tradition can evaluate reason. As Immanuel Kant put it in the Critique of Pure Reason (1781):

Our age is, to a preeminent degree, the age of criticism, and to criticism everything must submit. Religion through its sanctity, and the state through its majesty, may seek to exempt themselves from it. But then they arouse just suspicion against themselves, and cannot claim the sincere respect which reason gives only to that which sustains the test of free and open examination. (Critique of Pure Reason A.xii.)

The Question of Being

In Being and Time (1927) Heidegger says that he wrote the book in order to “reawaken the question of the meaning of Being.” It’s important to pay attention to all the words in this phrase.

Heidegger’s question is not the one that Leibniz asked, namely “Why is there something rather than nothing?” That question asks for an explanation – a cause, a sufficient reason – of the fact that anything exists at all. Heidegger, on the other hand, wants to know what it means to say that something exists. Or rather, he wants to ask what it means. That implies that the meaning of Being is not well understood, which, Heidegger thinks, is significant in ways we don’t sufficiently appreciate.

Asking “What is the meaning of Being?” is paradoxical in that the use of the word “is” in the sentence implies that the meaning of Being is already known. The meaning of a sentence such as “The scarf is blue,” for example, seems clear enough.

That isn’t to say that “is” is completely unambiguous. According to logicians, “is” as used in the sentence above is just one of four ordinary uses of the verb “to be”: predication (as in the sentence), identity, subsumption, and existence. Logicians use different symbols for each sense of “is” to remove the ambiguity, but in ordinary communication it is usually clear which sense is intended – if not from the sentence, then from the context.
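
For illustration (the symbolization here is my own, not something the post or Heidegger supplies), the four senses can be written in the standard notation of first-order logic, where each gets a distinct form:

  • Predication: $\mathrm{Blue}(s)$ (“the scarf is blue”)
  • Identity: $h = p$ (“Hesperus is Phosphorus”)
  • Subsumption: $\forall x\,(\mathrm{Cat}(x) \rightarrow \mathrm{Mammal}(x))$ (“a cat is a mammal”)
  • Existence: $\exists x\,\mathrm{Scarf}(x)$ (“there is a scarf”)

The point of the distinct forms is simply that none can be substituted for another without changing what is said; the notation makes explicit what ordinary language leaves to context.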

Heidegger argues that each of the four senses derives from a more fundamental sense, although that sense has receded so deeply into the background that we’re not fully aware of it. Being, Heidegger says, means that something is present to us in a way that makes sense. In Heidegger’s various formulations, it has been “uncovered,” “unconcealed,” “disclosed,” “granted,” or “bestowed.” To put it differently, Being is that which reveals. In Being and Time, that which reveals is our comportment – the “understanding of (the meaning of) Being” that’s embodied in our ability to respond differentially to the various entities in the world. For the later Heidegger, that which reveals is language, and changes in philosophical language over time track the variations on the Platonic understanding of Being that constitutes much of our spiritual history.

On Self-Reliance

In “Self-Reliance,” Emerson says that “[w]hat I must do … not what the people think … may serve [as the rule] for the whole difference between greatness and meanness.” Later he adds that “we have not chosen [our occupations] but society has chosen for us.”

If the problem is that society has chosen our occupations for us, then the remedy would seem to be for us to choose for ourselves. But that’s not quite right, because the choice of an occupation – or rather, a vocation or calling – is importantly different from the ordinary exercise of free will.

In “Spiritual Laws” Emerson writes:

I say, do not choose; this is a figure of speech by which I would distinguish what is commonly called choice among men, and which is a partial act, the choice of the hands, of the eyes, of the appetites, and not a whole act of the man. But that which I call right or goodness, is the choice of my constitution; and that which I call heaven, and inwardly aspire after, is the state or circumstance desirable to my constitution; and the action which I in all my years tend to do, is the work for my faculties.

The “work for my faculties” is my vocation or calling. The choice of vocation is not accomplished by “what is commonly called choice” but is rather “the choice of my constitution,” i.e. of my whole self. A vocation chosen by one’s “whole self” is one that defines oneself, and in that way it is more like an unconditional commitment than a decision, relative to the circumstances, to take one course of action rather than another. Committing oneself to a calling is something like yielding to necessity.

What did Nietzsche mean by “decadence”? Has the culture become decadent?

The short answer is that decadence, for Nietzsche, is being drawn to what is bad for you.

Acting effectively requires self-confidence, great passion to achieve one’s aim, and unity of purpose. To be very successful, and certainly to achieve anything truly great, all of one’s abilities and all aspects of one’s personality must be devoted to achieving one’s aim.

This requires self-mastery, by which Nietzsche means the ability to cultivate one’s drives, desires, and abilities in ways that maximize their contribution to one’s project. Self-mastery is not achieved by conscious deliberation alone; it is an “instinctive” ability to do what is good for you. Faced with a choice, one with self-mastery will identify the best course of action without needing to deliberate.

He guesses what remedies avail against what is harmful; he exploits bad accidents to his advantage; what does not kill him makes him stronger. He collects instinctively from everything he sees, hears, lives through, his sum: he is a principle of selection, he leaves much behind. He is always in his own company, whether he associates with books, human beings, or landscapes: he honors by choosing, by admitting, by trusting. (Ecce Homo, “Why I Am So Wise” §2.)

The decadent, on the other hand, chooses what is bad for him, again in a largely non-deliberative way. A decadent or “corrupt” person instinctively seeks out that which harms him.

What did Wittgenstein mean when he said that if a lion could speak, we couldn’t understand him?

This is a much-discussed aphorism (Philosophical Investigations II 190), and even now Wittgenstein scholars differ over how it should be interpreted. But everyone can agree that its meaning depends crucially on what Wittgenstein means by understanding (verstehen).

Here’s what he says about that in Philosophical Investigations §§531–532:

We speak of understanding a sentence in the sense in which it can be replaced by another which says the same; but also in the sense in which it cannot be replaced by any other. (Any more than one musical theme can be replaced by another.)

In the one case the thought in the sentence is something common to different sentences; in the other, something that is expressed only by these words in these positions.

Then has “understanding” two different meanings here? – I would rather say that these kinds of use of “understanding” make up its meaning, make up my concept of understanding.

For I want to apply the word “understanding” to all this.

At one level, understanding means grasping the general meaning of an expression. At this level, “I’m going to walk the dog” and “I’m going to take the dog for a walk” mean the same thing. If you heard someone (call her Alice) use either expression under the right circumstances, you’d be justified in forming the expectation that she would soon walk the dog. You’d have understood her well enough to predict her behavior.

Men Without Art

What would life be like without art?

One answer to the question is that it would be like the life of an animal.

The human mind is distinctive in that we have “meta-beliefs”: beliefs about our beliefs, such as that a belief is true or false. We also have meta-desires: we desire our desires to be appropriate, and we are sometimes concerned that our emotional reactions are inappropriate. We are not merely conscious, we are self-conscious. We not only know things, we know that we know them.

It’s impossible to be certain, but so far as I can tell my cat isn’t conscious in this way. When she forms the belief that a mouse is within striking distance, she doesn’t ask herself how she knows that her belief is true. She just knows that a mouse is near. And she certainly doesn’t ask herself whether she has a right to attack the mouse. She simply pounces when the opportunity arises, without any moral deliberation at all.