Archive for ‘self-irony’

December 18, 2008

More thoughts on the lumpenbourgeoisie

by Carl Dyke

*I’m staying away from faculty unions for a second in this one. Yay, unions. For further discussion in that area see Dean Dad’s post linking several more from across a spectrum of circumstance and opinion. Here I’m sketching some more general ways to think about the liberal academy and disagreements/conflicts therein.

*One way a perfectly good discussion can run aground is if the participants are cognitively or morally or aesthetically mismatched between the view that things are/should be either one thing or the other, and the view that things may/should be complex assemblages of disparate elements. There is a lot of leverage in simplification, a clear enemy and a clear agenda, as we know from the histories of racism and sexism; but as those examples show, if it’s programmatic rather than true to life, the thoughts, feelings and actions that result are distorted and distorting.

*What is the liberal academy good for? It’s certainly not to prepare people immediately for employment, although when we’re desperate we trot out marketing slogans about how our degrees prepare folks to be effective in any career. We do have functions in the production of a value-added educated labor force, but honestly there are way more efficient ways to do that than degrees in medieval literature or classical philosophy. Our legitimating, hegemonic functions are probably more a matter of lingering (convenient) habits than careful planning and effective resource-allocation by the class overlords at this point. Nor are we and our graduates, at least generally, happier or more fulfilled than the average bear.

*We’re not structurally that important. A little legitimation, a little status, a warehouse for surplus labor, a containment system for irritating radicals (this is the mistake the Russians made in the 19th century – they trained a critical intelligentsia to show how progressive they were, but gave them nowhere to roost). In a sense we’re pets. We are paid accordingly. When academic administrators try to tap into a more corporate model they are trying to tap into a higher and better-compensated level of structure. They’re following the money, of course they are. To do that they need to look right (pdf, Chaudhuri and Majumdar, “Of Diamonds and Desires: Understanding Conspicuous Consumption from a Contemporary Marketing Perspective”) to the target audience, which is why they need better salary, amenities and perqs than the workforce. This is no mere venality, but a bootstrapping investment; it’s a smart one, although it’s not at all clear that it can succeed. But if it fails, the alternative is to not be tied into corporate funding, which puts the whole institution at the mercy of the market and of the indirect scraps of corporate success the government in a capitalist society is able to skim off. And it is all ultimately tied to the U.S.’s ability to extract far more than our ‘fair’ share from the global economy.

*If we’re good for anything apart from the little services mentioned above, it’s to practice, model and teach the arts of complexity and dispassionate analysis (Weber’s “science as a vocation,” Bourdieu’s reflexive “interest in disinterest” [I apply this kind of analysis at length here – pdf]) — to produce more thorough, balanced and reliable understandings of the world. This is a way cool thing we know how to do! We can start with us. Competence in the humanities = ability to construct persuasive accounts of multiple perspectives. Joining a gang is not critical thinking. Partisanship is instantly delegitimating. Can we do better, or at least differently than that? Bracket our biases, even overcome them, as we teach our students to do? Speak truth to power, not shout our corporate interests and conveniences at power? Well, here’s a test. Is the academy a simple place with heroes on one side and villains on the other? Here’s another one. Can we see the ‘problem’ of academic proletarianization as a direct and elementary unintended consequence of the expansion and liberalization of higher education to include proletarians? Just as the inclusion of women feminizes institutions by downgrading them, and the extension of voting rights inevitably dilutes the value of each vote. Yay; oops. A sense of humor helps so much here.

*At this point we’ve got mass institutions trying to do elite work. That’s a recipe for disappointment on all sides. We’d all like a pony. You can have wealth, status and distinction or you can have openness and inclusion; you can tweak a compromise mix, which is the game we’re really playing now; but you can’t have all you want of everything at once.

*I’m just sayin’.

December 14, 2008

Grading

by Carl Dyke

I’ve figured out, again, that if I want to get any serious work done I have to get away from my distractions. So for the last few days I’ve been taking up tipspace for as long as I thought polite at a series of restaurants and coffee shops, buying just enough to rent the table and overtipping like mad, bringing down the atmosphere of leisure no doubt with my gradebook, big piles of papers and journals, and the occasional anguished cry.

Actually the papers have not been half bad overall. I finally figured out that I need to teach what I want them to know and do, so I’ve got my teaching and assessing aligned (I ‘teach to the test’). It’s sort of amazing to me how long I got by with a reputation as a good teacher without doing that. Anyway, the papers are much better when I’ve taught them how to read, think and write in the ways one must to write a good paper. ‘What gives’ is content coverage. It’s amazing too how little I care about that if by the end of the class they can develop a coherent, well-supported thought about something in particular relevant to the course for 5-8 pages.

December 7, 2008

Readability: Hitler

by Carl Dyke

The Dec/Jan 2009 issue of Bookforum has an interesting interview with Timothy Ryback about his book on Hitler’s Private Library. (In Pierre Bayard’s readability system as discussed in How to Talk About Books You Haven’t Read, to which I will be introducing my students in the sophomore seminar in the Spring, Ryback’s is an HB+: a book I’ve heard of and have a good impression of.) Ryback was able to identify a number of Hitler’s most personally significant books from more than a thousand housed at the Library of Congress; and prompted by Walter Benjamin’s “Unpacking My Library” he was able to use them to figure out some things about Hitler as a reader and a thinker.

Most notably, Ryback found that “Hitler was animated not by the excitement of the autodidact discovering a vast world of knowledge but by the intellectual insecurity of a high school dropout who needed to overpower everyone else in the room.” (I know plenty of Ph.D.s with the same insecurity, but the point is we know this type.) Hitler’s genius was for collecting very broad, very shallow knowledge. He liked encyclopedias. He was not a critical reader or thinker; he took what he read at face value and lumped everything together without distinction. In conversation he was a dazzling reciter of facts, constructing detailed but superficial comparisons by juxtaposition.

So far so good – a nice triangulation of something we already knew or at least assumed about the guy. Perhaps the surprise is that Hitler was a compulsive reader; we might have thought him even more shallowly absorbed in his own wacky thoughts and the echo-chamber of his cronies than that. But here’s where Ryback seems to get into some trouble. Ryback is an old-school liberal artist and bookworm – one imagines corduroy, tweed and elbow patches – who struggles to imagine how reading could not be positively transformative. “‘We believe literary reading is an ennobling enterprise,’ he says. ‘The underlying assumption is that we are better people for reading. What’s shocking about this is that we had a man who read to fuel exactly the opposite, everything that was destructive to intellectual processes. Out of this imbibing emerged such evil that it flies in the face of what we believe reading actually does.'”

Not so fast with that “we,” Tim. We’re not all congregants in your religion; books do not light up all rooms with their halos. If books are sacred things you might be right, but if they’re human things, not so much. And sure enough, books are read, and first written, by human beings, who are what they are before they write or read any particular book. Good humans usually write good books, and bad humans generally write bad books, although the reverse can sometimes be true. Good humans tend to prefer to read good books, and bad humans gravitate toward bad ones (Hitler was a big fan of Henry Ford’s and Madison Grant’s racist tracts); but also good humans may read bad books well, and bad humans may read good books badly. Nor is it a simple thing to sort out ‘good’ and ‘bad’ with respect to humans or books. One needs a moral system for that, and moral systems are contested.

We are riddled with confirmation bias, hard-wired for jamming new data into old schemata. Of the three basic kinds of analytical thinking – habit, belief, and theory – only theory is readily subject to disconfirmation by new information. I sometimes tell students that the way to tell if your theory is a good one is to track your surprise. A good theory will prepare you for reality, a bad one will leave your head spinning every time something that doesn’t fit happens. By this standard, the theory about the ennobling powers of literary reading is a bad one; but of course, if it’s really a “belief,” as Ryback says, and not a theory, his surprise will motivate no substantive transformation of his thinking. And sure enough, his own reading will not have ennobled (or better, enlightened) him, either.

November 26, 2008

Class consciousness in the lumpenbourgeoisie

by Carl Dyke

I’m going away for the long weekend, so for those of you escaping the loving clutches of family and unrescued by football I thought I’d leave a long, debatable one to chew on. It’s rough (I wrote it around the edges of a lot of grading) but if you’re patient and read generously I think the gist is here. I promise to reply faithfully to comments when I get back.

One of my first posts on this blog ventilated my thoughts about academic labor. Now a new institutional outrage in the Tennessee higher education system, which pays adjuncts $15k a year without benefits for a 5/5 teaching load (five courses per semester; four is generally considered high for permanent faculty), has once again refreshed my treachery toward the interests of my class. See, unlike many of my colleagues I am not convinced that it makes sense to describe people getting paid for academic work as exploited, oppressed, overworked, downtrodden, what have you. And although I am affectionately sympathetic to this kind of argument, and believe it is appropriate and strategic to make in a lot of situations for a lot of people, I think it is in some important ways counterproductive for academic professionals to make it about themselves.

Of course ‘making sense’ only happens within structured systems of meaning – cultures, theories – and I don’t mean to sidestep the relevant one here. Obviously it makes no sense to a calvinist, a daoist, a stoic, a burkeian conservative or a libertarian to describe academic work as exploitive, because that standpoint of critique does not exist in those systems of meaning. I’m saying I don’t think it makes sense to describe academic work as exploitive in marxist terms, which is the native vocabulary of such critiques. I’ll mention in passing that I also don’t think it makes sense to describe academic work as ‘oppressive’, but only because I find that to be a catch-all pseudo-critique that’s flung about by some folks on the left like monkeys fling poo anytime something upsetting happens.

Marx of course wanted to smash capitalism, but he admired capitalists and considered them a progressive historical force: destructive in important ways, usefully doomed by their own success. His disdain was reserved for well-meaning clueless intellectuals of various kinds, whom he considered worse than useless, including utopian socialists (“Communist Manifesto”), liberals (“On the Jewish Question”), Young Hegelians (The Holy Family, The German Ideology), anarchists (The Poverty of Philosophy), reformers and trade unionists (“Critique of the Gotha Program”). For a brilliant redeployment of these critiques onto recent radical politics, see Robert Meister’s Political Identity: Thinking Through Marx. There’s a lot of sophisticated suspicion of the radical cred of eggheads in these references, but we’ll start with the obvious:

College professors are not proletarians.

I sometimes jokingly refer to my years as an itinerant adjunct as strawberry-picking, but it’s only a joke because it’s transparently silly. I did honest work but I wasn’t breaking my back in the hot sun, humiliated, subordinate and expendable, little more than a sentient machine. My working conditions were pleasant (I find schools pleasant), I enjoyed virtually complete autonomy in my workplace, I was respected as a professional and got full social credit for my work. Although I was sometimes needed, sometimes not, I accepted my responsibility to make my work ongoingly desirable. And as an independent contractor I could say screw this anytime, and I fully controlled the means of my production. It’s an insult to the struggle of real working-class folk to compare my life to theirs.

I was not well-paid. I’m still not by professional standards. Big deal. I coulda gone to law school. I make enough to live on. All needs beyond subsistence are social (Grundrisse) and I’m comfortable with many sociabilities. More importantly, since the bourgeoisie are themselves alienated in their own way, every bit of what I do in this job is my choice and my responsibility, or logically follows from my choices and responsibilities (e.g. there must be administrators; there must be assessments; to fight these things is to fight ourselves). I do not produce commodities, I work with students; and they are mirrors in which I see reflected my essential nature. My work is inherently satisfying, “a free manifestation of life, hence an enjoyment of life.” In short, my labor is unalienated and I am fully in touch with my species-being.

I was content to kibitz on other people’s posts about this until an intriguing reader comment at the post on this scandal at Easily Distracted drove me past the word-count threshold of polite commentary. Here’s what PQuincy said:

And I think we are exploiting adjuncts whom we pay $4500 a quarter for one course! Evidently, the market for academic proletarians is highly variable by region and institution.

But that still doesn’t justify radically divergent pay-scales for different groups with fundamentally similar qualifications. The steady differentiation between ‘full-time’ and ‘part-time’ faculty may be part of the ongoing commodification of expertise, but paradoxically, it also contributes to our ongoing movement (back) towards a society of estates in which privilege and distinction, not qualification, are primary determinants of status, and in which rent-seeking, not profit, drives all sorts of economic decisions.

This is a nice challenge. Just for reference, in the late 90’s I was paid as little as $1200 and as much as $3500 per class; as a tenured associate professor I am currently paid about $1700, I believe, for overloads. I don’t think these numbers are important in themselves, nor did Marx. There’s nothing about “justifying” different pay scales in Marx, or about fairness. In a capitalist economy everything is commodified, expertise being no exception. And as the expert and highly qualified Lumpenprofessor points out, in a capitalist economy work is not paid by its quantity or quality (the “labor” itself), but by its cost of reproduction – the amount it takes to get someone to do that work when it needs doing (the comments on his post are also illuminating):

Instead, Marx demonstrates that what the wage actually pays for is our “labor-power” — our capacity to do work. The wage pays a value equal to our means of subsistence — our house, car, food, clothes, cable-tv, health care, and kids — so that we can continue to come to work. This means that there is always a difference between the value of the wage paid and the value of the actual work done. The greater this difference, the better it is for the employer. This means that the difference in wages between tenure-track and adjunct faculty is not really about the amount or quality of work done, it is just about how well they eat.

That $1700 has nothing to do with my qualifications or my effort or my teaching ‘outcomes’. It has to do with securing a set minimum quality and quantity of work as needed. Apparently it’s sufficient, because I keep teaching overloads. From the labor-as-such standpoint all that matters is that I do it ‘well enough’. If I do it better than ‘well enough’, that’s a nice bonus for the students, the school, and my sense of vocation, but it’s irrelevant from a pay standpoint as long as I or someone enough like me keep(s) being willing to come back for the same pay.
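To put the Lumpenprofessor’s point schematically – this is my own gloss and notation, not his or Marx’s – let v stand for the value of labor-power (what it costs to get someone adequate back into the classroom) and w stand for the value of the work actually performed. Then the employer’s surplus is

s = w − v

and the wage relation pegs v to the cost of reproduction alone. w can rise with effort, qualifications or ‘outcomes’ without moving v at all, which is why teaching better than ‘well enough’ shows up as s, not as pay.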

It’s not hard to explain why the University pays adjuncts the minimum amount it takes to get them coming back. It’s much harder to explain why they ever pay more than that. And as long as we herd like lemmings to graduate programs and spend years earning doctorates for which there’s little apparent market, we will have little leverage to change this. No doubt it’s a nice ego boost to have a doctoral program at your school. Each new one incrementally damages the collective bargaining power of academics as workers. We’ll either need to dramatically cut our production of competitive laborers or wait for the revolution to solve that one.

But again – college professors are not proletarians. And the University is not (just) a capitalist enterprise. We operate in a capitalist context, which tends to drive the economics in ordinary ways. But there are also larger fiduciary responsibilities involved: the University is providing a service considered to be a general social good, and therefore providing as much of it as possible as cost-effectively as possible is a positive social good. Who is our employer? Students; society; the imagined community of a fully-educated population. For this reason, it also ought not to be hard to explain why committed academic professionals cheerfully provide instruction at levels higher than required to reproduce compensation. This is our mission, our ‘vocation’ in the calvinist/weberian sense, not just our job. We want our employers to get maximum value out of us for minimum cost; we should be actively complicit with this ‘exploitation’. To grub after money and quibble about what our colleagues make is a violation of our species-being.

Furthermore, when PQuincy says that pay inequality in academe “contributes to our ongoing movement (back) towards a society of estates in which privilege and distinction, not qualification, are primary determinants of status, and in which rent-seeking, not profit, drives all sorts of economic decisions,” s/he is on the right track, but there’s not a movement back here. The professional professoriate has always been a guild; its distinctive self-image, privileges and prerogatives go back to the medieval university. That’s why we wear the dopey robes to graduation. Our remaining a guild is the only way to explain the fact that academic work has not been completely proletarianized, with price tags explicitly and universally attached to our every ‘product’ from teaching to advising to scholarship. Like all guilds, we are paid much more visibly in status and autonomy than mere, crude, dirty money.

Our leverage to get more than the market price of our labor-power and better than the usual conditions of work comes from that status, and is dependent on the University’s hybridity as both an enterprise within the economy and a status-conferring holy place hovering above it. It is accordingly catastrophically counterproductive to sink to the discourse of proletarian exploitation; it’s like throwing away the face cards in your hand and playing to lose. The question is not how to make ourselves more like workers by unionizing and struggling and Fighting The Man and whatnot. Nor, more generally, is unionizing even a way to achieve fairness and equality, and it’s a very dangerous strategy in its own right, as the UAW is currently discovering. We need to be working out ways to redescribe our status and privileges as foreshadowings of unalienated labor, then figuring out how to generalize this, not scrambling to join the chorus of the exploited – if for no other reason than they know better, and when they have their revolutions we eggheads are always among the first to get taken out and shot (or ‘re-educated’) no matter what.

So why is this discourse so appealing to people who ought to know better? Well, I’d say that has to do with the postwar expansion of higher education, which brought a massive influx of proletarians into the academy. We control it now. We brought all of our class resentment with us and worked diligently to demolish the university’s elevated character, while simultaneously championing the right of every person to access its elevating gifts. Hmmm.

Colleagues, we must cease to soil our own roosts.

November 7, 2008

The art of the possible

by Carl Dyke

How to tell the leaders from the led in political discourse:

…[I]f the concrete political act, as Croce says, is made real in the person of the political leader, it should be observed that the characteristic of the leader as such is certainly not passionality, but rather cold, precise, objectively almost impersonal calculation of the forces in struggle and of their relationships…. The leader rouses and directs the passions, but he himself is ‘immune’ to them or dominates them [in himself] the better to unleash them, rein them in at the given moment, discipline them, etc. He must know them, as an objective element of fact, as force, more than ‘feel them’ immediately, he must know them and understand them, albeit with ‘great sympathy’ (and in such case passion assumes a superior form…).

— Antonio Gramsci, Quaderni del carcere [Prison Notebooks], notebook 26, § 5, 2299, my translation. (In this note Gramsci goes on to discuss irony and sarcasm as political stances; sarcasm is both a form of advanced consciousness and a passional means of criticizing contradictions in order to elevate consciousness in others.)

As many others have noted, Newsweek is currently doing a smashing job of documenting exactly what this kind of leadership looks like in practice in a series of reports on the Obama campaign.

November 3, 2008

Liberal bias in the liberal arts

by Carl Dyke

No one much disputes that academics are disproportionately liberal, although it may be the case that we are swinging back toward moderate. But does this mean that we indoctrinate the young?

According to three new studies surveyed by Patricia Cohen in the New York Times, the answer is no.

The notion that students are induced to move leftward “is a fantasy,” said Jeremy D. Mayer…. When it comes to shaping a young person’s political views, “it is really hard to change the mind of anyone over 15,” said Mr. Mayer, who did extensive research on faculty and students.

“Parents and family are the most important influence,” followed by the news media and peers, he said. “Professors are among the least influential.”

This squares with Tim Clydesdale’s work on first year students and the college experience (previously discussed here), in which he found that students put their core values in an “identity lockbox” and that very few students find a liberal arts education deeply transformative.

And it squares with the research (previously discussed here) suggesting that undecided people have really already decided, and with my observations about default theories.

And it squares with my own experience. If anything, higher education has made me more conservative over the years, as marination in the value of balanced critical thinking and seasoning with diverse perspectives (including outside the academy) has mellowed the strong flavors of my youthful radical certainties. Of course, balanced critical thinking and respectful attention to diverse perspectives are themselves liberal values, ones that are at the heart of the liberal arts. But there’s no traction in them for making anyone change their mind, because whatever you think already is part of what needs to be respected and understood on the way to a more comprehensive understanding. As conservative professor James Joyner wryly notes,

Even attending a state school in the Deep South, my political science and history professors were predominantly (but not exclusively) liberal. But debating them tended to reinforce my conservative leanings. Years later, teaching political science courses to predominantly conservative students, I oftentimes found myself taking a Devil’s Advocate stance simply to force them to challenge their own preconceptions. (Which, on reflection, made me wonder if my own profs hadn’t done the same thing.)

Yeah, I can work with that guy.

October 6, 2008

Palinpsest

by Carl Dyke

It’s something of a cottage industry right now for snarky intellekchles like me to gloat over the incoherence and grammatical incompetence of Sarah Palin. Language Log has a nice careful version of the genre. I am amused. I’m all for it. We are, however, missing the point.

We academocrats are used to a kind of hyperformalized orality that basically speaks in essays. I mean, we actually sit around at conferences and listen patiently to each other drone through readings of our specialized research findings. We get perplexed and offended when only a small fraction of our students want to sit still and take notes attentively while we buzz through carefully constructed, fearsomely overloaded lectures about the peanut market in mid-century coastal West Africa. We are some really strange folks.

“A good sermon differs from an essay in that an essay explains a subject, but a sermon appeals to people…” M.L. King Jr. said. People whose audience is not captive have to think more carefully about the rhetorical fit between what they’re saying, how they’re saying it, and who they’re saying it to. So no, Sarah Palin is not explaining anything coherently. She is sermonizing. Her audience already know what they think (or think they do) and she is not trying to change that; she’s trying to hook into it and activate it in her favor. Her objective is not grammatical correctness or propositional coherence but instant intelligibility, using highly familiar and oft-repeated (call-and-responsable) words, phrases, inflections and gestures to appeal directly to the emotions and prejudices of her target listeners.

Who are not us. Barack does the same thing, but his target listeners’ emotional response is triggered by a different sort of articulation. The point is, in their current public performances the candidates are not trying to govern the country, they are trying to win the election. And rightly so. There’s much to learn from Palin’s self-vulgarized orality about how the Republicans think they can do that — certainly their contempt for the intelligence of their base voter could not be more clear — but her personal ability to think and speak clearly is not part of the available information.

September 3, 2008

The wonders of college

by Carl Dyke

It’s that time of year in the halls of academe when hope springs and experience pings, when we imagine the sweet epiphanies we will share with excited and eager students, while remembering past years’ slow boring of hard boards.

Mikhail has some thoughts about the first year experience; I am teaching a class explicitly designed to frame the first year experience; each of us has memories of those rosy days; so this is probably a good time to recall Tim Clydesdale’s sociological work on teens in the first year of college. There’s a nice short review in the Chronicle, titled “The Myth of First-Year Enlightenment.”

He finds that students in their first year are perhaps uniquely resistant to the kind of deeply transformative experience we imagine is the real payoff of college, and indeed are busy just figuring out how to get along away from home. In the meantime they put the very core values we’d like to get them to question into an “identity lockbox” for safekeeping.

Clydesdale notes that “Only a handful of students on each campus find a liberal-arts education to be deeply meaningful and important, and most of those end up becoming college professors themselves…. And so the liberal-arts paradigm perpetuates itself, while remaining out of sync with the vast majority of college students.” Yup.

Practically, Clydesdale recommends several shifts of emphasis: from content inculcation to skills development; from lectures students will soon forget to class discussion of issues, perspectives and interpretations; and from grand goals about moral awakening to modest goals about competence.

Mikhail is quite right that our young charges “will have to get used to the idea that life is full of situations in which you have to learn something, even if it looks like a completely useless subject – remember, [they’re] not old enough or experienced enough to be the judge of what is or isn’t useless.” And the first year is part of that process. But as a matter of practical pedagogy in the face of brute sociological facts, much of what we can accomplish in the first year is to avoid turning them off so thoroughly with our sanctimonious attempts to jam goodness into their heads that they never recover and remain sullen anti-intellectuals for the rest of their lives.

August 28, 2008

"Science as a Vocation"

by Carl Dyke

Max Weber’s famous essay, delivered as a speech at Munich University in 1918, has since been regularly attacked and dismissed for attempting to create clear distinctions between the ethics of scholarship and politics, and between facts and values. We certainly think we know much more now about the many and subtle ways even our most scholarly interests, perceptions and interpretations are conditioned by politicized factors like class, race, gender, culture, and so on. Weber knew this. He also knew that surrendering to it and treating all knowledge as naked politics made any kind of clean, reliable data for informed decision-making impossible. Might as well just run around shouting “Yay us!” at that point. His essay is not so easily dismissed and remains worth grappling with, as I suggested in my comment on yet another remarkable post on Easily Distracted about the perils of political engagement.

While I was dredging through my copy for that part of the argument I found again one of my favorite passages, a beautiful and devastating diagnosis. Enjoy:

Consider the historical and cultural sciences. They teach us how to understand and interpret political, artistic, literary, and social phenomena in terms of their origins. But they give us no answer to the question, whether the existence of these cultural phenomena have been and are worth while. And they do not answer the further question, whether it is worth the effort required to know them. They presuppose that there is an interest in partaking, through this procedure, of the community of ‘civilized men’. But they cannot prove ‘scientifically’ that this is the case; and that they presuppose this interest by no means proves that it goes without saying. In fact it is not at all self-evident.

Cheers.

August 22, 2008

Undecision

by Carl Dyke

It turns out that when we’re undecided we may not be. Science reports a study by Canadian and Italian researchers who used image and word association to tease out self-declared undecided people’s political precommitments with 70% accuracy.

According to Denise Gellene of the L.A. Times (via the N&O) “[t]he researchers said it’s all part of an unconscious decisiveness that manifests itself in the hundreds of mundane, snap decisions people make every day, such as choosing which shoe to put on first or which seat to take on an empty bus.”

Yah. And we don’t even need a fancy theory of the unconscious to explain habituated pseudo-intentionality, although we do need a cultural theory to explain why some people are so resistant to the unremarkable observation that much of our living and thinking is automated for ease of handling.

If, as the study suggests, we’ve all mostly made up our minds already, I wonder about the conditions (psychological, sociological) under which people are inclined to defer or not defer their moment of bringing decision to consciousness. A vulgar behaviorist might wonder if there are rewards and punishments for some people for being, or appearing, decisive, deliberate, open-minded or accommodating. A good study would probably find that these conditions are highly situated, so that people who are inclined to defer decision in one context may be much more decisive in others. The great speckled ditherer is probably a rare bird. Power is certainly in play, but there’s power in both deciding and not deciding, so that’s another situated analysis.

And if undecisive people have already decided, what does this say about decisive people? It may be that only in cases of fundamental ignorance or complete disinterest is persuasion possible. Otherwise, as William James said, when we think we’re thinking we’re merely rearranging our prejudices.

August 7, 2008

Strategic incompetence

by Carl Dyke

This Fall I’ll be teaching my normal four-course load, three sections of introductory world history plus an upper-division seminar in world history since 1945. As an overload I volunteered to teach a section in the newly-revamped freshman introductory course, which we’re trying to move away from elementary life skills and shocking v.d. videos to something more like a college class. I’m on the Tenure and Promotion committee, I’m the faculty athletic rep, and of course I now have a rigorous blogging schedule to maintain, including all the fabulous value-added I offer to other sites with my wise and perspicuous commentary. Busy-ish by academic standards. Hey, it beats driving truck.

Yet yesterday I also agreed to serve on a distance education task force that will apparently be humping to make up for years of lost time by generating a strategic plan for the university in one semester. Happy to do it, I said, because the one asking was my friend and admired colleague Jane. And in some larger sense of duty and camaraderie I am happy to do it. But in another larger sense of life management I notice that I must have let my strategic incompetence skills deplorably erode.

Strategic incompetence is the art of making yourself more trouble than you’re worth in some area of unwelcome effort. This can involve being a painfully slow learner, a bumbler, or an impediment. In each case the objective is to make it easier for someone else to step in and do the work than to leave it to you. Arguably a species of passive aggression, although shading off into mere passivity or genuine incompetence. A famous example is from studies of gender-typed tasks. It seems that men who have done their own laundry just fine as bachelors will become helpless and, if necessary, error-prone (the red sock in the whites load) once they’re married; women who figured out just fine how to change tires, get things from high shelves, and take out the garbage when they were single become damsels in distress when a man is about.

No one thinks they are personally strategically incompetent or passively aggressive, although most of us recognize it easily enough in others. Dynamically it comes from some blockage on just plainly saying ‘no’, which may in turn come from real or perceived power gradients, conflict aversion, cultural programming (habitus), norms of courtesy, role confusion, communication styles, chickenshit, or a combination of these and other factors.

My own best strategy is “loose cannon.” In actual work I take pride in and responsibility for competence, so the classic strategies of incompetence are closed to me. (In fact, for this reason I am myself vulnerable to the strategic incompetence of others.) Instead of sabotaging any task I use more casual interactions over time to cultivate a general reputation of edginess, unorthodoxy and unpredictability that seems to disqualify me from being asked to do ‘serious’ tasks in the first place. I also express irony about tasks and ask meta-questions about their presuppositions and consequences, which has nothing to do with competence but does make me a PITA to those whose orientation is more narrowly performative. I’ll call this para-incompetence and in a larger sense certainly passive-aggression. By the way, I do none of this as a conscious strategy. It’s in the first instance an emergent effect of my unconventional personality in relation to conventional environments, in the second instance therefore a necessity of which I make a virtue.

In the strategic incompetence link above, the example is organizing the company picnic. Like many tasks subject to strategic incompetence this is a thing worth doing that no one wants to do. Ideally we take turns with such tasks or reward them extravagantly with money or praise. In the alternative it’s important to consider when one can actually say no but would rather not. Saying yes when one could say no would seem to ethically preclude later deploying strategic incompetence. In most situations there’s a possible negotiation over terms of service that is only activated by not instantly saying yes. Costs (e.g. PITA reputation, cultural dissonance) have to be weighed against benefits (e.g. not doing the nasty task, not going to the top of the patsy-for-nasty-tasks list, getting real help for the nasty task). When you’re really stuck doing the nasty task, strategic incompetence becomes a weapon of the weak and may be romanticized as resistance. When you’re not really stuck doing the nasty task, trying to get resistance cred is especially lame.

Social esteem and status are at stake both in the doing and the not doing of nasty tasks, and this too can sometimes be strategized. In general one wants to leave the nasty tasks to others, to redescribe nasty tasks as special and essential (this is an example of stigma management), or at least to be in charge and delegate. As Douglas and Isherwood point out in The World of Goods: Towards an Anthropology of Consumption, a remarkable attempt to anthropologize economics and especially economic blind spots like ‘tastes and preferences’, nasty tasks are status-degrading when they are or resemble high-frequency service of low scope (‘chores’, e.g., housework): “Anyone with influence and status would be a fool to get encumbered with a high-frequency responsibility.” In fact, competence at any common task is potentially status-degrading, as so many of our faculty colleagues understand: “All goods to some extent emanate messages about rank…. The class of pure rank-markers could be the highest-quality versions that serve no other purpose, like the best porcelain, the family heirlooms, ancestral portraits.” Or professors in named chairs.

Clearly enough one ideally wants to teach as little as possible, badly, and then only grad students who will go on to be professors; do arcane research of no immediate applicability; and by all means stay away from any sort of campus service, committee or administrative assignment where things are actually done. I am so getting it wrong.

August 5, 2008

PITA

by Carl Dyke

Up to 43.7% of the stuff that bugs me can be traced directly to me being a pain in the ass. Of the remainder, 68.32% is because I pay more attention than I need to to other people being pains in the ass. What’s left is trivial and easily managed.
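(Worked out, on the charitable assumption that those suspiciously precise figures are meant to be taken at face value: 43.7% is mine outright; 68.32% of the remaining 56.3% is another ≈ 38.5%, mine by way of misallocated attention; which leaves roughly 17.8% genuinely trivial and easily managed.)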

July 24, 2008

Wordle pedagogy

by Carl Dyke

The commentary on Rough Theory’s wordle post of dissertation chapter 1 stimulated a further thought about Wordle, which its creator describes as “a toy.” I’ll agree with that to start with, because it’s fun to play with.

The “beautiful word clouds” generated from our more ‘serious’ work feel like they capture something, however. As Lynda said ironically at RT, “it’s all there, and presented much more eloquently than I could ever do with bothersome things like sentences.” NP wonders if they could be submitted in lieu of an abstract, and Lynda says “*Now* I know what my thesis is about.” I had the same reaction, including that shiver of embarrassment about certain words that should have been inconsequential turning out to be heavy in the distribution (Wordle removes linguistically common ‘stopwords’ and weights the rest by frequency).

Still, in principle it should matter what order and relation we put words in; otherwise we could all just stop with the bothersome sentences and write word lists for wordling. For example, frequency is not the only index of importance; sometimes a word that appears only once is the fulcrum of a whole argument. In fact, this transition from lumped word clusters to organized thoughts is pretty much what I’m trying to teach during my day job. I get papers that read like wordles all the time; if the words are well-enough chosen, they sometimes even pass. Now I find myself wondering if I could use Wordle itself to graphically represent to the students the difference between a word dump and a fully-articulated paper.
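If it helps to see how little machinery is involved, here is a minimal sketch in Python of the stopword-stripping, frequency-weighting move described above – my own toy reconstruction for classroom purposes, not Wordle’s actual code, and the stopword list is just an illustrative stub:

from collections import Counter
import re

# Illustrative stub; a real stopword list is longer and language-aware.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "is",
             "are", "that", "this", "it", "as", "for", "with", "be", "by"}

def wordle_weights(text, top=25):
    """Strip common stopwords, then weight the remaining words by frequency.
    Order, syntax and relation are thrown away entirely."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top)

if __name__ == "__main__":
    paragraph = "Paste a student's introductory paragraph here."
    for word, weight in wordle_weights(paragraph):
        print(word, weight)

Run on an introductory paragraph, it yields exactly the ‘word dump’: all the lexical ingredients, none of the order, relation or argument – which is the contrast I’d want students to discover for themselves.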

I’d welcome thoughts on this. Just as a first impression, I imagine requiring students a week before an early-semester paper is due to come to class with a Wordle printout of their introductory paragraph. I would then put them in work groups and have them attempt to interpret each other’s wordles to see how close they could get to the author’s intended meaning. In the process I think they would be clarifying in their own minds what ‘extra’ is needed beyond mere words to communicate a meaning and frame an argument. The additional benefit is that this would move their procrastination window up a week.

If this seems like fun, we could always experiment with my chapter wordles here or NP’s at Rough Theory….

July 8, 2008

What's left of philosophy

by Carl Dyke

In my little dreamworld the best thing this blog can do is cross-connect some questions and conversations that otherwise would miss each other. In that spirit please take a moment to visit Savage Minds, an excellent anthropology blog, to check out Chris Kelty’s post on experimental philosophy, a newish development that has some philosophers “exploring the possibility of actually talking to people.”

Philosophy used to include everything, and in its self-conception still does. In the history of knowledge-formation, however, over the last few hundred years philosophy has been getting whittled down by the spinning off of the sciences, history, law, economics, sociology, anthropology, politics, psychology and so on into separate disciplines. Each of those has some practical field of competence about real human relations in the world; indeed, it could be said (and was, by a defender of philosophy on that thread who may or may not have grasped the irony) that any time philosophy identifies a field of potentially-practical study about humans, it gets spun off into a different discipline. Cognitive science as the practical spinoff of epistemology is a recent example.

(I am being kind to philosophy here. In the last hundred years at least the sub-disciplining of the human studies has had very little at all to do with conceptual innovations in philosophy, and the reverse is increasingly true.)

What’s left for philosophy as such? Old unanswerable questions, abstractions, speculation, and no practical applications that can’t be better addressed by one or more of the successor disciplines. A playground for nerds, geeks, and bores.

July 1, 2008

Reading

by Carl Dyke

Mikhail at Perverse Egalitarianism, who incidentally along with his colleague Shahar has the best booze-fueled pretentious intellectual schtick I’ve ever seen, is reading a book on reading and not reading which is reminding me how odd our process of engaging with others and learning from/with them can be.

In a later comment Mikhail says

Seriously though I’ve been rather disturbed by Bayard’s book about talking about books you haven’t read – I thought it was going to be funny and tongue-in-cheek but it’s quite serious for the most part and addresses an issue I really haven’t seen in print before, that is, how we really don’t read the book we read or claim to have read – nothing psychological or super-theoretical, just the basic fact that we forget the books we’ve read in a very short time and then we read them again and selectively, so each of us has a very different memory (not just interpretation or a perspective) of the same book… In a sense, we’re all talking about a different book when we discuss, say, Kant’s first critique or Marx’s Capital.

So much for the Enlightenment! I like to own the books I read because then I can mark them up as I go. It’s like having a conversation with the author in the margin. Also the piles of them on the floor are festively decorative. When I go back to books I haven’t looked at in a while, I sometimes just take my own word for it and zero in on the parts I’ve blocked or commented on (e.g. when I’m refreshing for teaching), and then I’m usually alright. But if I actually re-read, I often find myself perplexed at why I picked out what I did, or what my train of thought was when I wrote what was clearly at the time a self-evident remark in the margin. It’s as if some stranger with different priorities and agendas had spritzed the book with his traces. Sometimes that guy was pretty smart, and sometimes he was a dead dunce.

For one thing I’m usually reading more than one book at once, seeding my environment with them, a pencil stuck in each to keep my place; so my reading ends up being an accidental conversation among me and several authors, which produces some terrific collisions but would be very difficult to reproduce, including for me later when I’m trying to explain why I think what I think. Not to mention all the ‘live’ conversations I’m having at any given time. We never enter the same stream of consciousness twice.

When I’m revered after my death little disciples will want to figure me all out and that will be funny as hell. They’d have to go back through all the books I’ve read, decode the marginalia, and dope out what order and circumstances I read them in. My head is bricolaged all the way down. Incidentally I tried to do this with Gramsci, whose personal library is preserved at the PCI archives in Rome. Turns out that because he grew up poor, with a reverence for the book, he wouldn’t have dreamed of writing in one. Bummer. But I guess we have that to thank for the Prison Notebooks.

June 20, 2008

Public relations

by Carl Dyke

“We must socially stimulate ourselves to place at our own disposal the material out of which our own selves as well as those of others must be made” (G.H. Mead, “The Mechanism of Social Consciousness,” Selected Writings, ed. Andrew J. Reck).

In a series of posts Larval Subjects has just discussed disgust with the blog medium, the frustration produced by rude and arrogant blog commenters, democracy and perverse internet egalitarianism, and upsetting mismatches between rhetorical effectiveness and the truth. As usual the reflections over there are first-rate, but in a very different theoretical idiom and emotional register than mine. LS will get where he gets without my kibitzing. So instead of being impertinent there I’m going to address these questions here and attempt to show what a different way of looking at them might yield.

To my mind these posts are all related, and what relates them is relationships. Specifically, LS like many other people I know is disappointed by the distance of real relationships from ideal ones. Blogger to commenter, internet user to user, all of us to democracy, humans to the truth. In each case there’s a hope for something special to happen, a desirable better way to do things, and a letdown with what we actually get. Underwear and socks for Christmas again. Although we’re outside my way of thinking about things, I do see the point. In each of those spheres of relationship things are not as they ‘should’ be.

After the usual philosophical training in high idealisms of various kinds and only Marx to fall back on, what a treat it was for me to find George Herbert Mead. Mead does not torture himself and others with shoulds. There is no ideal against which the real is being compared and always, always found wanting — although he does explain why people tend to think that way. Mead starts with actual relationships and stays there. His abstractions come from the pragmatics of pattern and repetition. They are symbols or they are habits. People create and share abstractions — including a sense of self — as tools to keep track of and assign significance to the interactive networks and assemblages that they encounter in their lives. This process is his focus.

LS begins to entertain an idea along these lines when he considers Feuerbach’s view that, as LS puts it, “we project our highest aspirations and desires onto another being, but then experience these qualities not as existing in and from us, but in something else. God is thus an alienated and distorted image of our own essence or nature.” Right, and not just God. Notice, however, how LS has subverted the social-relational drift of Feuerbach (as Feuerbach himself does with the static concept of ‘species-being’) by smuggling in the language of ‘desire’ and ‘essence’. Later in the train of thought this re-idealizing reading would be better foreclosed by Marx and Durkheim, both of whom had similar ideas of how abstractions happen in the course of real social relationships, the former in his discussions of religion and fetishization, the latter in his analysis of the stability of collective representations compared to individual perceptions and explanation of religion as societies representing themselves to themselves. “Therefore the collective ideal that religion expresses is far from being due to some vague capacity innate to the individual; rather, it is in the school of collective life that the individual has learned to form ideals” (The Elementary Forms of Religious Life, 1912, trans. Karen E. Fields).

Marx was coping with Young Hegelians, utopian socialists and liberal political economists; Durkheim was trying to sociologize Kant while detheologizing the positivistic sociology of Saint-Simon and Comte; and both were thoroughly steeped in the western philosophical tradition, so the social relationality of thought is both demonstrated and discursively obscured in their work — as N Pepperell is showing brilliantly in Marx’s case. As much as I loves me the Marx and Durkheim, and can get what I need from them, there’s a lot of digging through the cluttered attic involved with finding any particular useful thing. Sure, I’ll do it if I have to. But what I like about Mead is that although he knows his philosophy, he started out as a railroad surveyor and has a nice direct way of laying his track right to the point in short, pithy essays. In fact, his writings are a little like blog posts.

The reason Mead didn’t write more, and more at length, is interesting. He was a very good teacher. He devoted his thought and energy to getting his points across to his students and colleagues, who were his first and realest audiences. To his fans’ benefit and dismay he did not apparently have any strong vocation to talk at more abstract audiences through a book. This conversation right now with these people right here was the one he cared about. There’s no platonism at all in Mead, no inclination to think of the world we live in as less than fully real, ‘phenomenal’ in relation to some more perfectly and enduringly real noumenon behind the curtain of our lamentable animal perceptions. My grad advisor David Luft taught me to take pride in being simpleminded, which was a wicked subtle joke he told on himself and the philosophers upstairs. Mead tells the same sort of joke in his essay conversations with his imagined audience.

For Mead “Our thinking is an inner conversation in which we may be taking the roles of specific acquaintances over against ourselves, but usually it is with what I have termed the ‘generalized other’ that we converse, and so attain to the levels of abstract thinking, and that impersonality, that so-called objectivity that we cherish” (G. H. Mead, “The Genesis of the Self and Social Control,” Selected Writings, ed. Andrew J. Reck). This is the ‘truth’ of real interactions in real communities, that gets its objectivity from being shared and effective. Roots of standpoint theory are here (going back to Hume, of course, or forward to, for example, Sandra Harding’s “strong objectivity”), without the metanarrative of heroism and villainy that’s usual in more contemporary versions. On this view rhetoric is not something opposed to truth, but a process communities use to work truth out.

Selves are the product of an ongoing series of feedback loops, to which at first we bring only basic biological dispositions to mood and attention, with localities gradually defined as ‘environments’ in a process of experimental differentiation, and specific others in relation to whom roles are worked out and abstracted into worldviews and ‘generalized others’. (Most of Freud’s developmental theory is in that last sentence, without the reification of accidents of Freud’s own culture.) It is by ‘taking the role of the other’, getting a feel for the game (as Mead, Wittgenstein and Bourdieu would all put it), that the self is differentiated, first in particular and then in abstracted symbolic space. As an explanation of Mead this gesture may be enough. Because I don’t have a specific interlocutor for this post other than the existentially thin ‘Larval Subjects’ persona, I don’t know how much detail to go into here. Wouldn’t want to be a schooler. Comments would help, if you’ve made it this far.

So why do we blog? Lots of reasons, of course, or perhaps lots of rationalizations for the same reasons. But in terms of the contrast I’ve set up there are two contradictory possibilities. The first is the search for an ideal audience as against the disappointing real audiences who inhabit our real lives. Students, colleagues, friends, etc. who in their human, all too human ways come to their conversations with us with differing standpoints produced by differing self-formation processes in differing interactive histories. It’s hard work continuously reorienting ourselves in real time to others whose roles we have not yet taken and who may not give us much to go on to do that. Much easier to close off this real interaction and search for a more perfect one with ideal others who will echo, confirm and amplify the generalized other our thoughts already embody. I think this is what LS means by ‘democracy’ and as he says, as such it doesn’t exist. In this sense he is right that the academy is where eggheads go to protect themselves from democracy. The internet then turns out to be a place where that many more disappointingly imperfect others lurk.

The other reason to blog is to have (more of) these difficult conversations in which our selves are literally destroyed and recreated in dynamic interactions with really other others. Here self is not stabilized by being closed off from further (exhausting, painful) interaction but metastabilized by embedding in networks and assemblages of relationships. I realize this is a bit of a salto mortale, especially for selves whose interactive history is confusing or oppressive. But I agree with Mead that this is what it means to really think. So this is why I and perhaps some others blog, as Mead suggested in the opening quote.

June 12, 2008

Rescue

by Carl Dyke

Rachel recently got an inquiry about buying this painting from a couple who were married in the gallery in which it and its siblings were showing. Their wedding photos and video are full of her images, so here’s a sweet memento hookup that might happen. Music swells, eyeballs moisten.

In another dimension the connection is a little more odd, yet apt. This series of work is called “Rescue.” Rachel found a very old lifesaving manual and appropriated/repurposed some of its images and text as part of the layering in these canvases (there’s also antique player-piano paper and a whole bunch of other stuff going on, some of which you can see above). Her theme is good intentions, miscommunication, and hurting the ones we love, which is just about right for a lot of marriages but maybe not what most newlyweds have in mind.

The central point of the lifesaving manual is that when you go to rescue someone who’s distressed, you probably need to beat them up and disable them first or they’ll drag you down with them. So there are all these images of struggling and grappling and submission holds and whatnot. Both people want the same thing, but at least one is working at cross purposes and the way through is pretty unappealing. At this point the metaphor is eerily capturing some significant fraction of my interpersonal relations, with me on both sides at one point or another.

There’s a real danger that the peril of one will become the demise of two. My dad was reflecting on the “pacification techniques” he learned in his Red Cross lifesaving school in relation to his own indifferent swimming skills. It’s a nice image: when we’re floundering, we’ll be rescued by some super-competent, patient and gentle hero. More likely it’s whoever’s handy, and they’re just barely making it themselves.

June 11, 2008

Words and things pt. 5: Practice of theory

by Carl Dyke

Hoping that a polite interval has passed, I’m going to respond to N. Pepperell’s meme about the practice of theory, which I was not tagged for; but I do tend to barge into conversations.

(Btw the image I get from the metaphor of “barging in” tickles me every time.)

[Two barge images: one impressively authoritative, the other big and stinky.]

The question is roughly whether theory is a kind of practice; whether there’s anything practical about theory; whether interpreting the world in some sense contributes to changing it, to invoke Marx’s famous gesture at the question. As usual, my answer is yes; maybe; but.

NP does some very nice things with suggesting that theories are situated in history, as part of the general collective practices of places and times. This is inconsistent with a notion of transcendental truth, but is consistent with the sociology of knowledge, for example Gramsci’s distinction of thought into common sense, good sense, and philosophy, in which each is a dynamic general understanding of the times that is functionally more or less systematic, synthetic, and practical. In fact, any sociology of knowledge is going to start with, or quickly get to, the notion that ideas do not float above it all but are constructed from the materials of their environments; this is why the real philosophers who are most inclined to ask these questions tend to hate real sociologists (and anthropologists) with a cold, murderous passion, when they’re not simply ignoring them. So NP’s sensible remarks about the contingent value of working out the conditions and possibilities of the world we (may) want to change, and about the limitations of our cleverness with respect to the intentions and consequences available in our situations, are inevitably going to fall on some aggressively deaf ears.

Sociologies of knowledge track theories emerging variously out of the manifold of particular historical formations as if we thought of them ourselves. From this perspective theories only ‘work’ if they’re aligned with the other worky bits of the configuration. The grand ambition of changing the whole world requires a whole lot of alignment all at once; short of that, theories ‘work’ in a variety of ways, for example by aligning intellectuals in orthodoxies that control access to goods like jobs and publication opportunities and blogroll entries, or by creating insular little communities of righteousness. But again the question here has to do with political theories producing intentional political effects.

The paradox of unintended consequences figures subtly and diplomatically in NP’s analysis; my inclination is to move it right out front. The history of large-scale attempts intentionally to change the world (always for the better, in the view of the protagonists) is sometimes, and from some perspectives always, a bad one. Ordinary kindnesses are appealing and can even be disproportionately effective, but the multi-thousand-year history of religious charity should offer a cautionary tale on expecting too much from that strategy. On another small scale unintended consequences abound in teachers’ attempts to shape students’ thinking, which is a very direct example of theories trying to work in what may be hostile or inhospitable environments. I am occasionally successful in getting a student to think the kinds of thoughts I do (you may by now be sorry to hear this); invariably they were already about 99% there when I found them. I try to move the others along the same 1%, which in some cases gets them to 1%. I counterproductively piss off my share too.

But it’s not just a matter of getting a hundred stimuli in the right sequence. We find the conditions we want to change, e.g. people’s heads, in a variety of orientations which must be taken into account. As Gramsci said in the Prison Notebooks, Q 24,

The unitary … elaboration of a homogeneous collective consciousness demands a wide range of conditions and initiatives. … A very common error is that of thinking that every social stratum elaborates its consciousness and its culture in the same way, with the same methods, namely the methods of the professional intellectuals. … It is childish to think that a ‘clear concept’, suitably circulated, is inserted in various consciousnesses with the same ‘organizing’ effects of diffused clarity: this is an ‘enlightenment’ error. … When a ray of light passes through different prisms it is refracted differently: if you want the same refraction, you need to make a whole series of rectifications of each prism.

Going back to the theme of the thread, it’s not that we can’t change things with a word. It’s that the same word in different situations and deliveries may catalyze transformative change or transformative opposition, contribute to the conditioning of a future transformation or cause counterproductive irritation, be misunderstood in any number of ways, or fail to connect entirely. Linear, intentional change at the level of language would require a complete unpacking of all of the dynamics of delivery and reception across all of the situations and materialities of the construction of consciousness. Controlling the stimuli in a dense interactive field is, um, tricky, as any parent will tell you.

Starting with the press as an obvious point of departure, Gramsci noted the intricacy of this field very quickly: “Everything which influences or is able to influence public opinion, directly or indirectly, belongs to it: libraries, schools, associations and clubs of various kinds, even architecture and the layout and names of streets.” That looks like a lot of little battles we ought to fight. But the impact of these sites of reality-formation is not simply additive, it’s dynamic. So dipping our paddles where we can may seem like what we each can do, but in the absence of a comprehensive understanding it is just as likely to add eddies that slow the flow of change.

To Gramsci and other revolutionaries like Robespierre, Saint-Just, Lenin, Stalin and Mao who saw this problem, it looked like rigorous coordination and conformity were necessary to get things moving right. Gulp. Wiseguys like me get taken out and shot in the first days of the revolution, for good reason. Terror ensues as mere humans fail to agree upon or measure up to the ideal of total virtue in the good cause. This is a bigger gamble than I care to venture.

And so far I’ve only addressed words. How shall we relate words to things? It’s not surprising that people who have only words to work with tend to think of words as powerful. And on them/us, they are. At the least common denominator this is straightforwardly a kind of magical thinking. Of course the efficacy of particular incantations is limited to those who share the magical vocabulary: abracadabra, change, proletariat, Volk, vafanculo. Figuring out the magical vocabulary of this or that group is one of the keys to effective power, as I illustrated in the first post in this string.

Demagicking words, or better, contextualizing their power, seems like a useful little service I/we can perform, but my ambitions are very small and local. Again Gramsci, who against his own analysis of the infinite complexity of the formation and reformation of reality had only the motto “pessimism of the intellect, optimism of the will” to offer, points to the difficulty:

“Hence it is a matter of studying ‘in depth’ which elements of civil society correspond to the defensive systems in the war of position. The use of the phrase ‘in depth’ is intentional, because these elements have been studied; but either from superficial and banal viewpoints, as when certain historians of manners study the vagaries of women’s fashions, or from a ‘rationalistic’ viewpoint — that is, with the conviction that certain phenomena are destroyed as soon as they are ‘realistically’ explained, as if they were popular superstitions (which anyway are not destroyed either merely by being explained).” Q 13.

Ahem. Mission accomplished?

May 9, 2008

Discipline and interdisciplinarity

by Carl Dyke

Again on Easily Distracted, there’s a terrific analysis of interdisciplinary programs. ED looks at the College of the Atlantic, which is an inspiring exemplar. Links are there.

I should say first by way of context or confession that I am barely disciplined. Although my doctorate is in modern European history, for the first four years after graduate school I taught philosophy, sociology, and human development but almost no history. (I am tenured in a nice little history department now and teach history exclusively, or at least that’s what it says in the course catalog.) My undergraduate degree was similarly eclectic, and while I was in grad school I identified and studiously avoided or resisted (which I now regret) the professors who made it their mission to discipline the younguns.

Much of my indiscipline I would now call preconscious. While I was teaching all that whatsis and looking for a permanent job, I got more conscious and thought a whole lot about discipline, indiscipline and interdisciplinarity. I also worked for several years in an interdisciplinary human development program, one of the great experiences of my life, and interviewed at a couple of interdisciplinary institutions. This does not make me an expert, just an interested commenter.

The concept of interdisciplinarity takes disciplines for granted. This is realistic. Knowledge systems are organized into disciplines as a matter of fact. There are accordingly three ways to attempt interdisciplinarity. The first is to bring people with different disciplines together. I call this serial disciplinarity. The second is to expect individuals to become multiply disciplined, that is, actually conversant with and practiced in not just the material of different disciplines but their codes, practices, assumptions, debates, sacred texts. I’ll need to talk about this more in a second, but here I’ll just say that this is really exceptional. The third option, to train people outside of established disciplines, is what interdisciplinary programs usually shoot for. But the products of these programs are not interdisciplinary properly speaking. They are undisciplined.

The temptation is to think of disciplines as just databases or at most, bodies of knowledge. If I read “Gravity’s Rainbow” and use it as a source on receptions of WWII in popular culture, I am interdisciplinarily doing literature, right? Well, um, no. The discipline of literature is not defined by its materials, but by its habitus. Literature is a way of seeing, thinking and judging, not a thing to see, think about and judge. People disciplined to literature are disposed to see the whole world as a text, not just books. The purpose of literature departments is in part to organize the investigations and knowledge produced by the practices of the literature habitus, in part to reproduce themselves by passing on the dispositions of seeing, thinking and judging that define the field to new generations.

Historians have a habitus (which includes the various internal contestations of it, of course; all of those contesters are historians) and are disposed to examine everything historically, including texts. Philosophers also have a habitus, and so on. All of the disciplines of the humanities fantasize that they are the master discipline that encompasses all the others. This is self-evidently false, if we think about what disciplining means. When Lit types dabble in context they are not practicing historical interdisciplinarity, they are taking snapshots like intellectual tourists. And I just have to laugh when philosophers tell me things like they are Wittgensteinian/Hegelians, in that order. Well, you can be that in philosophy.

Getting back to habitus, becoming disciplined is a way to narrow, direct and focus one’s attention while providing a sense of purpose and belonging in a meaningful community of like-minded folks. Disciplines enable some conversations and disable others by foreclosing tangents and digressions, by specifying right and wrong questions and adjudicating right and wrong answers, by categorizing, and by providing shared vocabularies. The enabling is just as important to notice as the disabling. Taken as wholes, disciplines offer their disciples a morally ordered universe and a firm sense of ratified adult identity. This is why disciplined people forced into interdisciplinary contact with other disciplined people often end up feeling existentially angsty and deciding that the ‘others’ are immoral, as I have repeatedly seen.

ED points to this when he concludes “[i]n the end, for all of us who chafe at excessive departmentalization and balkanization in academia, this is a problem of culture, attitude, practice and orientation. Cultures change slowly and organically, and you can’t rush those kinds of transformations even by the radical redesign of underlying structures.” I agree completely, except – is it a problem? Why? He also admires generalists who have a conceptual map of the disciplines and thinks they’re a rare breed. Thanks! We know how to think outside of the box, play different games, speak different languages, pick your metaphor.

However, the role of the generalist is necessarily a limited one. Disciplines are ways of getting things done, after first defining what needs doing. Like any sort of groupthink they encourage narrowmindedness and arrogance if left unchecked; anxiety and defensiveness when challenged. I certainly saw both dynamics in play at the interdisciplinary programs in my experience. But disciplines are also a way to get grounded, to build leverage. We generalists tend to be a wifty lot. We’re good at playing with boundaries, but like Socrates or two-year-olds who keep asking whywhywhy we can get irritating to serious people fast. At a certain point you’ve just got to plant your feet somewhere and do stuff in a disciplined way.

The best role for the undisciplined generalist is probably translation. We don’t really have the chops that a lifetime of focused devotion to one discipline can bring, so we’re never the cutting edge. We can point to stuff that’s going on from field to field where intersections could happen. We can try to unpack disciplined information so that it’s usable in an undisciplined way. We can be oddly comfortable with our interstitial identities and remind people that boundaries are often arbitrary. There should always be some of us around. But I’m not sure we’re what a whole program should be built out of.

I notice I started in one place and ended up in another here. I don’t have a train of thought about this stuff so much as a pile of boxcars. This is another problem with indiscipline, of course.

May 5, 2008

Postmodernism?

by Carl Dyke

On Easily Distracted there is a series of fascinating posts loosely occasioned by a couple of recent flaps in academe. ED wraps up for the moment with a really excellent meditation on what he calls the “porcupine approach,” by which he means the inclination of academics to get all prickly, defensive and offputting when challenged on their expertise. He’s right about all of it (and as a commenter perceptively notes, it’s not a strategy limited to academe).

I’ve commented in my own somewhat prickly way on that site, but I wanted to add to the general gist of ED’s analysis here. He counters porcupinity with an ethic of clarity, openness and humility that I admire and aspire to (again, in an admittedly grumpy personal style). He thinks that’s consistent with postmodernism and I agree. In fact, I think it’s essential to postmodernism (I’m telling a joke here, explanation follows).

The essential critical insight of postmodernism, if such a thing is possible, is that there is no essential critical insight. As Lyotard canonically put it, postmodernism is “incredulity toward metanarratives,” including, inescapably, its own. That’s it – if you think there’s only one way to tell a story, you may be many wonderful things but you’re not a postmodernist. (The neatness of this exclusion is unsustainable within postmodernism.) This is why ‘postmodernists’ tend to collapse so easily into ‘wordy’, ‘over'(pun)ctu/ated, ‘tale-chasing’ ‘iron-y’. All the words and their parts you might use to describe it or do anything with it mean something else too in other equally valid narratives (discourses), and one must attend to these differences lest they sneak up and bite one from behind. Like all forms of skepticism postmodernism is conceptually inescapable, which is why practical people may toy with it but never entirely adopt it. You can break everything down nicely with pomo, but you can’t build anything up.

Now, here’s the thing about the academic flaps ED is talking about. Both cases involve people “dangling half-formed chunks of critical theory like a sacred totem about [their necks],” which is an awesome image. The thing they’re both doing that’s really bizarre is that the chunks of critical theory they’re dangling are derived from postmodernism. And they’re using those chunks to assert a privileged interpretive position with respect to their own work and postmodernism itself, apparently without irony of any kind. But the thing is, in pomo that is the ONE MOVE that you absolutely, positively cannot make without massive self-irony. Because a privileged interpretation is a metanarrative, and the first holy vow of postmodernists is to be incredulous of metanarratives.

It’s really interesting to see a discourse the purpose of which is to question power claims used as a power claim. In the ultimate irony, postmodernism has become dogma – or better, as ED says and NP analyzes, a fetish.