Archive for ‘how stuff works’

March 24, 2013

Wild yeast sourdough starter

by Carl Dyke

As a logical next step in my fiddlings with bread-making, I just baked my first sourdough loaf with home-made wild yeast starter the other day. To eliminate all suspense, it came out great – by which I mean, it reminded me of all the things I like about sourdough bread without introducing any new negative associations. I especially like it because I did it ‘all wrong’, which is what this post will now document.

“Softly now, softly now – try it, you won’t die.” Silkworm, “A Cockfight of Feelings”

So, how I went about this is I got on the ol’ internet and googled ‘sourdough starter’. A little reading got me pretty quickly to the further qualification, ‘wild yeast’ – thus distinguishing the truly artisanal starter from the kinds someone else made that you can buy for a whole lot of money from specialty baking stores, if you’re a clueless snob, or Amazon, if you’re even more clueless but at least not a snob. So once I had the correct verbiage for cheap-ass diy starter, I did some more searching and read through some instructions. (I omit the links because I just told you how to diy, get it?)

Well, opinions about exactly what’s happening with sourdough starter seem to vary a bit, starting with where the wild yeasts are actually coming from. Is it the air around us? Is it the flour? Is it the whole grains you must treat with excruciatingly careful reverence to yield Gaia’s bounty of biomagic? With just a slight knowledge of these matters, I decided it was probably all of the above, plus everywhere else, since that’s where yeasts are. So I ignored the instructions that said I had to be careful not to cover the starter vessel with plastic wrap or anything else impermeable. I also ignored the instructions that said I had to hermetically seal the starter vessel, sterilize every instrument that ever came in contact with the starter, wear a hazmat suit, never use stainless steel, always use stainless steel, never use silicone, always use silicone, and so on.

Go Green!

In fact I pretty much ignored every single instruction designed to seal off the wild yeast starter from the environment it had somehow come from. I also ignored all the instructions designed to make my starter a delicate, difficult thing that required constant, meticulous care. I know people whose lives are given a rich sense of meaning by arranging to provide constant, meticulous care to other creatures, but that’s not me and if it was, I’d pick creatures other than yeasts and lactobacilli.

Speaking of lactobacilli, I paid a lot of attention to discussions of the multi-biotic nature of sourdough starter. It’s not the yeasts that are making the sour, it’s the bacteria. But the bacteria don’t make the bread rise, and they also have a tendency to make the ‘spoilt’ version of sour when they get lonely and pig out. So a functional sourdough starter is actually a community of beasties each creating some of the conditions for each others’ happiness, encouraging each others’ strengths and discouraging each others’ excesses, and incidentally each handling part of a fairly complex little biological process that assembles into a tangy leavening. Which of course wasn’t at all what they ‘intended’, but makes an excellent complement to garlicky cream cheese. So anyway, ‘building’ a starter is a process of getting that community together to work out a harmonious relationship under the conditions they enjoy.

“Control is when others’ locked-in interactions generate a flow of collective behavior that just happens to serve one’s interests.” Padgett and Ansell, “Robust Action and the Rise of the Medici, 1400-1434;” see also Padgett and Powell, The Emergence of Organizations and Markets (2012).

Those conditions are: flour and water. We’re talking about fermentation here, after all, which in real life is hard to keep from happening if you’ve got moist sugars around. Which brings up the mold problem, of which there’s plenty in my house, the dominant strain for unmysterious reasons being ‘bleu cheese’. But fortunately, between the acid the bacteria start producing right away, the alcohol the yeasts start producing soon enough, and the natural division of labor among the artistes of organic decomposition, mold is not actually much of a threat if you’re not trying hard to kill the yeast and bacteria somehow.

Mmmmmmm, stinky.

OK, so I read a whole lot about ambient temperature, water temperature, using bottled water, using distilled water and adding minerals back in, using orange juice, using pineapple juice, using white flour, using rye flour, not using white flour, not using rye flour. With just a slight knowledge of these matters, I reflected on the global success under the most extreme conditions of yeasts and lactobacilli, and decided not to sweat any of these factors too much (although, in principle, I wouldn’t have been completely surprised if a chlorine spike in my suburban tap water had set the critters back a bit). I did decide to take some of the chance out of the lactobacilli, mostly because I had an old tub of plain yogurt handy. And no, it was not any particular brand or type of plain yogurt, but it was past its expiration date as it happens.

I also looked at a lot of instructions about getting a kitchen scale, getting one that measures in grams because they’re more precise, calibrating hydration ratios, using a tall, straight-sided vessel with a dedicated lid, sterilizing this vessel and your hands before handling it, scraping down the sides so that, gosh, I don’t know. So anyway, here was my beginning recipe for my wild yeast sourdough starter:

Some flour
Some water
Some plain yogurt.

Roughly the same amount of each, by eyeball, probably a bit less yogurt because I thought of that as a ‘supplement’.

“My friends always say, the right amount’s fine. Lazy people make rules.” Silkworm, “A Cockfight of Feelings”

All of this went in a plastic bowl (with sloped sides because it has sloped sides) that I also eat cereal, pasta, and curry from sometimes; with some plastic wrap loosely draped on top to keep it from drying out too fast. This then went on a corner of the kitchen table I wasn’t using for anything else right then. I am woefully ignorant of the exact temperature of this spot, but I can guarantee it was neither hot enough to bake nor cold enough to freeze my arse. I started with bread flour, I think, but I ran out of that before the next feeding so I switched to rye for a while because I had a bag of that open and it kept getting mentioned in the instructions. Then for a while what I had open and easy to get at was some white whole wheat flour, so I used that.

And speaking of feeding, I read all kinds of instructions about pouring out exactly [some ratio I forget] of the starter before each feeding, adding back [another exact ratio I forget] of flour and water, doing this once a day at first and then every 12 hours, carefully swabbing down the sides of the container, adding strips of tape to allow precise measurement of the starter’s expansions and contractions, holding the container between your knees and counting to 6,327 by perfect squares, and checking carefully for ‘hooch’, which is such a precise technical term that at least half of the folks using it have no idea it’s why there’s NASCAR.

Medicinal purposes only, of course.

What I did instead was pour some out and add some back, roughly the amount it had expanded in the interim; when I remembered it, which was anything from a couple times a day to every couple of days. I tried to keep it pretty soupy because I read the beasties like to be wet, and I’ve found this to be true. I did this for something between a week and two weeks – I did not keep track. About day 2 or 3 it got that sourdough smell, then it settled into a kind of sweet peachiness I had not expected. I got back onto the internet and found a long forum thread on the many, many different permutations of ‘sweet peachy’ smell ranging all the way to ‘spiced apple’ that can be expected from a properly harmonizing community of yeasts and bacteria. Reassuring. So when I got sick of waiting any longer, although I think I was supposed to, instead of pouring out the extra I poured it into a bowlful of the flour I happened to have handy and open right then. Whole wheat, rye, and kamut as I recall – kamut btw is fun stuff, an heirloom grain that has a lovely buttery flavor and adds amazing elasticity to a dough.

Here was the ‘recipe’: salt in the right amount for the flour, bit of sugar to be friendly, touch of olive oil and enough warm (tap) water to make a wet dough just drier than a batter. Because the beasties like to be wet. Once they’d fermented that up for most of a day, I stretched, folded, smeared, punched and kneaded in enough more flour that it would stay in a loaf shape (not doing this is how you get ciabatta); let it think about that for maybe an hour longer; threw it in a hot oven on the pizza stone; dumped some water in the bottom of the oven to get some steam to keep the crust from setting too quickly (thank you internet); and some time later there was delicious whole wheat / rye / kamut multigrain sourdough bread.

Through all this I was aware that by failing to control for every possible variable the project could go horribly awry rather than pleasantly a rye. I reflected on the $.50 of flour and aggregate 10 minutes of work that would be irretrievably lost, and decided to roll those dice.

Does this mean the variables all that internet fussing tries so tightly to control don’t matter? On the contrary, I’m sure they do. But my little experiment suggests most of them other than flour, water, a container, and temperatures somewhere between freezing and baking are conditions of the ‘inus’ variety:

“The inus condition is an insufficient but non–redundant part of an unnecessary but sufficient condition” [quoting Cartwright, Nature’s Capacities and their Measurement, 1989, citing Mackie, The Cement of the Universe, 1980]. It’s best to read that backwards: you identify causal conditions sufficient to produce a given effect, but know that there are other conditions that could have produced the same effect. Within the sufficient conditions you’ve identified is a condition that couldn’t produce the effect by itself, is separate from all the other conditions that along with it could produce the effect, but must be among them for the effect to be produced through the causal pathway that’s been picked out. The inus scenario (any scenario containing an inus condition) shows up frequently in attempted causal analyses, and has to be accounted for somehow in any comprehensive causal theory (Chuck Dyke aka Dyke the Elder, “Cartwright, Capacities, and Causes: Approaching Complexity in Evolving Economies,” draft-in-progress).
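The quoted definition is dense enough that it may help to check each clause against a toy case. Here’s a minimal sketch in Python using the classic short-circuit-causes-fire example from the causation literature (the model, names, and numbers are my own illustration, not anything from the draft cited above):

```python
# Toy causal model (my own illustration): a fire occurs if
# (short_circuit AND flammable_material) OR arson.
def fire(short_circuit, flammable, arson):
    return (short_circuit and flammable) or arson

# Insufficient: the short circuit alone does not produce the fire.
assert not fire(short_circuit=True, flammable=False, arson=False)

# Non-redundant: drop it from its conjunct and the rest no longer suffices.
assert not fire(short_circuit=False, flammable=True, arson=False)

# ...part of a sufficient condition: the full conjunct does produce the fire...
assert fire(short_circuit=True, flammable=True, arson=False)

# ...which is nonetheless unnecessary: arson makes a fire without it.
assert fire(short_circuit=False, flammable=False, arson=True)

print("short_circuit is an inus condition for fire in this model")
```

Swap in ‘pineapple juice’ for the short circuit and ‘risen starter’ for the fire and you get the sourdough version: part of one sufficient recipe among many, doing nothing on its own.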

There are lots of ways to skin a cat. Which means there’s an interesting sociology of popular science lurking in the internet’s various treatments of wild yeast sourdough starter. There are many strategies on offer, each presenting a series of essential steps to success. And each of the strategies will in fact result in a successful culture, while adding procedures that may be important only to offset the sabotage added by other procedures, or to create an outcome distinguished only by the specific way it was achieved; or not important at all except for attention focus or ritual (which, by the way, are not trivial considerations). Apparently when a thing happens to work one way, we can be inclined to leap to the conclusion that this is the one best way to make it happen; ignoring all evidence to the contrary, for example all the other ways described in their own loving detail by other practitioners just as convinced of the robust essence of their accidental triumphs.

Incidentally, this is also how I think about education in general, and general education in particular.

March 7, 2013

Complex systems made learnable

by Carl Dyke

My friend and sometimes tennis partner David just emailed me this link to a story at Phys.org titled “Through a sensor, clearly: Complex systems made observable.” It’s right up my alley, he thought, and right up our alley, I thought.

Now, I don’t have either the math or the graphical chops to get under the hood of this research. But I think I understand what they’re up to, and I think I know enough to spot a couple of places where questions might be asked. For example, if I understand correctly we’re talking here about describing a snapshot of a complex system; it’s my impression that once the system is actually complexing, the data-crunching becomes prohibitive. But if so, one moment of a dynamical system is of limited utility, since it captures the system but not the dynamical. If I’ve understood correctly, this is not a criticism, but an appreciation of where we are in the learning curve.

I also appreciate that there’s a devil in the details of observer design; that is, the sensors have to be able to tell the difference between information and noise, nonlinearity and randomness. In effect this means that the sensors have to be able to learn to discriminate intelligently, which most human brains are not that great at. But they’re just doing feasibility at this stage, and I gather they think if they can use graphical modeling to specify some system parameters, they can eventually walk-in the data-gathering to yield more satisfying descriptions.

Well, I bet about half of what I just said is at least a little bit wrong. What I hope is that I’m just wrong and not ‘not even wrong’, that is, that I know at least enough to be worth talking to further by someone with a better understanding. And this brings me to the question for today, which is this. Given that the project here is to represent and understand complex systems, which explicitly include “biological systems [or] social dynamic system[s] such as opinion or social influence dynamics” – that is, to start with, citizenship and life itself – what responsibility does a university general education core program have to bring students up to a kind of elementary competence where they can participate responsibly in this kind of conversation? What and how would we have to teach to make that so? And what in the reverend paleo-disciplines and contents might need to retool or move aside to enable this development?

UPDATE: if nothing else comes of this post, at least I’ve learned what it means to be ‘fractally wrong’.

March 3, 2013

“If we want things to stay as they are, things will have to change.”

by Carl Dyke

I’ve been thinking about democracy lately as one of a collection of strategies for managing complexity. The proximal stimuli are the recent American elections and their associated issues; the Eurozone ‘crisis’; and the Italian elections just now concluded. The immediate stimuli are an application I just wrote for a really interesting NEH summer seminar in Rome, titled “Italy in the Age of the Risorgimento – New Perspectives,” and a discussion of “Post-Democracy in Italy and Europe” at Crooked Timber.

Let’s stick with Italian politics. I’ve personally been following them more or less closely since the early 70s, when I was in Italian public school. The chronicle of this period is quite rich and contested, with the movement of the Communist Party into play for inclusion in the government (the ‘historic compromise’), right-wing paramilitary backlash sometimes called the ‘strategy of tension’, left-wing student and paramilitary activism, and in general lots of splashy violence, all of it collected under the rubric of the ‘years of lead’. This was clearly a period of crisis, although I must admit that it was not much visible in the lives of the kids I was hanging out with.

When I went back to Italy for a semester as an undergrad, Dyke the Elder plotted my political education by giving me the task of keeping a journal of the Italian press from left to center to right. Every day I would go to the newsstand and buy at least three papers, most commonly “Avanti!” and/or “il manifesto,” “Rinascita,” and “Il Secolo d’Italia.” Two things struck me at the time and have stayed with me since. The first was that having this range of explicitly partisan press in easy newsstand juxtaposition did a lot to discipline all sides’ relationship to ‘the facts’, so it was possible to get a pretty reliable skinny of events from any of the papers, accompanied with explicitly polemical analysis. The second was that Italian politics were again in crisis, this time most prominently over NATO and the placement of nuclear missiles on Italian soil, and the movement of the Socialist Party under Bettino Craxi into a position of leadership; according to many, at the expense of anything still resembling socialist principles. I could always get a good political tirade with my coffee, Totocalcio and groceries, but life went on.

When I was in Rome for my dissertation research Italian politics were in crisis over the collapse and fragmentation of the Communist party. More recently of course Berlusconi and the populist/nativist Northern League created a new state of permanent crisis, the media-savvy prime minister presiding over a circus-like political spectacle nicely foreshadowed by the notorious Cicciolina. At this point the common, and often at least half-accurate, perception of Italians that their politicians are a pack of grossly incompetent clowns who somehow also manage to enrich themselves with ruthless efficiency at public expense became the near-explicit basis of government; Berlusconi’s point being essentially that if it’s going to happen anyway, you might as well at least get some entertainment and vicarious wish-fulfillment out of it. That this shameless effrontery made enough sense to enough people to keep him in power for as long as it did (and maybe again now, even after his ‘ultimate’ disgrace less than two years ago) says something important, I think, about what sorts of functions Italians outside the talking classes take politics to perform. That more morally rigorous aspirations have been consistently damped and absorbed through succeeding regimes (see, e.g., Machiavelli, Mazzini, Garibaldi, Crispi, Turati, Gentile, Togliatti, Berlinguer, Pertini, Craxi, ‘mani pulite’ and the Second Republic) says something more. Grillo is unlikely to be a game-changer in this arrangement, but he’s the usual sort of fun intervention.

During most of this time I was also becoming a historian, which involved learning about all the ways Italian politics had been in crisis since the Risorgimento, which itself effectively created a national overlay for the regional and factional crises that had been going on since at least the Renaissance. In short, if you want to you can construct an account of Italian politics in permanent crisis for at least 500 years; although as we can see by my own short experience, the details vary quite a bit from time to time. And of course it’s self-evidently silly to call a dynamic that persistent a crisis, so it helps that the social history of Italy can be told as an account of long stretches of relative stability, relatively untroubled by the frantic political sideshows. I would now say ‘metastability’, however, since ‘the same’ outcomes kept being produced by ‘different’ means, hence the Lampedusa quote in the title. That is the story I now find the most fascinating.

To put my thesis bluntly, no one has ever gotten what they wanted out of Italian politics unless what they wanted was what they could get. I’d recommend that as a general orienting hypothesis about a lot of things, for example Iraq, Iran, Arizona, Russia, China, Baltimore, Britney Spears, Tunisia, Egypt, Syria, and women’s rights. What is the possibility space? How are agents built, e.g. constrained and enabled, in relation to the possibility space? What can we read back about possibility from how agents act? It seems to me that our analytical contrasts are severely distorted by the notion that intentions are a special kind of cause exempt from all the formation and interaction dynamics of complex systems. Let’s see if we can do better than Feuerbachian pseudo-theologies of empowerment, flattering though they may be. In any case, here’s how I put it in my NEH application, in pertinent part:

I’m assuming I’ll learn lots of new things and reconfigure some old ones, so any plan of study is necessarily speculative. But going in, I imagine it would be interesting to think forward from Gramsci’s contested analysis of the Risorgimento as a ‘passive revolution’ driven from above by elites, and connect that with recent developments in complex systems analysis. I’m thinking, for example, of Terry Deacon’s contrast between dynamical systems and self-organizing systems in Incomplete Nature. Just to gesture at that here, it seems to me that there’s only so much an active/passive agency analysis and abstractions like ‘modernity’, ‘capitalism’, ‘the state’, and so on can tell us about nation-forming and -forcing processes. At this point we could be looking for the kinds of emergent, self-organizing poly- or para-intentional actor networks and assemblages Gramsci was starting to notice and trying to reconcile with the structure/agency constraints of the Marxist revolutionary project and conceptual vocabulary. I guess if I were to frame this polemically I might say something about getting out of the agency metanarrative without falling through its structuralist or post-structuralist looking-glasses, but that all seems a little tired now and I’m much more interested in theories as hypotheses for figuring out what was going on and how it was going on, at various scales.

Getting down to cases and figuring stuff out is what my teaching is about at this point. So I would want to translate what I learn about the Risorgimento and its transnational linkages back to my classes in World History and Modern Europe both as content and as a model of how to do good analysis; and then extend those practices to other cases. For example, perhaps to look at trasformismo in comparison to other self-organizing, quasi-political strategies to manage the intractable complexities of modernization; or to investigate in my “Gender and History” class how the particular gender formations of modern Italy evolved around and through the opportunities and constraints created by the ‘fare Italiani’ project in its local, regional, national, and transnational contexts.

I’m out on so many limbs here I have to hope they weave together into something that will support a little weight. But I really like the idea of taking the stuff we’ve all been thinking about here at DV for quite awhile and focusing it on a notoriously hairy case study. Maybe the hair is inherent.

Which brings me to “Post-Democracy in Italy and Europe” at Crooked Timber. I haven’t read the book by Colin Crouch that’s under discussion, but it seems to me that to call the advanced industrialized countries ‘post-democratic’ they’d have had to once be democratic and now not be. And at least in the Italian case I’ve just sketched out, I’m not sure anything like that sort of categorical delimitation of the discussion can do anything but confuse us. Italy right now is more or less just as democratic as it’s been at least since the Risorgimento and arguably since the Renaissance, which is to say, not at all if we mean by democracy a formal system in which popular votes lead directly to explicit policy outcomes and intentional transformations of collective life; and amply, if what we mean by democracy is one domain of self-organizing dynamical systems – like markets, patronage networks, trade complexes, families, fashion – that take unmanageably complex inputs and constrain them into orderly outputs. And we can notice that while each of these systems creates means for human intentions to be effective, they do so by radically constraining what humans are able to effectively intend, in relation to more comprehensive systems that work the same way. Freedom is the recognition of necessity after all.

December 4, 2012

Can Odd Monisms Ruin Nagel’s Book? (4,3,6)

by Asher Kay

Yeah, that’s right — I used a cryptic crossword clue as a post title. I was going to go with “Something It Is Like To Be Bemused And a Little Relieved”, but that sounded too much like David Foster Wallace.

If you have solved my clue/title, you’ll know that this post is about Thomas Nagel’s newest book, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False.

The first thing likely to grab your attention is the subtitle, which might seem ever so slightly strident if you were not aware that shortly before publication, he toned it down from, “Come At Me, You Worthless Reductionist Pissants”. Happily, the book’s content does not reflect the vociferousness of the subtitle — it’s actually a pretty humble and friendly book. Nagel, in fact, doesn’t even explicitly say that Darwinism is false. He says that “psychophysical reductionism” is false, and by “psychophysical reductionism”, he seems to mean an array of things, some of which will strike the physicalist as strawmen (such as the idea that everything can be reduced to physics), and some of which will just seem a bit tone deaf (like the “reducibility of the mental to the physical”, which doesn’t really involve a reduction, per se, for someone who holds that everything is physical).

One could spend an entire post poking holes in Nagel’s conception of the physicalist stance (see Leiter and Weisberg’s recent review in The Nation if you already had your hole-poker out), but it’s a big topic, and I think it makes more sense to lay out a positive description of physicalism and show how some of Nagel’s objections look in light of that than it does to discuss it from the negative viewpoint of refuting someone. I’ll try to do a post on that soon.

What I want to discuss here are the several things that puzzled me about Mind and Cosmos. The first is Nagel’s conception of “value realism” (he also calls it “moral realism” in some places). The basic idea of value realism, for Nagel, is that the truths indicated by value and moral judgments are truths that are not dependent on anything else — they are true in themselves:

Realism is not a metaphysical theory of the ground of moral and evaluative truth. It is a metaphysical position only in the negative sense that it denies that all basic truth is either natural or mathematical. It is metaphysical only if the denial of a metaphysical position like naturalism itself counts as a metaphysical position. But value realism does not maintain that value judgments are made true or false by anything else, natural or supernatural.

Of course natural facts are what make some value judgments true, in the sense that they are the facts that provide reasons for and against action. In that sense the fact that you will run over a dog if you don’t step on the brakes makes it the case that you should step on the brakes. But the general moral truth that licenses this inference — namely that it counts in favor of doing something that it will avoid grievous harm to a sentient creature — is not made true by any fact of any other kind. It is nothing but itself.

For me, this view runs into two problems. The first is the question of how we are able to access these truths. Nagel doesn’t address this issue directly, and my sense is that he would not see it as a problem at all. He seems to be saying that we have access to them in the same way we have access to mathematical truths, but I don’t see how that makes the question any easier since we don’t have an explanation of how we access those either. Since the point of the book is to push for non-physicalist theories of mental processes, I am guessing that those theories are where Nagel would expect the question to be addressed.

The second problem is the weird duality of pleasure and pain. If, as Nagel says, “pain is really bad, and not just something we hate, and that pleasure is really good, and not just something we like”; and if the good and bad of pleasure and pain are not dependent on anything else (our like/dislike, their physical manifestations, the evolutionary consequences of our reactions to them, etc.); then we seem to have an awfully big coincidence going on:

Describing it is tricky, since it is obvious that biologically hard-wired pleasure and pain play a vital role in the fitness of conscious creatures even if their objective value doesn’t. The realist position must be that these experiences which have desire and aversion as part of their essence also have positive and negative value in themselves, and that this is evident to us on reflection, even though it is not a necessary part of the evolutionary explanation of why they are associated with certain bodily episodes, such as sex, eating, or injury. They are adaptive, but they are something more than that. While they are not the only things that have objective value, these experiences are among the most conspicuous phenomena by which value enters the universe, and the clearest examples through which we become acquainted with real value.

In the realist interpretation, pleasure and pain have a double nature. In virtue of the attraction and aversion that is essential to them, they play a vital role in survival and fitness, and their association with specific biological functions and malfunctions can be explained by natural selection. But for beings like ourselves, capable of practical reason, they are also objects of reflective consciousness, beginning with the judgment that pleasure and pain are good and bad in themselves and leading on, along with other values, to more systematic and elaborate recognition of reasons for action and principles governing their combination and interaction, and ultimately to moral principles.

Remember that for Nagel, there’s nothing metaphysical going on — no “root cause” that leads both to the truth that pain is bad and to our visceral aversion to it. As Nagel appears to recognize, this leads us toward a sort of dualism. I’d go further and say that it’s the same sort of dualism that gets us in trouble when we accept the mind/body problem as a real problem. Nagel also recognizes that his conclusion relies heavily on intuition: “That is just how they glaringly seem to me, however hard I try to imagine the contrary, and I suspect the same is true of most people”. Mind and Cosmos is refreshingly honest when it comes to intuition.

Okay, so that’s the value realism thing. The other puzzling thing for me was Nagel’s conclusion that an evolutionary account of reason is impossible because it is necessarily circular:

By contrast [to the case of perception], in a case of reasoning, if it is basic enough, the only thing to think is that I have grasped the truth directly. I cannot pull back from a logical inference and reconfirm it with the reflection that the reliability of my logical thought processes is consistent with the hypothesis that evolution has selected them for accuracy. That would drastically weaken the logical claim. Furthermore, in the formulation of that explanation, as in the parallel explanation of the reliability of the senses, logical judgments of consistency and inconsistency have to occur without these qualifications, as direct apprehensions of the truth. It is not possible to think, ‘Reliance on my reason, including my reliance on this very judgment, is reasonable because it is consistent with its having an evolutionary explanation.’ Therefore any evolutionary account of the place of reason presupposes reason’s validity and cannot confirm it without circularity.

My first intuitive response was to think that Nagel was going a little easy on perception. Isn’t an evolutionary theory of perception open to the same problem, since we are relying on our perceptions (empirical measurements through scientific instruments) to determine the validity of our theories? Or if we have ways of “checking” our perceptions to make sure they’re valid, wouldn’t the same kind of checking apply to our reasoning process?

I’m reminded of the refutation of moral relativism based on the idea that it makes an absolute claim (i.e., “no moral truths are absolute”). It feels like a trick — that it’s only circular because it’s “about” reason. Plus, as with value realism, this sort of rejection forces us into a position of turning reason into another “true in itself” thing that doesn’t require justification.

It seems to me that reasoning is something way less cool than Nagel makes it out to be. If I perform a reasoning task that takes me from proposition A to proposition Q, all I can say is that proposition Q follows from the procedural rules that I’ve set out. If proposition A is based on a perception of the world, and proposition Q also accords with a perception of the world, I can say that my procedure was successful in producing an inference about the world. Further tests might show that the procedure is wildly successful in producing a bunch of successful inferences about a bunch of things in the world. So when I’m “presupposing” reason when I theorize about reason’s awesomeness, all I’m really doing is saying that my confidence is high because the procedure I followed tends to be highly successful in making inferences about that kind of thing.

It’s really easy (for me, at least) to imagine this as a sort of algorithm-generating process that continuously takes A-propositions from perceptions, runs them through sets of rules, then tests the resulting Q-propositions against perceptions. Those algorithms that result in high “accordance” rates get weighted up and preferentially used. Those that don’t get weighted down and eventually wither away. If such a process occurs unconsciously and is repeated over years and years, even at early stages of an organism’s life, the adult organism would probably end up intuiting that the successful algorithms are “just true in themselves”. And if the external environment is perceptually consistent enough – if there is, in philosophical parlance, a metaphysical basis for the concordance between inferences and perceptions – those algorithms are going to be both easily discovered and widely applicable, given the right kind of hardware.
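That algorithm-generating process is concrete enough to sketch in code. Here is a toy version (entirely my own construction; the rules, the "world", and the weighting factors are made-up illustrations, not anyone's actual model of cognition):

```python
import random

# Toy illustration: inference "algorithms" map an A-proposition to a
# Q-proposition, and their weights rise or fall with how often the
# Q-propositions accord with perception.

def accurate_rule(a):      # happens to track the world
    return a + 1

def inaccurate_rule(a):    # fails to track the world
    return a - 1

def world(a):              # stand-in for "perception": what is actually observed
    return a + 1

# each entry pairs a rule with a weight, initially equal
rules = {"accurate": [accurate_rule, 1.0], "inaccurate": [inaccurate_rule, 1.0]}

for _ in range(100):
    a = random.randint(0, 10)           # an A-proposition from perception
    for name, entry in rules.items():
        rule, weight = entry
        q = rule(a)                     # run it through the rule
        if q == world(a):               # test Q against perception
            entry[1] = weight * 1.1     # weight up on accordance
        else:
            entry[1] = weight * 0.9     # weight down; eventually withers away

# after enough repetition the accordant rule dominates and the other withers
print(rules["accurate"][1] > rules["inaccurate"][1])   # True
```

An organism running something like this unconsciously, over years, would plausibly end up intuiting the surviving rules as "just true in themselves", which is all the passage above requires.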

UPDATE: A recent review of Mind and Cosmos from John Dupré at Exeter contains a wonderful summary of the physicalist stance:

So here is the first problem. Reductionism can be understood as a metaphysical thesis, typically based on an argument that if there is only material stuff in the world (no spooky stuff), then the properties of stuff must ultimately explain everything. This is a controversial thesis, much debated by philosophers. But what the last 50 years of work in the philosophy of science has established is that this kind of reductionism has little relevance to science. Even if it turned out that most scientists believed something like this (which I find incredible) this would be a psychological oddity, not a deep insight about science. A more sensible materialism goes no further than the rejection of spooky stuff: whatever kinds of stuff there may turn out to be and whatever they turn out to do, they are, as long as this turning out is empirically grounded, ipso facto not spooky. Such a materialism is quite untouched by Nagel’s arguments.

I think critics of physicalism find this sort of stance to be extremely frustrating. If physical = non-spooky, then it could be said that everything we have a coherent theory of is physical, and that everything that seems spooky now will eventually be considered physical when we have a coherent theory of it. The only way for a critic to keep something permanently non-physical is to argue that no coherent scientific theory of it is possible (which is kind of what Nagel is trying to do with subjective experience and value judgements).

September 16, 2012

I’ll show you mine if you show me yours

by Carl Dyke

Promoting a comment on a previous post to start off this post: I’ve been baking a lot of bread lately. I’d dabbled before, but I started getting a bit serious about yeast-wrangling. I’ve read a lot of descriptions of the process, discussion boards and so on. The thing that gets (or should get) really clear really quickly is that a ‘recipe’ just barely gets you started. And you can talk about the biochemistry of yeast and lactobacilli and hydration ratios and such and it’s very illuminating. And you can provide guidelines about kneading and folding techniques and rates, and what the dough ought to look and feel like at various stages. All of that is awesome and a great start. But in relation to actually working up a dough it’s all ridiculously overelaborated and kind of beside the point. There are some things you want to mix together in rough rates, proportions and timings. There’s a way they should look and feel. You do stuff until you get that look and feel. What stuff you do exactly depends on what it felt like when you did that other thing a second ago. Maybe you fold, maybe you stretch, maybe you pull, maybe you push. And if you do that, and trust the process and set up the yeast to do its thing and don’t try to impose your will on it, you end up with delicious bread. If you don’t, you post frantic questions on discussion boards about why you didn’t get a crown or why your crumb is too dense or whatever.

Teaching is the same, except in this case the recipe is the syllabus. So when colleagues think they’ve communicated what their class is by sharing their syllabus, I just hang my head.

In my experience there’s a kind of porno for eggheads quality to syllabus-sharing. Ooooh, check out the size of that reading list! As I just said in commenting on Tim Burke’s recent post asking for feedback on his intriguing draft syllabus for a course called “Bad Research and Informational Heresies,” a reading list and its associated assignments are not very helpful to me for envisioning a class. Those parts are aspirational and maybe even outright fantasies, as I remarked there. All sorts of reading lists and assignments can work or not work, but that depends on the teaching and learning relationship, that is, not just the recipe but what teacher(s) and students do with it, which in turn depends on a complex of dispositions, expectations, practices and relationships that have to be worked through in each case and that can’t be forced based on preconceptions of what college/teaching/students are supposed to be. Is it possible to say anything useful about those variables in a syllabus? Well, I’ve been trying to gradually get better at that over the years – if you’re curious, here’s this semester’s World History syllabus:

&his104f12.dyke

Not much of a reading list, I’m afraid, but lots of other things I wonder what folks think of.

July 8, 2012

Constraint and the perfect shot

by Carl Dyke

Watching Andy Murray play Roger Federer (Wimbledon 2012) I’m struck by another case of constraint causation. The court, the surface, the net, the weather, the opponent, the rules of the game and a million other factors large and small create the conditions of the game in general and its particular instance in this match. They do so by ruling out all the things the game isn’t (backgammon, a brit milah, a French Open quarterfinal between Williams and Sharapova) and all the people who haven’t ‘earned’ their way there, so now it’s Fed and Murray in the Wimbledon men’s final.

All this sets the scene for both the match and my observation about it, which is that sometimes Murray second-guesses himself in the middle of a shot. Every point begins with a highly constrained space of possible plays – the serve – which then has a virtual infinity of permutations of speed, spin, angle, target. This wave function of possibility collapses into an actual serve which is just one of the shots it might have been, calling forth in turn a service return that has a possibility space constrained by all the conditions previously mentioned plus the particulars of the instant serve.

It is of course impossible to hit your return both crosscourt and down the line, both hard and soft, both topspinned and underspinned, although in theory and early in the process all of those are live and potentially good options. Every shot works like this – a big but constrained space of possibilities that must be collapsed into a single actual shot. And this is the thing that Murray sometimes fails to do – he seems to stay aware of options through the point of execution, consequently hitting shots that are trying impossibly to be both or all of the good options he needed to pick between.

The perfect tennis shot only becomes so by rigorously not being every other possible shot.

May 22, 2012

Out of the box

by Carl Dyke

We’ve been talking about constraint and causation (or ‘enablement’, as Garfinkel might say), and this morning I’ve stumbled into a chain of associations that illustrate the point. Specifically, two juxtaposed reviews in the NYRB, on Edward St. Aubyn’s Melrose novels and Margaret Wertheim’s Physics on the Fringe; the book Rachel is reading, Charlatan, on medical quackery in the fin de siècle; her previous research on Olaus Rudbeck; and a movie we just watched, “(untitled).” All of these are cautionary tales about thinking outside the box, and therefore reminders of the enabling function of boxes.

Let’s start with Rudbeck, a Swedish scientist who taught Linnaeus and (perhaps) discovered the limbic system. Rightly celebrated as a Renaissance man, he spent the second half of his life, and blew his reputation, pursuing his idée fixe that Atlantis had been in Sweden. Clearly a creative thinker, once he got into a field where his thinking was unconstrained by conventions and a developmental programme of investigation he came unglued and started making stuff up to suit his emotional preferences, then selectively interpreting the evidence to fit. This fact was clear to everyone but him.

In the review of Wertheim, Freeman Dyson tells a similar story about Sir Arthur Eddington, a brilliant astronomer whose observations of deflected starlight were instrumental to the experimental support of Einsteinian relativity, and whose lucid writing and teaching on the subject helped establish the new orthodoxy. But Eddington also had his own “Fundamental Theory,” an idiosyncratic mishmash of “mathematical and verbal arguments… [with] no firm basis either in physics or mathematics.” “Two facts were clear. First, Eddington was talking nonsense. Second, in spite of the nonsense, he was still a great man.”

What’s striking about these examples is how people exquisitely functional within one set of conventions can spectacularly implode outside them, and without any apparent reflexive awareness that this is the case. St. Aubyn’s novels (which I have not read) would seem to be excruciating meditations on this theme. Patrick Melrose, the main character, is an unwilling participant observer in a horrifying upper-crust British social milieu in which publicly effective people behave abominably to each other in private, with no apparent sense of disconnect. In fact, they seem to use the effective parts of their lives as systematic displacements of self-reflection. Patrick, in contrast, is practically disabled by self-awareness (“how could he think his way out of the problem when the problem was the way he thought”) and floats through drug addiction before finally working himself around to an effective balance of interiority and exteriority.

Charlatan is about a guy who got rich transplanting goat testicles into the scrota of men anxious about their virility. Needless to say this was a fool’s errand and a septic nightmare, but neither he nor his patients seemed clear on these obvious facts. In Physics on the Fringe Wertheim writes about Jim Carter, a successful engineer and entrepreneur who spends his spare time concocting experiments to prove his pet theory that the universe is composed of hierarchies of “circlons,” of which smoke rings are the demonstrative exemplars. It turns out that unbeknownst to Carter a very similar theory was once entertained by Lord Kelvin, but dropped for lack of convincing evidence – despite/because of experiments much like Carter’s, experiments which he finds amply probative, although he cannot convince the scientific community to agree.

In his review of Wertheim, Dyson champions the fringe creatives working outside the box as courageous poetic visionaries. But the tricky thing is figuring out what the ‘good’ versions of this are, since both psychosis and ordinary crackpottery are also often characterized by poetic vision. “untitled” comes at this question from the arts side and shows that Dyson’s offloading of the question onto art only works because his understanding of art is romantic. (Of course he does not know this about himself.) The movie’s central characters are an experimental musician, his brother the painter, and the gallerist who takes an interest in both. The painter is a hack, but does not know it; his paintings sell very well to hospital chains for use as soothing motifs in their lobbies, which is how the gallerist funds her showings of the serious art that does not sell. The musician produces elaborate cacophonies; he tells us that tonality is over, now just a matter of “pushing notes around,” which is essentially what his brother the painter is doing with color. The problem is that although it’s clear the painter is a hack, it’s not at all clear whether the musician is something better. There are norms of judgment for the former, but not the latter. Is that just unpleasant noise, or is it a brilliant meditation on the contingency of norms of pleasantness? As the musician tells us, all sound is noise unless it’s welcome. What makes it welcome?

The problem turns out to be that outside the box, there’s no way to settle these questions, to move things forward or even to know what forward would be. “It’s all good,” as they say. But a river without banks is a swamp. So constraint, a box of some kind, is essential to getting anything done, even if all it does is provide the contrast space against which plausible innovation can be measured. Is that enough of a point for this post? It will have to be, because I’ve said all I had in mind to say at this time.

February 1, 2012

Re-vole-ution

by Asher Kay

That’s right, bitches.

My life has changed several times since I last dropped a disemboweled little critter on this blogospheric porch-step. But recently, I came face-to-face with a vole that has been hiding out around my place for years, nocturnally rooting through the garbage bin and occasionally scaring all the cats. It’s not one of those eensy voles either — this one is a monster. It’s so brobdingnagian that I will need to dismember it and carry it piece-by-bloody-piece to the patio window.

But I’m not going to do it in the annoyingly metaphorical style of the previous paragraph. Nor will I use words like “brobdingnagian”. I don’t even like that word.

Okay, so this is a sort of teaser post. All of the setup and none of the stunts.

Have you ever read a book that seemed to know what you’d been thinking about for the last five years? I’ve read about half of one, and it is Incomplete Nature, by Terrence Deacon.

About two years ago, I did a post here called Causation, Reduction, Emergence, and Marbles. It was mostly about reductionism and predictability, but I had this to say about causality:

My stance is that causality is really a much, much looser concept than physical science would make it seem. Over time, physical science has corralled causality into a smaller and smaller area — but that area is occupied by some pretty inscrutable things — things like “forces”, which end up being mostly tautological at a paradigmatic level (“it’s a force because it makes things move — it makes things move because it’s a force”), and metaphorically hinky at the level of theory (gauge bosons as “virtual particles”).

So when we think about the neuronal “causing” the mental, we usually have in mind some sort of physical-science-like efficient causality, because that’s what we see as operating at the molecular level of description that neural networks inhabit.

But the question is — why are there multiple levels of organization at all? Is reality really separated into strata of magnification, with causality operating horizontally within a layer and vertically between layers? If so, are the vertical and horizontal causalities the same *kind* of causality?

Basically, I was thinking about an old argument amongst emergentists about the possibility of “downward causality”. There are tons of problems with the notion of downward causality, but my particular problem was the difficulty of thinking about a model of emergent, stratified reality in which nothing more than the standard, modern, efficient causality of the physical sciences played a part. It’s so difficult to think about that it’s hard to even figure out why it’s so difficult to think about. It’s the kind of problem that makes you start to wonder if maybe we just don’t really have a firm understanding of causality. But in a world where we can annihilate a couple hundred thousand people in an instant with our notion of causality, this is pretty much a heretical thought — or at least the kind of thought you don’t feel comfortable entertaining until you’ve done some post-graduate work in particle physics.

Despite the discomfort, I entertained the thought, in a playfully non-rigorous way. If you allow that there might be additional sorts of causation, you’re free to change the model around (or abandon it) and see what you come up with. My suspicion was that the additional sort of causality, if there was one, had to be related to the fact that in complex dynamical systems (or self-organized systems, or “emergent” systems), there are a lot more parts interacting and relating to one another than there are in the sort of billiard-ball examples we tend to imagine when thinking about efficient causality. And if that was the case, then the additional sort of causality was essentially mereological, since the cross-strata nature of this causality would be tied up with the relation of the parts to the whole system. The key to that, in my opinion, was the idea of “constraint”. To me, this was sort of like the flip side of an efficient cause. A constraint can be thought of as a causal “force” in that it disallows a dynamical system from occupying certain positions in the system’s state-space.
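That flip side can be played with in a few lines of code. This is just my own toy illustration, not Deacon's formalism or anyone's published model: a tiny state-space, and constraints acting not by pushing the system anywhere but by disallowing regions of that space.

```python
from itertools import product

# Unconstrained state-space: every combination of three binary components.
state_space = set(product([0, 1], repeat=3))

# Constraints are predicates naming what the system may NOT fail to satisfy;
# each one excludes some region of the state-space.
constraints = [
    lambda s: s[0] == s[1],        # components 0 and 1 must agree
    lambda s: sum(s) >= 1,         # the all-zero state is excluded
]

# The accessible states are what's left after exclusion.
accessible = {s for s in state_space if all(c(s) for c in constraints)}

# The "organization" here is nothing added to the parts: it is just the
# difference between what could have been occupied and what can be.
print(len(state_space), len(accessible))   # prints: 8 3
```

Nothing "over and above" the components appears anywhere in the code, yet the constrained system behaves differently from the unconstrained one, which is the figure/background reversal in miniature.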

You can see me start to fiddle with the idea of constraint in the comments section of the same post. I say things like:

What I’m beginning to think is that causality is emergent in the same way that properties like “transparency” or “consciousness” are emergent. At the subatomic level, we have all these efficient causes (weak and strong, electromagnetic, gravitational), but at higher level, different sorts of causality actually emerge — larger “forces” that act mainly as “constraints of organization”. So what I’m trying to think through is how we can look at “organization” as causality. I think this will end up helping me to conceptualize levels of organization in a way that places them in the “real world”.

And:

I agree, though on the need for a careful mereological/emergence distinction. In a sense, maybe it’s the same thing as a distinction between causality and “relation”. If so, the idea of causality as “constraint” could help in formulating the distinction.

The ideas I was expressing were obviously not well-developed then, but the basic line of thought was: 1) questioning whether we really understood causality in complex systems; 2) the suspicion that the current model wasn’t adequate to emergent systems at a higher level; and 3) the notion that the idea of constraints could help in re-working the model.

So now it’s two years and some odd months later, and I discover that Terrence Deacon has a new book out (this is a very exciting thing for me — his last book, The Symbolic Species, is one of my all-time favorites, and it was published in 1997). The book is ostensibly about “How Mind Emerged From Matter”, but since it’s Deacon, you can pretty much count on it being about a whole lot more.

And it is. It’s about emergence and causality and, best of all, constraint. There’s even a whole chapter called “Constraint”!

Take a moment to imagine my joy.

I’m only a little more than halfway through the book, but I’m starting to think that Deacon has actually found a way to re-think the model. His approach is strange, tortuous, detailed, counter-intuitive, and involves the same sort of mind-blowing figure/background switch he performed in The Symbolic Species. Here’s just a little taste:

The concept of constraint is, in effect, a complementary concept to order, habit, and organization, because it determines a similarity class by exclusion. Paying attention to the critical role played by constraints in the determination of causal processes offers us a figure/background reversal that will turn out to be critical to addressing some of the more problematic issues standing in the way of developing a scientific theory of emergence. In this way, we avoid assuming that abstract properties have physical potency, and yet do not altogether abandon the notion that certain general properties can produce other general properties as causal consequences. This is because the concept of constraint does not treat organization as though it is something added to a process or to an ensemble of elements. It is not something over and above these constituents and their relationships to one another. And yet it neither demotes organization to mere descriptive status nor does it confuse organization with the specifics of the components and their particular singular relationships to one another. Constraints are what is not there but could have been, irrespective of whether this is registered by any act of observation.

What I’m planning to do (and it may take a while) is create a series of posts on the ideas Deacon puts forth in Incomplete Nature. If the quote is not enough of a teaser, I will add that the journey involves the number zero, a partial resurrection of Aristotle, boxes full of air, Charles Sanders Peirce, at least four neologisms, the siren-song of mereology, and a totally new perspective on object-oriented philosophy.

Stay tuned, beotches.

December 7, 2011

Useful uselessness

by Carl Dyke

Bookmark here. Something to connect to previous posts and conference papers about the usefulness of history being its uselessness. Found in Peter Manseau’s review of Robert Bellah’s Religion in Human Evolution:

All animals of a certain level of complexity, Bellah explains, engage in forms of “useful uselessness,” the developmental psychologist Alison Gopnik’s term for behaviors that do not contribute to short-term survival yet do ensure long-term flourishing. In the play of animals, we can see a number of interesting elements: The action of play has limited immediate function; it is done for its own sake; it seems to alter existing social hierarchies; it is done again and again; and it is done within a “relaxed field,” during periods of calm and safety. Put another way: Play is time within time. It suggests to its participants the existence of multiple realities—one in which survival is the only measure of success, and another in which a different logic seems to apply.

‘Useful uselessness’ is how I’ve been framing history, so I’ll need to track down Gopnik. Other links: Gramsci’s advocacy of ‘dead languages’, Hegel’s remark about history being too different from the present to offer useful lessons, Watzlawick et al.’s critique of Freudian psychology to the effect that knowing the causal origins of a complex in one’s developmental history is of no use in resolving it since we cannot go back in time and change them.

Aren’t all of the humanities, at least as taught in Gen Ed to people who will not be following them into serious scholarship, this kind of useful uselessness? Wouldn’t it be good to be clear about this fact and be appropriately playful about them?

October 14, 2011

Ponzirama

by Carl Dyke

There’s Madoff. Then there’s Social Security according to Rick Perry. Now here’s an essay (from a website about a book) that ups the ante. Ellen Hodgson Brown argues that the entire global financial system is a Ponzi scheme.

Brown elegantly shows how the whole notion that the national debt has to be paid down or paid off is a red herring, a fundamental misunderstanding of how the system works (money is debt; the national debt is, essentially, the national money; it is therefore constantly both paying itself off and recreating itself in the normal course). But she also shows how leaving the creation of the debt/money supply in private hands, as it is now, keeps interest from circulating back into the economy where it can be earned back by debtors and used ongoingly to pay their debts, making the system unsustainable. Essentially this creates toxic debt sinks that eventually have to fill up, so that the deficit fretters end up being right albeit for the wrong reasons. She recommends public banking as the solution, which as she describes the problem does seem sensible, albeit further infuriating for the Ron Pauls (warning: balky script at this link) of the world.
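The arithmetic of that drain can be sketched with a toy simulation. The numbers and horizon here are mine, purely for illustration of the mechanism as I understand it, not figures from Brown's essay: all money enters circulation as loan principal, interest is owed on top of it, and the lender keeps the interest out of circulation.

```python
# Toy model: money lent into existence, interest drained out of circulation.
principal = 1000.0          # money supply, created as debt
rate = 0.05                 # annual interest owed on the outstanding principal

money_in_circulation = principal

for year in range(1, 21):
    interest_paid = principal * rate
    money_in_circulation -= interest_paid   # interest goes to the lender and
                                            # is not spent back into the economy
    # the principal itself is still owed in full

# After 20 years every unit of money has drained away as interest,
# while the original principal remains owed and now unpayable.
print(money_in_circulation)   # prints: 0.0
```

If the interest were instead spent back into circulation, or collected by a public bank and returned as revenue, the drain disappears, which is the gist of the public-banking remedy as described.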

The essay clarifies some things nicely and I recommend it. At the same time I’m suspicious of this kind of clarity, which feels a lot like the sort of self-help advice where everything will be cool if you exercise, eat right and get plenty of fiber. I have this intuition, maybe small-minded and self-serving, maybe I can get some Dao cred, or maybe it’s the same thing, that problems on a global scale are fundamentally unfathomable, indeed that to treat facts at that scale as problems is a kind of existential category error. Of course I know better from Marx, but then again we’re still waiting for Marx to pay off on the solution side.

August 12, 2011

Relative immiseration

by Carl Dyke

Does fiscal consolidation lead to social unrest? From the end of the Weimar Republic in Germany in the 1930s to anti-government demonstrations in Greece in 2010-11, austerity has tended to go hand in hand with politically motivated violence and social instability. In this paper, we assemble cross country evidence for the period 1919 to the present, and examine the extent to which societies become unstable after budget cuts. The results show a clear positive correlation between fiscal retrenchment and instability. We test if the relationship simply reflects economic downturns, and conclude that this is not the key factor. We also analyse interactions with various economic and political variables. While autocracies and democracies show broadly similar responses to budget cuts, countries with more constraints on the executive are less likely to see unrest as a result of austerity measures. Growing media penetration does not lead to a stronger effect of cut-backs on the level of unrest.

That’s the abstract of a long Centre for Economic Policy Research working paper (pdf), “Austerity and Anarchy: Budget Cuts and Social Unrest in Europe, 1919-2009” by Jacopo Ponticelli, Universitat Pompeu Fabra and Hans-Joachim Voth, UPF-ICREA, CREI and CEPR. Thanks to Duncan Law.

The dynamic is long familiar in social movement theory, often referred to as the ‘relative immiseration’ effect. It’s also familiar to people with more than one child. Basically, when you give folks stuff and then take it away, or give them relatively less stuff than reference groups, they get way more pissed off than if they never had anything to start with or deprivation is evenly distributed.

Relative immiseration is an important corrective to vulgarizations of Marxism in which capitalism is supposed to precipitate its own demise only if it reduces the working class to absolute abjection. Not so – just as all needs beyond mere subsistence are relative to particular social formations, revolutionary immiseration is relative to the general standard of well-being of particular social formations. Nowadays the poor in Western societies mostly have indoor plumbing that was not available even to kings just a few centuries ago. (They have fridges and microwaves, yes.) But that’s not the relevant measure of degradation – it’s where the poor stand in relation to the rich now. And as is well-known, that gap has been widening. The borrowing powers of governments have been filling the gap for the past several decades, but that compensatory regime seems to be hitting its unsustainability threshold. We live in interesting times.

Of course there’s nothing that says capitalists have to keep driving relative immiseration toward the brink. At least since Bismarck and the Gilded Age smart elites have recognized the need to spread the wealth to some degree to purchase social peace and secure the conditions for continued profit. All it takes is withdrawing some capital from speculative ‘investment’ and using it instead, directly or through government transfers, to build the consumption side of the economy – namely by hiring people and paying them well, whether they ‘earn’ it or not – compensating according to need, not productivity, as Marx argued and Jim Livingston keeps arguing.

If paying people to be consumers out of scale with their productivity seems immoral, it’s worth remembering that while credit default swaps may be called ‘products’ in the ‘industry’, they’re not actually making anything but wealth either. Aren’t (relative) need and general prosperity enough to ground public morals?

UPDATE: Dave Mazella at The Long Eighteenth has been rereading E.P. Thompson on “The Moral Economy of the English Crowd in the Eighteenth Century” and finds rioters “trying to restore traditional understandings of collective rights and reciprocities, traditions that elites disrupted or ignored at their peril.” This is consistent with both the analysis here and JohnM’s disambiguating comment below, but adding another layer: I often have to resist the activist reflex to see in every little upheaval a foretaste of revolution, and Thompson reminds us of the complex dynamic robustness of existing arrangements.

August 2, 2011

Steering and the ruts

by Carl Dyke

“He told me years later that serving the church in Oxford reminded him of driving an old Model T Ford on a muddy country road; the steering column had so much play in it that turning the wheel didn’t do much good and the car just followed the ruts anyway.”

Tim Tyson, Blood Done Sign My Name

April 11, 2011

Energy and Curiosity, the Wisdom of Robertson Davies

by johnmccreery

I am in one of those fey moods where I find myself rereading books that, after a long wait, have spoken to me and demanded to be read again. The book now in question is Rebel Angels by the Canadian novelist Robertson Davies. No one to my mind does academic comedy better. One thing that makes his books worth rereading is the bits of wisdom that pop up here and there. I thought of Carl again as I read the following passage.

Energy and curiosity are the lifeblood of universities; the desire to find out, to uncover, to dig deeper, to puzzle out obscurities, is the spirit of the university, and it is a channelling of that unresting curiosity that holds mankind together. As for energy, only those who have never tried it for a week or two can suppose that the pursuit of knowledge does not demand a strength and determination, a resolve not to be beaten, that is a special kind of energy, and those who lack it or have it only in small store will never be scholars or teachers, because real teaching demands energy as well. To instruct calls for energy, and to remain almost silent, but watchful and helpful, while students instruct themselves, calls for even greater energy. To see someone fall (which will teach him not to fall again) when a word from you would keep him on his feet but ignorant of an important danger, is one of the tasks of the teacher that calls for special energy, because holding in is more demanding than crying out.

December 6, 2010

Monologue tolerance

by Carl Dyke

As you may know, Bob, I was trained in one of the smaller and more obscure subdisciplines, a little thing we like to call ‘Intellectual History’ (or sometimes ‘intellectual and cultural history’ if we’re aware, however dimly, that people other than official intellectuals have an intellectual history). Even in the high academy we’re pretty ornamental and there aren’t usually a lot of us around. So it’s been a blessing of sorts for me to live and work just near enough to the Raleigh/Durham node of big research universities to be able to attend the meetings of the Triangle Intellectual History Seminar.

The seminar often brings in bigwigs to talk about their work in progress, and also offers a forum for members and their advanced graduate students. The level is high and the distribution of expertises is broader than someone outside our little field might think possible. In general the room is packed with very smart people who know a lot of stuff, so in principle it ought to be a thoroughly stimulating experience – you know, like a conference. And even better than most conferences, papers are distributed beforehand and we’re all there intentionally, so everyone arrives prepared on the topic of the day and there’s no need for the slow death of droning paper delivery.

In practice of course there’s a little of that droning, by way of introduction, but it’s mercifully brief and usually offered with some ad libs to keep it fresh. But by academic standards we get down to discussion remarkably quickly, and here is the perfect opportunity for the exciting exchange of ideas that we all imagined academe to be!, before graduate seminars, freshman surveys, and committee meetings blew our brains out like egg yolks. Except that even here, where conditions are seemingly ideal, that exciting exchange does not take place.

Why? Well, there are just some logistical issues when you’ve got 15-20 smart people who all have things to say and can’t say them at once. Can’t have the loud and the quick dominating the discussion, so everyone gets a turn. Time is limited so followups have to be moderated and tangents discouraged. And although everyone likes a good joke, we wouldn’t want to short the presenter on the serious discussion about her important work that she deserves.

The result of these reasonable considerations is that nothing resembling conversation actually takes place. Because she knows she’ll get one shot to say what’s on her mind and then the turn will pass to someone else with their own fish to fry, each speaker produces a well-crafted monologue so dense with premises and implications that the presenter can only respond to a fraction of it, of course with another monologue. And of course all exchanges radiate from the node of the presenter, with no direct interactions between the other participants. It’s all very orderly, lots of smart stuff gets said, it’s productive, certainly worthwhile, even beautiful in its way; and there’s no transformative effervescence, no spark, virtually no chance of the happy accidental flashes of insight that come from free-flowing conversation, improvisation, riffing call and response, theme and variation, the jazz of the mind.

I said there was no conversation, but that’s not quite right. There is, but it’s on a very slow and ponderous (in the sense of pondering) rhythm. As I sit in that room aching for something a little more upbeat, it occurs to me that success in the high academy is in part a function of tolerance for monologues, both delivering and receiving: relatively short ones like those in the room, longer ones like lectures and journal articles, really long ones like books. For ordinary mortals this kind of monologic sensibility is just plain rude, but for the beasts of academe it’s the measure of seriousness. We discipline our young to patience for the monologues of others, and patience for the development of their own; and tsktsk at the minds both bright and dull who won’t or can’t adapt to the deliberate pace of our conversations. No wonder serious academics are leery of bloggery.

Which brings me to my last point. The paper last night was by Lloyd Kramer, a very good historian who was engaged, in it, in a conversation about the right way to do history with his graduate advisors, now very old, and R.R. Palmer, now dead. There was a bit of a recovery of Palmer, an old-school big-picture synthesizer, as against the more fragmented, conflicted history derived from post-structuralism that followed. This is a conversation in which the monologues are at the scale of oeuvres and generations, or rather in which it is only at that scale that the apparent monologues resolve into utterances in a very ponderous conversation indeed. In the course of the ‘discussion’ Lloyd mentioned that one difference between these generations had to do with their understanding of selves and identities: as primordial and singular for Palmer, as dialogically constructed and plural for the post-structuralists. Here I wanted to say that it didn’t take post-structuralism to see self and identity this way, since the insight was there already in Hume, Hegel, Nietzsche, James, Mead and DuBois to name a few. But I held my tongue, and thought about what kind of selves are constructed out of dialogues that take hours, years, lifetimes and generations to unfold.

November 25, 2010

Happy accidents

by Carl Dyke

I am a firm believer in the happy accident. I may have said this before. I don’t mean purely random serendipity. Any dipity-shit can get that sometimes, but mostly not. I mean the sort of emergent event where a loose collection of good elements collated in a loosely enabling process dynamically configure in an unexpectedly, even unexpectably delightful way.

I think it’s possible (by definition, see above) to arrange things so there are more happy accidents, and fewer. The single best way to minimize the possibility of happy accidents is to carefully control everything about the inputs and processes of a situation. In academe one regularly sees this in curriculum and syllabus design, where ponderous machineries of micromanagement are deployed to assure that an outcome better than bad and worse than good occurs. In contrast, a happy accident-friendly situation is characterized by a certain flexibility toward both input and processes. “The best way to control people is to encourage them to be mischievous,” Shunryu Suzuki says (Zen Mind, Beginner’s Mind). Divergence from norms and ideals must be tolerated, even encouraged (selectively and not infinitely, to be sure) on the theory that it’s precisely norms and ideals that are inhibiting the happy accident. Just one of many reasons to be traitorous towards norms and ideals.

Although I pretty much run my life according to the happy-accidental principle of assembling good elements and letting them do their thing, two recent moments brought this into focus for me. The first, about which I’ll need to be vague to protect a personal and collective privacy, happened in one of my classes. As usual we’ve noodled around quite a bit and I’ve tolerated/encouraged all sorts of tangents to cultivate a spirit of investigation and to see where they might go. The other day it all came together in a moment where one of the students made a series of personal revelations that in context were so striking, and so helpful to our understanding of the world around us, that for a moment the class became more than it could possibly have been if I had strictly dictated content and process. Over the course of the semester we had all learned some things together, developed a group process, and established a trust without which this moment wouldn’t have been possible. But any given class meeting might well have seemed like a complete waste of time to a conventional observer.

The second moment was watching a movie Rachel and I quite like, “The Fall,” through the lens of the director’s commentary. Tarsem talks about a process of creation taking 17 years, in which he patiently assembled influences, techniques, collaborators, locations, and favors due. The catalyst was a young Romanian actress to play the lead. Tarsem and the other actors provided a stimulating immersive environment, then allowed her to improvise creatively within that loose structure and bring all the elements together into an imaginative whole much greater than the sum of the parts.

So many great things work like this: jazz, inspired oratory, the Iron Chef, Dutch soccer. As the saying goes (usually credited to Scott Adams, though often attributed to Picasso), “Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.”

July 8, 2010

Oscillation

by Carl Dyke

One of the basic misconceptions about ‘global warming’ is that there should be a smooth upward trend of temperatures all over. A cold winter means it can’t possibly be warming. But actually climate is made up, as I’ve recently learned, of a whole mess of coupled quasi-oscillators, systems that swing through a series of states tending toward return to origins (seasons), but with drift. In this case the drift is the warming, but the oscillation means that at moments in the linked oscillation parts of the system may well be swinging low while other parts are swinging up.

I was sitting outside just now playing a game I grimly enjoy with a simple natural quasi-oscillator, the mosquito. I have this tennis-racquet-looking device that uses batteries and a metal mesh to deliver a mild but mosquito-killing shock. In the process of getting good at killing mosquitoes, first with my hands and then with this newfangled mosquitocidal contraption, I’ve noted that mosquitoes’ basic move while feeding is to oscillate. They loop back and forth around a target area, drawing nearer and swinging farther until the coast clears, at which point they shorten the period of oscillation down to a landing. With feeding mosquitoes, like climate, there is no straight point-to-point flight.

Because the mosquitoes’ period and amplitude of oscillation vary their flight can look random, but it’s not – it’s non-linear but quite orderly. This makes mosquitoes almost as hard to track and swat as a purely random path, but pure randomness wouldn’t get them food. So my tender white flesh is an attractor (perhaps a strange one) around which the mosquitoes oscillate, reacting to movement and opportunity by swinging out or swinging in, with the oscillations drifting toward a meal.

Waving the killer doohickey around randomly will occasionally intersect one of these paths, but what works way better is to swing it back and forth in the same oscillating pattern as the mosquitoes, only slightly faster than them so that the linked oscillations have the chance to intersect on both the way out and the way in. If you’ve ever tried to stop someone (maybe yourself) on a swing you know how this works. You want to be pushing forward while the swing’s going back, and back when it’s going forward. Get this counter-period wrong and you just amplify the swing or knock the swinger on his ass.
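For the curious, the swing-stopping trick can be sketched in a few lines of toy physics. This is just an idealized oscillator with a fixed-size push, not a model of any real swing or mosquito; the function name and numbers are invented for illustration:

```python
def simulate_swing(push_with_motion, steps=20000, dt=0.001):
    """Integrate x'' = -x + F with symplectic Euler, where F is a
    fixed-size push aligned with the velocity (True) or against it (False)."""
    x, v = 1.0, 0.0          # start displaced, at rest
    push = 0.2               # size of each shove (made-up number)
    sign = 1.0 if push_with_motion else -1.0
    peak = 0.0
    for _ in range(steps):
        direction = 1 if v > 0 else (-1 if v < 0 else 0)
        F = sign * push * direction  # with the motion: pumps energy in;
                                     # against it: bleeds energy out
        v += (-x + F) * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak, abs(x)

# Counter-phase pushing (against the motion) damps the swing toward rest...
_, damped_final = simulate_swing(push_with_motion=False)
# ...while in-phase pushing amplifies it, knocking the swinger on his ass.
amplified_peak, _ = simulate_swing(push_with_motion=True)
```

Push against the motion and the amplitude bleeds away by a fixed amount each half-swing; push with it and it grows by the same amount. Get the counter-period wrong, in other words, and you amplify instead of stopping.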

Can we do something like this on a global scale with ocean currents, gas concentrations, absorptive/ablative surfaces, butterfly wings and so on to manage the climate? Wow. Well anyway, be sure to turn off the lights when you leave.

May 27, 2010

Clue processing disorder

by Carl Dyke

Old-school parenting expert John Rosemond takes on the newfangled neurological diagnosis “Sensory Processing Disorder” in this week’s column. Rosemond, who is generally against the trend to turn every little behavioral inconvenience into an acronymed medical condition, reports on a child who was officially diagnosed with this disorder because, in part, she didn’t like her underpants. The good doctor instead diagnosed a case of ‘defiance’, and prescribed a fuss- and distraction-free bedroom environment in which the child was required to stay until she was dressed in whatever she wanted to wear.

Two weeks later, I received the following email from Mom: ‘The very first morning, (daughter) reminded us to remove her sleep toys so she could get dressed. She then put on underwear and clothes and came out for breakfast. She has done this with no tantrums or requests for help since we began two weeks ago.’

At this writing, it’s been five weeks since this little girl complained of her clothes not feeling right.

Like Rosemond, I am neither qualified nor prepared to pronounce definitively on the existence of a neurological disorder in sensory processing, just as I have only personal observation, anecdata and cherry-picked social-psychological theory with which to doubt the neurological foundations of many cases of ‘attention deficit disorder’. It is my general impression that all but the extreme fringes of the survivable human input-output spectrum respond adequately to well-modulated social interaction (that’s what they’re built for, after all) and learned behavior. I do however think that there’s a certain practical value to the current trend of medical diagnoses, since they offer access to some very powerful social experiences.

In any event, what struck me about the little girl’s story was how familiar it was. Distraction-proofing the work environment, putting her in charge of her own process, while creating performance accountability, is exactly how I finally got my dissertation done after several years of fairly creative stalling. It’s also how I get stacks of student papers graded nowadays.

This is also a pretty good model for education and other independent or independentable performances, I think, which means that in a roundabout way I am agreeing with much of what John Doyle says in his current series of posts on educational reform, e.g. “Stop Paying Professors to Teach.”

May 11, 2010

Kick the can

by Carl Dyke

Today on NPR I heard an economist (from the Brookings Institution, if I remember correctly) lament that the various bailouts, reshufflings and austerities now being deployed to avert the European crisis triggered by the Greek meltdown were only treating symptoms, while the fundamental problem with the global economy was not being addressed. That problem, he said, was the underpricing of risk.

You may recall previous discussions of source scarcity and sink scarcity. The gist there was that although source scarcity is more immediately visible, we may be in more trouble from sink scarcity. I’ve been thinking that this analysis fits several seemingly disparate current events: the financial meltdown, health care reform, and a swirling mass of pulverized plastic in the mid-Atlantic. They’re all about risk management. Maybe as is so often the case I’m just stretching a metaphor to paper over my ignorance, but let’s see if it holds up.

Source is the stuff you use, and its scarcities are directly managed by whatever the local mode of allocation is, e.g. reciprocity systems or markets. In markets when things we want to use get more scarce they get more expensive, modifying our behavior until demand syncs up with supply – you know the drill. Sink is the other end of the process – it’s where we dump the waste. Sinks are less thoroughly marketized than sources (hence they can be described as ‘underpriced’): we may nominally pay for sewage and garbage disposal, but usually just what it costs to profit from carting it away rather than the longer-term costs of its enduring existence; and as yet we don’t pay in any direct and behavior-modifying way for, e.g., the carbon that comes out of our or our cars’ tailpipes, although we’re dimly becoming aware that this blessed oblivion may be leading to the other kind.

In fact throughout a whole range of activities dear to us, without clear source-to-waste-to-sink throughput we’d end up in the shit – as anyone who’s had a backed-up toilet knows. For example, two small cities in New York generate 13.8 million gallons a day of “domestic sanitary sewage… as well as industrial wastewater from food manufacturers, leather tanning and finishing, metal finishing, textile and other major industries.” Follow the link for a virtual tour of the facility. After treatment, which mostly involves separating the solids and chlorinating the heck out of it all, the liquid goes in the creek and the “dewatered sludge” gets trucked to the dump. Some other places it gets sprayed on cornfields. At that point, if not earlier, we’d like it just to be gone; but no such luck. Landfills refuse to go away by becoming filled up and needing replacements, often in neighborhoods where the folks would rather not have one; by leaking nastiness into the local subsoil; and by exuding earth-warming methane and other stanky joy into the atmosphere. Sludged fields run off into creeks and rivers, joining the other effluent there to create fertilizer soups that bloom up algae and kill fish.

Still, the earth and the waters do take the bulk of the waste away with consequences that are tolerable in the short term. The secret is in expanding the sink, for example by getting the ocean involved. If you dump your crud in a pond in the backyard, your life is going to get nasty in a big hurry. But if the pond outflows to a stream, then a river and ultimately the sea, your crud can disappear without a trace for a very long while. So it is with all our wastes. Concentrating and rebreathing the contents of your own lungs or your car’s tailpipe is an efficient way to commit suicide, but if you can dump that junk into the global atmosphere it spreads so thin you don’t even notice it trying to kill you. There are some recycling processes at work (e.g. plants that enjoy CO2 and oceans that absorb it) further extending sink capacity. Once we tap into the big sinks, at any given moment and for a long time out of sight is legitimately out of mind.

Until, that is, algae dead zones and life-choking pulverized plastic masses the size of nations start to show up in the world’s oceans. If sink capacity and recycling extension are not infinite, eventually the density of crud must become such that its attempts to kill us once again become noticeable and then effective.
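The whole sink story compresses into a toy model, if you like. The numbers here are entirely hypothetical, just to show the shape of the thing: a sink that recycles a fixed fraction of its load each period settles at a level proportional to the dumping rate, so quadruple the dumping and the steady-state crud quadruples, sooner or later crossing whatever threshold counts as ‘noticeable’.

```python
def sink_load(inflow, recycle_rate, steps=500):
    """Track the waste load in a sink that absorbs (recycles) a fixed
    fraction of its contents each period. A toy model, not real data."""
    load = 0.0
    history = []
    for _ in range(steps):
        load += inflow                  # dump this period's waste
        load -= recycle_rate * load     # plants, oceans, etc. take a share
        history.append(load)
    return history

# Modest dumping: the load levels off at inflow * (1 - r) / r,
# and out of sight really is out of mind for a long time.
modest = sink_load(inflow=1.0, recycle_rate=0.5)
# Same sink, quadruple the dumping: it levels off four times higher.
heavy = sink_load(inflow=4.0, recycle_rate=0.5)
```

The point of the sketch is just that dilution never makes the crud go away; it only sets the equilibrium density, and cranking the inflow cranks the equilibrium right along with it.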

It seems to me to require only a very small metaphorical leap to see the current financial crisis in these terms. As I and perhaps that Brookings economist understand it, the essence of the trouble was a saturation and reflux of the sinks into which financial risk was being dumped. Bad bets like subprime mortgages got dumped into the global economy in the form of securitized debts, credit default swaps, collateralized debt obligations, and so on, like so much pulverized plastic or sludged poo – chopped up small enough, washed far enough away from their sources and diluted with enough clean commerce that for a long time they in effect disappeared without a trace. Just like the thin, chlorinated sewage solution most of us call drinking water. In the short term this expansion of sink capacity looks incredibly clever and works great to turn marginal resources into wealth. But their marginality makes their waste-load that much greater, and eventually the solution got saturated enough that the economy tipped over from being clean with some acceptable contaminants to being dirty. All the noses turned up at once, and down we went. At this point governments step in as the big sinks of last resort. The European central bank is currently trying to reclarify the Eurozone by buying up national securities toxified by their bailouts of banks toxified by bad bets on bad debts. There’s only so far you can go with this; it’s not clear how much farther.

So far so icky but debatable. Now, to get the metaphor to health care I have to do something really ugly, which is to describe human suffering in the same terms as poo, trash, toxic waste, or bad debt. But in terms of creating loads on sinks pretty much any liability, including illness, works the same way. So sure enough, spreading risk around is how all insurance works, including health insurance. Basically, the costs of sickness and injury are spread out and paid by the healthy (through private premiums or public taxation, as we’ve discussed). The mechanism of health insurance is just like bad debt being mixed into good debt and wastewater being mixed into the ocean. And in the same way, the success of the strategy depends on the capacity of the sink, or ‘pool’, to absorb costs without fatally toxifying. Sink/pool expansion is why the key to the current U.S. reform was pulling in millions of (mostly healthy) uninsured, which then enables toxically-expensive pre-existing conditions to be dumped in. Socialized medicine works the same way while adding the government’s bigger sink.
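The pool-expansion arithmetic is simple enough to sketch. The claim amounts below are invented for illustration, not actual actuarial data:

```python
def per_payer_cost(claims):
    """Average burden each member of an insurance pool carries:
    total claims spread evenly across everyone in the pool."""
    return sum(claims) / len(claims)

# A small pool: one toxically expensive condition dominates.
small_pool = [50000, 200, 300, 250, 400]

# Pull in ninety-five mostly-healthy uninsured: same expensive
# claims, diluted across a twenty-times-bigger sink.
big_pool = small_pool + [150] * 95
```

Same expensive condition in both pools, but in the big one it dissolves into the clean commerce of healthy premiums. That is the whole trick, and also why the sink has to keep growing.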

These dots first started connecting while I was listening to a panel discussion about autism, also on NPR. At one point one of the experts launched into a rant about how those jackals in the insurance industry were attempting to define autism as a learning disorder rather than a medical condition in order to skip out on the costs of lifelong care. And of course this is pretty shady, but why do it? The insurance companies are going to take their profits no matter what. If they have to pay for autism care they’ll just pass the cost along to the pool of healthy payers. What they’re actually doing is protecting the sink from having the toxicity of incredibly expensive long-term care for relatively few beneficiaries dumped into it. That they doubt the pool can absorb that cost sustainably should give us pause. A similar example showed up at Anodyne Lite’s place in relation to new treatments for Fragile X syndrome. One triumph of modern science is that these kids now survive childbirth and so do their mothers. In humane terms this is an unqualified good. In sink terms it’s another load of toxicity to find a way to dissipate.

Of course Malthus fretted about final limits to environmental carrying capacity over two centuries ago, and since then we’ve figured out how to kick the can down the road just fine. There are many ways to manage the source-waste-sink throughput, including sink expansion, recycling and other conversions of net liabilities into net assets. What does seem clear is that our existing sinks are filling up, and alternatives are not immediately available. However we ‘should’ react to all of this, we most likely can’t keep kicking the can down the road forever.

March 23, 2010

Quality enhancement

by Carl Dyke

Our accrediting agency requires a “quality enhancement plan” (QEP) because nothing is perfect and everything can be improved. We settled on a program to create a culture of reading. And no, you can’t take that for granted at American universities. Here’s a poster I posed for to promote the cause:

The book I’m holding upside-down is Postmodernism for Beginners. The t-shirt says “Don’t Wanna.” By the way, at first I thought the committee-produced slogan “get between the covers” was harmless enough in a nerdy teehee kind of way, but it turns out to be catastrophic in both directions at once: it’s nerdy enough to turn off the students; yet it offends the prudes.

February 24, 2010

Fight or flight

by Carl Dyke

At Easily Distracted Tim Burke is thinking about what happens when liberal academics and conservative evangelicals meet on the ground of public school curriculum design. On the one hand a scholarly ethic of “reasoned, fair-minded, methodologically transparent, standards-driven investigation,” what Weber called the ethic of science, seems to require understanding the Other in their own terms and rule out passionate side-taking. On the other hand we eggheads do stand for some stuff, starting with knowledge that’s been reliably generated out of reasonable, fair-minded, transparent and standards-driven practices. Shall we fight for these things, shifting to what Weber called the ethic of politics? Tim captures the dilemma with pith and vigor:

But I think there’s still a complicated perspectival choice between trying to study a group of people or an institution ethnographically and engaging them as fellow citizens with whom you intensely disagree. If I set out to understand a group in their own terms, to gain an emic understanding of their rhetoric and practices, if I see the world as they see it, I achieve insight at the potential cost of having a permanently asymmetrical, insulated relationship to that group and its goals. That is, unless they take a similar interest in understanding me and my world in a similarly curious, open-minded, investigatory fashion.

There are times where I think it’s more honest and in a roundabout way more respectful to just come out with your dukes up and straightforwardly fight against initiatives or ideas from socially or ideologically distant groups that threaten your own values, no matter how much their ideas are rooted in an authentic habitus of their own. There’s a kind of equality in that struggle, an acknowledgement that you’re engaged in a fight over institutions or policies with people who have an equal right as citizens to push their beliefs.

I very much like Tim’s suggestion that fighting issues out on the common ground of citizenship is a form of respect. It may well be that a better understanding of each other enables win/win solutions, compromises, or agreements to disagree. We may find grounds to move from the narrow us to the larger we. But when that doesn’t work, the larger we yet is the one in which we take our differences to the public forum and trust democracy to do its thing. Thoughts?