Philosophy’s Reason Problem

by Asher Kay

There was a decent column on NYT's The Stone blog today. Yeah, I was surprised too. Robert Burton, a neurologist, neuroscientist, and popular science author, was discussing the problem that Philosophy has with letting go of Reason:

Going forward, the greatest challenge for philosophy will be to remain relevant while conceding that, like the rest of the animal kingdom, we are decision-making organisms rather than rational agents, and that our most logical conclusions about moral and ethical values can’t be scientifically verified nor guaranteed to pass the test of time.

Burton makes some good points in the piece, but he doesn't offer a lot in the way of solutions. It might be fun to think about what a solution would look like.

Like so many of the misconceptions that Philosophy can't seem to get past, I'm inclined to lay this one at the feet of Immanuel Kant, who probably did as much to send Philosophy veering off in the wrong direction as Freud did for Psychology.

Without getting long-winded about it, Kant was keen to refute Hume's conclusion that inductive reasoning from experience was the only way we really acquired knowledge. To attack the problem, Kant separated judgements along two axes: the a priori/a posteriori axis and the analytic/synthetic axis. The distinction between a priori and a posteriori judgements is that the former take place "prior to experience" while the latter are grounded in experience. The difference between analytic and synthetic judgements is that the former work only with information already present in the propositions being reasoned about, and thus don't create "new" knowledge, while the latter bring new information in. Kant combined these axes to produce four kinds of judgements: a posteriori analytic, a posteriori synthetic, a priori analytic, and a priori synthetic. The kind of judgement which Kant felt would refute Hume, if it existed, was the a priori synthetic judgement. If we could reason, without reference to experience, in a way that produced new knowledge, Hume's depressing thesis would be wrong.

The fact that, of his four categories, one – a posteriori analytic judgements – couldn't exist, and another – a priori synthetic – didn't obviously exist, should have been a red flag that he was thinking about it wrong. But maybe we should cut him a break. My sense, after reading a lot of philosophy from this time period, is that the concept of "experience" was not particularly clear or well-formed. The ghost of Avicenna's Floating Man was still pretty regularly rattling windows and knocking things off Philosophy's bookshelves in Kant's time (hell, he's still kicking around today).

A modern, physicalist view would reject the a priori/a posteriori distinction altogether. Avicenna's Floating Man would have a brain with zero input, and we know that such a brain would probably not operate at all, let alone do any predicate logic. Thought itself – including reason – is driven by stimulus, and there is no real way to separate perception from reasoning, except with respect to a rough taxonomy of mental activity. Both reason and perception depend upon the patterned activity of a brain impinged upon over long periods of time by a consistent physical universe that enables it to transcode, record and model its consistencies.

When Burton talks about the "void" that would be left in Philosophy if reason were abandoned, I'm reminded of the fear of nihilism that lurks behind arguments for moral realism. If there is no real foundation for moral truths, it whispers, then the whole edifice falls and we are left with a morality in which "anything goes". It's the same fear that Camus walked right up to and flipped the bird at in his discussion of absurdity. And it was a bird that needed flipping.

It's not really Reason that's losing its footing in Philosophy. It's not just a perception problem. It's realism.

The fact is that the nihilism of "anything goes" is a false fear. Anything doesn't go. Anything hardly even gets started going. Our physical universe constrains us to a knife's edge of possibilities, and our biological structures constrain us even further. We can't escape the survival instinct or our range of tastes any more than we can escape gravity or the temperature range our bodies can withstand.

So the solution to Philosophy's problem is a weaker, less absolute concept of reason, and a weaker, less absolute realism that recognizes that what we wanted to be a foundation is really just a strong, consistent, persistent set of constraints.

30 Comments to “Philosophy’s Reason Problem”

  1. Asher, very nice. My first response is to ask if you are familiar with the work of Belgian philosopher of science Isabelle Stengers, whose Cosmopolitics I and II I am finding fascinating reading. My second, before I have to rush off to a choir practice, is to wonder when it was that Philosophy forgot the wisdom of the Sage,

    “Our discussion will be adequate if it has as much clearness as the subject-matter admits of, for precision is not to be sought for alike in all discussions, any more than in all the products of the crafts. Now fine and just actions, which political science investigates, admit of much variety and fluctuation of opinion, so that they may be thought to exist only by convention, and not by nature. And goods also give rise to a similar fluctuation because they bring harm to many people; for before now men have been undone by reason of their wealth, and others by reason of their courage. We must be content, then, in speaking of such subjects and with such premisses to indicate the truth roughly and in outline, and in speaking about things which are only for the most part true and with premisses of the same kind to reach conclusions that are no better. In the same spirit, therefore, should each type of statement be received; for it is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits; it is evidently equally foolish to accept probable reasoning from a mathematician and to demand from a rhetorician scientific proofs.”

    Aristotle, Nicomachean Ethics, Bk I.3

  2. Hey Asher, long time no see.

    From the Stone blog link: “We describe the decision to jam on the brakes at the sight of a child running into the road as being rational, even when we understand that it is reflexive. However, few of us would say that a self-driving car performing the same maneuver was acting rationally.”

    I would say it. Two potential advantages to the self-driving car: its reflexes would be quicker, without being compromised by distracted attention or ingested chemical substances; and its rational decision-making algorithms would not be compromised by irrational drives like road rage or self-preservation. I’d rather have a cadre of self-learning AIs running the US government than either of the mainstream candidates and their hosts of minions/handlers.

    “a weaker, less absolute concept of reason, and a weaker, less absolute realism that recognizes that what we wanted to be a foundation is really just a strong, consistent, persistent set of constraints.”

    So yes, weaker in the sense that the task is to arrive at the most reasonable response to the particular set of constraints presented by the environment and by the brains working on solving the problem. And since, as sapient beings, humans can become aware of their own limitations, it seems reasonable for them to hand off mission-critical tasks to artificial brains possessed of stronger powers of perceiving, reasoning, and acting than the humans themselves.

  3. Hi, Asher!

    “A great many people think they are thinking when they are merely rearranging their prejudices.” William James

    This is in one sense a point about programming, and how much thinking is algorithmic. When we say we want more reason, the easy romantic retort is that we want to become more like machines. As if we weren’t already.

    “But in order fully to understand the immediate submission that the state order elicits, it is necessary to break with the intellectualism of the neo-Kantian tradition to acknowledge that cognitive structures are not forms of consciousness but dispositions of the body. That the obedience we grant to the injunctions of the state cannot be understood either as mechanical submission to an external force or as conscious consent to an order (in the double sense of the term). The social world is riddled with calls to order that function as such only for those who are predisposed to heed them as they awaken deeply buried corporeal dispositions, outside the channels of consciousness and calculation. It is this doxic submission of the dominated to the structures of a social order of which their mental structures are the product that Marxism cannot understand insofar as it remains trapped in the intellectualist tradition of the philosophy of consciousness.” Pierre Bourdieu, Practical Reason

    Reason has never been able to carry the load that wishful thinking dumps on it. But it’s what we have, so like an old scarecrow it keeps getting dusted off, propped up, and stuffed with new straw by default.

    “The fact is that the nihilism of “anything goes” is a false fear. Anything doesn’t go. Anything hardly even gets started going. Our physical universe constrains us to a knife’s edge of possibilities, and our biological structures constrain us even further. We can’t escape the survival instinct or our range of tastes any more than we can escape gravity or the temperature range our bodies can withstand.” Asher Kay, above

    I keep talking as if I wish this fine thing you said was obviously true, so we could get the decks cleared of all that old primitive philosophical garbage and see what comes of putting what reason we have into learning through this insight instead. But so let’s say that this is actually true, and therefore James and Bourdieu are both also right. What would it take for it to become obvious?

  4. Both teachers and students should enroll in a few Complexity Explorer (https://www.complexityexplorer.org) courses. I am dead serious about this. It is one thing to read James or Bourdieu and say “Yes, yes” to what they say. It is a true learning experience to try to build a model and learn from direct experience the difference between models with predictable outcomes and those that produce chaotic results with only a small tweak of the initial conditions. Or, if that is too much, download NetLogo (https://ccl.northwestern.edu/netlogo/) and play with the models in the Models Library, for a similar experience without having to learn to code (though it’s easy to learn if someone is feeling ambitious).
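
    For a quick taste of that lesson before firing up NetLogo, here is a minimal Python sketch (my illustration, not a Complexity Explorer or NetLogo model). The logistic map is about the simplest dynamical model there is, yet in its chaotic regime two runs whose starting points differ by one part in a billion soon disagree completely:

    ```python
    # Logistic map x -> r * x * (1 - x); r = 4.0 puts it in the chaotic regime.
    r = 4.0
    x_a = 0.2          # run A
    x_b = 0.2 + 1e-9   # run B: a one-part-in-a-billion tweak of the initial condition

    for step in range(1, 51):
        x_a = r * x_a * (1 - x_a)
        x_b = r * x_b * (1 - x_b)
        if step % 10 == 0:
            print(f"step {step:2d}: A = {x_a:.6f}  B = {x_b:.6f}  gap = {abs(x_a - x_b):.6f}")

    # By step 30-40 the gap is as large as the values themselves: the tiny
    # tweak has swamped everything else about the model. Set r = 2.5 instead
    # and both runs settle onto the same fixed point, tweak or no tweak.
    ```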

  5. Hi, everyone! It’s great to Vole again.

    “But so let’s say that this is actually true, and therefore James and Bourdieu are both also right. What would it take for it to become obvious?”

    This is the crux.

    I was reading a piece about Tom Wolfe throwing a few poorly-aimed punches at Linguistics’ old football wound (the one Chomsky administered), and it struck me that none of the people rehashing it – even the people whose conclusions I agreed with – were talking in terms of basic neural processing. And that reminded me of Pinker’s criticism of connectionism (I can’t remember which book it was), which is memorable as one of the most frustratingly wrong things I’ve ever read. These are some really, really smart people. How could they be missing something that seems so elegantly to settle their decades-long argument? Seriously. Is a “deep structure” some kind of Platonic form or something? Or is it a messy, complex system, constrained in all the ways every complex system is constrained, by everything from how our lungs work to how sound works to how our brains work to how the whole physical world is patterned?

    If I think about it, there are only a handful of concepts that have truly transformed how I see the world. And at the very top of that short list is neural networks. Not only do neural networks make blindingly clear how a logical operation can be embodied in a skein of atomic connections where none of the pieces know anything about anything, but they also explain what *association actually is* — not as a high-minded philosophical “relation”, but as an actual *physical process*. And speaking of that, neural networks highlight something that our philosophers seem almost willfully to forget — that there’s no such thing as “material” or “states”. Our concepts are not objects — they are activities. And last but not least, neural networks get at the heart of pattern and constraint and transformation and *information* and emergence and system adaptation — concepts without which I would be hopelessly lost in trying to understand the world.

    Honestly. I know this sounds borderline nuts, but if I could wave a magic wand and make everyone truly grok something, it would be neural networks. I think a lot of people (people like Pinker) *think* they really understand them. But it can't be the case, because if they did, they wouldn't be saying 90% of the other shit they're saying.

    Maybe that’s the problem. I’ve always thought there should be a “Neural Networks for Philosophers” class, but maybe it’s hopeless because it’s too easy to understand how they work on a mechanical level without seeing the huge implications.

  6. ” I’d rather have a cadre of self-learning AIs running the US government than either of the mainstream candidates and their hosts of minions/handlers.”

    Amen to that. Isn’t it strange how people are tolerant of all kinds of error when the human mechanism is performing the task but think that a computational device should be perfect before being allowed to perform the same task? Maybe people are still stuck in the conception that the programs running inside cars have statements like “IF SEE CHILD, STEP ON BRAKE”, or that algorithms necessarily express the biases of the person who programmed them.

  7. “It is one thing to read James or Bourdieu and say “Yes, yes” to what they say. It is a true learning experience to try to build a model and learn from direct experience the difference between models with predictable outcomes and those that produce chaotic results with only a small tweak of the initial conditions.”

    Yeah, maybe that’s what’s missing from my idea of “Neural Networks for Philosophers”. It would need to be hands-on. If you train a two-layer network to perform a logical operation like XOR, you can’t help but see that nothing in the network knows anything about the overall operation — that there’s no “rule” in it at all. And you could see that you could train a totally different network to do the same operation in a totally different, equally incomprehensible way that gives the exact same results at the higher level. And you could see that you can re-train the same network to be a NAND network even though it has the exact same “deep structure”.
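
    Here's a minimal sketch of that exercise in plain numpy (my code, not from any of the materials discussed; the hidden-layer width, learning rate, and epoch count are illustrative choices, and I've used a few hidden units rather than the bare minimum of two so training converges reliably):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    XOR  = np.array([[0.], [1.], [1.], [0.]])
    NAND = np.array([[1.], [1.], [1.], [0.]])

    # One hidden layer; all the "knowledge" will live in these weights.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    def train(targets, epochs=20000, lr=1.0):
        global W1, b1, W2, b2
        for _ in range(epochs):
            h = sigmoid(X @ W1 + b1)   # hidden activations
            y = sigmoid(h @ W2 + b2)   # network output
            # Backpropagate squared error through both sigmoid layers.
            d_out = (y - targets) * y * (1 - y)
            d_hid = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)
        return y

    print("after XOR training: ", train(XOR).round(2).ravel())
    # Now nudge the very same weights until they implement NAND instead.
    print("after NAND retraining:", train(NAND).round(2).ravel())
    ```

    Inspect W1 and W2 before and after the retraining: no single weight or unit is "the rule". The behavior lives in the whole pattern, and the same structure carries one rule as easily as the other.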

  8. Asher, I, too, was enthralled by Chomsky’s deep structure, together with Claude Lévi-Strauss’ “Mendelevian Table of the Mind,” to which, in my mind, it bore a strong family resemblance. I was primed for both by taking a year of symbolic logic and an honors calculus course that began by deriving the natural numbers from the Peano Postulates. The notion that a system composed of a small, finite set of primitive elements could, by adding a few rules, generate a space of infinite possibilities was—and still is—mind-blowing to me. I agree with much of what you say here but also wonder about the apparent equation of models based on software automata to the properties of human wetware. The latter involves a lot of pattern recognition, interrupt loops, and fuzzy logic introduced by chemical processes that don’t fit neatly into the dream of a logically formalized system that captures all of what goes on in human brains, bodies, and interactions.

  9. Perhaps the simplest response to your conundrum is to have every philosophy class begin with statistician George Box’s famous remark, “All models are wrong. Some are useful.”

  10. “The latter involves a lot of pattern recognition, interrupt loops, and fuzzy logic introduced by chemical processes that don’t fit neatly into the dream of a logically formalized system that captures all of what goes on in human brains, bodies, and interactions”

    True. My take is that an artificial neural network’s architecture captures enough of the basic properties to allow the big realizations. Other kinds of cellular automata have their own ways of blowing one’s mind (especially in terms of predictability and sensitivity to initial conditions).

  11. “My take is that an artificial neural network’s architecture captures enough of the basic properties to allow the big realizations.”

    Asher, could you spell this out a bit more? Provide some examples of “basic properties” and “big realizations”? I have a few notions of how I might answer if the topic were network analysis or agent-based modeling; but my ignorance concerning neural networks is profound.

  12. I can give it a try.

    Some of the big stumbling blocks in Philosophy are rooted in conceptions about how we think. In pre-computational times we had weird Platonic and dualistic ideas. In post-computational times, we have weird ideas about logic and rules and so forth. These weird ideas play into our conceptions of realism and truth, subjectivity and objectivity, syntax and semantics, the possibility of foundational knowledge, etc. Really, if you look anyplace where the problems get super knotty in Philosophy, you’ll find a basic lack of clarity about these concepts (great popular examples of this would be Nagel’s bat essay, Searle’s discussion of the Chinese Room or Jackson’s thing about black-and-white Mary).

    So my assertion is that if a philosopher has a good grasp of how the brain processes information, a lot of problems become much more tractable. Physical monism, for example, starts to seem a lot more reasonable than dualism does. The idea of anything our minds do being “prior to experience” becomes nonsensical. Truth becomes statistical. And things like “moral realism” transform into things like “what will likely (or in rare cases, necessarily) be true based on the constraints of our structure as organisms, our brains’ structure, the universe’s structure, etc.” (where “structure” = “organized physical processes”). Understanding how our brains process information also makes sense of things like why language is so strange and exception-riddled while still having a seemingly coherent “deep” structure; why we are able to abstract concepts like numbers away from the things we count with them; what “consciousness” is; how a huge amount of what we think is unconscious; why metaphors play such a massive yet mostly unnoticed role in thinking.

    It’s true that you can realize a lot of these things without a deep understanding of neural networks. But if you didn’t understand neural networks well and I told you that consciousness is basically a sensory apparatus whose input is brain activity, would that really seem profound? Probably not.

    Okay, so that’s what I think some of the big realizations are. You pointed out that the kinds of artificial neural networks we study don’t really reflect the messiness of the actual physical brain. I agree with that. But my sense is that the core ideas of how information is processed by the brain are present in artificial neural networks, and that looking at artificial neural networks might make these core ideas easier to grasp than looking at actual physical brains (or models of physical brains).

    The core ideas are: 1) interconnected groups of neurons; 2) the excitation or inhibition of neurons by other neurons; 3) the way a network of neurons associates “sensory” inputs with “motor” outputs; 4) mechanisms by which networks “learn” associations; 5) the nature of how information is “stored” in a network. I’m probably missing a few.

    The beauty of artificial networks is that you can start with a simple, (literally) straightforward “feedforward” network, and that network will already be able to do things that will (or at least should) amaze you. Then, once you have a good understanding of that, you can add in things like different learning methods, recurrence of connections, inhibitory connections, etc.
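
    To make the first three of those core ideas concrete, here is a hand-wired toy in Python (my illustration; the weights are one workable choice among many, and real neurons are of course far messier). Three threshold "units" wired together compute XOR, with positive weights playing the excitatory role and negative weights the inhibitory one, and with no single unit computing anything like XOR on its own:

    ```python
    def unit(inputs, weights, threshold):
        """Fire (1) if the weighted sum of inputs clears the threshold."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

    def network(x1, x2):
        h1 = unit([x1, x2], [1, 1], 0.5)     # excitatory: fires if either input fires (an OR)
        h2 = unit([x1, x2], [-1, -1], -1.5)  # inhibitory: fires unless both inputs fire (a NAND)
        return unit([h1, h2], [1, 1], 1.5)   # fires only when both hidden units fire (an AND)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", network(a, b))  # prints the XOR truth table
    ```

    A learned network does the same kind of thing, except that nobody hand-picks the weights; training finds them, which is where idea 4 comes in.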

  13. Also – I didn’t realize how good the SEP article was on connectionism:

    http://plato.stanford.edu/entries/connectionism/

    Here is a taste:

    Over the centuries, philosophers have struggled to understand how our concepts are defined. It is now widely acknowledged that trying to characterize ordinary notions with necessary and sufficient conditions is doomed to failure. Exceptions to almost any proposed definition are always waiting in the wings. For example, one might propose that a tiger is a large black and orange feline. But then what about albino tigers? Philosophers and cognitive psychologists have argued that categories are delimited in more flexible ways, for example via a notion of family resemblance or similarity to a prototype. Connectionist models seem especially well suited to accommodating graded notions of category membership of this kind. Nets can learn to appreciate subtle statistical patterns that would be very hard to express as hard and fast rules. Connectionism promises to explain flexibility and insight found in human intelligence using methods that cannot be easily expressed in the form of exception free principles (Horgan and Tienson 1989, 1990), thus avoiding the brittleness that arises from standard forms of symbolic representation.

  14. Asher, thank you for putting all this together. Would it be too much, I wonder, to ask for one or two concrete examples? I ask, not because I disagree with anything you have said, but, on the contrary, because I am now even more curious about how neural networks work.

    When SEP writes, “Philosophers and cognitive psychologists have argued that categories are delimited in more flexible ways, for example via a notion of family resemblance or similarity to a prototype,” I instantly think of what George Lakoff wrote in Women, Fire and Dangerous Things and Philosophy in the Flesh. Serendipitously, I am also at the moment dipping into Douglas Hofstadter and Emmanuel Sander’s Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. I am, thus, fully on board with arguments against rigid definition as a criterion for serious thought. That is the context in which I am looking for an example or two to bring the generic argument about the value of neural networks to life.

  15. (Philosophers don’t want to know. They’re a lost cause. They’re what’s left behind after folks like you leave for disciplines in which learning happens. I was raised by a lapsed philosopher, so I come honestly by the view that philosophy is where worthwhile questions go to die. I have a friend who’s about to publish a(nother) book on Aristotle. As intellectual history this would be a fine thing, but she seems to think she is actually doing philosophy.

    That miasma of fetishized, irresponsible antiquarianism is still what’s taught in the schools. Do you start with connectionism, or with Socrates? How long does it take to get to connectionism if you start with Socrates? Long enough that working philosophers can still write books on Aristotle with a clear conscience and a straight face.

    You can’t clear those decks of that rotten old garbage. They’re made of it. You have to abandon that flying dutchman and let it and its zombie crew drift on in their seaweedy doldrums. Neural networks don’t answer a philosophical question, they – yet again – demonstrate the futility of that old philosophical project. I think one of the marginal conditions for the conceptual transformation we’re talking about is that we altogether stop treating philosophy as if it has anything serious or useful to offer other than a quaint hobby.

    I say this as a historian who has enormous admiration but no particular vocation or aptitude for the kinds of modeling you guys are talking about. Fortunately, the people I study didn’t either, and other than nonsense slogans about learning from the past there’s no professional requirement that I be addressing or solving any problems in the present.)

  16. Carl, anyone who can replace the starter and front shocks on a 1992 Chevy will have no trouble with NetLogo. Don’t underestimate yourself.

    “I think one of the marginal conditions for the conceptual transformation we’re talking about is that we altogether stop treating philosophy as if it has anything serious or useful to offer other than a quaint hobby”

    Sadly, this is probably right. Philosophy as a discipline has gotten itself stuck. My worry is that philosophy as a thing we do because we’re people with an itch to know how things work – though it can’t really ever disappear – can go through times when it’s hibernating, or sick, or out of fashion. If the discipline as it exists dies, it’s probably all the better, because the project of conceptual transformation will likely be picked up by scientists with meta-scientific proclivities. Without “discipline”, though, it’s also being picked up by non-scientists with pseudo-scientific proclivities.

    I’m not a historian, so maybe you can tell me: what kind of world are we heading into? Does the now almost commonplace reference to our society as “post-factual” signify anything? Does the fledgling concept of “cognitive biases”, now tottering into mainstream parlance, just amount to an irony because it gives us a perspective on what we’re doing that we don’t know how to make use of except to biased ends?

    Maybe I’m reading too much into the moment. Maybe speculation without discipline is part of some larger cycle that’s needed to restore the fecundity of the metaphorical soil in which our concepts thrive or wither. Maybe the concepts we need are just too incompatible with extant structures and can no longer be tacked on, and some sort of philosophical extinction event is needed to open up the right ecological niche. I’m not very good at seeing the larger historical currents, so I’ll be the first to admit that I have no idea.

    “Asher, thank you for putting all this together. Would it be too much, I wonder, to ask for one or two concrete examples?”

    There are some great examples in Rumelhart and McClelland’s “Parallel Distributed Processing”. It’s an old couple of books, but it’s got some lovely, simple examples, including the network that learns the past tenses of English verbs that Pinker hated so much.

    Simpler yet, though, are networks that learn how to be logic gates. Essentially, these networks learn a truth table (with two true/false inputs and one true/false output), and make it really easy to see that the rule is “distributed” throughout the network in a way that defies our common understanding of the word “rule” (for example, because the same neuron contributes to one thing being true and another thing being false without changing its value). Here’s a web site that lets you visualize simple logic gate networks. It’s not all that well written, but at the bottom there’s a small network that you can train to handle any 2×2 truth table (i.e., any logic gate). This shows how what we commonly consider to be a set of distinct logical operations can be generalized to a single statistical structure (and by philosophical extension, what “generalization” really means):

    http://www.mind.ilstu.edu/curriculum/artificial_neural_net/xor_problem_and_solution.php
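
    And to see the “single statistical structure” point end to end, here is a sketch (mine, reusing the same toy training scheme as the XOR/NAND example above; the hyperparameters and fixed seed are illustrative) that trains one fixed architecture on every one of the 16 possible two-input truth tables:

    ```python
    import itertools
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

    def learns(table, epochs=20000, lr=1.0, seed=1):
        """Train a fresh 2-4-1 sigmoid network on one four-row truth table."""
        rng = np.random.default_rng(seed)
        t = np.array(table, dtype=float).reshape(4, 1)
        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
        for _ in range(epochs):
            h = sigmoid(X @ W1 + b1)
            y = sigmoid(h @ W2 + b2)
            d_out = (y - t) * y * (1 - y)
            d_hid = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)
        return bool(np.all((y > 0.5).ravel() == (t > 0.5).ravel()))

    # AND, OR, XOR, NAND, and the other twelve gates are just different
    # training targets for one and the same structure. (If a table fails
    # on this seed, another seed or more epochs will usually get it.)
    for table in itertools.product([0, 1], repeat=4):
        print(table, "learned" if learns(table) else "not learned")
    ```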

  19. Asher, thanks for the pointers to the neural net examples. One day, just maybe, they may pop to the top of my reading stack. Here I would like to think a bit more about your statement that,

    “Maybe the concepts we need are just too incompatible with extant structures and can no longer be tacked on, and some sort of philosophical extinction event is needed to open up the right ecological niche.”

    I am reminded of how narrow my formal education in philosophy was. When I graduated from Michigan State in 1966, I had done a fair amount of logic and philosophy of science, a course on Aristotle and a course on the British Empiricists. An ethics course had been devoted primarily to George E. Moore. I had only a brief taste of philosophy not saturated with British analysis and/or logical positivism. That was a course on Philosophy and Literature where I briefly encountered Nietzsche and wrote a paper comparing Leibniz’s monads to St. Thomas’ description of angels. It wasn’t until 1995, while I was teaching an advertising seminar at Sophia University in Tokyo, that I was introduced to Pierre Bourdieu and a student pointed me to Terry Eagleton’s Ideology of the Aesthetic. Only then did I begin to discover a different current of philosophy, free myself from analytic prejudices, and begin to consider why intelligent people would take it seriously.

    Having in the meantime done a Ph.D. in anthropology, I found it congenial to approach philosophers ethnographically, suspending judgment while attempting to understand their assumptions and concerns. I began to see what others might see in thinkers like Deleuze and Guattari while retaining a non-judgmental but skeptical stance toward what they were saying. Now, as I have mentioned before, I am reading Isabelle Stengers, who is both very Continental in her thinking and also knowledgeable about the sciences whose practices she analyzes. Instead of the George E. Moore game in which we begin by demanding definitions and discussion collapses when no one will agree, I find myself attracted to Stengers’ “ecology of practices,” examining the contexts in which certain ideas flourish or fail to do so.

  20. Asher, I agree completely. I’m probably beating a dead horse. If philosophy has been hollowed out over the last few centuries by the departure of folks with an itch to know how things work rather than noodling around in speculative personal ontologies and nerdbucket dominance games, so much the worse for philosophy.

    I don’t think there’s anything post-factual about the world. I think it’s still pre-factual, but enough less so that it’s started to sting. The tricky part though is that there are far more people – most of them, in our part of the world – who are effectively insulated from the hand of consequence’s caress. So in that sense there are a lot more people who just plain have no relationship to fact at all, except insofar as it seamlessly makes their little lives possible.

    Found this just now, a sort of visual primer on neural net learning. It’s crude, but the army, the church, and the advertising industry all found out long ago that you need a simplifying picture language to get people oriented to ideas and practices they haven’t experienced directly.

    http://nautil.us/issue/40/learning/the-genius-of-learning-rp

  21. For a different angle on this discussion, perhaps more congenial to Carl the historian, see

    http://understandingsociety.blogspot.jp/2016/10/what-is-conceptual-history.html

    What say we here?

  22. Ha! Oh my gods! Make it stop! OK John, good one. You got me. The fetishizing antiquarians are not exclusive to philosophy, and History has plenty to answer for. Good lord, that is a lot of prim making of rules and defining of terms and splitting of hairs. Perfect example of rigid definition’s grim subversion of serious thought. So easy to see why Nietzsche suggested we get playful.

    So yeah, historians too can go around and around for centuries on this stuff, and then metahistoricize going around and around for centuries on this stuff, and eventually either just start over or see if they can pull off some metametahistory. And it’s almost always a class in grad school, part of the disciplining, where we learn that this is what counts as serious thought for our folk. Although around here usually the Skinner / Pocock version with conceptual history rather than Koselleck, who comes out of and responds to a German tradition that takes a lot of unpacking for Anglos (of the Germans we cherry pick Ranke because he sounds like he’s saying just the facts) and almost always gets them wondering semi-productively how the Continentals managed to get themselves so far up their own asses. And therefore in small doses, it can even be a useful corrective to the vulgar empiricism we often get from our students. All history is conceptual and all concepts have histories, check.

    Hoo boy. I’ve lived this for thirty-some years as if in a fever dream, and like a nasty addiction, relapse is never more than a loose moment away. I actually do not at all forget how mind-poppingly exciting it was in earlier times to be introduced to these ideas and invited to join in them. Sort of like being led into a blind alley and rolled by a really, really hot hooker who turns out to want both your wallet and your blood.

    So, what was that about neural networks?

  23. So yes, where this really gets to the question for me is how to teach the ‘historiography’ / practice of History seminar. Traditionally, you survey the great historians and historiographical movements, starting with Thucydides. If it’s week 8, it’s conceptual history. If you’re really sophisticated you turn the discussion toward what kinds of questions they were trying to answer, or what kinds of questions their approaches are good for answering. Bright students learn to spin out a literature review, but how to actually do history in any particular case is left a dangling and disconnected project.

    For years now I’ve been trying to turn the course toward actually doing history, which means dropping the tradition and getting down to cases. If a historiographical approach is valuable it ought to suggest itself to a bright student with an area of interest and some orienting questions. But, to be honest, that approach doesn’t work all that well either. They just need too much scaffolding to get past their own explanatory prejudices in any kind of productive way. So I mix in a lot of suggesting and prompting and workshopping, and here’s where I want to find a way to cut through some crap and get them oriented toward complexity and emergence, because to my mind the rest of it is a bunch of handwaving. But as much as possible I want the complexity orientation to be itself emergent, rather than something I jam at them didactically, and I also don’t want them getting caught up in modeling too quickly because there’s a quick road to social science truthifying there.

  24. Jesus Christ. I just wrote a comment, or maybe what would have been the preface to a comment, but somehow my click patterns eliminated what I’d written. It was affectively laden too, not an overly intellectual meditation on intellect or learning or whatever the fuck. It was some kind of lamentation about being an asshole who, as it happened, believed he had something valuable to offer to the world, or at least to some small segment of it. I remember my comment began: “What a fucking disappointment.” I’d have another drink but I’m not sure I’d enjoy it enough, or despise it enough.

  25. Okay, that does it: I’m giving up five o’clock, starting today.

  26. I must read this comment! Let the shadow of five o’clock be your muse!

  27. The comment is alluring because it is missing: shadowed, sublimated, apophatic, a homeopathic tincture of the five o’clock spirits, its purity of essence hermetically withdrawn into fetish, its historic trace into legend.

  28. One of my buttons got pushed and I haven’t been real helpful on this thread. I’m sorry for that.

    I notice us all being tuned in to the ways that very few basic elements can, through the magic of assembly and reassembly, configuration and reconfiguration, interaction and reinteraction, be developmentally transformed into more complex entities, structures, and operations.

    I think now it’s a commonplace across the practice-oriented human studies to think of reason / rationality as being emergent in this way. We even know better than to think anything important is lost by seeing it this way. But, in part because of a long history of jerks and eager beavers trying to jam science down our throats, we have nothing like the conceptual vocabulary it would take to ‘settle’ that question against the essentialists, and so we keep fighting the last centuries’ wars over and over again. I find that very frustrating.

  29. Carl, you will, I believe, find Jean G. Boulton, Peter M. Allen, and Cliff Bowman’s Embracing Complexity: Strategic Perspectives for an Age of Turbulence a more congenial text than the one you have been reading.
