Spitballing the abyss

by CarlD

Last year my colleagues Peter and Patrick and I took our university’s community oral history project to the two local rallies for Donald Trump. We talked with a number of the ralliers in what might be described as a naive, unstructured ethnographic style. Recordings led to transcripts (thanks, Patrick!), and then to a proposal to present our findings as this year’s faculty research lyceum (thanks again, Patrick!). We got the gig.

Each of us has his own take on what is, of course, not so much a ‘data set’ (let alone a ‘representative sample’) as a particular interactive assemblage, a massively contingent co-production. We conducted the interviews as interested parties and with leading ideas about what was happening; we interpret them now with those same ideas and all of the resources of partisanship, prejudice, bias, selective perception, agenda, etc. etc. at our disposal. We are not reliable narrators. But as historians we are used to speaking for the dead. And for the living we think talking with people, taking them seriously, and trying to understand them is better than any alternative we are aware of.

The other thing that’s been on my mind lately is my sabbatical project on the history, theory, and pedagogy of complex adaptive systems. So of course what I’m doing with these interviews is to mash them up with the complex systems stuff. The general question I’m asking of the data, then, is: ‘How do these folks (seem to) think things work?’

We’ve got about 8 minutes each. Here’s the rough draft I just put together for my partners and the commenters. I’ll be filling in citations and interview quotations next, and I can tweak the whole thing until the actual presentation later this month. So, comment is welcome:


I’m interested in what we think about how things work. When I’m not interviewing Trump ralliers, my research is on the history and theory of societies as complex adaptive systems. People have always noticed that social processes do not seem to correspond very well to simple cause-and-effect explanations, or to respond very well to simple cause-and-effect engineering. Social processes routinely go sideways and defy prediction and control, much like the weather. Back at the tail end of the Renaissance, Machiavelli warned the Prince about this ‘fortuna’, and some kind of ‘fortune’ or ‘luck’ explanation is one of the more common ways of accounting for the wonkiness of social processes.

We now know that with the weather, even short-term unpredictability arises because many systems are actually involved in the ‘weather system’: all of them are active and effective, but none of them is in control; they are all oscillating, linked, and dynamically interdependent; and there’s lots of feedback that can amplify very small causes into very large effects, or dampen very large causes into very small effects. This disparity between causes and effects is called ‘nonlinearity’. It is characteristic of complex systems, as are self-organization (there is no designing hand at work) and emergence (the whole is more and other than the sum of the parts).
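This amplification of small causes can be illustrated with a toy model (not a weather simulation, just the simplest nonlinear feedback rule there is): the logistic map, where a one-part-in-a-billion difference in starting conditions grows until the two runs have nothing to do with each other.

```python
# Toy illustration of nonlinearity, not a weather model: the logistic map.
# Two trajectories that start almost identically end up unrelated, so a
# very small cause produces a very large effect (sensitive dependence).

def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-9, 50)  # a one-part-in-a-billion nudge

print(abs(a[5] - b[5]))    # after 5 steps: still a tiny difference
print(abs(a[50] - b[50]))  # after 50 steps: the runs have decorrelated
```

The rule itself is trivially simple and fully deterministic; the unpredictability comes from the feedback, which is the point of the weather analogy.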

“Plans are worthless, but planning is everything,” Ike Eisenhower remarked. Despite Machiavelli’s early attunement to the issue, and the routine awareness by better leaders and strategists that you have to expect the unexpected, getting serious about grappling with societies as complex systems that work a lot like the weather has been slow going. For one thing, we have a species prejudice that our reasons and intentions are different and more effective kinds of causes than ocean currents and snow melt. And for another, our own evolutionary adaptation disposes us to act on simplifications rather than get lost in complexity. In most action windows there’s not much advantage in prediction or control to be gained by sorting through dozens, hundreds, or thousands of oscillating, interacting, feedbacking variables with massive uncertainty factors, so our default is to make a best guess and take a stab at it. Styles and strategies of guessing distribute across the population, and this diversity, like our distribution across the political spectrum, assures that for most processes and contingencies a bunch of them will be good enough. Sub-optimality is also characteristic of complex systems.

So I was not surprised to find that our interview partners had accounts of how things work that, shall we say, left some things out. At a first pass, they all confidently articulated a crudely simplistic, personalized story of current American politics. Crooked politicians messed things up; immigrants abused our kindness and stole our jobs. Trump will toss the bums out and fix everything. They were strongly focused on individual intention and agency, motivated by personal character, morals, and formal ideas, as their primary explanation for political processes and actions. Systems routinely appeared in their accounts as illegitimately powerful, anonymously personal (“they”), generally malevolent intentional corruptors of wholesome individual action.

Fascinating corollaries included Trump’s personal incorruptibility due to his already having plenty of money of his own, and unquestioning faith in their ability to peer deep into Trump’s soul and detect the authentic care and concern for America there. From a complex systems perspective, their anger at the “rigged system” and eagerness to find a powerful leader to overturn it come into sharp focus as perpetually frustrated and frustrating attempts to enforce legible, predictable linearity on irreducibly non-linear processes. They would have just as much luck understanding how politics work if they believed in witchcraft, fate, or a shadowy global cabal of all-powerful dentists.

I have already said, however, that hurling spitballs at the yawning abyss of complexity is pretty much standard operating procedure. It is hardly a unique failing of these folks, or even a failing at all. Good enough is good enough. And complexity can in fact be managed and engineered down to mere complication or even simple linearity in local settings through rigorous organization and massive effort. Our interview partners all had robust histories in these kinds of engineered systems, and the dispositions to match. They were military and ex-military, nurses, librarians, postal and factory workers. They were mostly religious. They were used to other people having more power than them and making things happen. They were steeped in the everyday strategies of complexity management by orderly hierarchy, leadership, function, and procedure.

But in the parts of the interviews where they were not explaining how they think things work but reflecting on what worried them, a powerful countertext emerged. They perceived only too well the unmanageable complexity of things. It frustrated and terrified them. It kept them up at night and troubled their waking. The uncanny complexity of the world was so far beyond their scope, so realistically out of their reach and uncaring of their wellbeing, so stubbornly resistant to every normal effort and procedure in their experience, so unfair and irrational and amoral, that they lived in anxiety and dread. Law, rules, discipline, hard work, the nation, the flag, kittens, puppies, authenticity, guns, and ammo: none of it holds up against the infinite confounds of complexity.

And then Trump said he could fix all that. They knew it was a gamble, and said so. But they were going to hurl him at the abyss and hope.


32 Responses to “Spitballing the abyss”

  1. I know, I’m sick of this too.

  2. Sick of it, and scratching our heads wondering what to do with it. If I were still teaching, I’d run it through THE FEDERALIST — and then wonder where the hell that goes.

  3. My stealth citations were to Beyond Good and Evil and The Plague, but it ends up in the same place. Of course.

    I kind of got stuck with this, for good enough reasons, but it’s very bad for my head. I don’t like these people at all; what they’re doing with their angst is repulsively irresponsible to me. Trying to sidestep that spitball judgment defaults me almost irresistibly into psychology on the one hand and engineering on the other, which is just more spitballing. In the bigger scheme I’m hoping it’s part of me horsing myself over to just putting my head down and telling the historical story about dead theorists I know how to tell. That continues to feel pretty useless and not at all worth the effort, but it also comes with an imaginary audience I can cope with, whereas with this one the dispersion from ‘of course’ to ‘what the fuck’ is absolutely defeating.

  4. shut up and bail

  5. I’m of two minds about this post, maybe more. I agree of course that simplistic explanations of complex phenomena aren’t going to get the job done. But what about complex explanations? You write: “Social processes routinely go sideways and defy prediction and control, much like the weather.” But the weather can be predicted with pretty good accuracy. Even chaotic systems like hurricanes can be predicted before they form, along with the trajectories they are most likely to trace. These forecasts aren’t 100% lock-sure, and the certainties degrade as the predictive time horizon extends into the future. But even long-term predictions can be made within empirically grounded confidence intervals; e.g., the likely range of global warming to expect over the next 50 years, plus or minus a degree or two. I.e., within the predictive parameters even the unpredictability can be estimated. These forecasting models aren’t simple and linear; they are complex, multivariate, iterative, stochastic, built up from vast data sets — complex models that attempt to simulate the complexity of the phenomena under study.

    “our own evolutionary adaptation disposes us to act on simplifications rather than get lost in complexity.” Agreed. But this: “In most action windows there’s not much advantage in prediction or control to be gained by sorting through dozens, hundreds, or thousands of oscillating, interacting, feedbacking variables with massive uncertainty factors, so our default is to make a best guess and take a stab at it.” Most of the time yes, but there are crucial action windows where systematic analysis is likely to offer significant improvements over gut feel. The recent election demonstrates among other things that the kinds of simplifications people make can then be analyzed, predicted, and controlled by those with access to more powerful, more complex models of human behavior. Trump’s marketing people used big data and complex algos to sort the deplorables into different baskets based on profiles abstracted from their social media click patterns, custom-crafting online ads that played into their heuristics and biases. This ad campaign wasn’t the only factor swaying the election, but I suspect that data would show that it did contribute.

    “Styles and strategies of guessing distribute across the population and this diversity, like our distribution across the political spectrum, assures that for most processes and contingencies, a bunch of them will be good enough.” This sounds like Hayek, who regarded aggregate emergence from individual heuristics and folk traditions as a better allocation of resources than systematic analysis and planning. Hayek’s view is understandable given his experience with the hierarchical bureaucratic Soviet system. And it’s certainly the case that even the most brilliant individual human mind is pretty darned limited. But now we’re talking about machine minds, their processors distributed across the globe, having access to vast datasets and the ability to manipulate huge numbers of variables with extremely complex mathematical and statistical tools. And these machine minds are already being used against us by Wall Street, Madison Avenue, and Capitol Hill. Tools of the master.

    “The uncanny complexity of the world was so far beyond their scope, so realistically out of their reach and uncaring of their wellbeing, so stubbornly resistant to every normal effort and procedure in their experience…” So yeah, that’s pretty much the human condition. Relying on simplistic narratives to deal with complexity seems like the wrong tactic, and I can see why you’d want your students to question this instinct. Letting oneself accede to complexity as a vast and tangled congeries of intrinsically unpredictable forces also seems ill-advised. So too does reliance on AIs and algos, treating probabilities as certainties, confusing simulations for the real thing. What does that leave?

  6. A couple of common puzzles:
    It’s important, for messy multi-dimensional systems, to be able to distinguish signal from noise. Then in many systems (biological ones, for example) we’re told that an important part of the dynamics is the amplification of noise. What’s going on?
    Playing to the crowd: from protest to performance. This came to the fore in the sixties and has settled in as a common occurrence. Who marches? Identify them. What’s the real point of little pink hats? Talk about your “confusing simulations with the real thing”.

  7. I’m not sure what role the amplification of noise plays in biological systems. Mostly what comes to mind is error detection: if amplifying noise helps the organism identify it as such, then the organism could more effectively filter out that noise in order to focus more attentively on signal. Another possibility is that signal is mistakenly categorized as noise; e.g., by the subsystems that attempt to consolidate the vast quantity of environmental input into functionally useful categories or factors. If the factoring is accurate then redundancies can be safely weeded out, but the inputs that don’t comfortably fit inside the factor structure are also going to be weeded out as irrelevant. But the factor-analytic engine might be mistaken, miscategorizing signal as noise. A noise amplifier could enable the organism to re-iterate its factor-analytic engines, refining the algorithms so that more of the input is classified as signal, as information. This sort of thing happens all the time in learning, I should think.

    Another reason to amplify noise would be for one signal to jam other systems’ signal detectors. So, e.g., system A could reclassify verified information as false news — i.e., reclassify signal as noise — while at the same time propagating its own noise and calling it signal. This would be a shrewd maneuver for predators trying to deceive prey, and vice versa.

  8. I always forget to override the default setting that identifies me as “ktismatics.” That second paragraph should say “…for one system to jam other systems…”

  9. Why isn’t thinking in terms of algorithms, optimization, and equilibrium a paradigm case of confusing simulations with the real thing?

  10. Probably. Does the Trump supporter confuse his idea of how things work with how things *really* work? Does the analyst confuse his model of how Trump supporters view the world with how they *really* view the world? Being self-reflexive and self-corrective about the differences between a simulation and what’s being simulated: these would seem to be important skills to cultivate.

    Then there are the cases where the simulation becomes the reality, or at least becomes fused with it. Index mutual funds model the movements of the market as a whole and are themselves traded on the exchanges. An architect’s drawing is transformed into a building: is the building a material simulation of the drawing that preceded it?

  11. As usual the commentary has surpassed the original post. Thanks. Processing.

  12. So, speaking of the tools of the master, a lot of people I know are really worried about the (Google, probably) click-tracking where stuff you look at on Amazon or whatever shows up as an ad on Facebook or wherever. I’m going to assume that this is at worst the second most powerful application of our current ability to gather, sort, and interpret signal across the range of digital inputs. And what I always say is that I’ll start to worry when the algorithm stops showing me the thing I just bought as if I might like to buy it now, and then keeps showing it to me for weeks after I’ve received it, taken it out of the box, and made it a regular part of my life. Hell, the bot might actually be useful if it was good enough to guess accurately at something I might like that I didn’t know about yet. So far, iteration and randomness seem to be the best it can do.

    So then, I’m not sure about this: “The recent election demonstrates among other things that the kinds of simplifications people make can then be analyzed, predicted, and controlled by those with access to more powerful, more complex models of human behavior. Trump’s marketing people used big data and complex algos to sort the deplorables into different baskets based on profiles abstracted from their social media click patterns, custom-crafting online ads that played into their heuristics and biases. This ad campaign wasn’t the only factor swaying the election, but I suspect that data would show that it did contribute.”

    Unless they were better than Google, what they did was just feed back more of exactly what their deplorables were giving them. Iteration and reiteration. People often agree with themselves, and away we go. Which is how I learned how to do sales and fundraising back in the analog good old days. It works, but not because you’ve figured out anything about complexity. You’re just amplifying the simplicity of the original signal, or in the case of voter suppression and fake news jamming it. Where it gets hard and complexity starts being involved is when you need the mark to do something other than what they were already disposed to do anyway.

  13. Don’t sell yourself short. Being able to discriminate signal from noise seems essential in performing the iterate-reiterate trick. What aspect of the mark maps onto the product? How can you amplify that signal in the mark and map it onto the corresponding signal in the product? I’m presuming that an effective salesman can sort the marks into different baskets, extracting characteristics of the mark from the sales call, then mapping those characteristics onto sales pitches that have worked in the past for this sort of customer. That’s what the Facebook consultants did for Trump, using Facebook clicks to sort people not just by demographics and likely party affiliation but also by personality profile. From this article:

    “Pretty much every message that Trump put out was data-driven,” Alexander Nix remembers. On the day of the third presidential debate between Trump and Clinton, Trump’s team tested 175,000 different ad variations for his arguments, in order to find the right versions above all via Facebook. The messages differed for the most part only in microscopic details, in order to target the recipients in the optimal psychological way: different headings, colors, captions, with a photo or video. This fine-tuning reaches all the way down to the smallest groups, Nix explained in an interview with us. “We can address villages or apartment blocks in a targeted way. Even individuals.”

    Code switching.
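    The variant-testing Nix describes can be sketched, at toy scale, as an explore/exploit loop. The click-through rates below are invented, there are 4 variants rather than 175,000, and the whole thing is a hypothetical stand-in for the campaign’s machinery, not a reconstruction of it.

```python
# Hypothetical sketch of testing many ad variants and shifting impressions
# toward whichever performs best, epsilon-greedy style. The click-through
# rates are invented and there are 4 variants instead of 175,000; this is
# a toy stand-in, not a reconstruction of the campaign's actual system.
import random

random.seed(42)
true_ctr = [0.010, 0.012, 0.011, 0.030]  # variant 3 is secretly the best

shows = [0] * len(true_ctr)
clicks = [0] * len(true_ctr)

def observed_rate(i):
    return clicks[i] / shows[i] if shows[i] else 0.0

for _ in range(20000):
    if random.random() < 0.1:  # explore: try a random variant 10% of the time
        i = random.randrange(len(true_ctr))
    else:                      # exploit: show the current best performer
        i = max(range(len(true_ctr)), key=observed_rate)
    shows[i] += 1
    clicks[i] += random.random() < true_ctr[i]

# Impressions should mostly have migrated to the strongest variant.
print(shows, [round(observed_rate(i), 3) for i in range(len(true_ctr))])
```

    The loop never needs to understand why a variant works; it just amplifies whatever the audience already responds to, which is the iterate-and-reiterate point above.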

  14. Thanks, John. This is helpful to me. So, collating now with DtE’s sphinxy comments upthread, it seems like what this does is use a kind of radar ping to find a signal that’s already coded as feedback, then follow up with a canned but experimentally evolving reiteration. So the interaction is cooked for a narrow range and, like any algorithm, works not by engaging with complexity but by engineering it out?

  15. Most complex simulations engineer out certain aspects of complexity; e.g., by dismissing it as noise, by ignoring signal redundancy, by averaging across variations. Otherwise the map would be the same as the territory. Human visual perception works this way: as I recall, less than 10% of the information picked up by the retinal cells is passed through the optic nerve to the visual cortex.

  16. Whenever a student would say “But that’s a social construct,” I’d always say “So’s your grandmother.”
    If you tell me that I’m an evolutionary artifact, I agree.
    When Smolin and Rovelli tell me that electrons are the resonant interaction of waves, I’m encouraged, for then space and time as we see, feel, and conceptualize them are relational products of interactions, not a ready-made box in which interactions take place. [the subfield here is loop quantum gravity. Both guys have excellent readable books. I have to hope they’re right.]
    I persist, because I’m beginning to be able to tell reasonable stories of relational existence now from the very bottom to the current top — human interaction as the constructor of social space. I can fill in lots of biology in between, with the help of Deacon, West-Eberhard, Bourdieu and many other folks.
    A big part of the stories, at all levels, is entrainment: the production of stable resonant structure.
    At every one of the levels, when stable entrainment takes place “things” emerge. Secondary ontologies are created out of these things, some of which are chosen as fundamental. We relate those things with one another: we classify them, we find statistical regularities that we regard as fundamental, we enunciate laws of their behavior, and come to think of their existence as a consequence of these laws. Eventually we work hard to create and preserve the conditions for the entrainments we want to see, and, if we succeed, we interrogate the world to discover the desired regularities of behavior, glowing when we find them. Marketing success seems to license market theory.
    Entrainment always radically truncates the range of future possibilities. We’re what we are as much because of what we can’t do as because of what we can do. That’s true all the way down to electrons and such.
    The dominant explanatory pattern changes, if you think this way. Instead of the positivist focus on making things happen by knowing how they happen, the emphasis is on what’s allowed to happen, and what’s prevented.
    The world freezes out of random flux, say the LQGers. Biology freezes out of thermodynamic flux, says Deacon. The market freezes out of “animal spirits”, Keynes should have said. But in all the cases we’re on the ragged edge. The incoherent fluctuations (noise) never stop. We can do normal (linear and law-like) science for what’s really solidly frozen; but only if we can damp out the noise locally and for a reasonable amount of time. That turns out to be energetically hideously expensive — again, at all levels. The nature of noise, that is, the nature of relevant fluctuations, varies from level to level — one of the main reasons reductionism is impossible. But the “problem” of the management of fluctuations is more than analogical. [Taking the quotation marks off of “problem” would require knowing what the whole schmear was about.] How near the ragged edge are we? Well, if one of the fundamental constants, c or h and so on, were to differ out at a short decimal place, there wouldn’t be a cosmos at all — no entrainments would freeze in. As for human society, nobody has asked that question directly, but someone clever could make a reasonable estimate. We may be testing it at the moment. Of course the doomsday people always think they know how close we are. It’s cold comfort to know that so far they’ve been wrong.

  17. I don’t understand, DtE, so I can’t interact very effectively. I have read some Deacon though, thanks to our old pal Asher Kay. “Biology freezes out of thermodynamic flux, says Deacon.” Well yes, life forms constitute an encapsulated zone in which entropic forces are held at bay. But there’s no magic there, no override of the second law: energy is expended in order to sustain the life form. Deacon proposes that, at least for symbol-using life forms like humans, constraint can function as a kind of perpetual motion machine, partitioning noisy spaces into negentropic sectors without expending energy. And that’s explicitly what’s happening when, e.g., the big basket of Trumpist deplorables is partitioned into smaller and smaller baskets by inferential statistical techniques deployed on the data space. The big complex space of potential voters is negentropically partitioned into information-rich regions by means of algorithmic constraints that don’t require physical expenditures of energy to establish and sustain.

  18. “it seems like what this does is it uses a kind of radar ping to find a signal that’s already coded as feedback”

    It’s not a one-to-one match between stimulus and response, between Facebook click and political message element. It would be patterns of clicks, maybe hundreds of them, combined into a smaller set of factors — 5 factors in the case of the OCEAN personality model used in the Trump campaign. Each click isn’t assigned to a particular factor; e.g., this click codes for introversion, that click for openness to experience. Rather, each click gets a weight for all 5 factors. So: multiple input variables, each coded for multiple output factors. The allocation of factor weights isn’t predetermined; it is calculated on the fly, iterating and reiterating in order to arrive at the best fit. And the fit isn’t deterministic; it’s probabilistic. The more data, the better the fit — the more reliable the probability estimates. The factors likewise are statistical patterns rather than a priori categories. Eventually a 6-factor model might prove a better fit with the data than the 5-factor OCEAN model. But there are still going to be unexplained sources of variability, and the individual predictions aren’t always going to be accurate. The goal of the reiterative modeling isn’t certainty; it’s getting incrementally better than random chance.
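    The many-clicks-to-few-factors reduction can be sketched in a few lines using a plain singular value decomposition on random data — a bare-bones stand-in for the campaign’s proprietary modeling, with every number here made up.

```python
# Illustrative sketch only: reducing many click variables to a few
# factors, in the spirit of the 5-factor (OCEAN) profiling described
# above. The data is random; nothing here is the campaign's actual model.
import numpy as np

rng = np.random.default_rng(0)
# 1000 people x 200 binary click variables (clicked / didn't click)
clicks = rng.integers(0, 2, size=(1000, 200)).astype(float)

# Center the data, then extract 5 factors via SVD
# (the bare-bones core of principal-component-style factoring).
centered = clicks - clicks.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

loadings = Vt[:5].T           # each click variable gets a weight on all 5 factors
scores = centered @ loadings  # each person gets a 5-number profile

print(loadings.shape)  # (200, 5): multiple inputs, each coded for every factor
print(scores.shape)    # (1000, 5): one compact profile per person
```

    Note how each input variable carries a weight on every factor at once, and how the weights fall out of the data rather than being assigned in advance — which is the structure described above, minus the scale and the iteration.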

  19. John, I apologize for that cryptic overview, but I thought it ought to show up in this thread once as a sort of goofy benchmark. Precious little of it, as it’s worked out, is inconsistent with what you’ve been saying. The important thing for me is that the picture it presents has arisen more or less independently in many first order research trajectories. I think that you underestimate the degree to which it invites a re-interpretation and assessment of limits on the standard techniques that — as Carl says — reiterate with decimal gusto those very limitations.
    The Keynes reference wasn’t just cute.

  20. I certainly underestimated the simple-minded bureaucratic rigidity of Medicare, which I just spent the morning trying to get worked out but without success. This is something I’ve encountered before, in individuals as well as in institutions: I assume there’s some complexity working behind the scenes that’s opaque to me but that, if I’m clever enough, I can figure out. Nope. Turns out the surface level is all there is, but the surface is an impenetrable barricade.

  21. For me it’s the income tax, which terrorizes me every year, Turbo Tax or no.

  22. Right? Rachel just did a Farm Extension seminar on how to do farm taxes. If you want to make her run screaming for the next little while, just say “Schedule F.”

    I also think I figured out why I’m a little askew of this discussion. Both of you, and most well-adjusted rational people, are talking about complexity in relation to stuff getting done. I’m not. I’m talking about it in relation to stuff not getting done. Specifically, I think there’s a kind of engineering orientation to the world that causes a lot of mischief and ought to be ruled out, or at least drastically moderated, by complexity. This is why I focus, probably disproportionately, on the nonlinearity dimension. It’s also why as often as not I’m not really talking about complexity at all, just complication. If we could get to complexity we might get somewhere.

    I come at this honestly, if twitchily, via a foundational orientation to Marxism and the revolutionary experiments. Long ago I figured out they were just gambling, mostly with other people’s money, and at enormous cost. Once around it’s tragically impressive. The Louis Bonapartes of the left now are only a little bit charming when they’re nowhere near power. Everyone else is some kind of pragmatic incrementalist, which is its own kind of trap.

    When Trump and the Trumptones sing the song about how government has failed to produce legibly linear outcomes I think of course it has, and when they chirp let’s blow it all up and see what happens I think about how poorly they seem to understand both their own situation in the world and the possibility fan that issues from such behavior. Like trying to rock your wheels out of a rut next to a cliff by flooring it while rapidly slamming it into forward and reverse. Not that there’s anything great about the rut.

    What I take from John is that despite a genuinely impressive propaganda and electoral apparatus, they managed to produce an outcome that was in no numerical way distinguishable from an ordinary cycle election after eight years of the other party holding the top chair. They succeeded insofar as they were in the flow. I’m also much better attuned to how what I was hearing at these rallies was a narrowly constructed discourse that sounds like political abracadabra because it is.

  23. Well I thought I was harmonizing with your tunes about weather and Trumpists rather than cranking up my mischievous engineering agenda. Go sidewise, my friend; rule out my commentary or drastically moderate it as you see fit. “They succeeded insofar as they were in the flow”? Dude, that’s heavy.

  24. No, sorry John, I meant that as appreciation but I put it badly. The engineering critique was not aimed at you. You’ve totally convinced me those guys were loaded for bear and did what they set out to do. That’s now apparent to me in the interviews, so I’m at a different level of understanding the dynamics here because of this conversation. Harmony achieved!

    But isn’t it interesting that in terms of turnout and outcome this was pretty much a standard cycle election? All that work, and what they got was an outcome you could have predicted on a napkin two years ago if you ignored the polls and the celebrity soap opera.

  25. At 8pm on election night the betting odds were 5 to 1 against Trump, so you could have made a lot of money if you’d gone with your napkin prediction. I’ve not seen empirical evidence about the impact of various political tactics — ads, canvassing, driving people to the polls, etc. — on voting behavior in the swing states. Candidates do spend a lot of dough on such interventions, and I have seen data showing that candidates’ success in winning elections is strongly and linearly correlated with the amount of money they raise and spend during the campaign. Clinton/Trump was an exception, since Clinton raised far more money and still lost. One possible explanation is that Trump used his limited finances more shrewdly and effectively; e.g., via the Facebook campaign. Anyhow, that campaign was a pretty clever use of data and statistics; no doubt there will be more of it in light of the Trump win.

    Causing predictable results by means of specific interventions is the essence of the experimental method, which I suppose supports the knowledge-power conflation. Anyhow, I too like to watch. Besides, my attempts to intervene pragmatically in the world nearly always come up empty. Those are two big reasons why I quit trying to fix healthcare and took up writing fictions.

  26. What role does intent play in complex systems? The Trump voters chose their candidate, but their choices were also caused: by demographics, impulses, historical circumstances. Some of the factors causing the voters to make their choice were intentionally manipulated by others. But those intentional manipulations were themselves also caused, no? The engineering approach presumes the efficacy of cause-effect, but it also presumes that the engineer can somehow transcend the mechanism. But engineering does seem to work. Eliminating the efficacy of intent in complex systems seems based on philosophical commitment more than on empirical evidence. But no doubt there is a benefit to simplifying the development of new complexity models by (intentionally) setting intentionality aside, at least for the time being.

  27. I wish I had made that bet!

    This part of the conversation is reminding me of a previous one we had, cross-blog, about teaching and learning. You did a fair amount of research that found no particular correlation between the teaching input and the learning output. Basically, you argued, the critters are going to spool out their learning potential one way or another (the INUS conditions we’re gesturing at here), and ascribing that to the teaching intervention is a kind of corporate wishful thinking. All those places where we think we had an impact are just places where the process came to its own head. I had no good basis to argue otherwise. Spend some time together and see how it goes, was the takeaway.

    Of course the teaching is part of the process, and intentionally so. And people do from time to time seem to be learning the particular things they’re being taught, along with all the other things they’re learning. We know that the lesson is easily lost, and that the effective lesson has to be robustly reinforced across a range of situations and needs. The teacher’s intent and power are insufficient and unnecessary parts of a complex adaptive system that has to arrange many currents into flow to work out as, say, learning the calculus. We also know that the latent function of education in reproducing the class system is troubled not at all by the stories we tell about education as a playing field leveler. So in short, we spend upwards of twenty years of every young life and billions upon billions of dollars on an education system that may not be doing much more than would happen some other way if we stopped. As engineered systems go, schools are unbelievably inefficient. Like health care.

    I’m also reminded of my friend Lou, who works for Anheuser Busch, talking about how the malt beverage market has for many years been maxed out, with enormous creativity and resources devoted to swinging tiny fractions of market share. He said Budweiser has no illusions that their advertisements are selling anyone Budweiser. They’re just reminding the Bud drinkers what team they’re on and representing against the rival teams. All of this is why Marx said human history hasn’t started yet. We don’t even need to set intentionality aside to simplify the models. A good complexity model can treat intentions like all of the other dynamics without missing a beat. I’d like to report this was Alicia Juarrero’s point in Dynamics in Action: Intentional Behavior as a Complex System (which, if you’ll recall, Terry Deacon then allegedly “plagiarized” in Incomplete Nature), but I’m afraid I couldn’t make it through enough of her noodling around in the history of philosophy to say for sure.

    On Clinton’s loss despite higher funding, the analysis I’ve seen is that Trump was a master at ginning up free publicity; or rather, that’s just who he is and who he is was right in the flow. He didn’t need to spend much on media because the media kept treating everything he did and said as news. Of course he was also tapping into the alt new media, which are essentially free. All that Twittering!

  28. It’s a hard hypothesis to falsify. Central heating, air travel, internet, the cure for polio — all of it was inevitable, with the individuals and groups who kept moving the ball incrementally ahead being themselves moved inexorably toward their discoveries and inventions.

    My recollection of the teaching discussion was that the empirical case for distinguishing good from bad teachers based on the performance of their students was extremely weak, but that the union-busting state governments were gesturing toward these weak studies to justify reducing teaching staffs in districts that were underperforming for demographic reasons. And you’re right: one could conclude that, since at present it’s impossible reliably to distinguish between good and bad teachers, teaching itself is useless. I have a feeling that a controlled test of this hypothesis would come out in favor of teaching as essential to a sociocultural milieu that enhances learning. Humans are demonstrably far better than any other kind of animal at learning from others, and adults do tend to have a wider array of knowledge and skills at their disposal than do kids, so kids would find it adaptive to pay attention. But the adults who know things might be pretty interchangeable as teachers. BTW, similar findings occur when looking at parenting. Based on studies of twins adopted by different sets of parents, it turns out that genes and culture together account for nearly all of the variance in kids’ outcomes, with individual parents’ contributions being very small. Adults occupying the same sociocultural stratum as one another are pretty much interchangeable as parents. The exception: abusive and neglectful parents have demonstrably bad impacts on kids. The same is no doubt true of abusive and neglectful teachers. Random high-SES parents have kids with better outcomes than random low-SES parents; I don’t doubt that this is true also of random teachers in high- versus low-SES schools.

    Intentionality seems part of the human apparatus in learning, discovering, inventing. Even if it’s inevitable that I’m going to cook a pork roast Wednesday night, I still have to remember to take the meat out of the freezer the night before, etc. etc. Even if I can explain the causes of intent, the intent persists as an emergent result of those causes. Like the migrating flock of cedar waxwings that shat holly berries all over our cars last week: just because the causes of flight and migration can be understood, birds still actually do fly, still do migrate.

  29. “nearly all of the variance” gives me the chance to spell out a point I’ve made cryptically (me? cryptic?). The phrase is embedded in a technique that’s, in turn, embedded in a long orthodoxy-defining tradition. We might as well start with Adam — it’s traditional. In TWON we find the famous passage where we learn that it’s not from the kindness etc. of the butcher and the baker that we owe our meat and bread, but from their desire to profit from trade. This is, or is near, the beginnings of defining an economic system as a singularized object of study. In particular, it defines relevant motivation — another way of putting “intentions”. Smith considers himself to be isolating a separable sub-part of the moral universe as a whole. There ALL motivations are relevant. Soon, with Bentham, Mill the elder, etc. the attempt is made to characterize that moral universe in terms of utility, something that’s subject to comparison and quantitative measure. But the marginalists discover that comparison and measure are harder than was thought. In fact, they decide that interpersonal comparisons are paradoxically useless, intensity of desire is hopelessly vague, and so on. They end up plumping for behaviorism: the theory of revealed preference. Market behavior will be the sole criterion of economic motivation, and the sole source of quantitative measure. The literature of welfare economics is full of the debate about these criteria, but they become, de facto, a dominant orthodoxy. In effect, as in behavioral psychology, “intentions” are streng verboten as explanatory variables.
    Keynes honors that prohibition by answering the question of intentions with “animal spirits” — one of the greatest orthodoxy-preserving sandbags ever thrown on a dyke.
    The next step is Milton Friedman’s also-famous paean to strict positivism. It merely echoes the tradition, but does so by claiming that the only truth to be sought from a system of independent and dependent variables is (in effect) what emerges from the regressions, and allows prediction. Beyond that, questions of truth and falsity are nonsensical. In more modern terms, that means that the topology of economic space must be assumed to be flat: any variable must be separable and additive: the axiom of positivist puritanism.
    Multi-variate analysis and its cousins try to hold to the axiom, but that’s harder to do than might have been thought. In fact, as critics often point out, it can do so only by shoving potential dynamical variables and interactions into a bag of residuals, so, from a detached point of view, judgments of “nearly all the variance” come with an asterisk.
    The confrontation with non-linearity has been understandably difficult for the orthodox. However, there are some really good attempts within mainstream economics. I consider two such attempts in my unpublished work: in one case I looked at Arrow, Dasgupta, and Maeler’s “Evaluating Projects and Assessing Sustainable Development in Imperfect Economies” (in Dasgupta and Maeler, eds., 2004) in the context of thinking about non-convex production functions, and in the other I worked with Kevin Hoover’s Causality in Macroeconomics. In both cases the debates they intervene in aren’t isolated, so there’s a hefty literature.
    Notice that in the course of the tradition, the “economic intentions”, isolated and identified as a part of the moral universe by Smith, have expanded to encompass the whole moral universe — revealed preference does that, and, obviously, so does Friedman’s positivism. So in fact, it may well be that Milady owes her meat to the fact that the butcher has a large Jones for her, and tosses in an extra chop. One hopes that he revealed only his preference.

  30. 1. Getting ready to go to the gym, daughter Kenzie gave us her view as to why academe is better than employment. On the job, one is expected to perform tasks at one’s pay grade, being given a bump only after having demonstrated the ability to perform incrementally next-level tasks. I.e., linearity. In contrast, rewards in academe aren’t nearly so tangible, so if one is so inclined one can take larger risks; e.g., writing an essay about what 400 years of scholarship on Paradise Lost has gotten wrong. Even better, some professors actually reward the student who takes a big risk even if it doesn’t succeed. I.e., non-linearity. I think this insight about academe has to do in large part with the kind of school one attends. Kenzie went to Grinnell, where most of the professors evidently reward innovative, rigorous thinking rather than conformity to generally accepted standards.

    2. Kenzie has a half-time job at Duke, where she is working on comparing econometric models of rationally optimal decision-making in game theory with empirical evidence about the way people actually play these games in a laboratory setting. Preliminary findings, as well as results from other similar studies, support the idea that players aren’t optimizers, or even satisficers; that the large jones and other noneconomic factors also play a significant role in motivations and decisions.

    3. Explained variance fits the regression line; unexplained variance does not — it is by definition not linear. Once the awesomeness of the linearity has begun to wane, attention inevitably focuses on the scatter, the outliers, the nonlinear individuals. Management wants to squeeze down the variation, to get everyone toeing the line. Science wants to figure out what’s going on with the outliers, what accounts for the deviation, how the line can be altered or delinearized to accommodate the scatter. Maybe art wants to exacerbate the deviation, push things farther off the line.

    4. In an earlier thread I lamented the impact of evidence-based best practices on physicians’ practices. The best practices are linear, typically in a Bayesian branching way, squeezing out the variation of routine activity. I expected that, freed from the routine linearity, doctors would take more risks, figuring out what was going on with outlier patients who didn’t seem to respond to the linear protocols. Instead the financial rewards of the job seemed to override the intellectual rewards of exploration and innovation.

    5. Tiny children will help adults without being prompted, seemingly satisfied with the intrinsic value of the act. If, however, the child is rewarded, he stops spontaneously being helpful, waiting for the reward before taking action.

    6. A chimp will work for slices of cucumber. If, however, the chimp at the next workstation, performing the same task, gets rewarded with grapes — a tastier treat than cucumber — the first chimp rebels. The next time he does the task and gets another piece of cucumber, he hurls it back at the boss/experimenter and refuses to do the task anymore.
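    One standard way modelers fold findings like 2, 5, and 6 back into the formalism is inequity-aversion utility of the Fehr-Schmidt sort: keep the optimizing machinery, but let relative payoffs enter the motivation. A minimal sketch of the ultimatum game under that model (the parameter values and the 10-unit pot are illustrative assumptions, not estimates):

```python
# Fehr-Schmidt inequity-aversion utility: material payoff, minus a penalty for
# being behind (weight alpha) and a smaller one for being ahead (weight beta).
def fs_utility(own: float, other: float, alpha: float = 1.5, beta: float = 0.25) -> float:
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

def responder_accepts(offer: float, pot: float = 10.0) -> bool:
    """Ultimatum game: take the offered split, or reject and both sides get zero."""
    return fs_utility(offer, pot - offer) >= fs_utility(0.0, 0.0)

# A purely self-interested responder accepts any positive offer;
# an inequity-averse one turns down lowball splits.
low = responder_accepts(2)    # False: a 2-vs-8 split stings more than walking away
even = responder_accepts(5)   # True: an even split is happily taken
```

    With alpha high enough, the lab subjects’ “irrational” rejections and the cucumber-hurling chimp both fall out as maximizing behavior — which of course just relocates, rather than answers, the question of what the intentions are.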

