Late Night Thoughts

by Jacob Lee

Social systems are dynamic, internally heterogeneous, and loosely coupled. Some may object to my use of the term ‘system’ and certainly the word has a lot of baggage. By calling something a system, I am merely drawing attention to the fact that it admits descriptions in terms of parts, their properties, and the relationships between these. As a statement about something this really adds very little, since all but possibly the simplest things can be described in such terms; however, we must call ‘it’ something, and by calling ‘it’ a ‘system’ I invite an analysis in terms of its constituent parts. The last sentence is slightly misleading, however, because it suggests both that we can presuppose the thing (Brian Cantwell Smith’s Criterion of Ultimate Concreteness) and, probably more importantly, that any such thing is decomposable into parts, properties, and relations in only one right way. This is not the case at all: all but possibly the simplest things admit many analyses in which different parts, relations, and properties are distinguished, at different levels of granularity, precision, and accuracy. This needn’t be taken as an assertion of metaphysics so much as an assertion of pragmatism.

Obviously what sorts of descriptions are best is relative to purpose, context, and circumstance (if those indeed are three distinct things), but that only means that some difference between those descriptions must be able to account for their different utilities. There are a variety of information theoretic approaches that can be applied to such a problem. We can look at the complexity of the description itself, using something like algorithmic information theory; we can try to measure the amount of uncertainty a description reduces using Shannon’s mathematical communication theory, or we can try to look at the various approaches to measuring semantic information content that have been introduced into the philosophy literature. In a very general sense however, a good decomposition is one which is coherent and approaches some optimum ratio between the amount of information that can be extracted and the cost to do so.
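One crude way to make the information-versus-cost trade-off concrete is Shannon entropy. The sketch below is purely illustrative: the observation sequences and the coarse-graining are invented for the example, and entropy is only one of the measures just mentioned.

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy (in bits) of the empirical distribution of a sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A fine-grained description of some observations...
fine = ["a1", "a2", "b1", "b2", "a1", "b2", "a2", "b1"]
# ...and a coarser description that lumps a1/a2 into a, b1/b2 into b.
coarse = [s[0] for s in fine]

print(entropy(fine))    # 2.0 bits per observation
print(entropy(coarse))  # 1.0 bit per observation
```

Coarsening can only reduce (or preserve) the entropy: the coarse description is cheaper to record and transmit, but some distinctions are irrecoverably lost. Which trade is best depends, as said, on purpose and context.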

So, social phenomena admit many possible decompositions; some may be better than others for some purposes and in some contexts; but here I want to ask: given the current state of social science, and our increasingly dire need for sound policy advice, what sorts of descriptions are we in want of? To put this in a slightly different way, what sorts of descriptions are needed to improve our explanations of social phenomena, not only to advance our theoretical understanding, but also to advance our practical ability to provide valuable policy advice? That’s a big question (of course), and I don’t think it has one answer (wink), so I want to specifically focus on the sorts of descriptions that, to put it crudely, would enable us to do good macro.

Let me stop myself here and express a view I adopted fairly early in my intellectual life: it’s long past time we stopped trying to explain macro-level social phenomena by projecting individual psychological or behavioral traits onto society: society is not the individual writ large. Nor is society something as orderly and well-engineered as, say, a mechanical clock. I can understand most of what I need to know about how a mechanical clock works by understanding how all the gears, springs, and other parts fit together: their respective properties and the dynamic relations between them. I don’t have to go much deeper than that. Their precise substance could be plastic, or brass, or wood: as long as they are rigid and sturdy enough to do their jobs, I don’t need to know. But in the open-ended, constantly evolving and boundary-transgressing world of human social systems, that sort of crude decomposition can only get us so far. Put another way, descriptions which rely on the stability (in identity, function, etc.) of things like organizations and institutions, the low-hanging fruit on the tree of knowledge, only help us when everything stays the same. But things don’t stay the same, even if for a long time it looks like nothing is changing. We see this in biological systems, for example, in which genetic diversity accumulates hidden by phenotypic homogeneity under some general regime, and when that regime changes, or some internal tipping point is reached, that hidden diversity rapidly becomes manifest in the distribution of phenotypes in the population.

As I said, society is heterogeneous, dynamic, and loosely coupled. By loosely coupled I mean that at reasonable levels of precision, most of its parts exhibit varying degrees of autonomy. Of course, autonomy is a contentious notion, but at the very least it means that behavior (of the parts in some natural decomposition) is determined in great degree by internal state rather than external inputs. That is, the parts exhibit relatively high degrees of independence from one another. Not too much of course; but not too little either, or they’d just be like the gears in some clock.

Back when, well, back when I started reading the things that led me to start thinking these sorts of things, the call to arms was ‘population thinking’ and ‘emergence’. The idea was to move toward ways of conceptualizing problems that avoid the traps of Platonistic essentialism. In particular, that meant thinking about heterogeneous sets of individuals, and how the properties of their aggregates arise through their interactions. Methodologically, though to varying degrees of fidelity, this has been expressed in the rise of a number of interrelated approaches to modeling social systems. Fueled by advances in graph theory (especially from work in computer science) and the new ‘social’ web, we have the blooming of social network analysis, which largely seeks to explain aggregate phenomena via the structural properties of social networks (however they end up being defined). In addition to (and in some ways complementary to) social network analysis have been a variety of computational approaches to modeling, especially agent-based approaches, which study how aggregate behavior arises from the interactions of modeled individual agents in some domain or problem environment. These come in an abundance of variants too great to survey here.
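As a toy illustration of the agent-based idea (a made-up minimal model, not any particular one from the literature), consider heterogeneous agents whose response to interaction depends on an internal trait, here called stubbornness, and watch an aggregate property emerge from pairwise interactions:

```python
import random

class Agent:
    """An agent whose behavior depends on internal state, not just inputs."""
    def __init__(self, opinion, stubbornness):
        self.opinion = opinion            # internal state in [0, 1]
        self.stubbornness = stubbornness  # heterogeneous, fixed trait in [0, 1]

    def interact(self, other):
        # Move partway toward the other's opinion; how far depends on
        # this agent's own trait, not on the input alone.
        self.opinion += (1 - self.stubbornness) * (other.opinion - self.opinion)

random.seed(42)
agents = [Agent(random.random(), random.random()) for _ in range(100)]

def spread(pop):
    return max(a.opinion for a in pop) - min(a.opinion for a in pop)

before = spread(agents)
for _ in range(1000):
    a, b = random.sample(agents, 2)
    a.interact(b)
after = spread(agents)

# Each interaction keeps opinions inside the current range, so the
# aggregate spread can only shrink: a crude 'consensus' emerges.
print(before, after)
```

Nothing in any single agent mentions consensus; the aggregate regularity is a product of the interactions.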

In a recent post, Daniel Little discusses how what he calls methodological localism emphasizes two ways in which people are socially embedded: agents are socially situated, and socially constituted. By socially situated he means that agents are locally embedded within systems of constraints: systems of social relations and institutions that determine the opportunities, costs, and choices available, i.e. the ‘game’ that agents have to play. Or, to quote Marx:

Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.

The social constitution of agents is a more subtle thing, but something that anthropologists are generally poignantly aware of. People are encultured beings. Their behavioral and cognitive repertoire comes into being as part of ongoing social interaction. It is learned, but the learning is not simply a matter of knowledge acquisition but of agent becoming: we fully exploit the affordances of our developmental plasticity. To say that I am an American is not simply to say that I have adopted a convenient label, but to assert an embodied fact.

Little goes on to discuss how these two perspectives on social embeddedness give rise to differing approaches to modeling social phenomena:

These two aspects of embeddedness provide the foundation for rather different kinds of social explanation and inquiry.  The first aspect of social embeddedness is entirely compatible with a neutral and universal theory of the agent — including rational choice theory in all its variants.  The actor is assumed to be configured in the same way in all social contexts; what differs is the environment of constraint and opportunity that he or she confronts.  This is in fact the approach taken by most scholars in the paradigm of the new institutionalism, it is the framework offered by James Coleman in Foundations of Social Theory, and it is also compatible with what analytical sociologists refer to as “structural individualism”. It also supports the model of “aggregative” explanation — explain an outcome as the result of the purposive actions of individuals responding to opportunities and constraints.

The second aspect, by contrast, assumes that human actors are to some important degree “plastic”, and they take shape in different ways in different social settings.  The developmental context — the series of historically specific experiences the individual has as he/she develops personality and identity — leads to important variations in personality and agency in different settings. So just knowing that the local social structure has a certain set of characteristics — the specifics of a share-cropping regime, let us say — doesn’t allow us to infer how things will work out. We also need to know the features of identity, perception, motivation, and reasoning that characterize the local people before we can work out how they will process the features of the structure in which they find themselves. This insight suggests a research approach that drills down into the specific features of agency that are at work in a situation, and then try to determine how actors with these features will interact socially and collectively.

Clearly, traditional economics is particularly wedded to the first approach. At the individual level, economic agents are typically modeled as completely informed, perfectly rational, self-interested agents. In equilibrium models, say of market behavior, that idealized agent *is* writ large: all agents are the same, face the same situation, and have the same information. It would be fair to say that this simplifying assumption has yielded very interesting formal results, but its adequacy as a foundation for an empirical science can be robustly criticized, though there are indeed circumstances in which, say, markets perform in close accordance with such models.

The behavioral revolution in economics of the last twenty years or so introduced various sorts of ‘boundedly rational’ agents. For example, Tversky and Kahneman demonstrated a number of ways in which real human agents violate these assumptions. In particular, their prospect theory holds that people have distinct utility functions for gain and loss domains (and that these domains are subject to framing effects). Generally speaking, Tversky and Kahneman found that people are risk-averse when facing gains, and risk-seeking when facing losses. However, in models that employ prospect theory, agents are typically assumed to have similar enough risk preferences to justify ignoring individual differences. So while prospect theory’s agents are more psychologically ‘real’ than Homo economicus, they clearly fall within Little’s first domain. Other models do, however, include limited varieties of agents, usually agents with fixed strategies or preferences of one kind or another. What is most frequently omitted, perhaps because it is hard to model and hard to analyze, is the adaptive agent: agents who change and grow, agents who are socially constituted.
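A sketch of the prospect-theory value function, using the parameter estimates commonly quoted from Tversky and Kahneman’s 1992 paper (α = β ≈ 0.88, λ ≈ 2.25); the particular gambles below are invented for illustration:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of an outcome x relative to a reference point."""
    if x >= 0:
        return x ** alpha            # concave over gains -> risk aversion
    return -lam * (-x) ** beta       # convex over losses -> risk seeking

# Sure gain of 50 vs. a 50/50 gamble for 100 or nothing:
sure_gain = value(50)
gamble_gain = 0.5 * value(100) + 0.5 * value(0)

# Sure loss of 50 vs. a 50/50 gamble to lose 100 or nothing:
sure_loss = value(-50)
gamble_loss = 0.5 * value(-100) + 0.5 * value(0)

print(sure_gain > gamble_gain)   # True: take the sure gain
print(gamble_loss > sure_loss)   # True: take the gamble over the sure loss
```

Note also the loss aversion built into λ: losing 50 hurts more, in absolute value, than gaining 50 pleases.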

Recently VOX, a policy and analysis forum for economists, hosted a debate, ‘What’s the Use of Economics?’, on the future of economics after the world’s economic crisis. In his contribution to this debate, Andrew Haldane, Executive Director of Financial Stability at the Bank of England, blames the academic and professional economics community for a number of sins contributing to the world’s current economic crisis. Among them is a failure to adequately account for the heterogeneity of economic agents in economic models:

These cliff-edge dynamics in socioeconomic systems are becoming increasingly familiar. Social dynamics around the Arab Spring in many ways closely resembled financial system dynamics following the failure of Lehman Brothers four years ago. Both are complex, adaptive networks. When gripped by fear, such systems are known to behave in a highly non-linear fashion due to cascading actions and reactions among agents. These systems exhibit a robust yet fragile property: swan-like serenity one minute, riot-like calamity the next.

These dynamics do not emerge from most mainstream models of the financial system or real economy. The reason is simple. The majority of these models use the framework of a single representative agent (or a small number of them). That effectively neuters the possibility of complex actions and interactions between agents shaping system dynamics…

Conventional models, based on the representative agent and with expectations mimicking fundamentals, had no hope of capturing these system dynamics. They are fundamentally ill-suited to capturing today’s networked world, in which social media shape expectations, shape behaviour and thus shape outcomes.

This calls for an intellectual reinvestment in models of heterogeneous, interacting agents, an investment likely to be every bit as great as the one that economists have made in DSGE models over the past 20 years. Agent-based modelling is one, but only one, such avenue. The construction and simulation of highly non-linear dynamics in systems of multiple equilibria represents unfamiliar territory for most economists. But this is not a journey into the unknown. Sociologists, physicists, ecologists, epidemiologists and anthropologists have for many years sought to understand just such systems. Following their footsteps will require a sense of academic adventure sadly absent in the pre-crisis period.
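Granovetter’s classic threshold (‘riot’) model, sketched below with invented threshold distributions, is perhaps the simplest illustration of these cliff-edge dynamics, and of why averaging agents away misses them: two populations with nearly identical distributions produce wildly different cascades.

```python
def cascade_size(thresholds):
    """Granovetter's riot model: agent i joins once the number of
    already-active agents reaches thresholds[i]; iterate to a fixed point."""
    active = 0
    while True:
        new = sum(1 for t in thresholds if t <= active)
        if new == active:
            return active
        active = new

# 100 agents with thresholds 0, 1, 2, ..., 99: each joiner tips the next.
uniform = list(range(100))

# Nearly the same distribution, but the threshold-1 agent is replaced
# by a second threshold-2 agent:
perturbed = [0, 2] + list(range(2, 100))

print(cascade_size(uniform))    # 100: everyone riots
print(cascade_size(perturbed))  # 1: the cascade dies immediately
```

A representative agent carrying the mean threshold would predict the same behavior for both populations; the entire difference lives in the interaction structure and the tails of the distribution.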


18 Responses to “Late Night Thoughts”

  1. This is an absolutely fantastic post.

    Re: this -

    I want to specifically focus on the sorts of descriptions that, to put it crudely, would enable us to do good macro.

    My longstanding hunch has been that an important problem with modern macro is its tendency to take the nation state as a central unit of analysis. I’d like to see (and such a thing may already exist – I’m not at all abreast of the research literature) something that shares the analytic orientation of Wallerstein’s world-systems approach, but that makes use of modern modelling techniques.

  2. Pretty. Very pretty. Still you could have just said macro makes too many assumptions, go build more complex models, buy bigger computers to simulate them, intellectualism is dead in macro economics.

  3. I agree with Duncan. This is a fantastic post. The question it raises for me is the boundary between the simple and the simplistic. I doubt that Edward Sapir was the first to notice this, but it was in his Language that I first encountered the observation that every word conveys a concept and every concept is an abstraction. It is, thus, impossible to speak of the whole of any particular thing. For science the question is always what degree of abstraction is productive. To design an experiment or survey is to make decisions about what to exclude from consideration. In some cases, e.g., formulating the law of gravity, point masses in a vacuum may be a productive assumption. In others, e.g., explaining why an airplane flew and then crashed (a task my son-in-law undertook while squadron safety officer for a Marine Corps air training command), those same assumptions are simplistic. In the case at hand the question is how simple can we make our agents before simple enough to be feasible to work with becomes simplistic—and thus a dangerous ground for policy.

  4. @duncan: Thank you. I am also ignorant of the literature, but I have had similar thoughts. Taking states as fundamental actors seems perilous in an age when the boundaries between them are so permeable. I do not mean merely the flows of people, information, and material across borders, but also how powerful organizations wield influence across and despite the borders of individual nation states. Such organizations include corporations, non-commercial NGOs, labor organizations, criminal enterprises, terrorist organizations, etc. But even if nation states were entirely contained, it is not clear to me that understanding nation states as single agents (especially using some kind of rational agent model) can be entirely justified. Indeed, the cynic in me grants that at least some foreign policy decisions of nation-states are made mainly to protect the financial and commercial interests of a small group of well-connected elites.

    @bar fight: thanks. A pithy summary, and mostly right. You are welcome to write my future abstracts. Actually though, I think I am arguing for rather more, or maybe rather less, than what you say. I am not arguing for more complexity for complexity’s sake. I am arguing against simplicity for the sake of elegant theoretical constructs, at least when it is intended to inform us, and especially decision makers, about how the world works. Furthermore, my aim was not directed solely at macro*economics*, but at macro theories in general, e.g. on the evolution of societies. It was not necessarily a criticism either, but a statement of principle, something I had to get out there because it’s obvious but almost never spoken. Also, my examples were chosen by convenience and circumstance – things I happened to have been reading that allowed me to piece together long slow thoughts that have been banging around in my head for a while. I have been reading macro-economics lately, particularly on monetary theory, but also general economics, mainly because it seems so relevant given our world’s woeful current economic, fiscal and monetary circumstances. On the other hand, I am involved more or less in the behavioral revolution in micro-economics. I am a research programmer in a social neuroscience and neuro-economics laboratory at Virginia Tech. We study the neural correlates of decision making in a variety of contexts, particularly through games of economic exchange, trust, and so forth, using reinforcement learning paradigms. The heterogeneity of agents is right there in our data, and implicit in our research questions, e.g. concerning PTSD, addiction, etc.

    @John: Yes, simple but not simplistic. You are exactly right. I was hoping that I was treading lightly on what is already heavily trodden ground though. All models abstract away detail, and we hope that by doing so we are trading fidelity for tractability at a favorable ratio. But it isn’t just a choice of how much detail you want to keep and how much you want to throw away. It’s also about which details are kept and which are not. Going back to my response to bar fight: we need to find ways to edge closer to the observed complexities of the world we live in. Just jotting some thoughts down here: We’ve explored rational actor models quite thoroughly. How to go further? Introduce behavioral paradigms, letting agents stay the same, but making them more like real-world agents. Go further by introducing varieties of agents, say with fixed characteristics, and see where that takes you. See: for example. Go further, but taking a step back: make agents adaptive, i.e. constituted by their environment, including the social environment, but make all agents fundamentally of a kind: think of a class of identical agents being programmed and reprogrammed by environments and by other agents throughout their life-cycles, and agents doing the same back to both. Go further: entire ecosystems of diverse classes of adaptive agents interacting.

    In artificial life, and other exploratory endeavors, there have been two basic approaches: 1) build small models, testing and theorizing carefully as you go, in what one hopes is an additive process of knowledge accumulation. You make a model, then change one thing about it, then change one thing again, and again, so that each time you know where the different observations come from. The second approach 2) is to build crazy big kitchen-sink models and see what the hell crawls out.

    Both approaches are valuable, and there is no reason that macro couldn’t proceed in both ways at once.

    But something more too: you can’t just generate theory models and call yourself an empirical science. You have to go get data, generate data models, and test your theory models against those data models. For example, it’s not just that the models in macro-economics may have been too simple, but that that simplicity was maintained by pushing inconvenient realities under the rug, i.e. theory-busting details were abstracted away, and by failing to systematically emphasize empirical checking of theories. I’ll admit, there may be too many degrees of freedom to do that as well as one would like, but as Clifford Geertz wrote: “I have never been impressed by the argument that, as complete objectivity is impossible in these matters (as, of course, it is), one might as well let one’s sentiments run loose. As Robert Solow has remarked, that is like saying that as a perfectly aseptic environment is impossible, one might as well conduct surgery in a sewer.”

    Some links for further thought in this vein (some of them might be familiar already to some here):

  5. Late coming to the discussion (was blind to new posts for a while), but I wanted to say that this was a great post and really struck a chord with me. I’ve returned to these sorts of issues many times when I think about modeling — especially agent-based modeling.

    One of the big challenges with agent-based modeling is – in simple language – creating the sort of model where agents (or groups of agents) are capable of displaying novel behavior. If we were modeling an economic system, for example, we’d optimally want one in which new financial “vehicles” might be invented. We might see banking arise within a more primitive fiat system, or the idea of options arise within a trading system.

    That sort of challenge seems to be met by defining the physics/metaphysics of the system at a suitably low level (Tom Ray’s Tierra is a venerable example of that — he saw parasitism, for example, arise within the system), but I think it’s much harder when we’re modeling human beings, who display creative behavior at many levels.

    My intuition tells me that successful models are going to be “analogous to” or “somewhat isomorphic to” real economic systems in the same way that Tierra is analogous to a real ecological system. We need models in which more complex affordances can form from smaller ones, and in which the embedded agents can exploit them at multiple levels.

    I’m obviously a bit partial to the “kitchen sink” approach ;).

  6. I’ve also been following this thread admiringly. At this point, I think Asher has put his finger on one of the most accessible and tractable areas for “testable” models. I’d call it the self-organization of derived (sometimes derivative) markets — sort of the characteristic phenomenon of our time. There are two species, from an initial point of view. The first are the hedges: secondary markets to redistribute risk. The second are the speculative markets that self-organize to exploit potential opportunities that the initial market can’t access. The standard case is that the initial market becomes subject to regulation, so a shadow market organizes (in Singapore, or someplace else beyond the reach of regulation) that bets on the behavior of the primary market.
    For example, right now, in Joisey and elsewhere, strategies are being devised to deal with the radically changing shape of the real estate market as shore land use becomes highly regulated with respect to what can be built, how, and even whether something can be built or rebuilt at all. In effect, the developers etc. are now systemizing their betting strategies on what the rules will be over the next decade or so. Meanwhile, current owners think of their stake in terms of the old real estate market and its metric. The developers are betting on how that stake will be factored in a new metric.
    Back a ways, I put “testable” in quotes. That’s to suggest that the traditional habits of testing are tied to a swarm of linearity assumptions, and are relevant, let alone successful, only under those assumptions. This is the old causality issue again. On this go-round let me just say that introducing creativity is one thing when the creative have control over what they create, and another when they don’t. In the highly non-linearly interactive contexts we’re talking about here, they very often don’t. So if, for example, we tried to identify agent motivation on the basis of results, we’d have to be lucky to succeed, etc. So, how, in these circumstances, are models to be evaluated?
    Asher is right also in highlighting the analogy with ecosystems, along with the potential disanalogies. For ecosystems, “agents” can often be identified (defined, constructed) in terms of a more or less manageable set of capacities (plasticities, reaction norms, evolvability). The parallel for people has most often been to construct caricatures, ideal types, etc., that constitute bounded capacities in a stylized way. Are we trying to get beyond that, or trying to get it right? Think about “species” and ideal types.

  7. This has, indeed, been a great thread, but where will it lead us? Jacob Lee and Asher Kay both seem to have a technical grasp of what is involved in modeling. My understanding of the subject is barely a sketch. It strikes me as an amusing, and not entirely self-serving, idea that we could all sign up together for Scott Page’s Coursera course on modeling and develop some common ground from which to proceed further. The course in question is Model Thinking. Any takers?

  8. Don’t laugh, John. I already thought of signing up for Page’s course on the strength of COMPLEX ADAPTIVE SYSTEMS — a really good book. But then I read DIVERSITY AND COMPLEXITY, which is disappointing.

  9. I don’t know about the latter book, but I am also an admirer of COMPLEX ADAPTIVE SYSTEMS. I signed up for Page’s course last year. Didn’t finish it because I was distracted by other more urgent obligations. But the first few sessions were terrific. I was thinking that if several of us signed on at the same time, we could form an impromptu study group and keep each other’s noses to the grindstone.

  10. I’m game, but don’t shoot me.

  11. John – Sadly, I think it’s going to be summer before I have enough time to give a course the attention it deserves. Coursera had me drooling (intellectually). I immediately saw about 12 courses I wanted to do (a whole 10-week course on MOS transistors?!??!), and I had to leave before I did something stupid.

    I think one of the best ways to get a feel for computer modeling is to read something by Robert Axelrod. His models tend to be very simple and interesting (predicting the alliances in WWII, for example), and they introduce various ideas like the iterated prisoner’s dilemma or “landscape theory”.
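    For concreteness, a minimal sketch of the iterated prisoner’s dilemma with the standard payoffs (T=5, R=3, P=1, S=0) and a tit-for-tat player; the function names are mine, not Axelrod’s.

```python
# Row player's payoffs: (my_score, opponent_score) for each move pair.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each list holds the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): TFT is exploited only once
```

    Simple as it is, the setup already shows the flavor of Axelrod’s tournaments: strategy performance depends entirely on the population of other strategies it meets.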

  12. Sigh – I take it back. Every time I do some work someone else has in mind, I don’t do some work I had in mind. I learn a lot that way, but I also get turned down for promotion and whatnot. So no, not this time.

  13. Not to worry. I am in the same boat. We have a ton of work to get through and a bunch of other commitments as well. Perhaps next year.

  14. “That sort of challenge seems to be met by defining the physics/metaphysics of the system at a suitably low level (Tom Ray’s Tierra is a venerable example of that — he saw parasitism, for example, arise within the system), but I think it’s much harder when we’re modeling human beings, who display creative behavior at many levels.”

    Yes. Surely this is key both to this and to the related problem of developing artificial-life evolutionary systems capable of supporting open-ended evolution.

    Given the Game of Life, how would you go about creating an object-oriented version of Life, written in Java or some similar language, in which the objects consisted of those patterns we commonly recognize in the Game of Life: gliders, puffers, breeders, and so on? Object-oriented programming defines classes and instances of those classes (instantiations). Objects have properties (data) and methods (functions). Methods are a means of manipulating the object in some way, or having it perform some action.
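    For concreteness, here is a minimal, non-object-oriented Game of Life in which only the cellular substrate exists; the glider is an ‘object’ only to us, the observers. The set-of-live-cells representation is just one convenient sketch.

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) live cells."""
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a pattern none of the rules mention, yet it persists and moves.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After four generations the glider reappears displaced by (1, 1).
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

    The ‘glider’ exists nowhere in the program: only cells do. Recognizing and naming it is an act of analysis performed by the observer, not by the system.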

  15. “Open-ended” — that’s the term I was looking for.

    WRT an object-oriented implementation of Conway’s game of life, it’s an interesting question you ask. You seem to be saying that there’s a sort of built-in obstacle to open-endedness in the language paradigm, in that you’d have to know beforehand about puffers and gliders in order to have a class defined. It’s likely that in an implementation, these “aggregate” objects would be defined by a more abstract class, and the “fun” part would be recognizing when a collection of cells is acting as an aggregate (without thinking deeply about it, I’m guessing that would have something to do with structural persistence). And then the problem switches to one in which you have to know beforehand what properties and methods to give the abstract class, which also closes off the open-endedness, since the object can’t “be” or “do” just anything. Perhaps that can be solved in the same way, by having abstract classes that contain raw information and dynamic code respectively. It starts to look like “enterprise Java” with all the abstraction.

    In Tierra, of course, the organisms themselves are just collections of instructions, and the “world” is just a virtual machine of sorts. An “aggregate organism” isn’t something the system itself understands — that is, an analysis of the system is required to label a particular organism as an aggregate. I wonder if there’s not a parallel way of thinking about a game of life simulation.

  16. So, when people began exploring the Game of Life, they began to notice commonly recurring patterns–spatially organized sequences of state changes that repeat. These patterns are objectified in the discourse of GOL enthusiasts. GOL engineers, who build large patterns that do lots of cool stuff, explain how they work in terms of their constituent patterns. For example, here is a description of an oblique spaceship:

    The “shoulder” of the construction arm, which fires the four kinds of construction salvo, was deliberately built using “Spartan” Herschel technology, ie using Herschel components constructed solely from blocks, beehives, tubs, boats and eaters. This was in the expectation of minimizing the complexity of slow-salvo construction using a single arm. (Even so, some recent informal work I did showed that the close spacing of eaters in some places would require some synchronized one-off two-glider syntheses using small, one-shot glider splitters and reflectors.)

    So not only do these gliders, reflectors, beehives, etc. behave in a regular (non-chaotic) fashion, but GOL engineers have figured out how to compose these objects in ways that are themselves regular: their interactions. So these objects are composed of other objects, and they can also be categorized by type. So far, this conforms nicely to the motivating metaphor of object-oriented computing: that the world is more or less composed of objects that have properties and affordances. So, for example, we might want to create a class called Chair, and a class called Recliner inheriting from Chair, where a Recliner can be in state /reclined/ or /not reclined/ and affords recline and unrecline methods. Or we might, say, simulate the collisions of particles in some space, and so define a class Movable, and a class Particle that inherits from it, with properties like mass, velocity, position, and orientation, and methods like move() and reflect(). And so on. Maybe we want particles to become donuts when they collide, in which case we destroy the two instances of the particle class and replace them with newly instantiated instances of some donut class.
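    A sketch of the hypothetical Movable/Particle classes just described (the class names, properties, and methods come from the paragraph above, not from any real library):

```python
class Movable:
    """Base class for anything with a position that can change."""
    def __init__(self, position):
        self.position = position

    def move(self, dt):
        raise NotImplementedError

class Particle(Movable):
    def __init__(self, position, velocity, mass):
        super().__init__(position)
        self.velocity = velocity
        self.mass = mass

    def move(self, dt):
        # Straight-line motion: advance each coordinate by v * dt.
        self.position = tuple(p + v * dt
                              for p, v in zip(self.position, self.velocity))

    def reflect(self, axis):
        # Bounce off a wall perpendicular to `axis` by flipping that
        # velocity component.
        v = list(self.velocity)
        v[axis] = -v[axis]
        self.velocity = tuple(v)

p = Particle(position=(0.0, 0.0), velocity=(1.0, 2.0), mass=1.0)
p.move(0.5)
print(p.position)   # (0.5, 1.0)
p.reflect(1)
print(p.velocity)   # (1.0, -2.0)
```

    The Chair/Recliner pair would follow the same template: a state property plus methods that transition between states.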

    So, one might, say, define two classes of GOL objects, and have two instantiations of those classes act as they are supposed to (displacement across space, transitions to known aggregate states, etc.), where, for example, their properties are their current state in their characteristic state sequence and their location, and their methods would involve things like state transitions and movement and so on. And GOL engineers harness the interactions of patterns to build bigger, fancier patterns, so it seems that one ought to be able to capture pattern interactions using some kind of method on these objects, or perhaps some static method of the space handling collisions: Space::collision(pattern1, pattern2), where if pattern1 and pattern2 are determined to have collided, then something appropriate happens depending on their respective properties.
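    A minimal sketch of that Space::collision idea, under my own assumptions: patterns are sets of live-cell coordinates, and "collision" is approximated crudely as overlapping bounding boxes. The Pattern class and the overlap test are hypothetical, not from any GOL library.

```python
class Pattern:
    def __init__(self, cells):
        self.cells = set(cells)  # live cells as (x, y) tuples

    def bounding_box(self):
        xs = [x for x, _ in self.cells]
        ys = [y for _, y in self.cells]
        return min(xs), min(ys), max(xs), max(ys)

class Space:
    @staticmethod
    def collision(p1, p2):
        """Crude collision test: do the two bounding boxes overlap?"""
        ax0, ay0, ax1, ay1 = p1.bounding_box()
        bx0, by0, bx1, by1 = p2.bounding_box()
        return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

glider = Pattern([(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)])
block = Pattern([(2, 2), (2, 3), (3, 2), (3, 3)])
Space.collision(glider, block)  # True: the bounding boxes overlap
```

    Detecting that a collision happened is the easy part; the hard part, as the next paragraph argues, is saying what the result of the collision is.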

    But this gets really, really nasty. If we take, say, this intricate magnificent beast (a Turing machine) and have a single glider collide with it from the wrong location, the whole structure may very well collapse into chaos, ultimately terminating in some ugly collection of low-level still lifes and oscillators. The point here is: collisions reveal the ultimate fragility of their identity as objects. It is as if the collision of a pea and a planet at low velocities might cause both to disintegrate into rings of dust in space. But maybe the results of collisions simply require a refined enough specification, right? There are a lot of ways that glider can collide with that Turing machine, and the result of each of those ways is likely to be unique, but not specifiable in so simple a way as a physical particle collision: the results are nonlinear. If we had a rich enough set of classes (one for every conceivable pattern, stable or unstable), then all we would need to do is know how to move between them… It is almost as if you would have to descend to the cellular level to be refined enough, i.e. to be reduced to simply running GOL, where the only real objects are the cells and arrays that define that world.
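    Descending to the cellular level is, happily, tiny. This is the standard Game of Life update rule (birth on exactly 3 live neighbours, survival on 2 or 3) over a set of live-cell coordinates; only the representation as a Python set is my choice.

```python
from collections import Counter

def step(live):
    """One GOL generation over a set of live-cell (x, y) coordinates."""
    # Count how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

blinker = {(0, 1), (1, 1), (2, 1)}  # a period-2 oscillator
step(step(blinker)) == blinker      # True: it repeats every 2 steps
```

    Notice that gliders, Herschels, and Turing machines appear nowhere in this code: at this level they are epiphenomena of the cell rule, which is exactly the point.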

    A resilient self-reproducing GOL pattern would have to have some kind of wall or boundary unresponsive to most perturbations. It is hard to see how that can happen. But real-world life has the same thing: every living thing has boundaries between self and non-self, permeable though they may be. At the simplest level, single-celled organisms have cell walls and membranes mediating interactions between the cell’s internal world and the outside world. And of course you need that: no cell wall, and your nucleus is likely to drift off somewhere…

  17. That paragraph from the forums is sheer awesomeness…

    I think the Turing machine in GOL makes a perfect example. Well, I guess it would be even better if it came into being by itself. Or if it defended itself from harm.

    “you would have to descend to the cellular level to be refined enough, i.e. to be reduced to simply running GOL where the only real objects are the cells and arrays that define that world”

    That’s really what it comes down to. The world needs to be 1) very dumb; 2) very causally local; and 3) fundamentally capable of asymmetry. I guess the problem really ends up being that we want to start out with something akin to the Turing machine in the GOL fully-formed because we don’t have a couple billion years to wait for it to happen.

