Author Archive

January 31, 2013

Late Night Thoughts

by Jacob Lee

Social systems are dynamic, internally heterogeneous, and loosely coupled. Some may object to my use of the term ‘system’, and certainly the word has a lot of baggage. By calling something a system, I am merely drawing attention to the fact that it admits descriptions in terms of parts, their properties, and the relationships between these. As a statement about something this adds very little, since all but possibly the simplest things can be described in such terms; but we must call ‘it’ something, and by calling ‘it’ a ‘system’ I invite an analysis in terms of its constituent parts. That last sentence is slightly misleading, however, because it suggests both that we can presuppose the thing (Brian Cantwell Smith’s Criterion of Ultimate Concreteness) and, probably more importantly, that any such thing is decomposable into parts, properties, and relations in only one right way. This is not the case at all: all but possibly the simplest things admit many analyses, distinguishing different parts, relations, and properties at different levels of granularity, precision, and accuracy. This needn’t be taken as an assertion of metaphysics so much as an assertion of pragmatism.

Obviously, what sorts of descriptions are best is relative to purpose, context, and circumstance (if those indeed are three distinct things), but that only means that some difference between those descriptions must be able to account for their different utilities. There are a variety of information-theoretic approaches that can be applied to such a problem. We can look at the complexity of the description itself, using something like algorithmic information theory; we can try to measure the amount of uncertainty a description reduces, using Shannon’s mathematical theory of communication; or we can look at the various approaches to measuring semantic information content that have been introduced in the philosophy literature. In a very general sense, however, a good decomposition is one which is coherent and approaches some optimal ratio between the amount of information that can be extracted and the cost of doing so.

So, social phenomena admit many possible decompositions; some may be better than others for some purposes and in some contexts; but here I want to ask: given the current state of social science, and our increasingly dire need for sound policy advice, what sorts of descriptions are we in want of?  To put this in a slightly different way, what sorts of descriptions are needed to improve our explanations of social phenomena, both to advance our theoretical understanding, but also to advance our practical ability to provide valuable policy advice? That’s a big question (of course), and I don’t think it has one answer (wink), so I want to specifically focus on the sorts of descriptions that, to put it crudely, would enable us to do good macro.

Let me stop myself here and express a view I adopted fairly early in my intellectual life: it’s long past time we stopped trying to explain macro-level social phenomena by projecting individual psychological or behavioral traits onto society: society is not the individual writ large. Nor is society something as orderly and well-engineered as, say, a mechanical clock. I can understand most of what I need to know about how a mechanical clock works by understanding how all the gears, springs, and other parts fit together: their respective properties and the dynamic relations between them. I don’t have to go much deeper than that. Their precise substance could be plastic, or brass, or wood: as long as they are rigid and sturdy enough to do their jobs, I don’t need to know. But in the open-ended, constantly evolving and boundary-transgressing world of human social systems, that sort of crude decomposition can only get us so far. Put another way, descriptions which rely on the stability (in identity, function, etc.) of things like organizations and institutions, the low-hanging fruit on the tree of knowledge, only help us when everything stays the same. But things don’t stay the same, even if for a long time it looks like nothing is changing. We see this in biological systems, for example, in which genetic diversity accumulates hidden by phenotypic homogeneity under some general regime, and when that regime changes, or some internal tipping point is reached, that hidden diversity rapidly becomes manifest in the distribution of phenotypes in the population.

As I said, society is heterogeneous, dynamic, and loosely coupled. By loosely coupled I mean that at reasonable levels of precision, most of its parts exhibit varying degrees of autonomy. Of course, autonomy is a contentious notion, but at the very least it means that behavior (of the parts in some natural decomposition) is determined in great degree by internal state rather than external inputs. That is, the parts exhibit relatively high degrees of independence from one another. Not too much of course; but not too little either, or they’d just be like the gears in some clock.

Back when, well back when I started reading things that led me to start thinking these sorts of things, the call to arms was something called ‘population thinking’ and ‘emergence’. The idea was to move toward ways of conceptualizing problems that avoid the traps of Platonistic essentialism. In particular, that meant thinking about heterogeneous sets of individuals, and how the properties of their aggregates arise through their interactions. Methodologically, though to varying degrees of fidelity, this has been expressed in the rise of a number of interrelated approaches to modeling social systems. Fueled by advances in graph theory (especially from work in computer science) and the new ‘social’ web, we have the blooming of social network analysis, which largely seeks to explain aggregate phenomena via the structural properties of social networks (however they end up being defined). In addition to (and in some ways complementary to) social network analysis have been a variety of computational approaches to modeling, especially agent-based approaches, which study how aggregate behavior arises from the interactions of modeled individual agents in some domain or problem environment. These come in such an abundance of variants that they are difficult to survey.

In a recent post, Daniel Little discusses how what he calls methodological localism emphasizes two ways in which people are socially embedded: agents are socially situated and socially constituted. By socially situated he means that agents are locally embedded within systems of constraints: systems of social relations and institutions that determine the opportunities, costs, and choices available, i.e. the ‘game’ that agents have to play. Or to quote Marx:

Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.

The social constitution of agents is a more subtle thing, but something that anthropologists are generally acutely aware of. People are encultured beings. Their behavioral and cognitive repertoires come into being as part of ongoing social interaction. They are learned, but the learning is not simply a matter of knowledge acquisition; it is a matter of agent becoming: we fully exploit the affordances of our developmental plasticity. To say that I am an American is not simply to say that I have adopted a convenient label, but to assert an embodied fact.

Little goes on to discuss how these two perspectives on social embeddedness give rise to differing approaches to modeling social phenomena:

These two aspects of embeddedness provide the foundation for rather different kinds of social explanation and inquiry.  The first aspect of social embeddedness is entirely compatible with a neutral and universal theory of the agent — including rational choice theory in all its variants.  The actor is assumed to be configured in the same way in all social contexts; what differs is the environment of constraint and opportunity that he or she confronts.  This is in fact the approach taken by most scholars in the paradigm of the new institutionalism, it is the framework offered by James Coleman in Foundations of Social Theory, and it is also compatible with what analytical sociologists refer to as “structural individualism”. It also supports the model of “aggregative” explanation — explain an outcome as the result of the purposive actions of individuals responding to opportunities and constraints.

The second aspect, by contrast, assumes that human actors are to some important degree “plastic”, and they take shape in different ways in different social settings.  The developmental context — the series of historically specific experiences the individual has as he/she develops personality and identity — leads to important variations in personality and agency in different settings. So just knowing that the local social structure has a certain set of characteristics — the specifics of a share-cropping regime, let us say — doesn’t allow us to infer how things will work out. We also need to know the features of identity, perception, motivation, and reasoning that characterize the local people before we can work out how they will process the features of the structure in which they find themselves. This insight suggests a research approach that drills down into the specific features of agency that are at work in a situation, and then try to determine how actors with these features will interact socially and collectively.

Clearly traditional economics is particularly wedded to the first approach. At the individual level, economic agents are typically modeled as completely informed, perfectly rational, self-interested agents. In equilibrium models, say of market behavior, that idealized agent *is* writ large: all agents are the same, face the same situation, and have the same information. It would be fair to say that this simplifying assumption has yielded very interesting formal results, but its adequacy as a foundation for an empirical science can be robustly criticized, though there are indeed circumstances in which, say, markets perform in close accordance with such models.

The behavioral revolution in economics of the last twenty years or so introduced various sorts of ‘boundedly rational’ agents. For example, Tversky and Kahneman demonstrated a number of ways in which real human agents violate these assumptions. In particular, their prospect theory holds that people have distinct utility functions for gain and loss domains (and that these domains are subject to framing effects). Generally speaking, Tversky and Kahneman found that people are risk-avoiding when facing gains, and risk-seeking when facing losses. In models that use such agents, however, agents are often assumed to have similar enough risk preferences to justify ignoring individual differences. So while prospect theory’s agents are more psychologically ‘real’ than Homo economicus, they clearly fall within Little’s first domain. Other models do include limited varieties of agents, usually agents with fixed strategies or preferences of one kind or another. What is most frequently omitted, perhaps because it is hard to model and hard to analyze, is the adaptive agent: agents who change and grow, agents who are socially constituted.
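Prospect theory’s value function is easy to state concretely. Below is a minimal sketch in Python, using the functional form and the median parameter estimates reported in Tversky and Kahneman’s 1992 cumulative prospect theory paper; the function name and defaults are my own illustration, not a standard library.

```python
# A minimal sketch of a prospect-theory value function. Parameters are the
# median estimates from Tversky and Kahneman (1992): alpha = beta = 0.88,
# lambda = 2.25 (the loss-aversion coefficient).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha            # concave in gains -> risk aversion
    return -lam * (-x) ** beta       # convex and steeper in losses -> risk seeking

# Loss aversion: a loss looms larger than an equal-sized gain.
assert abs(value(-100)) > value(100)
# Concavity over gains: two certain 100s beat one certain 200 in value terms.
assert value(200) < 2 * value(100)
```

The asymmetry in the two assertions is the point: concavity over gains and steeper convexity over losses yield exactly the risk-avoiding/risk-seeking split described above.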

Recently, VOX, a policy and analysis forum for economists, hosted a debate, ‘What’s the Use of Economics?’, on the future of economics in the wake of the world’s economic crisis. In his contribution to this debate, Andrew Haldane, Executive Director of Financial Stability at the Bank of England, blames the economics profession, academic and professional alike, for a number of sins contributing to the world’s current economic crisis. Amongst them is a failure to adequately take into account the heterogeneity of economic agents in economic models:

These cliff-edge dynamics in socioeconomic systems are becoming increasingly familiar. Social dynamics around the Arab Spring in many ways closely resembled financial system dynamics following the failure of Lehman Brothers four years ago. Both are complex, adaptive networks. When gripped by fear, such systems are known to behave in a highly non-linear fashion due to cascading actions and reactions among agents. These systems exhibit a robust yet fragile property: swan-like serenity one minute, riot-like calamity the next.

These dynamics do not emerge from most mainstream models of the financial system or real economy. The reason is simple. The majority of these models use the framework of a single representative agent (or a small number of them). That effectively neuters the possibility of complex actions and interactions between agents shaping system dynamics…

Conventional models, based on the representative agent and with expectations mimicking fundamentals, had no hope of capturing these system dynamics. They are fundamentally ill-suited to capturing today’s networked world, in which social media shape expectations, shape behaviour and thus shape outcomes.

This calls for an intellectual reinvestment in models of heterogeneous, interacting agents, an investment likely to be every bit as great as the one that economists have made in DSGE models over the past 20 years. Agent-based modelling is one, but only one, such avenue. The construction and simulation of highly non-linear dynamics in systems of multiple equilibria represents unfamiliar territory for most economists. But this is not a journey into the unknown. Sociologists, physicists, ecologists, epidemiologists and anthropologists have for many years sought to understand just such systems. Following their footsteps will require a sense of academic adventure sadly absent in the pre-crisis period.
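Haldane’s ‘cliff-edge dynamics’ can be made vivid with a toy model of heterogeneous, interacting agents. The sketch below is my own illustration in the spirit of Granovetter’s classic threshold model, not anything from Haldane’s piece: each agent acts once the fraction of others already acting reaches its personal threshold, and a tiny change in the distribution of thresholds flips the system between serenity and calamity.

```python
# A minimal threshold-cascade sketch (in the spirit of Granovetter 1978).
# Each agent has a threshold: the fraction of the population that must
# already be acting before it joins in. Iterate to a fixed point.
def cascade(thresholds):
    n = len(thresholds)
    acting = 0
    while True:
        newly = sum(1 for t in thresholds if t <= acting / n)
        if newly == acting:          # fixed point reached
            return acting
        acting = newly

# Thresholds 0, 1/100, 2/100, ...: each actor tips the next, and the
# whole population ends up acting.
assert cascade([i / 100 for i in range(100)]) == 100
# Remove the single agent with threshold 1/100 and the cascade stalls at 1:
# the aggregate outcome flips on one agent's disposition.
assert cascade([0.0] + [i / 100 for i in range(2, 101)]) == 1
```

No representative agent can exhibit this behavior: the cliff edge lives entirely in the heterogeneity of the threshold distribution, not in any individual agent.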


April 30, 2012

Split Peas, Distraction, and the Enemies of Abstraction

by Jacob Lee

While reading Marijn Haverbeke’s book Eloquent JavaScript: A Modern Introduction to Programming, I came across this passage, extolling the virtues of abstraction:

When writing a program, it is easy to get sidetracked into small details at every point. You come across some little issue, and you deal with it, and then proceed to the next little problem, and so on. This makes the code read like a grandmother’s tale.

Yes, dear, to make pea soup you will need split peas, the dry kind. And you have to soak them at least for a night, or you will have to cook them for hours and hours. I remember one time, when my dull son tried to make pea soup. Would you believe he hadn’t soaked the peas? We almost broke our teeth, all of us. Anyway, when you have soaked the peas, and you’ll want about a cup of them per person, and pay attention because they will expand a bit while they are soaking, so if you aren’t careful they will spill out of whatever you use to hold them, so also use plenty water to soak in, but as I said, about a cup of them, when they are dry, and after they are soaked you cook them in four cups of water per cup of dry peas. Let it simmer for two hours, which means you cover it and keep it barely cooking, and then add some diced onions, sliced celery stalk, and maybe a carrot or two and some ham. Let it all cook for a few minutes more, and it is ready to eat.

Another way to describe this recipe:

Per person: one cup dried split peas, half a chopped onion, half a carrot, a celery stalk, and optionally ham.

Soak peas overnight, simmer them for two hours in four cups of water (per person), add vegetables and ham, and cook for ten more minutes.

This is shorter, but if you don’t know how to soak peas you’ll surely screw up and put them in too little water. But how to soak peas can be looked up, and that is the trick. If you assume a certain basic knowledge in the audience, you can talk in a language that deals with bigger concepts, and express things in a much shorter and clearer way. (emphasis added) This, more or less, is what abstraction is.

Now, after reading this, I could not help but think about how blogs, listserves, and other open forums for expert discussion can become bogged down by the participation of non-experts, or the badly informed and opinionated, or even people merely coming from different domains of expertise. Eventually, this can lead to the departure of those experts for less congested locales.

There is immense value in the continued existence of open forums. So the question is: how can a forum remain open, but circumvent, or at least diminish, the challenges just mentioned? I can think of at least one way, suitable for highly topical forums: namely, the FAQ, or Primer.

So, for example, New Economic Perspectives, a group blog of academic and professional economists interested in Modern Monetary Theory (MMT), has an extensive primer on MMT available on its blog. But a Primer or FAQ is probably not enough, unless it is tied into some kind of policy and practice of policing. An interesting, if extreme, policy might involve some kind of test of the basic knowledge presumed by the epistemic community in question. For example, New Economic Perspectives might test users’ knowledge of MMT before granting the credentials to participate in the forum, or users’ scores on such a test could be displayed alongside their comments. Annoying perhaps. But then, I am reminded of John’s idea of an online community based on concentric circles:

While agreeing fundamentally with grad student guy’s observation about the need for filtering, it is far from clear to me that any currently available technological solution is likely to solve the problem. As far as I can make out, the patterns that emerge in forums like OAC are not that different from those that emerge on listservs. After an initial period of enthusiasm in which people jump on board and stake out positions, things settle down to a handful of contributors accounting for most of the traffic and occasional outbursts of concern about why more people aren’t contributing.

The alternative is to consider filtering as a political problem, where the fundamental dilemma is the gap between the ideal of openness and ease of access and the reality that most people can’t or don’t want to spend large amounts of their day responding to a mass of material that grows exponentially and is mostly repetitious and, in even the best sense, juvenile — an endless rehashing of old arguments that rarely goes anywhere.

The only plausible scheme that I have been able to imagine is modeled on secret societies with a hierarchy of concentric circles, in which it requires an invitation to move toward the center for all but the outermost circle, to which everyone is invited. Are there other options to consider?

March 23, 2012

The Railway Inspector

by Jacob Lee

Having finally completed my laborious Master’s thesis and degree, I have been free to pursue a number of other projects that have been on hold for almost two years. This freedom has helped me rediscover the joy of research, something that can be lost in the long uncertainty of thesis work.

One of the projects on which I have resumed work is a formal analysis of the Polish kinship terminology. The work is inspired by Dwight Read’s algebraic approach to the analysis of kinship terminologies. Prior to engaging in my thesis work, I had gone a long way toward completing my analysis (the details of which I will not go into today). Yet, in that effort, I had been nagged by a feeling that I had not properly done my homework of carefully gathering and documenting the different sources of information on Polish kinship terminology available to me.

I suppose that some might be thinking, “What’s the big problem? I mean, can’t you just ask someone what Poles call their family members?” The answer is, yes, of course, I can, and have. The problem is that the Poles I asked often disagreed with one another, and in some cases my questions instigated not-always-so-light-hearted arguments between my Polish friends on just what to call a mother’s brother, or a father’s brother. As it happens, the Polish kinship terminology appears to be going through a long, slow, and uneven structural transformation (Parkin 1995). See (Stein 1975) for similar observations about the Slovak terminology. Add into the mix the existence of regional or dialectal varieties of the terminology, and a turbulent 19th- and 20th-century history of (sometimes forced) migration, and you have a pretty messy soup.

So, I’ve been pushing through the literature, carefully documenting whatever information on the terminology I can locate. I have been particularly keen on going through the Polish literature. With my still limited grasp of the language, and lack of convenient library access, it’s been a long slog. And so, much of my data is coming from rather old texts, freely available at

All of this discussion has really been nothing but a long preamble to my main point. Of the various texts I am consulting, one is an ethnological report written by Jan Świętek (1896). During his life, Świętek published several important and reputable contributions to Polish ethnology, including a more than 700-page monograph on the social and cultural life of the Poles living in Bochnia County (a district located a little way from Cracow in southern Poland).

And yet… Jan Świętek was an amateur, a working-class man who earned his bread as a railway inspector.

There is something satisfying, and maybe even comforting, about consulting the work of a long-dead amateur, who did what he did for the love of it. Maybe it’s just that, right now, standing outside of academia, contemplating which career path I ought to choose, I take comfort in the idea that the serious amateur can still make a serious contribution to scholarship, even if he (or she) does not have a position within academia.


Parkin, Robert. 1995. “Contemporary Evolution of Polish Kinship Terminology.” Sociologus 45 (2): 140-152.

Stein, Howard F. 1975. “Structural Change in Slovak Kinship: An Ethnohistoric Inquiry.” Ethnology 14 (1): 99–108.

Świętek, Jan 1896. “Zwyczaje i Pojęcia Prawne Ludu Nadrabskiego.” In Materyały Antropologiczno-archeologiczne i Etnograficzne Komisja Antropologiczna Akademja Umiejętności w Krakowie., 1:266–362. Kraków: Nakl. Akademji Umiejętności.

May 29, 2011

Complexity Science

by Jacob Lee

Map of Complexity Sciences - Castellani

I’ve just run across Brian Castellani’s map of the complexity sciences. I’ve been poking my nose in a lot of these corners for the last 10 years, not always systematically, and I’m kinda familiar with many of the local landmarks, but having a broader view of the landscape is very cool.

Margaret Mead is in there too…

May 29, 2011

More on Channel Theory

by Jacob Lee

In my last post I introduced a couple of concepts from the channel theory of Jeremy Seligman and Jon Barwise. In this post I would like to continue that introduction.

To review, channel theory is intended to help us understand information flows of the following sort: a‘s being F carries the information that b is G. For example, we might want a general framework in which to understand how a piece of fruit’s bitterness may carry the information that it is toxic, or how a mountainside’s having a particular distribution of flora can carry information about the local micro-climate, or how a war leader’s generous gift-giving may carry information about the success of a recent campaign, or how the sighting of a gull can carry the information that land is near. In a previous post, we asked how the positions of various participants in a fono might carry information about the political events of the day. One would hope that such a framework may even illuminate how an incident in which a person gets sick and dies may be perceived to carry the information that there is a sorcerer responsible for this misfortune.

In my last post, I introduced a simple sort of data structure called a classification. A classification simply links particulars to types. But as my examples above were intended to show, classifications are not intended to model only ‘categorical’ data, as usually construed.

Def 1. A classification is a triple A = \langle tok(A), typ(A), \vDash \rangle such that for every token a \in tok(A) and every type \alpha \in typ(A), a \vDash_{A} \alpha if and only if a is of type \alpha.
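For concreteness, here is a minimal sketch of a classification as a data structure, in Python; the class and all the names are my own illustration, not notation from Barwise and Seligman.

```python
# A classification: a set of tokens, a set of types, and a relation
# linking tokens to the types they are of.
class Classification:
    def __init__(self, tokens, types, rel):
        self.tokens = set(tokens)
        self.types = set(types)
        self.rel = set(rel)  # pairs (token, type) with token |= type

    def classifies(self, token, typ):
        """Does this classification hold token |= typ?"""
        return (token, typ) in self.rel

# A toy classification of fruit specimens, echoing the example above.
fruit = Classification(
    tokens={"specimen1", "specimen2"},
    types={"bitter", "toxic"},
    rel={("specimen1", "bitter"), ("specimen1", "toxic")},
)
assert fruit.classifies("specimen1", "bitter")
assert not fruit.classifies("specimen2", "toxic")
```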

One might remark that a classification is not much more than a table whose attributes have only two possible values, a sort of degenerate relational database. However, unlike a record/row in a relational database, channel theory treats each token as a first-class object. Relational databases require keys to guarantee that each tuple is unique, and key constraints to model relationships between records in tables. By treating tokens as first-class objects, we may model relationships using an infomorphism:

Def 2. Let A and B be two classifications. An infomorphism f : A \rightleftarrows B is a pair of functions f = \lbrace f^{\wedge}, f^{\vee} \rbrace, with f^{\wedge} : typ(A) \rightarrow typ(B) and f^{\vee} : tok(B) \rightarrow tok(A), satisfying the following property: for every type \alpha in A and every token b in B, b \vDash_{B} f^{\wedge}(\alpha) if and only if f^{\vee}(b) \vDash_{A} \alpha.

An infomorphism is more general than an isomorphism between classifications; i.e., an isomorphism is a special case of an infomorphism. For example, an infomorphism f : A \rightleftarrows B between classifications A and B might map two or more types in A, say \alpha and \alpha^{\prime}, onto a single type \beta in B, provided that from B’s point of view the two types are indistinguishable, or more precisely, that for every token b in B, f^{\vee}(b) \vDash_{A} \alpha if and only if f^{\vee}(b) \vDash_{A} \alpha^{\prime}. Note that this requirement does not mean that those types are not distinguishable in A (or more technically, that they are co-extensional in A). There may be tokens in A outside the range of f^{\vee} for which, for example, a \vDash_{A} \alpha but not a \vDash_{A} \alpha^{\prime}. A dual observation may be made about tokens. Two tokens of B may be mapped onto the same token in A, provided that those tokens are indistinguishable with respect to the set of types \beta in B for which there exists some \alpha such that f^{\wedge}(\alpha) = \beta. Again, this does not mean these same tokens are wholly indistinguishable in B; there may be types outside the range of f^{\wedge} classifying them differently. Thus, an infomorphism may be thought of as a kind of view or filter into the other classification.
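The defining biconditional of Def 2 is mechanical enough to check by brute force over finite classifications. Here is a small illustrative sketch; the function and all names are mine, not the book’s.

```python
# Check the infomorphism property between two finite classifications,
# each given as a set of (token, type) pairs.
def is_infomorphism(f_up, f_down, rel_A, rel_B, typ_A, tok_B):
    """f_up : typ(A) -> typ(B) and f_down : tok(B) -> tok(A), as dicts.
    Checks: b |=_B f_up(alpha)  iff  f_down(b) |=_A alpha, for all alpha, b."""
    return all(
        ((b, f_up[alpha]) in rel_B) == ((f_down[b], alpha) in rel_A)
        for alpha in typ_A
        for b in tok_B
    )

# Toy example: B "sees" A through f.
rel_A = {("a1", "F")}
rel_B = {("b1", "G")}
assert is_infomorphism({"F": "G"}, {"b1": "a1", "b2": "a2"},
                       rel_A, rel_B, typ_A={"F"}, tok_B={"b1", "b2"})
```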

It is actually rather difficult to find infomorphisms between arbitrary classifications; in many cases there will be none. If it were too easy, the morphism would not be particularly meaningful; if too stringent, it would not be very applicable. However, two classifications may be joined in a fairly standard way. For example, we can add them together:

Def 3. Given two classifications A and B, the sum of A and B is the classification A+B such that:

1.      tok(A+B) = tok(A) \times tok(B),

2.     typ(A+B) is the disjoint union of typ(A) and typ(B), given by \langle 0,\alpha \rangle for each type \alpha \in typ(A) and \langle 1,\beta \rangle for each type \beta \in typ(B), and

3.      for each token \langle a,b\rangle \in tok(A+B), \langle a,b\rangle \vDash_{A+B} \langle 0,\alpha \rangle iff a \vDash_{A} \alpha, and \langle a,b\rangle \vDash_{A+B} \langle 1,\beta \rangle iff b \vDash_{B} \beta.
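A minimal sketch of the sum construction over finite classifications, following Def 3 directly; the function and names are my own illustration.

```python
# The sum A + B: tokens are pairs, types are tagged copies of the
# originals, and a pair is classified by exactly the types its
# components are classified by.
def sum_classification(tok_A, typ_A, rel_A, tok_B, typ_B, rel_B):
    tokens = {(a, b) for a in tok_A for b in tok_B}
    types = {(0, alpha) for alpha in typ_A} | {(1, beta) for beta in typ_B}
    rel = {((a, b), (0, alpha)) for (a, alpha) in rel_A for b in tok_B} | \
          {((a, b), (1, beta)) for (b, beta) in rel_B for a in tok_A}
    return tokens, types, rel

# Toy example with one token and one type on each side.
tokens, types, rel = sum_classification(
    {"a1"}, {"F"}, {("a1", "F")},
    {"b1"}, {"G"}, {("b1", "G")},
)
assert (("a1", "b1"), (0, "F")) in rel
assert (("a1", "b1"), (1, "G")) in rel
```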

Remark. For any two classifications A and B there exist infomorphisms \varepsilon_{A} : A \rightleftarrows A+B and \varepsilon_{B} : B \rightleftarrows A+B, defined such that \varepsilon_{A}^{\wedge}(\alpha) = \langle 0,\alpha \rangle and \varepsilon_{B}^{\wedge}(\beta) = \langle 1,\beta \rangle for all types \alpha \in typ(A) and \beta \in typ(B), and \varepsilon_{A}^{\vee}(\langle a,b\rangle) = a and \varepsilon_{B}^{\vee}(\langle a,b\rangle) = b for each token \langle a,b\rangle \in tok(A+B).

To see how this is useful, we turn now to Barwise and Seligman’s notion of an information channel.

Def 4. A channel C is an indexed family of infomorphisms \{ f_{i} : A_{i} \rightleftarrows C \}_{i \in I}, each having as codomain a common classification C, called the core of the channel.

As it turns out, in a result known as the Universal Mapping Property of Sums, given a binary channel \{ f : A \rightleftarrows C, g : B \rightleftarrows C \} and the infomorphisms \varepsilon_{A} : A \rightleftarrows A+B and \varepsilon_{B} : B \rightleftarrows A+B, there is a unique infomorphism h : A+B \rightleftarrows C such that f = h \circ \varepsilon_{A} and g = h \circ \varepsilon_{B}; that is, the corresponding diagram commutes.

The result is general and can be applied to arbitrary channels and sums.

I still haven’t exactly shown how this is useful. To do that we introduce some inference rules that can be used to reason from the periphery of a channel to its core and back again.

A sequent \langle \Gamma, \Delta \rangle is a pair of sets of types. A sequent \langle \Gamma, \Delta \rangle is a sequent of a classification A if all the types in \Gamma and \Delta are in typ(A).

Def 5. Given a classification A, a token a \in tok(A) is said to satisfy a sequent \langle \Gamma, \Delta \rangle of A if, whenever a \vDash_{A} \alpha for every type \alpha \in \Gamma, then a \vDash_{A} \alpha for some type \alpha \in \Delta. If every a \in tok(A) satisfies \langle \Gamma, \Delta \rangle, then we say that \Gamma entails \Delta in A, written \Gamma \vdash_{A} \Delta, and \langle \Gamma, \Delta \rangle is called a constraint of A.
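Entailment in a finite classification can be checked by brute force, reading the definition as: every token of every type in \Gamma must be of some type in \Delta. A small sketch, with names of my own choosing:

```python
# Does Gamma entail Delta in a classification given by a token set and a
# set of (token, type) pairs?
def entails(tokens, rel, gamma, delta):
    for a in tokens:
        if all((a, t) in rel for t in gamma):          # a is of every type in Gamma
            if not any((a, t) in rel for t in delta):  # ...but of no type in Delta
                return False
    return True

# Toy switches: every switch is On or Off, and On precludes Off.
tokens = {"s1", "s2"}
rel = {("s1", "On"), ("s2", "Off")}
assert entails(tokens, rel, set(), {"On", "Off"})  # every switch is On or Off
assert not entails(tokens, rel, {"On"}, {"Off"})   # On does not entail Off
```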

Barwise and Seligman introduce two inference rules, f-Intro and f-Elim, for moving sequents along an infomorphism f : A \rightleftarrows C. Here \Gamma^{f} denotes the image of \Gamma under f^{\wedge}, and \Gamma^{-f} its inverse image:

f-Intro: \frac{\Gamma^{-f} \vdash_{A} \Delta^{-f}}{\Gamma \vdash_{C} \Delta}

f-Elim: \frac{\Gamma^{f} \vdash_{C} \Delta^{f}}{\Gamma \vdash_{A} \Delta}

The two rules have different properties. f-Intro preserves validity but fails to preserve invalidity; f-Elim preserves invalidity but fails to preserve validity. f-Elim is, however, valid precisely for those tokens of A onto which some token of C is mapped by f^{\vee}.

Suppose then that we have a channel. At the core is a classification of flashlights, and at the periphery are classifications of bulbs and switches. We can take the sum of the classifications of bulbs and switches. We know that there are infomorphisms from these classifications to the sum (so this too makes up a channel), and by f-Intro, any sequents of the classifications of bulbs and switches will still hold in the sum classification bulbs + switches. But note that because the sum classification connects every bulb token with every switch token, sequents that properly hold between particular bulbs and switches will not hold in it. Similarly, all the sequents holding in the classification bulbs + switches will hold in the core of the flashlight channel. However, there will be constraints in the core (namely those holding between bulbs and switches) that do not hold in the sum classification bulbs + switches.

In brief: suppose that we know that a particular switch is in the On position, and that it is a constraint of switches that a switch’s being in the On position precludes its being in the Off position. We can project this constraint into the core of the flashlight channel reliably. But in the channel, additional constraints hold (the ones we are interested in). Suppose that in the core of the channel there is a constraint that if a switch is On in a flashlight, then the bulb is Lit in that flashlight. We would like to know that because *this* switch is in the On position, a particular bulb will be Lit. How can we do it? Using f-Elim we can pull the constraint of the core back to the sum classification. But note that this constraint is *not valid* in the sum classification: it fails precisely for those bulbs and switches that are not connected in the channel. In this way, we can reason from local information to a distant component of a system, but in so doing we lose the guarantee that our reasoning is valid, and hence the guarantee that our conclusions are sound.
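A toy version of this flashlight reasoning can be worked out concretely. In the sketch below (all tokens, types, and names are my own illustration), the core constraint ‘switch On entails bulb Lit’, pulled back by f-Elim, fails somewhere in the sum bulbs + switches, but holds on every pair that is actually connected inside a flashlight:

```python
# Tokens of the sum bulbs + switches are (bulb, switch) pairs; only some
# pairs are realized by an actual flashlight in the channel.
bulb_rel = {("b1", "Lit")}                      # b1 is lit, b2 is dark
switch_rel = {("s1", "On"), ("s3", "On")}       # s2 is off; s3 sits in no flashlight
pairs = {(b, s) for b in {"b1", "b2"} for s in {"s1", "s2", "s3"}}
connected = {("b1", "s1"), ("b2", "s2")}        # pairs realized by a flashlight

def violates(pair):
    """Does this pair violate the pulled-back constraint 'On entails Lit'?"""
    b, s = pair
    return (s, "On") in switch_rel and (b, "Lit") not in bulb_rel

# The pulled-back constraint is NOT valid in the sum as a whole...
assert any(violates(p) for p in pairs)          # e.g. the pair (b2, s3)
# ...but it IS valid on every pair connected in the channel.
assert not any(violates(p) for p in connected)
```

The unconnected pair (b2, s3) is exactly the kind of token for which f-Elim gives no guarantee: s3 is On, but since it belongs to no flashlight, nothing obliges any bulb to be Lit.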

[1] Barwise, Jon, and Jerry Seligman. 1997. Information Flow: The Logic of Distributed Systems. Cambridge tracts in theoretical computer science 44. Cambridge: Cambridge University Press.

May 4, 2011

Have Hammer Need Nail

by Jacob Lee

In the past month or two here on Dead Voles the notions of instance and of type have come up several times (not always in the same place). I have become more keenly aware of this distinction, particularly in certain discussions of cultural anthropology, but also endemically in discussions of programming. More precisely, I have become more keenly aware of how often we slip between talking about the world in the language of types and in the language of tokens, without really being aware that we are doing it, and how difficult and fruitful it can be to discipline ourselves to maintain the distinction, especially when we are trying to analyze the social world.

The history of situation theory’s struggle to arrive at an adequate notion of information flow is perhaps a testament to the tendency to neglect one or the other. In particular, situation theory was introduced with just such a distinction in mind, with a division between situations, as concrete parts of the world, and infons as items of information (or types). And yet, for some quite defensible reasons, situation theorists chose to model situations as the sets of infons made factual by that situation, treating two situations as being identical (i.e., the same situation) whenever they supported precisely the same information. This move reintroduced an ambiguity between tokens and types so that it becomes difficult sometimes to know whether situation theorists are talking about infons or the concrete situations themselves.

But it may also be evident in how we go about interpreting human artifacts in terms of some presumed system of meaning and ignore the brute actuality of the artifact itself (which is why a sacred object can still be used as a paper-weight). It is not enough, as John McCreery tells us, to look for meanings behind the objects; instead we may well ask, why do the gods look like that?

I have already mentioned the theory of information flow (called channel theory) of Jon Barwise and Jerry Seligman in their book [1]. Here I would like to briefly introduce two of its main concepts, since not only does it take the distinction between tokens and types as fundamental, but it provides an interesting model of the flow of information.

It is also the hammer, with which I have been looking for a nail.

Let us first define a sort of data structure, that in some ways is not very remarkable. It is merely a kind of attribute table, and is somewhat similar to a formal context in formal concept analysis discussed here. The structure consists of a set of tokens, a set of types classifying those tokens, and a binary classification relation between them.

Def 1. A classification A is a triple A = \langle tok(A), typ(A), : \rangle such that for every token a \in tok(A), and every type \alpha\in typ(A), a:\alpha  if and only if  a is of type \alpha.
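For concreteness, a classification is easy to render in code. The class below is my own illustrative sketch, with invented names, not anything from the book:

```python
# An illustrative sketch (invented names, not from the book): a classification
# as a set of tokens, a set of types, and a classification relation.
class Classification:
    def __init__(self, tokens, types, relation):
        self.tokens = set(tokens)
        self.types = set(types)
        self.relation = set(relation)   # set of (token, type) pairs: a : alpha

    def classifies(self, token, typ):
        """True iff the token is of the given type (a : alpha)."""
        return (token, typ) in self.relation

# A tiny classification of switches by their positions.
switches = Classification(
    tokens={"s1", "s2"},
    types={"On", "Off"},
    relation={("s1", "On"), ("s2", "Off")},
)
print(switches.classifies("s1", "On"))   # True
print(switches.classifies("s2", "On"))   # False
```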

The classification distinguishes itself from other similar data structures (and relations in general) by making both types and tokens first class objects of the theory. This allows an interesting morphism between classifications, called an infomorphism (also called a Chu morphism), which we define presently:

Def 2. Let A and B be two classifications. An infomorphism f : A \rightleftarrows B is a pair of contravariant maps f = \lbrace f^{\wedge}, f^{\vee} \rbrace such that f ^{\wedge} : typ(A) \rightarrow typ(B) and f^{\vee}: tok(B) \rightarrow tok(A), satisfying the fundamental property that for every type \alpha in A and every token b in B, b : f^{\wedge}(\alpha) if and only if f^{\vee}(b) : \alpha.
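The fundamental property is mechanically checkable for finite classifications. A hedged sketch, with all names and the toy example invented by me:

```python
# A hedged sketch (invented names): verify the fundamental property of an
# infomorphism f : A <-> B, where f_up maps typ(A) -> typ(B) and f_down
# maps tok(B) -> tok(A).
def is_infomorphism(A, B, f_up, f_down):
    """A and B are (tokens, types, relation) triples; a relation is a set of
    (token, type) pairs. Checks: b : f_up(alpha) iff f_down(b) : alpha."""
    _, typ_A, rel_A = A
    tok_B, _, rel_B = B
    return all(
        ((b, f_up[alpha]) in rel_B) == ((f_down[b], alpha) in rel_A)
        for alpha in typ_A
        for b in tok_B
    )

# Toy example: A classifies switches; B classifies whole flashlights.
A = ({"s1"}, {"On"}, {("s1", "On")})
B = ({"f1"}, {"SwitchOn"}, {("f1", "SwitchOn")})
f_up = {"On": "SwitchOn"}   # types flow from A to B
f_down = {"f1": "s1"}       # tokens flow from B to A
print(is_infomorphism(A, B, f_up, f_down))  # True
```

Note the contravariance: types are pushed forward while tokens are pulled back, which is exactly what the biconditional ties together.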

The infomorphism defines a curious part-whole relationship that can be used to represent a number of interesting relationships, for example, between points of view or perspectives, between map and terrain, between the parts of distributed systems, and between concepts.

An elaboration must wait for a future post, since I have run out of time.

[1] Barwise, Jon, and Jerry Seligman. 1997. Information Flow: The Logic of Distributed Systems. Cambridge tracts in theoretical computer science 44. Cambridge: Cambridge University Press.

April 27, 2011

skyhooks of the amazons

by Jacob Lee

One of the interesting things about the modern human environment is the extent to which autonomous processes and artificial intelligent agents of various kinds (and intelligences) not only figure in determining the situations in which we navigate, but figure in determining the situations in which *they* (the artificial intelligent agents) navigate as well. For example, many retailers use automated pricing bots on sites like Amazon. Frequently these bots base pricing judgments upon the prices of similar items being sold by their competitors. As might be expected, this can lead to various interactions between bots as they adjust to changes in other retailers' prices. Sometimes the result can be amusing, even fascinating, as blogger Michael Eisen relates in his investigation of two absurdly priced books at Amazon daily ratcheting up in price:

What’s fascinating about all this is both the seemingly endless possibilities for both chaos and mischief. It seems impossible that we stumbled onto the only example of this kind of upward pricing spiral – all it took were two sellers adjusting their prices in response to each other by factors whose products were greater than 1. And while it might have been more difficult to deconstruct, one can easily see how even more bizarre things could happen when more than two sellers are in the game.


February 15, 2011

Some information flows in the Samoan fono

by Jacob Lee
Idealized spatial organization of the fono


The fono, politics and space

A fono is a political meeting in a Western Samoan village. The fono is a spatially organized event: high-ranking orators are seated in the front, low-ranking orators and other low-status and low-rank persons in the back. High-status chiefs and special guests are seated on the sides (tala).

A person’s location with respect to these three areas during an event may signal a variety of informational contents, some of them stereotypical and some of them sensitive to the particular situational context of the event. Within each area, the position of individuals can also signal various informational contents, particularly as relating to status. The precise boundaries between these three areas may not be well defined, and seating in an ambiguously defined location may itself convey interesting information.

What enables these sorts of information to be conveyed?

Cognitive Schema of the fono

Alessandro Duranti answers this by positing a cultural cognitive schema defining an idealized fono seating arrangement spatially organized in terms of various socially relevant classes of persons:

By matching the ideal plan for a particular occasion with the actual titleholders who occupied various positions in the house, one could obtain a first reading of the political situation and make a few predictions about the way in which the discussion might unfold. Thus, according to the kind of fono that was being held, a particular set of orators would be expected to sit in the front row. In such a system, every slight variation from what is considered the ideal plan is potentially significant. For this reason, as suggested above, the ideal plan acts as a cognitive schema that provides a key for the participants to interpret the contingencies of the day. The relationship between the ideal seating arrangement and the actual one gives a first approximation of the potential conflicts, tensions, and issues of the day. (Duranti p. 65)

This schema indicates various sorts of stereotypical information. If a person p is seated in the front F then one conventional item of informational content indicated by this fact is that p is a high-ranking orator.  Other informational contents are indicated too, of course. More interesting are occasions when an actual fono deviates (or is believed to deviate) from this schema. Subtle variations in seating arrangement, in conjunction with knowledge of the political situation, can indicate various interesting sorts of information. Throughout Duranti’s analysis, he draws us to the subtle political dimensions of the relative positions taken up in the political theater of the fono:

Samoans are in this respect true masters of spatial finesse, as demonstrated by the position occupied by the matai (JL: chief) who shares the title with Savea Sione, namely Savea Savelio. He sits in a position that is similar to Savea Sione’s but slightly “farther back.” This he explained to me as a sign of restraint: He should not take a foregrounded role in the fono proceedings given that the actions of the one we might call his alter ego, Savea Sione, were under severe scrutiny by members of the assembly. (Duranti p. 68)

and how an intimate understanding of this space provides informational cues for contextually relevant political positioning and interaction.

An understanding of the locally engendered meaning of the seating arrangement for the day suggests that Moe’ono, as well as the other matai in the fono house, had ways of expecting, ahead of time, Tafili’s attack and her role at the meeting. If she is present and has chosen to sit in the front row, the place reserved for the more active members of the assembly, everyone knows that Tafili has come ready to speak and, most likely, to argue. Thus, even before a word is exchanged, Tafili’s spatial claim provided Moe’ono with clues about the forthcoming discussion and gave him some time to prepare himself for it. In this case, the regionalization of the interactional space available to participants can communicate just as much as words. (Duranti p. 72)

Information Flows

Jon Barwise and Jerry Seligman argue that information flow crucially depends on regularities within distributed systems. Such information flows are present, but they will not necessarily be available to any particular cognitive agent nearby. Such agents must be attuned to those regularities to translate information about the occurrence of an event of one type into information about the occurrence of an event (possibly the same) of another type. This becomes particularly evident when one is displaced into natural and cultural environments outside one's own experience and knowledge. One might not know, for example, that an increase in the number of insects indicates water nearby; a certain discoloration of the skin may indicate that a patient has a certain inflammatory skin disease, but only to a person attuned to the constraints between this kind of skin discoloration and the presence of that particular disease. Duranti describes his first encounter with the fono:

The first time I entered the fono house, I only saw people sitting around the edges of the house and noticed that some portions were unoccupied whereas other portions seemed crammed with people. (Duranti p. 64)

It was only after mapping many different fono events, and matching seating with titles and other relevant information, that Duranti was able to appreciate how much information about the political events of the day was present in the seating locations of its participants. Standing back from this, we must recognize that every participant in a fono is thus situated, having their own information, being attuned to some constraints active in the situation and not others, and so on. This is particularly easy to see in the occasion on which Duranti intentionally seated himself in a low-status position in the fono, when as a guest he was usually accorded a high-status position. In this case Duranti sat in the back of the fono (a low-status position) with low-status men and women. He was served food last by young servers, and didn't get any fish, until one of the high-status chiefs noticed and directed the servers to bring some of his fish to Duranti:

My experiment was over. I had been able to show the relevance of the locally defined spatial distinctions (front vs. back region) for establishing the status of a participant in a public event. At the same time, I had proven to myself that the system was flexible. Different “parts,” namely the servers versus the matai, or the kids versus my adult friends, were acting on different premises. For the kids who brought in the trays with the food and for the untitled adults who were preparing the portions, it was safer to follow the basic spatial distinctions. They had no way of knowing the details about who was doing what on a particular occasion. The spatial arrangement in the house constituted a first key to know how to operate with a minimum assurance of appropriateness…In most cases, the seating plan works very efficiently to convey a first sense of order. Whether or not that order conforms to the relative statuses of the participants as displayed on other occasions is not something that low-status people must be concerned with. Their socialization teaches them that any hierarchy must adapt to contingencies, must fit their task…It is up to the more knowledgeable members in the gathering to complement or rectify the reading provided by the bare layout of the human bodies in space…The distribution of knowledge about how to act on any given situation is thus functional to the distribution of power within the community. On the one hand, the lower-status people act on more general and hence more easily amendable models, that is, models that need additional information in order to operate appropriately. Higher-status people, on the other hand, not only have access to more specific information about the nature of the activity and the expected and expectable actions, they also control this more specific knowledge by putting it to use when they choose to do so. (Duranti p.59-60)

The fono is a distributed system, with many different parts. Information about one part of the fono can give us information about other parts. But the information flows in this distributed system are relative to the cognitive schemas by which the fono is conventionally understood by each of the participants. The information flows to which any participant is attuned depend on their assessments of how others are attuned to information flows in the event. Duranti is able to assess the reasons for his not receiving any fish from the servers because his position in the fono conventionally indicated a low rank and status, and because the servers were not sensitive to other information relevant to interpreting Duranti's behavior in any other way*.

Seeing Duranti's choice of seating as an exception to the idealized arrangement presumes not only knowledge of the idealized arrangement, but crucially requires additional information that is inconsistent with it: in this case, the fact that on other such occasions Duranti had been seated toward the front. Like Duranti when he first began to participate in the fono, any stranger encountering the scene for the first time, especially one not familiar with the dynamics of agency and signification in the fono, would not know 'what was going on'. That Duranti was unusually seated likely would not have impressed itself on such an observer as a fact worth pondering further. But the intelligent and culturally and situationally literate observer would have seen Duranti's position as unusual, and possibly interesting, if noticed. The seating of any actual fono is set against the idealized arrangement given by the cognitive schema; the difference between them is a scaffold for signification.

Since deviation from the ideal is not infrequent, and is often interpreted this way or that, we might very well suppose that some deviations from the ideal are in fact conventional and well understood. Others may be less well understood. When Savea Savelio sat slightly back from Savea Sione, Savea Savelio, and presumably many of the other participants, understood why, though Duranti may not have seen the reason until later. Yet, while there may have been some conventional interpretation of Duranti's sitting in the back, it is safe to say that that interpretation misunderstood Duranti's unconventional objective. We may well doubt that anyone besides Duranti, or anyone he let in on it beforehand, correctly understood that his sitting in the back was a behavioral experiment.

*It is also possible that the servers speculated on the reasons, but did not presume to act on those speculations.


A. Duranti, From Grammar to Politics: Linguistic Anthropology in a Western Samoan Village, Berkeley: University of California Press, 1994.

January 24, 2011

Lattice Model of Information Flow

by Jacob Lee

I am caught in a maelstrom of work and so I decide to play.

I have an excellent textbook on discrete mathematics on my shelf from a course I took as a student a few years ago [1]. It's always useful to review such books to remind oneself of certain foundational principles used in computer science [2]. My thesis work concerns, among other things, the study of information flow, and in the course of my work I found myself consulting this book to review the mathematical concept of a lattice [3]. Looking through the index of this text I found an entry reading 'Information flow, lattice model of, 525'. Naturally, I was intrigued.

Funnily enough, the three-paragraph section on the lattice model of information flow is only of tangential relevance to my thesis work; yet it was interesting enough. It discusses the use of lattices to model security policies of information dissemination. Rosen presents a simple model of a multi-level security policy in which a collection of data (the author uses the word information) is assigned an authority level A and a set of categories C. The security class of a collection of data is modeled as the pair (A,C). Rosen defines a partial order on security classes as follows: (A_{1},C_{1})\preceq (A_{2},C_{2}) if and only if A_{1} \leq A_{2} and C_{1} \subseteq C_{2}. This is easily illustrated by an example.

Let A = \{A_{1}, A_{2}\} where A_{1} \leq A_{2} and A_{1} is the authority level secret and A_{2} is the authority level top secret. Let C=\{diplomacy, combat\ ops\} [4][5]. This forms the lattice depicted in figure 1.

Figure 1: example security classification lattice

The objective of such a security policy is to govern flows of sensitive information. Thus, if we assign individuals security clearances in the same way that information is assigned security classes, then we can set up a policy such that an item of information i assigned a security class (A_{1},C_{1}) may be disseminated to an individual a having security clearance (A_{2},C_{2}) if and only if (A_{1},C_{1})\preceq (A_{2},C_{2}).
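The dominance check itself is a one-liner once a security class is represented as a pair of an authority level and a set of categories. A sketch, using the example levels above (variable names are my own):

```python
# Sketch of the dominance order on security classes: a class is represented
# as (authority_level, set_of_categories). Names are illustrative.
LEVELS = {"secret": 1, "top secret": 2}

def dominates(c2, c1):
    """(A1, C1) <= (A2, C2) iff A1 <= A2 and C1 is a subset of C2."""
    a1, cats1 = c1
    a2, cats2 = c2
    return LEVELS[a1] <= LEVELS[a2] and cats1 <= cats2

item = ("secret", {"diplomacy"})
clearance = ("top secret", {"diplomacy", "combat ops"})
print(dominates(clearance, item))   # True: the item may flow to this person
print(dominates(item, clearance))   # False: the reverse flow is forbidden
```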

Without looking at the literature [6], it seems that the obvious next step is to embed this into a network model. Supposing that one has a network model in which each node is classified by a security clearance, there are a variety of useful and potentially interesting questions that can be asked. For example, one might want to look for connected components where every node in the component has a security clearance (A,C) such that (A,C)\succeq (A_{j},C_{k}) for some j and k. Or one might simulate the propagation of information in that social network, such that the probability of a node communicating certain security classes of information to another node is a function of the security class of the information and the security clearances of those two nodes.
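A deterministic simplification of that second idea can be sketched directly: a breadth-first search that lets an item spread along network edges only to nodes whose clearance dominates the item's class. Everything here (graph, names, clearances) is invented for illustration:

```python
# Hypothetical network embedding (all names invented): breadth-first search
# from a source node, where an item crosses an edge only if the receiving
# node's clearance dominates the item's security class.
from collections import deque

LEVELS = {"secret": 1, "top secret": 2}

def dominates(clearance, cls):
    a1, cats1 = cls
    a2, cats2 = clearance
    return LEVELS[a1] <= LEVELS[a2] and cats1 <= cats2

def reachable(graph, clearances, source, item_class):
    """All nodes the item can ever reach from the source."""
    seen, queue = {source}, deque([source])
    while queue:
        for nbr in graph[queue.popleft()]:
            if nbr not in seen and dominates(clearances[nbr], item_class):
                seen.add(nbr)
                queue.append(nbr)
    return seen

graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
clearances = {
    "a": ("top secret", {"diplomacy"}),
    "b": ("top secret", {"diplomacy"}),
    "c": ("secret", set()),            # lacks the diplomacy category
    "d": ("secret", {"diplomacy"}),
}
print(sorted(reachable(graph, clearances, "a", ("secret", {"diplomacy"}))))
# ['a', 'b', 'd']  -- node c is skipped because its clearance lacks the category
```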

So far this discussion has limited itself to information flow as dissemination of information vehicles, contrary to the direction I suggested in my last post should be pursued. One easy remedy might be to have minimally cognitive nodes with knowledge bases and primitive inference rules by which new knowledge can be inferred from existing or newly received items of information. This would have several consequences. Relevant items of novel information might disseminate through the network (and global knowledge grows), and items of information not originally disseminated, for example because it is top secret, may yet be guessed or inferred from existing information by nodes with security clearances too low to have received it normally.

Moving away from issues of security policy, we can generalize this to classify nodes in social networks in other systematic ways. In particular, we may be interested in epistemic communities. We might classify beliefs and/or knowledge using formal tools like formal concept analysis, as I believe Camille Roth has been doing (e.g. see his paper Towards concise representation for taxonomies of epistemic communities).

Fun stuff.

[1] Rosen, Kenneth H. Discrete mathematics and its applications. 5th edition. McGraw Hill. 2003.

[2] Some undergraduates joked that if they mastered everything in Rosen’s book, they would pretty much have mastered the foundations of computer science. An exaggeration, but not far off.

[3] A lattice is a partially ordered set (poset) such that for any pair of elements of that set there exists a least upper bound and a greatest lower bound.

[4] According to Wikipedia, the US uses classification categories like the following:

1.4(a) military plans, weapons systems, or operations;

1.4(b) foreign government information;

1.4(c) intelligence activities, sources, or methods, or cryptology;

1.4(d) foreign relations or foreign activities of the United States, including confidential sources;

1.4(e) scientific, technological or economic matters relating to national security; which includes defense against transnational terrorism;

1.4(f) USG programs for safeguarding nuclear materials or facilities;

1.4(g) vulnerabilities or capabilities of systems, installations, infrastructures, projects or plans, or protection services relating to the national security, which includes defense against transnational terrorism; and

1.4(h) weapons of mass destruction.

[5] An interesting category of information is information about who has what security clearance.

[6] Where fun and often good ideas go to die.

December 15, 2010

Diffusion of Informational Contents

by Jacob Lee

Research into diffusion processes permeates disciplines as diverse as computer science, anthropology, sociology, economics, epidemiology, chemistry, and physics[1]. Much recent work, in the last fifty years or so, has been explicitly network oriented and has sought to better understand how network topology and transmission mechanisms determine properties such as the rate of diffusion and the various thresholds at which diffusion processes become self-sustaining.

Despite the many apparent similarities between different diffusion processes, it is important to be attentive to the particulars of each kind of diffusion process. Commercial products diffuse among consumers differently than do news articles or news topics in the blogosphere (see Dynamics of the News Cycle). Behaviors like smoking spread across networks of friends in ways both similar to and contrasting with those of sexually transmitted diseases. And routing information in a sensor network propagates differently than routing information in a mobile phone network.

Diffusion of Semantic Content

The diffusion of information, or more generally semantic content, has been a cross-disciplinary concern, and has been treated in a variety of ways depending on the domain of application. It is generally recognized that such content exhibits properties that distinguish its diffusion from that of other phenomena. For example, it is recognized that the sharing of semantic content, unlike commodities, does not necessarily incur a consequent loss of that content for the sharer, and that information is often shared preferentially with those for whom it may be of interest or desired[2].

Nonetheless, in more general settings the implications of the properties of semantic content for its diffusion have, to my knowledge, not yet been formally investigated. Content is typically treated as a non-relational item whose diffusion mechanism is essentially content-neutral, except perhaps in its differential transmissibility or mutability. Furthermore, such work frequently restricts its models to the diffusion of isolated pieces of content in an otherwise content-less context. Consequently, it confounds the diffusion of content vehicles with the diffusion of the semantic content itself, treating content vehicles as having an intrinsic meaning or significance.

It is true that the transmission of content vehicles is easier to understand, and that this simple approach probably does a fair job of approximating the diffusion of semantic content at a unit of content or level of abstraction at which the applicability of a more rigorous approach may not be either readily apparent or especially necessary. Yet it has the unfortunate effect of potentially blinding us to the way in which the relation between contents and cognition (or computation) can generate a second, more leaky, means of content diffusion, or can inhibit or transform content. For example, it is entirely possible for multiple agents in a network to independently infer the same piece of information without that piece of information ever having been explicitly communicated to them. It is also, I might add, entirely possible for two agents to receive the same communications and infer entirely different things, or fail to interpret the message correctly, as anyone who has had to collaborate with others by email can readily attest.
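A minimal toy model of the first point (facts and rules entirely invented): give each agent a knowledge base and a forward-chaining inference step, and an agent can come to possess content that was never transmitted to it.

```python
# Toy model (facts and rules invented): agents with knowledge bases and
# forward-chaining inference, showing content arising without transmission.
def close_under_rules(facts, rules):
    """Repeatedly apply (premises, conclusion) rules until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"rain", "no_umbrella"}, "wet")]
agent1 = close_under_rules({"rain", "no_umbrella"}, rules)   # received both messages
agent2 = close_under_rules({"rain"}, rules)                  # received only one

print("wet" in agent1)  # True: inferred, though never communicated
print("wet" in agent2)  # False: same rule base, different prior information
```

The same mechanism also illustrates the leakiness mentioned above: "wet" could diffuse onward from agent1 even though no one ever sent it as a message.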

[1] See my recent blog post Processes of Diffusion in Networks

[2] Unlike disease, which neither the giver nor the receiver desires!

November 27, 2010

my first dead vole: synchrony, structure, snow.

by Jacob Lee

I want to first thank Carl, John, and Asher for giving me the opportunity to join Dead Voles as its fourth member! I am looking forward to a future of exciting exchanges and world domination! I have been a little shy about starting, a funny feeling in someone with a relatively visible amateur online presence. What should I write about? What if everyone thinks it's boring? What if it's just a dead vole? Oh wait…

Lately I have been working on a little problem. I wanted some way to say that two configurations in some space are structurally identical but in a way which abstracts from any particular way that space happens to be structured. I do not say independent of how that space is structured, since the underlying structure of a space is what gives any configuration of locations its form.*

Anyway, while I was pondering these things, and doing a little background googling, I came across something completely different: the fascinating work by Sang-Hun Lee and Randolph Blake on how the perception of spatial structure can be induced by temporal synchrony.

Picture an old black and white analog television filled with static, what is often described as ‘snow’. Each pixel on the screen changes its luminescence by some random amount at random intervals. It is unstructured chaos. Such a sequence cannot be compressed without loss, because it does not have any regular structures.

Temporal Synchrony and Random Luminescence


Imagine synchronizing a block of pixels' random changes in luminescence, and voilà! You see the block of pixels as a distinct visual object! It must be emphasized that the synchronized changes happen at random intervals, and each pixel's change in luminescence in the synchronized block was individually random. You can see an animation here.
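The stimulus is easy to sketch in code. Below is my rough reconstruction, with parameters chosen arbitrarily rather than taken from Lee and Blake's paper: every pixel takes random-sized luminance steps, but the pixels in the "object" block all step at the same random instants.

```python
# Rough reconstruction of the stimulus (parameters are my own choices, not
# Lee and Blake's): each pixel takes random-sized luminance steps, but
# pixels in the "object" block all step at the same random instants.
import random

random.seed(0)
STEPS = 200

# One shared, randomly chosen set of change instants for the block.
block_times = {t for t in range(STEPS) if random.random() < 0.2}

def pixel_series(change_times):
    """Luminance trace: a random-sized step at each change instant."""
    lum, trace = 0.0, []
    for t in range(STEPS):
        if t in change_times:
            lum += random.gauss(0, 1)   # step sizes stay individually random
        trace.append(lum)
    return trace

# Four synchronized pixels (shared instants) vs. four unsynchronized ones.
in_block = [pixel_series(block_times) for _ in range(4)]
background = [
    pixel_series({t for t in range(STEPS) if random.random() < 0.2})
    for _ in range(4)
]
```

What the block's pixels share is only *when* they change, never *by how much*; that shared timing is what the visual system evidently picks up as a coherent object.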

What Lee and Blake actually did was a little more complicated and involved than I have described here. Check it out (PDF).

It reminds me of the fact that, staring at that old television set filled with static, if you try you can see any shape or scene that you desire. Perhaps the brain is selectively picking up random synchronizations in the visual field. Lookie lookie! It's a couple dancing! Lookie lookie! It's a coyote chasing a rabbit! Lookie lookie, it's Jacob's first dead vole!

*We might specify that two spatial configurations are equivalent if there is some kind of structure-preserving map between them: in particular, you define a set of invariant transformations of the space. Which of these aspects need to be invariant depends on the space at hand. For example, a translation of a geometric figure, like a triangle, in a Cartesian plane preserves its shape, size, and orientation. Jerry Seligman uses a similar approach to derive the equivalence class of situations in his paper Physical Situations and Information Flow.


Seligman, Jerry. 1991. Physical situations and information flow. In Situation theory and its applications, ed. Jon Barwise, Jean Mark Gawron, Gordon Plotkin, and Syun Tutiya, 2:257-292. CSLI Lecture Notes 26. Stanford, CA, USA: Center for the Study of Language and Information (CSLI).