In the past month or two here on Dead Voles the notions of instance and of type have come up several times (not always in the same place). I have become more keenly aware of this distinction, particularly in certain discussions of cultural anthropology, but also endemically in discussions of programming. More precisely, I have become more keenly aware of how often we slip between talking about the world in the language of types and the language of tokens without really being aware that we are doing it, and of how difficult and fruitful it can be to discipline ourselves to maintain the distinction, especially when we are trying to analyze the social world.
The history of situation theory’s struggle to arrive at an adequate notion of information flow is perhaps a testament to the tendency to neglect one or the other. Situation theory was introduced with just such a distinction in mind: a division between situations, as concrete parts of the world, and infons, as items of information (or types). And yet, for some quite defensible reasons, situation theorists chose to model situations as the sets of infons made factual by them, treating two situations as identical (i.e., the same situation) whenever they supported precisely the same information. This move reintroduced an ambiguity between tokens and types, so that it sometimes becomes difficult to know whether situation theorists are talking about infons or about the concrete situations themselves.
But it may also be evident in how we go about interpreting human artifacts in terms of some presumed system of meaning while ignoring the brute actuality of the artifact itself (which is why a sacred object can still be used as a paperweight). It is not enough, as John McCreery tells us, to look for meanings behind the objects; instead we may well ask, why do the gods look like that?
I have already mentioned the theory of information flow (called channel theory) of Jon Barwise and Jeremy Seligman in their book Information Flow: The Logic of Distributed Systems. Here I would like to briefly introduce two of its main concepts, since not only does it take the distinction between tokens and types as fundamental, but it also provides an interesting model of the flow of information.
It is also the hammer with which I have been looking for a nail.
Let us first define a sort of data structure that is, in some ways, not very remarkable. It is merely a kind of attribute table, somewhat similar to a formal context in the formal concept analysis discussed here. The structure consists of a set of tokens, a set of types classifying those tokens, and a binary classification relation between them.
Def 1. A classification A is a triple ⟨tok(A), typ(A), ⊨_A⟩, where tok(A) is a set of tokens, typ(A) is a set of types, and ⊨_A is a binary relation between them, such that for every token a ∈ tok(A) and every type α ∈ typ(A), a ⊨_A α if and only if a is of type α.
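As a data structure, a classification really is as plain as advertised. Here is a minimal sketch in Python; the class name, field names, and the playing-card example are my own invention, not anything from Barwise and Seligman:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Classification:
    """A set of tokens, a set of types, and a relation saying
    which token is classified by which type."""
    tokens: frozenset
    types: frozenset
    valid: frozenset  # the classification relation, as (token, type) pairs

    def classifies(self, token, typ):
        """True iff the given token is of the given type."""
        return (token, typ) in self.valid

# A toy classification of playing cards by color.
cards = Classification(
    tokens=frozenset({"ace_of_spades", "queen_of_hearts"}),
    types=frozenset({"red", "black"}),
    valid=frozenset({("ace_of_spades", "black"), ("queen_of_hearts", "red")}),
)
```

Note that the tokens and the relation are carried around explicitly; the classification is not reduced to its types, which is exactly the point.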
The classification distinguishes itself from other similar data structures (and relations in general) by making both types and tokens first-class objects of the theory. This allows an interesting morphism between classifications, called an infomorphism (also called a Chu morphism), which we define presently:
Def 2. Let A and B be two classifications. An infomorphism f : A ⇄ B is a pair ⟨f↑, f↓⟩ of contravariant maps such that f↑ : typ(A) → typ(B) and f↓ : tok(B) → tok(A), satisfying the fundamental property that for every type α in typ(A) and every token b in tok(B), f↓(b) ⊨_A α if and only if b ⊨_B f↑(α).
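The fundamental property is easy to check mechanically. In this sketch a classification is just a triple of token set, type set, and relation; `f_up` acts on the types of the first classification and `f_down` on the tokens of the second, running in opposite directions as the definition requires. All names and the age-group example are mine, for illustration only:

```python
# A classification here is a triple (tokens, types, valid), where valid is a
# set of (token, type) pairs.

def is_infomorphism(A, B, f_up, f_down):
    """Check the fundamental property of an infomorphism from A to B:
    f_down(b) is of type alpha in A  iff  b is of type f_up(alpha) in B."""
    tokens_A, types_A, valid_A = A
    tokens_B, types_B, valid_B = B
    return all(
        ((f_down[b], alpha) in valid_A) == ((b, f_up[alpha]) in valid_B)
        for alpha in types_A
        for b in tokens_B
    )

# A classifies people coarsely by age group; B classifies the same people
# by exact age. The identity on tokens plus the obvious type map works.
A = ({"ann", "bob"}, {"adult", "minor"}, {("ann", "adult"), ("bob", "minor")})
B = ({"ann", "bob"}, {"age_40", "age_10"}, {("ann", "age_40"), ("bob", "age_10")})
f_up = {"adult": "age_40", "minor": "age_10"}   # types of A -> types of B
f_down = {"ann": "ann", "bob": "bob"}           # tokens of B -> tokens of A
```

Swapping the type map (sending "adult" to "age_10") breaks the biconditional, and the check duly fails, which is a small illustration of how tightly the two contravariant maps are coupled.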
The infomorphism defines a curious part-whole relationship that can be used to represent a number of interesting relationships, for example, between points of view or perspectives, between map and terrain, between the parts of distributed systems, and between concepts.
An elaboration must wait for a future post, since I have run out of time.
Barwise, Jon, and Jerry Seligman. 1997. Information Flow: The Logic of Distributed Systems. Cambridge Tracts in Theoretical Computer Science 44. Cambridge: Cambridge University Press.