Lattice Model of Information Flow

by Jacob Lee

I am caught in a maelstrom of work and so I decide to play.

I have an excellent textbook on discrete mathematics on my shelf from a course I took as a student a few years ago [1]. It's always useful to review such books to remind oneself of certain foundational principles used in computer science [2]. My thesis work concerns, among other things, the study of information flow, and in the course of my work I found myself consulting this book to review the mathematical concept of a lattice [3]. Looking through the index of this text I found an entry reading ‘Information flow, lattice model of, 525’. Naturally, I was intrigued.

Funnily enough, the three-paragraph section on the lattice model of information flow is only of tangential relevance to my thesis work; yet it was interesting enough. It discussed the use of lattices to model security policies for information dissemination. Rosen presented a simple model of a multi-level security policy in which a collection of data (Rosen uses the word information) is assigned an authority level A and a category C. The security class of a collection of data is modeled as the pair (A,C). Rosen defines a partial order on security classes as follows: (A_{1},C_{1})\preceq (A_{2},C_{2}) if and only if A_{1} \leq A_{2} and C_{1} \subseteq C_{2}. This is easily illustrated by an example.

Let A = \{A_{1}, A_{2}\} where A_{1} \leq A_{2}, A_{1} being the authority level secret and A_{2} the authority level top secret. Let the set of categories be C = \{diplomacy, combat ops\} [4][5], with the category component of a security class being a subset of C. This forms the lattice depicted in figure 1.

Figure 1: example security classification lattice

The objective of such a security policy is to govern flows of sensitive information. Thus, if we assign individuals security clearances in the same way that information is assigned security classes, then we can set up a policy such that an item of information i assigned a security class (A_{1},C_{1}) may be disseminated to an individual a having security clearance (A_{2},C_{2}) if and only if (A_{1},C_{1})\preceq (A_{2},C_{2}).
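As a concrete sketch, the security classes and the dissemination rule can be written in a few lines of Python. This is invented illustrative code, not Rosen's; the example item and individuals are mine:

```python
# A minimal sketch of security classes as (authority level, category set)
# pairs under the partial order defined above.

SECRET, TOP_SECRET = 1, 2  # authority levels, SECRET <= TOP_SECRET

def dominates(lo, hi):
    """(A1,C1) <= (A2,C2) iff A1 <= A2 and C1 is a subset of C2."""
    return lo[0] <= hi[0] and lo[1] <= hi[1]  # set <= is the subset test

def may_receive(clearance, item_class):
    """Dissemination rule: an individual may receive an item iff the
    item's class is dominated by the individual's clearance."""
    return dominates(item_class, clearance)

# Least upper bound and greatest lower bound of two classes, witnessing
# the lattice structure mentioned in footnote [3].
def join(c1, c2):
    return (max(c1[0], c2[0]), c1[1] | c2[1])

def meet(c1, c2):
    return (min(c1[0], c2[0]), c1[1] & c2[1])

item = (TOP_SECRET, frozenset({"diplomacy"}))
alice = (TOP_SECRET, frozenset({"diplomacy", "combat ops"}))
bob = (SECRET, frozenset({"diplomacy"}))
print(may_receive(alice, item))  # True
print(may_receive(bob, item))    # False: authority level too low
```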

Without looking at the literature [6], it seems that the obvious next step is to embed this into a network model. Supposing that one has a network model in which each node is classified by a security clearance, there are a variety of useful and potentially interesting questions that can be asked. For example, one might want to look for connected components where every node in the connected component has a security clearance (A,C) such that (A,C)\succeq (A_{j},C_{k}) for some j and k. Or one might simulate the propagation of information in that social network, where the probability of a node communicating an item of a given security class to another node is a function of the security class of the item and the security clearances of the two nodes.
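The first of these questions can be sketched as a toy Python example. The four-node graph and the clearance assignment here are invented; the code finds the connected components of the subgraph induced by sufficiently cleared nodes:

```python
# Components of the subgraph on nodes whose clearance dominates a
# chosen class (A_j, C_k). Graph and clearances are toy data.
from collections import deque

def dominates(lo, hi):
    return lo[0] <= hi[0] and lo[1] <= hi[1]

def cleared_components(edges, clearance, threshold):
    """Connected components among nodes with clearance >= threshold."""
    keep = {n for n, c in clearance.items() if dominates(threshold, c)}
    adj = {n: set() for n in keep}
    for u, v in edges:
        if u in keep and v in keep:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for start in keep:
        if start in seen:
            continue
        comp, q = set(), deque([start])
        while q:  # breadth-first search within the induced subgraph
            n = q.popleft()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            q.extend(adj[n] - seen)
        comps.append(comp)
    return comps

clearance = {
    "a": (2, frozenset({"diplomacy"})),
    "b": (2, frozenset({"diplomacy"})),
    "c": (1, frozenset()),              # insufficiently cleared
    "d": (2, frozenset({"diplomacy"})),
}
edges = [("a", "b"), ("b", "c"), ("c", "d")]
comps = cleared_components(edges, clearance, (2, frozenset({"diplomacy"})))
# two components, {'a','b'} and {'d'}: node 'c' breaks the chain
```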

So far this discussion has limited itself to information flow as dissemination of information vehicles, contrary to the direction I suggested in my last post should be pursued. One easy remedy might be to have minimally cognitive nodes with knowledge bases and primitive inference rules by which new knowledge can be inferred from existing or newly received items of information. This would have several consequences. Relevant items of novel information might disseminate through the network (and global knowledge grows), and items of information not originally disseminated (for example, because they are top secret) may yet be guessed or inferred from existing information by nodes with security clearances too low to have received them normally.
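A toy sketch of such minimally cognitive nodes, using invented Horn-style rules: a node that never receives a sensitive item directly can still infer it from individually less sensitive items it does hold.

```python
# Forward chaining over a node's knowledge base: repeatedly fire rules
# whose premises are all known. Facts and the rule are invented examples.

def close_under_rules(facts, rules):
    """Return the closure of a fact set under (premises -> conclusion) rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

rules = [
    (frozenset({"troop buildup", "supply convoys"}), "imminent operation"),
]

# A low-clearance node never receives "imminent operation", yet infers
# it from two individually less sensitive items.
kb = close_under_rules({"troop buildup", "supply convoys"}, rules)
print("imminent operation" in kb)  # True
```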

Moving away from issues of security policy, we can generalize this to classify nodes in social networks in other systematic ways. In particular, we may be interested in epistemic communities. We might classify beliefs and/or knowledge using formal tools like formal concept analysis, as I believe Camille Roth has been doing (e.g. see his paper Towards concise representation for taxonomies of epistemic communities).

Fun stuff.

[1] Rosen, Kenneth H. Discrete mathematics and its applications. 5th edition. McGraw Hill. 2003.

[2] Some undergraduates joked that if they mastered everything in Rosen’s book, they would pretty much have mastered the foundations of computer science. An exaggeration, but not far off.

[3] A lattice is a partially ordered set (poset) such that for any pair of elements of that set there exists a least upper bound and a greatest lower bound.

[4] According to Wikipedia, the US uses classification categories like the following:

1.4(a) military plans, weapons systems, or operations;

1.4(b) foreign government information;

1.4(c) intelligence activities, sources, or methods, or cryptology;

1.4(d) foreign relations or foreign activities of the United States, including confidential sources;

1.4(e) scientific, technological or economic matters relating to national security; which includes defense against transnational terrorism;

1.4(f) USG programs for safeguarding nuclear materials or facilities;

1.4(g) vulnerabilities or capabilities of systems, installations, infrastructures, projects or plans, or protection services relating to the national security, which includes defense against transnational terrorism; and

1.4(h) weapons of mass destruction.

[5] An interesting category of information is information about who has what security clearance.

[6] Where fun and often good ideas go to die.


27 Responses to “Lattice Model of Information Flow”

  1. Interesting. But I see some possible parallels here in the algorithms used to identify network cores. Nodes are sorted by degree — the number of a node’s immediate neighbors. The analysis then systematically eliminates nodes with only 1, 2, … n immediate neighbors until the network falls apart into discrete components. I imagine a security system in which all those with any connection at all (degree = 1) have the lowest security clearance. Those interconnected at the level where the network falls apart (in my winners’ circle networks, typically at degree 6 or 7) have the highest security clearance. The basic rule is that the higher the connectedness (here = degree) of the subset of nodes in question, the higher their security clearance. One interesting aspect of this model is that at the higher levels there may be more than one discrete group with the highest-level clearance — and its members do not share information with members of groups at the same level. In real-world terms, the model predicts that the FBI doesn’t share its most carefully guarded information with the CIA or DIA, even though the three are supposed to be partners in defending homeland security.
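    The elimination procedure I have in mind is essentially k-core decomposition. A rough sketch of what I mean, in Python on an invented toy graph:

```python
# k-core: repeatedly delete nodes of degree < k until none remain.
# The adjacency list below is a made-up example.

def k_core(adj, k):
    """Return the node set of the k-core of an undirected graph."""
    adj = {n: set(nbrs) for n, nbrs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for n in list(adj):
            if len(adj[n]) < k:
                for m in adj[n]:
                    adj[m].discard(n)
                del adj[n]
                changed = True
    return set(adj)

# A triangle plus a pendant node: the pendant survives the 1-core
# but is stripped from the 2-core.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(k_core(g, 2))  # the triangle {'a', 'b', 'c'}
```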

    Just brainstorming. Not sure how seriously to take this.

  2. All this is very interesting. Of course, when I think network, I think brain. One of the newer techniques in functional brain imaging is connectivity analysis. In connectivity analysis, one takes a functional contrast within a region of interest (the seed) and determines what other voxels in the brain have correlated co-activity. What is interesting about this type of analysis is that one can begin to demonstrate connectivity properties of the network, or in my case, how those properties develop over childhood. One developmental finding is that development proceeds along a course in which local connectivity increases early in development, then cross-region connectivity matures over late childhood and adolescence. Cross-region connectivity should enhance cognitive flexibility, while local connectivity may be related to functional specialization of brain regions. Interestingly, autistic children show greater local connectivity, but diminished regional connectivity; this difference becomes more robust with age in autistic children. This evidence might be consistent with the domain-specific cognitive marvels the autistic brain can sometimes perform (e.g. music and date calculations).
    Aside from this digression, I wonder whether some of John’s comments may give insight into issues in brain development…security policies in the brain, ha ha…but information dissemination and noise reduction in and between brain networks are most likely a driver of behavioral change.

  3. Since we are getting into brain-related stuff, allow me to point once again to David Ritchie’s article on the Context-Limited Simulation theory of metaphor. The sorts of connectivity issues that Joshua raises may also be relevant here. Is there any evidence that autistic children tend to be more literal-minded and less able to handle metaphor (in humor, for example) in the way that Ritchie suggests?

  4. Many of David Ritchie’s ideas are consistent with semantic and episodic memory theory, and with perception. As for autism, I am not in fact an expert, although there are quite a few down the hall. However, a quick PubMed search reveals a paper by Rundblad G, Annaz D. (2010) that shows that metaphoric cognition is subnormal in autism. Autism is typically marked by rigid cognitive behavior and lack of social awareness, to name a few features. It is also a spectrum disorder in which severity and symptoms are quite varied.

  5. It is really, really weird that you posted this, because I was just reading an article on relational modeling in the sociology of culture (Ronald Breiger, “A Tool Kit for Practice Theory,” hidden I’m afraid behind a pay wall) (for the seminar I’m sitting in on this semester) which deploys, among other things, Galois lattices; and while trying to figure out what the bloody ‘ell those were I went and read exactly the Wikipedia entry on formal concept analysis you link. Unfortunately this sort of symbolic modeling is not in my conceptual repertoire, so my eyes just crossed and I didn’t get very far. In fact none of us in the class did, so if you care to take a look at that piece and see if there’s something to say about what it’s good for that might make sense in words, I’d be mighty interested. (I liked the article for a different reason – because to me it seemed like a pretty funny joke, taking Bourdieu’s anti-economistic analysis of tastes and preferences and Coleman’s culture-blind rational choice theory and showing how they’re really two homologous moments of the same analysis.)

    Anyhoo, given this admission that I don’t know what the flark, I cautiously wonder if the classic organization of revolutionary cells (see, e.g., “The Battle of Algiers”) might be another example of this kind of security/information application, including the ‘autistic’ tendency to sacrifice integrative relational diversity for focused task-orientation? If so, is this inherently a trade-off?

  6. Thanks, Joshua. It looks like we do have overlapping concepts here. Highly connected clumps with fewer connections between clumps imply a structural rigidity. The same pattern appears to occur at the level of neurons, concept formation, and social groups, suggesting a generalization based on network concepts. How cool would that be!

    Carl, I have just read over the Wikipedia article, and what strikes me is that the Galois lattice is an extension of conventional taxonomic thinking in which members of a category share attributes that are necessary and sufficient for membership in the category. The difference is that in the classical genus-species taxonomy, the network is a tree, whose branches never converge. As the Hasse diagram in the Wikipedia article illustrates, this is not necessarily true of Galois lattices.

    The real-world example that comes to mind is from phonology in linguistics, where there is a set of possible attributes (fricative, bilabial, voiced/unvoiced, exploded/unexploded, etc.), only a subset of which are significant and used in constructing the phonemes of the language under study, and the attributes in question overlap in different combinations.

    Jacob, am I right?

    Carl, does this help at all?

  7. “Highly connected clumps with fewer connections between clumps imply a structural rigidity.”

    Its the appearance of the old idea of the modular brain, and the basic idea in neuroscience that neurons that live together probably work on similar things. In development, first one seems to develop one’s modules, then one develops the ability to coordinate them to build more flexible, and more cogent high-level cognitions.

  8. It seems clear that under certain conditions it would be advantageous to have a ‘rigid’ behavior in a network. Flexibility in the network may come at an operational cost. Sometimes tactical, but not strategic action is valued in a scenario…think reflexive reaction to a pain stimulus.

    Thinking about network concepts working at multiple levels of analysis is quite charming an idea. How might one use a network perspective to design society parameters that promote general welfare while maintaining stability within some reasonable bounds?

  9. Wow! This has stimulated a bit of comment! I must apologize because I have been extraordinarily busy and will be again for the next month or so.

    Regarding Galois lattices: John’s characterization of Galois lattices and formal concept analysis sounds about right to me. One reason that the structure is not a tree is that the same object may have more than one possible classification. For example, the number 4 is classified by the concept {even}, and by the concept {square}. It is also classified by the concept {even, square}, meaning that it is both {even} and {square}. Note that there is no node in the lattice with the concept {odd, even}.
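    To make the example concrete, here is a small Python sketch of my own (a toy formal context with objects 1, 2, 3, 4, 9 and attributes odd, even, square) that computes every formal concept by checking which attribute sets are closed; brute force is fine at this scale:

```python
# A formal concept is a pair (extent, intent) where each determines the
# other: extent = objects sharing the intent, intent = attributes shared
# by the extent. Context below is a made-up toy example.
from itertools import combinations

objects = [1, 2, 3, 4, 9]
attributes = ["odd", "even", "square"]
incidence = {
    1: {"odd", "square"},
    2: {"even"},
    3: {"odd"},
    4: {"even", "square"},
    9: {"odd", "square"},
}

def extent(attrs):
    """Objects having every attribute in attrs."""
    return frozenset(o for o in objects if attrs <= incidence[o])

def intent(objs):
    """Attributes shared by every object in objs."""
    if not objs:
        return frozenset(attributes)  # empty extent carries all attributes
    return frozenset.intersection(*(frozenset(incidence[o]) for o in objs))

concepts = []
for r in range(len(attributes) + 1):
    for combo in combinations(attributes, r):
        b = frozenset(combo)
        if intent(extent(b)) == b:  # b is a closed attribute set
            concepts.append((extent(b), b))

print(len(concepts))  # 7 concepts in this context
# ({4}, {even, square}) is a concept; no concept has intent {odd, even}
```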

    Formal concept analysis shares some features with channel theory developed by Jon Barwise and Jeremy Seligman. I will want to write about that in the future in a little more depth. I would be interested in reading that article Carl.

    Regarding the degree of nodes: The security classification model I presented is a way of classifying nodes in a network independently of the network properties of those who are so classified. More generally, one might classify nodes by occupation: academic, artist, designer, roofer, truck driver, etc., and ask questions about the connectivity within an occupation and between occupations.

    Still, node degree has been used as a measure of social influence. If one did have the necessary data, it would be interesting to see whether the security clearances a node in a social network is granted correlate with node degree. Then again, it obviously matters who your friends are. To have a king as your only friend might position you better than to be the friend of a thousand paupers. A better measure might be the size of your extended neighborhood in the graph.

    Regarding the brain: Joshua, you know far more than I do here. Intuitively, it makes sense that local connectivity may be associated with domain-specific or otherwise specialized brain function, while inter-cluster regional connectivity is associated with more domain-general cognition. And the pattern exhibited by autistic persons does not seem as surprising. I have speculated before that an important part of intelligence is the ability to shift between perspectives, each of which defines parameters on how a problem can be seen and thought about (see: http://jacoblee.net/?p=137)

    Some theorists posit that the brain is a massive collection of evolved highly domain-specific cognitive modules. What is your opinion Joshua, given what you know about brain development and brain physiology?

  10. I get cautious when throwing around the loaded term ‘cognitive’ with people around here. What exactly do cognitive modules mean? I prefer to think of a region in terms of the types of computations supported, and the availability and nature of inputs. Given a region, say in the pre-frontal cortex, one may find all sorts of cognitive tasks which seem to activate that region, be negatively affected by lesions to that region, and so on. This is confusing to many perspectives (i.e. is it a memory region, or is it an executive control region?). However, it is more sensible once one realises that any task, especially complex tasks, will require not one type of computation, but many. Tasks will tap a region’s computational resources differentially depending on the nature of the task, obviously. One may then begin to analyse the relations between disparate tasks by comparing overlaps in activation maps.

  11. The modularity of mind or cognition hypothesis posits a functional decomposition of the brain, though not necessarily one localized in particular brain regions. A functional module is restricted to doing a certain class of domain-specific computations, the function it serves being specified with respect to how it contributes to fitness. Or something like that.

  12. I am suggesting:

    1. Some localized computations are agnostic to the modality of the inputted representation. Domain specificity is in troubled waters if this is conceded…

    2.
    While modularist views do not necessarily localize modules to a particular region, in practice this is most likely true for computational efficiency. But also note that local computations are not likely ‘cognitive’, but computational. A collection of computations working in concert is more likely to be ‘cognitive’. Again, I like to think of a region as supporting a type of computation it can perform (e.g. pattern separation or completion) that is rather independent of the types of cognitions it supports. A hammer can be a paper-weight.

  13. I’ve been busy, too. At The Word Works we are translating several long essays for a photography exhibition catalogue dedicated to Pictorialism, an art photography movement that peaked in Japan in the 1910s and 20s. Lots and lots of lookup to get names and technical terminology right. I did want to pop in, though, to remark that one of the things I find most intriguing about network analysis is that the mathematics, mostly derived from graph theory, seems to work across a huge spectrum of phenomena, from power grids to the Internet to protein cascades in cell biology. My initial runs at my Winners’ Circles data showed that, yes indeed, giant components and bicomponents and power-law distributions of nodes appear precisely as predicted. To my overactive mind this makes network analysis a serious candidate for the next cosmological big thing — like taxonomies were for the middle ages, and analytic geometry, calculus, and probability became starting in the 17th century — a new and very effective way of thinking about the underlying bones of matter, life, society, the whole damned universe.

  14. I’m not competent in mathematical modeling, as I’ve said, but I’m intrigued by what I do understand, and there seems to be a plausibility to it. As a historian of the period, though, I’m familiar with the disaster that an older version of this unity-of-science notion, positivism, turned out to be. The procrustean bed fits everyone, but not without some chopping and stretching. So is it that we’ve got the right model now?

  15. I also know little about network analysis. I remember a fragment or two from my undergraduate discrete mathematics course, but that is it. Since I am now collecting data that will be used to conduct functional connectivity analyses, I wonder what kinds of insights from network analysis might be used to guide interpretation of our planned statistical testing, and/or guide exploratory data-mining of the rich dataset. That is, what is the take-home message?

  16. @Carl, no. Anyone who claims to have “the right model,” where that phrase implies a complete answer hasn’t considered the genuine messiness of reality. That said, however, models improve. Analytic geometry, the calculus, probability and statistics revealed a lot about the world that scholastic taxonomies never considered. Network analysis reveals things that what Andrew Abbott calls the Standard Causal Model (basically various elaborations of regression analysis) doesn’t. We now have a set of new tools, in addition to the older ones, and with them we discover things we didn’t know before.

    @Joshua. Since I don’t know anything at all about functional connectivity analysis, I am in no position to say exactly what network analysis might add to your research results. The story that pops up repeatedly in introductions to network analysis is that classic statistical and causal analysis assume systems whose components’ behavior is determined by properties of the components themselves. It thus makes sense to sample said components and see if those with properties in set A (A1…An) tend to occur more often than expected with components with properties in set B (B1….Bn). Then, if you see a strong correlation, you try to figure out if the correlation reflects causation in one direction or the other or is itself the accidental outcome of some other process. You can learn a lot about things that way.

    In contrast, network analysis pursues the notion that a significant factor affecting how the components in question behave is in how the systems to which they belong are configured, i.e., to relations between them. Why does this make a difference? Consider, for example, an economic analysis that assumes rational actors who have different properties (call them interests) and equal access to information, so that each acts on that information in pursuit of its interests at the moment the decision is made. Now introduce the notion that because of their positions in relevant networks, some of those rational actors are able to engage in insider trading, having quicker access to information than those in more peripheral positions.

    Or, I am largely hand waving here since I don’t know the relevant literature in any detail, imagine a protein cascade in which what is chemically the same protein plays very different roles depending on its immediate neighbors in the process in question.

    Methodologically speaking, the radical bit is that network analysis requires systematic violation of the principles of random sampling on which conventional statistical analysis depends. There is no such thing as a random sample that preserves network configurations. So the question of how to do some analog of conventional statistical analysis is a very big one, indeed. As I understand it, the most common current approach is to model a process using randomly constructed networks with properties similar to those of the real-world network in question. The results from analysis of the real-world network are compared with those of averaging the results of several hundreds or thousands of simulations using the random model.
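    A rough sketch of that randomization step in Python, on an invented five-edge graph: degree-preserving shuffling via double-edge swaps, after which a real analysis would compare some observed statistic against its average over many such randomized networks.

```python
# Double-edge swap: pick two edges (a,b),(c,d) and rewire to (a,d),(c,b),
# which preserves every node's degree. The observed graph is toy data.
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Randomize an undirected simple graph while preserving degrees."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

def degrees(edges):
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

observed = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]
randomized = double_edge_swap(observed, 100)
print(degrees(randomized) == degrees(observed))  # True: degrees preserved
```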

    Hope this is helpful. I highly recommend M.E.J. Newman’s brand new textbook Networks if you would like to pursue the matter further.

  17. I am enthusiastic about the potential of graph theoretic/network approaches to social science. Graph theory is elegant, but still fairly simple, and the network class of models (and I emphasize the word ‘model’) appears to be a fruitful and widely applicable one. Nonetheless, I worry sometimes.

    Graph theory, a body of mathematics largely developed within computer science, has many applications within computer science, and *not* just in understanding computer networks. But even when graphs are used to model computer networks, it should be understood that the graph model is frequently in mind in the design of such computer networks. Therefore, we should not be too surprised at their applicability.

    Not so in social networks. A social network is an abstraction over the continuous natural field of social interaction. The vertices of a graph represent something, usually a person. But what is that? A person when? At a given moment? Over some interval of time? Such choices decide the identity conditions of the objects that populate the nodes in the network. Meanwhile, edges between vertices in the graph represent a relationship between nodes. But what relationship? To be meaningful, the relationship must be specified, and the conditions of that relationship holding rigorously applied. If the relationship is friendship, is it an emic definition or an etic one? It seems to me that too frequently far too little effort is made to substantiate the basis of the model in the first place.

    Agent based modeling is an alternative approach that is applicable to many of the same sorts of problems that network theory is applied to. Indeed, one can take an agent based model, record all the interactions between agents, classify those interactions by type, and then create a network model of that agent based simulation. Perhaps that is a good thing to do too, because the result of agent based simulations can be hard to unpack. But it would still be an abstraction from the underlying (artificial) reality of the simulation.

  18. Jacob, everything you say is true and must be considered in any application of network analysis to real-world phenomena. Much of what you say, however, applies to any form of modeling, since all forms of modeling involve simplifications that omit much of what reality has on offer. It is interesting, too, that you mention agent-based modeling, since combining network and agent-based approaches seems to be the current frontier for hard-core modelers (see the new textbooks I mention in my overview of where I’m coming from in the Winners’ Circles projects).

    What impresses me is the extraordinary fit between what network mathematics predict and the data I collected before I knew that said mathematics existed. I currently envision the relation between that body of mathematics and the real-world history I want to understand as comparable to that between the law of gravity, how airplanes fly, and the details of why some airplanes crash — a topic a bit on my mind just now because my son-in-law, who has a bit of experience in this area, would like to be an FAA crash investigator. That airplanes fly cannot be explained by the law of gravity alone; but the law of gravity is an indispensable factor in aerodynamics. It remains so, even if the crash were caused by birds sucked into an engine, in so far as it affects calculations of how far the debris gets spread when the plane falls from a certain height.

  19. I am reminded that in some population genetics models a simplifying assumption sometimes made is that, for example, any two organisms in a population have an equal probability of mating and having offspring. This assumption is violated in so-called structured population genetics models, in which any two members of a population are not equally likely to mate and have offspring. It is often still taken as a probability distribution of some kind, I think. Network models, on the other hand, give a rather more fine-grained approach to defining structured populations, as do agent-based models. In the latter, for example, it might be the case that two members of a population wandering around randomly in the landscape can mate only if they bump into each other, introducing an important kind of locality and path dependence into the model.

    Of course, the more variables you introduce into the model, the harder it is to learn from it.

  20. @John@16, thank you. Of course I am familiar with these philosophy / history of science basics, and although it’s always possible in the skeptical frame to quibble with progress claims I accept what you say about development and robustness as true and well known. But I asked the question badly so I got what I deserved.

    What I’m actually interested in is being well addressed in the discussion, so thanks everyone and please carry on.

  21. “Of course, the more variables you introduce into the model, the harder it is to learn from it.”

    Andy Perrin, who’s teaching the grad seminar I mentioned earlier, says that modeling is all about taking information out until you’ve got something you can work with. (Borges says roughly the same thing.) Which leads to further questions about what to take out and what’s being worked toward.

    “…classic statistical and causal analysis assume systems whose components’ behavior is determined by properties of the components themselves. It thus makes sense to sample said components and see if those with properties in set A (A1…An) tend to occur more often than expected with components with properties in set B (B1….Bn). Then, if you see a strong correlation, you try to figure out if the correlation reflects causation in one direction or the other or is itself the accidental outcome of some other process. You can learn a lot about things that way.

    In contrast, network analysis pursues the notion that a significant factor affecting how the components in question behave is in how the systems to which they belong are configured, i.e., to relations between them.”

    This is very helpful to me for seeing what’s offered and what’s at stake. If I can be playful for a moment, the former procedure might be characterized as stereotypically ‘masculine’ and the latter ‘feminine’. Interesting times in the world of science. Speaking of which, as I understand the procedure most sexual reproduction involves ‘bumping into each other’, doesn’t it?

  22. That’s why I’m careful in crowded hallways.

    This is an interesting and informed conversation here, and I am a little overwhelmed. Connectivity analysis looks for statistically significant correlations of fMRI BOLD signal between a seed region of interest and the rest of the brain. That is, time-locked, voxel-to-voxel correlations in fMRI BOLD signal are ascertained from a preselected voxel to a population of other voxels. One may compare correlations, and interactions of correlations, between two conditions in an experiment. For example, memory researchers might find it particularly informative to know whether voxel activity in the hippocampus is more tightly correlated with activity signal from voxels in the dorsolateral prefrontal cortex (DLPFC) for correctly remembered items in comparison to missed items.

    Modeling is fairly popular in some segments of my discipline. An interesting paper I briefly looked at the other day took a computational model of the hippocampus, fed in human neurological data recordings to that model, and then compared the model’s behavioral output to the human’s behavioral output.

  23. I have a Glassman (2003) paper sitting on my desk that I’ve meant to read for a while now.

    It is entitled, “Topology and graph theory applied to cortical anatomy may help explain working memory capacity of three or four simultaneous items.”

    It looked interesting before, but doubly so after the conversations above. If any are interested in the article, I will be happy to send it to you.

  24. Joshua, I’d be happy to add it to my library. Can’t promise to read it immediately, but if it is there it is sure to bubble to the surface when needed. Please send a copy to jlm@wordworks.jp

    P.S. For everyone here: If you are interested in this sort of thing you should subscribe to SOCNET, if for no other reason than Barry Wellman’s occasional “Complexity Digest” posts, where I found, for example, the following:

    —-

    19.03. Introduction to Complexity and Complex Systems , CRC Press

    Summary: The boundaries between simple and complicated, and complicated and complex, system designations are fuzzy and debatable, even using quantitative measures of complexity. However, if you are a biomedical engineer, biologist, physiologist, economist, politician, or stock market speculator, you have encountered complex systems. Furthermore, your success depends on your ability to successfully interact with and manage a variety of complex systems. In order not to be blindsided by unexpected results, we need a systematic, comprehensive way of analyzing, modeling, and simulating complex systems to predict non-anticipated outcomes. (…)

    * [43] Introduction to Complexity and Complex Systems, Robert B. Northrop,
    2010/12/08, CRC Press
    _________________________________________________________________

    19.04. Chaos: The Science of Predictable Random Motion , Oxford University
    Press

    Summary: Based on only elementary mathematics, this engaging account of chaos theory bridges the gap between introductions for the layman and college-level texts. It develops the science of dynamics in terms of small time steps, describes the phenomenon of chaos through simple examples, and concludes with a close look at a homoclinic tangle, the mathematical monster at the heart of chaos. The presentation is enhanced by many figures, animations of chaotic motion (available on a companion CD), and biographical sketches of the pioneers of dynamics and chaos theory. (…)

    * [45] Chaos: The Science of Predictable Random Motion, Richard Kautz,
    2010/12/30, Oxford University Press
    _________________________________________________________________

    19.05. Networks of the Brain , The MIT Press

    Summary: Over the last decade, the study of complex networks has expanded across diverse scientific fields. Increasingly, science is concerned with the structure, behavior, and evolution of complex systems ranging from cells to ecosystems. Modern network approaches are beginning to reveal fundamental principles of brain architecture and function, and in this book, Olaf Sporns describes how the integrative nature of brain function can be illuminated from a complex network perspective. Highlighting the many emerging points of contact between neuroscience and network science, the book serves to introduce network theory to neuroscientists and neuroscience to those working on theoretical network models. (…)

    * [47] Networks of the Brain, Olaf Sporns, 2011/11/30, The MIT Press

  25. I haven’t read this yet, but it looks interesting.

    http://jasss.soc.surrey.ac.uk/14/1/5.html

    I’m running on coffee fumes.
