Monday, December 1, 2008

Taking a Look at the Abstract Machine of Soft Control

Terranova makes some early points about information and networks that I read but did not entirely grasp until her chapter on biological computation.

One of her first moves was to posit Shannon's information theory as a closed linguistic system of probabilities (20). We've studied this to some extent before, but then she introduces what she calls virtualization. "The virtualization of a process involves opening up a real understood as devoid of transformative potential to the action of forces that exceed it from all sides" (27). What exactly this process consists of eluded me. We've studied D+G's attempt to break out of the linguistic regime, but what Terranova hints at here did not necessarily shout out "rhizome," since we have "action from all sides" within this virtualization. Apparently, the roots are pushing, grinding, and morphing one another.

But then take a look at Conway's Game of Life. We have these individual elements that shove up against one another and change one another. The grid of cells looks lifeless, cluttered, unpredictable, yet each cell succumbs to the pressures of those around it and emerges within a whole new environment. We don't know how things will look; we have to simulate them (be it by hand, computer, or chemical process). The very fact that we can't predict the game helped me to better understand how virtualization refutes Shannon's information theory.
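To make that concrete, here is a minimal sketch of one Game of Life tick in Python, assuming the standard Conway rules (a dead cell is born with exactly three live neighbors; a live cell survives with two or three). The names `step` and `neighbors` and the blinker seed are mine, not Terranova's; the only way the sketch knows what generation 100 looks like is to run all 100 ticks.

```python
# Minimal sketch of one Game of Life tick: each cell's next state depends only
# on its eight neighbors, yet the global pattern still has to be simulated.
from typing import Set, Tuple

Cell = Tuple[int, int]

def neighbors(cell: Cell) -> Set[Cell]:
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live: Set[Cell]) -> Set[Cell]:
    # Candidates: every live cell plus every cell touching one.
    candidates = live | {n for c in live for n in neighbors(c)}
    def alive_next(c: Cell) -> bool:
        count = len(neighbors(c) & live)
        return count == 3 or (count == 2 and c in live)
    return {c for c in candidates if alive_next(c)}

# The only way to know generation 100 is to run all 100 ticks.
grid = {(0, 1), (1, 1), (2, 1)}   # a "blinker" seed
for _ in range(100):
    grid = step(grid)
```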

We are then able to illustrate other terms Terranova uses here. Microstates can obviously be the cells on the grid, while the macrostate can take the form of the grid as a whole. We see that the microstates of a cellular automaton "cannot be known completely because they cannot be studied by dissection: once the connection and mutual affection with other elements is removed, the individual elements become passive and inert" (104). Who the hell cares about taking one cell out of the Game of Life and looking at it? It means nothing to us to look at that one cell: it is meaningless without the other cells surrounding it; it is lifeless. However, when we surround it with other microstates, we cannot even begin to predict the state of that cell without simulation: it's "impossible... to determine a priori the sequence of configurations that running a CA experiment will produce" (112).
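A tiny follow-up sketch, with the same Conway rule inlined so it stands alone: the identical cell has a different fate depending entirely on its neighbors, which is all the "dissection" passage needs. The helper name `fate_of` is illustrative, not a term from the text.

```python
# A dissected cell is inert: with no neighbors it simply dies, and its
# coordinates tell us nothing. Embedded in a pattern, the same cell's next
# state can only be learned by applying the rule to its surroundings.
def fate_of(cell, live):
    count = sum((cell[0] + dx, cell[1] + dy) in live
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0))
    return count == 3 or (count == 2 and cell in live)

print(fate_of((1, 1), {(1, 1)}))                          # False: alone, it dies
print(fate_of((1, 1), {(0, 1), (1, 1), (2, 1), (1, 2)}))  # True: kept alive by its neighbors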

Informational space, where "space becomes informational... when it presents an excess of sensory data, a radical indeterminacy in our knowledge, and a nonlinear temporality involving a multiplicity of mutating variables" (37), can become the limited set of algorithms that runs each cell. Informational space requires the "mechanical and local rules" (110) that allow for such things as self-reproduction, unpredictability, and change of states; it requires a large set of interacting algorithms.

In chapter 2, we also get a big new term when Terranova explains Zeno's paradox as a failure to understand space and time properly. "The virtuality of duration, the qualitative change that every movement brings not only to that which moves, but also to the space that it moves in and to the whole which that space necessarily opens up" (51). Time changes space when we think of her use of the term 'duration'. In the Game of Life, we see that every new tick/beat/turn changes the entire grid. Time always changes space: even if the board is clear, the algorithm is still running and open to new possibilities (it is waiting for you to place that next "live" cell down).

Major Question:

We then get to the notion of productivity: "the autonomy and creativity is produced by a process of recursive looping that generates divergent and transmittable variations at all points. Such systems, that is, are characterized by their tendency to escape themselves" (105).

Productivity hangs around in Terranova's liquid state: "at a certain value of information speed, the movement of cells turns liquid and it is this state that is identified as the most productive, engendering vortical structures that are both stable and propagating" (114). This is the .5 lambda value where, "within CAs, then, the key area of computation is identified with a border zone fluctuating between highly ordered and highly random CAs" (113).
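For what it's worth, the lambda Terranova cites comes from Langton's work on CA rule tables, and it is easy to compute: the fraction of rule-table entries that map to a non-quiescent ("live") state. Below is a sketch for elementary two-state CAs; with only two states, .5 is simply the maximally mixed table, so treat this as a rough illustration of the "border zone" idea rather than a measurement of liquidity. The rule numbers are standard Wolfram codes.

```python
# Sketch of Langton's lambda for elementary (two-state, three-neighbor) CAs:
# the fraction of the eight rule-table outputs that are the "live" state.
def langton_lambda(wolfram_rule: int) -> float:
    table = [(wolfram_rule >> i) & 1 for i in range(8)]  # outputs for the 8 neighborhoods
    return sum(table) / len(table)

print(langton_lambda(0))    # 0.0 -> everything dies: frozen, ordered
print(langton_lambda(255))  # 1.0 -> everything lives: equally lifeless
print(langton_lambda(30))   # 0.5 -> the mixed "border zone" value the text points to
```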

I can see how this can go outside of Shannon's theory of information, but it seems like this .5 lambda value harkens back to a linguistic system. A linguistic system is predictable in the sense of syntax: a non-linguistic system escapes our grammar and thus does not fit into our system. However, linguistics is productive in the sense that every time someone speaks, they can form a sentence never heard before. Redundancy and predictability allow for that. I am hesitant to connect the terms noise or randomness to that productive aspect of language; nevertheless, the liquid state produces new, unpredictable, creative structures, yet they still remain within some sort of grammar or algorithm. For example, a Game of Life CA won't build red or green squares if we've only programmed it to build black and white squares.

Terranova talks about these limits when she writes about the limits of genetic search algorithms. A CA is not artificial intelligence, but is dependent on the human limits set around it. These limits, be they rules in the CA, the predictive element of Shannon's information theory, or the grammar of linguistics, set the stage for stable, productive forms produced through innovation. However, the term limit applies broadly and diversely to the systems I outline above. How should we apply, define, and look at limits within Terranova's text? I think in my case it is easy to abuse or misread the term. We can always add limits (a new protocol, a new cell state in the Game of Life), but we can never take them all away.
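To make the "limits" point concrete, here is a toy genetic search; the alphabet, target string, and mutation rate are arbitrary choices of mine, not anything from Terranova's text. Whatever novelty the search produces, it stays inside the alphabet and fitness function we handed it, the algorithmic equivalent of only ever building black and white squares.

```python
# A toy genetic search: selection plus mutation over a fixed alphabet.
# The "discoveries" are novel strings, but only ever strings we made possible.
import random

ALPHABET = "01"                 # the limit: no "red or green" symbol exists
TARGET = "0110100110010110"     # fitness is just closeness to this string

def fitness(s: str) -> int:
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # keep the fittest
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print(best, fitness(best))      # novel, but still inside the limits we set
```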

Minor questions:

What is the difference between prediction and simulation? Does simulation require time? Is Shannon's information theory really predictable, or must it too require simulation like a CA?

I look back at Every Icon by John F. Simon, and the first thing that comes to mind is "duration" art. I think Terranova's use of the term duration applies fantastically well to this piece. It seems like a pretty different kind of CA, though (almost linear in a way). Should we think of it merely as a subclass of the Game of Life (with a simple pattern), or should we question how a state within a system determines the system?
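As a sketch of why the piece feels "almost linear": as I read it, Every Icon enumerates every possible 32x32 black-and-white grid in sequence, which amounts to incrementing a 1024-bit counter, one icon per tick. This is my model of the rule, not Simon's actual code.

```python
# Every Icon modeled as a counter: bit i of the tick number is cell i of the
# grid. My reading of the piece's rule, not Simon's implementation.
SIZE = 32

def icon_at(tick):
    """The grid shown at a given tick: SIZE rows of 0/1 cells."""
    bits = [(tick >> i) & 1 for i in range(SIZE * SIZE)]
    return [bits[row * SIZE:(row + 1) * SIZE] for row in range(SIZE)]

# Duration is the whole point: there are 2**1024 configurations to sit through,
# so at any humanly watchable rate the counter never gets past the first rows.
print(2 ** (SIZE * SIZE))   # roughly 1.8e308 icons
```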
