Monday, September 29, 2008

Lyotard, Research and Its Legitimation through Performativity

In The Postmodern Condition, Lyotard writes an intriguing chapter, "Research and Its Legitimation through Performativity". He returns to the field of science and to two developments: "the multiplication in methods of argumentation" and "the rising complexity level in the process of establishing proof." After reading the statement I began to think about the possible connections between this chapter and Turing's essay, "Computing Machinery and Intelligence."

I identify with Lyotard's statement on "rising complexity . . ." because on a practical level, it relates to the pressure that is building within our culture around DNA proof.  The major connections that come to mind are: maize and Monsanto in Mexico, paternity testing, and advances in forensic science that question the validity of human testimony.  One could go even farther by thinking about how the "multiplication in methods of argumentation" affects the process of establishing proof, and even how that can shake our perception of 'proof'.

Now, revisiting Turing at this juncture of 'process', 'proof', and rising complexity, one can see a surface connection to Lyotard.  Turing's question of establishing proof of intelligence and thinking machines is also connected to Lyotard's assessment of "rising complexity level in the process . . ."
More specifically, it appears in the 'form' of his description of the imitation game. Some of the language used and actions described are mixed signals to me, I think in part because there is an underlying element of performativity.

I am interested in discussing performativity, and I am still processing the larger underlying connections it makes to understanding Lyotard, Turing, and Licklider.

I am drawn to Licklider as well because of the engagement he describes between man and computer. Though he calls it a symbiotic relationship, what he is really describing are methods and processes.  He writes that one of the main aims is to allow humans and computers to cooperate in making decisions and "controlling complex situations." He continues by stating that what computers are really helping with is the process of preparing the way for humans to gain insight and make decisions.  That statement, as well as how he describes the human methodology in the partnership, draws my curiosity toward discussing the relationships of the two texts with Lyotard's chapter on "Research and Its Legitimation through Performativity." Performativity in Lyotard's work is a loaded concept that I would like for us to unpack.
-rosie   

language & art

1. "The postmodern would be that which, in the modern, puts forward the unpresentable in presentation itself" (Lyotard, 81)

2. "The distinctive feature of this second, 'performative,' utterance is that its effect upon the referent coincides with its enunciation....the addressee...is immediately placed within the new context created by the utterance....the sender is dean or rector--that is, he is invested with the authority to make this kind of statement--only insofar as he can directly affect both the referent (the university) and the addressee (the university staff)" (Lyotard, 9).

3. "From this continuity between cybernetic and Lyotardian postmodern social relations....we track Lyotard's postmodernist and game-theoretical worldview back deep into the heart of the Manichean sciences" (Galison, 259)

Lyotard's framework of social relations as agonistic language games seems to me to reduce the complexity of language; this repressed complexity reappears in his essay on art in "What is the Postmodern."

In short, it seems that Lyotard is caught within the very problematic of performativity that he takes as his object of study (as indicated by Galison above). In the second quotation, he asserts that language has a power to create its context. But the slippage between the power of language and social relations, which are irreducible to the utterance itself, is evident in his qualification that the dean must also be invested with a certain authority. This concept of authority seems to elide the problem that the efficacy of the dean's statement depends not so much upon a context engendered solely by the performative utterance, but rather on an institutional context within which language is made to appear stable and univocal, directly affecting its “referent.”

In “What is the Postmodern,” Lyotard takes up the question of the referent in art, but this time to assert that the postmodern “puts forward the unpresentable in presentation itself.” That is, postmodern art engages with the question of its own legitimacy, finding that the very means of presentation constitute a limit. That Lyotard does not suggest that language may share this limit with art, that the performative utterance may fail to generate its context, seems to be a moment in Lyotard’s text when his reliance upon game theory and cybernetics (the “Manichean sciences”) has infiltrated his theory of language as the answer to an unasked question (in the sense of the Althusserian problematic). In Lyotard’s discussion of the performative utterance, language takes a form familiar in cybernetic theory: communication-as-control. And while Lyotard questions the dominance of performativity as a criterion of legitimacy, he misses, I think, an important opportunity to destabilize this standard when he imputes so much power to the performative.

Perhaps the circularity of capitalist, scientific, performative processes that other posts (Mark’s stands out in my memory) have indicated is in some way a result of this tension in Lyotard’s method.

State Executor

One exciting point of "The Postmodern Condition" was when Lyotard presented the cybernetic systems we have been studying (systems that describe self-automating robots, humans, factories, and other "organisms" on a relatively micro scale) as models for the same type of systems that describe entire nations and multi-national corporations:

"The true goal of the system, the reason its programs function like a computer, is the optimization of the global relationships between input and output" (11).

The move from the single disembodied human in Hayles's book has been transformed into a disembodied institution of power. Cybernetics can now equate a discrete machine with the millions of people living in, or the thousands of people running, a country. The scientists, the academics, and the factory boss operate within a single I/O device (perhaps measured by the NYSE or imports/exports?).

This raised an interesting question for me when Lyotard brought up scientific knowledge as executable, not executory.

"In this context, the only role positive knowledge can play is to inform the practical subject about the reality within which the execution of the prescription is to be inscribed. It allows the subjects to circumscribe the executable, or what is possible to do. But the executory, what should be done, is not within the purview of possible knowledge.... knowledge is no longer the subject, but in the service of the subject" (36).

Thus, the state often builds these scientific knowledge-seeking apparatuses (the LHC at CERN, Watson at Brown, the NSA in the government), and we accumulate vast amounts of executable knowledge that the state can feed back into its own system (an increase in input) for better automation (or in this case, control).

The language game limits this positivist knowledge (i.e., knowledge for the purpose of system regulation) to prescriptive statements (executable), but who takes the place of executor in the cybernetic nation? Within the body we have the POV, the machine has a programmed consciousness, but what about the nation?

I believe this would be a question of legitimacy. The morality of this scientific knowledge is very different from narrative knowledge that gains legitimacy and execution from its "meter" (self-reproduction that appropriates the subject). So can this scientific knowledge have an "accent" that acts as an executor? Or the limited dissent of the scientist/researcher? Or is this a void that the state or capitalism swallows to execute these regulating acts?

Sunday, September 28, 2008

Commodified knowledge

I understand and somewhat agree with Lyotard’s statement that the mercantilization of knowledge has caused it to become a saleable commodity. The commodification of knowledge gives it an eerie resemblance to software: just as we buy base software and have it continually upgraded, knowledge is produced to be sold and is then consumed in order to make a “new” set of knowledge. Lyotard’s pessimistic view that knowledge is no longer made for the sake of knowledge itself is true; his harsh critique of the university is testament to that: the student no longer seeks information for information’s sake, but rather to see how that information can be useful to them.

However, there is still a part of me that wonders if commodified knowledge totally excludes knowledge for its own sake. Can they (and are they) live side by side?
I also have to disagree with how Lyotard completely dehumanizes knowledge. After all, I don’t believe that the commodification of knowledge is a consequence of new technologies, as Lyotard seems to imply. Computer languages are not knowledge in themselves, merely objective things, while knowledge is something more subjective. This must be true to Lyotard’s own relativism: there can be no metanarrative of knowledge transforming into commodity; rather, knowledge is subject to the powers of the labor and production that need such knowledge.

As such, I think that Lyotard’s imagining of “databanks” is an inappropriately dehumanized way of thinking of commodified knowledge. Instead, we must look at outsourced customer service centers, cells of Korean animators, workers at the assembly line: these are the shape and form of commodified knowledge.

And if people, and not mere technological innovations, are the real source of commodified knowledge, how can we tell them that they have lost their use-value, their individuality? And moreover, how long before this “commodified knowledge” consumes itself to progress to a new set of knowledge, namely that they are being used and underappreciated? What will these “highly developed societies” do then?

Modern Post-Modernity

Just a few quick thoughts on Lyotard. 

First, I think that his analysis of higher education might be a little too hasty. There is a kind of truth to the statement made on page 48 that education functions to create actors within specific fields. In our current political climate, I think that is in some part the criticism leveled at educational initiatives such as "No Child Left Behind": rather than teaching for the sake of precisely a sort of humanist emancipation, education is being turned toward the creation of functional societal actors to be inserted into technical and vocational fields. But I am not convinced that this type of educational structure is strictly correlative with post-modernism. This type of training structure has been around, I would argue, as long as any sort of teaching has existed. In fact, within the modern education structure, the university is precisely the place where this type of pedagogy is contested. If anything, this split within pedagogy is precisely what would prevent the transformation of professors into information banks and processors.

Second, on the idea of the legitimation of scientific knowledge, specifically the ideas on page 61 pertaining to Kuhnian paradigms. I think that Lyotard makes a really good argument concerning the nature of narratives. However, I do think that he is missing one type of function, that of predictability, which is absent from his narrative survey. At any rate, the denotative, prescriptive, and performative are things that are exterior to knowledge yet act upon it. Predictability, however, seems to be the internal meta-condition for scientific legitimacy. This is why I feel that his analysis of paradigms and scientific knowledge is overly simplistic. Legitimation requires the interplay of interior and exterior factors. This, I think, is the point Kuhn makes in relation to Galileo: it was the complex interplay of cultural mores and the need for newer predictability problematics that defined the advent of that paradigm.

Third, the last point that Lyotard makes on page 67 appears to be his only regression to a kind of emancipatory statement. Yet there is again a need for a further delineation between software and hardware in terms of access. Yes, free access to knowledge would be good. However, knowledge doesn't exist in a cloud. Its access is barred and constrained not just by a variety of corporate and statist factors but also by the very material constraints of hardware access. As such, I don't think that a Marxist analysis of really existing conditions can be thoroughly discounted.

Additionally, Section 5.1 of Licklider made me laugh. It illustrates, I think, something of a truism concerning both science-fiction literature and scientific writing. On one hand, the futures predicted still seem quite far-fetched: realistically, we are still far from Neuromancer or a true man-machine symbiosis. On the other hand, something like the iPhone could conceivably have blown Licklider's mind. Also, the answer is of course 42.

Narrative exhumed?

I take issue with Lyotard’s argument that lamenting the loss of meaning in postmodernity boils down to mourning the fact that knowledge is no longer principally narrative. I would argue that the masternarratives are not dead, particularly in politics. As Lyotard says, a community’s relationship to itself and its environment is played out through narrative tradition. The Cold War narrative of Capitalism against Communism has been firmly replaced by the Clash of Civilizations and a strengthened emergence of ethnic identity expressed in increased ethnic conflict.
In addition, the presence of information as a commodity and the wealth of databases and the internet do not reduce the presence and power of narrative, because humans still process information through narrative. Journalism, media, and the internet are all becoming more open-source, and there is less importance paid to reporters as the official narrators of current events, but information nonetheless needs to go through a process of narration because humans like a good story. I heard an editor from the New York Times describe the paper now as ‘platform agnostic’ as they negotiate the changing shape of information and knowledge; they have blogs, videos, interactive media, and photo essays in addition to articles, but the central point is ‘the story.’ Google News is a database or collection of current events, but the first words on the webpage are “Top Stories.”

I also disagree with Lyotard’s argument that narrative is a way to forget and that the “storage, hoarding, and capitalization” of information in archives is a way to keep it in the present. We memorialize and archive in order to be able to forget, in the way that we write down a phone number in order to be able to forget it. If, instead, we recite the phone number repeatedly, we remember it and it remains in the present. Museums are temples to narrative and memory, efforts to keep past events in the present social conscience. The language games of narrative as well as science are used by the same ‘cultural imperialists’ and power brokers to legitimize their positions.
I agree instead with Jameson’s argument that narrative has just gone underground into the “political unconscious,” particularly in sciences. Instead of narrative legitimizing science as Lyotard proposes, I am more interested in Hayles’s use of narrative to put cybernetics back into history in order to examine it and the emergence of posthumanism in a critical, delegitimized light.

On another note, I disagree with Lyotard’s dismissal of Habermas’s theory that society strives towards consensus and that legitimacy resides in knowledge’s (be it narrative or science) contribution to common emancipation through the regularization of language. Although he criticizes it as teleological, hindsight shows us that Habermas’s belief in global civil society is not unfounded; international law, the internet, and the globalization of media and communication all suggest a desire for common ground and a narrowing of the relativism that Lyotard says replaces the masternarrative.

infinitely powerful, painfully inadequate

For me, Lyotard’s Appendix, “Answering the Question: What Is Postmodernism?” helps to clarify his argument about the status of knowledge in postmodern, “computerized” societies presented in the core of his book.

He conceives of a public that is collectively stagnated through the desire for order and totality at the expense of creativity and individuality. Lyotard depicts this trend through the realm of the art world, in which power emerges in the form of a set of established rules, which dictate artistic creation through the “call for order, for identity, for security, for popularity” (73). According to Lyotard, the artists who follow the “correct rules” produce works that are able to fulfill their pacifying role and “reassure” the public only through the “deceit” of representing a fantasy of reality (74). Through this power structure, Lyotard perceives culture as contaminated by a sentiment of “slackening,” in which individual artistic taste is dissolved into the submissive consumption of “eclectic works” (76). Driving this epoch of “eclecticism,” the power of capital requires artists to skillfully market their works toward the target consumer population of the public, and therefore to generate nothing that explores new, experimental forms. He equates this structure of power with terror, in which rules are enacted to encourage the dangerously misguided pursuit of totality. Similarly, Lyotard seems to see this “terror” as a consequence of the legitimation of scientific knowledge by performativity. On page 63 he states: “The stronger the ‘move,’ the more likely it is to be denied the minimum consensus, precisely because it changes the rules of the game upon which consensus has been based.” The player who tries to move outside the game is therefore silenced by the “terrorist” behavior of a power center that limits the generation of “inefficient” ideas.

I want to examine the new, productive power structure that emerges for Lyotard which will remove this threat of terror. In his appendix, he presents an alternative, “infinitely powerful” structure which reveals itself as “painfully inadequate” in its ability to be represented by reality (78). The productive pain that embodies an awareness of the incapacity to reach a unified coherence stands in direct opposition to the restraining terror that emerges from thinking that the representation of reality is possible. Lyotard, therefore, depicts a structure of power that drives a culture towards the exploration of what is unanswerable, as opposed to a power regime that requires the public dependence on its rule for easy, predictable answers. Similarly, Lyotard distinguishes Postmodern science as functioning more appropriately through a legitimation of paralogy, which introduces, “the existence of a power that destabilizes the capacity for explanation,” and does not presume the predictability of scientific discoveries (61).

It’s interesting to think about this power structure in juxtaposition to the other forms of power that Lyotard disparages. What might it mean for science or politics or a system of power to seek the unanswerable, the instabilities, and to forgo predictable answers? Science tends to try to supply solutions to things, to answer problems. What would a power based on the unanswerable look like?

science and power, automatization, net neutrality

I found Lyotard’s treatment of the relationships between science, technology, money, and power to be particularly salient but at times problematic. Lyotard’s basic sketch of the production of late-capitalist scientific knowledge as an “equation between wealth, efficiency, and truth” (45) is certainly justified by a quick look at the bankrolls of major scientific research centers inside and outside of the academy. Dominant metanarratives portray science as a disinterested pursuit of truth, which, within these narratives, is self-legitimating based on notions of the human spirit’s “fundamental desire” to increase knowledge and expose this “truth.” The large-scale infusion of private corporations into scientific research and development apparatuses dismantles this narrative handily and, in so doing, raises the question of the relationship between the financial ability to conduct science, the knowledges produced by for-profit research efforts, and the augmentation and consolidation of power through control of the dominant means of scientific advancement. Based on Lyotard’s vision, a feedback loop is created:

money → scientific apparatus/research funding → knowledge → power → money → . . .

ad infinitum.

My question is whether this continuous consolidation of power, knowledge, money, and scientific research allows room for intervention or subversion. Will we continue to see the consolidation of knowledge production in the hands of the wealthy and powerful, or can we locate sites at which science resists this one-way flow, subverts dominant regimes of power, or simply proceeds “in the name of science”?

I was also fascinated by Lyotard’s discussion of the mechanical automatization of teaching, which brought me back to Wiener’s oft-misinterpreted visions of the automatization of society. I am skeptical of Lyotard’s argument that “pedagogy would not necessarily suffer” in the replacement of professors by machines. It is true that such a system would teach new skills (new languages, better manipulation of language games), and the mercantilization of knowledge certainly elevates the value of efficiency above that of truth. However, much larger questions are raised by Lyotard’s automatization scheme: who will teach the computers that will then teach humans, and by what pedagogy will these computer-teachers operate? Do computers allow for the efficiency of knowledge exchange that human teaching can offer (for instance, one professor teaching 500 students)? Will people accept and, most importantly, pay for this?

A few pages later, Lyotard seems to partially deconstruct his argument regarding the obsolescence of the professor and the possibility of teaching by machine. Addressing interdisciplinary studies as a site of cross-currents of disparate knowledge devoid of metanarrative, Lyotard underscores the centrality of brainstorming and teamwork in the understanding of these new forms of knowledge. These operations, uniquely human and incapable of machine replacement, probably represent the biggest new trend in academia and, as a consequence, necessitate the allocation of more professors, students, and resources. Here, the professor doesn’t serve merely to “transmit established knowledge” (53); he or she is a facilitator of new connections and a steward of increased teamwork. Replacing the interdisciplinary professor with a machine would create the exact type of rigid framework of knowledge that interdisciplinary studies seeks to undermine and problematize, thus rendering it inefficient, wasteful, and backward.

Lastly, and briefly, I found Lyotard’s ultimatum to “give the public free access to the memory and data banks” to be at once prescient and reassuring. In a time in which net neutrality and access to information are constantly being threatened by the intrusion of governments, corporations, groups, and individuals, Lyotard’s early call for openness and free access rings more true and important than ever.

Some Questions

I must admit to being rather mystified by Lyotard's report, specifically in my attempt to follow his individual terms throughout, which seem to mutate as he pits them against each other in new ways. Most naggingly, I seem to have missed the point where the notion of system performance (i.e., efficiency) became system performativity (i.e., that of linguistic pragmatics). I'm not sure I see how the postmodern system of science operates through performativity, especially given that a) Lyotard mentions on page 41 how the postmodern world is "all about" legitimation not based upon performativity and b) he argues that the "act" is no longer good enough to validate knowledge, but that it must show value with regard to efficiency.

I also don't see how the new system lacks a metalanguage (64). (I can see how it might lack a sweeping metanarrative, but those two ideas seem to differ.) If anything, the postmodern institution(s) sweeps all forms of knowledge into one level playing field determined by a strict language of efficiency and capital. Lyotard also mentions (52) the growth of cross-disciplinary studies (concomitant with the conversion of the university into a kind of productive machine) which outstrips the obsolescing modes of academic specialization. I definitely see a kind of economics as an overarching metalanguage of the sciences, regardless of linguistic divisions at lesser levels.

There are, of course, these moments of extreme specificity within the system. These are the productive play of dissent and paradox, which reminds me of Derrida's différance (ultimately meant not to dismantle reason/metaphysics but to make it more productive in its own way). Derrida, in different ways than Lyotard, represented a strong opposition to Habermas's rational consensus community. Both theorists sought a more productive, if negating, specificity within a greater system of language. However, Lyotard's system seems ultimately more uncompromising and terrifying.

It is also unclear how historically minded Lyotard is being when he speaks of the "system." He certainly has a sense of history when speaking of the systemization of sciences, but I also get the sense (especially when he is critiquing Habermas) that he is implying something universal about the way that people interact in language games--man's designs on man that is only now coming to full fruition.

Language games, the legitimation of knowledge

In immediately designating the field of his text as knowledge in computerized societies, Lyotard opens a space for cybernetics to both implicitly and explicitly inform what he is saying. I didn't realize until section 6 that 'knowledge' is translated from savoir and 'learning' from connaissance, a distinction that I understand to be quantitative vs. qualitative (more informally, having to do with familiarity). For this reason I'm interested in Lyotard's assertion that in the next general transformation, the nature of knowledge "can fit into the new channels, and become operational, only if learning is translated into quantities of information" (4).

This radical proposition is then developed by Lyotard's discussion of how knowledge is legitimated; he suggests that 'traditional' theory is more invested in narrative, whereas 'critical' theory is more concerned with self-reflexive questions. The notion of credibility (extending to Lyotard's assertion that postmodernism is incredulity toward metanarratives) and of who is authorized to be believed stands out to me as a point through which to begin unpacking the text. The continued references to games, and language games in particular, really intrigue me, especially in terms of how the rules of the game are legitimated, and then what that game legitimates (a passage that interested me was on p. 24). In addition, it is interesting that Lyotard uses the term 'performativity' in the text in very much the way that the concept of performative utterances was set out by J.L. Austin, and yet that he also uses performativity with respect to systems -- "the optimization of the global relationship between input and output" (10). With regard to this question, I was particularly interested in section 12 and pages 64-65. I am somehow tenuously linking the idea that knowledge can be legitimated through performativity to the idea that the disembodied definition of information set forth in the Macy Conferences created rules for a game wherein, if a technology could be developed with this definition as its 'basis,' the basis was legitimated. The idea of 'moves' is also fascinating.

Because it is a report on knowledge, it is interesting to me how self-referential Lyotard gets about the idea of the university, or even with his examples, e.g. "The university is sick." Also, "Copernicus states that the path of the planets is circular" is an interesting choice because so much of Lyotard's discussion has to do with knowledge possibly circulating as currency in the future, with all attendant questions of access and power.

information wants to be free (does it? can it?)

In the final paragraph of "The Postmodern Condition," Lyotard argues that the computerization of society seems situated between two possible futures: on one hand, the computer could become "the 'dream' instrument for controlling and regulating the market system...in that case, it inevitably would involve the use of terror" (67). On the other hand, the computer could "give the public free access to the memory and data banks" (67). His preference clearly lying in the latter option, Lyotard imagines how "language games would then be games of perfect information" (67) but not (paradoxically) predictable, as "the reserve of knowledge... is inexhaustible" (67).

I find it ironic, if not problematic, that Lyotard (with so much against the essentialism of the metanarrative) could distill the question of computerized society down to such a simple binary. Is this really an either/or equation? Either the terror of information used against the individual, or the individual empowered by access? In the binary, one can already see the science fiction narrative forming: on one side, an institution whose purpose is to control information ("knowledge in the form of an informational commodity indispensable to productive power" (5)), and on the other, individuals who fight to make it free, info-thieves/hackers who take information from the rich and give it to the poor. Even if you haven't seen a specific permutation of this narrative as a sci-fi film, let me recommend the 1995 film "Hackers," where New York hipsters are framed by Lyotard's feared "multinational corporations" (5).

Already, the essentialism of either reality is beginning to look absurd. It seems impossible that society could ever achieve either of Lyotard's spectral ends. Instead, society is left between the two by forces pulling it towards the extremes.

A brief look at contemporary cyberculture reveals this exact scenario. Institutions like Harvard or MIT who attempt projects like Open Access and Open Courseware respectively are frequently rebuffed by their professors and their publishers, whose exclusivity to the dissemination of this information equates to financial gain.

Not being an economist or an information theorist, I cannot adequately address the question that's really bothering me, which is, basically: does information retain an intrinsic value if it's freely available?

Postcapitalist wisdom would suggest no: the secret is more valuable than the known fact. But when we try to imagine the world of Lyotard's open informational access, the question must be restated. And if that world begins to emerge, then, like a library of books overwhelming a researcher, the value would seem to shift from the question of availability to the question of discovery and aggregation. This is to say that if "knowledge [was] produced in order to be sold" (4), does information now exist to be 'peddled' and 'distributed'? And if this is true, then the question is not one of open information but of open/optimized means of getting at information.

google/ge decision making

I read about a week or so ago about how Google is embarking on a smart grid development plan in partnership with GE. Their plan is to restructure the nation’s electrical grids and access “real-time information about home energy use” in order to make energy consumption more efficient. This endeavor is relevant to aspects of both the Lyotard and the Licklider readings this week.


With respect to the Licklider, the smart grid would be an example of man-computer symbiosis on a massive scale. One of the key elements that was not sufficiently advanced in 1960 to develop a system of man-computer symbiosis was real-time cooperation and information exchange between human and machine. The smart grid would use real time information about energy consumption to increase efficiency of the input/output matrix of (electrical) power.


To be enacted, the smart grid first of all requires a lobbying effort on the part of Google/GE. Such an effort, if successful, would be an example of the process by which multi-national corporations use their privileged access to information to make decisions that broker power in arenas that were formerly the purview of government. Lyotard indicates that this process of "economic redeployment" based on the commodification of information "goes hand in hand with a change in the function of the State" (14).


Google's official blog mentions that currently "we all receive an electricity bill once a month that encourages little except prompt payment," implying that with the smart grid information will also be flowing with this electricity. But the fact that the information will flow about home energy back to the grid recalls Lyotard's imaginary "flows of knowledge traveling along identical channels of identical nature, some of which would be reserved for the "decision makers," while the others would be used to repay each person's perpetual debt with respect to the social bond" (6).


The smart grid, which is not a vision or goal exclusive to Google, seems to me to be an extreme example of the computerized routinization of such clerical and mechanical tasks as Licklider complained took up so much of his time (5), which Lyotard calls "the functions of regulation [... being] further withdrawn from administrators and entrusted to machines" (14).


The Google smart grid appears not only to be an attempt to monopolize access to information about energy use by homes on the grid, but to monopolize the supply of the electricity that powers the computer age, whose information is increasingly filtered through Google's search engines, and now its browser.

On the Role of Knowledge in Late Capitalist Society

Postmodernism, as Lyotard so aptly puts it, is an incredulity towards metanarratives. This is the basic thesis of his entire report, and it moves on from here to substantiate this claim, which is essentially that all our great assumptions have been undermined and that we are in a state of diffusion and multiplicity. Central to this is the role of knowledge and power.

In many ways, Lyotard's report has a lot in common with Althusser's essay on the Ideological State Apparatuses. The role of the educational institution in both cases is as a sort of primer for a specific role in society. Whereas Althusser stresses its function as a reproducer of the conditions of the modes of production, Lyotard stresses its role in relation to wealth and technology and the changes in the meaning and motivation for education. No longer learning for learning's sake: he mentions that the most important question on the mind of the student is "what use is this?", and in this way, there is an alignment between the mind of the individual and the mind of the state. By focusing on the practical aspect of knowledge as a commodity and by funding those research ventures that aim to increase our technological efficiency, we enter into a new type of economy (and this has been stated oft before). From the importance of ownership to the importance of access, this motif appears again here in Lyotard in the form of knowledge moving from the value of being "true" to the value of being "useful."

These sorts of things had been discussed somewhat in Wiener concerning his research for war technologies but receive a more thorough theoretical treatment in Lyotard. As an information economy, the emphasis of the state's exertions is not just on the exploitation of others for its own benefit; it's how to exploit them better and more efficiently, and this is achieved through the development of "useful" knowledge so that new technologies may be created to allow a nation to stay at the cutting edge of efficiency, power, and control relative to its citizens and other nation-states. Education still serves the purpose of training the bureaucrats and administrators of the government, but its more general purpose has a much broader scope of possibility.

This goal of the state of total control, however, cannot complete itself. This is brought up quite often by Lyotard through his many references to Gödel's incompleteness theorem, so that one of the most influential debates in science becomes a meta-debate concerning its own legitimacy in light of quantum mechanics and the like. This also frames facts of history. The Roman Empire and the British Empire both saw their legitimacy and ability to exercise effective control crumble as they became more totalizing in their mission of territorial control and subordination.

Other things to consider:
-Language games and the agonistics of discourse
-The crisis of legitimacy in science and its use of narrative knowledge for legitimation while simultaneously disdaining it as non-verifiable

I end with a quote:
"Knowledge is no longer the subject, but in the service of the subject: its only legitimacy (though it is formidable) is the fact that it allows morality to become reality." (36)

Bildung

In the 12th section of his book, Lyotard posits that in the postmodern age of technocracy, "what we are approaching is not the end of knowledge--quite the contrary. Data banks are the Encyclopedia of tomorrow. They transcend the capacity of each of their users. They are "nature" for postmodern man." (p. 51)

The argument is that an essential transformation has occurred within the university as an institution of legitimation. Under the stressors of new technological development, namely information technologies, the university has necessarily had to reorient itself in order to maintain its position as a bildung project. Given the ability to access vast stores of information, a collective memory of sorts, it becomes the student's imperative to be able to 1) access and navigate this information, and 2) formulate new connections between this data, what Lyotard calls "imagination." (p. 52) The "Spirit of speculation" of Humboldt's university model was sought after in a regimented fashion, but this Spirit has now become a spectre--what matters to the postmodern student is the ability to transcend barriers in creative ways. What is also augmented is the bildungsroman project as it is traditionally understood.

This project was traditionally composed of two currents unified by the Spirit: the acquisition of learning by individuals, and the training of a "fully legitimated subject of knowledge and society." (p. 33) Bildung could be seen as a tool of the state, that is, as a nationalist process that aimed to endow new generations of its population with the necessary tools and skills to perpetuate the nation. Lyotard rejects this notion, however, in stating that thinkers such as Humboldt held up the speculative spirit as outside of authoritarianism. The postmodern condition changes this, for a rejection of such grand-narratives forces a crisis in the bildung project: science is removed from the game of self-legitimation by technology. This allows the process of the "growth of power, and its self-legitimation... taking the route of data storage and accessibility, and the operativity of information." (p. 47)

The bildungsroman project has changed from the personal growth of an individual resulting from that individual trying to find her/his place in a homeostatic society, to that of the postmodern bildungsroman wherein the protagonist finds her/himself constantly displaced as the narrator within the stores and flows of information which constitute a dynamic society. Yet it maintains a distinct bourgeois character: the first carried assumptions about the relative autonomy and integrity of the self (individualism), while the second is embedded within the material culture of aristocratic access. What has changed, however, is the means and the ends of the narration. For postmodern, cyborg subjects, the bildungsroman is about learning what it means to be human, a process mediated by technologies--the process is paralogical.

Lyotard argues that paralogy avoids terror, or the arbitrary removal of certain voices from discussion. It also operates on locality, thus allowing for an individualist ethos to surface: it privileges the "here and now," a product of the university's attempt to stimulate imagination. What endangers the bildungsroman of the postmodern era is precisely its bourgeois nature: the danger of people being left out of the loop, of being unable to participate--thus Lyotard's request on the closing page of his essay to "give the public free access to the memory and data banks." (p. 67) It cannot be assumed that those without access to technology are incapable of producing "local statements"; therefore the material nature of technology has to be worked through before paralogy can become a useful tool of new knowledge production.

Lyotard's Postmodernism

I find I am more interested by what Lyotard doesn't say than by what he does. For example, why is there not a single mention of Barthes in the book?

Lyotard argues that concurrently developing streams of thought, particularly over the last few hundred years but traceable back to Platonism, have led (or are leading) to a crisis in contemporary knowledge--specifically, in the legitimation of Western knowledge. He isolates two models for comprehending the actuality of experience, which I suppose can be summarized as the scientific and the speculative. The scientific method assumes that phenomena can be observed empirically and accurate deductions can be made in order to formulate "laws," which would be legitimate by virtue of the scientific method. The trouble is that these deductions require the presupposition of universal laws in relation to which we may arrange the information we have acquired, laws which must be determined by (demonstrably fallacious) scientific reasoning. On the other hand, the speculative model (which includes such phenomena as German idealism), which attempts to create a self-modulating metalanguage to increase the theoretical and functional accuracy of its utterances, suffers from essentially the same problem--the lack of a good referent. Lyotard speculates that the function of narrative as the overarching reality of human existence (which is to say, the inevitability of the passage of time) can serve as a reference in itself, but I don't quite understand what he wants us to do. It seems fair at this point to say that the only reality to which we have access is the reality of language, or rather, communication (which can be defined as the awareness of the passage of time followed by the conscious expression thereof), to which we can consign the entirety of the arts and sciences, and all the words that people have said. 
I guess the project of postmodernism is to cautiously but immediately utilize the new potentials of a computerized world to map out the structure of language in a way that can inform the general population as a whole--while avoiding the pitfalls of, for instance, Stalinism and Nazism.

Lyotard, Nationalism, and Legitimacy: Questions

In discussing the problem of legitimacy, Lyotard relies on Humboldt, who denotes "a...threefold aspiration" necessary to the "training of a fully legitimated subject": "deriving everything from an original principle,...relating everything to an ideal,... and...unifying this principle and this ideal in a single Idea," (33). This aspiration, as anything, is subjective, so how can any system, state, or nation, be effectively defined as legitimate without adhering to this threefold congruence? Is Lyotard's delineation of the Occident (versus, as he leaves to be implied, the Orient, or any non-Western culture of thought) meant to establish a binary opposition, or does it go further with the aid of irony and speak against such oppositions? More simply, is the way Lyotard frames his arguments part of his critique?

In stating that "the language game of legitimation is not state-political, but philosophical," (33), what is at stake in his construction of another opposition? Is this disingenuous? Is there really a separation between the political and the philosophical?

Lack of faith

Because our minds are completely informed by signals conducted along our nerves, our minds' union with the physical world is tenuous. For the most part our perceptions reflect reality, or at least we all agree on the same misconceptions. This kind of thinking isn't new, either. Illusions and magic have been around for as long as the human race. But problems arise when perceptions which do not reflect reality are made to affect reality through social structures and systems.

Early in human history we made an effort to defend ourselves from dependence on information by creating "rigorous systems" like geometry or law, which tried to reduce the chaotic and unreliable world to manipulations of symbols or abstractions.

As Turing proved, machines can be just as good as humans at manipulating symbols, and these abstractions necessarily leave out the chaotic or exceptional parts of reality. This was surprising at the time, but maybe it shouldn't have been. Looking back, the whole idea of rigor is in fact to suppress human failure, exceptions, and chaos.

Because they are easy to work with, we have become dependent on rigorous systems to tell us how to live. While some systems like math and engineering depend on consistency with the real world to be useful, systems like law, government, and finance are not necessarily rooted in any aspect of physical reality. Rather, they deal with information transactions between information processors, with relationships between people.

But as Maturana described our physiological structure, minds are isolated from reality by nerves which stimulate parts of a preexisting biological structure. The information upon which we rely to shape the immensely consequential transactions of law or government is not reliable but subjective, and, because the computer mediates much of it, malleable. The structures on which we rely for order in an inherently disordered world are predicated on a stability of information which we as a society are beginning to realize is simply not there.

With her description of the flickering signifier Hayles uncovers one of the central paradoxes brought on by the "changing state of knowledge" (Lyotard 3), which really is the changing state of information. If what we perceive can change instantly through nontransparent chains of referentiality and consequence (as in a computer), we cannot and should not have faith in our perceptions. Information and our manipulations of it have been demonstrated not to be rooted in reality but rather in ourselves.

As a result of this lack of faith in information, and in light of the present financial crisis and the theatrics of the upcoming election, I've begun to wonder whether we should maintain faith in the informational cathedrals of law, government, and finance we were born inside. If any information could do for our perception of reality (as shown by the computer), and if the existing institutions are failing or misleading, then the preexisting should have no more weight than the as-yet-unimagined alternatives.

The Terror within Postmodernism

My question has to do with the terror that haunts Lyotard's writing on postmodernism. The terror described is the threat of eliminating the opposing player in a language game (46). "The stronger the "move," the more likely it is to be denied the minimum consensus, precisely because it changes the rules of the game upon which consensus has been based" (63). In this "realm of terror" the social bond is destroyed since the game is being played at the level where the player is either operational or disappears (xxiv). ("Do this or else you'll never speak again," 45).

In the opening of the book where Lyotard presents and grapples with the question of "Who Will Know" (6), he discusses the commodification of information in the postmodern age. Information is now more mobile and subject to piracy. The effect upon the subject of such information is a feeling of perpetual debt with respect to the social bond. The subject's work is never over within society since information at any moment may move on or be stolen away. As in the language game played in "the realm of terror," the postmodern subject must be efficient or lose his or her place within the game.

Yet if terror breaks the social bond, while the state of information today perpetually binds the subject to it, then what is the role of terror within Lyotard's exploration of knowledge? Lyotard ends his book calling for free access to databanks and memory in order to allow for politics that respect the public's desire for justice and desire for the unknown (67). This point is in opposition to the narrative of the computerization of society that would involve the use of terror in its "dream"-like, totalizing explanation. Does this call for open source information not contradict his earlier point on the commodification of information? Is terror a positive, regulatory force or a negative, harmful force within knowledge structures? How does it function in relation to the power of knowledge institutions in postmodernity? Finally, is terror the price one pays for an illusion of some real unity, as Lyotard mentions in the appendix?

One point that did trouble me, relevant to Sinje's comment below, is that although most of the text seems descriptive of the postmodern times in which Lyotard finds himself, injected into this is a very prescriptive notion of the ways in which information should be treated and by which science and society should construct themselves (through little narratives, via paralogy, local rules, &c.). Although I'm easily enchanted by utopian visions of multiple narrative, decenteredness, and open access to information, it seems as though there is still in the State an imposing metanarrative. I don't know that Lyotard explicitly denies this-- in his arguments on power one can easily read politics and economics, but omissions like this make some of his grander ideas more difficult for me to seriously apprehend. He makes statements that are profoundly apt in today's society: "The games of scientific language become the games of the rich, in which whoever is wealthiest has the best chance of being right" (45); but then these fade away amidst some of his broader ideas.

So considered, perhaps a single place from which I might be able to better work through his theory as a whole is where he describes reality on page 47. Is this sentence a definition of reality, even if necessarily a local one? Or simply a comorbidity of it? Because his concept of reality here is important in structuring elements of his argument, whether he is in the very delineation of terms being descriptive or prescriptive is very important. That is, do we take him to mean: "since 'reality' [qua physical world experienced through senses, &c.] is what provides the evidence..." or instead that "reality" as a term is here meant to mean "that [abstract entity which] provides the evidence....". Which perhaps comes down to his notion of the real and thus circles in some way right back to descriptive/prescriptive....

Other things: I like paralogy as the model (you know, of course: 'model'-but-without-the-outside-ideal-ness-that-model-connotes,-albeit-more-groundedly-denotative-than-'metaphor'). Unrelatedly but also a neat potential point of discussion: Bill Gates's "creative capitalism" sprung to mind at several moments as I read this text, although whether it owes a lot to or makes a travesty of the relevant parts I'm yet undecided.

Also interesting is that although cybernetics, information, and computerization figure into Lyotard's ideas, I don't think he anywhere talks about computing machines or the physical objects of technology that are thereby implicated, other than the "memory and data banks" that would store I guess all information society knows to that point. Although I can speak only to the American and not French history of computers, in 1979, computers were far from common but they weren't exactly new. If anything, they were more expensive and less likely to be owned or even used by the average person than one might consider a computer today. So besides some of the power dynamic behind computers and programming that I'd like to consider further and maybe discuss in class, I wonder how Lyotard saw "free [public] access" to computerized information taking place.

Saturday, September 27, 2008

So up, for waging a war on totality

In an attempt to wrap my mind around Lyotard's argument, I seriously wonder where he sees the possibilities of intervention in his theory. If knowledge and the production of scientific truths are essentially questions of money, which results in power, which in turn can continue to produce more truths and also threaten with terror any player who does not comply with the pre-existing system, meaning that he/she will be excluded from the language game, then how and where does he actually see the possibility for an intervention?

Is it in the "nature" of this new scientific discourse not to comply with any scientific method, which would be to find a consensus among peers either within the rules of the already existing game or by changing the rules and therefore also the game through a new revelation, but to legitimate its existence and its way of questioning through paralogy? Does he mean by paralogy in this context that this new scientific system is somehow working against its own legitimation or previous forms of scientific legitimation? And how does this interact with narratives? While part of postmodernism is surely the abandonment of metanarratives, he also seems to place quite a lot of emphasis on the "little narrative," since he refers to science inventing its own narratives, and also stresses the point that in the humanist system the metanarrative could not be controlled, as it was a product of a discourse among scholars, who also had the possibility of not agreeing to certain policies.

If the players are to choose their "moves" "locally" how can they avoid terror? Can we maybe construct new narratives, maybe not metanarratives, but still narratives which are important enough to get noticed? And why does he attack Habermas so viciously?

I was also much reminded of Foucault, and how he traces the historical development by which our time's method of determining truth resulted in the model of the "épreuve"; and also of his concept of power/knowledge.

definitely an exciting text,
a wondering social atom

Monday, September 22, 2008

statistics - a science of the state

While I found that Hayles' text limned many of the implications of cybernetic theory's erasure of embodiment in favor of informational pattern, I think that she misses an opportunity to further intertwine the cultural, scientific, and political stakes of information theory by leaving the genealogy of statistics out of her discussion. As we've seen in both Wiener's and Hayles' writings, statistical mechanics was indispensable to the scientific production of the universe as fundamentally probabilistic. But before Gibbs could wield statistics to breach the Newtonian paradigm in physics, this science of probability was forged in service of the state (from which statistics derives its name).
While a thoroughgoing history of statistics is beyond my grasp, commentaries by Hacking ("How Do We Write the History of Statistics?") & Foucault (e.g. History of Sexuality, "The Right of Death and the Power Over Life") present some of the stakes of a new mode of scientific reasoning and its application to (or, perhaps, constitution of) a state's population.

Foucault claims that the aggregation of data about individuals engenders a form of thinking about society in which the population of a state can be analyzed & targeted by power/knowledge.  This coupling of the collection of data about & the control of the population prefigures Wiener's claims about society's fundamentally informational character & first-wave cybernetics' preoccupation with homeostasis.
I contend, following Foucault, that the emphasis placed, on one hand, on the population (characteristic of biopower) & , on the other, the machine-like regimentation of individual conduct (discipline) transgresses the boundaries of the autonomous liberal subject, setting an important precedent to the cybernetic challenge to the subject of humanism. Hacking points to the rise of social laws posited as statistical by their very nature, in light of which social scientists seriously questioned the autonomy & free will available to any individual to resist these laws.
There is also a question of feedback loops in that categories of analysis (the recidivist, the homosexual, the suicide), abstracted from embodied instantiations of certain behaviors, are then available for individuals to reincorporate and/or resignify.
Hacking cites statisticians' obsession with "immoral" activity, which leads to ideas of the normal & the pathological, the norm & deviation. How might this binary interact with  pattern/randomness as presented by Hayles? 

Although these observations are somewhat scattered, I hope that they at least mark a site of exclusion in Hayles' text that we could explore further in class.

Sunday, September 21, 2008

Post-Humanism?

I'm still trying to wrap my head around this week's readings as well as last week's readings. Science confuses me greatly and I don't think I grasped these concepts as well as I should. I hope these comments are cogent.

First, let's look at the statement on page 7 of Hayles, "Henceforth, humans were to be seen primarily as information-processing entities who are essentially similar to intelligent machines." 

I think that statement is meant to be a summary of the conclusion of the project that founded cybernetics. However, I don't think that statement can resist problematization and critique. I suppose that the biggest argument in favor of the statement is a reasonable application of the Turing Test, or configuring machines that can act like and perform human functions. I can agree that this type of machine could be intelligent. But I fail to see how a machine that can process information as well as a human can be said to be essentially similar to a human. Essentially, which is italicized in the original text, is a loaded word. It assumes that the claim preceding it, that humans are primarily information-processing machines, is true. It seems to me that humans are characterized by more than that, especially in light of our ability to make decisions in the absence of any/all information. So the term essentially plays the additional role of essentially essentializing. Not to mention the fact that since Hayles bases her conception of post-humanity on a strict definition of the term intelligent machine, there is doubt whether or not it's even a realistic prospect.

2. On page 30, Hayles writes: ""Language is not code," Lacan asserted, because he wanted to deny one-to-one correspondence between the signifier and the signified. In word processing, however, language is code." Let's add to that and say that HTML is code, Ajax is code, and C++ is code. The problem here is that Hayles conflates two very different systems in order to prove her point about floating signifiers. Language as a communicative apparatus is a symbolic and signifying system; how well this signification occurs is, as "asserted" by Lacan, difficult to fathom. I might be wrong here, but I tend to think of code as something quite unlike a symbolic and signifying system. In fact, code is by definition a one-to-one representational system. There is nothing resembling the big Other in an Ajax script. Hayles might be correct about the existence of a floating signifier, but her proof of concept here is a little off.
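To make concrete what I mean by a one-to-one representational system, consider an encoding like ASCII/Unicode: every character maps to exactly one number and back again, with no surplus or slippage of meaning. A minimal Python sketch (the function names here are mine, purely for illustration):

```python
# A code in the strict sense: an invertible, one-to-one mapping.
# Decoding recovers the original exactly, with no ambiguity.

def encode(text: str) -> list[int]:
    """Map each character to its unique Unicode code point."""
    return [ord(ch) for ch in text]

def decode(codes: list[int]) -> str:
    """Invert the mapping exactly: one code point, one character."""
    return "".join(chr(n) for n in codes)

message = "language"
assert decode(encode(message)) == message  # the round trip is lossless
```

Natural language offers no such guaranteed round trip between signifier and signified, which is exactly the gap Lacan's assertion points to.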

3. Speaking of which, on 31 let's look at the text starting with "If I am producing Ink marks..." I feel as if there is a somewhat large ambiguity here concerning the difference between hardware, software, and wetware. It's true that a word processing program will allow me to do things that I cannot do on a typewriter. However, that ignores the fact that code is just software, an information pattern that has to be run on a piece of hardware, i.e. a computer. The typewriter is limited insofar as its hardware = its software. The computer is limited insofar as its software is dependent on its hardware for representational activation. I think that distinction needs to be made clearer or her argument about presence vs. pattern becomes basically incomprehensible.

pattern/randomness dialectic

I found Hayles’ discussion of the displacement of the presence/absence opposition by the pattern/randomness dialectic to be very pertinent to the distinctions between the definitions of information in relation to entropy in Weaver (or Shannon) and Wiener. After discussing the two definitions at such length last week, I had the impression that I agreed with Weaver that the more entropy a system has, the more information a message has. But it wasn’t until I read How We Became Posthuman this week that I had a clearer idea of what I found lacking in Wiener’s formulation.


Towards the end of class last week I mentioned that I thought that the theory of evolution by natural selection presented a small “explosion” in Wiener’s text. Natural selection, of course, comes about through random mutations of genes that make an organism more fit to survive in its environment, more capable of swimming locally upstream against entropy. Entropy, however, is the very condition of those mutations. Wiener points to a similar process (learning) when he cites “Ashby’s brilliant idea of the unpurposeful random mechanism which seeks for its own purpose” (38). He doesn’t reject the idea that entropy is a condition of evolution or of learning but he does indicate that in the long run all instances of local organization will meet their “eventual doom” in the face of entropy. It seems to me that it is in the long run that Wiener differs from Weaver’s idea that as entropy increases, information increases.
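Weaver's idea that more entropy means more information can be made concrete with Shannon's formula H = -Σ p·log2(p). A small sketch, with toy distributions of my own invention (not drawn from either text): a source that almost always emits the same symbol is predictable and carries little news, while a maximally unpredictable source carries the most.

```python
import math

def shannon_entropy(probs):
    """Average information per symbol, in bits: H = -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Nearly deterministic source: low entropy, little information per symbol.
skewed = [0.97, 0.01, 0.01, 0.01]
# Uniform source: maximum entropy for four symbols, log2(4) = 2 bits.
uniform = [0.25, 0.25, 0.25, 0.25]

print(round(shannon_entropy(skewed), 3))   # well under 1 bit
print(round(shannon_entropy(uniform), 3))  # 2.0
```

In Wiener's framing, by contrast, that same entropy is the disorganization that local pockets of order (organisms, messages) swim against, which is why the two accounts assign opposite signs to the same quantity.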


Contrary to Davis, I thought Hayles’ discussion of mutation in the pattern/randomness dialectic effectively shed light on how randomness evolves into pattern (32-3). She points out that mutation conserves the idea of pattern while disrupting it because it replaces only a small segment of a long chain of conserved digits.
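Hayles' point about mutation can be sketched computationally: replace a short random segment of a long, highly patterned digit chain, and the overall pattern survives nearly intact. This is a toy model of my own, not an example from Hayles.

```python
import random

def mutate(chain: str, start: int, replacement: str) -> str:
    """Replace a small segment of the chain, conserving everything else."""
    return chain[:start] + replacement + chain[start + len(replacement):]

random.seed(0)
chain = "0110" * 250  # a long, highly patterned chain: 1000 digits
segment = "".join(random.choice("01") for _ in range(10))
mutant = mutate(chain, 500, segment)

# Count how much of the pattern was actually disrupted.
changed = sum(a != b for a, b in zip(chain, mutant))
print(changed / len(chain))  # at most 0.01: 99% of the pattern is conserved
```

The mutation registers as information (a surprise against the expected pattern) precisely because the surrounding pattern is still legible, which is the sense in which randomness here evolves into, rather than simply destroys, pattern.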

connecting literature and science, Weiner contextualized, posthuman gender

Katherine Hayles’ How We Became Posthuman is most successful in its effort to historicize and contextualize the three major movements in cybernetics and its conceptual evolution from “homeostasis” to “reflexivity” to “emergence.” In particular, Hayles’ discussion of Wiener’s Human Use of Human Beings provides a helpful perspective from which to understand the connections between Wiener’s ideas, popular culture, and theories of language and semiotics. One connection that I found particularly cogent was that between Ferdinand de Saussure’s conceptualization of “la langue” and Wiener’s assertion of the probabilistic nature of communication. For Saussure, the selection of a specific sign is marked by a choice between numerous signs in an expansive field; the choice of a specific word to communicate an idea, then, is ultimately the choice not to use all of the other signs in the field. Each sign derives its meaning from its relation to other signs in that field, not from a direct reference to an external “reality.” Wiener’s conception of communication as a fundamentally probabilistic act also relies on the notion that messages derive their meaning relationally, not referentially. As Hayles concludes on page 98, “For Wiener no less than Saussure, signification is about relation, not about the world as a thing-in-itself.”


 Another interesting if unrelated question lies in Hayles’ discussion of human beings as informational patterns. If humanness is constituted as an informational pattern rather than an embodied enaction, is the human body—the corporeal proof of humanness—evacuated of meaning? When bionic cyborgs merge human flesh and machine intelligence, is gender reduced to an antiquated organizational feature, a mere vestige of an embodiment once central in the formation of individual identities and gendered subjectivities?


I’m lastly interested in the critical intersections between cybernetic theory, information science, and cyberpunk literature that Rachel alluded to in her post “narrative, subjectivity, signification.”


Rachel says:


Within the context of this project, [Hayles] is also interested in how culture and science circulate through each other, such that literary texts both embody scientific assumptions and enable further research in certain directions (21).


I wonder specifically about the extent to which literary images of “futuristic” cyborgs have impacted the ways in which scientists and engineers have themselves conceived of the forms and functions of their cyborg creations. Literature has certainly followed the evolution of cybernetic science; have fictional texts reciprocally spawned cybernetic imaginations or influenced the science itself? Where are the boundaries between the two, or do they, as Rachel asserts, “circulate through one another?” A feedback loop, perhaps? 

platonism and randomness

My favorite part of How We Became Posthuman so far is Hayles' description of the Platonic backhand and forehand. The first process, the Platonic backhand, describes the way in which the world is transformed in the human mind from dirty special cases to beautiful abstractions. She claims that humans see the world's inconsistencies as exceptions to rules because they create descriptive systems and treat them as prescriptive ones. The second process, the Platonic forehand, describes the recent tendency among such theorists as Stephen Wolfram to claim that simple rules generate the complexity of the world through sufficiently gnarly cellular automata or other similar formal systems.

Over the summer I read a book by Gregory Chaitin called The Quest for Omega. In it, he discusses the mathematics of randomness and undecidability and does a wonderfully accessible job of proving that all sufficiently rich formal systems must be incomplete. He also reveals the philosophical implications of such proofs. In his mathematical description he shows that finite axiomatic systems, which are an abstraction of all systems that start with axioms and rules and then derive 'truths' from them, are necessarily incomplete. That is to say, there will be facts that are true in the real world but unprovable in such systems. The systems he treats in his proof are analogous to the systems which the Platonic forehand generates.

According to Hayles, applications of the Platonic forehand have worked to separate information and materiality. I wonder if it would be possible to demonstrate that all rigorous abstractions generate fallacies like these. The universe has been shown to include randomness as a constituent by quantum mechanics, and mathematics has been shown to include randomness by Chaitin. Any extraction of abstraction from a system which includes randomness must fail with nonzero probability.

I believe that applications of both the Platonic forehand and backhand lead to inconsistent theories of the world. In the search for truth it is best to keep truth in your sights. To ignore the randomness, the "noise", or the "fuzz" (as Hayles calls it) is to ignore truth. With no amount of effort will humans be able to compress the universe into a series of axioms and theorems.

Posthuman Photography


"The human body is no longer an absolute entity, fixed by nature and destined to be eternally replicated...the era of genetically engineered body has begun. We are beginning to think of the bodies we inhabit in much the same way as we do the clothes we wear - as changeable according to climate, task, fashion, and whim...We are allowing our bodies to be transformed..." (From The Century of the Body: 100 Photoworks 1900-2000, Edited by William Ewing, Thames & Hudson, 2000). 

On a break from Hayles’ How We Became Posthuman this past weekend, I was researching another project and came across this opening to a photography book concerning the human body. The introduction continued on the topic of how artists, specifically photographers photographing nudes, are reconsidering this reconfigured body of the digital age. It also pointed out that photographs of the body lie at the core of both our dreams and our nightmares, since in representing the body they evoke both enjoyment and fear of it. The picture accompanying this post, Pierre Boucher’s “Electra” (1961), evokes the terror and pleasure that Hayles speaks of in the prospect of becoming posthuman.

The photograph captures the paradox between the pleasure and terror of the prospect of posthumanism because it illustrates Hayles’ argument that the “human being is first of all an embodied being” (283). It is pleasurable to see a new way of thinking the nude, yet the machines in the photograph cannot articulate the beauty of the human form. In the conclusion to her book, Hayles points out that “the body itself is a congealed metaphor, a physical structure whose constraints and possibilities have been formed by an evolutionary history that intelligent machines do not share” (284). I see this picture as a perfect illustration of that statement.




Disembodied Movement, Structure Info. & Web 2.0, Subversive Language

1. Hayles discusses the shift toward disembodiment by cybernetics and how that shift relates to subjectivity. For cybernetics to equate humans with machines, subjectivity needs to lose its body.

Hayles thus brings up Gibson's "point of view" as a disembodied way of experiencing subjectivity:
"pov is a substantive noun that constitutes the character's subjectivity by serving as a positional marker substituting for his absent body" (37).

The "positional marker" thus navigates through information to create the informatic narrative (or pattern) that now supersedes presence as subjectivity:
"Narrative becomes possible when this spatiality is given temporal dimension by the pov's movement through it" (38).
AND
"the dataspace is narrativized by the pov's movement through it" (39).

In this sense, what does Hayles regard as movement? She gives the example of Gibson's characters jacking into a neural network and traversing the system from there, but more specifically, what does disembodied movement constitute?

Could another example be the machine's tagging a corpus of text, or the human's click-history through Wikipedia? Are these informatic narratives as well?

2. After watching "Web 2.0 ... The Machine is Us/ing Us" for our first class, MacKay's version of information theory (which was apparently confined to British academics) piqued my interest.

The YouTube video suggests that the movement from a syntactic web (Web 1.0, dominated by HTML formatting of data) to a semantic web (Web 2.0, dominated by XML formatting of data) is forming a super-linked structure that we help create by defining relationships between data.

MacKay tried to push information theory from a contextless theory to one that involved selective information and structural information (in essence, providing semantics to raw data):
"The information content of this message, considered as selective information (measured in "metrons").... structural information (measured in "logons"), for it indicates that the preceding message has a kind of structure rather than another" (55).

Are these two shifts similar in nature? Does Web 2.0 incorporate this observer MacKay strives to include (or for that matter, reflexivity in general)? How were these two shifts' goals different/the same?
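For context on the quantities MacKay is extending: his "selective information" starts from Shannon's measure, on which the information carried by a message grows with the improbability of the selection. A minimal sketch (my own illustration; MacKay's units and notation are not reproduced here):

```python
import math

def selective_information(probability: float) -> float:
    """Shannon's selective information content of one choice, in bits."""
    return -math.log2(probability)

# Selecting one symbol out of 8 equally likely alternatives carries 3 bits;
# a rarer (more surprising) selection carries more information.
common = selective_information(1 / 2)    # 1.0 bit
rare = selective_information(1 / 1024)   # 10.0 bits
```

MacKay's structural information is precisely what this measure leaves out: it counts how improbable a selection is, but says nothing about the internal form of what was selected.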

3. Hayles also writes about cybernetics' attempts to equate humans with computers (at least in terms of function). To do this, cybernetics tries to erase all traits humans and machines cannot share (specifically the body). This leaves language (code, mathematical models) as a way of connecting the two. But machines cannot understand "ambiguous" language, since they need formal language to execute commands and still function.

"The common ground that humans and machines share is identified with the univocality of an instrumental language that has banished ambiguity from its lexicon" (67).

Use of language by human and machines creates a "universality [that] is achieved by bracketing or 'black-boxing' the specific mechanism" (60).

Yet I still assume language can be subversive in these situations (e.g., sending a doubly encoded message of which a machine understands only one meaning while a human can read two). Are only embodied beings able to decode these messages? Would such messages constitute a new anti-Turing test?
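The doubly encoded message imagined above can be sketched as a toy acrostic (entirely hypothetical, not an example from Hayles): a literal-minded machine parses only the surface sentence, while a human who knows the convention can also read the initial letters.

```python
def surface_reading(message: str) -> str:
    """What a literal machine reading extracts: the message as given."""
    return message

def hidden_reading(message: str) -> str:
    """The second meaning available to a human in the know: an acrostic."""
    return "".join(word[0] for word in message.split())

message = "run under networks"
surface = surface_reading(message)   # the machine's one meaning
hidden = hidden_reading(message)     # "run": the human's second meaning
```

Of course, nothing prevents a machine from computing the acrostic too; the subversion lies in the machine not knowing that a second code is in play, which is perhaps where the anti-Turing-test question bites.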

Wiener's withdrawal, Hayles' deconstruction

I'm interested in Hayles' depiction of how Wiener "struggled to envision the cybernetic machine in the image of the humanistic self" (86).  She observes how his alliance between the human and the machine, in terms of their identical capabilities to self-regulate, enabled a cyborg subject to emerge that can undermine the very autonomy realized by the liberal humanistic tradition.  Hayles does a good job illuminating the ambivalence that arises from Wiener's imagined alliance between machine and human.  The "horrified withdrawal" that Hayles envisions Wiener making when he sees his liberal subjectivity replaced by the cyborg subject captures a significant consequence of Wiener's cybernetic discourse (87).  Hayles describes how, within this discourse, Wiener attempts the impossible task of designing a cybernetic machine that will enhance the autonomy of the liberal humanist subject.  Wiener's extension of the human subject "into the realm of the machine" will therefore enact a new subjectivity that ultimately cannot be contained by liberal humanistic values (86).
Wiener's cybernetic discourse, in which information is linked to pattern without materiality, appears for Hayles to obscure a complementary and potentially emancipatory relationship between pattern and presence.  I'm interested in how Hayles reconstructs the cyborg subject within a cybernetic discourse that requires the deconstruction of, rather than the enhancement of or withdrawal from, the liberal human subject.

rejecting binaries

For my post, I wanted to focus on Hayles’ writing about the Flickering Signifier and the historical trend to move away from presence/absence to pattern/randomness.

I really liked Hayles’ skeuomorph in bringing code and cybernetics into the realm of structural and psychoanalytic linguistics by distinguishing Lacan’s floating signifiers from today’s more appropriate “flickering” signifiers. The fact that a signifier “can no longer be understood as a single marker” has an enormous impact, and not just on the realm of informatics.

However, I believe that her argument that flickering signifiers are an example of how we are progressing from presence/absence to pattern/randomness is very problematic.

The argument that we cannot know information without uncertainty, presence with absence, and pattern without randomness is a binary teleology that, I believe, is regressive when dealing with the multiplicity and complexities of “third-wave” cybernetics and code as language. The fact that we are still using negative relationships (pattern is pattern because it is not randomness) entrenches Hayles’ argument in a geminated world. I find it hard to see then where multiple levels and parallels, which are necessary for this flickering signifier, can exist if everything is still being reduced to a yin and a yang.

I do believe that she tries to flesh out her ideas more by introducing the concepts of “mutation” for pattern/randomness and “castration” for presence/absence. I wish she had expanded more on this, as I found her attempts to correlate mutation with castration very hard to follow and problematic, and the way she described mutation often sounded like the same thing as randomness. I do believe that it is this third factor, not the noise entering the system but the actual entrance of noise into the system, where these multiplicities are created. I think that instead of trying to understand something by seeing what it is not, we should focus on where the 1 and the 0 converge and enter into each other: that is the “catastrophe” where signifiers become signifieds and randomness evolves into pattern.

Humanism continued

The exposition that Hayles gives regarding the new prime binary of pattern/randomness as, in a sense, replacing the presence/absence binary of days past appears as a continuation of the humanist project rather than an opposition to it. The post- of the posthuman is more of a meta than an after, that is, it's not new in the sense of destroying the old as much as it is an improvement on the old, going beyond its methods to achieve its goals.

The mind of Cartesian dualism was always a strange thing whose existence was ambiguous and undefined. This mind has now become information as an abstraction that can exist in any substrate whatsoever. During the primacy of the presence/absence binary this area in between was kept undefined and ambiguous but with the content abstracted from the container, everything contained in the mind (and thus the mind itself) has become separate in a more concrete and technical way than it ever was before. Things exist period; either as patterns or as randomness.

Hayles’ analogy between mutation and castration is thus an apt one, since it foregrounds the failure of the system. In the privileged pattern, the mutation jams its perfect replication and thus its advantage.

access/ownership

I found Hayles’s most interesting articulation of the effects of cybernetics and the posthuman state to be the replacement of presence/absence with pattern/randomness (29). The concept of life as information as pattern opens the human boundary past the epidermis. Since cybernetic systems are “constituted by flows of information” (84), a blind man’s cane becomes an extension of the blind man. Thus cybernetics doesn’t only equate humans with machines; it extends the human. Despite Hayles’s problems with Wiener’s “transformation of embodied experience, noisy with error, into the clean abstractions of mathematical pattern” (98), in which, Hayles argues, the differences in embodied experience are ignored and ultimately lost, this border-crossing allows for the blurring of human boundaries. Although she is wary of the devaluing of the material, she concedes that “the contrast between the body’s limitations and cyberspace’s power highlights the advantages of pattern over presence” (36). Implicit in this statement is that the internet acts as a powerful extension of the human; human is internet and internet is human.
This reconceptualization of human boundaries demands a redefinition or new understanding of selfhood (279). The extension of the human via the internet, or more basically the concept of the human as information in a larger web of information, seemingly disrupts human free will and agency.

The internet is an obvious example of access replacing ownership (39) and therefore pattern replacing presence, but I’m interested in the implications of this trend on the material as well as immaterial. In reconfiguring conceptions of ownership as ‘access’ to information, we could simultaneously reconfigure human relationships with more traditional material commodities, such as land and material production—a new post-capitalist theory to accompany a post-industrial era?

Today's Posthumanism, Ignorance or Freedom?

In reading N. Katherine Hayles' How We Became Posthuman, I found myself drawing connections between Hayles' arguments and the Atlantic's much-discussed "Is Google Making Us Stupid?" In that article, author Nicholas Carr articulates what Hayles calls her third narrative, that "the human is giving way to a different construction called the posthuman" (2). Carr doesn't say that he is becoming posthuman, but rather that he can feel himself changing as a result of his interaction with (and arguably dependence on) new internet technologies.

Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think.

Carr is not alone in this assessment. In fact, Carr's observations have been echoed so extensively since the article's publication that one might indeed judge it a sign not of things to come, but of things that have already happened, namely a transition towards some semblance of the Haylian posthuman.

For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought.

To set aside for a moment Carr's metaphor "the stuff of thought," which seems to get at the struggle between medium and information (for Hayles, the latter cannot exist without the former (13)), it is fascinating that Carr acknowledges the net as advantageous in offering "immediate access" to information. Is it really immediate? I mean this in the sense: does information accessed through the internet come more intimately, is it more direct? If it is, one could get the sense that what Carr is really bemoaning in his article is a new closeness with the machine, and therefore a new perception of dependency.

Beyond the mere tethering of man and machine, Carr's article also articulates a reconstruction of himself as a body related to the pathways of information embedded in internet technology. In the quote above, Carr relates his experience to the insight of Marshall McLuhan because he feels that the consumption of information through technology must reconstruct the human element receiving it. In short, one can see in Carr's article the fear of technology as other, and a deep concern that his sense of self (perceived as an idealistic whole) is being reshaped by his use of technology towards an end that he cannot control.

In this last consideration of Carr's fear and anxiety about his relationship to the internet, it is worth considering the mechanism of reading that Carr uses to show how he has been "changed" by the machine.

I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore.

Hayles, I would guess, would rapidly deconstruct Carr's idealization of reading as a particularly rich signifier of the Western intellectual tradition. That book culture, itself not a naturalized human activity, is taken by Carr as a standard for his ideal consciousness, one attuned to literature. How fascinating it is that Carr would naturalize that fluency. One might just as well say that relating to the book compromises a "natural" human consciousness, as the internet does today. In fact, it seems egregious that Carr simply skates over the idea that the book could be considered a technology in itself, serving, like the internet, to provide a mediation of information that must restructure the human subject. Indeed, to reassess Hayles' consideration of the posthuman, why is it that we must wait for cybernetics to escape bodily understandings of intelligence? Would not the book also be a point of departure of human intelligence from the body? What about the origins of language?

In Saturday's New York Times, writer Damon Darlin responded to Nicholas Carr's article with the title "Technology Doesn't Dumb Us Down. It Frees Our Minds." A sharp contrast indeed. Deliberately much shorter than Carr's article, Darlin's piece is written with a brevity that must be considered technologically influenced. As if making fun of Carr's long article about not being able to read long articles, Darlin offers a 140-character Twitter translation of Carr's argument.

Google makes deep reading impossible. Media changes. Our brains’ wiring changes too. Computers think for us, flattening our intelligence.

"Computers think[ing] for us" is one of the central questions of How We Became Posthuman, but to shelve the issues of can and how for a moment, it may be more interesting to complicate the question of computer agency in human thinking by asking "is being thinking?" (in reconsideration of Descartes' Cogito) and what it will mean when computers are "being" for us. If there is an "us" in that construction.