Sunday, August 21, 2016

From Consciousness to Intelligence

“The brain is already a sleight of hand, a massive, operationalist shell game.  It designs and runs Turing Tests on its own constructs every time it ratifies a sensation or reifies an idea.  Experience is a Turing Test – phenomena passing themselves off as perception’s functional equivalents.”
~Richard Powers, Galatea 2.2

            Is intelligence distinct from the unconscious and non-intentional processes that underlie cognitive function?  When we administer IQ tests, do we actually measure intelligence?  Is intelligence “liftable,” to borrow a term from Douglas Hofstadter – can it be “lifted” from one substrate and installed into another – or is it system-specific, contingent upon the parameters dictated by a formal system?  Is intelligence a particular quality of an organism or system, isolatable to the conventional embodied structure of a given formal system, or is it an effect of the organism/system’s relationship to its environment – of its own recursive structure?  Is intelligence the ability to operate logically within a given set of axioms, or is it the capacity to produce novel axioms via the pursuit of isomorphic patterns with the external world – through tools, instruments, media, the interfacial film that reflects its existence back to it…?
            These are the ridiculous and probably ill-conceived questions that I grapple with, but the more I explore these ideas in the appropriate literature (fiction and nonfiction) the more I’m convinced that I’m not entirely crazy.
            In short, I’m interested in the following questions: when we talk about intelligence in humans, do we conflate intelligence with consciousness?  Are consciousness and intelligence related – and if so, how?  Does intelligence mean consciousness; or, alternatively, does consciousness mean intelligence?  These questions require so many qualifications that it’s impossible to give straightforward yes or no answers, but the inquiry into these indeterminacies is as rewarding as (if not more rewarding than) any definitive answer we could hope to give.  So, here is a very preliminary, and very amateur, attempt to foreground some of these questions.  Fair warning: I’m a literature PhD; I have no formal training in neuroscience, cognitive science, computer science, artificial intelligence, et al.  But I like to think I’ve encountered a fair amount of critical examination of these questions in my studies – enough at least to warrant a random blog post on the internet.  So, here goes.
            First, a disclaimer: if there’s one compelling notion that cognitive studies has illuminated, it is that the postmodern anxiety over logical grounds may not be so ill-founded after all.  Logical ground entails a premise, formula, or axiom that requires no prior proof.  In the wake of Gödel’s incompleteness theorems, the promise of logical grounds seems asymptotically distant – barely visible on the horizon, and never quite in reach.  Yet we have to begin somewhere, inaugurating a perpetual series of rejoinders from our critics, who cry in dismay, “But you’re assuming ‘x’!”  And they’re right.  I am making assumptions.  I’m making assumptions all over the place; but then, I’m fine with being an ass.
            My primary assumption is that consciousness is, quite obviously, not a centrally organized or isolatable phenomenon.  Consciousness is an effect, rather, of brain processes far too complex to rationalize via consciousness itself.  In other words, consciousness is an escape mechanism: a way of avoiding the supreme complexity of what exactly is going on inside our heads.  To give you an idea of the kind of complexity I’m talking about, I’ll refer to one of the wizards of modern sociobiology, Edward O. Wilson.  In his now-famous book, Consilience: The Unity of Knowledge (1998), Wilson offers the following description of the processes that underlie conscious thought:
Consciousness consists of the parallel processing of vast numbers of […] coding networks.  Many are linked by the synchronized firing of the nerve cells at forty cycles per second, allowing the simultaneous internal mapping of multiple sensory impressions.  Some of the impressions are real, fed by ongoing stimulation from outside the nervous system, while others are recalled from the memory banks of the cortex.  All together they create scenarios that flow realistically back and forth through time.  The scenarios are a virtual reality.  They can either closely match pieces of the external world or depart indefinitely far from it.  They re-create the past and cast up alternative futures that serve as choices for future thought and bodily action. (119-120)
In other words, consciousness provides the human subject with a virtual experience of the material world; it is an interface between interior qualia and exterior phenomena.  It is the internal form that our relation to the world takes: “Conscious experience,” as Thomas Metzinger says, “is an internal affair” (21).  It is the way we imaginarily fashion the nonhuman world.
            Proceeding from this initial premise, we can arrive at a quite obvious conclusion: that consciousness does not necessarily imply intelligence, at least as intelligence is defined by IQ tests.  If consciousness implied intelligence, then there would be no need to test for intelligence among obviously conscious human subjects.  Rather, IQ tests would reflect disparities in intelligence between humans and other species – apes, birds, insects, etc. – but not disparities in intelligence among humans, unless we are willing to admit that some human beings are not conscious.  These human beings would be, according to philosophical tradition, zombies: “a human who exhibits perfectly natural, alert, loquacious, vivacious behavior but is in fact not conscious at all, but rather some sort of automaton” (Dennett, Consciousness Explained 73).  According to this definition, however, a philosophical zombie would yield intelligent results despite being a non-conscious entity.  It would exhibit intelligent behavior.  How are we to square this?  If an IQ test assumes consciousness on the part of its examinees, how can a non-conscious entity exhibit intelligent behavior?  Are we forced to admit that such behavior is not actually intelligent, but merely superficially intelligent – like a group of monkeys who happen to write The Tragedy of Hamlet, Prince of Denmark?  Such a claim constructs intelligence as a substance, something that can be genuinely isolated and identified, and can be opposed to false intelligence – seemingly intelligent behavior that actually lacks the rudiments of intelligence itself, intelligence tout court.  But we cannot do this – all we have to judge intelligence by is behavior; we have no privileged access to an interiority that would expose its immediate authenticity.
            IQ tests operate on the assumption that intelligence is a genuine and measurable quality, that it conforms to specific aspects of material existence.  In fact, many of these aspects of existence cannot be verified by IQ tests; they are only applied in retrospect.  That is, IQ tests assume that their examinees are conscious agents, and apply consciousness to their examinees, but IQ tests cannot test for consciousness, as made clear by the philosophical zombie scenario – they can only test, purportedly, for intelligence.  An advanced computer can pass an IQ test.  All of this would seem to suggest that intelligence is not a quality per se, not something that inheres substantially within a conscious subject, but rather an effect of behavior.  So when we examine human subjects for intelligence, and quantify their performance with a number, what are we actually saying?  If we’re not isolating and substantiating some quality of conscious experience, then what are we doing?
            Let’s take a moment and assess what examinees do when they take an intelligence test: they are presented with problems, composed of specific elements of information, and are asked to parse these problems.  This is not an official definition, nor am I citing any source.  This is my own definition: IQ tests present examinees with particular elements of information, ask them to analyze this information, and produce new equivalences of this information.  Clearly, I’m stacking the deck; for if intelligence is simply constructing equivalences between informational elements, then intelligence actually has little to do with any interior capabilities or substances.  It has to do only with the relational structure that emerges between bits of information – that is, it emerges in the pattern, or isomorphism, that mediates information.  Intelligence is not quality, nor substance, but effect.  Human subjects are not “intelligent” in any possessive sense, but only in a behavioral sense; and in this regard, even the most clueless human beings can hypothetically fake their way through intelligence tests.  In fact, we can take this one step further, beyond the limits of the human: even non-conscious computer programs can pass IQ tests with flying colors.
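            To make this concrete, here is a minimal sketch – a toy of my own devising, in the spirit of Hofstadter’s letter-string analogies, not any actual IQ instrument – of a program that “solves” an analogy problem purely by constructing equivalences between bits of information:

```python
# A toy analogy solver: "a is to b as c is to ?"
# It knows nothing about meaning; it only transfers the pattern of
# change observed in one string onto another string.

def solve_analogy(a: str, b: str, c: str) -> str:
    """Answer 'a : b :: c : ?' by reapplying the position-wise letter shifts."""
    # Measure the alphabetic shift applied at each position to turn a into b.
    shifts = [ord(y) - ord(x) for x, y in zip(a, b)]
    # Apply the same shifts, position by position, to c.
    return "".join(chr(ord(x) + s) for x, s in zip(c, shifts))

print(solve_analogy("abc", "abd", "ijk"))  # -> "ijl"
```

The program possesses nothing we would call an interior capability; whatever “intelligence” its answer displays lives entirely in the isomorphism between the two strings.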
            So when Alan Turing proposed his perpetually fascinating inquiry into machine intelligence – “Can machines think?” – what did he mean?  What would it mean for a machine to think?  According to our discussion above, it would not mean for a machine to possess consciousness (necessarily); rather, for a machine to “think” it would merely have to exhibit evidence of thought.  The implications that build up around this proposal are unsettling – for if a machine merely pretends to think, is it not actually thinking…?  And if this distinction no longer holds, then are we – us human subjects, strung along this mortal coil – also merely pretending to think?  What is the self that only pretends to think?  Is it even a “self” at all?
            One of the most serious objections to the proposal of machine intelligence that Turing entertained was the “argument from consciousness,” which Hofstadter helpfully articulates in his masterful work, Gödel, Escher, Bach: an Eternal Golden Braid (1979): “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain” (597).  Fair enough, objectors – perhaps intelligence cannot be dissociated from the capacity to create art.  Yet as we have seen, and as recent developments in computer science (more recent than 1979) have shown, machines can compose poetry.  They can reorganize words in novel ways, in ways that may even engender emotions among their human readers.  Furthermore, we know that computers may be able to perform such tasks without even understanding much of the semantic content of the words themselves – John Searle suggests this very point in his Chinese Room scenario, in which he argues that an artificial system can be programmed to engage in conversation without actually understanding the content of the conversation.  Certainly, as Searle declares, this means that the program does not possess understanding, or consciousness… but these are resolutely human conceptions of being, and have little bearing on models of intelligence.  For if, as we’ve already shown, intelligence inheres in behavior, not in substance, then a Chinese Room may not be conscious, but it can damn well be intelligent.
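            Searle’s scenario is easy to caricature in code.  Here is a deliberately crude sketch – the “rulebook” below is my own invention, shrunk to a lookup table where Searle imagines something vastly larger – of a program that holds up its end of a conversation while understanding nothing:

```python
# A crude sketch of the Chinese Room: conversation by pure symbol lookup.
# The rulebook pairs incoming symbols with outgoing symbols; the program
# understands neither side of the exchange.

RULEBOOK = {
    "can you write a sonnet?": "I can: fourteen lines of rhymed iambic pentameter.",
    "do you know that you wrote a sonnet?": "Yes, I do.",
}

def room(utterance: str) -> str:
    # Match the symbols slipped under the door; pass back the paired reply.
    key = utterance.lower().strip()
    return RULEBOOK.get(key, "Could you rephrase the question?")

print(room("Do you know that you wrote a sonnet?"))  # -> "Yes, I do."
```

Scaled up by a few million entries, plus some machinery for generalizing between them, the room converses fluently; at no point does understanding enter the picture.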
            “But wait!” the objectors proclaim, “there’s more!”  Fine, let’s hear what they have to say.  Mr. Hofstadter, if you please…?  Of course, as he says, not until a machine writes a sonnet can we agree that it has achieved the level of “brain,” but there’s another element to the theorem:
“that is, not only write it but know that it has written it” (597; emphasis mine).
Ah, yes… clever.  As Hofstadter relates, Turing’s objectors insist that a machine cannot be intelligent unless it not only composes a sonnet, but knows that it has composed one.  For the time being, let’s grant that this is a fair objection.  So now, I ask: how do we know that it knows?  Why, let’s ask it:
Objectors: “Machine, do you know that you’ve just composed a sonnet?”
Let’s imagine that the machine replies: “Yes, I do.”
Assuming the machine answered thus, our concerned objectors would presumably acquiesce.  The machine admits that it knows it composed a sonnet – it has become a brain.  But how do we know the machine is telling the truth?  After all, we’ve already witnessed that the machine has the capacity to compose plausible statements, to even craft poetic texts.  Why do we assume that it can’t lie to us?  Why are we convinced that its intelligence ends at pretense – that it cannot compose deception?  Oscar Wilde associated art with the supreme aesthetic practice of lying; if a machine can compose a sonnet, then why can’t it lie to us?
            Now, let’s not be unfair to our objectors.  Let’s also imagine that the machine answers the question of whether it knows it has produced a sonnet with: “No, I do not.”  But what does this mean?  By saying it does not know that it has produced a sonnet, is it saying that it doesn’t know what a sonnet is?  Or is it saying that it knows what a sonnet is and that it has not produced one?  Assuming that the poem composed by the machine conforms to the rules of a sonnet, we have a couple of options available to us: either it is unaware of what a sonnet actually is, or it has made a profound aesthetic judgment regarding the qualities of a sonnet.  Either way, its response is ambiguous, and provides no definitive evidence as to its ignorance of the sonnet form.  In an even more radical sense, perhaps the machine is making an uncanny comment not on the aesthetic qualities of a sonnet, but on the limitations of knowledge itself.
            So, once again – what the hell is intelligence?  Where do we locate it?  Can we locate it?  And, pressing our inquiry a bit further, what exactly does intelligence mean?  In Gödel, Escher, Bach, Hofstadter provides the following speculative assessment of intelligence:
Thus we are left with two basic problems in the unraveling of thought processes, as they take place in the brain.  One is to explain how the low-level traffic of neuron firings gives rise to the high-level traffic of symbol activations.  The other is to explain the high-level traffic of symbol activation in its own terms – to make a theory which does not talk about the low-level neuronal events.  If this latter is possible – and it is a key assumption at the basis of all present research into Artificial Intelligence – then intelligence can be realized in other types of hardware than brains.  Then intelligence will have been shown to be a property that can be “lifted” right out of the hardware in which it resides – or in other words, intelligence will be a software property. (358)
I know – awesome, right?  If intelligence can be isolated from its intra-cranial hardware, can be distinguished and abstracted, and transposed into other systems… then suddenly we have a vision of intelligence that need not yield to consciousness, to that sacred and protected form of human experience.  We have an intelligence, in other words, that exceeds consciousness.
            Yet there is something deeply troubling about this proposal; for even if we limit intelligence in humans to behavioral patterns – that is, to how humans act – this behavior still must be associated with the dense, gray matter of the three-pound organ balancing above our shoulders.  Unless we want to reify intelligence into an abstraction, a formal system that subsists beyond its material components… we have to take into account its substructure.  Suggesting that we can “lift” intelligence out of our brains and implant it (or, perhaps, perceive it) in other hardware systems assumes a certain limitability of intelligence – that it comprises an entire system in itself, a formal arrangement of operations.  In contrast to this hypostatizing tendency, I want to argue that intelligence is a trans-formal mediation, an isomorphic pattern that dictates the interaction between systems or organisms.  It is not the quality of a system, capable of being lifted and transplanted, but rather an interfacial effect, and therefore always in flux.  Intelligence cannot be quantified or delimited, but only perceived.  It is for this reason that we can perceive patterns of intelligence in human behavior, but we cannot isolate them as characteristics of an individual human subject.  Isolating intelligence involves a conflation of intelligence with consciousness; indeed, going beyond that, it involves intelligence’s capitulation to consciousness.  As is the case of the philosophical zombie, however, an individual subject may in fact be entirely devoid of consciousness and yet still exhibit intelligent behavior.  This is because intelligence presents itself not in the illusory self-presence of a rational, conscious subject, but in the conformity between a subject’s behavior and its external environment.
            This is not a matter of “lifting” intelligence out of its hardware, because its hardware is simply not isolated to the brain, nor to the body, nor to the hard drive.  The hardware of intelligence materializes between entities, as the interface that mediates their relations.  It is not a quality of entities, but an effect of systems.  It is for this reason that we can, and should, talk about what has been popularly and professionally referred to as artificial intelligence – but as I hope is becoming clear, artificial intelligence is actually not “artificial” at all, at least not in the sense of false, or imitative, or untrue.  It is, quite simply, intelligence tout court.  Perhaps intelligence is not a genetic trait, but a technological phenomenon; it is not limited to the domain of biological life, but manifests between organisms as a technological prosthesis.
            I am not the first to suggest that intelligence is technological, not genetic.  This notion of intelligence can be detected throughout the tradition of communications theory since World War Two.  In a short essay on technology, John Johnston traces the evolution of computer intelligence back to the necessity of new communications technologies and code-breaking during the Second World War:
For the first modern computers, built in the late 1940s and early 1950s, “information” meant numbers (or numerical data) and processing was basically calculation – what we call today “number crunching.”  These early computers were designed to replace the human computers (as they were called), who during World War II were mostly women calculating by hand the trajectories of artillery and bombs, laboring to break codes, and performing other computations necessary for highly technical warfare. (199)
Johnston’s comment highlights a detail of computer history that we all too often forget.  When we hear the word “computer,” we typically make a semantic leap whereby we associate the word with a technical instrument; but originally, a computer simply meant someone who computes.  Computers were, in their first iteration, human subjects.  The shift to technical instruments in the place of human labor did not fundamentally alter the shape of intelligence as it manifested among these ceaselessly calculating women – it simply programmed the symbolic logic by which these original computers operated into the new technical instruments.
            Human bodies, like computers, are information processors.  “In short,” Johnston writes, “both living creatures and the new machines operate primarily by means of self-control and regulation, which is achieved by means of the communication and feedback of electrochemical or electronic signals now referred to as information” (200).  We must recall, however, that Johnston is making a historical claim; that is, he’s attuned to the specificity of the postwar moment in providing the conditions necessary for aligning humans and machines.  The war was pivotal in directing science and technology toward a cybernetic paradigm, which in turn allowed for a new perspective on intelligence to emerge.  At first, this evolution in intelligence simply took the form of technical Turing machines – simple computers that could produce theorems based on an algorithm, essentially glorified calculators – but eventually it transformed, blossoming into the field that we know today as artificial intelligence.
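            For readers who have never met one, here is a minimal sketch of such a machine – the standard textbook formulation of a tape, a read/write head, and a finite rule table; the sample program, which adds one to a binary number, is my own choice of “glorified calculation”:

```python
# A minimal Turing machine: a tape, a head, and a transition table of the
# form (state, symbol) -> (symbol to write, direction to move, next state).

def run_turing_machine(tape, rules, state="start", halt="halt"):
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Binary increment: scan right to the end of the number, then carry back
# left (1 becomes 0 and the carry continues; 0 or blank becomes 1, halt).
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # -> "1100" (eleven plus one)
```

Everything the machine does is exhaustively specified in advance by its rule table – which is part of why the postwar question of whether such tables could ever scale up into intelligence proved so charged.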
            The entire premise of AI, as popularly defined and pursued, is a controversial and even contradictory one: controversial because there is intense disagreement over whether or not artificial intelligence has ever been attained – and if not, then whether or not it is even attainable.  AI research is also contradictory, however, and its contradictions touch upon the primary subject of this post.  It is contradictory because the achievement of intelligence in technological constructs undermines that very achievement, according to the way its proponents define intelligence: “There is a related ‘Theorem’ about progress in AI,” Hofstadter writes; “once some mental function is programmed, people soon cease to consider it as an essential ingredient of ‘real thinking.’  The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed.  This ‘Theorem’ was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: ‘AI is whatever hasn’t been done yet’” (601).  Hofstadter’s comment is revealing, and it draws the humanization of intelligence back into the discussion.  That is, according to Tesler’s Theorem, experiments in AI pose functions performed by the brain as goals for computers; but as soon as computers are able to carry out these functions, they lose their importance for human brain function.  They can’t actually be aspects of intelligent behavior, Tesler’s Theorem retorts, if a computer can do them!
            The contradiction immediately presents itself – for if we’ve already presumed that real intelligence is behavior that a computer cannot perform, then why do we even bother experimenting with AI at all!?  The entire enterprise becomes self-defeating.  Tesler’s Theorem tells us that we need to reorient ourselves with respect to intelligence; we need to construct a new perspective on what intelligence actually is – for if we remain convinced that intelligence can only be human, then it’s pointless to even try programming intelligence in nonhuman substrates.
            This is my point in emphasizing that intelligence isn’t a genetic trait, or quality of a specific structure of being.  If we reimagine intelligence as an emergent phenomenon deriving from complex pattern-matching, then we open the door not only to legitimate experiments in AI, but an entirely new conceptualization of intelligence itself.  Of course, such a proposal will inevitably invite resistance.  Humans don’t like to separate their intelligence from their experience of it.  It doesn’t only make us feel as though we’re different from other animals (which we are), but it makes us feel that our experiences are our own, that our intelligence is our own.  In other words, it makes us feel that we’re not simply going through our lives pretending to be smart, or imitating intelligence (although, again, we all are – I pretend to be smart for a living).  We feel the need to make our intelligence a part of our consciousness, as Dennett suggests: “Consciousness, you say, is what matters, but then you cling to doctrines about consciousness that systematically prevent us from getting any purchase on why it matters.  Postulating special inner qualities that are not only private and intrinsically valuable, but also unconfirmable and uninvestigable is just obscurantism” (450).  As Dennett convincingly intimates, many of us prefer that consciousness matters because we experience it – but then we succumb to a tautology befitting our desire for presence and immediacy.  We claim that consciousness is important because it is mine, I am living it.  It’s important because we have it.
            But being able to say “I have consciousness” is consciousness.  It’s a nasty little Ouroboros we’ve gotten ourselves into here.  This, of course, is the hard problem of consciousness: explaining it begins to look uncanny when we really try to orient ourselves to it in a removed fashion.  Who is this shambling, talking, possibly thinking thing saying it’s conscious?  Why, it’s just me, of course… as far as you know.  Which is what Dennett means when he says that consciousness is unconfirmable and uninvestigable.  Of course, from my private and privileged perspective, it is absolutely confirmable – but you don’t have access to my consciousness, you can’t prove to yourself that I am conscious.  Even if you hired a trained surgeon to remove my brain so you could inspect it, you wouldn’t find my consciousness.  It isn’t there.  It’s only there for me.
            So now, imagine that I’m not actually a conscious person, that I’m what we call a “philosophical zombie.”  I can say all the right things, answer your questions in the right way… but I’m not conscious.  Does this mean I’m unintelligent?  Even if I have no human connection to the words you say, even if I have no intimate knowledge of the semantics of language… I possess the capacity to match patterns to such an extent that I can fake it.  I can appear conscious.  It’s the same with a sufficiently advanced computer system, a system that interprets theorems and algorithms and performs specific functions.  At a certain point of complexity, the functions it performs might include carrying on a conversation regarding the aesthetic qualities of “Dover Beach,” or the ethical issues surrounding immigration – subjects that we consider fundamental to our humanness.  If a machine matches our communicative capabilities, that may not make it conscious… but does this mean it isn’t intelligent?
            My position on the matter is probably clear by this point, so I won’t hammer it into the ground anymore.  I’ll only leave you with this final thought.  You and I aren’t conversing face-to-face.  You can’t see me, my human body; you can’t hear my words in the form of speech, you’re just reading them on a computer screen.  Yet we assume each other’s presence because of the highly complex degree of our communication, of the pattern-matching and analysis required to interact via an amateur blog post on the topic of intelligence.  And we likely feel quite comfortable in our positions, confident in our identification of the person across the interwebz.  After all, it would be difficult for something to fake this level of linguistic communication, right?  Yet there are programs out there that perform similar functions all across the web.  They’re called (fittingly) bots, and some are even used to make posts on internet forums.  Granted, it’s highly unlikely that a bot would make a blog post as extended and immaculately crafted as this one (why, thank you – ah, damn, this is quite a self-reflexive bot!); but even if one could, would this post be any less… intelligent? (oh please, you’re too kind)
            Anyway, no need to fret.  I’m not a bot.  As far as you know… ;-)
            (at least… I think I’m not)

Works Cited
Dennett, Daniel. Consciousness Explained. New York: Back Bay Books, 1991. Print.
Hofstadter, Douglas R. Gödel, Escher, Bach: an Eternal Golden Braid. 1979. New York: Basic Books, 1999. Print.
Johnston, John. “Technology.” Critical Terms for Media Studies. Eds. W.J.T. Mitchell and Mark B.N. Hansen. Chicago: U of Chicago P, 2010. 199-214. Print.
Metzinger, Thomas. The Ego Tunnel: The Science of the Mind and the Myth of the Self. 2009. New York: Basic Books, 2010. Print.
Wilson, Edward O. Consilience: The Unity of Knowledge. New York: Vintage Books, 1999. Print.

Saturday, July 9, 2016

“A pure conspiracy”: Paranoia as Critical Methodology

It is our national tragedy.  We are obsessed with building labyrinths, where before there was open plain and sky.  To draw ever more complex patterns on the blank sheet.  We cannot abide that openness: it is terror to us.
~Thomas Pynchon, Gravity’s Rainbow

            The second section (“Un Perm’ au Casino Hermann Goering”) of Thomas Pynchon’s 1973 novel, Gravity’s Rainbow, presents readers with four maxims, humorously titled “Proverbs for Paranoids.”  They occur intermittently, although in close proximity to one another, throughout the section’s hundred pages, and are labeled as follows:
·       Proverbs for Paranoids, 1: You may never get to touch the Master, but you can tickle his creatures. (240)
·       Proverbs for Paranoids, 2: The innocence of the creatures is in inverse proportion to the immorality of the Master. (244)
·       Proverbs for Paranoids, 3: If they can get you asking the wrong questions, they don’t have to worry about answers. (255)
·       Proverbs for Paranoids, 4: You hide, they seek. (265; italics in original)

The Proverbs appear throughout the sequence as Tyrone Slothrop attempts to comprehend the conspiracy in which he is embroiled, “the plot against him” as the narrator describes it (240).  Of course, in Pynchon’s elaborate and complex textual world, the conspiracy is the novel, the literary machine in which Slothrop is merely one character among many.  The genius of Gravity’s Rainbow is the way it prevents its readers from achieving any veritable god’s-eye view; it repeatedly pulls the rug out from under its readers’ feet, incorporating their perceptions of the narrative back into the narrative, continually blurring the line between where the text ends and their interpretations of it begin.  Like Slothrop, readers become helplessly embroiled in an irreducible conspiracy.  As media theorist Friedrich Kittler describes the novel, it transforms its readers “from consumers of a narrative into hackers of a system” (162).
            Kittler identifies this transformation as indicative of the novel’s “critical-paranoid method,” a sentiment echoed by John Johnston in Information Multiplicity: American Fiction in the Age of Media Saturation (1998).  According to Johnston, Pynchon’s novel introduces a new organization of human existence in the wake of World War Two’s violent technological upheaval: “World War II as a watershed event in the growth of technology and scientific research is precisely the subject of Thomas Pynchon’s Gravity’s Rainbow, whose publication in 1973 endorsed what had already become a given within the sixties counterculture: that paranoia no longer designated a mental disorder but rather a critical method of information retrieval” (62; italics in original).  As Johnston insinuates, the postwar era witnessed a plethora of ultra-paranoiac literature, likely beginning with William S. Burroughs’s Naked Lunch (1959), but pursued through the work of J.G. Ballard, Philip K. Dick, Joseph McElroy, Don DeLillo, Kathy Acker, etc.  The degree to which these literatures embrace their paranoia varies: Dick succumbed almost entirely to a debilitating paranoia, while DeLillo maintains a critical distance.  The notion of paranoia-as-method, however, remains at the forefront of several texts by both writers.  To this day, Pynchon remains the godfather of critical, intellectual, and literary paranoiac fiction.
            The presence of paranoia-as-method predates the era of High Postmodernism, however, appearing as early as the mid-nineteenth century in stories such as Edgar Allan Poe’s “The Man of the Crowd” (1840) or in classic gothic/detective narratives such as Wilkie Collins’s The Moonstone (1868).  This paranoiac development does not dissipate with the advent of modernist fiction, but in fact finds itself repositioned and embraced in the work of surrealist writers such as André Breton, whose Nadja (1928) proposes to gather “facts which may belong to the order of pure observation, but which on each occasion present all the appearances of a signal” (19).  Details of paranoiac puzzle-solving appear throughout High Modernism, in works such as James Joyce’s Ulysses (1922), Virginia Woolf’s Mrs. Dalloway (1925), or William Faulkner’s Absalom, Absalom! (1936); but the power of paranoia as a critical method doesn’t fully materialize until the postwar fictions of Burroughs and Pynchon.  Even at this point, the expansive methodological rigor of paranoia, which I will call critical paranoia, would remain decades away.
            In the same passages where Gravity’s Rainbow lays out the four Proverbs for Paranoids, the narrator also relates the brief life of Paranoid Systems of History – “a short-lived periodical of the 1920s whose plates have all mysteriously vanished” (241).  The narrator goes on to reveal that the periodical had suggested, “in more than one editorial, that the whole German Inflation was created deliberately, simply to drive young enthusiasts of the Cybernetic Tradition into Control work” (241).  Throughout Pynchon’s encyclopedic text, numerous conspiracy theories are broached, from the systematic devastation of European nations to the possibility that Franklin Delano Roosevelt is actually an automaton.  The entire narrative operates according to a kind of conspiratorial logic.  If there is a conspiracy, then there must be a narrative explaining the conspiracy; the textual totality, therefore, abides by only a minimal narrative coherency, opting instead for lines of flight that gravitate toward chaos, disrupting the internal consistency that characters (and readers) attempt to impose on the text.
            Line of flight is a Deleuzian concept, outlined by Deleuze and Guattari in A Thousand Plateaus (1980).  In relatively simple terms, it designates the attempt by various energies to escape the territorializing and colonizing confines of the social body.  A traditional Marxist methodology would likely define such confines as ideology, but that doesn’t quite capture Deleuze and Guattari’s sense.  Deleuze-Guattarian territorialization signals a complex technological/post-sociological process of libidinal intensification, of multiple becomings and formalizations that are continually battling the entropy that seeks to dismantle them: “It is not a question of ideology,” they write in Anti-Oedipus; “There is an unconscious libidinal investment of the social field that coexists, but does not necessarily coincide, with the preconscious investments […] It is not an ideological problem, a problem of failing to recognize, or of being subject to, an illusion.  It is a problem of desire, and desire is part of the infrastructure” (104; italics in original).  Allowing for some sense of historical development, we can admit that Louis Althusser’s structural Marxism and Jean-François Lyotard’s post-Marxism bear some similarities to Deleuze and Guattari’s philosophy; but none of these theories conform to the traditional strategies of Marxist critique, and the work of Pynchon and other postmodernists can tell us something of why.
            We can sum up traditional Marxism’s treatment of conspiracy through Fredric Jameson’s comments in Postmodernism: or, The Cultural Logic of Late Capitalism (1991).  According to Jameson, conspiracy theories are causal (i.e. linear) interpretations of emergent (i.e. nonlinear) phenomena.  They are psychic reductions of vastly complex, systemic conditions.  Jameson describes conspiratorial literature as a kind of high-tech paranoia, in which systems and networks are “narratively mobilized by labyrinthine conspiracies,” reproduced in a manner that is linearly comprehensible (38).  Conspiracy theory, however, is a “degraded attempt […] to think the impossible totality of the contemporary world system” (38).  In short, Marxist theory treats paranoia in a symptomatic fashion: paranoia is a psychosis, a reaction to the overwhelming pressures of techno-culture.
            Writers such as Pynchon illuminate an alternative to the sociological approach, which treats paranoia as indicative of a larger social problem to be diagnosed.  Pynchon displaces paranoia and its cousin, schizophrenia, from the realm of psychic energy to that of material energy, energies of systemic forces at large.  Paranoia shifts from a psychic problem of perceiving the world to a mode of operating technologically within the world.  Iranian philosopher Reza Negarestani makes this point in Cyclonopedia (2008) when he compares psychoanalysis with archaeology: “According to the archaeological law of contemporary military doctrines and Freudian psychoanalysis, for every inconsistency or anomaly visible on the ground, there is a buried schizoid consistency; to reach the schizoid consistency, a paranoid consistency or plane of paranoia must be traversed” (54).  Taking Negarestani’s lead (which coincides with Pynchon’s), we might suggest that Freudian psychoanalysis was never well-suited to the psyche at all, but rather to the material distribution of energies along planes of various scales, whether these be microbiological or international.
            In this manner, paranoia is not something to be diagnosed as symptomatic, dismissed as politically reactionary, or pursued as conspiratorially perceptive: it is to be methodologically recalibrated as an instrument of information processing.  Paranoia composes narratives, albeit in a manner that necessarily leaves plot holes; surface inconsistency can only be explained by leaps of faith, assumptions that cannot be proven, in order to construct a linear and often malign explanation.  This is the compulsion of the conspiracy theorist.  Rather than accept such assumptions as rational, surface inconsistency should be countered by a drive toward subterranean consistency, an appeal to the neutral and nonintentional complexity of schizoid systems – what Pynchon articulates as the incomprehensibility of terrestrial matter itself, “the World just before men.  Too violently pitched alive in constant flow ever to be seen by men directly.  They are meant only to look at it dead, in still strata, transputrefied to oil or coal” (GR 734).  Unable to look at it alive, the conspiracy theorist entertains fantasies about its evil plans, its malign motives, as Oedipa Maas does at the end of The Crying of Lot 49 (1965): “[Pierce Inverarity] might himself have discovered The Tristero, and encrypted that in the will, buying into just enough to be sure she’d find it.  Or he might even have tried to survive death, as a paranoia; as a pure conspiracy against someone he loved” (148).  By contrast, the critical-paranoiac theorists force themselves to see through the senselessness to the schizoid consistency beneath: the horrifying proliferation of networks, systems, and matter.

            If Cthulhu was born today, its name would be Skynet – but then, both are still paranoid reductions of the world we live in.  For the closest thing to an accurate expression of Western culture’s schizophrenic processes, read Reza Negarestani’s Cyclonopedia.

Saturday, June 18, 2016

Ideologies of the Immediate: On the Zeal of Trumpish Populism

With the logic of Capital, the aspect of Marxism that remains alive is, at least, this sense of the differend, which forbids any reconciliation of the parties in the idiom of either one of them.  Something like this occurred in 1968, and has occurred in the women’s movement for ten years, and the differend underlies the question of the immigrant workers.  There are other cases.
~Jean-François Lyotard[i]

            In 1980, Isaac Asimov succinctly expressed a crucial deadlock of intellectual argument in America.  Specifically, he suggested that intellectualism faces dire opposition from those who dismiss the very grounds of intellectualism as something too institutionally complex, something too elitist for the uneducated – and, therefore, something dangerous to the homegrown and obvious existence of patriotic common sense:
There is a cult of ignorance in the United States, and there always has been. Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’[ii]
The cult of ignorance is more powerful today than ever, I would suggest – fueled by an engine of populist paranoia and intolerance.  Now, before I leap too quickly into the cauldron of political squabbling and rhetorical backwash, let me make it clear: I don’t think the left is entirely innocent of this ignorance, nor do I think they’re above the strategies of propaganda.  My concern lies primarily with what I perceive among the contemporary Republican base, which has become more radical as of late and which I will refer to as Trumpish populism, as an ideology of the immediate: crudely, the idea that if I can’t understand it, then it’s bad.
            Simply put, Trumpish populism is constructed around an individualistic paranoia, a concern primarily for individual well-being and rationalism.  This all sounds fine until we begin to perceive the consequences: namely, that the Trumpish version of personal well-being and rationalism often results in fervent suspicion of everyone else (ranging from immigrants to Barack Obama himself) and embarrassing rationalizations of highly questionable strategies.  Trump’s populism perceives itself as the contemporary American underdog, a surreal blend of downhome patriotism and Hollywood showboating that promises high-society success via a seriously warped pursuit of “American” values (which include, among other things, isolationism, xenophobia, and political incorrectness).  These values appear, to those who maintain them, self-evident.  Like the truths illumined by the American Declaration of Independence, they need no argument to justify them.  They simply are, and to question them means to identify oneself as a traitor, un-American, or even a terrorist sympathizer.[iii]
            The idea of self-evidence is quite compelling because it means that we need not look any further.  The answer is right there in front of us, it is immediate.  It is an example of the broader institutional notion that Nick Srnicek and Alex Williams label folk politics, a local, tangible, and actionable mentality toward political organization:
At first approximation, we can therefore define folk politics as a collective and historically constructed political common sense that has become out of joint with the actual mechanisms of power.  As our political, economic, social and technological world changes, tactics and strategies which were previously capable of transforming collective power into emancipatory gains have now become drained of their effectiveness.  As the common sense of today’s left, folk politics often operates intuitively, uncritically and unconsciously.[iv]
There is nothing inherently wrong with folk politics, according to Srnicek and Williams; their argument is rather that the current conditions of large-scale social organization exceed the capacities of localized action.  Folk politics may present temporary and limited solutions, but these solutions do not address larger systemic contradictions.  They appear successful, however, because their results are immediately perceptible.  We can see the impact of our actions.
            This is the rhetoric of Trumpish populism: change that affects me, change that benefits me – not Obama’s change, the change of complex systems and institutional augmentation.  Change that, because its effects register on larger scales than the individual, must be bad; and as the ones who champion such change, the intellectuals must be the bad guys.  The political and rhetorical divide that occurs here is profound, and it is powerful.  To those who support Trump, intellectual activities and systemic change can only appear within their vocabulary as resolutely un-American since they attempt to account for energies and entities beyond American borders.[v]  Those who resist the folk political tendencies of Trumpish populism, meanwhile, can only interpret its purported nationalism as xenophobic and intolerant (as I myself often do).  Language is everything in this opposition.  More than we like to admit, terminology dictates how we think.
            There exists between these two groups, I want to suggest, something like what Jean-François Lyotard calls a differend.[vi]  The differend constitutes a rhetorical distinction between two modes of thought, each with their own vocabulary, and between which any settlement or agreement “appears in the idiom of one of them while the tort from which the other suffers cannot signify itself in this idiom” (9).  In simpler terms, power determines language; and when a disgruntled group attempts to express its grievances, it can only do so in the language of the power structure (or structures) within which it operates.  The differend designates an ideological roadblock, as it were, since it effectively sterilizes communication between groups, preventing the transmission of grievances.
            What I’m describing between Trumpish populism and modern intellectualism is not quite the same thing as Lyotard’s differend, however, since the differend forces disgruntled groups to communicate in a decidedly non-radical voice – yet communication still occurs.  What manifests between Trumpish populism and intellectualism is not a line of sterilized communication so much as miscommunication, or a failure of rhetorical translation.  When one group translates the other’s idiom into its own, it exposes the opposing idiom as premised on political principles in direct contrast with its own.  I claim that these mistranslations boil down to a determining disagreement: the populist, folk-political privileging of the immediate versus the intellectualist privileging of systemic complexity.  A reductive yet illuminating bit of evidence for this disagreement can be extracted from the ongoing debate over global warming, or the more politically viable term, “climate change.”[vii]
            Skeptics of global warming often cite localized incidents and events, record-breaking cold winters or notably cool summers, as evidence against it.  These details are compelling to those who already distrust the science behind global warming because they resonate with what we as individuals can immediately perceive.  The result, unsurprisingly, is further disbelief in the “hoax” of global warming.[viii]  Yet scientists repeatedly insist that emphasizing such details is misleading, and that global warming, or “climate change,” is part of an observable shift in global levels of carbon dioxide – a shift that cannot be deduced from simply walking outside and noting a particularly chilly summer morning.  It can be difficult to convince people of this, though, especially when they’ve committed themselves to an ideology that privileges common sense.[ix]  If phenomena occur on a scale beyond the human senses, we tend to be naturally skeptical, and with good reason; but in this historical period of considerably advanced scientific procedures, there is plenty of literature and published findings to sink our teeth into.
Unfortunately, anti-intellectualism has already made up its mind on such findings.  In a Boston Globe piece from April 2016, Thomas Levenson communicated to his (hopefully attentive) readership just how dismissive of global warming Trump is.  When asked whether he supported a climate change agenda, Trump admitted that he didn’t believe in it:
“I think there’s a change in weather. I am not a great believer in man-made climate change. I’m not a great believer.” That’s not really an argument, of course — it’s more an evocation of that old Monkees tune. But stripped to essentials, the GOP presidential front-runner’s stance is essentially the same as that of his chief rival, Texas Senator Ted Cruz, who says more bluntly that “climate change is the perfect pseudoscientific theory for a big government politician who wants more power.”[x]
This is the party line among Trumpish populism: climate change (an already vague and impotent term) is nothing more than a tactful ploy by leftists.  A ploy for political power.  Such ploys are not limited to climate change, but have become the populist mantra against any and all forms of even the slightest suggestion of intellectual thought.  Attempts to address issues on a complex scale amount to the malign machinations of an elitist few who see such issues as a means to gain power.
            The regrettable truth is that in some cases such paranoia may not be entirely inaccurate.  Issues such as climate change and gun control have been relentlessly politicized and propagandized by both sides, enough to make it difficult for dedicated intellectuals to defend their positions.  The sad irony of the matter lies in Trumpish populism’s dismissal of leftist intellectuals as sheep enslaved to their political masters, closet socialists with an eerily suspicious Big Brother vibe.  Yet in truth, the majority of such intellectuals are working desperately to construct models of the world that embrace its complexity rather than ignore it, and to offer solutions that in turn get politicized once they enter the wider public sphere.[xi]  The myopia that populists attribute to intellectuals is actually an effect of their own myopia, a glitch in their hardware that they then project onto the external world.  They exorcise their myopia, their blind spot – their ignorance, as Asimov would say – and reimagine it as a shortcoming of the world around them.
            We all suffer from this glitch.  It’s a constitutive aspect of our embodied condition.  The key lies in being aware of it.
            This too, unfortunately, requires more than common sense.



[i]              Appearing in 1982, Lyotard’s comment is uncannily perceptive.
[ii]             Isaac Asimov, “A Cult of Ignorance,” Newsweek, 21 January 1980.
[iii]             Of course, one of the most successful elements of America’s founding documents is the suggestion that the rights they promote somehow preexisted the composition of those documents.  In other words, that the right to bear arms is somehow an immutable, universal, and absolute right, when in fact it was the Constitution itself that created this right.
[iv]             Srnicek and Williams, Inventing the Future: Postcapitalism and a World Without Work, New York: Verso, 2015, p. 10.
[v]             Consider, for example, attempts to discuss ISIS and other forms of non-Western extremism (the West has its own fundamentalist firebrands, of course) in terms of geopolitical history and foreign diplomacy.  Almost immediately, the premises of such a discussion are accused of embracing a terrorist apologetics.  In other words, this kind of terminology appears to rationalize (and thereby excuse) the existence of terrorism.
[vi]             Lyotard, “The Differend,” 1982, Political Writings, trans. Bill Readings and Kevin Paul Geiman, Minneapolis: U of Minnesota P, 1993, p. 8-10.
[vii]            For some insight on the weak construction “climate change,” see Timothy Morton’s short blog post on why he insists on using “global warming”: http://ecologywithoutnature.blogspot.com/2013/01/why-i-dont-call-it-climate-change-and.html
[viii]            Conspiracy theories like this one are powerful among Trumpish populists.  See for example this article on the “documentary” Climate Hustle: http://www.globalclimatescam.com/al-gore/climate-hustle-coming-to-theaters-on-may-2-the-debate-is-just-beginning/
[ix]             Common sense being a historically conditioned and fluctuating concept, despite contemporary common-sense attitudes that tell us otherwise!  It’s easy to see how such a limited, myopic, and solipsistic view of the world feeds its own presuppositions.
[x]             Levenson, “Doubting Climate Change is Not Enough,” Boston Globe, 17 April 2016: https://www.bostonglobe.com/ideas/2016/04/16/doubting-climate-change-not-enough/3aBHd9Weo9AxSmzI99LSZJ/story.html
[xi]             I don’t want to suggest that this is a simple, one-way causal route.  It is definitely true that intellectuals and academics are influenced by political agendas.  The point I would stress is that our form of social action is politics, and we have a limited amount of platforms to choose from – platforms that inevitably place restrictions on our epistemological vision of the world.  Everyone sacrifices something when supporting a political agenda.  Hopefully, we remember what that something is.

Monday, April 4, 2016

Mad Media: Some Thoughts on the 9/11 Imagery of AMC's 'Mad Men'




“Despite everything, we’re still America, you’re still Europe.  You go to our movies, read our books, listen to our music, speak our language.  How can you stop thinking about us?  You see us and hear us all the time.  Ask yourself.  What comes after America?”
~Don DeLillo, Falling Man

            In the early months of 2012, the AMC series Mad Men endured some unsurprising controversy – that is, unsurprising for those who were already attuned to the hit show’s apocalyptic undertones, its subtle flavors of marketing mania and the meaning of postmodern America.  Of course, I’m speaking in vague terms that other critics might swiftly disagree with.  Is Mad Men a show about postmodernism?  Is it a show about American consumer culture?  Is it a show about the meaning of “America”?  I think the answer to all of these questions is a resounding YES, but I readily admit that it’s about much more than all of this too.  Specifically, and most importantly, Mad Men is in many ways not at all about the historical period on which it focuses (namely the decade of the 1960s): it is about today, about the post-9/11 world in which we live.  I emphasize “post-9/11” in this description for several reasons, not least of which is the controversy with which I began this paragraph.  For those unaware, or who have forgotten, in the early months of 2012 Mad Men suffered criticism for its increasingly explicit invocation of 9/11 imagery.[i]
            Once again, this controversy is likely less surprising for those who perhaps already sensed something of an apocalyptic current spreading throughout the show since its first season, and even less surprising for those who had already picked up on the imagery itself well before it received any criticism:
The show’s famous “falling man” opening sequence is not only a nod to conflicted protagonist Don Draper’s less-than-productive life choices.  It also recalls a similar image, captured on the morning that terrorists crashed two planes into the twin towers of the World Trade Center in New York.
Richard Drew’s photograph is known as The Falling Man, and the figure depicted in it was never definitively identified.[ii]  The image quickly became a source of controversy well before Mad Men premiered in 2007, and the unidentified man came to signify an element of the experience of September 11th and the communal sense of loss, hopelessness, and homelessness in the wake of the tragedy.  In this manner, the figure also nearly perfectly encapsulated the tragic narrative of Mad Men’s Don Draper.
            Mad Men took its appropriation one step further, however; as a hit television show, there was no way to avoid the promotional moves necessary for selling its product to the public.  Quite consciously, Mad Men participated in the very behavior that it simultaneously critiqued, which was part of its artistic merit.  Inevitably, the stenciled figure falling through the show’s opening credits (unidentified, anonymous) became a marketing icon; and equally inevitably, it pulled Drew’s haunting photograph along with it.
Understandably, these advertising techniques drew heavy criticism from individuals and families most directly affected by the events of 9/11.  AMC denied any referential ties to either the tragedy or Drew’s photograph.  Yet it feels increasingly difficult to believe that the show’s creator, Matthew Weiner, and its writers did not have the imagery of 9/11 in the back of their minds, despite anything Weiner might say about the opening sequence being an homage to Hitchcock (after all, can’t these two things overlap?).
            The show sealed the deal, in my opinion, with “Lost Horizon,” its twelfth episode of the final season.  While in a meeting, Don turns his head to glance out the window and witnesses the following scene: a plane flying past the Empire State Building.
            The vision prompts Don to leave the meeting, sending him impulsively out West, through Wisconsin, Oklahoma, Utah, and finally to California.  Critics and viewers have interpreted the ambiguous ending in various ways, with some seeing it as an optimistic and hopeful moment of authentic rediscovery for Don, as the finale depicts him meditating at a spiritual retreat before smash-cutting to the famous 1971 advertisement for Coca-Cola.[iii]  Others (myself included) view the ending more cynically, as the clear entanglement of a peaceful collective and consumer culture forecloses the possibility of true authenticity in any traditional sense.  However, no matter how one chooses to read the show’s conclusion, the scene of Don witnessing the plane fly past the Empire State Building invites some unavoidably foreboding conclusions.  I for one see no way to dissociate this image from the terror attacks of 9/11.  At the time the “Lost Horizon” episode is set, in June 1970, the Empire State Building was still the tallest structure in New York City.  It was surpassed in October 1970 by the North Tower of the World Trade Center.
What inspired Don to get up and leave the meeting?  Was it some kind of spiritual yearning?  An anti-corporate impulse?  Or was it something darker?  I don’t believe in prophecy, so I’m not trying to claim that Don saw 9/11 coming or anything so supernaturally poignant.  I do want to suggest that we read the show not purely from within, however – that is, not as though all we have to work with is the historical moment of the narrative itself, the trials and tribulations of its characters.  As critics, we have more at our disposal.  We have the conditions in which the show itself is produced, the conditions in which its writers are living.  We have the entire cultural history from 1971 until now, the ‘80s and ‘90s.  And we have something else: literary history.  It is from this final angle that I want to take a stab at Mad Men’s enigmatic imagery, its dark and foreboding 9/11 undertones.  Specifically, I want to consider another text that was published in 2007, the same year that Mad Men premiered on AMC.
Don DeLillo’s 9/11 novel, Falling Man.
In a conversation about halfway through DeLillo’s novel, two characters discuss a fictional manuscript that appears to predict the 9/11 attacks, producing “‘statistical tables, corporate reports, architectural blueprints, terrorist flow charts’” (138).[iv]  The prediction of disaster is a paranoiac fantasy, something that DeLillo flirts with in earlier novels such as White Noise (1985), Libra (1988), and Underworld (1997).  The issue is not so much the actual prediction of disaster itself, but the accumulation of information in the digital age, the expansion and permeation of society by data.  When data accumulates to such an extent, patterns become almost ubiquitous, and the unsurprising response to this mass of data manifests in conspiracy theories: the attempt, according to Fredric Jameson, “to think the impossible totality of the contemporary world system” (38).[v]  In Underworld, one character refers to this “impossible totality” as dietrologia, “‘the science of what is behind something.  A suspicious event.  The science of what is behind an event’” (280).  In the years to come, Underworld would prove a source of minor paranoiac awe even for DeLillo’s most level-headed readers, who perceived an uncanny anticipation in the first edition’s cover art: André Kertész’s photograph of the twin towers shrouded in fog, with a church in the foreground and a lone bird silhouetted near the towers like an approaching plane.
Conspiracy, of necessity, is always retrospective.  It makes sense out of enormous pools of data, words, and images, organizing unrelated or disconnected points into meaningful constellations.
At the moment of 9/11, the world was entering a new phase of global information with the rise of the internet.  In more ways than one, DeLillo’s fiction addresses the filtration and consumption of massive global events through the lens of technical systems, namely mass media.  Television was nothing new at the turn of the twenty-first century, but the obsessive media fixation on the next big story was relatively new, with networks such as FOX News and CNN providing 24-hour coverage like fishermen trawling for anything they can find until their next major catch.  In this scenario, 9/11 was the biggest catch anyone could imagine, and the response from viewers was unprecedented.  Terrorism found a mass media outlet in a way it never had before, and it affected people on a visceral level: “Every time she saw the videotape of the planes she moved a finger toward the power button on the remote,” DeLillo writes of one character; “Then she kept on watching.  The second plane coming out of that ice blue sky, this was the footage that entered the body, that seemed to run beneath her skin, the fleeting sprint that carried lives and histories, theirs and hers, everyone’s, into some other distance, out beyond the towers” (134).  Out toward, to borrow from Mad Men, some “lost horizon.”
The relation between media and terrorism has not gone unnoticed by perceptive writers.  William Gibson wrote about it as early as 1984 in his seminal cyberpunk novel, Neuromancer, demonstrating that what science fiction most accurately forecasts is often not scientific developments but cultural ones: “‘There is always a point at which the terrorist ceases to manipulate the media gestalt,’” a character intones; “‘A point at which the violence may well escalate, but beyond which the terrorist has become symptomatic of the media gestalt itself.  Terrorism as we ordinarily understand it is innately media-related’” (57).[vi]  In other words, modern terrorism may seek out media platforms on which to stage its atrocities, but at some point the attraction reverses.  At some point it is no longer only the terrorists who look for media platforms, but the media platforms that seek out terrorism.  This seeking should not be misconstrued as a conscious effort, an intentional search for atrocious acts.  It is rather an effect of global mass media.  Like a gravitational attraction, media finds itself compelled toward disastrous events.
DeLillo’s Falling Man does not ignore this relationship between terrorism and the media, but actively stages it.  The novel’s title refers not only to Drew’s photograph, but also to a character within the novel: David Janiak, a performance artist who repeatedly hangs from public structures in clear imitation of the actual anonymous falling man.  A media sensation, the fictional falling man is treated as a quasi-terrorist by law enforcement, having been “arrested at various times for criminal trespass, reckless endangerment and disorderly conduct” (220).  Learning of Janiak’s death, another character (Lianne) attempts to track down an image of one particular performance that she witnessed near an elevated train track: “She tried to connect this man to the moment when she’d stood beneath the elevated tracks, nearly three years ago, watching someone prepare to fall from a maintenance platform as the train went past.  There were no photographs of that fall.  She was the photograph, the photosensitive surface.  That nameless body coming down, this was hers to record and absorb” (223).  In this moment, near the novel’s conclusion, media platforms are replaced by a maintenance platform.  Lianne can discover no media trace of the act she witnessed, causing her to think of herself, of her own body, as a kind of media form.  She becomes the photographic record of the performative/terrorist event.
As a media expression, the terrorist event occupies a strange and unsettling place in our contemporary cultural unconscious.  As an event of extreme violence, it enters the physical realm, wreaking havoc on bodies and flesh, spilling blood and causing pain.  As a media image, however, terrorism concerns us at another level: the level of representation, of spectacle, a form of engagement that is quite nearly filmic, popular, marketable.  Slavoj Žižek elucidates this point when he describes “the fundamental paradox of the ‘passion for the Real’:
it culminates in its apparent opposite, in a theatrical spectacle – from the Stalinist show trials to spectacular terrorist acts.  If, then, the passion for the Real ends up in the pure semblance of the spectacular effect of the Real, then in an exact inversion, the ‘postmodern’ passion for the semblance ends up in a violent return to the passion for the Real.”[vii]
Žižek’s argument is a troubling one: namely, that the media fascination of contemporary American culture desired, to some degree, the tragedy of September 11th.  Whether or not we agree with Žižek’s suggestion, his analysis of the conflicted position of modern terrorism has the ring of truth.  Even if what we desire is not real, visceral horror but the representation of horror, this desire traces a strange trajectory back toward reality.  Fantasies exist because at some level we desire them, even if we do not really want them.
            Ultimately, this is what Mad Men is all about.  Advertising appeals to our fantasies and desires.  Even if it sells us products that we want, even need, it does not succeed by reminding us that we want them.  Advertising works by reminding us what we desire: “‘Nostalgia,’” Don Draper says in his pitch for Kodak’s Carousel slide projector,
literally means, “the pain from an old wound”. It’s a twinge in your heart, far more powerful than memory alone. This device isn’t a spaceship. It’s a time machine. It goes backwards, forwards. It takes us to a place where we ache to go again. It’s not called the Wheel. It’s called a Carousel. It lets us travel the way a child travels. Around and around, and back home again... to a place where we know we are loved. (“The Wheel”)[viii]
The Kodak Carousel doesn’t actually turn us into children again, or allow us to travel through time, but it gives us something of an experience we once had while preserving the material structures of our current lives.  This is the fantasy of advertising, particularly in the age of mass media.  Don knows how to pitch products because he knows how to identify and amplify the desirable attribute of an object – the fantasy it can fulfill.  Desire isn’t about re-experiencing an experience, but about the longing that separates us from the experience.  It is a means of reimagining the experience, of projecting it, just as memories are always revisions of the past as well as recollections of it.
            Somewhere in its aesthetic unconscious, Mad Men recognized the fantasy of disaster lurking at the dark heart of American consumerism, and projected this fantasy back to its viewers in images of post-9/11 horror.  Whether we are meant to see the finale as uplifting or as a cynical concession to the throes of postmodern globalization (no matter what Weiner himself insists),[ix] I think we have to contextualize its falling-man imagery and “Lost Horizon” sequence within the post-9/11 atmosphere of contemporary America, reading them as dark acknowledgements of the link between the expansion of media platforms and the explosion of twenty-first-century terrorism.  I do not intend to suggest that we should continue to privilege authenticity in an era of absolute media saturation, an era of “semblance,” as Žižek claims, in which the “place where we know we are loved” is always partially a projection of fantasy.  Perhaps this is one of the costs of living in a network society, positioned globally within matrices of data (hence the other paranoiac obsession with getting “off the grid”).  And perhaps there is an unrealized expansion of our current virtual state in which technology and media may yet provide knowledge of the human condition currently unavailable to us.  One in which love is more than just an invention by men like Don Draper to sell nylons.
"The World Trade Center was under construction, already towering, twin-towering, with cranes tilted at the summits and work elevators sliding up the flanks.  She saw it almost everywhere she went."
~Don DeLillo, Underworld

[i] Interested readers might look to Mallory Russell’s piece in Business Insider (http://www.businessinsider.com/unintended-horror-these-mad-men-ads-look-just-like-sept-11s-falling-man-2012-3) or Michael Dooley’s piece from Imprint, reprinted in Salon (http://www.salon.com/2012/01/24/mad_mens_long_standing_911_connection/).
[ii] Richard Drew, The Falling Man, 11 September 2001.
[iii] For example, Tim Goodman’s reading at The Hollywood Reporter: http://www.hollywoodreporter.com/bastard-machine/mad-men-series-finale-tim-796826
[iv] Don DeLillo, Falling Man, 2007, New York: Scribner, 2008.
[v] Fredric Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism, Durham: Duke UP, 1991.
[vi] William Gibson, Neuromancer, 1984, New York: Ace Books, 2000.
[vii] Slavoj Žižek, Welcome to the Desert of the Real! Five Essays on September 11 and Related Dates, London: Verso, 2002.
[viii] Matthew Weiner and Robin Veith, “The Wheel,” Mad Men, AMC, 18 October 2007.
[ix] “Cultural harmony,” …?  Maybe, but I’m not so sure: http://variety.com/2015/tv/news/matthew-weiner-on-mad-men-finale-you-cant-get-100-percent-approval-rating-1201501909/