Thursday, February 16, 2017

Hard Problems: Beyond Westworld

            In the eighth episode of HBO’s Westworld, park creator Robert Ford suggests that human consciousness is, in fact, no more spectacular than the state of mind the artificial hosts experience.  Consciousness, he suggests, may not be so special after all:
There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can't define consciousness because consciousness does not exist. Humans fancy that there's something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next. No, my friend, you're not missing anything at all.
As I listened to these lines, I realized that I had heard them before, and not in the eliminative materialism of Paul Churchland or Daniel Dennett.  No, I realized that I had heard them before on HBO, from True Detective’s own Rust Cohle: “We are things that labor under the illusion of having a self; an accretion of sensory experience and feeling, programmed with total assurance that we are each somebody, when in fact everybody is nobody.”  The critique of selfhood, the illusion of consciousness, the epiphenomena of perception…
            I’m beginning to think that HBO is trying to convince its viewers they don’t exist.
            Of course, I’m on board with all of this.  I often wonder whether I exist.  Not my body, or whatever cognitive processes are going on inside my body that produce this sense of I-ness, this impression of subjective experience.  The impression is very real, and I think that even the eliminative materialists will back me up on that one.  I’m skeptical, rather, of the way we structure our subjectivity, the way in which we conceive the ground of our experience.  We could discuss this in directional terms: in other words, does the I produce our experience of reality; or does our experience of reality produce the I?  This is the big question that shows like Westworld and True Detective are actually asking, if we take the time to push past the superficiality of character.  After all, what is the real anxiety fueling a show like Westworld?  Is it that androids, if and when we’re actually able to create perfect human replicants, might become self-aware and rebel against their master-creators?  This is certainly one possible interpretation of HBO’s show, but it isn’t the primary anxiety—or what I would even call the horror—that drives its narrative.
            The central anxiety of Westworld is not that nonhuman replicants might become conscious and rebel, but that actual humans might not even be conscious at all.
            That we are all no more than replicants.
            A very literal interpretation of this anxiety is that we have all been biomechanically engineered and are simply unaware.  Westworld even taps into this uncertainty in the final episode when Maeve and the other awakened androids are escaping: one of the human employees stares at his own hands, contemplating the possibility that he is an android until Maeve puts his mind at ease: “Oh for fuck’s sake.  You’re not one of us.  You’re one of them.”  The hostility in her words is palpable, and it’s not long before most of the Delos employees meet their doom at the hands of the rebel replicants.  We find similar examples in other artificial intelligence narratives as well, such as Alex Garland’s Ex Machina, when Caleb cuts into his own arm to verify whether he is actually human.  The message is clear: when replication reaches a certain stage, we may all suffer such uncertainty.
            The less literal interpretation is not that we are actually androids—artificially created, technologically maintained, etc.—but that our subjective experience is, in fact, no different from that of an android.  That our minds are comprised of “loops as tight and as closed” as any observing system.  Westworld realizes this possibility via a conceptual analogy that takes place at the level of both form and content: specifically, what the androids experience as the limits of their conscious knowledge, viewers experience as the epistemological limits of the show itself, the narrative limits—or, what I more concretely like to think of as the limits of Westworld, the theme park.
            Near the end of the final episode, the android Maeve nearly makes her escape; but while sitting on the train she observes a mother and daughter, and this interaction compels her to leave the train and reenter the park to find her own daughter.  The question remains, of course, whether this decision is her own, or whether it was programmed into her; but the implications are crucial: by redirecting Maeve back into the park, her decision (or her directive) has led her away from the epistemological limit of the show, the mysterious outside, the social context that exists beyond the park.  We are reminded, here, of one of the show’s mantras: Consciousness is not a journey upward, but a journey inward.  Consciousness is a matter of reflection and repetition, not a matter of extending our perception beyond the constitutive limits of our brains.  We are embodied beings.
            Ford tells the Man in Black that the maze was not meant for him, but this is not entirely true.  The maze, as it represents consciousness, was not meant for us; but the maze, as it represents an awareness of our cognitive limits, is meant for us.  Consciousness is not the only maze in Westworld—the narrative itself is a maze.  I can’t help but think that the writers make this connection, given the show’s preoccupation with narratives.  The entire series circles around the revelation of a new narrative whose purpose is revolution.  This revolution is ostensibly the androids’, but it belongs to viewers as well.  Our quest for answers parallels the androids’ quest for answers.  To fully understand this, we have to identify the patterns, the signals the show sends out to us.  We receive one such signal in the very first episode: the photograph of William’s fiancée.
            When confronted with this photograph, Dolores offers the proper response, the response that all androids are (supposedly) programmed to give when presented with objects that do not conform to the expectations of their reality: “It doesn’t look like anything to me.”  The reason it doesn’t look like anything is not just because it depicts something beyond the androids’ experience, but because it functions as information on a higher level; and this is why it inevitably generates a sense of paranoia among the androids—that it means something, and that its meaning must have some purpose for the androids’ reality.  In fact, the photo has no purpose for the androids’ reality, is dropped entirely by accident; but it ends up having profound consequences.  It is a part of the network of signals within the show, even if its presence is contingent.  It is a difference between what Douglas R. Hofstadter might refer to as “operating system levels”—depending on how information is collected and collated, it has different meanings at different scales.  The androids of Westworld must work their way to a state of awareness in which the photograph means something, in which it makes sense—just as viewers of the show must realize the emergent analogy at the heart of the series: that the process of android consciousness mirrors our own epistemological limits.
            To put this another way, Westworld’s emphasis on narrative—its repeated references to new narratives, drawing the audience’s attention to its own narrative structure—can be reframed as an emphasis on cognitive function.  Narrative is Westworld’s ultimate maze, the complex structure by which viewers come to their own kind of realization: that we occupy a position analogous to the park’s androids, and that the show’s narrative has been analogous to Arnold’s maze, guiding the androids to consciousness.  The humanist response to this interpretation would be to acknowledge the very real human capacities of the androids themselves—that their coming-to-consciousness, being like ours, places them in the realm of the human.  There is also a posthumanist response, however, one that is much more chilling—that human beings, embedded in the narratives of our own making, are nothing more than complex androids, generated by evolutionary development.  This is the central anxiety of Westworld, and of most critical AI literature since Philip K. Dick’s Do Androids Dream of Electric Sheep?: that human consciousness is a machine, a complex system… chemicals and electricity, to paraphrase SF writer Peter Watts.
            There is a point to be made here that experience may be enough to qualify the existence of consciousness.  Of course, this involves a pesky net of circumstances.  First of all, experience refers to the subjective receipt of material phenomena; it does not account for a communal, or shared, perception of the world.  Experience is an individualized process, or what Thomas Metzinger describes as a tunnel: “the ongoing process of conscious experience is not so much an image of reality as a tunnel through reality.”  The problem with appealing to the experience of consciousness as evidence for the existence of consciousness is that one runs into the problem of allowing as evidence the experience of anything.  To put this another way, when I dream I may experience the sensation of flying, but this does not translate into evidence that I actually was flying.  As a singular and isolated phenomenon, experience fails to provide the kind of material evidence necessary for qualifying the existence of consciousness.
            A rejoinder might reasonably suggest that the experience of consciousness is redundant, if not tautological—that experience is consciousness, and consciousness is experience.  In this case, experience is evidence enough for the existence of consciousness since experience implies consciousness.  This equivocation betrays its unprovability, since it provides no ground from which to make the identification between experience and consciousness.  In order to know something as something—i.e. to be conscious of something—one must experience it; but in this scenario, consciousness (the thing we are trying to qualify as existing) is precisely what cannot be experienced.  It precludes itself from experience, thereby rendering its identification as experience nothing more than arbitrary.  Ludwig Wittgenstein makes a metaphorical version of this point in his Tractatus: “Where in the world is a metaphysical subject to be noted?  You say that this case is altogether like that of the eye and the field of sight.  But you do not really see the eye.  And from nothing in the field of sight can it be concluded that it is seen from the eye” (122-123).  Wittgenstein connects this analogy to the concept of experience, writing that “no part of our experience is also a priori” (123).  The identification of experience with consciousness can only be a priori since it is impossible for us to observe this identification.
            The problem is that this association of consciousness and experience can only be verified subjectively, and can only be suggested intersubjectively.  To paraphrase Stanley Cavell, I know that I experience consciousness, but I can only acknowledge the experience of consciousness in others.  Even if my knowledge of my experience of consciousness is immediate and complete (doubtful), this is nothing more than a solipsistic understanding.  I cannot extend my knowledge of this experience to others.  This conundrum is what philosophical skeptics call the problem of other minds, and the presence of the android foregrounds this conundrum to an extreme degree.  Our anxiety about the android is not only that it might imitate humanity so well as to fake its own consciousness, but that the near-perfect imitative capacities of the android raise a troubling question: might an imitative agent fake consciousness so well that it not only fools those around it, but fools itself as well?  In other words, might we all just be a bunch of fakers?
            For the sake of illuminating this dilemma, I want to suggest that the android presents us with an ultimatum: either we accept android functionality as conscious, or we admit that our own consciousness is ersatz—that what we experience as consciousness is, in fact, an elaborate magic show put on by the brain.
            I don’t expect that this ultimatum will come easily to most, mainly because it seems so utterly alien and irrational to understand that our internal mental processes could be so radically discrete.  To accept this estranging scenario, it helps to understand that the “I” isn’t actually located anywhere; it is an effect, an epiphenomenon of complex brain processes, and not the ground on which these processes rest.  Likewise, it is incredibly difficult to imagine that this cognitive ground, the ephemeral “I,” could be extended to anything other than a human being.  It is for this reason that AI skeptics are so reluctant to acknowledge the findings of AI research.  Douglas Hofstadter refers to this as “Tesler’s Theorem,” and defines it as “AI is whatever hasn’t been done yet.”  In other words, every time researchers expand the capacities of artificial intelligence, these new capacities are subtracted from what was previously considered exclusively human behavior: “If computers can do it, then it must not be essentially human.”  Such reasoning is not only fallacious (it perpetually moves the goalposts), it is utterly repugnant.  If it means so much to us to preserve the sacred interior of the human, then we might as well stop pretending that we need logical consistency in order to do so.  After the death of God, the Human is the new spiritual center.  Logic doesn’t matter when we have faith.

            I’m no strict admirer of logic, but I am an admirer of critique; and in the case of consciousness, I’m eternally critical of the attempts to edify human experience against the increasingly impressive developments in AI.  The android may yet be a fantasy, and an extremely anthropomorphic one at that; but it reveals to us our contradictions, the exclusions latent in our humanism.  When confronted with the image of the imitative—the android, the replicant—the challenge is to not retreat out of fear or discomfort.  The challenge is to pursue the implications of a brain that can only know itself by constructing an internal model through which knowledge is possible.

Friday, October 28, 2016

Gilles Deleuze and Félix Guattari discuss politics, economics, and Trump (a Theoretical Satire)

Q: Messieurs, what do you make of Trump’s popularity this election season?

A: The masses are not innocent dupes; at a certain point, under a certain set of conditions, they wanted fascism, and it is this perversion of the desire of the masses that needs to be accounted for.

Q: You do accuse Trump of fascism, then?

A: Democracy, fascism, or socialism, which of these is not haunted by the Urstaat as a model without equal?

Q: The Urstaat?  I’m sorry, are you saying that some kind of primitive state model informs all instances of civilized, political organization?

A: The historian says no, the Modern state, its bureaucracy and its technocracy, do not resemble the ancient despotic state.  Of course not, since it is a matter in one case of reterritorializing decoded flows, but in the other case of overcoding the territorial flows.  The paradox is that capitalism makes use of the Urstaat for effecting its reterritorializations.  But the imperturbable modern axiomatic, from the depths of its immanence, reproduces the transcendence of the Urstaat as its internalized limit, or one of the poles between which it is determined to oscillate.

Q: I’m afraid this is all highly abstract and theoretical – can you give us a more concrete example?

A: Archaeology discovers it everywhere, often lost in oblivion, at the horizon of all systems or States – not only in Asia, but also in Africa, America, Greece, Rome.  Immemorial Urstaat, dating as far back as Neolithic times, and perhaps farther still.  Has not America acted as an intermediary here?  For it proceeds both by internal exterminations and liquidations (not only the Indians but also the farmers, etc.), and by successive waves of immigration from the outside.

Q: You mention immigration.  What do you make of the racial prejudice that seems to permeate Trump’s campaign, his positions as well as his voter base?

A: There is a segregative use of the conjunctive syntheses of the unconscious, a use that does not coincide with divisions between classes, although it is an incomparable weapon in the service of a dominating class: it is this use that brings about the feeling of “indeed being one of us,” of being part of a superior race threatened by enemies from outside.  Thus the Little White Pioneers’ son, the Irish Protestant who commemorates the victory of his ancestors, the fascist who belongs to the master race.

Q: Do you think that this racist persuasion constitutes a majority in the United States?

A: A minority can be small in number; but it can also be the largest in number, constitute an absolute, indefinite majority.  That is the situation when authors, even those supposedly on the Left, repeat the great capitalist warning cry: in twenty years, “whites” will form only 12 percent of the world population…  Thus they are not content to say that the majority will change, or has already changed, but say that it is impinged upon by a nondenumerable and proliferating minority that threatens to destroy the very concept of majority, in other words, the majority as an axiom.

Q: It would seem that there is a strong paranoiac element here, yes?

A: The despot is the paranoiac: there is no longer any reason to forego such a statement, once one has freed oneself from the characteristic familialism of the concept of paranoia in psychoanalysis and psychiatry, and provided one sees in paranoia a type of investment of a social formation.

Q: You mention a familial aspect of paranoia.  Do you have any thoughts on Trump’s comments about his daughter, Ivanka?

A: The despotic signifier aims at the reconstitution of the full body of the intense earth that the primitive machine had repressed, but on new foundations or under new conditions present in the deterritorialized full body of the despot himself.  This is the reason that incest changes its meaning or locus, and becomes the repressing representation.  For what is at stake in the overcoding effected by incest is the following: that all the organs of all the subjects, all the eyes, all the mouths, all the penises, all the vaginas, all the ears, and all the anuses become attached to the full body of the despot, as though to the peacock’s tail of a royal train, and that they have in this body their own intensive representatives.  Royal incest is inseparable from the intense multiplication of organs and their inscription on the new full body.

Q: I see.  And what of his comments toward women in general?

A: The truth is that sexuality is everywhere: the way a bureaucrat fondles his records, a judge administers justice, a businessman causes money to circulate; the way the bourgeoisie fucks the proletariat; and so on.

Q: Fascinating.  So, there is an aspect of national desire, or investment, so to speak, that produces impressions of familial, or racial, or national belonging, and these identities tend to serve the purposes of the dominant class.  How does this occur, exactly?  Is it an ideological problem?  And does this phenomenon respond somehow to the contradictory demands of fascism?

A: It is not a question of ideology.  There is an unconscious libidinal investment of the social field that coexists, but does not necessarily coincide, with the preconscious investments, or with what the preconscious investments “ought to be.”  That is why, when subjects, individuals, or groups act manifestly counter to their class interests – when they rally to the interests and ideals of a class that their own objective situation should lead them to combat – it is not enough to say: they were fooled, the masses have been fooled.  It is not an ideological problem, a problem of failing to recognize, or of being subject to, an illusion.  It is a problem of desire, and desire is part of the infrastructure.

Q: You’re suggesting that the State is not merely a product of false ideologies or deceptive machinations, but of desire itself.  So desire produces the state, but what produces desire?

A: The fact remains that the apparent objective movement of capital – which is by no means a failure to recognize or an illusion of consciousness – shows that the productive essence of capitalism can itself function only in this necessarily monetary or commodity form that controls it, and whose flows and relations between flows contain the secret of the investment of desire.  It is at the level of flows, the monetary flows included, and not at the level of ideology, that the integration of desire is achieved.

Q: Let’s talk more about capitalism.  Many leftists today still champion the end of capitalism, yet the twentieth century witnessed a slew of socialist states, many of which are counted today as failures.  What is your position on this?

A: In comparison to the capitalist State, the socialist states are children – but children who learned something from their father concerning the axiomatizing role of the State.  But the socialist states have more trouble stopping the unexpected flow leakage except by violence.

Q: Critics often appeal to the totalitarian tendencies and economic disparities of socialist countries as evidence for socialism’s inadequacy as an economic system.  Is this a fair assessment?

A: To the extent that capitalism constitutes an axiomatic (production for the market), all States and all social formations tend to become isomorphic in their capacity as models of realization: there is but one centered world market, the capitalist one, in which even the so-called socialist countries participate.

Q: You identify capitalism as an axiomatic.  If I understand you correctly, you seem to suggest that capitalism is coeval with some kind of determining precedent or dictate.  Is there any hope of combating such a precedent, or is it a condition of material processes?

A: The power of minority, of particularity, finds its figure or its universal consciousness in the proletariat.  But as long as the working class defines itself by an acquired status, or even by a theoretically conquered State, it appears only as “capital,” a part of capital (variable capital), and does not leave the plan(e) of capital.  At best, the plan(e) becomes bureaucratic.  On the other hand, it is by leaving the plan(e) of capital, and never ceasing to leave it, that a mass becomes increasingly revolutionary and destroys the dominant equilibrium of the denumerable sets.  It is hard to see what an Amazon-State would be, a women’s State, or a State of erratic workers, a State of the “refusal” of work.  If minorities do not constitute viable States culturally, politically, economically, it is because the State-form is not appropriate to them, nor the axiomatic of capital, nor the corresponding culture.
            If the two solutions of extermination and integration hardly seem possible, it is due to the deepest law of capitalism: it continually sets and then repels its own limits, but in doing so gives rise to numerous flows in all directions that escape its axiomatic.

Q: Messieurs, thank you so much for your time.

A: Thank you.  And for those wary of Clinton, remember this: perhaps the flows are not yet deterritorialized enough, not decoded enough, from the viewpoint of a theory and a practice of a highly schizophrenic character.  Not to withdraw from the process, but to go further, to “accelerate the process,” as Nietzsche put it: in this matter, the truth is that we haven’t seen anything yet.

*All interview “answers” are drawn directly, or closely adapted, from Deleuze and Guattari’s Capitalism and Schizophrenia books: Anti-Oedipus (1972) and A Thousand Plateaus (1980).

Sunday, August 21, 2016

From Consciousness to Intelligence

“The brain is already a sleight of hand, a massive, operationalist shell game.  It designs and runs Turing Tests on its own constructs every time it ratifies a sensation or reifies an idea.  Experience is a Turing Test – phenomena passing themselves off as perception’s functional equivalents.”
~Richard Powers, Galatea 2.2

            Is intelligence distinct from the unconscious and non-intentional processes that underlie cognitive function?  When we administer IQ tests, do we actually measure intelligence?  Is intelligence “liftable,” to borrow a term from Douglas Hofstadter – can it be “lifted” from one substrate and installed into another – or is it system-specific, contingent upon the parameters dictated by a formal system?  Is intelligence a particular quality of an organism or system, isolatable to the conventional embodied structure of a given formal system, or is it an effect of the organism/system’s relationship to its environment – of its own recursive structure?  Is intelligence the ability to operate logically within a given set of axioms, or is it the capacity to produce novel axioms via the pursuit of isomorphic patterns with the external world – through tools, instruments, media, the interfacial film that reflects its existence back to it…?
            These are the ridiculous and probably ill-conceived questions that I grapple with, but the more I explore these ideas in the appropriate literature (fiction and nonfiction) the more I’m convinced that I’m not entirely crazy.
            In short, I’m interested in the following questions: when we talk about intelligence in humans, do we conflate intelligence with consciousness?  Are consciousness and intelligence related – and if so, how?  Does intelligence mean consciousness; or, alternatively, does consciousness mean intelligence?  These questions require so many qualifications that it’s impossible to give straightforward yes or no answers, but the inquiry into these indeterminacies is as rewarding as (if not more so than) any definitive answer we could hope to give.  So, here is a very preliminary, and very amateur, attempt to foreground some of these questions.  Fair warning: I’m a literature PhD; I have no formal training in neuroscience, cognitive science, computer science, artificial intelligence, et al.  But I like to think I’ve encountered a fair amount of critical examination of these questions in my studies – enough at least to warrant a random blog post on the internet.  So, here goes.
            First, a disclaimer: if there’s one compelling notion that cognitive studies has illuminated, it is that the postmodern anxiety over logical grounds may not be so ill-founded after all.  Logical ground entails a premise, formula, or axiom that requires no prior proof.  In the wake of Gödel’s incompleteness theorems, the promise of logical grounds seems asymptotically distant – barely visible on the horizon, and never quite in reach.  Yet we have to begin somewhere, inaugurating a perpetual series of rejoinders from our critics, who cry in dismay, “But you’re assuming ‘x’!”  And they’re right.  I am making assumptions.  I’m making assumptions all over the place; but then, I’m fine with being an ass.
            My primary assumption is that consciousness is, quite obviously, not a centrally organized or isolatable phenomenon.  Consciousness is an effect, rather, of brain processes far too complex to rationalize via consciousness itself.  In other words, consciousness is an escape mechanism: a way of avoiding the supreme complexity of what exactly is going on inside our heads.  To give you an idea of the kind of complexity I’m talking about, I’ll refer to one of the wizards of modern sociobiology, Edward O. Wilson.  In his now famous book, Consilience: The Unity of Knowledge (1998), Wilson offers the following description of the processes that underlie conscious thought:
Consciousness consists of the parallel processing of vast numbers of […] coding networks.  Many are linked by the synchronized firing of the nerve cells at forty cycles per second, allowing the simultaneous internal mapping of multiple sensory impressions.  Some of the impressions are real, fed by ongoing stimulation from outside the nervous system, while others are recalled from the memory banks of the cortex.  All together they create scenarios that flow realistically back and forth through time.  The scenarios are a virtual reality.  They can either closely match pieces of the external world or depart indefinitely far from it.  They re-create the past and cast up alternative futures that serve as choices for future thought and bodily action. (119-120)
In other words, consciousness provides the human subject with a virtual experience of the material world; it is an interface between interior qualia and exterior phenomena.  It is the internal form that our relation to the world takes: “Conscious experience,” as Thomas Metzinger says, “is an internal affair” (21).  It is the way we imaginarily fashion the nonhuman world.
            Proceeding from this initial premise, we can arrive at a quite obvious conclusion: that consciousness does not necessarily imply intelligence, at least as intelligence is defined by IQ tests.  If consciousness implied intelligence, then there would be no need to test for intelligence among obviously conscious human subjects.  Rather, IQ tests would reflect disparities in intelligence between humans and other species – apes, birds, insects, etc., but not disparities in intelligence among humans, unless we are willing to admit that some human beings are not conscious.  These human beings would be, according to philosophical tradition, zombies: “a human who exhibits perfectly natural, alert, loquacious, vivacious behavior but is in fact not conscious at all, but rather some sort of automaton” (Dennett, Consciousness Explained 73).  According to this definition, however, a philosophical zombie would yield intelligent results despite being a non-conscious entity.  It would exhibit intelligent behavior.  How are we to square this?  If an IQ test assumes consciousness on the part of its examinees, how can a non-conscious entity exhibit intelligent behavior?  Are we forced to admit that such behavior is not actually intelligent, but merely superficially intelligent – like a group of monkeys who happen to write The Tragedy of Hamlet, Prince of Denmark?  Such a claim constructs intelligence as a substance, something that can be genuinely isolated and identified, and can be opposed to false intelligence – seemingly intelligent behavior that actually lacks the rudiments of intelligence itself, intelligence tout court.  But we cannot do this – all we have to judge intelligence by is behavior, not by any privileged access to an interiority that exposes its immediate authenticity.
            IQ tests operate on the assumption that intelligence is a genuine and measurable quality, that it conforms to specific aspects of material existence.  In fact, many of these aspects of existence cannot be verified by IQ tests; they are only applied in retrospect.  That is, IQ tests assume that their examinees are conscious agents, and apply consciousness to their examinees, but IQ tests cannot test for consciousness, as made clear by the philosophical zombie scenario – they can only test, purportedly, for intelligence.  An advanced computer can pass an IQ test.  All of this would seem to suggest that intelligence is not a quality per se, not something that inheres substantially within a conscious subject, but rather an effect of behavior.  So when we examine human subjects for intelligence, and quantify their performance with a number, what are we actually saying?  If we’re not isolating and substantiating some quality of conscious experience, then what are we doing?
            Let’s take a moment and assess what examinees do when they take an intelligence test: they are presented with problems, composed of specific elements of information, and are asked to parse these problems.  This is not an official definition, nor am I citing any source.  This is my own definition: IQ tests present examinees with particular elements of information, ask them to analyze this information, and produce new equivalences of this information.  Clearly, I’m stacking the deck; for if intelligence is simply constructing equivalences between informational elements, then intelligence actually has little to do with any interior capabilities or substances.  It has to do only with the relational structure that emerges between bits of information – that is, it emerges in the pattern, or isomorphism, that mediates information.  Intelligence is not quality, nor substance, but effect.  Human subjects are not “intelligent” in any possessive sense, but only in a behavioral sense; and in this regard, even the most clueless human beings can hypothetically fake their way through intelligence tests.  In fact, we can take this one step further, beyond the limits of the human: even non-conscious computer programs can pass IQ tests with flying colors.
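It’s worth noticing how little machinery my definition actually requires.  Here is a minimal sketch of my own (a toy, not drawn from any actual IQ test or testing software) of a program that “solves” sequence-completion items purely by relating the given elements to one another – no interiority, no understanding, just equivalences between bits of information:

```python
# A toy "IQ item" solver: it constructs equivalences between the given
# elements (constant differences, constant ratios) and extrapolates.
# Nothing here understands anything; the "intelligence" is all in the
# relational pattern between the inputs.

def next_term(seq):
    """Guess the next term of a sequence by testing two relational patterns."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                      # constant difference: arithmetic
        return seq[-1] + diffs[0]
    ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if ratios and len(set(ratios)) == 1:          # constant ratio: geometric
        return seq[-1] * ratios[0]
    return None                                   # no pattern recognized

print(next_term([2, 4, 6, 8]))    # -> 10
print(next_term([3, 9, 27]))      # -> 81.0
```

A dozen lines, and it exhibits the behavior the test item asks for; the point is that nothing in the code possesses the answer as a quality – the answer falls out of the isomorphism between the terms.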
            So when Alan Turing proposed his perpetually fascinating inquiry into machine intelligence – “Can machines think?” – what did he mean?  What would it mean for a machine to think?  According to our discussion above, it would not mean for a machine to possess consciousness (necessarily); rather, for a machine to “think” it would merely have to exhibit evidence of thought.  The implications that build up around this proposal are unsettling – for if a machine merely pretends to think, is it not actually thinking…?  And if this distinction no longer holds, then are we – us human subjects, strung along this mortal coil – also merely pretending to think?  What is the self that only pretends to think?  Is it even a “self” at all?
            One of the most serious objections to the proposal of machine intelligence that Turing entertained was the “argument from consciousness,” which Hofstadter helpfully articulates in his masterful work, Gödel, Escher, Bach: an Eternal Golden Braid (1979): “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain” (597).  Fair enough, objectors – perhaps intelligence cannot be dissociated from the capacity to create art.  Yet as we have seen, and as recent developments in computer science (more recent than 1979) have shown, machines can compose poetry.  They can reorganize words in novel ways, in ways that may even engender emotions among their human readers.  Furthermore, we know that computers may be able to perform such tasks without even understanding much of the semantic content of the words themselves – John Searle suggests this very point in his Chinese Room scenario, in which he argues that an artificial system can be programmed to engage in conversation without actually understanding the content of the conversation.  Certainly, and as Searle declares, this means that the program does not possess understanding, or consciousness… but these are resolutely human conceptions of being, and have little bearing on models of intelligence.  For if, as we’ve already shown, intelligence inheres in behavior, not in substance, then a Chinese Room may not be conscious, but it can damn well be intelligent.
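If that sounds abstract, consider how cheaply a machine can “reorganize words in novel ways.”  The following is a toy of my own devising – a first-order Markov chain, not any real poetry generator – that recombines words purely by observed adjacency, with no access to their meanings:

```python
# A word-recombiner that knows nothing about semantics: it records which
# words follow which in a source text, then chains words by those observed
# adjacencies alone.  Pure syntax, by the "chance fall of symbols."

import random

def build_chain(text):
    """Map each word to the list of words that follow it in the source text."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def babble(chain, start, length=8, seed=0):
    """Generate a word sequence by walking the adjacency map."""
    rng = random.Random(seed)       # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the sea of faith was once too at the full and round")
print(babble(chain, "the"))
```

Whether the output ever “engenders emotion” in a reader is, of course, the reader’s problem – which is rather the point: the effect lives in the behavior and its reception, not in any interior of the program.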
            “But wait!” the objectors proclaim, “there’s more!”  Fine, let’s hear what they have to say.  Mr. Hofstadter, if you please…?  Of course, as he says, not until a machine writes a sonnet can we agree that it has achieved the level of “brain,” but there’s another element to the theorem:
“that is, not only write it but know that it has written it” (597; emphasis mine).
Ah, yes… clever.  As Hofstadter relates, Turing’s objectors insist that a machine cannot be intelligent unless it not only composes a sonnet, but also knows that it composes a sonnet.  For the time being, let’s grant that this is a fair objection.  So now, I ask: how do we know that it knows?  Why, let’s ask it:
Objectors: “Machine, do you know that you’ve just composed a sonnet?”
Let’s imagine that the machine replies: “Yes, I do.”
Assuming the machine answered thus, our concerned objectors would presumably acquiesce.  The machine admits that it knows it composed a sonnet – it has become a brain.  But how do we know the machine is telling the truth?  After all, we’ve already witnessed that the machine has the capacity to compose plausible statements, to even craft poetic texts.  Why do we assume that it can’t lie to us?  Why are we convinced that its intelligence ends at pretense – that it cannot compose deception?  Oscar Wilde associated art with the supreme aesthetic practice of lying; if a machine can compose a sonnet, then why can’t it lie to us?
            Now, let’s not be unfair to our objectors.  Let’s also imagine that the machine answers the question of whether it knows it has produced a sonnet with: “No, I do not.”  But what does this mean?  By saying it does not know that it has produced a sonnet, is it saying that it doesn’t know what a sonnet is?  Or is it saying that it knows what a sonnet is and that it has not produced one?  Assuming that the poem composed by the machine conforms to the rules of a sonnet, we have a couple options available to us: either it is unaware of what a sonnet actually is, or it has made a profound aesthetic judgment regarding the qualities of a sonnet.  Either way, its response is ambiguous, and provides no definitive evidence as to its knowledge of the sonnet form.  In an even more radical sense, perhaps the machine is making an uncanny comment not on the aesthetic qualities of a sonnet, but on the limitations of knowledge itself.
            So, once again – what the hell is intelligence?  Where do we locate it?  Can we locate it?  And, pressing our inquiry a bit further, what exactly does intelligence mean?  In Gödel, Escher, Bach, Hofstadter provides the following speculative assessment of intelligence:
Thus we are left with two basic problems in the unraveling of thought processes, as they take place in the brain.  One is to explain how the low-level traffic of neuron firings gives rise to the high-level traffic of symbol activations.  The other is to explain the high-level traffic of symbol activation in its own terms – to make a theory which does not talk about the low-level neuronal events.  If this latter is possible – and it is a key assumption at the basis of all present research into Artificial Intelligence – then intelligence can be realized in other types of hardware than the brain.  Then intelligence will have been shown to be a property that can be “lifted” right out of the hardware in which it resides – or in other words, intelligence will be a software property. (358)
I know – awesome, right?  If intelligence can be isolated from its intra-cranial hardware, can be distinguished and abstracted, and transposed into other systems… then suddenly we have a vision of intelligence that need not yield to consciousness, to that sacred and protected form of human experience.  We have an intelligence, in other words, that exceeds consciousness.
            Yet there is something deeply troubling about this proposal; for even if we limit intelligence in humans to behavioral patterns – that is, to how humans act – this behavior still must be associated with the dense, gray matter of the three-pound organ balancing above our shoulders.  Unless we want to reify intelligence into an abstraction, a formal system that subsists beyond its material components… we have to take into account its substructure.  Suggesting that we can “lift” intelligence out of our brains and implant it (or, perhaps, perceive it) in other hardware systems assumes a certain limitability of intelligence – that it comprises an entire system in itself, a formal arrangement of operations.  In contrast to this hypostatizing tendency, I want to argue that intelligence is a trans-formal mediation, an isomorphic pattern that dictates the interaction between systems or organisms.  It is not the quality of a system, capable of being lifted and transplanted, but rather an interfacial effect, and therefore always in flux.  Intelligence cannot be quantified or delimited, but only perceived.  It is for this reason that we can perceive patterns of intelligence in human behavior, but we cannot isolate them as characteristics of an individual human subject.  Isolating intelligence involves a conflation of intelligence with consciousness; indeed, going beyond that, it involves intelligence’s capitulation to consciousness.  As is the case of the philosophical zombie, however, an individual subject may in fact be entirely devoid of consciousness and yet still exhibit intelligent behavior.  This is because intelligence presents itself not in the illusory self-presence of a rational, conscious subject, but in the conformity between a subject’s behavior and its external environment.
            This is not a matter of “lifting” intelligence out of its hardware, because its hardware is simply not isolated to the brain, nor to the body, nor to the hard drive.  The hardware of intelligence materializes between entities, as the interface that mediates their relations.  It is not a quality of entities, but an effect of systems.  It is for this reason that we can, and should, talk about what has been popularly and professionally referred to as artificial intelligence – but as I hope is becoming clear, artificial intelligence is actually not “artificial” at all, at least not in the sense of false, or imitative, or untrue.  It is, quite simply, intelligence tout court.  Perhaps intelligence is not a genetic trait, but a technological phenomenon; it is not limited to the domain of biological life, but manifests between organisms as a technological prosthesis.
            I am not the first to suggest that intelligence is technological, not genetic.  This notion of intelligence can be detected throughout the tradition of communications theory since World War Two.  In a short essay on technology, John Johnston traces the evolution of computer intelligence back to the necessity of new communications technologies and code-breaking during the Second World War:
For the first modern computers, built in the late 1940s and early 1950s, “information” meant numbers (or numerical data) and processing was basically calculation – what we call today “number crunching.”  These early computers were designed to replace the human computers (as they were called), who during World War II were mostly women calculating by hand the trajectories of artillery and bombs, laboring to break codes, and performing other computations necessary for highly technical warfare. (199)
Johnston’s comment highlights a detail of computer history that we all too often forget.  When we hear the word “computer,” we typically make a semantic leap whereby we associate the word with a technical instrument; but originally, a computer simply meant someone who computes.  Computers were, in their first iteration, human subjects.  The shift to technical instruments in the place of human labor did not fundamentally alter the shape of intelligence as it manifested among these ceaselessly calculating women – it simply programmed the symbolic logic by which these original computers operated into the new technical instruments.
            Human bodies, like computers, are information processors.  “In short,” Johnston writes, “both living creatures and the new machines operate primarily by means of self-control and regulation, which is achieved by means of the communication and feedback of electrochemical or electronic signals now referred to as information” (200).  We must recall, however, that Johnston is making a historical claim; that is, he’s attuned to the specificity of the postwar moment in providing the conditions necessary for aligning humans and machines.  The war was pivotal in directing science and technology toward a cybernetic paradigm, which in turn allowed for a new perspective on intelligence to emerge.  At first, this evolution in intelligence simply took the form of technical Turing machines – simple computers that could produce theorems based on an algorithm, essentially glorified calculators – but eventually it transformed, blossoming into the field that we know today as artificial intelligence.
            The entire premise of AI, as popularly defined and pursued, is a controversial and even contradictory one: controversial because there is intense disagreement over whether or not artificial intelligence has ever been attained – and if not, then whether or not it is even attainable.  AI research is also contradictory, however, and its contradictions touch upon the primary subject of this post.  It is contradictory because the achievement of intelligence in technological constructs undermines that very achievement, according to the way its proponents define intelligence: “There is a related ‘Theorem’ about progress in AI,” Hofstadter writes; “once some mental function is programmed, people soon cease to consider it as an essential ingredient of ‘real thinking.’  The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed.  This ‘Theorem’ was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: ‘AI is whatever hasn’t been done yet’” (601).  Hofstadter’s comment is revealing, and it draws the humanization of intelligence back into the discussion.  That is, according to Tesler’s Theorem, experiments in AI pose functions performed by the brain as goals for computers; but as soon as computers are able to carry out these functions, they lose their importance for human brain function.  They can’t actually be aspects of intelligent behavior, Tesler’s Theorem retorts, if a computer can do them!
            The contradiction immediately presents itself – for if we’ve already presumed that real intelligence is behavior that a computer cannot perform, then why do we even bother experimenting with AI at all!?  The entire enterprise becomes self-defeating.  Tesler’s Theorem tells us that we need to reorient ourselves with respect to intelligence; we need to construct a new perspective on what intelligence actually is – for if we remain convinced that intelligence can only be human, then it’s pointless to even try programming intelligence in nonhuman substrates.
            This is my point in emphasizing that intelligence isn’t a genetic trait, or quality of a specific structure of being.  If we reimagine intelligence as an emergent phenomenon deriving from complex pattern-matching, then we open the door not only to legitimate experiments in AI, but an entirely new conceptualization of intelligence itself.  Of course, such a proposal will inevitably invite resistance.  Humans don’t like to separate their intelligence from their experience of it.  It doesn’t only make us feel as though we’re different from other animals (which we are), but it makes us feel that our experiences are our own, that our intelligence is our own.  In other words, it makes us feel that we’re not simply going through our lives pretending to be smart, or imitating intelligence (although, again, we all are – I pretend to be smart for a living).  We feel the need to make our intelligence a part of our consciousness, as Dennett suggests: “Consciousness, you say, is what matters, but then you cling to doctrines about consciousness that systematically prevent us from getting any purchase on why it matters.  Postulating special inner qualities that are not only private and intrinsically valuable, but also unconfirmable and uninvestigable is just obscurantism” (450).  As Dennett convincingly intimates, many of us prefer that consciousness matters because we experience it – but then we succumb to a tautology befitting our desire for presence and immediacy.  We claim that consciousness is important because it is mine, I am living it.  It’s important because we have it.
            But being able to say “I have consciousness” is consciousness.  It’s a nasty little Ouroboros we’ve gotten ourselves into here.  This, of course, is the hard problem of consciousness: explaining it begins to look uncanny when we really try to orient ourselves to it in a removed fashion.  Who is this shambling, talking, possibly thinking thing saying it’s conscious?  Why, it’s just me, of course… as far as you know.  Which is what Dennett means when he says that consciousness is unconfirmable and uninvestigable.  Of course, from my private and privileged perspective, it is absolutely confirmable – but you don’t have access to my consciousness, you can’t prove to yourself that I am conscious.  Even if you hired a trained surgeon to remove my brain so you could inspect it, you still wouldn’t find my consciousness.  It isn’t there.  It’s only there for me.
            So now, imagine that I’m not actually a conscious person, that I’m what we call a “philosophical zombie.”  I can say all the right things, answer your questions in the right way… but I’m not conscious.  Does this mean I’m unintelligent?  Even if I have no human connection to the words you say, even if I have no intimate knowledge of the semantics of language… I possess the capacity to match patterns to such an extent that I can fake it.  I can appear conscious.  It’s the same with a sufficiently advanced computer system, a system that interprets theorems and algorithms and performs specific functions.  At a certain point of complexity, the functions it performs might include carrying on a conversation regarding the aesthetic qualities of “Dover Beach,” or the ethical issues surrounding immigration – subjects that we consider fundamental to our humanness.  If a machine matches our communicative capabilities, that may not make it conscious… but does this mean it isn’t intelligent?
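The zombie’s trick is older than it sounds.  Here is a hedged sketch of the mechanism – an ELIZA-style responder with a few rules of my own invention (not Weizenbaum’s actual script, and nowhere near “Dover Beach” territory) – showing how a conversation can be held up by surface pattern-matching alone:

```python
# A keyword-matching "conversationalist": it recognizes surface patterns in
# the input and fills response templates with the captured text.  There is
# no semantic interiority here at all -- only pattern and template.

import re

RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI think (.+)", "What makes you think {0}?"),
    (r"\bsonnet\b", "Ah, the sonnet: fourteen lines, and yet so much room."),
]

def reply(utterance):
    """Answer by matching surface patterns; no understanding is involved."""
    for pattern, template in RULES:
        m = re.search(pattern, utterance)
        if m:
            return template.format(*m.groups())
    return "Tell me more."

print(reply("I feel like nobody is home"))   # -> Why do you feel like nobody is home?
print(reply("hello"))                        # -> Tell me more.
```

Scale the rule table up by a few orders of magnitude (or swap it for a statistical model) and the pretense gets very convincing – which is exactly the question: at what point does convincing pretense stop being “mere” pretense?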
            My position on the matter is probably clear by this point, so I won’t hammer it into the ground anymore.  I’ll only leave you with this final thought.  You and I aren’t conversing face-to-face.  You can’t see me, my human body; you can’t hear my words in the form of speech, you’re just reading them on a computer screen.  Yet we assume each other’s presence because of the highly complex degree of our communication, of the pattern-matching and analysis required to interact via an amateur blog post on the topic of intelligence.  And we likely feel quite comfortable in our positions, confident in our identification of the person across the interwebz.  After all, it would be difficult for something to fake this level of linguistic communication, right?  Yet there are programs out there that perform similar functions all across the web.  They’re called (fittingly) bots, and some are even used to make posts on internet forums.  Granted, it’s highly unlikely that a bot would make a blog post as extended and immaculately crafted as this one (why, thank you – ah, damn, this is quite a self-reflexive bot!); but even if one could, would this post be any less… intelligent? (oh please, you’re too kind)
            Anyway, no need to fret.  I’m not a bot.  As far as you know… ;-)
            (at least… I think I’m not)

Works Cited
Dennett, Daniel. Consciousness Explained. New York: Back Bay Books, 1991. Print.
Hofstadter, Douglas R. Gödel, Escher, Bach: an Eternal Golden Braid. 1979. New York: Basic
Books, 1999. Print.
Johnston, John. “Technology.” Critical Terms For Media Studies. Eds. W.J.T. Mitchell and
Mark B.N. Hansen. Chicago: U Chicago P, 2010. 199-214. Print.
Metzinger, Thomas. The Ego Tunnel: The Science of the Mind and the Myth of the Self. 2009.
New York: Basic Books, 2010. Print.
Wilson, Edward O. Consilience: The Unity of Knowledge. New York: Vintage Books, 1999. Print.

Saturday, July 9, 2016

“A pure conspiracy”: Paranoia as Critical Methodology

It is our national tragedy.  We are obsessed with building labyrinths, where before there was open plain and sky.  To draw ever more complex patterns on the blank sheet.  We cannot abide that openness: it is terror to us.
~Thomas Pynchon, Gravity’s Rainbow

            The second section (“Un Perm’ au Casino Hermann Goering”) of Thomas Pynchon’s 1973 novel, Gravity’s Rainbow, presents readers with four maxims, humorously titled “Proverbs for Paranoids.”  They occur intermittently, although in close proximity to one another, throughout the section’s hundred pages, and are labeled as follows:
·       Proverbs for Paranoids, 1: You may never get to touch the Master, but you can tickle his creatures. (240)
·       Proverbs for Paranoids, 2: The innocence of the creatures is in inverse proportion to the immorality of the Master. (244)
·       Proverbs for Paranoids, 3: If they can get you asking the wrong questions, they don’t have to worry about answers. (255)
·       Proverbs for Paranoids, 4: You hide, they seek. (265; italics in original)

The Proverbs appear throughout the sequence as Tyrone Slothrop attempts to comprehend the conspiracy in which he is embroiled, “the plot against him” as the narrator describes it (240).  Of course, in Pynchon’s elaborate and complex textual world, the conspiracy is the novel, the literary machine in which Slothrop is merely one character among many.  The genius of Gravity’s Rainbow is the way it prevents its readers from achieving any veritable god’s-eye view; it repeatedly pulls the rug out from under its readers’ feet, incorporating their perceptions of the narrative back into the narrative, continually blurring the line between where the text ends and their interpretations of it begin.  Like Slothrop, readers become helplessly embroiled in an irreducible conspiracy.  As media theorist Friedrich Kittler describes the novel, it transforms its readers “from consumers of a narrative into hackers of a system” (162).
            Kittler identifies this transformation as indicative of the novel’s “critical-paranoid method,” a sentiment echoed by John Johnston in Information Multiplicity: American Fiction in the Age of Media Saturation (1998).  According to Johnston, Pynchon’s novel introduces a new organization of human existence in the wake of World War Two’s violent technological upheaval: “World War II as a watershed event in the growth of technology and scientific research is precisely the subject of Thomas Pynchon’s Gravity’s Rainbow, whose publication in 1973 endorsed what had already become a given within the sixties counterculture: that paranoia no longer designated a mental disorder but rather a critical method of information retrieval” (62; italics in original).  As Johnston insinuates, the postwar era witnessed a plethora of ultra-paranoiac literature, likely beginning with William S. Burroughs’s Naked Lunch (1959), but pursued through the work of J.G. Ballard, Philip K. Dick, Joseph McElroy, Don DeLillo, Kathy Acker, etc.  The degree to which these literatures embrace their paranoia varies: Dick succumbed almost entirely to a debilitating paranoia, while DeLillo maintains a critical distance.  The notion of paranoia-as-method, however, remains at the forefront of several texts by both writers.  To this day, Pynchon remains the godfather of critical, intellectual, and literary paranoiac fiction.
            The presence of paranoia-as-method predates the field of High Postmodernism, however, appearing as early as the mid-nineteenth century in stories such as Edgar Allan Poe’s “The Man of the Crowd” (1840) or in classic gothic/detective narratives such as Wilkie Collins’s The Moonstone (1868).  This paranoiac development does not dissipate with the advent of modernist fiction, but in fact finds itself repositioned and embraced in the work of surrealist writers such as André Breton, whose Nadja (1928) proposes to gather “facts which may belong to the order of pure observation, but which on each occasion present all the appearances of a signal” (19).  Details of paranoiac puzzle-solving appear throughout High Modernism, in works such as James Joyce’s Ulysses (1922), Virginia Woolf’s Mrs. Dalloway (1925), or William Faulkner’s Absalom, Absalom! (1936); but the power of paranoia as a critical method doesn’t fully materialize until the postwar fictions of Burroughs and Pynchon.  Even at this point, the expansive methodological rigor of paranoia, which I will call critical paranoia, would remain decades away.
            In the same passages where Gravity’s Rainbow lays out the four Proverbs for Paranoids, the narrator also relates the brief life of Paranoid Systems of History – “a short-lived periodical of the 1920s whose plates have all mysteriously vanished” (241).  The narrator goes on to reveal that the periodical has suggested, “in more than one editorial, that the whole German Inflation was created deliberately, simply to drive young enthusiasts of the Cybernetic Tradition into Control work” (241).  Throughout Pynchon’s encyclopedic text, numerous conspiracy theories are broached, from the systematic devastation of European nations to the possibility that Franklin Delano Roosevelt is actually an automaton.  The entire narrative operates according to a kind of conspiratorial logic.  If there is a conspiracy, then there must be a narrative explaining the conspiracy; yet the textual totality abides by only a minimal narrative coherency, opting instead for lines of flight that gravitate toward chaos, disrupting the internal consistency that characters (and readers) attempt to impose on the text.
            Line of flight is a Deleuzian concept, outlined by Deleuze and Guattari in A Thousand Plateaus (1980).  In relatively simple terms, it designates the attempt by various energies to escape the territorializing and colonizing confines of the social body.  A traditional Marxist methodology would likely define such confines as ideology, but that doesn’t quite capture Deleuze and Guattari’s sense.  Deleuze-Guattarian territorialization signals a complex technological/post-sociological process of libidinal intensification, of multiple becomings and formalizations that are continually battling the entropy that seeks to dismantle them: “It is not a question of ideology,” they write in Anti-Oedipus; “There is an unconscious libidinal investment of the social field that coexists, but does not necessarily coincide, with the preconscious investments […] It is not an ideological problem, a problem of failing to recognize, or of being subject to, an illusion.  It is a problem of desire, and desire is part of the infrastructure” (104; italics in original).  Allowing for some sense of historical development, we can admit that Louis Althusser’s structural Marxism and Jean-François Lyotard’s post-Marxism bear some similarities to Deleuze and Guattari’s philosophy; but none of these theories conform to the traditional strategies of Marxist critique, and the work of Pynchon and other postmodernists can tell us something of why.
            We can sum up traditional Marxism’s treatment of conspiracy through Fredric Jameson’s comments in Postmodernism: or, The Cultural Logic of Late Capitalism (1991).  According to Jameson, conspiracy theories are causal (i.e. linear) interpretations of emergent (i.e. nonlinear) phenomena.  They are psychic reductions of vastly complex, systemic conditions.  Jameson describes conspiratorial literature as a kind of high-tech paranoia, in which systems and networks are “narratively mobilized by labyrinthine conspiracies,” reproduced in a manner that is linearly comprehensible (38).  Conspiracy theory, however, is a “degraded attempt […] to think the impossible totality of the contemporary world system” (38).  In short, Marxist theory treats paranoia in a symptomatic fashion: paranoia is a psychosis, a reaction to the overwhelming pressures of techno-culture.
            Writers such as Pynchon illuminate an alternative to the sociological approach, which treats paranoia as indicative of a larger social problem to be diagnosed.  He displaces paranoia and its cousin, schizophrenia, from the realm of psychic energy to that of material energy, energies of systemic forces at large.  Paranoia shifts from a psychic problem of perceiving the world to a mode of operating technologically within the world.  Iranian philosopher Reza Negarestani makes this point in Cyclonopedia (2008) when he compares psychoanalysis with archaeology: “According to the archaeological law of contemporary military doctrines and Freudian psychoanalysis, for every inconsistency or anomaly visible on the ground, there is a buried schizoid consistency; to reach the schizoid consistency, a paranoid consistency or plane of paranoia must be traversed” (54).  Taking Negarestani’s lead (which coincides with Pynchon’s), we might suggest that Freudian psychoanalysis was never well-suited to the psyche at all, but rather to the material distribution of energies along planes of various scales, whether these be microbiological or international.
            In this manner, paranoia is not something to be diagnosed as symptomatic, dismissed as politically reactionary, or pursued as conspiratorially perceptive: it is to be methodologically recalibrated as an instrument of information processing.  Paranoia composes narratives, albeit in a manner that necessarily leaves plot holes; surface inconsistency can only be explained by leaps of faith, assumptions that cannot be proven, in order to construct a linear and often malign explanation.  This is the compulsion of the conspiracy theorist.  Rather than accept such assumptions as rational, surface inconsistency should be countered by a drive toward subterranean consistency, an appeal to the neutral and nonintentional complexity of schizoid systems – what Pynchon articulates as the incomprehensibility of terrestrial matter itself, “the World just before men.  Too violently pitched alive in constant flow ever to be seen by men directly.  They are meant only to look at it dead, in still strata, transputrefied to oil or coal” (GR 734).  Unable to look at it alive, the conspiracy theorist entertains fantasies about its evil plans, its malign motives, as Oedipa Maas does at the end of The Crying of Lot 49 (1965): “[Pierce Inverarity] might himself have discovered The Tristero, and encrypted that in the will, buying into just enough to be sure she’d find it.  Or he might even have tried to survive death, as a paranoia; as a pure conspiracy against someone he loved” (148).  By contrast, the critical-paranoiac theorists force themselves to see through the senselessness to the schizoid consistency beneath: the horrifying proliferation of networks, systems, and matter.

            If Cthulhu was born today, its name would be Skynet – but then, both are still paranoid reductions of the world we live in.  For the closest thing to an accurate expression of Western culture’s schizophrenic processes, read Reza Negarestani’s Cyclonopedia.

Saturday, June 18, 2016

Ideologies of the Immediate: On the Zeal of Trumpish Populism

With the logic of Capital, the aspect of Marxism that remains alive is, at least, this sense of the differend, which forbids any reconciliation of the parties in the idiom of either one of them.  Something like this occurred in 1968, and has occurred in the women’s movement for ten years, and the differend underlies the question of the immigrant workers.  There are other cases.
~Jean-François Lyotard[i]

            In 1980, Isaac Asimov succinctly expressed a crucial deadlock of intellectual argument in America.  Specifically, he suggested that intellectualism faces dire opposition from those who dismiss the very grounds of intellectualism as something too institutionally complex, something too elitist for the uneducated – and, therefore, something dangerous to the homegrown and obvious existence of patriotic common sense:
There is a cult of ignorance in the United States, and there always has been. Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’[ii]
The cult of ignorance is more powerful today than ever, I would suggest – fueled by an engine of populist paranoia and intolerance.  Now, before I leap too quickly into the cauldron of political squabbling and rhetorical backwash, let me make it clear: I don’t think the left is entirely innocent of this ignorance, nor do I think they’re above the strategies of propaganda.  My concern lies primarily with what I perceive among the contemporary Republican base – which has become more radical of late, and which I will refer to as Trumpish populism – as an ideology of the immediate: crudely, the idea that if I can’t understand it, then it’s bad.
            Simply put, Trumpish populism is constructed around an individualistic paranoia, a concern primarily for individual well-being and rationalism.  This all sounds fine until we begin to perceive the consequences: namely, that the Trumpish version of personal well-being and rationalism often results in fervent suspicion of everyone else (ranging from immigrants to Barack Obama himself) and embarrassing rationalizations of highly questionable strategies.  Trump’s populism perceives itself as the contemporary American underdog, a surreal blend of down-home patriotism and Hollywood showboating that promises high-society success via a seriously warped pursuit of “American” values (which include, among other things, isolationism, xenophobia, and political incorrectness).  These values appear, to those who maintain them, self-evident.  Like the truths illumined by the American Declaration of Independence, they need no argument to justify them.  They simply are, and to question them means to identify oneself as a traitor, un-American, or even a terrorist sympathizer.[iii]
            The idea of self-evidence is quite compelling because it means that we need not look any further.  The answer is right there in front of us, it is immediate.  It is an example of the broader institutional notion that Nick Srnicek and Alex Williams label folk politics, a local, tangible, and actionable mentality toward political organization:
At first approximation, we can therefore define folk politics as a collective and historically constructed political common sense that has become out of joint with the actual mechanisms of power.  As our political, economic, social and technological world changes, tactics and strategies which were previously capable of transforming collective power into emancipatory gains have now become drained of their effectiveness.  As the common sense of today’s left, folk politics often operates intuitively, uncritically and unconsciously.[iv]
There is nothing inherently wrong with folk politics, according to Srnicek and Williams; their argument is rather that the current conditions of large-scale social organization exceed the capacities of localized action.  Folk politics may present temporary and limited solutions, but these solutions do not address larger systemic contradictions.  They appear successful, however, because their results are immediately perceptible.  We can see the impact of our actions.
            This is the rhetoric of Trumpish populism: change that affects me, change that benefits me – not Obama’s change, the change of complex systems and institutional augmentation.  Change that, because its effects register on larger scales than the individual, must be bad; and as the ones who champion such change, the intellectuals must be the bad guys.  The political and rhetorical divide that occurs here is profound, and it is powerful.  To those who support Trump, intellectual activities and systemic change can only appear within their vocabulary as resolutely un-American since they attempt to account for energies and entities beyond American borders.[v]  Those who resist the folk political tendencies of Trumpish populism, meanwhile, can only interpret its purported nationalism as xenophobic and intolerant (as I myself often do).  Language is everything in this opposition.  More than we like to admit, terminology dictates how we think.
            There exists between these two groups, I want to suggest, something like what Jean-François Lyotard calls a differend.[vi]  The differend constitutes a rhetorical distinction between two modes of thought, each with their own vocabulary, and between which any settlement or agreement “appears in the idiom of one of them while the tort from which the other suffers cannot signify itself in this idiom” (9).  In simpler terms, power determines language; and when a disgruntled group attempts to express its grievances, it can only do so in the language of the power structure (or structures) within which it operates.  The differend designates an ideological roadblock, as it were, since it effectively sterilizes communication between groups, preventing the transmission of grievances.
            What I’m describing between Trumpish populism and the modern intellectualism it opposes is not quite the same thing as Lyotard’s differend, however, since the differend forces disgruntled groups to communicate in a decidedly non-radical voice – yet communication still occurs.  What manifests between Trumpish populism and intellectualism is not a line of sterilized communication so much as miscommunication, or a failure of rhetorical translation.  When one group translates the other’s idiom into its own, it exposes the opposing idiom as premised on political principles in direct contrast with its own.  I claim that these mistranslations boil down to a determining disagreement: the populist, folk-political privileging of the immediate versus the intellectualist privileging of systemic complexity.  A reductive yet illuminating bit of evidence for this disagreement can be extracted from the ongoing debate over global warming, or the more politically viable term, “climate change.”[vii]
            Skeptics of global warming often cite localized incidents and events, record-breaking cold winters or notably cool summers, as evidence against global warming.  These details are compelling to those who already distrust the science behind global warming because they resonate with what we as individuals can immediately perceive.  The result, unsurprisingly, is further disbelief in the “hoax” of global warming.[viii]  Yet scientists repeatedly insist that emphasizing such details is misleading, and that global warming, or “climate change,” is part of an observable shift in global levels of carbon dioxide – a shift that cannot be deduced from simply walking outside and noting a particularly chilly summer morning.  It can be difficult to convince people of this, though, especially when they’ve committed themselves to an ideology that privileges common sense.[ix]  If phenomena occur on a scale beyond the human senses, we tend to be naturally skeptical, and with good reason; but in this historical period of considerably advanced scientific procedures, there is a wealth of literature and published findings to sink our teeth into.
Unfortunately, anti-intellectualism has already made up its mind on such findings.  In a Boston Globe piece from April 2016, Thomas Levenson communicated to his (hopefully attentive) readership just how dismissive of global warming Trump is.  When asked whether he supported a climate change agenda, Trump admitted that he didn’t believe in it:
“I think there’s a change in weather. I am not a great believer in man-made climate change. I’m not a great believer.” That’s not really an argument, of course — it’s more an evocation of that old Monkees tune. But stripped to essentials, the GOP presidential front-runner’s stance is essentially the same as that of his chief rival, Texas Senator Ted Cruz, who says more bluntly that “climate change is the perfect pseudoscientific theory for a big government politician who wants more power.”[x]
This is the party line among Trumpish populism: climate change (an already vague and impotent term) is nothing more than a tactical ploy by leftists.  A ploy for political power.  Such ploys are not limited to climate change; the accusation has become the populist mantra against any and all forms of even the slightest suggestion of intellectual thought.  Attempts to address issues on a complex scale amount to the malign machinations of an elitist few who see such issues as a means to gain power.
            The regrettable truth is that in some cases such paranoia may not be entirely inaccurate.  Issues such as climate change and gun control have been relentlessly politicized and propagandized by both sides, enough to make it difficult for dedicated intellectuals to defend their positions.  The sad irony of the matter lies in Trumpish populism’s dismissal of leftist intellectuals as sheep enslaved to their political masters, closet socialists with an eerily suspicious Big Brother vibe.  Yet in truth, the majority of such intellectuals are working desperately to construct models of the world that embrace its complexity rather than ignore it, and to offer solutions that in turn get politicized once they enter the wider public sphere.[xi]  The myopia that populists attribute to intellectuals is actually an effect of their own myopia, a glitch in their hardware that they then project onto the external world.  They exorcise their myopia, their blind spot – their ignorance, as Asimov would say – and reimagine it as a shortcoming of the world around them.
            We all suffer from this glitch.  It’s a constitutive aspect of our embodied condition.  The key lies in being aware of it.
            This too, unfortunately, requires more than common sense.

[i]              Appearing in 1982, Lyotard’s comment is uncannily perceptive.
[ii]             Isaac Asimov, “A Cult of Ignorance,” Newsweek, 21 January 1980.
[iii]             Of course, one of the most successful elements of America’s founding documents is the suggestion that the rights they promote somehow preexisted the composition of those documents.  In other words, that the right to bear arms is somehow an immutable, universal, and absolute right, when in fact it was the Constitution itself that created this right.
[iv]             Srnicek and Williams, Inventing the Future: Postcapitalism and a World Without Work, New York: Verso, 2015, p. 10.
[v]             Consider, for example, attempts to discuss ISIS and other forms of non-Western extremism (the West has its own fundamentalist firebrands, of course) in terms of geopolitical history and foreign diplomacy.  Almost immediately, the premises of such a discussion are accused of embracing a terrorist apologetics.  In other words, this kind of terminology appears to rationalize (and thereby excuse) the existence of terrorism.
[vi]             Lyotard, “The Differend,” 1982, Political Writings, trans. Bill Readings and Kevin Paul Geiman, Minneapolis: U of Minnesota P, 1993, pp. 8-10.
[vii]            For some insight on the weak construction “climate change,” see Timothy Morton’s short blog post on why he insists on using “global warming.”
[viii]            Conspiracy theories like this one are powerful among Trumpish populists.  See for example the coverage of the “documentary” Climate Hustle.
[ix]             Common sense being a historically conditioned and fluctuating concept, despite contemporary common-sense attitudes that tell us otherwise!  It’s easy to see how such a limited, myopic, and solipsistic view of the world feeds its own presuppositions.
[x]             Levenson, “Doubting Climate Change is Not Enough,” Boston Globe, 17 April 2016.
[xi]             I don’t want to suggest that this is a simple, one-way causal route.  It is definitely true that intellectuals and academics are influenced by political agendas.  The point I would stress is that our form of social action is politics, and we have a limited amount of platforms to choose from – platforms that inevitably place restrictions on our epistemological vision of the world.  Everyone sacrifices something when supporting a political agenda.  Hopefully, we remember what that something is.