The Ghost of Cognition Past, or Thinking Like An Algorithm

[Image: Wiring the ENIAC; via Wired]

One of many things I love about writing—that is, engaging in writing as an activity—is how it facilitates a discovery of connections between otherwise unrelated things. Writing reveals and even relies upon analogies, metaphors, and unexpected similarities: there is resonance between a story in the news and a medieval European folktale, say, or between a photo taken in a war-wrecked city and an 18th-century landscape painting. These sorts of relations might remain dormant or unnoticed until writing brings them to the foreground: previously unconnected topics and themes begin to interact, developing meanings not present in those original subjects on their own.

Wildfires burning in the Arctic might bring to mind infernal images from Paradise Lost or even intimations of an unwritten J.G. Ballard novel, pushing a simple tale of natural disaster to new symbolic heights, something mythic and larger than the story at hand. Learning that U.S. Naval researchers on the Gulf Coast have used the marine slime of a “300-million-year old creature” to develop 21st-century body armor might conjure images from classical mythology or even from H.P. Lovecraft: Neptunian biotech wed with Cthulhoid military terror.

In other words, writing means that one thing can be crosswired or brought into contrast with another for the specific purpose of fueling further imaginative connections, new themes to be pulled apart and lengthened, teased out to form plots, characters, and scenes.

In addition, a writer of fiction might stage an otherwise straightforward storyline in an unexpected setting, in order to reveal something new about both. It’s a hard-boiled detective thriller—set on an international space station. It’s a heist film—set at the bottom of the sea. It’s a procedural missing-person mystery—set on a remote military base in Afghanistan.

Thinking like a writer would mean asking why things have happened in this way and not another—in this place and not another—and seeing what happens when you begin to switch things around. It’s about strategic recombination.

I mention all this after reading a new essay by artist and critic James Bridle about algorithmic content generation as seen in children’s videos on YouTube. The piece is worth reading for yourself, but I wanted to highlight a few things here.


In brief, the essay suggests that an increasingly odd, even nonsensical subcategory of children’s video is emerging on YouTube. The content of these videos, Bridle writes, comes from what he calls “keyword/hashtag association.” That is, popular keyword searches have become a stimulus for producing new videos whose content is reverse-engineered from those searches.

To use an entirely fictional example of what this means, let’s imagine that, following a popular Saturday Night Live sketch, millions of people begin Googling “Pokémon Go Ewan McGregor.” In the emerging YouTube media ecology that Bridle documents, someone with an entrepreneurial spirit would immediately make a Pokémon Go video featuring Ewan McGregor both to satisfy this peculiar cultural urge and to profit from the anticipated traffic.

Content-generation through keyword mixing is “a whole dark art unto itself,” Bridle suggests. As a particular keyword or hashtag begins to trend, “content producers pile onto it, creating thousands and thousands more of these videos in every possible iteration.” Imagine Ewan McGregor playing Pokémon Go, forever.

What’s unusual here, however, and what Bridle specifically highlights in his essay, is that this creative process is becoming automated: machine-learning algorithms are taking note of trending keyword searches or popular hashtag combinations, then recommending the production of content to match those otherwise arbitrary sets. For Bridle, the results verge on the incomprehensible—less Big Data, say, than Big Dada.
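A crude sketch of that loop (my own invention, with a made-up query log; the real systems Bridle gestures at are opaque and surely far more elaborate) might be nothing more than counting trending searches and emitting the most frequent ones as production prompts:

```python
# Crude sketch of trend-driven content recommendation as a frequency count.
# The query log is invented for illustration; this is not any platform's
# actual pipeline, only the bare shape of the loop described above.
from collections import Counter

search_log = [
    "pokemon go ewan mcgregor",
    "pokemon go ewan mcgregor",
    "surprise eggs",
    "pokemon go ewan mcgregor",
    "surprise eggs",
    "learn colors",
]

# "Taking note" of trending keywords reduces, at its simplest, to counting;
# the top entries become prompts for new videos.
for query, hits in Counter(search_log).most_common(2):
    print(f"Recommend producing a video for: {query!r} ({hits} searches)")
```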

This is by no means new. Recall the origin of House of Cards on Netflix. Netflix learned from its massive trove of consumer data that its customers liked, among other things, David Fincher films, political thrillers, and the actor Kevin Spacey. As David Carr explained for the New York Times back in 2013, this suggested the outline of a possible series: “With those three circles of interest, Netflix was able to find a Venn diagram intersection that suggested that buying the series would be a very good bet on original programming.”

In other words, House of Cards was produced because it matched a data set, an example of “keyword/hashtag association” becoming video.
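As a toy illustration of the Venn-diagram logic Carr describes (entirely my own sketch, with invented viewer data, not anything drawn from Netflix’s actual systems), the intersection of those three circles of interest is, at its crudest, a set intersection over audiences:

```python
# Toy sketch of the Venn-diagram intersection described above.
# The viewer sets are invented; real recommendation systems are
# vastly more sophisticated than a literal set intersection.
viewers_by_interest = {
    "david_fincher_films": {"ann", "bob", "cho", "dev"},
    "political_thrillers": {"bob", "cho", "eve"},
    "kevin_spacey": {"bob", "cho", "dev"},
}

# The overlap of all three interest circles: the audience that made
# the series look like "a very good bet on original programming."
core_audience = set.intersection(*viewers_by_interest.values())
print(core_audience)  # {'bob', 'cho'} (set order may vary)
```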

The question here would be: what if, instead of a human producer, a machine-learning algorithm had been tasked with analyzing Netflix consumer data and generating an idea for a new TV show? What if that recommendation algorithm didn’t quite understand which combinations would be good or worth watching? It’s not hard to imagine an unwatchably surreal, even uncanny television show resulting from this, something that seems to make more sense as a data-collection exercise than as a coherent plot—yet Bridle suggests that this is exactly what’s happening in the world of children’s videos online.

[Image: From Metropolis]

In some of these videos, Bridle explains, keyword-based programming might mean something as basic as altering a few words in a script, then having actors playfully act out those new scenarios. Actors might incorporate new toys, new types of candy, or even a particular child’s name: “Matt” on a “donkey” at “the zoo” becomes “Matt” on a “horse” at “the zoo” becomes “Carla” on a “horse” at “home.” Each variant keyword combination then results in its own short video, and each of these videos can be monetized. The possible future recombinations are, in effect, infinite.
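To make that recombination mechanic concrete, here is a minimal sketch of how a small keyword vocabulary multiplies into a catalog of video variants. The keywords and the make_video stub are hypothetical stand-ins of my own, not anything taken from Bridle’s essay or a real producer’s pipeline:

```python
# Minimal sketch of keyword recombination into video variants.
# All keywords and the production stub are invented illustrations.
from itertools import product

names = ["Matt", "Carla"]
animals = ["donkey", "horse"]
places = ["the zoo", "home"]

def make_video(name: str, animal: str, place: str) -> str:
    """Stand-in for rendering and uploading one monetizable variant."""
    return f"{name} on a {animal} at {place}"

# Every keyword combination becomes its own short video:
for name, animal, place in product(names, animals, places):
    print(make_video(name, animal, place))

# 2 x 2 x 2 keywords already yield 8 videos; every keyword added to any
# list multiplies the catalog again, which is why the recombinations
# are, in practice, endless.
```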

In an age of easily produced digital animations, Bridle adds, these sorts of keyword micro-variants can be produced both extremely quickly and very nearly automatically. Some YouTube producers have even eliminated “human actors” altogether, he writes, “to create infinite reconfigurable versions of the same videos over and over again. What is occurring here is clearly automated. Stock animations, audio tracks, and lists of keywords being assembled in their thousands to produce an endless stream of videos.”

Bridle notes with worry that it is nearly impossible here “to parse out the gap between human and machine.”

Going further, he suggests that the automated production of new videos based on popular search terms has resulted in scenes so troubling that children should not be exposed to them—but, interestingly, Bridle’s reaction here seems to be based on those videos’ content. That is, the videos feature animated characters appearing without heads, or kids being buried alive in sandboxes, or even the painful sounds of babies crying.

What I find unsettling here is slightly different. The content, in my opinion, is simply strange: a kind of low-rent surrealism for kids, David Lynch-lite for toddlers. For thousands of years, Western folktales have featured cannibals, incest, haunted houses, even John Carpenter-like biological transformations, from woman to tree, or from man to pig and back again. Children burn to death on chariots in the sky; sons fall from atmospheric heights into the sea. These myths seem more nightmarish—on the level of content—than some of Bridle’s chosen YouTube videos.

Instead, I would argue, what’s disturbing here is what the content suggests about how things should be connected. The real risk would seem to be that children exposed to recommendation algorithms at an early age might begin to emulate them cognitively, learning how to think, reason, and associate based on inhuman leaps of machine logic.

Bridle’s inability “to parse out the gap between human and machine” might soon apply not just to these sorts of YouTube videos but to the children who grew up watching them.

[Image: Replicants in Blade Runner]

One of my favorite scenes in Umberto Eco’s novel Foucault’s Pendulum is when a character named Jacopo Belbo describes different types of people. Everyone in the world, Belbo suggests, is one of only four types: there are “cretins, fools, morons, and lunatics.”

In the context of the present discussion, it is interesting to note that these categories are defined by modes of reasoning. For example, “Fools don’t claim that cats bark,” Belbo explains, “but they talk about cats when everyone else is talking about dogs.” They get their references wrong.

It is Eco’s “lunatic,” however, who offers a particularly interesting character type for us to consider: the lunatic, we read, is “a moron who doesn’t know the ropes. The moron proves his [own] thesis; he has a logic, however twisted it may be. The lunatic, on the other hand, doesn’t concern himself at all with logic; he works by short circuits. For him, everything proves everything else. The lunatic is all idée fixe, and whatever he comes across confirms his lunacy. You can tell him by the liberties he takes with common sense, by his flashes of inspiration…”

It might soon be time to suggest a fifth category, something beyond the lunatic, where thinking like an algorithm becomes its own strange form of reasoning, an alien logic gradually accepted as human over two or three generations to come.

Assuming I have read Bridle’s essay correctly—and it is entirely possible I have not—he seems disturbed by the content of these videos. I think the more troubling aspect, however, is in how they suggest kids should think. They replace narrative reason with algorithmic recommendation, connecting events and objects in weird, illogical bursts lacking any semblance of internal coherence, where the sudden appearance of something completely irrelevant can nonetheless be explained because of its keyword-search frequency. Having a conversation with someone who thinks like this—who “thinks” like this—would be utterly alien, if not logically impossible.

So, to return to this post’s beginning, one of the thrills of thinking like a writer, so to speak, is precisely in how it encourages one to bring together things that might not otherwise belong on the same page, and to work toward understanding why these apparently unrelated subjects might secretly be connected.

But what is thinking like an algorithm?

It will be interesting to see if algorithmically assembled material can still offer the sort of interpretive challenge posed by narrative writing, or if the only appropriate response to the kinds of content Bridle describes will be passive resignation, indifference, knowing that a data set somewhere produced a series of keywords and that the story before you goes no deeper than that. So you simply watch the next video. And the next. And the next.

6 thoughts on “The Ghost of Cognition Past, or Thinking Like An Algorithm”

  1. “algorithmic recommendation, connecting events and objects in weird, illogical bursts” ← surely “weird, logical bursts”? Since it’s algorithms…

    Also “thatr” → “that”.

    1. Thanks—fixed the typo!

      In terms of rhetorical rigor, I tentatively agree that I should have said “logical” in that case; however, the question of how machine logic differs from humanist logic, or how algorithmic recommendation is different than narrative, and how these lead to different things, is one of the most interesting problems here.

  2. The way I look at it is that yes, the content is disturbing. The concern that I have isn’t so much around the unexpected juxtapositions, but more around the lack of any evaluation of the results.

    So, as you say, “they suggest how kids should think” — from a quick look at some of the videos, the suggestion is that kids should think without considering any of the consequences of their actions, be they aesthetic, moral, or emotional. That “flatness”/lack of affect/lack of consideration would be a concern in most settings, but especially so when it comes to content designed for very young children.

    (I’m assuming there’s some filtering of the generated content going on to minimize the chance that it gets tripped up by any of YouTube’s content filters, but that’s mostly bots reverse-engineering bots, so it wouldn’t be something that would be easy to comprehend.)

  3. Geoff, I wholeheartedly agree with your initial thoughts about writing and thought-association of various kinds. When I wrote more frequently, I often found my brain firing off a volley of associations after bumping into even a humdrum concept like “seagull” — suddenly thinking of all the novels and plays and films in which I’d encountered seagulls, and what they meant in each context, and why those meanings and treatments differed… It slowed down both my reading and writing quite a bit, come to think of it (though not a bad thing).
    To your thoughts about algorithmic logic, I can add that I was recently prompted to start working on a (now shelved) documentary about deep learning by the fact that the decisions a neural network makes — the associations it draws, the features it extracts — cannot be described to humans post hoc. There’s no code to read at that stage of the cognition; simply an emergent, hyper-complex system of weightings and logical connections that creates outward performance we can measure but no rules we can read or judge. So I think the problem you and Bridle are describing is about to get, um, deeper.

  4. “They replace narrative reason with algorithmic recommendation, connecting events and objects in weird, illogical bursts lacking any semblance of internal coherence, where the sudden appearance of something completely irrelevant can nonetheless be explained because of its keyword-search frequency.”

    This does remind me of working with children, actually. The thinking is often associative, with no real logic besides “This event/object/idea has had some kind of impact on me in the past. It will now connect to my present experience.”
