Fractalize Me

The genes that cause Romanesco, a kind of cauliflower, to grow in a fractal pattern have been identified. Researchers were subsequently able to manipulate one of those genes and get it to function inside another plant—thale cress—producing fractal blooms.

The language used to describe this is interesting in its own right—a vocabulary of memory, transience, perturbation, and abandoned flowering.

In the words of the researchers’ abstract, “we found that curd self-similarity arises because the meristems fail to form flowers but keep the ‘memory’ of their transient passage in a floral state. Additional mutations affecting meristem growth can induce the production of conical structures reminiscent of the conspicuous fractal Romanesco shape. This study reveals how fractal-like forms may emerge from the combination of key, defined perturbations of floral developmental programs and growth dynamics.”

It’s the fact that this gene appears to function in other plants, though, that is blowing my mind. Give this technique another ten or twenty years, and the resulting experiments—and the subsequent landscapes—seem endless, from gardens of infinitely self-similar roses and orchids to forests populated by bubbling forms of fractal pines, roiling oaks, and ivies.

Until, of course, the gene inevitably escapes, going mobile, infecting insects and animals, producing confused anatomies in fractal landscapes, like minor creatures in a Jeff VanderMeer novel, before breaching the human genome, and oracular multicephalous children are born, their bodies transitioning through monstrosities of self-reminiscence and new limbs, mythological, infinitely incomplete, cursed with endless becoming.

In any case, read more over at ScienceNews, and check out the actual paper at Science.

The Age of Horror

[Image: “Clouds, Sun and Sea” (1952) by Max Ernst, courtesy Phillips.]

There’s an interesting space where early modern, mostly 19th-century earth sciences overlap with armchair conjectures about the origins of human civilization. It’s a mix of pure pseudo-science, science-adjacent speculation, and something more like theology, as writers of the time tried to adjust new geological hypotheses and emerging biological evidence—Charles Lyell, Charles Darwin, etc.—to fit with Biblical creation myths and cosmogonic legends borrowed from other cultures. Was there really a Flood? If humans are separate from the animal kingdom, how did we first arrive or appear on Earth?

It is not those particular questions that interest me—although, if I’m being honest, I will happily stay at the table for hours talking with you about the Black Sea deluge hypothesis or the history of Doggerland, two of the most interesting things I’ve ever read about, and whether or not they might have influenced early human legends of a Flood.

Instead, there are at least two things worth pointing out here. One is that these sorts of people never really went away; they just got jobs at the History Channel.

The other is that impossibly long celestial cycles, ancient astronomical records, the precession of the Earth’s poles, and weird, racist ideas about the “fall of Man” all came together into a series of speculations that seem straight out of H.P. Lovecraft.

Take, for example, Sampson Arnold Mackey and his “Age of Horror.”

[Image: Diagram from The Mythological Astronomy in Three Parts by Sampson Arnold Mackey.]

As Joscelyn Godwin writes in a book called The Theosophical Enlightenment, Mackey—a shoemaker, not an astronomer—was fascinated by “the inclination of the earth’s axis and its changes over long spans of time. Astronomers have known at least since classical times that the Earth’s axis rotates once in about 25,920 years, pointing successively at different stars, of which the current one is Polaris, the North Star. One result of this cycle is the ‘precession of the equinoxes,’ according to which the spring-point of the sun moves around the twelve signs of the zodiac, spending about 2160 years in each sign.”

Of course, the assumption that these signs and stars might somehow influence life on Earth is the point at which astronomy morphs into astrology.

Godwin goes on to explain that—contrary to “most astronomers” of his time—Mackey assumed the Earth’s precession was dramatic and irregular, to the extent that, as Mackey speculated, “the earth’s axis describes not a circle but an alternately expanding and contracting spiral, each turn comprising one cycle of the precession of the equinoxes, and at the same time altering the angle of inclination by four degrees.”

The upshot of this is that, at various points in the history of our planet, Mackey believed that the Earth’s “inclination was much greater, to the point at which it lay in the same plane as the earth’s orbit around the sun.”

This sounds inconsequential, but it would have had huge seasonal and climatic effects. For example, Godwin explains, “At the maximum angle, each hemisphere would be pointed directly at the sun day and night during the summer, and pointed away for weeks on end during the winter. These extremes of light and dark, of heat and cold, would be virtually insupportable for life as we know it. In Mackey’s words, it was an ‘age of horror’ for the planet.”

[Image: Diagram from The Mythological Astronomy in Three Parts by Sampson Arnold Mackey.]

The flipside of this, for Mackey, is that the Earth would have gone back and forth, over titanic gulfs of time, between two angular extremes. Specifically, his model required an opposite extreme of planetary rotation in which “there would be no seasons on earth, but a perpetual spring and a ‘golden age.’ Then the cycle would begin again.”

None of this would have been recent: “Mackey dates the Age of Horror at 425,000 years in the past, the Golden Age about a million years ago, and its recurrence 150,000 years from now.”

Nevertheless, Godwin writes, “It was essential to [Mackey’s] system of mythography that the Age of Horror should have been witnessed and survived by a few human beings, its dreadful memory passing into the mythology of every land.”

For Mackey, the implications of this wobble—this dramatic precession between a Golden Age and an Age of Horror, between the darkness of Hell and the sunlight of Paradise—would have been highly significant for the evolution of human civilization.

In other words, either we are coming out of an age of eternal winter and emerging slowly, every minute of the day, every year of the century, into a time of endless sunlight and terrestrial calm, or we are inevitably falling, tipping, losing our planetary balance as we pass into near-permanent night, a frozen Hell of ruined continents and dead seas buried beneath plates of ice.

[Image: The August 2017 total eclipse of the sun, via NASA.]

One of the weirder aspects of all this—something Godwin himself documents in another book, called Arktos—is that these sorts of ideas eventually informed, among other things, Nazi political ideology and even some of today’s reactionary alt-right.

The idea that there was once a Hyperborean super-civilization, a lost Aryan race once at home in the Arctic north, lives on. It’s what we might call the cult of the fallen Northerner.

[Image: “Cairn in Snow” (1807) by Caspar David Friedrich.]

What actually interests me here, though, is the suggestion that planetary mega-cycles far too long for any individual human life to experience might be slowly influencing our myths, our cultures, our consciousness (such as it is).

My point is not to suggest that this is somehow true—to say that astrologers and precession-truthers are right—but simply to say that this is a fascinating idea and it has within it nearly limitless potential for new films, novels, and myths, stories where entirely different ways of thinking emerge on planets with extreme seasonal inclinations or unusual polar relationships to the stars.

[Image: From Pitch Black, via Supernova Condensate.]

Think of the only good scene in an otherwise bad movie, 2000’s Pitch Black, where the survivors of a crash on a remote human planetary outpost discover an orrery—a model of the planet they’re standing on—inside an abandoned building.

Playing with the model, the survivors realize that the world they’ve just crashed on is about to be eclipsed by a nearby super-planet, plunging them into a night that will last several months (or weeks or years—I saw the film 20 years ago and don’t remember).

Just imagine the sorts of horrors this might inspire—an entire planet going dark perhaps for centuries, doomed by its passage through space.

[Image: Adolph Gottlieb, courtesy Hollis Taggart.]

In any case, the idea that the earliest human beings lived through something like this hundreds of thousands of years ago—an imminent night, a looming darkness, an Age of Horror that imprinted itself upon the human imagination with effects lasting to this day—would mean that what we think of as human psychology is just an angular epiphenomenon of planetary tilt. Call it orbital determinism.

(Very vaguely related: a planet without a sun.)

Fables of the Permanent and Insatiable

[Image: An otherwise unrelated photo of fire-fighting foam, via Wikipedia.]

There are at least two classes of materials that have always interested me: synthetic materials designed to be so resistant and indestructible that they verge on a kind of supernatural longevity, and engineered biomaterials, such as enzymes or microbes, designed to consume exactly these sorts of super-resistant materials.

There was a strangely haunting line in a recent tweet by journalist Sharon Lerner, for example: “Turns out it’s really hard to burn something that was designed to put out fires.” Lerner is specifically referring to a plant in upstate New York that was contracted to burn fire-fighting foam, a kind of industrial Ouroboros or contradiction in terms. How do you burn that which was made to resist fire?

Unsurprisingly, the plant is allegedly now surrounded by unburnt remnants of this unsuccessful incineration process, as “extremely persistent chemicals” have been found in the soil, groundwater, and bodies of nearby living creatures.

These chemicals are not literally indestructible, of course, but I am nevertheless fascinated by the almost mythic status of such materials: inhuman things that, Sorcerer’s Apprentice-like, cannot be turned off, controlled, or annihilated. In other words, we invent a hydrophobic industrial coating that resists water, only to find that, when it gets into streams and rivers and seas, it maintains this permanent separation from the water around it, never diluting, never breaking down, forming a kind of “extremely persistent” counter-ecology swirling around in the global deep.

Or we produce a new industrial adhesive so good at bonding that it cannot be separated from the things with which it has all but merged. In any other context, this would be pure metaphor, even folklore, a ghost story of possession and inseparable haunting. What if humans are actually too good at producing the permanent? What if we create something that cannot be killed or annihilated? It’s the golem myth all over again, this time set in the dust-free labs of BASF and 3M.

Coatings, metals, adhesives, composites: strange materials emerge from human laboratories that exceed any realistic human timescale, perhaps threatening to outlast geology itself. As continents melt in the heat of an expanding sun ten billion years from now, these ancient, undead materials will simply float to the top, resistant even to magma and celestial apocalypse. We will have created the supernatural, the uncannily permanent.

[Image: “Plastic-munching bacteria,” via PBS NewsHour.]

In any case, the flip-side of all this, then, is synthetic materials that have been designed to consume these very things. Every once in a while, for example, it’s announced that a lab somewhere has devised a new form of plastic-eating enzyme or that someone has discovered certain worms that eat plastic. In other words, there is now in the world a creature or thing that can degrade the eerily immortal materials coming from someone else’s lab down the hall. But what are the consequences of this, the metaphoric implications? What myths do we have of the omnivorous and insatiable?

It is not hard to imagine that classic sci-fi trope of something escaping from the lab and wreaking havoc in the outside world. At first, say, cars parked outside the laboratory where this stuff was developed begin showing structural wear; radio dials fall off; plastic handles on passenger seats break or even seem to be disintegrating. Then it appears inside houses, people accidentally taking it home with them in the pleats and folds of their cotton clothing, where this engineered microbe begins to feast on plastic housings for electrical connections, children’s toys, and kitchen goods, all of which have begun to age before failing entirely.

Then supermarkets and drugstores, then airports and planes themselves. Boats and ferries. Internal medical implants, from joints to stents. This plastic-eating organism begins to shift genes and mutate, inadvertently unleashed onto a world that seems exactly built for it, with new food everywhere in sight. Forty years later, no plastic exists. A hundred years later, even the cellulose in plants is threatened. The world is being consumed entirely.

My point—such as it is—is that materials science seems to operate within two mythic extremes, pulled back and forth between two supernatural ideals: there is that which resists to the point of uncanny permanence, of eerie immortality, and there is that which consumes to the point of universal insatiability, of boundless hunger. Both of these suggest such interesting fables, creating such otherworldly things and objects in the process.

Tax Incentives and the Human Imagination

[Image: Der Wanderer über dem Nebelmeer by Caspar David Friedrich (c. 1818).]

It would be interesting to look at locations of the American popular imagination, as seen in movies and TV, mapped against regional tax breaks for the film industry.

There was a brief span of time, for example, when rural Pennsylvania stood in for authentic Americana, a kind of Rust Belt imaginary, all pick-up trucks and hard-drinking younger brothers, stories framed against the hulking ruins of industrial landscapes—I’m thinking of Out of the Furnace or Prisoners, both released in 2013, or even 2010’s Unstoppable. Today, meanwhile, Georgia seems to have stepped into that niche, between The Outsider and, say, Mindhunter (season two), let alone Atlanta, no doubt precisely because Georgia has well-known tax incentives in place for filming.

My point is that an entire generation of people—not just Americans, but film viewers and coronavirus quarantine streamers and TV binge-watchers around the world—might have their imaginative landscapes shaped not by immaterial forces, by symbolic archetypes or universal rules bubbling up from the high-pressure depths of human psychology, but instead by tax breaks offered in particular U.S. states at particular moments in American history.

You grow up thinking about Gothic pine forests, or you fall asleep at night with visions of rain-soaked Georgia parking lots crowding your head, but it’s not just because of the aesthetic or atmospheric appeal of those landscapes; it’s because those landscapes are, in effect, receiving imaginative subsidies from local business bureaus. You’re dreaming of them for a reason.

Your mind is not immaterial, in other words, some angelic force waltzing across the surface of the world, stopping now and again to dwell on universal imagery, but something deeply mundane, something sculpted by ridiculous things, like whether or not camera crews in a given state get hotel room discounts for productions lasting more than two weeks.

Of course, you could extend a similar kind of analysis way back into art history and look at, say, the opening of particular landscapes in western Europe, after decades of war, suddenly made safe for cultured travelers such as Caspar David Friedrich, whose paintings later came to define an entire era of European and European-descended male imaginations. That wanderer over a sea of fog, in other words, was wandering through a very specific landscape during a very particular window of European political accessibility. Had things been different, had history taken a slightly different path, Friedrich might have been stuck in his parents’ house, painting still lifes and weed-choked alleyways, and who knows what images today’s solo hikers might be daydreaming about instead.

[Image: From The Outsider, courtesy HBO; I should mention that The Outsider was set and filmed primarily in Georgia, a departure from Stephen King’s novel, which was primarily set in Oklahoma.]

In any case, the humid forests of rural America, the looming water towers and abandoned industrial facilities, the kudzu-covered strip malls and furloughed police stations—picture the Louisianan expanses of True Detective (season one)—have come to represent the dark narrative potential of the contemporary world. But what if, say, North Dakota or Manitoba (where, for example, The Grudge was recently filmed) had offered better tax breaks?

My own childhood imagination was a world of sunlit suburbs, detached single-family homes, and long-shadowed neighborhood secrets, but, as to my larger point here, I also grew up watching movies like E.T., Poltergeist, Fright Night, and Blue Velvet—so, in a sense, of course I would think that’s what the world looked like.

[Image: From David Lynch’s Blue Velvet (1986), specifically via the site Velvet Eyes.]

So, again, it would be interesting to explore how one’s vision of the world—your most fundamental imagination of the cosmos—is being shaped for you by tax breaks, film incentives, and other, utterly trivial local concerns, like whether or not out-of-state catering companies can get refunds on expenditures over a certain amount, or whether actors can write off per diems as gifts, not income, affecting whether crime films or horror stories will be shot there, and thus where an entire generation’s future nightmares might be set.

Or, for that matter, you could look at when particular colors, paints, and pigments became affordable for artists of a certain era, resulting in all those dark and moody images you love to stare at in the local museum—e.g. the old joke that, at some point, Rembrandt simply bought too much purple. It wasn’t promethean inspiration; it was material surplus.

We see things for a reason, yet, over and over again, mistake our dreams for signs of the cosmic. Or, to put this another way, we are not surrounded by mythology; we are surrounded by economics. The latter is a superb and confusing mimic.

PoMo- Mytho- Geo-

[Image: “Model of an Earth Fastener on the Delphi Fault (Temple of Apollo)” (2019) by Kylie White; photo courtesy Moskowitz Bayse.]

Artist Kylie White has two new pieces up in a group show here in Los Angeles, called Grammars of Creation, on display at Moskowitz Bayse till March 16th, which I will return to in a second.

White had a great solo show at the same gallery almost exactly a year ago, featuring a series of geological faults modeled in richly veined, colored marble. Most also incorporated brass details, acting as so-called “Earth fasteners.”

[Images: From Six Significant Landscapes by Kylie White; photos courtesy Moskowitz Bayse.]

Gallery text explained at the time that White’s works “are at once sculptures, scale models, geologic diagrams, and proposals; each depicts an active fault line, a place of displaced terrain due to tectonic movement.”

The “proposal” in each work, of course, would be the fasteners: metal implants of a sort meant to span the rift of an open fault.

[Image: “Model of Earth Fastener on a Transform Fault; 1”=10” (2017) by Kylie White; note that this piece was not featured in Six Significant Landscapes. Photo courtesy Moskowitz Bayse.]

White’s fasteners seemed to suggest at least two things simultaneously: that perhaps we could fix the Earth’s surface in place, if only we had the means to stop faults from breaking open, but also that human interventions such as these, in otherwise colossal planetary landscapes, would be trivial at best, more sculptural than scientific, just temporary installations, not permanent features of a changing continent.

[Image: From Six Significant Landscapes by Kylie White; photo courtesy Moskowitz Bayse.]

As I struggled to explain to my friends, however, while describing White’s work, the visual effect was strangely postmodern, almost tongue-in-cheek, as if her sculptures—all green marble blocks and inlaid brass—could have passed for avant-garde luxury furniture items from the 1980s (and, to be clear, I mean this in a good way: imagine scientific models masquerading as luxury goods).

[Images: Details from Six Significant Landscapes by Kylie White; photos by BLDGBLOG.]

All of which means I sort of laughed when I saw these more recent works that seem to take this postmodern aesthetic to a new height, complete with two fault models mounted atop faux-Greek columns.

[Image: “Model of an Earth Fastener on the Hierapolis Fault (Plutonion)” (2019) by Kylie White; photo courtesy Moskowitz Bayse.]

It’s like plate tectonics meets Learning From Las Vegas, by way of Greek mythology.

The columns are also a fitting reference to the pieces’ own subject matter: one, seen at the top of this post, is called “Model of an Earth Fastener on the Delphi Fault (Temple of Apollo)” and the other, immediately above, is “Model of an Earth Fastener on the Hierapolis Fault (Plutonion).” They perhaps suggest an entirely new approach to natural history museum displays—boldly gridded rooms filled with heroic blocks of the Earth’s surface, bathed in neon. Pomotectonics.

In any case, more information about the show is available at Moskowitz Bayse. It closes on March 16th, 2020, although White apparently has another, currently untitled solo show coming up in 2021.

Waller

[Image: Otherwise unrelated photo of a wall in Malta; photo by the author].

It’s a slow morning, so perhaps the laziness of linking to Wikipedia can be excused… Immurement is “a form of imprisonment, usually for life, in which a person is placed within an enclosed space with no exits.”

In folklore and myth, “immurement is prominent as a form of capital punishment, but its use as a type of human sacrifice to make buildings sturdy has many tales attached to it as well. Skeletal remains have been, from time to time, found behind walls and in hidden rooms and on several occasions have been asserted to be evidence of such sacrificial practices or of such a form of punishment.”

In terms of literature and film, an obvious example would be Edgar Allan Poe’s short story, “The Cask of Amontillado,” but there was also an absolutely God-awful horror movie a few years ago called, yes, Walled In.

The examples given by Wikipedia include a Moroccan serial killer sentenced to death in 1906 by being walled up alive, his screams audible inside the walls for two days; immurement as a tactic of military revenge; and a horrific photo of a woman immured inside a wooden crate, only her arm and head visible, left to die outside in Mongolia.

Vaguely related to this, anchorites are self-isolated religious hermits who “take a vow of stability of place, opting instead for permanent enclosure in cells often attached to churches.” While not immurement in a technical sense, becoming an anchorite was nonetheless a radical act of bodily enclosure, using architecture as an extreme kind of “stability of place,” a permanent habitation.

I suppose exile would be the opposite spatial condition, a state in which one is permanently disallowed from ever entering architecture, always locked outside. Walled out, as it were.

Graphic Inferno

[Image: From Drawings for Dante’s Inferno by Rico Lebrun, via Annex Galleries].

Artist Rico Lebrun once remarked that he was interested in “changing what is disfigured into what is transfigured,” aiming to depict “mineral and spiritual splendor.”

[Image: “Figures in Black & White” (ca. 1961) by Rico Lebrun, via Mutual Art].

Originally from Naples, Italy, where he painted murals, Lebrun brought a macabre sense of body horror to classic myths and religious illustrations. Think of him as a kind of Italian-American version of Francis Bacon.

Lebrun has been described as “one of the most unjustly neglected artists of the postwar era… Lebrun’s last major exhibition was in 1967 and it was hastily thrown together. He has never had [a] critically curated retrospective that locates his art in its time and place, and neither has he had a scholarly monograph to take the measure of his career.”

[Image: From Drawings for Dante’s Inferno by Rico Lebrun, this is “Canto XXV—Circle Eight: Bolgia of the Thieves; their penance was to be changed from humans into snakes.” Via University of Wisconsin-Milwaukee].

What I find so interesting in Lebrun’s work is how sculptural and bloated anatomical forms become worlds unto themselves, divorced from their contexts. They are humid, planetary, often trapped in monstrous pregnancies or what could pass for ritualized medical events. Lebrun depicts Hell as a place of limitless metastasis and uncontrolled mutation.

[Image: Rico Lebrun, “Untitled” (1956), via Artnet].

In other images, broken skeletons seem to emerge from the wrong skin, people lump over one another as if grafted together in molten surgery, and limbs are splayed wide, almost pornographically, in tumbled piles of flesh.

[Image: From Drawings for Dante’s Inferno by Rico Lebrun].

His most notable projects—including illustrations for Dante’s Divine Comedy, scenes of the Crucifixion, and a project focused on the Holocaust—all explored grotesque exaggerations of the human form, seeming to fuse multiple figures into one, even hybridizing animal bodies with the isolated suffering of people broken and betrayed by the world around them.

[Image: From Rico Lebrun, Paintings and Drawings of the Crucifixion].

Serpents wrap around and consume doomed humans; writhing bodies seem frozen into stone atop tombs.

“Some are bloody,” he wrote in a letter to a friend, describing his drawings for Dante’s Inferno, “and horrifying as the cantos in the Inferno are; there is no other way to depict terror as Dante describes it, without turning the whole thing into an assembly of sedately arranged figures having a picnic in a dark place…”

[Images: From Drawings for Dante’s Inferno by Rico Lebrun].

According to translator John Ciardi, “it is only Rico Lebrun who succeeds in giving me a graphic Inferno… Hell is not a Gothic cave, nor is it a festival of dance rhythms, nor is it a series of monkish miniatures. It is a concept.”

Lebrun died in 1964.

The Ghost of Cognition Past, or Thinking Like An Algorithm

[Image: Wiring the ENIAC; via Wired]

One of many things I love about writing—that is, engaging in writing as an activity—is how it facilitates a discovery of connections between otherwise unrelated things. Writing reveals and even relies upon analogies, metaphors, and unexpected similarities: there is resonance between a story in the news and a medieval European folktale, say, or between a photo taken in a war-wrecked city and an 18th-century landscape painting. These sorts of relations might remain dormant or unnoticed until writing brings them to the foreground: previously unconnected topics and themes begin to interact, developing meanings not present in those original subjects on their own.

Wildfires burning in the Arctic might bring to mind infernal images from Paradise Lost or even intimations of an unwritten J.G. Ballard novel, pushing a simple tale of natural disaster to new symbolic heights, something mythic and larger than the story at hand. Learning that U.S. Naval researchers on the Gulf Coast have used the marine slime of a “300-million-year old creature” to develop 21st-century body armor might conjure images from classical mythology or even from H.P. Lovecraft: Neptunian biotech wed with Cthulhoid military terror.

In other words, writing means that one thing can be crosswired or brought into contrast with another for the specific purpose of fueling further imaginative connections, new themes to be pulled apart and lengthened, teased out to form plots, characters, and scenes.

In addition, a writer of fiction might stage an otherwise straightforward storyline in an unexpected setting, in order to reveal something new about both. It’s a hard-boiled detective thriller—set on an international space station. It’s a heist film—set at the bottom of the sea. It’s a procedural missing-person mystery—set on a remote military base in Afghanistan.

Thinking like a writer would mean asking why things have happened in this way and not another—in this place and not another—and to see what happens when you begin to switch things around. It’s about strategic recombination.

I mention all this after reading a new essay by artist and critic James Bridle about algorithmic content generation as seen in children’s videos on YouTube. The piece is worth reading for yourself, but I wanted to highlight a few things here.

[Image: Wiring the ENIAC; via Wired]

In brief, the essay suggests that an increasingly odd, even nonsensical subcategory of children’s video is emerging on YouTube. The content of these videos, Bridle writes, comes from what he calls “keyword/hashtag association.” That is, popular keyword searches have become a stimulus for producing new videos whose content is reverse-engineered from those searches.

To use an entirely fictional example of what this means, let’s imagine that, following a popular Saturday Night Live sketch, millions of people begin Googling “Pokémon Go Ewan McGregor.” In the emerging YouTube media ecology that Bridle documents, someone with an entrepreneurial spirit would immediately make a Pokémon Go video featuring Ewan McGregor both to satisfy this peculiar cultural urge and to profit from the anticipated traffic.

Content-generation through keyword mixing is “a whole dark art unto itself,” Bridle suggests. As a particular keyword or hashtag begins to trend, “content producers pile onto it, creating thousands and thousands more of these videos in every possible iteration.” Imagine Ewan McGregor playing Pokémon Go, forever.

What’s unusual here, however, and what Bridle specifically highlights in his essay, is that this creative process is becoming automated: machine-learning algorithms are taking note of trending keyword searches or popular hashtag combinations, then recommending the production of content to match those otherwise arbitrary sets. For Bridle, the results verge on the incomprehensible—less Big Data, say, than Big Dada.

This is by no means new. Recall the origin of House of Cards on Netflix. Netflix learned from its massive trove of consumer data that its customers liked, among other things, David Fincher films, political thrillers, and the actor Kevin Spacey. As David Carr explained for the New York Times back in 2013, this suggested the outline of a possible series: “With those three circles of interest, Netflix was able to find a Venn diagram intersection that suggested that buying the series would be a very good bet on original programming.”

In other words, House of Cards was produced because it matched a data set, an example of “keyword/hashtag association” becoming video.
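That “Venn diagram intersection” is, at bottom, just a set intersection over viewer-preference data. A toy sketch of the idea, using entirely invented viewer IDs rather than anything resembling Netflix’s actual pipeline, might look like this:

```python
# Toy sketch of the "Venn diagram intersection" described above,
# using invented viewer IDs; not Netflix's actual system.
fincher_fans = {"u1", "u2", "u3", "u5"}
political_thriller_fans = {"u2", "u3", "u4", "u5"}
spacey_fans = {"u2", "u5", "u6"}

# Viewers sitting inside all three circles of interest at once:
overlap = fincher_fans & political_thriller_fans & spacey_fans
print(sorted(overlap))  # ['u2', 'u5']
```

The size of that overlap, scaled up to millions of subscribers, is what made the bet look safe.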

The question here would be: what if, instead of a human producer, a machine-learning algorithm had been tasked with analyzing Netflix consumer data and generating an idea for a new TV show? What if that recommendation algorithm didn’t quite understand which combinations would be good or worth watching? It’s not hard to imagine an unwatchably surreal, even uncanny television show resulting from this, something that seems to make more sense as a data-collection exercise than as a coherent plot—yet Bridle suggests that this is exactly what’s happening in the world of children’s videos online.

[Image: From Metropolis].

In some of these videos, Bridle explains, keyword-based programming might mean something as basic as altering a few words in a script, then having actors playfully act out those new scenarios. Actors might incorporate new toys, new types of candy, or even a particular child’s name: “Matt” on a “donkey” at “the zoo” becomes “Matt” on a “horse” at “the zoo” becomes “Carla” on a “horse” at “home.” Each variant keyword combination then results in its own short video, and each of these videos can be monetized. Future such recombinations are infinite.
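The recombination logic described above amounts to little more than a Cartesian product over keyword slots. A minimal sketch (the names, animals, and locations here are invented examples, echoing the "Matt"/"Carla" variants, not any real channel's keyword lists):

```python
from itertools import product

# Hypothetical keyword slots; each new term multiplies the output.
names = ["Matt", "Carla"]
animals = ["donkey", "horse"]
places = ["the zoo", "home"]

# Every combination becomes the seed of its own monetizable video.
titles = [f"{name} on a {animal} at {place}"
          for name, animal, place in product(names, animals, places)]

print(len(titles))  # 2 x 2 x 2 = 8 videos from three tiny lists
print(titles[0])    # Matt on a donkey at the zoo
```

Add a fourth slot with ten entries and the output jumps to eighty videos; the combinatorics, not any individual script, is what makes the stream effectively endless.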

In an age of easily produced digital animations, Bridle adds, these sorts of keyword micro-variants can be produced both extremely quickly and very nearly automatically. Some YouTube producers have even eliminated “human actors” altogether, he writes, “to create infinite reconfigurable versions of the same videos over and over again. What is occurring here is clearly automated. Stock animations, audio tracks, and lists of keywords being assembled in their thousands to produce an endless stream of videos.”

Bridle notes with worry that it is nearly impossible here “to parse out the gap between human and machine.”

Going further, he suggests that the automated production of new videos based on popular search terms has resulted in scenes so troubling that children should not be exposed to them—but, interestingly, Bridle’s reaction here seems to be based on those videos’ content. That is, the videos feature animated characters appearing without heads, or kids being buried alive in sandboxes, or even the painful sounds of babies crying.

What I find unsettling here, however, is slightly different. The content, in my opinion, is simply strange: a kind of low-rent surrealism for kids, David Lynch-lite for toddlers. For thousands of years, Western folktales have featured cannibals, incest, haunted houses, even John Carpenter-like biological transformations, from woman to tree, or from man to pig and back again. Children burn to death on chariots in the sky; sons fall from atmospheric heights into the sea. These myths seem more nightmarish—on the level of content—than some of Bridle’s chosen YouTube videos.

Instead, I would argue, what’s disturbing here is what the content suggests about how things should be connected. The real risk would seem to be that children exposed to recommendation algorithms at an early age might begin to emulate them cognitively, learning how to think, reason, and associate based on inhuman leaps of machine logic.

Bridle’s inability “to parse out the gap between human and machine” might soon apply not just to these sorts of YouTube videos but to the children who grew up watching them.

[Image: Replicants in Blade Runner].

One of my favorite scenes in Umberto Eco’s novel Foucault’s Pendulum is when a character named Jacopo Belbo describes different types of people. Everyone in the world, Belbo suggests, is one of only four types: there are “cretins, fools, morons, and lunatics.”

In the context of the present discussion, it is interesting to note that these categories are defined by modes of reasoning. For example, “Fools don’t claim that cats bark,” Belbo explains, “but they talk about cats when everyone else is talking about dogs.” They get their references wrong.

It is Eco’s “lunatic,” however, who offers a particularly interesting character type for us to consider: the lunatic, we read, is “a moron who doesn’t know the ropes. The moron proves his [own] thesis; he has a logic, however twisted it may be. The lunatic, on the other hand, doesn’t concern himself at all with logic; he works by short circuits. For him, everything proves everything else. The lunatic is all idée fixe, and whatever he comes across confirms his lunacy. You can tell him by the liberties he takes with common sense, by his flashes of inspiration…”

It might soon be time to suggest a fifth category, something beyond the lunatic, where thinking like an algorithm becomes its own strange form of reasoning, an alien logic gradually accepted as human over two or three generations to come.

Assuming I have read Bridle’s essay correctly—and it is entirely possible I have not—he seems disturbed by the content of these videos. I think the more troubling aspect, however, is in how they suggest kids should think. They replace narrative reason with algorithmic recommendation, connecting events and objects in weird, illogical bursts lacking any semblance of internal coherence, where the sudden appearance of something completely irrelevant can nonetheless be explained because of its keyword-search frequency. Having a conversation with someone who thinks like this—who “thinks” like this—would be utterly alien, if not logically impossible.

So, to return to this post’s beginning, one of the thrills of thinking like a writer, so to speak, is precisely in how it encourages one to bring together things that might not otherwise belong on the same page, and to work toward understanding why these apparently unrelated subjects might secretly be connected.

But what is thinking like an algorithm?

It will be interesting to see if algorithmically assembled material can still offer the sort of interpretive challenge posed by narrative writing, or if the only appropriate response to the kinds of content Bridle describes will be passive resignation, indifference, knowing that a data set somewhere produced a series of keywords and that the story before you goes no deeper than that. So you simply watch the next video. And the next. And the next.

Seismic Potential Energy

[Image: Photo by BLDGBLOG].

I got to hike with my friend Wayne last week through a place called the Devil’s Punchbowl, initially by way of an out-and-back trail to a very Caspar David Friedrich-ian overlook called the Devil’s Chair.

[Image: Wayne, Rückenfigur; photo by BLDGBLOG].

The Punchbowl more or less lies astride the San Andreas Fault, and the Devil’s Chair, in particular, surveils this violently serrated landscape, like gazing out across exposed rows of jagged teeth—terra dentata—or perhaps the angled waves of a frozen Hokusai painting. The entire place seems charged with the seismic potential energy of an impending earthquake.

[Image: It is difficult to get a sense of scale from this image, but this geological feature alone is at least 100 feet in height, and it is only one of hundreds; photo by BLDGBLOG].

The rocks themselves are enormous, splintered and looming sometimes hundreds of feet over your head, and in the heat-haze they almost seem buoyant, subtly bobbing up and down with your footsteps like the tips of drifting icebergs.

[Image: Looking out at the Devil’s Chair; photo by BLDGBLOG].

In fact, we spent the better part of an hour wondering aloud how geologists could someday cause massive underground rock formations such as these to rise to the surface of the Earth, like shipwrecks pulled from the bottom of the sea. Rather than go to the minerals, in other words, geologists could simply bring the minerals to them.

[Image: Photo by BLDGBLOG].

Because of the angles of the rocks, however, it’s remarkably easy to hike out amidst them, into open, valley-like groins that have been produced by tens of thousands of years’ worth of rainfall and erosion; once there, you can just scramble up the sides, skirting past serpentine pores and small caves that seem like perfect resting spaces for snakes, till you reach sheer drop-offs at the top.

There, views open up of more and more—and more—of these same tilted rocks, leading on along the fault, marking the dividing line between continental plates and tempting even the most exhausted hiker further into the landscape. The problem with these sorts of cresting views is that they become addictive.

[Image: Wayne, panoramically doubled; photo by BLDGBLOG].

At the end of the day, we swung by the monastic community at St. Andrew’s Abbey, which is located essentially in the middle of the San Andreas Fault. Those of you who have read David Ulin’s book The Myth of Solid Ground will recall the strange relationship Ulin explores connecting superstition, faith, folk science, and popular seismology amongst people living in an earthquake zone.

Even more specifically, you might recall a man Ulin mentions who once claimed that, hidden “in the pattern of the L.A. freeway system, there is an apparition of a dove whose presence serves to restrain ‘the forces of the San Andreas fault’.”

This is scientifically cringeworthy, to be sure, but it is nonetheless interesting in revealing how contemporary infrastructure can become wrapped up in emergent mythologies of how the world (supposedly) works.

The idea, then, of a rogue seismic abbey quietly established in a remote mountainous region of California “to restrain ‘the forces of the San Andreas Fault’”—which, to be clear, is not the professed purpose of St. Andrew’s Abbey—is an idea worth exploring in more detail, in another medium. Imagine monks, praying every night to keep the rocks below them still, titanic geological forces lulled into a state of quiescent slumber.

[Image: Vasquez Rocks at sunset; photo by BLDGBLOG].

In fact, I lied: at the actual end of the day, Wayne and I split up and I drove back to Los Angeles alone by way of a sunset hike at Vasquez Rocks, a place familiar to Star Trek fans, where rock formations nearly identical to—but also less impressive than—the Devil’s Punchbowl breach the surface of the Earth like dorsal fins. The views, as you’d expect, were spectacular.

Both parks—not to mention St. Andrew’s Abbey—are within easy driving distance of Los Angeles, and both are worth a visit.

Boundary Stones and Capital Magic

[Image: “Chart showing the original boundary milestones of the District of Columbia,” U.S. Library of Congress].

Washington D.C. is surrounded by a diamond of “boundary stones,” Tim St. Onge writes for the Library of Congress blog, Worlds Revealed.

“The oldest set of federally placed monuments in the United States are strewn along busy streets, hidden in dense forests, lying unassumingly in residential front yards and church parking lots,” he explains. “Many are fortified by small iron fences, and one resides in the sea wall of a Potomac River lighthouse. Lining the current and former boundaries of Washington, D.C., these are the boundary stones of our nation’s capital.”

[Image: “District of Columbia boundary stone,” U.S. Library of Congress].

Nearly all of them—36 out of 40—can still be found today, although they are not necessarily easy to identify. “Some stones legibly maintain their original inscriptions marking the ‘Jurisdiction of the United States,’ while others have been severely eroded or sunk into the ground so as to now resemble ordinary, naturally-occurring stones.” They have been hit by cars and obscured by poison ivy.

The question of who owns the stones—and thus has responsibility for preserving them—is complex, as the Washington Post pointed out back in 2014. “Those that sit on the D.C./Maryland line were deemed the property of the D.C. Department of Transportation. ‘But on the Virginia side, if you own the land, you own the stone,’ [Stephen Powers of boundarystones.org] says.”

[Image: Mapping the stones, via boundarystones.org].

Novelist Jeremy Bushnell joked on Twitter that, “if anyone knows the incantations that correctly activate these, now would be a good time to utter them,” and, indeed, there is something vaguely magical—in a Nicolas Cage sort of way—in this vision of the nation’s capital encaged by a protective geometry of aging obelisks. Whether “activating” them would have beneficial or nefarious ends, I suppose, is something that remains to be seen.

Of course, Boston also has its boundary stones, and the “original city limits” of Los Angeles apparently have a somewhat anticlimactic little marker that you can find driven into the concrete, as well.

Read much, much more over at Worlds Revealed and boundarystones.org.

(Related: Working the Line. Thanks to Nicola Twilley for the tip!)

The Coming Amnesia

[Image: Galaxy M101; full image credits].

In a talk delivered in Amsterdam a few years ago, science fiction writer Alastair Reynolds outlined an unnerving future scenario for the universe, something he had also recently used as the premise of a short story (collected here).

As the universe expands over hundreds of billions of years, Reynolds explained, there will be a point, in the very far future, at which all galaxies will be so far apart that they will no longer be visible from one another.

Upon reaching that moment, it will no longer be possible to understand the universe’s history—or perhaps even that it had one—as all evidence of a broader cosmos outside of one’s own galaxy will have forever disappeared. Cosmology itself will be impossible.

In such a radically expanded future universe, Reynolds continued, some of the most basic insights offered by today’s astronomy will be unavailable. After all, he points out, “you can’t measure the redshift of galaxies if you can’t see galaxies. And if you can’t see galaxies, how do you even know that the universe is expanding? How would you ever determine that the universe had had an origin?”

There would be no reason to theorize that other galaxies had ever existed in the first place. The universe, in effect, will have disappeared over its own horizon, into a state of irreversible amnesia.

[Image: The Tarantula Nebula, photographed by the Hubble Space Telescope, via the New York Times].

It was an interesting talk that I had the pleasure to catch in person, and, for those interested, it includes Reynolds’s explanation of how he shaped this idea into a short story.

More to the point, however, Reynolds was originally inspired by an article published in Scientific American back in 2008 called “The End of Cosmology?” by Lawrence M. Krauss and Robert J. Scherrer.

That article’s sub-head suggests what’s at stake: “An accelerating universe,” we read, “wipes out traces of its own origins.”

[Image: A “Wolf–Rayet star… in the constellation of Carina (The Keel),” photographed by the Hubble Space Telescope].

As Krauss and Scherrer point out in their provocative essay, “We may be living in the only epoch in the history of the universe when scientists can achieve an accurate understanding of the true nature of the universe.”

“What will the scientists of the future see as they peer into the skies 100 billion years from now?” they ask. “Without telescopes, they will see pretty much what we see today: the stars of our galaxy… The big difference will occur when these future scientists build telescopes capable of detecting galaxies outside our own. They won’t see any! The nearby galaxies will have merged with the Milky Way to form one large galaxy, and essentially all the other galaxies will be long gone, having escaped beyond the event horizon.”

This won’t only mean fewer luminous objects to see in space; it will mean that, “as a result, Hubble’s crucial discovery of the expanding universe will become irreproducible.”
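The measurement that would become irreproducible is a simple linear relation: Hubble’s law, v = H0 · d, in which a galaxy’s recession velocity is proportional to its distance. A minimal sketch of the calculation (the Hubble constant value is approximate, and the example distance is invented for illustration):

```python
# Hubble's law: recession velocity is proportional to distance.
H0 = 70.0  # Hubble constant, km/s per megaparsec (approximate value)

def recession_velocity(distance_mpc: float) -> float:
    """Velocity (km/s) at which a galaxy recedes, per v = H0 * d."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at roughly 7,000 km/s --
# but the relation can only be plotted if there are
# other galaxies left to measure.
print(recession_velocity(100.0))  # 7000.0
```

With no external galaxies in view, there are no data points at all, and the line, along with the expansion it implies, simply cannot be drawn.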

[Image: The “interacting galaxies” of Arp 273, photographed by the Hubble Space Telescope, via the New York Times].

The authors go on to explain that even the chemical composition of this future universe will no longer allow for its history to be deduced, including the Big Bang.

“Astronomers and physicists who develop an understanding of nuclear physics,” they write, “will correctly conclude that stars burn nuclear fuel. If they then conclude (incorrectly) that all the helium they observe was produced in earlier generations of stars, they will be able to place an upper limit on the age of the universe. These scientists will thus correctly infer that their galactic universe is not eternal but has a finite age. Yet the origin of the matter they observe will remain shrouded in mystery.”

In other words, essentially no observational tool available to future astronomers will lead to an accurate understanding of the universe’s origins. The authors call this an “apocalypse of knowledge.”

[Image: “The Christianized constellation St. Sylvester (a.k.a. Bootes), from the 1627 edition of Schiller’s Coelum Stellatum Christianum.” Image (and caption) from Star Maps: History, Artistry, and Cartography by Nick Kanas].

There are many interesting things here, including the somewhat existentially horrifying possibility that any intelligent creatures alive in that distant era will have no way to know what is happening to them, where things came from, even where they currently are (an empty space? a dream?), or why.

Informed cosmology will, by necessity, be replaced with religious speculation—with myths, poetry, and folklore.

[Image: 12th-century astrolabe; from Star Maps: History, Artistry, and Cartography by Nick Kanas].

It is worth asking, however briefly and with multiple grains of salt, if something similar has perhaps already occurred in the universe we think we know today—if something has not already disappeared beyond the horizon of cosmic amnesia—making even our most well-structured, observation-based theories obsolete. For example, could even the widely accepted conclusion that there was a Big Bang be just an ironic side-effect of having lost some other form of cosmic evidence that long ago slipped eternally away from view?

Remember that these future astronomers will not know anything is missing. They will merrily forge ahead with their own complicated, internally convincing new theories and tests. It is not out of the question, then, to ask if we might be in a similarly ignorant situation.

In any case, what kinds of future devices and instruments might be invented to measure or explore a cosmic scenario such as this? What explanations and narratives would such devices be trying to prove?

[Image: “Woodcut illustration depicting the 7th day of Creation, from a page of the 1493 Latin edition of Schedel’s Nuremberg Chronicle. Note the Aristotelian cosmological system that was used in the Middle Ages, below, with God and His retinue of angels looking down on His creation from above.” Image (and caption) from Star Maps: History, Artistry, and Cartography by Nick Kanas].

Science writer Sarah Scoles looked at this same dilemma last year for PBS, interviewing astronomer Avi Loeb.

Scoles was able to find a small glimmer of light in this infinite future darkness, however: Loeb believes that there might actually be a way out of this universal amnesia.

“The center of our galaxy keeps ejecting stars at high enough speeds that they can exit the galaxy,” Loeb says. The intense and dynamic gravity near the galaxy’s central black hole ejects them into space, where they will glide away forever like radiating rocket ships. The same thing should happen a trillion years from now.

“These stars that leave the galaxy will be carried away by the same cosmic acceleration,” Loeb says. Future astronomers can monitor them as they depart. They will see stars leave, become alone in extragalactic space, and begin rushing faster and faster toward nothingness. It would look like magic. But if those future people dig into that strangeness, they will catch a glimpse of the true nature of the universe.

There might yet be hope for cosmological discovery, in other words, encoded in the trajectories of these bizarre, fleeing stars.

[Images: (top) “An illustration of the Aristotelian/Ptolemaic cosmological system that was used in the Middle Ages, from the 1579 edition of Piccolomini’s De la Sfera del Mondo.” (bottom) “An illustration (influenced by Peurbach’s Theoricae Planetarum Novae) explaining the retrograde motion of an outer planet in the sky, from the 1647 Leiden edition of Sacrobosco’s De Sphaera.” Images and captions from Star Maps: History, Artistry, and Cartography by Nick Kanas].

There are at least two reasons why I have been thinking about this today. One was the publication of an article by Dennis Overbye earlier this week about the rate of the universe’s expansion.

“There is a crisis brewing in the cosmos,” Overbye writes, “or perhaps in the community of cosmologists. The universe seems to be expanding too fast, some astronomers say.”

Indeed, the universe might be more “virulent and controversial” than currently believed, he explains, caught up in the long process of simply tearing itself apart.

[Image: A “starburst galaxy” photographed by the Hubble Space Telescope].

One implication of this finding, Overbye adds, “is that the most popular version of dark energy—known as the cosmological constant, invented by Einstein 100 years ago and then rejected as a blunder—might have to be replaced in the cosmological model by a more virulent and controversial form known as phantom energy, which could cause the universe to eventually expand so fast that even atoms would be torn apart in a Big Rip billions of years from now.”

In the process, perhaps the far-future dark ages envisioned by Krauss and Scherrer will arrive a billion or two years earlier than expected.

[Image: Engraving by Gustave Doré from The Divine Comedy by Dante Alighieri].

The second thing that made me think of this, however, was a short essay called “Dante in Orbit,” originally published in 1963, that a friend sent to me last night. It is about stars, constellations, and the possibility of determining astronomical time in The Divine Comedy.

In that paper, Frederick A. Stebbins writes that Dante “seems far removed from the space age; yet we find him concerned with problems of astronomy that had no practical importance until man went into orbit. He had occasion to deal with local time, elapsed time, and the International Date Line. His solutions appear to be correct.”

Stebbins goes on to describe “numerous astronomical references in [Dante’s] chief work, The Divine Comedy”—albeit doing so in a way that remains unconvincing. He suggests, for example, that Dante’s descriptions of constellations, sunrises, full moons, and more will allow an astute reader to measure exactly how much time was meant to have passed in his mythic story, and even that Dante himself had somehow been aware of differential, or relativistic, time differences between far-flung locations. (Recall, on the other hand, that Dante’s work has been discussed elsewhere for its possible insights into physics.)

[Image: Diagrams from “Dante in Orbit” (1963) by Frederick A. Stebbins].

But what’s interesting about this is not whether or not Stebbins was correct in his conclusions. What’s interesting is the very idea that a medieval cosmology might have been soft-wired, so to speak, into Dante’s poetic universe and that the stars and constellations he referred to would have had clear narrative significance for contemporary readers. It was part of their era’s shared understanding of how the world was structured.

Now, though, imagine some new Dante of a hundred billion years from now—some new Divine Comedy published in a trillion years—and how it might come to grips with the universal isolation and darkness of Krauss and Scherrer. What cycles of time might be perceived in the lonely, shining bulk of the Milky Way, a dying glow with no neighbor; what shared folklore about the growing darkness might be communicated to readers who don’t know, who cannot know, how incorrect their model of the cosmos truly is?

(Thanks to Wayne Chambliss for the Dante paper).