Fractalize Me

The genes that cause Romanesco, a kind of cauliflower, to grow in a fractal pattern have been identified. Researchers were subsequently able to manipulate one of those genes and get it to function inside another plant—thale cress—producing fractal blooms.

The language used to describe this is interesting in its own right—a vocabulary of memory, transience, perturbation, and abandoned flowering.

In the words of the researchers’ abstract, “we found that curd self-similarity arises because the meristems fail to form flowers but keep the ‘memory’ of their transient passage in a floral state. Additional mutations affecting meristem growth can induce the production of conical structures reminiscent of the conspicuous fractal Romanesco shape. This study reveals how fractal-like forms may emerge from the combination of key, defined perturbations of floral developmental programs and growth dynamics.”

It’s the fact that this gene appears to function in other plants, though, that is blowing my mind. Give this technique another ten or twenty years, and the resulting experiments—and the subsequent landscapes—seem endless, from gardens of infinitely self-similar roses and orchids to forests populated by bubbling forms of fractal pines, roiling oaks, and ivies.

Until, of course, the gene inevitably escapes, going mobile, infecting insects and animals, producing confused anatomies in fractal landscapes, like minor creatures in a Jeff VanderMeer novel, before breaching the human genome, and oracular multicephalous children are born, their bodies transitioning through monstrosities of self-reminiscence and new limbs, mythological, infinitely incomplete, cursed with endless becoming.

In any case, read more over at ScienceNews, and check out the actual paper at Science.

Geomedia, or What Lies Below

[Image: Courtesy USGS.]

I love the fact that the U.S. Geological Survey had to put out a press release explaining what some people in rural Wisconsin might see in the first few weeks of January: a government helicopter flying “in a grid pattern relatively low to the ground, hundreds of feet above the surface. A sensor that resembles a large hula-hoop will be towed beneath the helicopter,” the USGS explains—but it’s not some conspiratorial super-tool, silently flipping the results of voting machines. It’s simply measuring “tiny electromagnetic signals that can be used to map features below Earth’s surface,” including “shallow bedrock and glacial sediments” in the region.

Of course, the fictional possibilities are nevertheless intriguing: government geologists looking for something buried in the agricultural muds of eastern Wisconsin, part Michael Crichton, part Stephen King; or CIA contractors, masquerading as geologists, mapping unexplained radio signals emanating from a grid of points somewhere inland from Lake Michigan; or a rogue team of federal archaeologists searching for some Lovecraftian ruin, a lost city scraped down to its foundations by the last Ice Age, etc. etc.

In any case, the use of remote-sensing tools such as these—scanning the Earth to reveal electromagnetic, gravitational, and chemical signatures indicative of mineral deposits or, as it happens, architectural ruins—is the subject of a Graham Foundation grant I received earlier this autumn. That’s a project I will be exploring and updating over the next 10 months, combining lifelong obsessions with archaeology and ruins (specifically, in this case, the art history of how we depict destroyed works of architecture) with an interest in geophysical prospecting tools borrowed from the extraction industry.

In other words, the same remote-sensing tools that allow geological prospecting crews to locate subterranean mineral deposits are increasingly being used by archaeologists today to map underground architectural ruins. Empty fields mask otherwise invisible cities. How will these technologies change the way we define and represent architectural history?

[Image: Collage, Geoff Manaugh, for “Invisible Cities: Architecture’s Geophysical Turn,” Graham Foundation 2020/2021; based on “Forum Romano, Rome, Italy,” photochrom print, courtesy U.S. Library of Congress.]

For now, I’ll just note another recent USGS press release, this one touting the agency’s year-end “Mineral Resources Program Highlights.”

Included in the tally is the “Earth MRI” initiative—which, despite the apt medical-imaging metaphor, actually stands for the “Earth Mapping Resources Initiative.” From the USGS: “When learning more about ancient rocks buried deep beneath the surface of the Earth, it may seem surprising to use futuristic technologies flown hundreds of feet in the air, but that has been central to the USGS Earth Mapping Resources Initiative.”

[Image: A geophysical survey of northwestern Arkansas, courtesy USGS.]

What lies below, whether it is mineral or architectural, is becoming accessible to surface view through advanced technical means. These new tools often reveal that, beneath even the most featureless landscapes, immensely interesting forms and structures can be hidden. Ostensibly boring mud plains can hide the eroded roots of ancient mountain chains, just as endless fields of wheat or barley can stand atop forgotten towns or lost cities without any hint of the walls and streets beneath.

The surface of the Earth is an intermediary—it is media—between us and what it disguises.

(See also, Detection Landscapes and Lost Roads of Monticello.)

Fables of the Permanent and Insatiable

[Image: An otherwise unrelated photo of fire-fighting foam, via Wikipedia.]

There are at least two classes of materials that have always interested me: synthetic materials designed to be so resistant and indestructible that they verge on a kind of supernatural longevity, and engineered biomaterials, such as enzymes or microbes, designed to consume exactly these sorts of super-resistant materials.

There was a strangely haunting line in a recent tweet by journalist Sharon Lerner, for example: “Turns out it’s really hard to burn something that was designed to put out fires.” Lerner is specifically referring to a plant in upstate New York that was contracted to burn fire-fighting foam, a kind of industrial Ouroboros or contradiction in terms. How do you burn that which was made to resist fire?

Unsurprisingly, the plant is allegedly now surrounded by unburnt remnants of this unsuccessful incineration process, as “extremely persistent chemicals” have been found in the soil, groundwater, and bodies of nearby living creatures.

These chemicals are not literally indestructible, of course, but I am nevertheless fascinated by the almost mythic status of such materials: inhuman things that, Sorcerer’s Apprentice-like, cannot be turned off, controlled, or annihilated. In other words, we invent a hydrophobic industrial coating that resists water, only to find that, when it gets into streams and rivers and seas, it maintains this permanent separation from the water around it, never diluting, never breaking down, forming a kind of “extremely persistent” counter-ecology swirling around in the global deep.

Or we produce a new industrial adhesive so good at bonding that it cannot be separated from the things with which it has all but merged. In any other context, this would be pure metaphor, even folklore, a ghost story of possession and inseparable haunting. What if humans are actually too good at producing the permanent? What if we create something that cannot be killed or annihilated? It’s the golem myth all over again, this time set in the dust-free labs of BASF and 3M.

Coatings, metals, adhesives, composites: strange materials emerge from human laboratories that exceed any realistic human timescale, perhaps threatening to outlast geology itself. As continents melt in the heat of an expanding sun ten billion years from now, these ancient, undead materials will simply float to the top, resistant even to magma and celestial apocalypse. We will have created the supernatural, the uncannily permanent.

[Image: “Plastic-munching bacteria,” via PBS NewsHour.]

In any case, the flip side of all this, then, is synthetic materials that have been designed to consume these very things. Every once in a while, for example, it’s announced that a lab somewhere has devised a new form of plastic-eating enzyme or that someone has discovered certain worms that eat plastic. In other words, there is now in the world a creature or thing that can degrade the eerily immortal materials coming from someone else’s lab down the hall. But what are the consequences of this, the metaphoric implications? What myths do we have of the omnivorous and insatiable?

It is not hard to imagine that classic sci-fi trope of something escaping from the lab and wreaking havoc in the outside world. At first, say, cars parked outside the laboratory where this stuff was developed begin showing structural wear; radio dials fall off; plastic handles on passenger seats break or even seem to be disintegrating. Then it appears inside houses, people accidentally taking it home with them in the pleats and folds of their cotton clothing, where this engineered microbe begins to feast on plastic housings for electrical connections, children’s toys, and kitchen goods, all of which have begun to age before failing entirely.

Then supermarkets and drugstores, then airports and planes themselves. Boats and ferries. Internal medical implants, from joints to stents. This plastic-eating organism begins to shift genes and mutate, inadvertently unleashed onto a world that seems exactly built for it, with new food everywhere in sight. Forty years later, no plastic exists. A hundred years later, even the cellulose in plants is threatened. The world is being consumed entirely.

My point—such as it is—is that materials science seems to operate within two mythic extremes, pulled back and forth between two supernatural ideals: there is that which resists to the point of uncanny permanence, of eerie immortality, and there is that which consumes to the point of universal insatiability, of boundless hunger. Both of these suggest such interesting fables, creating such otherworldly things and objects in the process.

Instrumental Revelation and the Architecture of Abandoned Physics Experiments

Semi-abandoned large-scale physics experiments have always fascinated me: remote and arcane buildings designed for something other than human spatial expectations, peppered with inexplicable instruments at all scales meant to detect an invisible world that surrounds us, its dimensions otherwise impenetrable to human senses.

[Image: Photo by Yulia Grigoryants, courtesy New York Times.]

Although the experiments he visits in the book are (or, at least, were at the time of writing) still active, this is partly what made me a fan of Anil Ananthaswamy’s excellent The Edge of Physics: A Journey to Earth’s Extremes to Unlock the Secrets of the Universe, published in 2010. The book is a kind of journalistic pilgrimage to machines buried inside mines, installed atop remote mountain peaks, woven into the ground beneath European cities: sites that are incredibly evocative, religious in their belief that an unseen world is capable of revelation, but scientific in their insistence that this unveiling will be achieved through technological means.

A speculative architectural-literary hybrid I often come back to is Lebbeus Woods’s (graphically uneven but conceptually fascinating) OneFiveFour, which I’ve written about elsewhere. In it, Woods depicts an entire city designed and built as an inhabitable scientific tool. Everywhere there are “oscilloscopes, refractors, seismometers, interferometers, and other, as yet unknown instruments, measuring light, movement, force, change.” Woods describes how “tools for extending perceptivity to all scales of nature are built spontaneously, playfully, experimentally, continuously modified in home laboratories, in laboratories that are homes.”

Instead of wasting their lives tweeting about celebrity deaths, residents construct and model their own bespoke experiments, exploring seismology, astronomy, electricity, even light itself.

In any case, both Ananthaswamy’s and Woods’s books came to mind last week when reading a piece by Dennis Overbye in the New York Times about a still-active but seemingly forgotten observatory on Mt. Aragats in Armenia. There, in “a sprawling array of oddly shaped, empty buildings,” a tiny crew of scientists still works, looking for “cosmic rays: high-energy particles thrown from exploding stars, black holes and other astrophysical calamities thousands or millions of light-years away and whistling down from space.”

In the accompanying photographs, all taken by Yulia Grigoryants, we see black boxes perched atop pillars and ladders, in any other context easy to mistake for an avant-garde sculptural installation but, here, patiently awaiting “cosmic rain.” Grigoryants explores tunnels and abandoned labs, hiking around dead satellite-tracking stations in the snow, sometimes surrounded by stray dogs. Just think of the novels that could be set here.

As Overbye writes, despite advances in the design and construction of particle accelerators, such as the Large Hadron Collider at CERN—which is, in effect, a giant Lebbeus Woods project in real life—“the buildings and the instruments at Aragats remain, like ghost ships in the cosmic rain, maintained for long stretches of time by a skeleton crew of two technicians and a cook. They still wait for news that could change the universe: a quantum bullet more powerful than humans can produce, or weirder than their tentative laws can explain; trouble blowing in from the sun.”

In fact, recall another recent article, this time in the Los Angeles Times, about a doomed earthquake-prediction experiment that has come to the end of its funding. It was “a network of 115 sensors deployed along the California coast to act as ears capable of picking up these hints [that might imply a coming earthquake], called electromagnetic precursors… They could also provide a key to understanding spooky electric discharges known as ‘earthquake lights,’ which some seismologists say can burst out of the ground before and during certain seismic events.”

Like menhirs, these abandoned seismic sensors could now just stand there, silent in the landscape, awaiting a future photographer such as Grigoryants to capture their poetic ruination.

Speaking of which, click through to the New York Times to see her photos in full.

Terrestrial Oceanica

I’m grateful for two recent opportunities to publish op-eds, one for the Los Angeles Times back in May and the other just this morning in the New York Times. Both look at seismic activity and its poetic or philosophical implications, including fault lines as sites of emergence for a future world (“A fault is where futures lurk”).

They both follow on from the Wired piece about the Walker Lane, as well as this past weekend’s large earthquakes here in Southern California.

The L.A. Times op-ed specifically looks at hiking along fault lines, including the San Andreas, where, several years ago, I found myself walking alone at sunset, without cell service, surrounded by tarantulas. I was there in the midst of a “tarantula boom,” something I did not realize until I checked into a hotel room and did some Googling later that evening.

In any case, “Faults are both a promise and a threat: They are proof that the world will remake itself, always, whether we’re prepared for the change or not.”

The New York Times piece explores the philosophical underpinnings of architecture, for which solid ground is both conceptually and literally foundational.

The experience of an earthquake can be destabilizing, not just physically but also philosophically. The idea that the ground is solid, dependable—that we can build on it, that we can trust it to support us—undergirds nearly all human terrestrial activity, not the least of which is designing and constructing architecture… We might say that California is a marine landscape, not a terrestrial one, a slow ocean buffeted by underground waves occasionally strong enough to flatten whole cities. We do not, in fact, live on solid ground: We are mariners, rolling on the peaks and troughs of a planet we’re still learning to navigate. This is both deeply vertiginous and oddly invigorating.

To no small extent, nearly that entire piece was inspired by a comment made by Caltech seismologist Lucy Jones, who I had the pleasure of interviewing several years ago during a Fellowship at USC. At one point in our conversation, Jones emphasized to me that she is a seismologist, not a geologist, which means that she studies “waves, not rocks.” Waves, not rocks. There is a whole new way of looking at the Earth hidden inside that comment.

Huge thanks, meanwhile, to Sue Horton and Clay Risen for inviting me to contribute.

(Images: (top) Hiking at the San Andreas-adjacent Devil’s Punchbowl, like a frozen wave emerging from dry land. (bottom) A tarantula walks beside me at sunset along the San Andreas Fault near Wallace Creek, October 2014; photos by BLDGBLOG.)

After the Clouds

[Image: A cloudless day in the Alabama Hills of California; photo by BLDGBLOG].

According to a runaway greenhouse scenario modeled by scientists at Caltech, the Earth could feasibly lose all of its clouds.

“Clouds currently cover about two-thirds of the planet at any moment,” Natalie Wolchover writes for Quanta. “But computer simulations of clouds have begun to suggest that as the Earth warms, clouds become scarcer. With fewer white surfaces reflecting sunlight back to space, the Earth gets even warmer, leading to more cloud loss. This feedback loop causes warming to spiral out of control.”
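The loop Wolchover describes—warming thins the clouds, thinner clouds reflect less sunlight, less reflection means more warming—can be caricatured in a few lines of arithmetic. To be clear, this is a toy sketch, not the Caltech simulation; every number in it is invented for illustration:

```python
# Toy illustration of the cloud-albedo feedback loop described above.
# All parameters are invented; this is not the Caltech model.

def step(temp, cloud_cover, sensitivity=0.5, cloud_loss_rate=0.1):
    """One iteration: warming reduces cloud cover, which reduces
    reflected sunlight, which causes further warming."""
    cloud_cover = max(0.0, cloud_cover - cloud_loss_rate * temp)
    # Less cloud cover -> less sunlight reflected -> more warming.
    temp = temp + sensitivity * (0.67 - cloud_cover)
    return temp, cloud_cover

temp, clouds = 1.0, 0.67  # start: +1 degree anomaly, two-thirds cloud cover
history = []
for _ in range(10):
    temp, clouds = step(temp, clouds)
    history.append((round(temp, 2), round(clouds, 2)))
# Within a few iterations, cloud cover collapses to zero and the
# temperature anomaly keeps climbing: warming "spirals out of control."
```

The point of the sketch is only the shape of the curve—each pass through the loop removes clouds, and each lost cloud accelerates the next pass.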

Or, she warns, as if channeling J. G. Ballard’s novel The Drowned World, “think of crocodiles swimming in the Arctic.”

Nature Machine

[Image: Illustration by Benjamin Marra for the New York Times Magazine].

As part of a package of shorter articles in the New York Times Magazine exploring the future implications of self-driving vehicles—how they will affect urban design, popular culture, and even illegal drug activity—writer Malia Wollan focuses on “the end of roadkill.”

Her premise is fascinating. Wollan suggests that the precision driving enabled by self-driving vehicle technology could put an end to vehicular wildlife fatalities. Bears, deer, raccoons, panthers, squirrels—even stray pets—might all remain safe from our weapons-on-wheels. In the process, self-driving cars would become an unexpected ally for wildlife preservation efforts, with animal life potentially experiencing dramatic rebounds along rural and suburban roads. This will be both good and bad. One possible outcome sounds like a tragicomic Coen Brothers film about apocalyptic animal warfare in the American suburbs:

Every year in the United States, there are an estimated 1.5 million deer-vehicle crashes. If self-driving cars manage to give deer safe passage, the fast-reproducing species would quickly grow beyond the ability of the vegetation to sustain them. “You’d get a lot of starvation and mass die-offs,” says Daniel J. Smith, a conservation biologist at the University of Central Florida who has been studying road ecology for nearly three decades… “There will be deer in people’s yards, and there will be snipers in towns killing them,” [wildlife researcher Patricia Cramer] says.

While these are already interesting points, Wollan explains that, for this to come to pass, we will need to do something very strange. We will need to teach self-driving cars how to recognize nature.

“Just how deferential [autonomous vehicles] are toward wildlife will depend on human choices and ingenuity. For now,” she adds, “the heterogeneity and unpredictability of nature tends to confound the algorithms. In Australia, hopping kangaroos jumbled a self-driving Volvo’s ability to measure distance. In Boston, autonomous-vehicle sensors identified a flock of sea gulls as a single form rather than a collection of individual birds. Still, even the tiniest creatures could benefit. ‘The car could know: “O.K., this is a hot spot for frogs. It’s spring. It’s been raining. All the frogs will be moving across the road to find a mate,”’ Smith says. The vehicles could reroute to avoid flattening amphibians on that critical day.”
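The rule Smith sketches in that quote—hot spot, plus season, plus recent rain—is almost pseudocode already. Here is a minimal, purely hypothetical version of how a route planner might encode it; the data structure and field names are my own invention, not anything from an actual autonomous-vehicle stack:

```python
# A minimal sketch of the kind of rule quoted above: avoid a road
# segment when it is a known amphibian hot spot under spring-rain
# conditions. All field names here are hypothetical.

def should_reroute(segment, season, rained_recently):
    """Return True if the planner should avoid this road segment."""
    return segment.get("frog_hotspot", False) and season == "spring" and rained_recently

route = [
    {"id": "A", "frog_hotspot": False},
    {"id": "B", "frog_hotspot": True},   # flagged from prior wildlife surveys
    {"id": "C", "frog_hotspot": False},
]

avoided = [s["id"] for s in route if should_reroute(s, "spring", rained_recently=True)]
# avoided == ["B"]
```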

One might imagine that, seen through the metaphoric eyes of a car’s LiDAR array, all those hopping kangaroos appeared to be a single super-body, a unified, moving wave of flesh that would have appeared monstrous, lumpy, even grotesque. Machine horror.

What interests me here is that, in Wollan’s formulation, “nature” is that which remains heterogeneous and unpredictable—that which remains resistant to traditional representation and modeling—yet this is exactly what self-driving car algorithms will have to contend with, and what they will need to recognize and correct for, if we want them to avoid colliding with a nonhuman species.

In particular, I love Wollan’s use of the word “deferential.” The idea of cars acting with deference to the natural world, or to nonhuman species in general, opens up a whole other philosophical conversation. For example, what is the difference between deference and reverence, and how we might teach our fellow human beings, let alone our machines, to defer to, even to revere, the natural world? Put another way, what does it mean for a machine to “encounter” the wild?

Briefly, Wollan’s piece reminded me of Robert Macfarlane’s excellent book The Wild Places for a number of reasons. Recall that book’s central premise: the idea that wilderness is always closer than it appears. Roadside weeds, overgrown lots, urban hikes, peripheral species, the ground beneath your feet, even the walls of the house around you: these all constitute “wilderness” at a variety of scales, if only we could learn to recognize them as such. Will self-driving cars spot “nature” or “wilderness” in sites where humans aren’t conceptually prepared to see it?

The challenge of teaching a car how to recognize nature thus takes on massive and thrilling complexity here, all wrapped up in the apparently simple goal of ending roadkill. It’s about where machines end and animals begin—or perhaps how technology might begin before the end of wilderness.

In any case, Wollan’s short piece is worth reading in full—and don’t miss a much earlier feature she wrote on the subject of roadkill for the New York Times back in 2010.

Hard Drives, Not Telescopes

[Image: Via @CrookedCosmos].

More or less following on from the previous post, @CrookedCosmos is a Twitter bot programmed by Zach Whalen, based on an idea by Adam Ferriss, that digitally manipulates astronomical photography.

It describes itself as “pixel sorting the cosmos”: skipping image by image through the heavens and leaving behind its own idiosyncratic scratches, context-aware blurs, stutters, and displacements.
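In case the term is unfamiliar: “pixel sorting” usually means sorting contiguous runs of pixels—by brightness, hue, or some other value—within the rows or columns of an image, producing the smears and stutters familiar from glitch art. A minimal sketch of the idea, operating on a made-up grid of grayscale values rather than real astronomical imagery:

```python
# "Pixel sorting," in its simplest form: sort each contiguous run of
# bright pixels within a row, leaving darker pixels as run boundaries.
# The grid of grayscale values (0-255) below is made up for illustration.

def pixel_sort_row(row, threshold=128):
    """Sort each contiguous run of pixels brighter than the threshold."""
    out, run = [], []
    for px in row:
        if px >= threshold:
            run.append(px)          # accumulate the bright run
        else:
            out.extend(sorted(run)) # flush the run, sorted
            run = []
            out.append(px)          # dark pixel passes through unchanged
    out.extend(sorted(run))         # flush any trailing run
    return out

image = [
    [10, 200, 150, 180, 20],
    [255, 130, 40, 160, 140],
]
sorted_image = [pixel_sort_row(row) for row in image]
# First row: the bright run [200, 150, 180] becomes [150, 180, 200].
```

Whalen’s actual bot is more sophisticated than this, presumably, but the core gesture—reordering pixels by value within local runs—is the same.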

[Image: Via @CrookedCosmos].

While the results are frequently quite gorgeous, suggesting some sort of strange, machine-filtered view of the cosmos, the irony is that, in many ways, @CrookedCosmos is simply returning to an earlier state in the data.

After all, so-called “images” of exotic celestial phenomena often come to Earth not in the form of polished, full-color imagery, ready for framing, but as low-res numerical sets that require often quite drastic cosmetic manipulation. Only then, after extensive processing, do they become legible—or, we might say, art-historically recognizable as “photography.”

Consider, for example, what the data really look like when astronomers discover an exoplanet: an almost Cubist level of abstraction, constructed from rough areas of light and shadow, has to be dramatically cleaned up to yield any evidence that a “planet” might really be depicted. Prior to that act of visual interpretation, these alien worlds “only show up in data as tiny blips.”
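As a toy version of those “tiny blips”: the transit method looks for small, periodic dips in a star’s measured brightness as a planet crosses its face. The light curve and detection threshold below are fabricated for illustration:

```python
# A fabricated light curve: relative brightness of a star over time.
# The shallow dip around indices 4-6 plays the role of a planetary
# transit; everything else is noise around the baseline.

baseline = 1.0
light_curve = [1.0, 1.001, 0.999, 1.0, 0.985, 0.984, 0.986, 1.0, 1.001, 0.999]

def find_dips(curve, depth=0.01):
    """Indices where brightness drops more than `depth` below baseline."""
    return [i for i, flux in enumerate(curve) if baseline - flux > depth]

transit_points = find_dips(light_curve)
# transit_points == [4, 5, 6]: the "tiny blip" a pipeline would flag.
```

Real pipelines do vastly more—detrending, period-folding, vetting against false positives—but the raw material really is this numerical: columns of flux values, not pictures.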

In fact, it seems somewhat justifiable to say that exoplanets are not discovered by astronomers at all; they are discovered by computer scientists peering deep into data, not into space.

[Image: Via @CrookedCosmos].

Deliberately or not, then, @CrookedCosmos seems to take us back one step, to when the data are still incompletely sorted. In producing artistically manipulated images, it implies a more accurate glimpse of how machines truly see.

(Spotted via Martin Isaac. Earlier on BLDGBLOG: “We don’t have an algorithm for this.”)

Terrain Jam

[Image: “arid wilderness areas” from @witheringsystem].

I’ve long been a fan of generative landscapes—topographies created according to some sort of underlying algorithmic code—and I’m thus always happy to stumble upon new, visually striking examples.

Of course, geology itself is already “generative,” as entire continents are formed and evolve over hundreds of millions of years following deeper logics of melting, crystallization, erosion, tectonic drift, and thermal metamorphism; so digital examples of this sort of thing are just repeating in miniature something that has long been underway at a much larger scale.

In any case, @witheringsystem is a joint project between Katie Rose Pipkin and Loren Schmidt, the same artists behind the widely known “moth generator” and last year’s “Fermi Paradox Jam,” among other collaborations. It is not exactly new, but it’s been tweeting some great shots lately from an algorithmic world of cuboid terrains; the image seen here depicts “arid wilderness areas,” offered without further context.

See several more examples over on their Twitter feed.
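For the curious, one of the oldest algorithms for this kind of generative terrain is midpoint displacement, which builds a fractal-looking profile by recursively perturbing midpoints with ever-smaller random offsets. A minimal one-dimensional sketch—not, to be clear, the artists’ actual method:

```python
# Midpoint displacement: a classic generative-terrain algorithm.
# Recursively split a line segment, nudging each midpoint by a random
# amount that shrinks with scale, yielding a fractal height profile.

import random

def midpoint_displacement(left, right, depth, roughness=1.0, rng=random.random):
    """Build a terrain profile between two endpoint heights."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + (rng() - 0.5) * roughness
    # Halve the roughness each level so detail shrinks with scale.
    left_half = midpoint_displacement(left, mid, depth - 1, roughness / 2, rng)
    right_half = midpoint_displacement(mid, right, depth - 1, roughness / 2, rng)
    return left_half[:-1] + right_half  # avoid duplicating the midpoint

profile = midpoint_displacement(0.0, 0.0, depth=6)
# len(profile) == 2**6 + 1 == 65 height values, ready to plot or extrude.
```

Extruded into two dimensions (the diamond-square algorithm is the standard extension), the same trick produces the blocky, mountainous worlds this kind of art plays with.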

(Spotted via Martin Isaac; earlier on BLDGBLOG: British Countryside Generator and “Sometimes the house you come out of isn’t the same one you went into.”)

The Coming Amnesia

[Image: Galaxy M101; full image credits].

In a talk delivered in Amsterdam a few years ago, science fiction writer Alastair Reynolds outlined an unnerving future scenario for the universe, something he had also recently used as the premise of a short story (collected here).

As the universe expands over hundreds of billions of years, Reynolds explained, there will be a point, in the very far future, at which all galaxies will be so far apart that they will no longer be visible from one another.

Upon reaching that moment, it will no longer be possible to understand the universe’s history—or perhaps even that it had one—as all evidence of a broader cosmos outside of one’s own galaxy will have forever disappeared. Cosmology itself will be impossible.

In such a radically expanded future universe, Reynolds continued, some of the most basic insights offered by today’s astronomy will be unavailable. After all, he points out, “you can’t measure the redshift of galaxies if you can’t see galaxies. And if you can’t see galaxies, how do you even know that the universe is expanding? How would you ever determine that the universe had had an origin?”
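For reference, the measurement Reynolds is referring to is conceptually simple: compare the observed wavelength of a known spectral line with its laboratory value. A minimal sketch, using illustrative numbers for the hydrogen-alpha line:

```python
# Redshift: how far a known spectral line has shifted toward the red.
# Wavelengths here are illustrative (hydrogen-alpha, in nanometers).

REST_H_ALPHA = 656.3      # laboratory wavelength of the H-alpha line
C = 299_792.458           # speed of light, km/s

def redshift(observed_nm, rest_nm=REST_H_ALPHA):
    """z = (observed - rest) / rest."""
    return (observed_nm - rest_nm) / rest_nm

def recession_velocity(z):
    """Low-redshift approximation: v is roughly c * z."""
    return C * z

z = redshift(662.9)        # line shifted redward by ~6.6 nm
v = recession_velocity(z)  # roughly 3,000 km/s of recession
```

It is exactly this comparison—impossible without a visible galaxy to supply the shifted line—that Hubble used to establish the expansion of the universe, and that Reynolds’s far-future astronomers would be unable to perform.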

There would be no reason to theorize that other galaxies had ever existed in the first place. The universe, in effect, will have disappeared over its own horizon, into a state of irreversible amnesia.

[Image: The Tarantula Nebula, photographed by the Hubble Space Telescope, via the New York Times].

It was an interesting talk that I had the pleasure to catch in person, and, for those interested, it includes Reynolds’s explanation of how he shaped this idea into a short story.

More to the point, however, Reynolds was originally inspired by an article published in Scientific American back in 2008 called “The End of Cosmology?” by Lawrence M. Krauss and Robert J. Scherrer.

That article’s sub-head suggests what’s at stake: “An accelerating universe,” we read, “wipes out traces of its own origins.”

[Image: A “Wolf–Rayet star… in the constellation of Carina (The Keel),” photographed by the Hubble Space Telescope].

As Krauss and Scherrer point out in their provocative essay, “We may be living in the only epoch in the history of the universe when scientists can achieve an accurate understanding of the true nature of the universe.”

“What will the scientists of the future see as they peer into the skies 100 billion years from now?” they ask. “Without telescopes, they will see pretty much what we see today: the stars of our galaxy… The big difference will occur when these future scientists build telescopes capable of detecting galaxies outside our own. They won’t see any! The nearby galaxies will have merged with the Milky Way to form one large galaxy, and essentially all the other galaxies will be long gone, having escaped beyond the event horizon.”

This won’t only mean fewer luminous objects to see in space; it will mean that, “as a result, Hubble’s crucial discovery of the expanding universe will become irreproducible.”

[Image: The “interacting galaxies” of Arp 273, photographed by the Hubble Space Telescope, via the New York Times].

The authors go on to explain that even the chemical composition of this future universe will no longer allow for its history to be deduced, including the Big Bang.

“Astronomers and physicists who develop an understanding of nuclear physics,” they write, “will correctly conclude that stars burn nuclear fuel. If they then conclude (incorrectly) that all the helium they observe was produced in earlier generations of stars, they will be able to place an upper limit on the age of the universe. These scientists will thus correctly infer that their galactic universe is not eternal but has a finite age. Yet the origin of the matter they observe will remain shrouded in mystery.”

In other words, essentially no observational tool available to future astronomers will lead to an accurate understanding of the universe’s origins. The authors call this an “apocalypse of knowledge.”

[Image: “The Christianized constellation St. Sylvester (a.k.a. Bootes), from the 1627 edition of Schiller’s Coelum Stellatum Christianum.” Image (and caption) from Star Maps: History, Artistry, and Cartography by Nick Kanas].

There are many interesting things here, including the somewhat existentially horrifying possibility that any intelligent creatures alive in that distant era will have no way to know what is happening to them, where things came from, even where they currently are (an empty space? a dream?), or why.

Informed cosmology will, by necessity, be replaced with religious speculation—with myths, poetry, and folklore.

[Image: 12th-century astrolabe; from Star Maps: History, Artistry, and Cartography by Nick Kanas].

It is worth asking, however briefly and with multiple grains of salt, if something similar has perhaps already occurred in the universe we think we know today—if something has not already disappeared beyond the horizon of cosmic amnesia—making even our most well-structured, observation-based theories obsolete. For example, could even the widely accepted conclusion that there was a Big Bang be just an ironic side-effect of having lost some other form of cosmic evidence that long ago slipped eternally away from view?

Remember that these future astronomers will not know anything is missing. They will merrily forge ahead with their own complicated, internally convincing new theories and tests. It is not out of the question, then, to ask if we might be in a similarly ignorant situation.

In any case, what kinds of future devices and instruments might be invented to measure or explore a cosmic scenario such as this? What explanations and narratives would such devices be trying to prove?

[Image: “Woodcut illustration depicting the 7th day of Creation, from a page of the 1493 Latin edition of Schedel’s Nuremberg Chronicle. Note the Aristotelian cosmological system that was used in the Middle Ages, below, with God and His retinue of angels looking down on His creation from above.” Image (and caption) from Star Maps: History, Artistry, and Cartography by Nick Kanas].

Science writer Sarah Scoles looked at this same dilemma last year for PBS, interviewing astronomer Avi Loeb.

Scoles was able to find a small glimmer of light in this infinite future darkness, however: Loeb believes that there might actually be a way out of this universal amnesia.

“The center of our galaxy keeps ejecting stars at high enough speeds that they can exit the galaxy,” Loeb says. The intense and dynamic gravity near the black hole ejects them into space, where they will glide away forever like radiating rocket ships. The same thing should happen a trillion years from now.

“These stars that leave the galaxy will be carried away by the same cosmic acceleration,” Loeb says. Future astronomers can monitor them as they depart. They will see stars leave, find themselves alone in extragalactic space, and begin rushing faster and faster toward nothingness. It would look like magic. But if those future people dig into that strangeness, they will catch a glimpse of the true nature of the universe.

There might yet be hope for cosmological discovery, in other words, encoded in the trajectories of these bizarre, fleeing stars.

[Images: (top) “An illustration of the Aristotelian/Ptolemaic cosmological system that was used in the Middle Ages, from the 1579 edition of Piccolomini’s De la Sfera del Mondo.” (bottom) “An illustration (influenced by Peurbach’s Theoricae Planetarum Novae) explaining the retrograde motion of an outer planet in the sky, from the 1647 Leiden edition of Sacrobosco’s De Sphaera.” Images and captions from Star Maps: History, Artistry, and Cartography by Nick Kanas].

There are at least two reasons why I have been thinking about this today. One was the publication of an article by Dennis Overbye earlier this week about the rate of the universe’s expansion.

“There is a crisis brewing in the cosmos,” Overbye writes, “or perhaps in the community of cosmologists. The universe seems to be expanding too fast, some astronomers say.”

Indeed, the force driving the universe’s expansion might be more “virulent and controversial” than currently believed, he explains—the cosmos caught up in the long process of simply tearing itself apart.

[Image: A “starburst galaxy” photographed by the Hubble Space Telescope].

One implication of this finding, Overbye adds, “is that the most popular version of dark energy—known as the cosmological constant, invented by Einstein 100 years ago and then rejected as a blunder—might have to be replaced in the cosmological model by a more virulent and controversial form known as phantom energy, which could cause the universe to eventually expand so fast that even atoms would be torn apart in a Big Rip billions of years from now.”

If so, perhaps the far-future dark ages envisioned by Krauss and Scherrer will arrive a billion or two years earlier than expected.

[Image: Engraving by Gustave Doré from The Divine Comedy by Dante Alighieri].

The second thing that made me think of this, however, was a short essay called “Dante in Orbit,” originally published in 1963, that a friend sent to me last night. It is about stars, constellations, and the possibility of determining astronomical time in The Divine Comedy.

In that paper, Frederick A. Stebbins writes that Dante “seems far removed from the space age; yet we find him concerned with problems of astronomy that had no practical importance until man went into orbit. He had occasion to deal with local time, elapsed time, and the International Date Line. His solutions appear to be correct.”

Stebbins goes on to describe “numerous astronomical references in [Dante’s] chief work, The Divine Comedy”—albeit in a way that remains unconvincing. He suggests, for example, that Dante’s descriptions of constellations, sunrises, full moons, and more will allow an astute reader to measure exactly how much time was meant to have passed in his mythic story, and even that Dante himself had somehow been aware of differential, or relativistic, time differences between far-flung locations. (Recall, on the other hand, that Dante’s work has been discussed elsewhere for its possible insights into physics.)

[Image: Diagrams from “Dante in Orbit” (1963) by Frederick A. Stebbins].

But what’s interesting about this is not whether or not Stebbins was correct in his conclusions. What’s interesting is the very idea that a medieval cosmology might have been soft-wired, so to speak, into Dante’s poetic universe and that the stars and constellations he referred to would have had clear narrative significance for contemporary readers. It was part of their era’s shared understanding of how the world was structured.

Now, though, imagine some new Dante of a hundred billion years from now—some new Divine Comedy published in a trillion years—and how it might come to grips with the universal isolation and darkness of Krauss and Scherrer. What cycles of time might be perceived in the lonely, shining bulk of the Milky Way, a dying glow with no neighbor; what shared folklore about the growing darkness might be communicated to readers who don’t know, who cannot know, how incorrect their model of the cosmos truly is?

(Thanks to Wayne Chambliss for the Dante paper).

From Bullets, Seeds

[Image: From the “Flower Shell” project by Studio Total].

The Department of Defense is looking to develop “biodegradable training ammunition loaded with specialized seeds to grow environmentally beneficial plants that eliminate ammunition debris and contaminants.”

As the DoD phrases it, in a new call-for-proposals, although “current training rounds require hundreds of years or more to biodegrade,” they are simply “left on the ground surface or several feet underground at the proving ground or tactical range” after use.

Worse, “some of these rounds might have the potential [to] corrode and pollute the soil and nearby water.”

The solution? From bullets to seeds. Turn those spent munitions into gardens-to-come:

The US Army Corps of Engineers’ Cold Regions Research and Engineering Laboratory (CRREL) has demonstrated bioengineered seeds that can be embedded into the biodegradable composites and that will not germinate until they have been in the ground for several months. This SBIR effort will make use of seeds to grow environmentally friendly plants that remove soil contaminants and consume the biodegradable components developed under this project. Animals should be able to consume the plants without any ill effects.

The potential for invasive species to take root and dominate the fragile, disrupted ecology of a proving ground is quite obvious—unless region-specific munitions are developed, with bullets carefully chosen to fit their ecological context, a scenario I find unlikely—but this is nonetheless a surprising, almost Land Art-like vision for the U.S. military.

Recall our earlier look at speculative mass-reforestation programs using tree bombs dropped from airplanes. This was a technique that “could plant as many as a million trees in one day,” in a state of all-out forest warfare. Here, however, a leisurely day out spent shooting targets in a field somewhere could have similar long-term landscape effects: haphazardly planted forests and gardens emerging in the scarred grounds where weapons were once fired and tested.

In fact, the resulting plants themselves could no doubt also be weaponized, chosen for their tactical properties. Consider buddleia: “buddleia grows fast and its many seeds are easily dispersed by the wind,” Laura Spinney wrote for New Scientist back in 1996. “It has powerful roots used to thin soil on rocky substrata, ideally suited to penetrating the bricks and mortar of modern buildings. In London and other urban centres it can be seen growing out of walls and eaves.”

It is also, however, slowly and relentlessly breaking apart the buildings it grows on.

Pack buddleia into your bullets, in other words, and even your spent casings will grow into city-devouring thickets, crumbling your enemy’s ruins with their roots. Think of it as a botanical variation on the apocryphal salting of Carthage.

In any case, if seed-bullets sound like something you or your company can develop, you have until February 7, 2017 to apply.

(Spotted via Adam E. Anderson).