Mechanical Magic

[Image: “Design for the Water Clock of the Peacocks,” from the Kitab fi ma’rifat al-hiyal al-handasiyya (Book of the Knowledge of Ingenious Mechanical Devices) by Badi’ al-Zaman b. al Razzaz al-Jazari, courtesy Metropolitan Museum of Art].

Although it starts in only a few hours, if you’re in the Bay Area tonight, this talk by Brittany Cox at The Interval sounds well worth attending: “Horological Heritage: Generating bird song, magic, and music through mechanism.” Cox “specializes in the conservation of automata, mechanical magic, mechanical music, and complicated clocks and watches.” The event opens at 6:30pm for a 7:30pm kick-off.

And, if magical, time-telling automata are not enough, The Interval has amazing drinks.

Alien Geology, Dreamed By Machines

[Image: Synthetic volcanoes modeled by Jeff Clune, from “Plug & Play Generative Networks,” via Nature].

Various teams of astronomers have been using “deep-learning neural networks” to generate realistic images of hypothetical stars and galaxies—but their work also implies that these same tools could work to model the surfaces of unknown planets. Alien geology as dreamed by machines.

The Square Kilometre Array in South Africa, for example, “will produce such vast amounts of data that its images will need to be compressed into low-noise but patchy data.” Compressing this data into readable imagery opens space for artificial intelligence to work: “Generative AI models will help to reconstruct and fill in blank parts of those data, producing the images of the sky that astronomers will examine.”

The results are not photographs, in other words; they are computer-generated models that are nonetheless considered scientifically valid for their potential insights into how regions of space are structured.

What interests me about this, though, is the fact that one of the scientists involved, Jeff Clune, uses these same algorithmic processes to generate believable imagery of terrestrial landscape features, such as volcanoes. These could then be used to model the topography of other planets, producing informed visual guesstimates of mountain ranges, ancient ocean basins, vast plains, valleys, even landscape features we might not yet have words to describe.

The notion that we would thus be seeing what AI thinks other worlds should look like—that, to view this in terms of art history, we are looking at the projective landscape paintings of machine intelligence—is a haunting one, as if discovering images of alien worlds in the daydreams of desktop computers.

(Spotted via Sean Lally; vaguely related, “We don’t have an algorithm for this”).

Robot War and the Future of Perceptual Deception

[Image: A diagram of the accident site, via the Florida Highway Patrol].

One of the most remarkable details of last week’s fatal collision, involving a tractor trailer and a Tesla electric car operating in self-driving mode, was the fact that the car apparently mistook the side of the truck for the sky.

As Tesla explained in a public statement following the accidental death, the car’s autopilot was unable to see “the white side of the tractor trailer against a brightly lit sky”—which is to say, it was unable to differentiate the two.

The truck was not seen as a discrete object, in other words, but as something indistinguishable from the larger spatial environment. It was more like an elision.

Examples like this are tragic, to be sure, but they are also technologically interesting, in that they give momentary glimpses of where robotic perception has failed. Hidden within this, then, are lessons not just for how vehicle designers and computer scientists alike could make sure this never happens again, but also precisely the opposite: how we could design spatial environments deliberately to deceive, misdirect, or otherwise baffle these sorts of semi-autonomous machines.

For all the talk of a “robot-readable world,” in other words, it is interesting to consider a world made deliberately illegible to robots, with materials used for throwing off 3D cameras or LiDAR, either through excess reflectivity or unexpected light-absorption.

Last summer, in a piece for New Scientist, I interviewed a robotics researcher named John Rogers, at Georgia Tech. Rogers pointed out that the perceptual needs of robots will have more and more of an effect on how architectural interiors are designed and built in the first place. Quoting that article at length:

In a detail that has implications beyond domestic healthcare, Rogers also discovered that some interiors confound robots altogether. Corridors that are lined with rubber sheeting to protect against damage from wayward robots—such as those in his lab—proved almost impossible to navigate. Why? Rubber absorbs light and prevents laser-based navigational systems from relaying spatial information back to the robot.
Mirrors and other reflective materials also threw off his robots’ ability to navigate. “It actually appeared that there was a virtual world beyond the mirror,” says Rogers. The illusion made his robots act as if there were a labyrinth of new rooms waiting to be entered and explored. When reflections from your kitchen tiles risk disrupting a robot’s navigational system, it might be time to rethink the very purpose of interior design.

I mention all this for at least two reasons.

1) It is obvious by now that the American highway system, as well as all of the vehicles that will be permitted to travel on it, will be remade as one of the first pieces of truly robot-legible public infrastructure. It will transition from being a “dumb” system of non-interactive 2D surfaces to become an immersive spatial environment filled with volumetric sign-systems meant for non-human readers. It will be rebuilt for perceptual systems other than our own.

2) Finding ways to throw off self-driving robots will be more than just a harmless prank or even a serious violation of public safety; it will become part of a much larger arsenal for self-defense during war. In other words, consider the points raised by John Rogers, above, but in a new context: you live in a city under attack by a foreign military whose use of semi-autonomous machines requires defensive means other than—or in addition to—kinetic firepower. Wheeled and aerial robots alike have been deployed.

One possible line of defense—among many, of course—would be to redesign your city, even down to the interior of your own home, such that machine vision is constantly confused there. You thus rebuild the world using light-absorbing fabrics and reflective ornament, installing projections and mirrors, screens and smoke. Or “stealth objects” and radar-baffling architectural geometries. A military robot wheeling its way into your home thus simply gets lost there, stuck in a labyrinth of perceptual convolution and reflection-implied rooms that don’t exist.

In any case, I suppose the question is: if, today, a truck can blend in with the Florida sky, and thus fatally disable a self-driving machine, what might we learn from this event in terms of how to deliberately confuse robotic military systems of the future?

We had so-called “dazzle ships” in World War I, for example, and the design of perceptually baffling military camouflage continues to undergo innovation today; but what is anti-robot architectural design, or anti-robot urban planning, and how could it be strategically deployed as a defensive tactic in war?

Machine Quarantines and “Persistent Drones”

[Image: An otherwise unrelated photo of a “Scout” UAV, via Wikipedia].

There’s an interesting short piece by David Hambling in a recent issue of New Scientist about the use of “persistent drones” to “hold territory in war zones,” effectively sealing those regions off from incursion. It is an ominous vision of what we might call automated quarantine: a cordon that is nearly impossible to cross, maintained by self-charging machines.

Pointing out the limitations of traditional air power and the tactical, as well as political, difficulties in getting “boots on the ground” in conflict zones, Hambling suggests that military powers might turn to the use of “persistent drones” that “could sit on buildings or trees and keep watch indefinitely.” Doing so “expands the potential for intervention without foot soldiers,” he adds, “but it may lessen the inhibitions that can stop military action.”

Indeed, it’s relatively easy to imagine a near-future scenario in which a sovereign or sub-sovereign power—a networked insurgent force—could attempt to claim territory using Hambling’s “persistent drones,” as if playing Go with fully armed, semi-autonomous machines. They rid the land of its human inhabitants—then watch and wait.

Whole neighborhoods of cities, disputed terrains on the borders of existing nations, National Wildlife Refuges—almost as an afterthought, in a kind of political terraforming, you could simply send in a cloud of machine-sentinels to clear and hold ground until the day, assuming it ever comes, that your actual human forces can arrive.

Greek Gods, Moles, and Robot Oceans

[Image: The Very Low Frequency antenna field at Cutler, Maine, a facility for communicating with at-sea submarine crews].

There have been about a million stories over the past few weeks that I’ve been dying to write about, but I’ll just have to clear through a bunch here in one go.

1) First up is a fascinating request for proposals from the Defense Advanced Research Projects Agency, or DARPA, which is looking to build a “Positioning System for Deep Ocean Navigation.” It has the handy acronym of POSYDON.

POSYDON will be “an undersea system that provides omnipresent, robust positioning” in the deep ocean either for crewed submarines or for autonomous seacraft. “DARPA envisions that the POSYDON program will distribute a small number of acoustic sources, analogous to GPS satellites, around an ocean basin,” but I imagine there is some room for creative maneuvering there.

The idea of an acoustic deep-sea positioning system that operates like GPS is pretty interesting to imagine, especially considering the strange transformations sound undergoes as it is transmitted through water. To establish accurately that a U.S. submarine has, in fact, heard an acoustic beacon and that its apparent distance from that point is not being distorted by intervening water temperature, ocean currents, or even the large-scale presence of marine life is obviously quite an extraordinary challenge.
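The underlying geometry is the same as GPS: known beacon positions plus measured signal travel times yield ranges, and the ranges are intersected to recover a position. A minimal sketch, in which the beacon layout, the nominal sound speed, and the two-dimensional simplification are all my own illustrative assumptions rather than anything specified by DARPA:

```python
import math

# Sketch of GPS-like acoustic positioning: seafloor beacons at known
# positions emit timestamped pings; a vehicle converts each ping's
# travel time into a range and trilaterates its own position.
# (In practice the sound speed varies with depth, temperature, and
# salinity, which is much of what makes the real problem hard.)

SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s


def locate(beacons, travel_times):
    """Trilaterate a 2-D position from three beacons and ping travel times."""
    ranges = [SOUND_SPEED * t for t in travel_times]
    (x0, y0), (x1, y1), (x2, y2) = beacons
    r0, r1, r2 = ranges
    # Subtracting the first range equation from the other two cancels the
    # quadratic terms, leaving two linear equations in (x, y).
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1  # zero if the beacons are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)


# A submarine at (4000, 3000) meters hears three beacons in a 10 km basin:
beacons = [(0.0, 0.0), (10000.0, 0.0), (0.0, 10000.0)]
true_pos = (4000.0, 3000.0)
times = [math.dist(b, true_pos) / SOUND_SPEED for b in beacons]
x, y = locate(beacons, times)
print(round(x), round(y))  # recovers 4000 3000
```

With clean travel times this recovers the position exactly; the distortions mentioned above (currents, temperature gradients, marine life) would show up here as errors in the `ranges`, which is why the real system is such a challenge.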

As DARPA points out, without such a system in place, “undersea vehicles must regularly surface to receive GPS signals and fix their position, and this presents a risk of detection.” The ultimate goal, then, would be to launch ultra-long-term undersea missions, even to establish permanently submerged robotic networks that have no need to breach the ocean’s surface. Cthulhoid, they will forever roam the deep.

[Image: An unmanned underwater vehicle; U.S. Navy photo by S. L. Standifird].

If you think you’ve got what it takes, click over to DARPA and sign up.

2) A while back, I downloaded a free academic copy of a fascinating book called Space-Time Reference Systems by Michael Soffel and Ralf Langhans.

Their book “presents an introduction to the problem of astronomical–geodetical space–time reference systems,” or radically offworld navigation reference points for when a craft is, in effect, well beyond any known or recognizable landmarks in space. Think of it as a kind of new longitude problem.

The book is filled with atomic clocks, quasars potentially repurposed as deep-space orientation beacons, the long-term shifting of “astronomical reference frames,” and page after page of complex math I make no claim to understand.

However, I mention this here because the POSYDON program is almost the becoming-cosmic of the ocean: that is, the depths of the sea reimagined as a vast and undifferentiated space within which mostly robotic craft will have to orient themselves on long missions. For a robotic submarine, the ocean is its universe.

3) The POSYDON program is just one part of a much larger militarization of the deep seas. Consider the fact that the U.S. Office of Naval Research is hoping to construct permanent “hubs” on the seafloor for recharging robot submarines.

These “hubs” would be “unmanned, underwater pods where robots can recharge undetected—and securely upload the intelligence they’ve gathered to Navy networks.” Hubs will be places where “unmanned underwater vehicles (UUVs) can dock, recharge, upload data and download new orders, and then be on their way.”

“You could keep this continuous swarm of UUVs [Unmanned Underwater Vehicles] wherever you wanted to put them… basically indefinitely, as long as you’re rotating (some) out periodically for mechanical issues,” a Naval war theorist explained to Breaking Defense.

The ultimate vision is a kind of planet-spanning robot constellation: “The era of lone-wolf submarines is giving away [sic] to underwater networks of manned subs, UUVs combined with seafloor infrastructure such as hidden missile launchers—all connected to each other and to the rest of the force on the surface of the water, in the air, in space, and on land.” This would include, for example, the “upward falling payloads” program described on BLDGBLOG a few years back.

Even better, from a military communications perspective, these hubs would also act as underwater relay points for broadcasting information through the water—or what we might call the ocean as telecommunications medium—something that currently relies on ultra-low frequency radio.

There is much more detail on this over at Breaking Defense.

4) Last summer, my wife and I took a quick trip up to Maine where we decided to follow a slight detour after hiking Mount Katahdin to drive by the huge antenna field at Cutler, a Naval communications station found way out on a tiny peninsula nearly on the border with Canada.

[Image: The antenna field at Cutler, Maine].

We talked to the security guard for a while about life out there on this little peninsula, but we were unable to get a tour of the actual facility, sadly. He mostly joked that the locals have a lot of conspiracy theories about what the towers are actually up to, including their potential health effects—which isn’t entirely surprising, to be honest, considering the massive amounts of energy used there and the frankly otherworldly profile these antennas have on the horizon—but you can find a lot of information about the facility online.

So what does this thing do? “The Navy’s very-low-frequency (VLF) station at Cutler, Maine, provides communication to the United States strategic submarine forces,” a January 1998 white paper called “Technical Report 1761” explains. It is basically an east coast version of the so-called Project Sanguine, a U.S. Navy proposal from the late 1960s that “would have involved 41 percent of Wisconsin,” turning the Cheese State into a giant military antenna.

Cutler’s role in communicating with submarines may or may not have come to an end with the Cold War, making it more of a research facility today; but the image of isolated radio technicians on a foggy peninsula in Maine broadcasting silent messages into the ocean, meant to be heard only by U.S. submarine crews pinging around in the deepest canyons of the Atlantic, is both poetic and eerie.

[Image: A diagram of the antennas, from the aforementioned January 1998 research paper].

The towers themselves are truly massive, and you can easily see them from nearby roads, if you happen to be anywhere near Cutler, Maine.

In any case, I mention all this because behemoth facilities such as these could be made altogether redundant by autonomous underwater communication hubs, such as those described by Breaking Defense.

5) “The robots are winning!” Daniel Mendelsohn wrote in The New York Review of Books earlier this month. The opening paragraphs of his essay are awesome, and I wish I could just republish the whole thing:

We have been dreaming of robots since Homer. In Book 18 of the Iliad, Achilles’ mother, the nymph Thetis, wants to order a new suit of armor for her son, and so she pays a visit to the Olympian atelier of the blacksmith-god Hephaestus, whom she finds hard at work on a series of automata:

…He was crafting twenty tripods
to stand along the walls of his well-built manse,
affixing golden wheels to the bottom of each one
so they might wheel down on their own [automatoi] to the gods’ assembly
and then return to his house anon: an amazing sight to see.

These are not the only animate household objects to appear in the Homeric epics. In Book 5 of the Iliad we hear that the gates of Olympus swivel on their hinges of their own accord, automatai, to let gods in their chariots in or out, thus anticipating by nearly thirty centuries the automatic garage door. In Book 7 of the Odyssey, Odysseus finds himself the guest of a fabulously wealthy king whose palace includes such conveniences as gold and silver watchdogs, ever alert, never aging. To this class of lifelike but intellectually inert household helpers we might ascribe other automata in the classical tradition. In the Argonautica of Apollonius of Rhodes, a third-century-BC epic about Jason and the Argonauts, a bronze giant called Talos runs three times around the island of Crete each day, protecting Zeus’s beloved Europa: a primitive home alarm system.

Mendelsohn goes on to discuss “the fantasy of mindless, self-propelled helpers that relieve their masters of toil,” and it seems incredibly interesting to read it in the context of DARPA’s now even more aptly named POSYDON program and the permanent undersea hubs of the Office of Naval Research. Click over to The New York Review of Books for the whole thing.

6) If the oceanic is the new cosmic, then perhaps the terrestrial is the new oceanic.

The Independent reported last month that magnetically powered underground robot “moles”—effectively subterranean drones—could potentially be used to ferry objects around beneath the city. They are this generation’s pneumatic tubes.

The idea would be to use “a vast underground network of pipes in a bid to bypass the UK’s ever more congested roads.” The company’s name? What else but Mole Solutions, who refer to their own speculative infrastructure as a network of “freight pipelines.”

[Image: Courtesy of Mole Solutions].

Taking a page from the Office of Naval Research and DARPA, though, perhaps these subterranean robot constellations could be given “hubs” and terrestrial beacons with which to orient themselves; combine with the bizarre “self-burying robot” from 2013, and declare endless war on the surface of the world from below.

See more at the Independent.

7) Finally, in terms of this specific flurry of links, Denise Garcia looks at the future of robot warfare and the dangerous “secrecy of emerging weaponry” that can act without human intervention over at Foreign Affairs.

She suggests that “nuclear weapons and future lethal autonomous technologies will imperil humanity if governed poorly. They will doom civilization if they’re not governed at all.” On the other hand, as Daniel Mendelsohn points out, we have, in a sense, been dealing with the threat of a robot apocalypse since someone first came up with the myth of Hephaestus.

Garcia’s short essay covers a lot of ground previously seen in, for example, Peter Singer’s excellent book Wired For War; that’s not a reason to skip one for the other, of course, but to read both. See more at Foreign Affairs.

(Thanks to Peter Smith for suggesting we visit the antennas at Cutler).

Infrastructure as Processional Space

[Image: A view of the Global Containers Terminal in Bayonne; Instagram by BLDGBLOG].

I just spent the bulk of the day out on a tour of the Global Containers Terminal in Bayonne, New Jersey, courtesy of the New York Infrastructure Observatory.

That’s a new branch of the institution previously known as the Bay Area Infrastructure Observatory, who hosted the MacroCity event out in San Francisco last May. They’re now leading occasional tours around NYC infrastructure (a link at the bottom of this post lets you join their mailing list).

[Image: A crane so large my iPhone basically couldn’t take a picture of it; Instagram by BLDGBLOG].

There were a little more than two dozen of us, a mix of grad students, writers, and people whose work in some way connected them to logistics, software, or product development—which, unsurprisingly, meant that everyone had only a few degrees of separation from the otherworldly automation on display there on the peninsula, this open-air theater of mobile cranes and mounted gantries whirring away in the precise loading and unloading of international container ships.

The clothes we were wearing, the cameras we were using to photograph the place, even the pens and paper many of us were using to take notes, all had probably entered the United States through this very terminal, a kind of return of the repressed as we brought those orphaned goods back to their place of disembarkation.

[Images: The bottom half of the same crane; Instagram by BLDGBLOG].

Along the way, we got to watch a room full of human controllers load, unload, and stack containers, with the interesting caveat that they—that is, humans—are only required when a crane comes within ten feet of an actual container. Beyond ten feet, automation sorts it out.

When the man I happened to be watching reached the critical point where his container effectively went on auto-pilot, not only did his monitor literally go blank, making it clear that he had seen enough and that the machines had now taken over, but he referred to this strong-armed virtual helper as “Auto Schwarzenegger.”

“Auto Schwarzenegger’s got it now,” he muttered, and the box then disappeared from the screen, making its invisible way to its proper location.

[Image: Waiting for the invisible hand of Auto Schwarzenegger; Instagram by BLDGBLOG].

Awesomely—in fact, almost unbelievably—when we entered the room, with this 90% automated landscape buzzing around us outside on hundreds of acres of mobile cargo in the wintry weather, they were listening to “Space Oddity” by David Bowie.

“Ground control to Major Tom…” the radio sang, as they toggled joysticks and waited for their monitors to light up with another container.

[Image: Out in the acreage; Instagram by BLDGBLOG].

The infinitely rearrangeable labyrinth of boxes outside was by no means easy to drive through, and we actually found ourselves temporarily walled in on the way out, just barely slipping between two containers that blocked off that part of the yard.

This was “Damage Land,” our guide from the port called it, referring to the place where all damaged containers came to be stored (and eventually sold).

[Image: One of thousands of stacked walls in the infinite labyrinth of the Global Containers Terminal; Instagram by BLDGBLOG].

One of the most consistently interesting aspects of the visit was learning what was and was not automated, including where human beings were required to stand during some of the processes.

For example, at one of several loading/unloading stops, the human driver of each truck was required to get out of the vehicle and stand on a pressure-sensitive pad in the ground. If nothing corresponding to the driver’s weight was felt by sensors on the pad, the otherwise fully automated machines toiling above would not snap into action.
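The logic of that interlock is simple enough to sketch: the automated sequence proceeds only while the pad reports a weight plausibly matching the registered driver. Everything here, from the function names to the tolerance, is hypothetical, invented purely to illustrate the idea:

```python
# Minimal sketch of a presence-interlock: automated machinery runs only
# while a weight on the safety pad plausibly matches the driver on file.
# Names, thresholds, and the sensor interface are all invented.

def driver_on_pad(measured_kg, driver_kg, tolerance_kg=25.0):
    """True only if the pad reads a weight close to the registered driver's."""
    return abs(measured_kg - driver_kg) <= tolerance_kg

def crane_cycle(pad_reading_kg, driver_kg):
    if not driver_on_pad(pad_reading_kg, driver_kg):
        return "HOLD"  # driver not confirmed on the pad: machines stay idle
    return "RUN"       # interlock satisfied: the automated lift proceeds

print(crane_cycle(0.0, 82.0))   # nobody on the pad -> HOLD
print(crane_cycle(80.0, 82.0))  # driver standing on the pad -> RUN
```

The point of the weight check, presumably, is that the pad cannot be satisfied by an empty boot or a dropped toolbox: the system wants the human demonstrably out of the machine's working envelope before it moves.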

This idea—that a human being standing on a pressure-sensitive pad could activate a sequence of semi-autonomous machines and processes in the landscape around them—surely has all sorts of weird implications for everything from future art or museum installations to something far darker, including the fully automated prison yards of tomorrow.

[Image: One of several semi-automated gate stations around the terminal; Instagram by BLDGBLOG].

This precise control of human circulation was also built into the landscape—or perhaps coded into the landscape—through the use of optical character recognition (OCR) software and radio-frequency ID chips. Tag-reading stations were located at various points throughout the yard, sending drivers either merrily on their exactly scripted way to a particular loading/unloading dock or sometimes actually barring that driver from entry. Indeed, bad behavior was punished, it was explained, by blocking a driver from the facility altogether for a certain amount of time, locking them out in a kind of reverse-quarantine.
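As a thought experiment, the gate's decision logic reduces to a lookup and a clock: a tag read either routes a truck to its assigned dock, holds it, or bars it until a lockout expires. The tag IDs, dock assignments, and lockout duration below are all invented for illustration, not drawn from the terminal's actual Terminal Operating System:

```python
import time

# Hypothetical sketch of a tag-reading gate station: route trucks with
# work orders to their docks, and enforce the timed "reverse-quarantine"
# lockout described above. All IDs and durations are invented.

LOCKOUT_SECONDS = 24 * 3600  # a one-day ban, chosen arbitrarily

dock_assignments = {"TRUCK-481": "dock 7", "TRUCK-112": "dock 3"}
lockouts = {"TRUCK-099": time.time() + LOCKOUT_SECONDS}  # barred until then

def gate_decision(tag_id, now=None):
    now = time.time() if now is None else now
    if lockouts.get(tag_id, 0) > now:
        return "BARRED: locked out of the facility"
    if tag_id in dock_assignments:
        return f"PROCEED to {dock_assignments[tag_id]}"
    return "HOLD: no work order on file"

print(gate_decision("TRUCK-481"))  # PROCEED to dock 7
print(gate_decision("TRUCK-099"))  # BARRED: locked out of the facility
```

What makes the real gate uncanny is that this entire decision runs with no human in the loop: the landscape itself reads you and either opens or refuses.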

Again, the implications here for other types of landscapes were both fascinating and somewhat ominous; but, more interestingly, as the trucks all dutifully lined up to pass through the so-called “OCR building” on the far edge of the property, I was struck by how much it felt like watching a ceremonial gate at the outer edge of some partially sentient Forbidden City built specifically for machines.

In other words, we often read about the ceremonial use of urban space in an art historical or urban planning context, whether that means Renaissance depictions of religious processions or it means the ritualized passage of courtiers through imperial capitals in the Far East. However, the processional cities of tomorrow are being built right now, and they’re not for humans—they’re both run and populated by algorithmic traffic control systems and self-operating machine constellations, in a thoroughly secular kind of ritual space driven by automated protocols more than by democratic legislation.

These—ports and warehouses, not churches and squares—are the processional spaces of tomorrow.

[Image: Procession of the True Cross (1496) by Gentile Bellini, via Wikimedia].

It’s also worth noting that these spaces are trickling into our everyday landscape from the periphery—which is exactly where we are now most likely to find them, simply referred to or even dismissed as mere infrastructure. However, this overly simple word masks the often startlingly unfamiliar forms of spatial and temporal organization on display. So much so, in fact, that infrastructural tourism (such as today’s trip to Bayonne) is now emerging as a way for people to demystify and understand this peripheral realm of inhuman sequences and machines.

In any case, as the day progressed we learned a tiny bit about the “Terminal Operating System”—the actual software that keeps the whole place humming—and it was then pointed out, rather astonishingly, that the actual owner of this facility is the Ontario Teachers’ Pension Plan, an almost Thomas Pynchonian level of financial weirdness that added a whole new level of narrative intricacy to the day.

If this piques your interest in the Infrastructure Observatory, consider following them on Twitter: @InfraObserve and @NYInfraObserve. And to join the NY branch’s mailing list, try this link, which should also let you read their past newsletters.

[Image: The Container Guide; Instagram by BLDGBLOG].

Finally, the Infrastructure Observatory’s first publication is also now out, and we got to see the very first copy. The Container Guide by Tim Hwang and Craig Cannon should be available for purchase soon through their website; check back there for details (and read a bit more about the guide over at Edible Geography).

(Thanks to Spencer Wright for the driving and details, and to the Global Containers Terminal Bayonne for their time and hospitality!)

Roentgen Objects, or: Devices Larger than the Rooms that Contain Them

[Image: Photo courtesy of the Rijksmuseum Amsterdam and the Metropolitan Museum of Art].

A gorgeous exhibition last year at the Metropolitan Museum of Art featured mechanical furniture designed by the father and son team, Abraham and David Roentgen: elaborate 18th-century technical devices disguised as desks and tables.

First, a quick bit of historical framing, courtesy of the Museum itself: “The meteoric rise of the workshop of Abraham Roentgen (1711–1793) and his son David (1743–1807) blazed across eighteenth-century continental Europe. From about 1742 to its closing in the early 1800s, the Roentgens’ innovative designs were combined with intriguing mechanical devices to revolutionize traditional French and English furniture types.”

Each piece, the Museum adds, was as much “an ingenious technical invention” as it was “a magnificent work of art,” an “elaborate mechanism” or series of “complicated mechanical devices” that sat waiting inside palaces and parlors for someone to come along and activate them.

If you can get past the visual styling of the furniture—after all, the dainty little details and inlays might not appeal to many BLDGBLOG readers—and concentrate instead only on the mechanical aspect of these designs, then there is something really incredible to be seen here.

[Image: Photo courtesy of the Rijksmuseum Amsterdam and the Metropolitan Museum of Art].

Hidden amidst drawers and sliding panels are keyholes, the proper turning of which results in other unseen drawers and deeper cabinets popping open, swinging out to reveal previously undetectable interiors.

But it doesn’t stop there. Further surfaces split in half to reveal yet more trays, files, and shelves that unlatch, swivel, and slide aside to expose entire other cantilevered parts of the furniture, materializing as if from nowhere on little rails and hinges.

Whole cubic feet of interior space are revealed in a flash of clacking wood flung forth on tracks and pulleys.

As the Museum phrases it, Abraham Roentgen’s “mechanical ingenuity” was “exemplified by the workings of the lower section” of one of the desks on display in the show: “when the key of the lower drawer is turned to the right, the side drawers spring open; if a button is pressed on the underside of these drawers, each swings aside to reveal three other drawers.”

And thus the sequence continues in bursts of self-expansion more reminiscent of a garden than a work of carpentry, a room full of wooden roses blooming in slow motion.

[Images: Photos courtesy of the Rijksmuseum Amsterdam and the Metropolitan Museum of Art].

The furniture is a process—an event—a seemingly endless sequence of new spatial conditions and states expanding outward into the room around it.

Each piece is a controlled explosion of carpentry with no real purpose other than to test the limits of volumetric self-demonstration, offering little in the way of useful storage space and simply showing off, performing, a spatial Olympics of shelves within shelves and spaces hiding spaces.

Sufficiently voluminous furniture becomes indistinguishable from a dream.

[Image: Photo courtesy of the Rijksmuseum Amsterdam and the Metropolitan Museum of Art].

What was so fascinating about the exhibition—and this can be seen, for example, in some of the short accompanying videos (a few of which are archived on the Metropolitan Museum of Art’s website)—is that you always seemed to have reached the final state, the fullest possible unfolding of the furniture, only for some other little keyhole to appear or some latch to be depressed in just the right way, and the thing just keeps on going, promising infinite possible expansions, as if a single piece of furniture could pop open into endless sub-spaces that are eventually larger than the room it is stored within.

The idea of furniture larger than the space that houses it is an extraordinary topological paradox, a spatial limit-case like black holes or event horizons, a state to which all furniture makers could—and should—aspire, devising a Roentgen object of infinite volumetric density.

A single desk that, when unfolded, is larger than the building around it, hiding its own internal rooms and corridors.

Suggesting that they, too, were thrilled by the other-worldly possibilities of their furniture, the Roentgens—and I love this so much!—also decorated their pieces with perspectival illusions.

[Image: Photo courtesy of the Rijksmuseum Amsterdam and the Metropolitan Museum of Art].

The top of a table might include, for example, the accurately rendered, gridded space of a drawing room, as if you were peering cinematically into a building located elsewhere; meanwhile, pop-up panels might include a checkerboard reference to other possible spaces that thus seemed to exist somewhere within or behind the furniture, lending each piece the feel of a portal or visual gateway into vast and multidimensional mansions tucked away inside.

The giddiness of it all—at least for me—was the implication that you could decorate a house with pieces of furniture that, when unfolded to their maximum possible extent, would multiply the internal space of that house several times over, doubling, tripling, quadrupling its available volume. But it’s not magic or the supernatural—it’s not Quadraturin—it’s just advanced carpentry, using millimeter-precise joinery and a constellation of unseen hinges.

[Images: Photos courtesy of the Rijksmuseum Amsterdam and the Metropolitan Museum of Art].

You could imagine, for example, a new type of house; it’s got a central service core lined with small elevators. Wooden boxes, each perhaps four feet on a side, pass up and down inside the walls of the house, riding this network of dumbwaiters from floor to floor, where they occasionally stop, when a resident demands it. That resident then pops open the elevator door and begins to unfold the box inside, unlatching and expanding it outward into the room, this Roentgen object full of doors, drawers, and shelves, cantilevered panels, tabletops, and dividers.

And thus the elevators grow, simultaneously inside and outside, a liminal cabinetry both tumescent and architectural that fills up the space with spaces of its own, fractal super-furniture stretching through more than one room at a time and containing its own further rooms deep within it.

But then you reverse the process and go back through in the other direction, painstakingly shutting panels, locking drawers, pushing small boxes inside of larger boxes, and tucking it all up again, compressing it like a JPG back into the original, ultra-dense cube it all came from. You’re like some homebound god of superstrings tying up and hiding part of the universe so that others might someday rediscover it.

To have been around to drink coffee with the Roentgens and to discuss the delirious outer limits of furniture design would have been like talking to a family of cosmologists, diving deep into the quantum joinery of spatially impossible objects, something so far outside of mere cabinetry and woodwork that it almost forms a new class of industrial design. Alas, their workshop is long gone, their surviving objects are limited in number, and the exhibition at the Metropolitan Museum of Art has since closed.

Landscape Futures Arrives

[Image: Internal title page from Landscape Futures; book design by Everything-Type-Company].

At long last, after a delay from the printer, Landscape Futures: Instruments, Devices and Architectural Inventions is finally out and shipping internationally.

I am incredibly excited about the book, to be honest, and about the huge variety of content it features, including an original essay by Elizabeth Ellsworth & Jamie Kruse of Smudge Studio, a short piece of landscape fiction by Pushcart Prize-winning author Scott Geiger, and a readymade course outline—open for anyone looking to teach a course on oceanographic instrumentation—by Mammoth’s Rob Holmes.

These join reprints of classic texts by geologist Jan Zalasiewicz, on the incipient fossilization of our cities 100 million years from now; a look at the perverse history of weather warfare and the possibility of planetary-scale climate manipulation by James Fleming; and a brilliant analysis of the Temple of Dendur, currently held deep in the controlled atmosphere of New York’s Metropolitan Museum of Art, and its implications for architectural preservation elsewhere.

And even these are complemented by an urban hiking tour by the Center for Land Use Interpretation that takes you up into the hills of Los Angeles to visit check dams, debris basins, radio antennas, and cell phone towers, and a series of ultra-short stories set in a Chicago yet to come by Pruned’s Alexander Trevi.

[Images: A few spreads from the “Landscape Futures Sourcebook” featured in Landscape Futures; book design by Everything-Type-Company].

Of course, everything just listed supplements and expands on the heart of the book, which documents the eponymous exhibition hosted at the Nevada Museum of Art, featuring specially commissioned work by Smout Allen, David Gissen, and The Living, and pre-existing work by Liam Young, Chris Woebken & Kenichi Okada, and Lateral Office.

Extensive original interviews with the exhibiting architects and designers, and a long curator’s essay—describing the exhibition’s focus on the intermediary devices, instruments, and spatial machines that can fundamentally transform how human beings perceive and understand the landscapes around them—complete the book, in addition to hundreds of images, many maps, and an extensive use of metallic and fluorescent inks.

The book is currently only $17.97 on Amazon.com, which seems like an almost unbelievable deal; now is an awesome time to buy a copy.

[Images: Interview spreads from Landscape Futures; book design by Everything-Type-Company].

In any case, I’ve written about Landscape Futures here before, and an exhaustive preview of it can be seen in this earlier post.

I just wanted to put up a notice that the book is finally shipping worldwide, with a new publication date of August 2013, and I look forward to hearing what people think. Enjoy!