Piscine Virtual Reality

[Image: From “Putting the Fish in the Fish Tank: Immersive VR for Animal Behavior Experiments” by Sachit Butail, Amanda Chicoli, and Derek A. Paley].

I’ve had this story bookmarked for the past four years, and a tweet this morning finally gave me an excuse to write about it.

Back in 2012, we read, researchers at Harvard University found a way to fool a paralyzed fish into thinking it was navigating a virtual spatial environment. They then studied its brain during this trip that went nowhere—this virtual, unmoving navigation—in order to understand the “neuronal dynamics” of spatial perception.

As Noah Gray wrote at the time, deliberately highlighting the study’s unnerving surreality, “Paralyzed fish navigates virtual environment while we watch its brain.” Gray then compared it to The Matrix.

The paper itself explains that, when “paralyzed animals interact fictively with a virtual environment,” it results in what are called “fictive swims.”

To study motor adaptation, we used a closed-loop paradigm and simulated a one-dimensional environment in which the fish is swept backwards by a virtual water flow, a motion that the fish was able to compensate for by swimming forwards, as in the optomotor response. In the fictive virtual-reality setup, this corresponds to a whole-field visual stimulus that is moving forwards but that can be momentarily accelerated backwards by a fictive swim of the fish, so that the fish can stabilize its virtual location over time. Remarkably, paralyzed larval zebrafish behaved readily in this closed-loop paradigm, showing similar behavior to freely swimming fish that are exposed to whole-field motion, and were not noticeably compromised by the absence of vestibular, proprioceptive and somatosensory feedback that accompanies unrestrained swimming.
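The closed-loop logic in that passage can be caricatured in a few lines of code. This is a toy sketch only; the flow speed, swim gain, and step values below are invented for illustration and do not come from the paper.

```python
# Toy sketch of the closed-loop paradigm described above.
# All parameter values are invented for illustration.

def simulate_closed_loop(swim_events, flow_speed=1.0, swim_gain=5.0, steps=10):
    """Track a fish's position in a 1-D virtual environment.

    Each step, virtual water flow sweeps the fish backwards by
    `flow_speed`; a fictive swim on that step pushes it forwards by
    `swim_gain`, as if the whole-field stimulus were momentarily
    accelerated backwards, letting the fish stabilize its location.
    """
    position = 0.0
    trajectory = []
    for t in range(steps):
        position -= flow_speed            # swept backwards by virtual flow
        if t in swim_events:
            position += swim_gain         # fictive swim compensates
        trajectory.append(position)
    return trajectory

# A fish that fictively swims once every five steps roughly holds station:
print(simulate_closed_loop(swim_events={4, 9}))
# → [-1.0, -2.0, -3.0, -4.0, 0.0, -1.0, -2.0, -3.0, -4.0, 0.0]
```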

Imagine being that fish; imagine realizing that the spatial environment you think you’re moving through is actually some sort of induced landscape put there purely for the sake of studying your neural reaction to it.

Ten years from now, experimental architecture-induction labs pop up at universities around the world, where people sit, strapped into odd-looking chairs, appearing to be asleep. They are navigating labyrinths, a scientist whispers to you, trying not to disturb them. You look around the room and see books full of mazes spread across a table, six-foot-tall full-color holograms of the human brain, and dozens of HD computer screens flashing with graphs of neural stimulation. They are walking through invisible buildings, she says.

[Image: From “Putting the Fish in the Fish Tank: Immersive VR for Animal Behavior Experiments” by Sachit Butail, Amanda Chicoli, and Derek A. Paley].

In any case, the fish-in-virtual-reality setup was apparently something of a trend in 2012, because there was also a paper published that year called “Putting the Fish in the Fish Tank: Immersive VR for Animal Behavior Experiments,” this time by researchers at the University of Maryland. Their goal was to “startle” fish using virtual reality:

We describe a virtual-reality framework for investigating startle-response behavior in fish. Using real-time three dimensional tracking, we generate looming stimuli at a specific location on a computer screen, such that the shape and size of the looming stimuli change according to the fish’s perspective and location in the tank.
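The geometry behind that perspective-dependent stimulus is simple to sketch: a virtual object subtends a visual angle at the fish's eye, and the on-screen stimulus must be scaled to reproduce that angle from wherever the fish happens to be. The function names and centimetre values below are hypothetical, not taken from the paper.

```python
import math

def looming_angle(radius, distance):
    """Visual angle (radians) subtended by a sphere of `radius` seen
    from `distance` away — the quantity a looming stimulus must
    reproduce on-screen for a given fish position."""
    return 2.0 * math.atan(radius / distance)

def screen_radius(radius, fish_to_object, fish_to_screen):
    """Radius at which to draw the stimulus on a screen placed
    `fish_to_screen` away, so that it subtends the same visual angle
    as the virtual object."""
    theta = looming_angle(radius, fish_to_object)
    return fish_to_screen * math.tan(theta / 2.0)

# As a virtual predator closes from 40 cm to 10 cm, the stimulus on a
# screen 5 cm from the fish must grow:
for d in (40.0, 20.0, 10.0):
    print(round(screen_radius(2.0, d, 5.0), 3))  # 0.25, then 0.5, then 1.0
```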

As they point out, virtual reality can be a fantastic tool for studying spatial perception. VR, they write, “provides a novel opportunity for high-output biological data collection and allows for the manipulation of sensory feedback. Virtual reality paradigms have been harnessed as an experimental tool to study spatial navigation and memory in rats, flight control in flies and balance studies in humans.”

But why stop at fish? Why stop at fish tanks? Why not whole virtual landscapes and ecosystems?

Imagine a lost bear running around a forest somewhere, slipping between birch trees and wildflowers, the sun a blinding light that stabs down through branches in disorienting flares. There are jagged rocks and dew-covered moss everywhere. But it’s not a forest. The bear looks around. There are no other animals, and there haven’t been for days. Perhaps not for years. It can’t remember. It can’t remember how it got there. It can’t remember where to go.

It’s actually stuck in a kind of ursine Truman Show: an induced forest of virtual spatial stimuli. And the bear isn’t running at all; it’s strapped down inside an MRI machine in Baltimore. Its brain is being watched—as its brain watches the well-rendered polygons of these artificial rocks and trees.

(Fish tank story spotted via Clive Thompson. Vaguely related: The Subterranean Machine Dreams of a Paralyzed Youth in Los Angeles).

The Switching Labyrinth

[Image: From “Labyrinths, Mazes and the Spaces Inbetween” by Sam McElhinney].

Sam McElhinney, a student at the Bartlett School of Architecture, has been building full-scale labyrinths in London and testing people’s spatial reactions to them. See photos of his constructions, below.

McElhinney explained his research to BLDGBLOG in a recent email, attaching a paper that he delivered earlier this month at a cybernetics conference in Vienna, where it was awarded best paper. Called “Labyrinths, Mazes and the Spaces Inbetween,” it describes McElhinney’s fascinating look at how people actually walk through, use, and familiarize themselves with the internal spaces of buildings, using mazes and labyrinths as his control studies.

In the process, McElhinney introduces us to movement-diagrams, Space Syntax, and other forms of architectural motion-analysis, asking: would a detailed study of user-behaviors help architects design more consistently interesting buildings, spaces that “might evoke,” he writes, “a sense of continual delight”? Pushing these questions a bit further, we might ask: should all our buildings be labyrinths?

[Images: Movement-typologies from “Labyrinths, Mazes and the Spaces Inbetween” by Sam McElhinney].

Early in the paper, McElhinney differentiates between the two types of interior experiences—between mazes and labyrinths.

A path system can be multicursal: a network of interconnecting routes, intended to disorient even the cunning. It may contain multiple branches and dead ends, specifically designed to confuse the occupant. This is a maze.

Alternatively, a path can form a single, monocursal route. Once embarked upon, this may fold, twist and turn, but will remain constant and ultimately reach a destination; this is a labyrinth.

The experience of walking these two topologies is very different.
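McElhinney's distinction maps neatly onto graph topology: rooms as nodes, passages as edges. As a loose illustration (the function and the room names are mine, not his), a monocursal labyrinth is simply an unbranched connected path, and anything else is a maze:

```python
# Hypothetical sketch of the maze/labyrinth distinction as graph topology.
# Rooms are nodes; passages are undirected edges.

def classify(passages):
    """Return 'labyrinth' if the passages form one unbranched route,
    else 'maze' (branches, dead-end networks, or loops)."""
    neighbours = {}
    for a, b in passages:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    ends = [r for r in neighbours if len(neighbours[r]) == 1]
    # A monocursal path has exactly two ends and no branching rooms.
    if len(ends) != 2 or any(len(ns) > 2 for ns in neighbours.values()):
        return "maze"
    # Walk from one end; a labyrinth reaches every room.
    seen, frontier = set(), [ends[0]]
    while frontier:
        room = frontier.pop()
        if room not in seen:
            seen.add(room)
            frontier.extend(neighbours[room] - seen)
    return "labyrinth" if len(seen) == len(neighbours) else "maze"

print(classify([("a", "b"), ("b", "c"), ("c", "d")]))  # → labyrinth
print(classify([("a", "b"), ("b", "c"), ("b", "d")]))  # → maze (branches at b)
```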

These basic definitions set the stage for McElhinney’s own “premise,” which is “that all space is found, experienced and inhabited in a state of ‘switching’ flux between the diametrically opposed topologies of maze and labyrinth. This offers insights into how we might evoke a sense of continual delight in the user [of the buildings that we go on to design].” Accordingly, he asks how architects might actually construct “a path that switches from a labyrinth into a maze (and vice-versa).”

How can architects design for this switch?

[Images: From “Labyrinths, Mazes and the Spaces Inbetween” by Sam McElhinney].

McElhinney’s argument segues through a discussion of Alasdair Turner’s Space Syntax investigations (and the limitations thereof). He describes how Turner put together a series of automated test-runs through which he could track the in-labyrinth behavior of various “maze-agents”; these reprogrammable “agents” would continually seek new pathways through the twisty little passages around them—a spatial syntax of forward movement—and Turner took note of the results.

Turner’s test-environments included, McElhinney explains, a maze that “was set to actively re-configure upon a door being opened, altering the maze control algorithms” behind the scenes, thus producing new route-seeking behavior in the maze-agents.
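Turner's actual control algorithms aren't reproduced here, but the behavior can be caricatured: an agent that prefers unvisited cells, wandering through a connection table that is silently swapped out the moment it crosses a designated door. Everything below — the cell names, the trigger, the tables — is a made-up toy, not his implementation.

```python
import random

def explore(maze, start, door, reconfigured, steps=20, seed=0):
    """Toy route-seeking maze-agent: at each step it prefers an
    unvisited neighbouring cell; crossing `door` swaps in the
    `reconfigured` connection table, like the maze that 'was set to
    actively re-configure upon a door being opened'.
    `maze` and `reconfigured` map each cell to its neighbours."""
    rng = random.Random(seed)
    current, visited, path = start, {start}, [start]
    table = maze
    for _ in range(steps):
        options = table.get(current, [])
        if not options:
            break
        unvisited = [c for c in options if c not in visited]
        current = rng.choice(unvisited or options)
        if current == door:
            table = reconfigured  # the maze rearranges behind the scenes
        visited.add(current)
        path.append(current)
    return path

# Cell "C" only becomes reachable after the door triggers the switch:
maze  = {"A": ["B"], "B": ["A", "door"], "door": ["B"]}
after = {"door": ["C"], "C": ["door"], "B": ["door"], "A": []}
print(explore(maze, "A", "door", after, steps=6))
# → ['A', 'B', 'door', 'C', 'door', 'C', 'door']
```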

[Images: From “Labyrinths, Mazes and the Spaces Inbetween” by Sam McElhinney].

Unsatisfied with Turner’s research, however, McElhinney went on to build his own full-scale “switching labyrinth” near London’s Euston Station. Participants in this experiment “animated” McElhinney’s switching labyrinth by way of “a stepper motor and slide mechanism” that, together, were “able to periodically shift, ‘switching’ openings to offer alternative entrance and exit paths.”

The participants walked in and their routes warped the labyrinth around them.

[Image: Sam McElhinney’s “switching labyrinth,” or psycho-cybernetic human navigational testing ground, constructed near Euston Station].

After watching all this unfold, McElhinney suggested that further research along these lines could help to reveal architectural moments at which there is an “emergence of labyrinthine, or familiar, spatialities within an unknown or changing maze framework.”

There can be a place or moment within any building, in other words, at which the spatially unfamiliar will erupt—and from movement-pathway studies we can extrapolate architectural form, buildings that perfectly rest at the cognitive flipping point between maze and labyrinth, familiar and disorienting, adventurous and strange.

[Images: Sam McElhinney’s “switching labyrinth”].

The cybernetics of human memory and in-situ spatial decision-making processes provide a framework from which we can extract and assemble a new kind of architecture.

[Image: From “Labyrinths, Mazes and the Spaces Inbetween” by Sam McElhinney].

How we move through coiled, labyrinthine environments can be studied for insights into human navigation, physiology, and more.

[Image: From “Labyrinths, Mazes and the Spaces Inbetween” by Sam McElhinney].

McElhinney sent over a huge range of maze and labyrinth precedents that served as part of his research; some images from that research appear below.

[Images: Maze-studies from “Labyrinths, Mazes and the Spaces Inbetween” by Sam McElhinney].

It’s fascinating research, and I would love to see it scaled way, way up, beyond a mere test-maze in a warehouse into something both multileveled and city-sized.

The Subterranean Machine Dreams of a Paralyzed Youth in Los Angeles

[Image: A glimpse of Honda’s brain-interface technology, otherwise unrelated to the post below].

Among many other interesting things in the highly recommended Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century by P.W. Singer – a book of interest to historians, psychologists, designers, military planners, insurgents, peace advocates, AI researchers, filmmakers, novelists, future soldiers, legislators, and even theologians – is a very brief comment about military research into the treatment of paralysis.
In a short subsection called “All Jacked Up,” Singer refers to “a young man from South Weymouth, Massachusetts,” who was “paralyzed from the neck down in 2001.” After nearly giving up hope for recovery, “a computer chip was implanted into his head.”

The goal was to isolate the signals leaving [his] brain whenever he thought about moving his arms or legs, even if the pathways to those limbs were now broken. The hope was that [his] intent to move could be enough; his brain’s signals could be captured and translated into a computer’s software code.

None of this seemed like news to me; in fact, even the next step wasn’t particularly surprising: they hooked him up to a computer mouse and then to a TV remote control, and the wounded man was thus able not only to surf the web but to watch HBO.
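The mouse hookup hints at how such decoding works in principle. As a loose, hypothetical sketch — none of these numbers or tuning weights come from the actual study — early brain-computer interfaces often used linear decoders along these lines, reading cursor velocity out of deviations in neural firing rates:

```python
def decode_velocity(firing_rates, weights, baseline):
    """Toy linear decoder: cursor velocity is a weighted sum of how
    far each recorded neuron's firing rate departs from its baseline.
    All values here are invented for illustration."""
    vx = sum(w[0] * (r - b) for r, w, b in zip(firing_rates, weights, baseline))
    vy = sum(w[1] * (r - b) for r, w, b in zip(firing_rates, weights, baseline))
    return vx, vy

# Three hypothetical neurons, each "tuned" to a preferred direction:
weights  = [(1.0, 0.0), (0.0, 1.0), (-0.5, 0.5)]
baseline = [10.0, 10.0, 10.0]
print(decode_velocity([14.0, 10.0, 10.0], weights, baseline))  # → (4.0, 0.0)
```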
What I literally can’t stop thinking about, though, is where this research “opens up some wild new possibilities for war,” as Singer writes.
In other words: why hook this guy up to a remote control television when you could hook him up to a fully-armed drone aircraft flying above Afghanistan? He would simply pilot the plane with his thoughts.

[Image: A squadron of drones awaits its orders].

This vision – of paralyzed soldiers thinking unmanned planes through war – is both terrible and stunning.
Singer goes on to describe DARPA’s “Brain-Interface Project,” which helped pay for this research and which explores how training the paralyzed to control machines by thought could be put to military use.
Later, Singer describes research into advanced, often robotic prostheses; “these devices are also being wired directly into the patient’s nerves,” he writes.

This allows the soldier to control their artificial limbs via thought as well as have signals wired back into their peripheral nervous system. Their limbs might be robotic, but they can “feel” a temperature change or vibration.

When this is put into the context of the rest of Singer’s book – where we read, for instance, that “at least 45 percent of [the U.S. Air Force’s] future large bomber fleet [will be] able to operate without humans aboard,” with other “long-endurance” military drones soon “able to stay aloft for as long as five years,” and if you consider that, as Singer writes, the Los Angeles Police Department “is already planning to use drones that would circle over certain high-crime neighborhoods, recording all that happens” – you get into some very heady terrain, indeed. After all, the idea that those drone aircraft circling over Los Angeles in the year 2013 are actually someone else’s literal daydream simply blows me away.
In other words, if you can directly link the brain of a paralyzed soldier to a computer mouse – and then to a drone aircraft, and then perhaps to an entire fleet of armed drones circling over enemy territory – then surely you could also hook that brain up to, say, lawnmowers, remote-controlled tunneling machines, lunar landing modules, strip-mining equipment, and even 3D printers.
And here’s where some incredible landscape design possibilities come in.

[Image: 3D printing, via Thinglab].

A patient somewhere in Gloucestershire dreams in plastic objects endlessly extruded from a 3D printer… Architectural models, machine parts, abstract sculpture – a whole new species of object is emitted, as if printing dreams in three-dimensions.
Or you go to a toy store in Manhattan – or to next year’s Design Indaba, or to the Salone del Mobile – and you find nothing but rooms full of strange objects dreamed into existence by paralyzed 16-year-olds.
The idea of brain-controlled wireless digging machines, in particular, just astonishes me; at night you dream of tunnels – because you are actually in control of tunneling equipment operating somewhere beneath the surface of the earth.
A South African platinum mine begins to diverge wildly from real sites of mineral wealth, its excavations more and more abstract as time goes on – carving M.C. Escher-like knots and strange cursive whorls through ancient reefwork below ground – and it’s because the mining engineer, paralyzed in a car crash ten years ago and in control of the digging machines ever since, has become addicted to morphine.
Or perhaps this could even be used as a new and extremely avant-garde form of psychotherapy.
For instance, a billionaire in Los Angeles hooks his depressed teenage son up to Herrenknecht tunneling equipment which has been shipped, at fantastic expense, down to Antarctica. An unmappably complex labyrinth of subterranean voids is soon created; the boy literally acts out through tunnels. If rock is his paint, he is its Basquiat.
Instead of performing more traditional forms of Freudian analysis by interviewing the boy in person, a team of highly specialized dream researchers is instead sent down into those artificial caverns, wearing North Face jackets and thick gloves, where they deduce human psychology from moments of curvature and angle of descent.
My dreams were a series of tunnels through Antarctica, the boy’s future headstone reads.

[Image: Three varieties of underground mining machine].

That, or we stay aboveground and we look at the design implications of brain-interfaced gardening equipment.
I’m imagining a new film directed by Alex Trevi, in which a landscape critic on commission from The New Yorker visits a sprawling estate house somewhere in southern France. The owner has been bed-bound for three decades now, following a near-fatal car accident, but his brain was recently interfaced directly with an armada of wireless gardening machines: constantly trimming, mowing, replanting, and pruning, the gardens outside are shifted with his every thought process.
Having arrived simply to write a thesis about this unique development in landscape design, our critic finds herself entranced by the hallucinatory goings-on, the creeping vines and insectile machines and moving walls of hedges all around her.

[Image: The gardens at Versailles, via Wikipedia].

Returning to Singer, briefly, he writes that “Many robots are actually just vehicles that have been converted into unmanned systems” – so if we can robotize aircraft, digging machines, riding lawnmowers, and even heavy construction equipment, and if we can also directly interface the human brain to the controls of these now wireless robotic mechanisms, then the design possibilities seem limitless, surreal, and well worth exploring (albeit somewhat cautiously) in real life.
It could be a new episode of MythBusters, or the next iteration of the DARPA Grand Challenge. What’s the challenge?
A paralyzed teenager has to dig a tunnel through the Alps using only his or her brain and a partial face excavation machine.