Rise of the Machines: Mario Carpo on Robotic Construction

2020

Publication History:

“Rise of the Machines. Mario Carpo on Robotic Construction.” Artforum 58, 7 (2020): 172-79, 235 

The text posted here is an early draft, and it differs from the published version. Please cite only from the printed copy.

When architects and designers started some serious tinkering with computer-aided design, in the early 1990s, they soon realized that the computers they used to draw objects on the screen could also serve to fabricate the same objects right away. The integration of computer-aided drawing and manufacturing (CAD-CAM) was at the core of digital design theory from the very start, and the machinery for numerically controlled fabrication adopted over time has often been as influential, and even inspirational, for designers as the software at the core of the computer systems themselves.

Computer numerically controlled (CNC) milling machines were the designers' tool of choice in the 1990s—a legacy subtractive fabrication technology in which a drill moving along a continuous path carves seamless grooves into a solid volume. CNC machines were a perfect match for the new, user-friendly programs for digital streamlining (or spline modeling) that were then coming to market, and this felicitous pairing of software and manufacturing tools accounts for the rise in popularity of the smooth and curvy, continuous lines and surfaces that marked end-of-millennium global architecture—a style now often called parametricism.
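
As a minimal illustration of that pairing, the sketch below (in Python, with invented control points) evaluates a cubic Bezier curve—one of the simplest members of the spline family used by such modelers—via de Casteljau's algorithm: a handful of control points defines a continuous, smooth curve that a CNC drill can follow as one unbroken toolpath.

```python
def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t by repeated linear
    interpolation (de Casteljau's algorithm)."""
    lerp = lambda a, b: tuple(a[i] + t * (b[i] - a[i]) for i in range(2))
    a, b, c = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    d, e = lerp(a, b), lerp(b, c)
    return lerp(d, e)

# Four control points define a continuous smooth curve; sampling it densely
# yields an unbroken toolpath for the milling drill to follow.
control = [(0.0, 0.0), (1.0, 2.0), (3.0, -1.0), (4.0, 1.0)]
toolpath = [bezier(*control, t=i / 100) for i in range(101)]
print(toolpath[50])  # the midpoint of the smooth path
```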

However, a few years into the new millennium the rise of 3D printing technologies—both cheap desktop machines and industrial-grade machinery—accompanied a significant change of tack in computational theory and design practice: most 3D printers work by materializing small units of printed matter, called voxels; hence the 3D printing of even small and simple objects involves the notation, calculation, and fabrication of a huge number of minuscule, identical boxlike units. At the same time, various "discrete" tools for electronic calculation or simulation, such as Finite Element Analysis, Cellular Automata, and Agent-Based Systems, became increasingly popular in the design community. Unlike traditional mathematics, which has over time developed sophisticated strategies to simplify natural phenomena and reduce the number of parameters and variants at play, these new computational tools are designed to deal with extraordinary amounts of unsorted and often meaningless data (now often called Big Data), which today's electronic computers can process way better than humans ever could. Not surprisingly, signs of this new and increasingly pervasive technical logic soon started to show up in the work of technically driven experimental designers: raw voxels, for example, were often left visible, sometimes ostentatiously, in numbers far exceeding the powers of human calculation, and at resolutions deliberately pushed beyond the thresholds of human perception.
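
To make the scale of that notational shift concrete, here is a minimal, self-contained sketch (with illustrative dimensions, not taken from the text) of the voxel logic described above: even a small, simple solid decomposes into millions of identical boxlike units.

```python
def voxelize_sphere(radius_mm: float, voxel_mm: float) -> int:
    """Count the voxels needed to materialize a solid sphere."""
    n = int(radius_mm / voxel_mm)
    count = 0
    for x in range(-n, n + 1):
        for y in range(-n, n + 1):
            for z in range(-n, n + 1):
                # A voxel is "printed" if its center falls inside the sphere.
                if (x * x + y * y + z * z) * voxel_mm ** 2 <= radius_mm ** 2:
                    count += 1
    return count

# A sphere of 50 mm radius at 0.5 mm resolution already takes roughly
# 4.2 million voxels -- a notation far beyond the powers of human calculation.
print(voxelize_sphere(50.0, 0.5))
```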

These early tools for computational fabrication shared two core aspects. First, they did not use any mechanical casts, molds, or matrixes to reproduce identical copies. Mechanical matrixes have an upfront cost that must be amortized by repeated use; the savings obtained through mass-production are called economies of scale. This does not apply to digital fabrication: due to the absence of mechanical matrixes, every digital replica is, in a sense, a new original, hence making more copies of the same item does not make any of them cheaper. This is the technical logic of digital mass-customization—one of the most disruptive ideas the design community ever came up with. Second, however, and mitigating to some extent the import of the above, digital fabrication, well suited for scalability in numbers, appeared for a long time to be eminently non-scalable in size. Milling machines can mill any number of panels in a sequence, all the same or all different, but must be bigger than each panel they mill. Most 3D printing, even when based on extruded filaments, happens inside a printing chamber; to print a house in one go, a 3D printer must be bigger than the house itself. Some quixotic attempts at building humongous 3D printers having proved impractical, it was long assumed that digital fabrication would be destined primarily for the production of small non-standard components (facade panels, for example), to be later put together more or less by hand—a process so labor-intensive, and costly, that digital fabrication technologies were often dismissively seen as suited at best for making teapots, chairs, and experimental architectural pavilions (teapots and chairs requiring little or no assembly; pavilions being mostly assembled by hand by unpaid architectural students). This is when architects realized that the ideal tool for the automatic, computer-driven assembly of any number of parts had been in existence for half a century: hiding in plain sight, in a sense—or rather inside factories, where industrial robots have been used extensively since the 1960s.

Early in the 21st century the industrial robot was a mature, unexciting technology. Industrial robotic arms had been known since the early 1960s, and went mainstream in the 1970s, mass-produced by a number of American, European, and Japanese companies, and adopted in particular by car manufacturers to replace manual workers on moving assembly lines. Frederick Taylor famously saw the modern industrial worker as a gorilla, with the intelligence of an ox.[1] In the Taylorist tradition industrial workers were assumed to be too stupid to learn more than a limited number of simple motions; in modern moving assembly lines the worker was expected to stand still, learn just one gesture, and repeat it forever. When industrial robots were called in to replace human workers (due to the rising costs of labor in the 1960s and 1970s) they naturally inherited the same stolidity: industrial robots need a scripted program for each motion they make; when programs had to be individually written or otherwise prepared by humans, it made sense to have each robot repeat the same identical motion, i.e. the same scripted program, ad infinitum, just like the human automaton it was replacing had done since the invention of the assembly line. All the "intelligence" these early industrial robots needed was the memory of a few sequential motions, recorded on magnetic tape.

As it happens, the best computer scientists of the time would have been hard-pressed to offer much more than that. The memory and processing power of even the biggest mainframe computers of the 1960s were a minuscule fraction of what we now have in any cell phone; not surprisingly, most projects of Artificial Intelligence conceived in the 1960s were abandoned in the 1970s, as they proved undeliverable. Cyberneticians of the 1960s, particularly in academia, had high expectations for the imminent development of intelligent industrial robots, but this did not happen back then, because factory owners did not need intelligent robots to replace unintelligent workers—and there would have been no artificial intelligence to power those robots anyway.

Fast forward to 2005, when Fabio Gramazio and Matthias Kohler, young architects then in their mid-30s, started their seminal experiments with industrial robots in the Department of Architecture of the Swiss Federal Institute of Technology in Zurich (ETHZ). Personal computers of the early 2000s could easily script all the instructions needed to drive the motions of a robotic arm—this was often done by a kind of reverse engineering: setting the final position of the robot's hand, so to speak, then letting the computer calculate all the motions needed to get there. Gramazio and Kohler soon realized that computers could also be tasked with writing sets of instructions for sequences of incrementally different robotic operations: for example, when picking identical bricks from the same location, instructing the robot to lay each brick in the sequence not in the exact location where the preceding brick had been laid (as that location would already be taken, evidently), but next to it, or above it; and, furthermore, laying each brick in a given course at a different angle to the horizontal alignment of the wall, or to the brick next to it, and so forth. Driven by a theoretically unlimited number of different scripts automatically calculated by a computerized algorithm, robots could carry out a theoretically unlimited number of different motions—without any supplemental cost, or at the same cost per motion. Gramazio and Kohler famously demonstrated the architectural potential of this new technical logic by designing and building a number of geometrically complex brick walls: the complexity of their geometry was, by design, beyond the reach of what most skilled artisan bricklayers could master—assuming that any such artisans still existed. More recently, Gramazio and Kohler have applied the same logic of differential, or non-standard, assembly to stocks of different components, each individually designed and custom built: the roof of the Arch_Tec_Lab at ETHZ, completed in 2016, comprises almost 50,000 different timber elements, automatically assembled into 168 different lattice trusses, each 14.70 meters wide, covering a surface of 2,300 square meters.
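
The differential logic described here is easy to sketch in code. The following is a hypothetical illustration, not Gramazio and Kohler's actual software: a short script that generates a distinct placement instruction (position plus rotation) for every brick in a wall, at no extra cost per instruction. All dimensions and the rotation rule are invented for the example.

```python
import math

BRICK_LENGTH, BRICK_HEIGHT = 0.24, 0.07   # meters; illustrative sizes
COURSES, BRICKS_PER_COURSE = 20, 30

def wall_instructions():
    """Yield one distinct (x, y, z, angle) placement target per brick."""
    for course in range(COURSES):
        for i in range(BRICKS_PER_COURSE):
            # Running bond: every other course is offset by half a brick.
            x = i * BRICK_LENGTH + (course % 2) * BRICK_LENGTH / 2
            z = course * BRICK_HEIGHT
            # Each brick is rotated by a smoothly varying angle, so no two
            # placements (and no two robot motions) are identical.
            angle = 25 * math.sin(0.4 * i + 0.2 * course)  # degrees
            yield (x, 0.0, z, angle)

# Each tuple would be handed to the robot controller, whose inverse-kinematics
# solver computes the arm motions needed to reach that pose.
targets = list(wall_instructions())
print(len(targets), "distinct placement instructions, e.g.", targets[0])
```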

Industrial robots had been endowed with some rudimentary sensing capabilities almost from the start. At the beginning, simple sensors of touch and velocity were needed for basic feedback, and for the automatic correction of some motions and grips. Today's computers can easily interpret all sorts of sensory data, and they can do so fast enough to allow for real-time interaction (at least in the case of relatively slow motions, like those of a robotic arm). Early roboticists thought that intelligent industrial robots would soon learn to pick and choose loose components from a random heap, or straight out of a bin. Even today that is still a tall order, but thanks to the combination of advanced sensors and Artificial Intelligence, or machine learning, industrial robots can now be tasked with making independent decisions on the fly, based on the detection, or sensing, of unpredictable factors. This mode of operation used to be called "adaptive" fabrication, and it was originally meant to cope with local incidents and tolerances in a traditional, highly controlled industrial environment. When generalized, however, the "adaptive," or intelligent, skills of today's industrial robots are the harbinger of a truly revolutionary approach to design and making.
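
As a hedged illustration of that "adaptive" loop, the toy simulation below (all numbers invented) corrects each placement from sensed data instead of replaying a fixed script: the target elevation of every course is computed from the wall as built, not from the wall as drawn.

```python
import random

NOMINAL_HEIGHT = 0.070   # meters: the brick height the blueprint assumes
TOLERANCE = 0.003        # meters: real bricks deviate from the nominal size

def sense_height() -> float:
    """Stand-in for a scanner measuring the brick actually picked up."""
    return NOMINAL_HEIGHT + random.uniform(-TOLERANCE, TOLERANCE)

built_height = 0.0
for course in range(10):
    measured = sense_height()
    # Adaptive step: the elevation of the next course is computed from
    # the wall as built, not from the wall as drawn.
    target_z = built_height + measured
    built_height = target_z
    print(f"course {course}: lay at z = {target_z:.4f} m")

# A blind script would place the tenth course at exactly 0.700 m regardless;
# the sensing loop absorbs tolerances instead of letting them compound.
```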

Modern design is an authorial, allographic, notational tool of control based on prediction. Its traditional modern vehicle, the engineering blueprint, is based on the assumption that humans (the workers on the receiving end) will do as told, and that materials (all the physical parts to be put together) will behave as expected. To that end, industrial workers were laboriously trained to forsake whatever intelligent skills they may have possessed, and industrial materials were laboriously standardized over time, to make them compliant with mathematical models (and not the other way around): steel is a case in point. Stones, as found in nature, are all different from one another, so modern engineers had to process them to make them bland and tame, homogeneous, predictable, and calculable—thus creating an artificial stone, which we call concrete. Ditto for timber: engineers cannot work with timber as found, because each log is different when chopped off a felled tree; for that reason, the timber we use in building is "engineered" and served in standard, heavily processed formats—as plywood, particle board, laminated timber, etc.

Today, however, intelligent robots tasked with building a wall in an open field could in theory scan the horizon, pick and choose from any random boulder in sight, analyze, combine, and assemble those boulders as found so as to minimize waste, and pack them together in a dry-stone wall without any need for infill or mortar. Likewise, intelligent robots could theoretically compose with the random shapes and structural irregularities of natural timber—fitting together each log as found, or almost, without having to bring it to a faraway plant, slice it or reduce it to pulp, then mix it with glue and other chemicals to convert it into factory-made boards with standard measurements and tested structural performance. Before the rise of modern engineering, living as they did in a world of physiocratic penury where manufacturing and building were at the mercy of local supplies of materials and labor, pre-industrial artisans never had a choice: they had to make do with whatever they found on site. Today we can use computation and robotic labor to reenact at least some of this ancestral artisan economy (and its inherent, circular sustainability): Achim Menges's team in Stuttgart, in particular, has already offered eloquent evidence of that; but it is noteworthy that some recent buildings by the celebrated Japanese architect Kengo Kuma show evident formal affinities with the "discrete" computational work discussed here—without any reference to computational theory, and driven exclusively by Kuma's creative reinterpretation of traditional craft.
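
The matching problem this implies can be sketched, very reductively, in a few lines. The toy example below is a hypothetical illustration (real research systems work from 3D scans, not single volume figures): it greedily assigns found stones to the gaps they best fill, minimizing the material to be dressed off.

```python
import random

random.seed(1)
stones = [random.uniform(0.01, 0.05) for _ in range(200)]  # m^3, as found
gaps = [random.uniform(0.01, 0.05) for _ in range(50)]     # m^3, to be filled

waste = 0.0
filled = 0
for gap in gaps:
    # Greedy rule: use the smallest available stone that still fills the gap,
    # keeping larger stones in reserve for larger gaps.
    candidates = [s for s in stones if s >= gap]
    if not candidates:
        continue
    best = min(candidates)
    stones.remove(best)
    waste += best - gap
    filled += 1

print(f"{filled}/{len(gaps)} gaps filled; overfill to dress off: {waste:.4f} m^3")
```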

In recent times other schools of computational design have preferred to focus on, and emphasize, the combinatory and modular features inherent in the technical logic of robotic assembly. Bricks have been made to measure for human manipulation since the beginning of time. Today, more powerful robotic arms could easily deal with bigger and heavier chunks—so long as all the chunks are identical, or almost, as different chunks may need different grips and handling tools (i.e., physical alterations to the "body" of the robotic machinery). The assembly of standardized modular chunks (mostly made of industrial-grade, processed timber) has become the visual hallmark of the so-called computational brutalists (see in particular the work of Gilles Retsin and, with different premises, of Daniel Kohler; similar technical and logistic considerations also underpin work produced at SCI-Arc, USC, MIT, and elsewhere). The avowed predilection of some computational brutalists for the mass-production of a limited number of conspicuous—almost obnoxious—prefabricated components (or chunks) may appear quirky, even arbitrary, in this context, unless one admits that the main reason for this technical choice may not be technical.

For reasons too long to explain, and perhaps inexplicable, the curvy smoothness that characterized end-of-century digital design is today often seen as the architectural style of choice for neoliberals, neoconservatives, and free-marketeers of all denominations around the world. This symbolism is now so pervasive that many on the opposite side of the ideological divide have ended up, often unintentionally, championing stuff that looks exactly the opposite. In eighteenth-century art theory, the opposite of "smooth" was "rough."[2] Today, the consensus among the design community seems to be that the opposite of architectural smoothness is architectural chunkiness. By definition, symbols do not have to be rational. Dissenters of all ilks have a long history of allegiance to all expressions and representations of aggregation, dissonance, and disjointedness in the arts. In the same tradition, the architectural chunk is a de facto rallying cry of today's activist left.

More controversially, similar aesthetic and political choices often team up today with some form of nostalgia for the entire technical system of mechanical modernity, and for its main avatar, the industrial factory.[3] Many who regret the rise of robotic manufacturing have never seen an industrial assembly line—let alone worked at one, evidently. But today's non-standard robots—as redefined and reinvented by architects and designers—will not automate the moving assembly line: they will eliminate it; they will not replace the industrial worker: they will create the automated version of a pre-industrial artisan. One hundred years ago Le Corbusier thought that building sites should become factories; by a curious reversal of roles, today's robotic revolution promises to turn the post-industrial factory into something very similar to a pre-industrial building site. The intelligent, adaptive, "agile" robots now being developed by the design community are likely the future of manufacturing, but the social and economic import of this technical revolution—unleashed, almost accidentally, by research in computational design and architectural automation—far transcends the ambit of our discipline, and raises questions of a more general nature.

[1] Frederick Winslow Taylor, The Principles of Scientific Management (New York and London: Harper and Brothers, 1911), 24, 34, 36. The first moving assembly line for the mass-production of an entire automobile was inaugurated at the Ford plant of Highland Park (MI) in the fall of 1913.

[2] See in particular William Gilpin, Three Essays: On Picturesque Beauty […] (London: Blamire, 1792), 26.

[3] As an anecdotal example, the UK radical organization Novara Media, noted among other things for its vocal support of Jeremy Corbyn and for the pro-Brexit position of some of its leaders, was named after the northern Italian industrial town where Elio Petri shot the movie The Working Class Goes to Heaven (1971). The movie was a parody of the daily life of the Italian industrial proletariat of the time, but it was shot in a real factory, showing the actual machinery, tools, and assembly lines then in use for the production of electro-mechanical elevators.
