The Age of Computational Brutalism

2019

Publication History:

“The Age of Computational Brutalism,” Introduction, in Robotic Building. Architecture in the Age of Automation, edited by M. Claypool, M. Jimenez Garcia, G. Retsin, V. Soler, 8-9. Munich: Detail, 2019.

The text posted here may differ from the published version. Please cite only from the copy in print.

 

To all who, like me, became architects in the last quarter of the 20th century, modernism was the defining moment in the history of world architecture: the one point in time when the design professions rose to the task of shaping the totality of the built environment, in full awareness of the technical and economic constraints, of the political nature, and—a bit later—also of the natural limits, of design choices.  Not surprisingly, for that very reason some saw modernism as the zenith, others as the nadir, of design history.  Yet, in purely conceptual terms, the main achievement of the masters of modernist design was in fact a fairly modest one: they just collectively decided—and none too soon—that architecture should come to terms with the technical logic of mechanical mass production.  That, in turn, was never rocket science; now, as then, everyone can find its basics explained in any first-year economics primer.  Most mechanical technologies of mass production are matrix-based: they use mechanical matrixes (molds, casts, dies, or stamps) to print out identical copies.  As matrixes cost money, it makes sense to use them many times over, amortizing their cost by spreading it over as many identical copies as possible.  This is how the reproduction of standardized, identical copies generates economies of scale: the more identical copies we make, the cheaper each copy will be.

Today we know that digital fabrication technologies no longer work that way: insofar as most digital production tools, whether subtractive or additive, are not matrix-based, the digital reproduction of identical copies does not generate any economies of scale: there is no need to reuse a mechanical matrix when there is none to begin with.  When digitally made, every copy is a new original, and making more of the same will not make any of them cheaper.  This is what we call mass-customization: one of the most revolutionary ideas that architects ever came up with; one that will change—and has already deeply changed—the world in which we live.
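The arithmetic of the last two paragraphs can be spelled out in a worked (and deliberately simplified) equation, with symbols of my own choosing: if a mechanical matrix costs a fixed sum $F$, and each copy printed from it costs a marginal sum $c$, then the unit cost of a run of $n$ identical copies is

$$u_{\mathrm{mechanical}}(n) = \frac{F}{n} + c,$$

which falls as $n$ grows: that is the economy of scale.  With digital fabrication there is no matrix to amortize, hence $F = 0$ and

$$u_{\mathrm{digital}}(n) = c,$$

a flat cost curve: each copy, identical or not, costs the same, and making more of the same makes nothing cheaper.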

And that’s already history—a history that started almost one generation ago.  Above and beyond that, however, for the last ten years or so a new crop of digitally intelligent designers could not fail to notice that more and more powerful computers are increasingly taking on a number of “intelligent” problem-solving tasks.  Whereas in the 90s computers were primarily seen as tools for making (tools for making architectural drawings, and tools for making physical objects), today computers are again being hailed as tools for thinking—in a sense, going back to what computers were meant to be when they were invented, and were called “electronic brains” (and not “electronic hands,” for example: that came later).  Indeed, Artificial Intelligence was already a very hot topic from 1956 (when the term was coined) to the late 1970s, when both the term and the research for which it stood were forsaken and ditched.  Unlike its hapless predecessor, however, today’s AI often seems to work, sometimes even surprisingly well, and in architectural design this revival of the computer as a thinking machine has been the springboard for what we now call the second digital turn, based on discretization, excessive resolution, and particlization—the visual display of logics of notation, calculation, and fabrication beyond the scope and compass of human understanding.

This important, ground-breaking book is about the merger of these new computational tools for making and for thinking in a new architectural paradigm, which the authors call automation.  The term is felicitous and aptly chosen.  First-generation tools for digital manufacturing were mostly subtractive, and second-generation tools mostly additive.  Today, the most versatile tool for computer-driven making is the robot, generally seen as a general-purpose device for carrying out physical tasks, hence the ideal bodily extension, so to speak, for an electronic mind that is now capable of making independent, intelligent choices.  However, as nobody has to date produced any workable definition, let alone taxonomy, of either robotics or Artificial Intelligence, the authors of this book have very reasonably chosen to avoid sci-fi diversions and ontological traps, keeping instead to the oldie-but-goodie notion of automation.  A cuckoo clock can keep the time automatically (intellectual problem-solving independent of human intervention: that’s AI), and equally automatically perform a set of mechanical operations in physical space as a means to an assigned end (in this instance, telling the time: that’s robotics).  But computational automation of making and computational automation of thinking can now be synced, as never before.  This is the conceptual novelty highlighted and advocated in this book, and shown by many of the cases presented here.

And this is how it works.  When architects first started experimenting with digital mass-customization, in the late 90s, they had to fit that novel idea to some contingent but drastic technical limits on the materials they could use, and on the sizes of the pieces they could produce.  In Greg Lynn’s seminal Alessi Teapots, a non-standard series of 99 variations of the same parametric model, each of the 99 teapots looked like, and was meant to be seen as, a monolith—a single metal block: that was not the case, but it was nonetheless the idea the series was meant to convey, and this is how each teapot would have been fabricated, if titanium sheets could have been milled, 3D printed, or extruded.  However, when trying to apply the same logic to a non-standard series of actual buildings (the Embryologic Houses, circa 1998-2000), Lynn was the first to point out that those fabrication processes would not easily scale up.  The shell of a parametric, non-standard house could be made of a vast number of digitally mass-customized, non-standard panels, but these would in turn have to be fastened to a structural frame, and to each other; the frame itself, due to its irregular form, would have to be laboriously hand-crafted, and so would each fastening nut and bolt and junction and joint between the parts.  In short, twenty years ago Lynn’s non-standard house would have ended up being more hand-made than digitally fabricated.

The transfer of non-standard technologies from the small scale of product fabrication to the large scale of building and construction remains to this day a major design issue.  Some digital makers have tried to duck the issue by enlarging the size of their printers (examples of these technologies are featured in this book); but as printing machines tend to be bigger than the objects they print, this has resulted in a number of quixotic experiments with rather unwieldy hardware.  Today, the versatility of robotic machinery leveraged by the brute force of big data computation suggests a different approach: instead of printing bigger and bigger monoliths, it may conceivably be easier to start with any number of parts, as many and as small as needed, leaving to AI and robots the task of sorting them out and putting them together.  Today, thanks to AI, an almost unlimited number of different (or identical) parts can be individually notated, calculated, and fabricated; and, also thanks to AI, an almost unlimited number of different (or identical) robotic gestures and movements can be scripted and executed for their assembly.  The technical logic of digital mass-customization (the mass-production of individual variations at the same unit cost as mass-produced identical ones) is perfectly mirrored by the technical logic of AI-driven robotic assembly: identically repeated robotic gestures cost the same as variable ones; robotic operations of similar amplitude or duration will cost the same whether the actual robotic movements are the same or not.
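The same mirror-image logic can be restated as a worked equation (again, the notation is mine, not the authors’): if the cost of a robotic gesture $g$ depends only on its amplitude $a_g$ and duration $d_g$, say $c(g) = f(a_g, d_g)$, then an assembly sequence of $n$ gestures costs

$$C = \sum_{i=1}^{n} c(g_i) = \sum_{i=1}^{n} f(a_{g_i}, d_{g_i}),$$

a sum that does not change whether the $g_i$ are all identical or all different: repetition earns no discount, and variation carries no premium, just as in the digital fabrication of parts above.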

In fact, as both fabrication and assembly can be standardized (making identical copies, or identical gestures, respectively) or non-standard (making variable copies, or variable gestures, respectively), the diligent reader is invited to compile the mathematical matrix of the four resulting modes of combined production and assembly—of which one, the standard assembly of non-standard parts, appears improbable; the remaining three describe most of the examples discussed in this book.  Should an even more diligent reader feel inclined to build a bigger matrix, one may add that standard fabrication, non-standard fabrication, standard assembly, and non-standard assembly can all be delivered by dint of manual, mechanical, and digital processes (at least in theory, with the exception of non-standard production and assembly, which can only be achieved manually or digitally).  This bigger matrix could in turn apply to the quasi-totality of architectural history.
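For readers who would rather see the two matrices compiled than compile them, here is a minimal sketch in Python; the mode and process labels follow the paragraph above, while the feasibility rules (the improbability of the standard assembly of non-standard parts, and the exclusion of mechanical means for anything non-standard) are my reading of it, not code from the book.

```python
from itertools import product

MODES = ("standard", "non-standard")           # identical vs. variable copies or gestures
PROCESSES = ("manual", "mechanical", "digital")

# The small matrix: four modes of combined fabrication and assembly.
for fab, asm in product(MODES, MODES):
    note = "  (improbable)" if (fab, asm) == ("non-standard", "standard") else ""
    print(f"{fab} fabrication + {asm} assembly{note}")

# The bigger matrix: each mode crossed with the process that delivers it.
# Non-standard work is excluded from mechanical (matrix-based) delivery.
for step, mode, proc in product(("fabrication", "assembly"), MODES, PROCESSES):
    feasible = not (mode == "non-standard" and proc == "mechanical")
    print(f"{mode} {step}, {proc}: {'possible' if feasible else 'excluded'}")
```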

Given this bigger picture, the authors’ noted and distinctive predilection for the non-standard assembly of standardized parts (which, at the time of writing, is the hallmark of the Bartlett school of computational brutalism) may appear quirky and arbitrary.  Assuredly, if new technologies today exist that allow us to mass-customize any assembly of parts, why should similar—indeed, often older and more established—technologies not be used to mass-customize the parts themselves?  As the authors carefully explain, the reasons for this choice are mostly not technical.  This is a design book, and there is more to design than technical optimization.  Let the alert reader keep reading, and find out the key to this conundrum.  One hint: computer scientists largely agree that the novel, often truly astounding technical performance of today’s computational tools is the main reason for many recent breakthroughs in AI.  Yet there is more to today’s dataism than a linear accretion of information and processing power; what some computer scientists call, somewhat dismissively, brute-force computing is in fact a new set of problem-solving methods (hence a new kind of science) triggered by today’s unprecedented data affluence and computing speed.  In architectural history, however, the term brutalism has a quite different lineage, and harks back to a different set of meanings and values.  Considering where and when this is happening, and what is at stake for the present and future of the world in which we live, and of the ideas in which we still believe, the rise of a new school of computational brutalism in design, conspicuously drawing from both traditions, is not coincidental.

 

London, March 2019

 

  
