A Very Short History of the Digital Turn in Architecture


Publication History:

“Storia brevissima, ma si spera veridica, della svolta numerica in architettura.” Casabella 914, 10 (2020): 28-35 (English transl., 99-100: “A Very Short, but Hopefully Believable History of the Digital Turn in Architecture.”)

The text posted here is an earlier draft and may differ from the published version. Please cite only from the copy in print.


The number-based (or digital) logic of today’s electronic machines is replacing the analog technique of the many mechanical and electro-mechanical devices that formed the basis of the industrial revolution and helped forge many technical and social paradigms of the modern age. Certain superficial aspects of the digital revolution in architecture are immediately evident, but in a more strictly professional sphere, too, many techniques of design, notation, calculation, and construction have been deeply affected, with results that are already visible in the built environment and are now starting to become the focus of the first critical and historiographical assessments.

The first computers in the modern sense of the term (as imagined by Alan Turing in 1936) were built during World War II. The famous ENIAC began operation in 1946: it weighed 26 tons and covered an area of 127 square meters in the building of the School of Electrical Engineering of the University of Pennsylvania (Philadelphia). It performed the four arithmetical operations, but in any programmable sequence – and this was the novelty. Computers (known in Italy at the time as “electronic brains”) became smaller, but not necessarily more powerful, after the invention of transistors in the 1950s. Olivetti played a leading role in the development of the first computers for commercial use; these programs were cut short by the sudden deaths of Adriano Olivetti (1960) and of the engineer Mario Tchou (1961). In the end, the first mainframe computer to reach a public of small and medium companies was not the Olivetti Elea but the IBM 360, launched with great fanfare in New York on 7 April 1964. Some years later (towards 1968) the most advanced IBM 360 models reached a RAM capacity roughly 1/250 of what any cheap smartphone holds today (in other words, some 250 IBM 360s would theoretically be needed to match the performance of one of our cell phones). Even the most powerful military and industrial computers of the time had insufficient processing power for the numerical treatment of complex images, drawings or photographs; in 1963 the futuristic Sketchpad of Ivan Sutherland, at MIT, only permitted the laborious manipulation of elementary geometric diagrams. Computers of this kind would have been good for practically nothing in an architecture studio. Instead, certain general ideas derived from the science of the first computers had a singular influence on architecture.

Cedric Price was famously seduced by the cybernetic theories of Norbert Wiener: at its origins, Wiener’s cybernetics (1948) was a general theory of feedback and interaction between people and machines; Price derived from it the quirky idea that an intelligent building should be capable of reorganizing itself, dismantling and rebuilding itself ad libitum atque ad infinitum, based on the use and preferences of its inhabitants, through a system of mechanical movements driven by an electronic brain. Various versions of this idea run through all of Cedric Price’s work; his Fun Palace (1963–67), with mobile walls and ceilings and automatically reconfigurable spaces, influenced the Centre Pompidou of Piano and Rogers, where the only moving parts, however, were the elevators and the monumental external escalator.

At MIT towards the end of the 1960s the very young Nicholas Negroponte, on the other hand, was drawing on the early theories of artificial intelligence of John McCarthy, Marvin Minsky and others (1956–61) to invent a computer program (URBAN5) that would quite simply have replaced the architect. Through an automatic system of questions and answers, the program would interact with the client, drawing on the screen the diagram of a made-to-measure single-family house. The examples shown by Negroponte in his famous The Architecture Machine (1970) were very simple; even so, the system never worked, and its failure is in itself indicative of the technological ambitions of the time, which vastly exceeded the performance of the technical means then available. In the 1970s the failure of the cybernetic dreams and of the first projects of artificial intelligence became clear, and not just in architecture; research funding dwindled and many initiatives were abandoned. In technical history this period of disenchantment is known as the “AI winter.” In those same years, with the rise of the architectural culture of the postmodern, the entire high-tech panoply of the 1960s and early 1970s (artificial intelligence, cybernetics, electronic brains…) simply vanished from architectural culture and from pedagogy in architecture schools all over the world – a sudden and drastic oblivion.

But while architects were looking elsewhere, another revolution was building – a revolution no one had foreseen. The technologists of the 1960s envisioned a future made of bigger and more powerful computers. Instead, the future came from a new generation of smaller and far less powerful machines – personal computers, or PCs – that from the start of the 1980s put very limited processing power within the reach of a vast audience: machines that at the outset could do almost nothing, but made that almost nothing available to almost anyone. Thanks to progress in the microchip industry, these machines soon gained access to very simple Computer Aided Design (CAD) and computer graphics programs: the IBM PC, with an operating system by Microsoft (MS-DOS), was launched in 1981; the first CAD program by Autodesk followed in 1982, and PostScript, Adobe’s page-description language for laser printers, came in 1984. At the end of the 1980s many architecture schools in Europe, the United States and Canada offered introductory courses in Computer Aided Design, but unlike the big (unfulfilled) dreams of the 1960s, the CAD programs of the early 1990s were not intended to make major design choices, nor did they set out to revolutionize the technical functioning of buildings; they were plain but effective drawing tools, used to file and edit plans, elevations and cross sections in a new electronic format. As one of my colleagues at a school of architecture I shall not mention said, towards 1990–91: “CAD means: Cheaper And faster Drawing.”

Others had less prosaic ideas. In the early 1990s Bernard Tschumi, the new director of the school of architecture of Columbia University in New York, created the seminal Paperless Studio, which soon became a laboratory of architectural ideas driven by digital technologies of ideation and fabrication; in 1993 one of the young assistants at the Paperless Studio, Greg Lynn, published the first manifesto of the new digital avant-garde: an issue of AD (Architectural Design) titled Folding in Architecture. Peter Eisenman largely mentored the operation, and various recent works of his were documented in the volume; the “fold” (le pli) in the title, in turn, referred to a book by Gilles Deleuze (Le pli. Leibniz et le baroque, 1988) which contained several pages of architectural theory attributed by Deleuze to his brilliant student, the architect and polymath Bernard Cache. The “fold” of Deleuze was, in effect, the point of inflection of a continuous function – the transition from convexity to concavity in a function of double curvature. Gilles Deleuze attributed vast and profound philosophical, ontological and aesthetic meanings to this figure of differential calculus – otherwise well known to all high school students.

At the same time, new CAD programs (starting with Form Z, 1991) began to include graphic interfaces for the intuitive manipulation, directly on the screen, of an aerodynamic type of curve known in English as “splines” or NURBS, and in France as Bézier curves, after the French engineer Pierre Bézier, who developed them between 1958 and 1966 while employed by the (then state-owned) automaker Renault. Bézier may have been the first to discover the mathematical notation of the aerodynamic curves used to minimize friction with water or air in the construction of transport vehicles (ship hulls, fuselages, wings and rudders of airplanes, automotive bodywork). The CAD programs of the early 1990s transformed this complicated mathematics into a sort of video game, which largely explains the rising popularity among designers in those years of smooth, continuous curves, like those used for aerodynamic vehicles (streamlining). The first practical use in architecture of the CATIA software created by Dassault Aviation to optimize the aerodynamics of military airplane wings was the construction of a big fish installed in a prominent position on the central beach of Barcelona (1992). Frank Gehry had observed that the sinuous, smooth and continuous curves of a fish are aerodynamic (or rather hydrodynamic) for the same reasons that the keel of a ship or the wings of a plane are – namely, so they can move more easily, with less friction, in water or in air respectively; and he discovered that Dassault had software which, with a bit of effort, could be adapted to the construction of a metal fish. Since then, Frank Gehry has become the worldwide specialist of architectural streamlining (and of the use of the CATIA software in architecture).
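As an illustrative aside, the mathematics that those CAD interfaces turned into “a sort of video game” is, at its core, quite compact. A minimal sketch, not drawn from any particular CAD program, of how a cubic Bézier curve can be evaluated with De Casteljau’s recursive interpolation (the control points and function name here are hypothetical examples):

```python
# Minimal sketch: evaluating a Bézier curve with De Casteljau's
# algorithm, the repeated-linear-interpolation scheme behind the
# "spline" tools of 1990s CAD programs. Names and control points
# are illustrative, not taken from any real CAD package.

def de_casteljau(points, t):
    """Return the point at parameter t (between 0 and 1) on the
    Bézier curve defined by the given control points."""
    pts = [tuple(p) for p in points]
    # Repeatedly interpolate between consecutive points until one remains.
    while len(pts) > 1:
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
            for p0, p1 in zip(pts, pts[1:])
        ]
    return pts[0]

# A cubic curve: four control points in the plane.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]

print(de_casteljau(ctrl, 0.0))  # coincides with the first control point
print(de_casteljau(ctrl, 1.0))  # coincides with the last control point
print(de_casteljau(ctrl, 0.5))  # a smooth interior point of the curve
```

Dragging a control point on screen simply changes one entry of `ctrl`; the whole curve then re-flows smoothly, which is exactly the intuitive manipulation the text describes.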
In 1996, Greg Lynn introduced the term “blob” to indicate the new style of digital aerodynamics; more recently, Patrik Schumacher has preferred the term “parametric” (a term otherwise familiar in Italian for its use by Luigi Moretti, many years earlier and without any reference to electronics); in the current lexicon of digital architecture, “parametricism” at times means the specific style of digital aerodynamics, and at times the use of numerical techniques in architecture in general.

From the end of the 1990s, and to some extent until the present, the style of digital streamlining has often been seen as the external and visible sign of the digital turn in architecture – the eloquent image of a new way of building that until a few years earlier, without digital techniques, would have been practically impossible (or possible only at high cost and with enormous effort). Digital techniques have made aerodynamics available to everyone; it matters little that for most ordinary architecture, aerodynamic design is not required and serves no real purpose. The popularity of the parametric style (or the style of digital streamlining) has partially overshadowed the scope of the technical, ideological and cultural change brought about by the new tools of numerical design and fabrication.

As the first pioneers of the digital turn pointed out from the outset, a numerical or parametric notation designates theoretically unlimited families (or series) of similar objects, which differ from one another as each parameter varies. Given that most digital fabrication techniques do not use mechanical matrices, this new generic object (which Gilles Deleuze and Bernard Cache had called an “objectile”) can be produced in series of identical copies or of differentiated variations (within preset parametric limits), indifferently, and at the same unit cost. In serial production of this type, also known as non-standard series production, the marginal cost is constant, hence there are no economies of scale. In other words, if the cost of a product is X, it will remain X even if the (numerical) factory reproduces the same product an unlimited number of times; reciprocally, variations between items within the same parametric series imply no additional costs. In the digital world, standardization, which in the mechanical world reduces production costs, reduces nothing – apart from the choices available to the designer. This revolutionary novelty, today also known as digital mass-customization, overturns all the tenets of the industrial and modern world, and is intrinsic to all techniques of digital design and fabrication, regardless of form and style.
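The logic of the objectile can be sketched in a few lines of code. What follows is a purely hypothetical illustration (all names, parameters and limits are invented for the example): one generic definition, from which identical copies and differentiated variations issue by the same procedure, with no distinction between the two:

```python
# Illustrative sketch of an "objectile": one parametric definition
# designating a whole family of similar objects. All names and
# parameter ranges are hypothetical, invented for this example.

import random

def objectile(width, height):
    """One generic definition; each call with different parameters
    yields a different member of the same family."""
    # Enforce the preset parametric limits of the series.
    if not (1.0 <= width <= 3.0 and 2.0 <= height <= 4.0):
        raise ValueError("parameters outside the preset limits")
    return {"width": width, "height": height, "area": width * height}

# Identical copies and differentiated variations come out of the
# same procedure: the "factory" makes no distinction between them,
# which is why neither repetition nor variation changes the unit cost.
rng = random.Random(0)  # seeded for reproducibility
identical = [objectile(2.0, 3.0) for _ in range(3)]
varied = [objectile(rng.uniform(1.0, 3.0), rng.uniform(2.0, 4.0))
          for _ in range(3)]
print(identical)
print(varied)
```

Producing three identical items or three different ones takes exactly the same call, one per item: in this toy model, as in non-standard series production, repetition yields no economy of scale.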

In effect, starting from the first decade of the new century a new generation of architects interested in new technologies has rejected the technical and formal premises, and also certain socio-political implications, of digital streamlining. Digital codes are discrete by definition, and as Philippe Morel was among the first to note, in many cases there is no reason to conceal or disguise the discrete logic of the mathematical and technical systems on which they rest (Bolivar Chair, 2004). In recent years many designers of the second digital avant-garde have preferred to display, or even to show off, the discrete volumetric units used in finite-element structural calculation, or produced by a new generation of 3D printers. These uniform volumetric elements, known as “voxels” by analogy with the “pixels” of digital images, have given rise to a new style called “voxellated,” which is the precise opposite of the smooth and continuous style of the aerodynamic tradition.

Furthermore, the growth of the processing power of new computers, at increasingly affordable costs, has more recently led to the introduction of a new conceptual framework, known as “big data” – a new computational universe where the sheer quantity of information, and the power of the new tools of simulation, make it possible to solve complex problems in the absence of traditional mathematical methods. In architecture the most conspicuous manifestation of the methods known as “big data” is found in objects that display a formal exuberance without precedent (or, to be more precise, without non-artisan precedent). The grottos of Michael Hansmeyer and Benjamin Dillenburger, for example, made with industrial 3D printers, are composed of an astonishing number of voxels, each of which can be seen as an individually designed, calculated, constructed and positioned micro-brick. No craftsman or worker or traditional engineer could work in that way, because the design calculations, working drawings and worksite instructions needed for the purpose would fill thousands of volumes. The disquieting or even hostile aesthetic of these creations thus reflects an already post-human logic that no longer responds to our mental categories and – not surprisingly – has an appearance that transcends our capacity for perception and understanding. Traces of this style, called “excessive resolution,” are also found in recent works by Marjan Colletti, Matias del Campo and Sandra Manninger, Mark Foster Gage, Alisa Andrasek and – with different premises – in the “particlized” style of Kengo Kuma. Jenny Sabin, Claudia Pasquero and Marco Poletto add forceful bio-mimetic inspirations.
But the aesthetic of “excessive resolution” now seems to also be spreading outside the rarified circles of the digital avant-garde, asserting itself as one of the generic stylistic currents of our time, without any direct reference to numerical techniques (see, for example, recent works of Sou Fujimoto).

Of course the production of objects composed of an enormous number of irregular micro-components, though entirely designed, brings the problem of their assembly to the fore – a problem that until recently could not be solved, because the cost of traditional or artisanal assembly would in most cases have been prohibitive, while the industrial robots in use since the 1960s, conceived for the automatic repetition of simple, identical gestures, are not capable of carrying out the irregular, improvised and at times unpredictable operations that are often required even on the most advanced construction projects. This is why in recent years various digital creators have developed a new generation of post-industrial robots, so to speak: intelligent robots able to emulate certain artisanal operations. Pioneers in this field include Fabio Gramazio and Matthias Kohler at the ETH in Zurich, who starting in the early 2000s have demonstrated that traditional industrial robots can be reprogrammed to carry out the automatic laying of bricks in compositions of all kinds, including irregular ones. The group of Achim Menges and Jan Knippers at the University of Stuttgart specializes in the development of “adaptive” or versatile robots, capable of altering robotic operations in response to unexpected circumstances, and thus of working with natural, non-standardized materials (non-industrial wood, in particular) or new composite materials with unpredictable (non-linear) structural performance. The group of Gilles Retsin and Manuel Jiménez García at the Bartlett in London concentrates on the robotic assembly of modular macro-components – a trend that resurfaces in the experiments of José Sanchez at USC (Los Angeles), and elsewhere, producing an aesthetic that sometimes curiously suggests late-mechanical precedents (this trend has also been called “digital neobrutalism”).

This is not the only sign of nostalgia for the glory years of cybernetics and early artificial intelligence – a nostalgia now widespread in architecture schools and in certain circles of the digital avant-garde. Mistakenly, I believe, because that glorious age ended badly, and there is no reason to imagine that the same yarn spinning could achieve better results today than it did half a century ago. It is clear that digital techniques offer extraordinary new tools to today’s architects and designers, who can and should find the best possible uses for them – because if they do not do so, others will. But to imagine that a new generation of computers will be able to entirely replace the creative work of architects (as Negroponte and others thought at the end of the 1960s, and as many are again thinking today) is neither useful nor intellectually interesting. Of course, today’s artificial intelligence has amazing capacities. But even if one of these new “electronic brains” were capable of developing automatic projects (and that does not seem to be an imminent development), I cannot imagine what kind of client would prefer one of those machines to one of us. If only because we continue to cost less – unfortunately.




