The Imitation Machine
Publication History:
“The Imitation Machine.” Opening essay of the section “Artificial” in the Catalogue of Biennale Architettura 2025. Intelligens. Natural. Artificial. Collective, edited by Carlo Ratti, vol. 1, 190–193. Venice: Edizioni La Biennale di Venezia, 2025.
The text posted here is a preprint draft, and it differs significantly from the published version. Please cite only from the copy in print.
Artificial intelligence is not a new technology. The term itself was invented in 1956, and Marvin Minsky’s theory of artificial intelligence as a “general problem solving machine,” first outlined in 1960, still holds good today. Architectural design was one of the first fields to experiment with the new science of automated decision-making; in 1970 Nicholas Negroponte’s Architecture Machine, a design-capable computer system, was meant to enable everyone sitting in front of a computer screen to design a building by answering sequences of multiple-choice questions. The client of the machine, so to speak, was not supposed to have any architectural expertise; the expert was the machine itself. But Negroponte’s machine famously didn’t work, and in the years that followed many concluded that any such application of artificial intelligence was bound to be delusional—to say the least. Norbert Wiener’s Cybernetics (1948), seen as a general theory of feedback and interactivity, had a more pervasive and lasting influence in the arts and in design, but the digital turn of architecture, which started for good only in the early 1990s, was not related to either cybernetics or artificial intelligence. When architects and designers started using cheaper and smaller computers (often “personal” computers) to make architectural drawings, they favored photo-editing and animation software, or spline-modeling programs originally developed by the automobile and aircraft industries to calculate and manufacture streamlined, aerodynamic surfaces. Through the works of Frank Gehry, Zaha Hadid, Thom Mayne, and many others, those computational tools have already changed the history of architecture, for better or worse (that’s a matter of personal judgment). But the CAD-CAM computers of the 1990s were drawing machines and fabrication tools; they were not thinking machines, and nobody in the 1990s saw Wiener’s, Minsky’s, or even Negroponte’s early work as a precedent for digitally intelligent design.
The second coming of Artificial Intelligence—this time capitalized, often abbreviated as AI, and returned to its original remit as a problem-solving machine—is recent, and is due to the extraordinary performance of today’s computational tools, which have in some cases delivered on the visions of artificial intelligence’s early pioneers, and vindicated the foundations of the technical logic they imagined. The resurrection of AI in the visual arts and in design is more recent still, and came with the rise of a machine learning technology known as Generative Adversarial Networks (GANs) in 2014–16, and of text-to-image generative tools, such as DALL-E or Midjourney, starting in the spring of 2022. All these technologies pertain to the more general field of Generative AI, but for obvious reasons architects and digital artists have been primarily captivated, and often mesmerized, by their image-making potential.
The working principles of image-making Generative AI are by now well known, and have been widely commented upon. The foundation of all these systems is a corpus, or collection of models (a “dataset”), which can be custom-made (assembled for a specific project) or vast and generic (like the web-scale corpora on which today’s “Large Language Models,” or LLMs, are trained). This corpus of models is then processed by a machine learning algorithm, and used as a source for the generation of new exemplars that will have something in common with the dataset from which they derive. If the dataset is made of images, the new images thus generated will be similar to all those in the original dataset, but identical to none. Generative AI has therefore successfully managed to automate visual imitation—where imitation, as in classical art theory, means the creative transformation of one or more exemplary models. And just as the methods of classical imitation were never particularly transparent, today’s machinic imitation happens entirely within the black box of a computational algorithm; based on the current state of the art, we can only change the output of any such generations (or generative transformations) by changing the datasets from which they derive.
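The statistical gesture at the core of this process can be reduced to a deliberately simple sketch. The toy example below is not how DALL-E or Midjourney work internally (their neural networks are replaced here by a plain Gaussian fit), and the “designs” are invented numerical feature vectors rather than images; but it rehearses, in miniature, the same logic: estimate a distribution from a corpus of exemplars, then draw new exemplars that resemble every item in the corpus while duplicating none.

```python
# A minimal, illustrative sketch of "learn a dataset, then sample from it".
# All data, features, and numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 precedent "designs", each reduced to four
# numerical features (say, proportions, density, curvature, height).
corpus = rng.normal(loc=[3.0, 1.5, 0.8, 2.2],
                    scale=[0.4, 0.2, 0.1, 0.5],
                    size=(200, 4))

# "Learning" step: estimate the distribution of the corpus
# (a stand-in for training a generative neural network).
mean = corpus.mean(axis=0)
cov = np.cov(corpus, rowvar=False)

# "Generation" step: draw new exemplars from the learned distribution.
new_designs = rng.multivariate_normal(mean, cov, size=5)

# Each output has "something in common with the dataset from which it
# derives": similar to the exemplars as a whole, identical to none.
for design in new_designs:
    nearest = np.min(np.linalg.norm(corpus - design, axis=1))
    print(design.round(2), "distance to nearest precedent:", round(nearest, 3))
```

Note that the only way to change what this little sketch produces is to change the corpus it is fitted to, which is, in essence, the point made above about the black box and its datasets.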
When the technical logic of Generative AI first started to transpire, many wondered why architects would want to borrow someone else’s intelligence—never mind if artificial—to imitate someone else’s work, or even their own. But this is indeed what is happening, as many offices have already trained proprietary AI systems to imitate their distinctive styles. For the time being these pictorial fabrications are used primarily as marketing tools, but even in such limited capacities they are replacing a panoply of more traditional, photorealistic digital renderings (and eliminating all related competences, and jobs).
Beyond these more-or-less embryonic professional applications, however, the use of Generative AI as an image-making tool has already prompted a reassessment of some critical categories and principles of contemporary design theory. By the way it works, Generative AI reminds us that everything is generated, to some extent, out of something that already exists; and that nothing is created out of nothing. Generative AI reminds us that every invention depends on conventions, and that the relation between every new creation and its precedents, or sources, is as inevitable as it is problematic. As a result, discussions on style, and on the processes of creative imitation—which had long been embargoed by modernist ideology—are becoming topical again. As every dataset is de facto a canon based on precedent, tradition, and history, and by its own nature exclusionary, the mode of functioning of Generative AI obliges us to come to terms with the biases, blindness, prejudice, and societal violence inherent in every idea of tradition, and in every invocation of precedent—regardless of the technologies involved.
Some may see all the above as a formalist squabble of little consequence outside of academia, art galleries, and some experimental design practices. But it may also be the harbinger of much to come. The same logic of automated imitation learning—until now mostly limited to image making—could in principle be applied to datasets of all sorts, regardless of visual implications. In fact, this already appears to be happening, with some unexpected consequences.
Unlike most methods of artificial intelligence that preceded it, Generative AI is not a rule-based problem-solving tool. If we ask Generative AI to calculate the result of 10 times 10, it will not perform the arithmetical calculation we learn in primary school; in principle, it will instead sift through a huge array of evidence retrieved from its dataset and report that, over time, a social consensus appears to emerge from most sources suggesting that the result of that operation tends to be equal to 100. In short, Generative AI will answer a mathematical query by conducting an opinion poll based on the retrieval, aggregation, and averaging of available historical precedents. This makes Generative AI eminently unsuited to providing results we could more easily get via traditional quantitative or analytical tools (problems of descriptive geometry, for example, such as tying together plans, elevations, and sections). But at the same time, this makes Generative AI an extraordinary tool for dealing with problems we cannot easily formalize—i.e., processes we cannot translate into rules. And, needless to say, there are plenty of those, and in the design professions more than elsewhere.
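A toy contrast makes the point concrete. The sketch below is only an analogy, and its little “corpus” of statements is invented for illustration; actual language models do not literally retrieve and tally sentences in this way. But it captures the difference between computing an answer by rule and deriving it from the consensus of available precedents.

```python
# Rule-based calculation versus "opinion poll" over precedents: a toy analogy.
from collections import Counter
import re

def rule_based(a: int, b: int) -> int:
    """The primary-school way: apply the rule of arithmetic."""
    return a * b

# Hypothetical corpus of historical statements about 10 times 10.
corpus = [
    "everyone knows that 10 times 10 is 100",
    "the teacher said that 10 times 10 equals 100",
    "a distracted pupil once wrote that 10 times 10 is 110",
    "ten rows of ten bricks make 100 bricks",
    "10 times 10 is 100, as any builder will confirm",
]

def precedent_based(sources: list[str]) -> int:
    """The 'opinion poll' way: aggregate the answers found in precedent."""
    votes = Counter()
    for sentence in sources:
        for number in re.findall(r"\d+", sentence):
            if number != "10":           # ignore the operands themselves
                votes[int(number)] += 1
    answer, _ = votes.most_common(1)[0]  # the emerging consensus
    return answer

print(rule_based(10, 10))        # 100, by calculation
print(precedent_based(corpus))   # 100, by consensus of precedents
```

The second function gives the right answer only because most of its precedents do; feed it a corpus of bad arithmetic and it will confidently report the prevailing error, which is exactly why such a tool is better suited to problems without rules than to problems with them.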
Anthropologists and sociologists have long argued that many artisanal activities are based on a kind of “tacit knowledge” that cannot be learned from books. That may or may not be so, but regardless, an AI system trained, for example, on the recorded observation of a huge number of instances of bricks being laid by an expert bricklayer could in principle learn to lay bricks by imitation alone. Importantly, such a system would never try to spell out the rules of good bricklaying the way a modern engineer would; it would instead learn the art of bricklaying the way apprentice bricklayers have always learned from their masters: by sheer observation and emulation over time. Come to think of it, is this not also the way we all learn one or more mother tongues—imitating what we hear, before we are taught the rules of grammar at school? Once we accept the principle that automated imitation may learn to solve problems by observation alone, many design problems—including some of the most “wicked” problems of urban, architectural, and even structural design—could be seen in a different light. Machinic intuition could, in some cases, replace deductive problem-solving. When calculating machines fail us, we may now have imitation machines. If this is what Generative AI can do (and only time will tell), it is not going to be a foe, or the nemesis of the design professions, as many fear. In fact, it may even turn out to be a very friendly, sympathetic tool of our trade.
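The bricklaying apprentice imagined above can likewise be sketched in code, under loudly stated assumptions: bricklaying is reduced here to a single offset measurement, a nearest-neighbour lookup stands in for a real imitation-learning model, and every number and name is invented for illustration. The point is only that the apprentice is never given a rule; it reproduces what it has observed the master do in the most similar recorded situation.

```python
# Learning by observation rather than by rule: a toy imitation-learning sketch.
# All quantities and the "master's habit" below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

BRICK = 0.25  # hypothetical brick length, in metres

def master_next_offset(offset_below: float) -> float:
    """The master's habit (never written down for the apprentice):
    shift each new brick by half a brick length over the one below."""
    return (offset_below + BRICK / 2) % BRICK

# Recorded observations of the master at work: (situation, action) pairs.
situations = rng.uniform(0, BRICK, size=500)
actions = np.array([master_next_offset(s) + rng.normal(0, 0.002)
                    for s in situations])

def apprentice_next_offset(offset_below: float) -> float:
    """Imitate: do what the master did in the most similar recorded case."""
    nearest = np.argmin(np.abs(situations - offset_below))
    return float(actions[nearest])

# The apprentice was never told the half-brick rule, yet behaves as if it knew it.
for test in (0.0, 0.1, 0.2):
    print(f"situation {test:.2f}: apprentice lays at {apprentice_next_offset(test):.3f}, "
          f"master would lay at {master_next_offset(test):.3f}")
```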
Citation:
Mario Carpo, “The Imitation Machine,” in Carlo Ratti, ed., Catalogue of Biennale Architettura 2025. Intelligens. Natural. Artificial. Collective (Venice: Edizioni La Biennale di Venezia, 2025), vol. 1, 190–193.