Excessive Resolution: Designers meet the second coming of artificial intelligence
“Excessive Resolution. Designers meet the second coming of artificial intelligence.” Architectural Record, 6 (2018): 135-136. Also online at https://www.architecturalrecord.com/articles/13465-excessive-resolution-artificial-intelligence-and-machine-learning-in-architectural-design
The text posted here may differ from the published version. Please cite only from the printed copy.
Designers have been using computer-based tools for design and fabrication for almost a generation. In the course of the last 30 years we have learned that computers can help us draw and build new forms of unprecedented complexity, and we have also discovered that, using CAD-CAM technologies, we can mass-produce variations at no extra cost: that is already history—the history of the first digital turn in architecture. Today, however, more and more powerful computational tools can do way more than that. Computers, oddly, seem now capable of solving some design problems on their own—sometimes solving problems we could not solve in any other way. Twenty years ago we thought computers were machines for making; today we find out they are even more indispensable as machines for thinking. That’s one reason why many, including in the design professions, are now so excited about Artificial Intelligence, or AI. The term itself, however, is far from new: it was already popular in the 1950s and 60s, when computer scientists thought that Artificial Intelligence should imitate the logic of the human mind—that computers should “think” in the same way we do. Today, on the contrary, it is increasingly evident that computers can solve some hitherto impervious categories of problems precisely because they follow their own, quite special logic: a logic that is different from ours. And it also already appears that this new, post-human (or, simply, non-human) logic vastly outsmarts ours in many cases.
The main difference between the way we think, and the way computers solve problems, is that our own brain was never hard-wired for big data. When we have to deal with too many facts and figures, we must inevitably drop some—or compress them into shorter notations we can more easily work with. Most of classical science was a means to that end. Geometry and mathematics—calculus in particular—are stupendous data compression technologies. They allow us to forget too many details we could never remember anyway, so we can focus on the essentials. Sorting is another trick of our trade. As we could never find one name in a random list of one million, we invest a lot of work in sorting that list before we use it: if the names are ordered alphabetically, for example, as in a telephone directory, we can aim directly at the name we are looking for without having to read all the names in the list—which would take forever. Yet, that’s exactly what computers do: as they can scan any huge sequence of letters and numbers from start to finish in almost no time when they look for any name, they do not need to keep anything sorted in any particular order. Take alphabetic sorting as a metaphor for the way we think in general: we put things in certain places so we know where they are when we need them; we also sort things and ideas to make some sense of the world. But computers need none of that: unlike us, they can search without sorting; and computers are not in the business of investigating the meaning of life, either.
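The contrast can be made concrete with a few lines of code—a minimal sketch in Python, not from the essay, with a made-up directory of names: the machine finds one name among a million, in no particular order, simply by reading them all, start to finish, in a fraction of a second.

```python
import random

# A hypothetical "telephone directory" of one million unique names,
# deliberately shuffled so there is no alphabetical order to exploit.
directory = [f"name{i:07d}" for i in range(1_000_000)]
random.Random(0).shuffle(directory)

def linear_search(names, wanted):
    """Scan the whole list from start to finish -- no sorting, no index."""
    for i, name in enumerate(names):
        if name == wanted:
            return i
    return -1  # not found

# A human reader would need the list alphabetized first;
# the machine just reads all million entries.
found = linear_search(directory, "name0742315")
assert directory[found] == "name0742315"
```

The alphabetized alternative (a binary search on a sorted list, as in Python's `bisect` module) is the human strategy: it pays off only because we cannot read a million entries. For the machine, the brute-force scan is already fast enough.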
Just as we could not easily deal with a random list of one million names when we look for one in particular, we could not easily work with a random heap of one million different bricks when we need them to build a house. In that case too, our natural aversion to big data (or, to data too big to manage) drives us to some drastic simplifications: first, we standardize the bricks, so we can assume they are all the same. Then, we lay them in regular rows, and we arrange all rows within simple geometric figures—most of the time, rectangles or circles drawn in plans, elevations, and sections. Thus we can forget about the physical shape and material properties of each individual brick, and we can design entire buildings by composing simpler and cleaner outlines of bigger and supposedly uniform surfaces and volumes. An individual craftsman with no blueprint to follow and no accounts to render could deal with each brick (or stone or wooden beam) on the fly and on the whim of the moment, following his talent, intuition, or inspiration—that’s the way many pre-modern buildings were built. But no modern engineer or contractor would dream of notating each brick one by one, as that would take forever, and the construction documents would be as big as the Encyclopaedia Britannica in print. Yet, once again, this is what computers do. Today, we can notate, calculate, and fabricate each individual brick or block of a building—one by one, to the most minute particle. If the particles are small, they are 3D printed on site. If they are bigger, they are assembled by robotic arms. That procedure is exactly the same, and takes the same time, regardless of the regularity of the components, their number, size, and layout: never mind if the blocks are all the same, or all different; never mind if they are all aligned, or not. Computation at that scale today already costs very little—and it will cost less and less.
The advantages of the process are evident: micro-designing each minute particle of a building, to the smallest scale available, can save plenty of building material, energy, labor, money, and can deliver buildings that are better fit to specs. Not surprisingly, buildings designed and built that way may also look somewhat unusual. And rightly so, as the astounding degree of resolution they show is the outward and visible form of an inner, invisible logic at play that is no longer the logic of our mind. Perhaps human workers could still work that way—given unlimited time and money. But no human mind could think that way, because no human mind could take in, and take on, that much information. Big data is what humans don’t like, but computers do well. And there is no point in asking whether we like that way of working or not, either—as that is none of our business any longer. Each to its trade: let’s devolve to machines what we are not good at doing—and keep for us what machines cannot do, which is plenty.
Machines search—big data is for them. We sort: compressing data (and losing or disregarding some in the process) is for us. With comparison, selection, formalization, generalization, and abstraction, come choice, meaning, value, and ideology—but also argument and dialogue. Regardless of any metaphysical implications, no machine learning system can optimize all parameters of a design process at the same time; that choice is still the designer’s. Fears of the competition coming from Artificial Intelligence today may be as misleading as the fear of the competition coming from industrial mass-production was one hundred years ago. But, just as coping with the mechanical way of making was the challenge of industrial design in the twentieth century, coping with the computer’s way of thinking is going to be the challenge of post-industrial design in the twenty-first century. Architects (and, more ominously, societies as a whole) were slow in coming to terms with the industrial revolution—architects in particular fought a rearguard action against it throughout most of the 19th century. We may still choose to follow that (ill-fated) example. I hope we won’t.