On Cats, Dunces, and the Uncannily Human Talents of Generative AI

2025

Publication History:

“On Cats, Dunces, and the Uncannily Human Talents of Generative AI.” A+U 654 (March 2025): 56-59 (in English and Japanese).

The text posted here is a preprint draft and differs significantly from the published version. Please cite only from the copy in print.

 When I was in high school, my parents adopted a stray cat. We don’t know if he already had a name when he ended up in our garden, from thence made his way to my mother’s larder, liked the fare, and decided to stay. At some point, somehow, we just started to call him Toto – a common Italian name. Now, by normal feline standards, Toto was remarkably dumb. Don’t misunderstand me; he was a jolly good fellow, friendly and forthcoming. Everyone liked him. But he was definitely not inclined to abstract thinking. Famously, being fond of heat, as all reasonable cats are, he always tried to get as close as possible to open fires; and sure enough, the first time he burnt his whiskers by getting too close to a gas burner on a stovetop, he learned the lesson and kept thenceforward at a safe distance from said gas burner. But only from that one. When during holidays he traveled with us to other houses, he invariably burned his whiskers again by getting too close to other gas burners. He had memory all right – he could remember each individual stovetop and the dangers it represented. But he could not see that all gas burners, despite their differences, would similarly burn his whiskers. He could not infer general statements from his recorded experiences. Like most cats, he lived in a world of hic et nunc, phenomenological haecceitas – a world of individual events, not of general ideas.

 Unlike Toto, and unlike, I believe, most cats, humans tend to compare the results of many experiments, and to extract and abstract from them some general traits that these events have in common. These common traits are then spelled out as formal rules, which we use to predict similar events before they happen. This is the inductive method of modern science, which lets us predict the velocity of a body falling under gravity using a formula where the height of the fall is factored in, but the color of the falling body, for example, or the date of birth of the person performing the experiment, are not. One advantage of formal rules is that they can be taught, so one can use them without having to repeat all the experiments that led to them – a remarkable shortcut, which allows us to profit from other people’s learning, and one of the benefits of schooling. That’s a benefit that cats don’t have. There are no schools for cats: cats can only remember their own, individual trials and errors. (There are schools for dogs, I regret to say, but let’s leave that for another discussion.) When we humans learn from ready-made rules, by contrast, we apply general formulas to individual cases; this mode of reasoning is called deductive.

 Computer scientists, with some exceptions, are not conversant with the history of science; yet in their quest for a universal “problem solving machine,” they ended up replicating and reenacting all the major steps of Western science from its Greek inception to this day (and I say Western science, as that’s the one I studied at school, but I do not doubt that scholars of non-Western sciences would come to similar conclusions). Starting around 1960, Marvin Minsky conceived artificial intelligence as a machine that would solve problems by endless trials – but with a capacity to learn from its errors marginally superior to that of my parents’ cat Toto. Minsky’s machine would not only remember each individual error, to avoid repeating it (just like the cat), but would also compare all the errors already made to orient the next trials toward solutions that would lead to better results. This rudimentary inductive process was the basis of a computational method called gradient-based optimization, in use to this day. In the years that followed, when it appeared that the machines of the time were not powerful enough to make this inductive method work, computer scientists switched to an antithetical method based on deduction, where computers are not meant to learn from trials but are instead tasked to apply ready-made systems of rules, preemptively compiled by competent humans. Rule-based computation thrived in the 1970s and 1980s, when Douglas Lenat famously claimed that “intelligence is ten million rules.” But when the number of rules needed to set up rule-based systems kept rising, many started to doubt that deductive systems would ever live up to their promises, and indeed the recent successes of artificial intelligence (now capitalized and acronymized as AI) are mostly due to revived and upgraded machine learning (inductive) systems, not to deductive ones.
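For readers curious what “comparing errors to orient the next trials” looks like in practice, the toy sketch below illustrates gradient-based optimization in its simplest form. This is my own illustration, not Minsky’s program; the task (guessing the rule y = 3x from a few observations) and all numbers are invented for the purpose. At each trial the machine measures its error, computes which direction would reduce it, and nudges its guess that way – learning, unlike Toto, from the pattern of its mistakes rather than from each mistake in isolation.

```python
# Toy gradient descent: learn the unknown rule y = 3x from recorded trials.
# (Illustrative only; real systems optimize millions of parameters this way.)

data = [(1, 3), (2, 6), (3, 9)]  # observed (x, y) pairs from the unknown rule


def loss(w, data):
    # Mean squared error of the guess f(x) = w * x over all recorded trials.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)


def gradient(w, data):
    # Derivative of the loss with respect to w: the direction of the error.
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)


w = 0.0  # initial guess for the parameter
for step in range(100):
    # Move against the gradient: adjust the guess to reduce future error.
    w -= 0.1 * gradient(w, data)

print(round(w, 3))  # the guess converges toward 3, the hidden rule
```

The point of the sketch is the loop: no rule is ever written down in advance; the “rule” emerges from repeated, error-guided trials.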

 However, when advanced machine learning systems such as GANs (generative adversarial networks) started delivering their surprising results, around 2014–2016, computer scientists soon realized that they were grappling with something quite new, and that the technical logic of this new form of AI vastly transcended that of the traditional machine learning technologies from which it derived. Yes, the GAN is a form of machine learning; but what GANs learn to do is quite unlike what computational machines had typically been expected to learn from trials. Generative AI, which is now booming, is the latest avatar of that trend.

 Nobody knows what happens inside the bowels of the computational “latent space” that drives the performance of today’s generative AI – I least of all. But if we judge the machine by what it does, rather than by what it is, one must conclude that generative AI is nothing less than a surprising, unprecedented, almost miraculous imitation machine. Never mind how we input our verbal or visual “prompts”: image-making generative AI creates new images that are syntheses of sets of existing images, and that somehow express and represent commonalities among the images from which they derive – just as the famed Greek painter Zeuxis could create the image of one ideal and perfectly beautiful woman by merging the traits of five real but imperfect models he had picked out of many. Today’s datasets comprise millions of images, instead of Zeuxis’s five, but the spirit of the operation is the same.

 Because this is the thing: generative AI is an imitation machine, but this is imitation in the classical, not in the modern, sense of the term. While 20th-century modernists mostly saw imitation as plagiarism or copying, in the classical tradition imitation was always seen as an indispensable component of every new creation. In the classical tradition, nothing is created out of nothing; everything is generated out of something that already exists. In the classical tradition, imitation is not the copy of an original; it is the transfiguration of a model. For this is what the very idea of a tradition implies and portends: a common language, the continuity of which enables the expression of new ideas. Just as there is no invention without a tradition, there can be no creation without imitation.

 The golden age of creative imitation was the early Renaissance, when humanists and artists had to find ways to imitate ancient models they admired, without reproducing them verbatim. This started with Cicero’s Latin – a language that nobody spoke any longer, and which only existed in a limited corpus of exemplary texts. The imitation of Cicero became the obsession of the age, and the methods of Ciceronian imitation soon spread from the art of discourse to all the arts and sciences – including architecture. The enemy of the humanist theory of imitation was the rule-based method of medieval Scholasticism, where every argument or statement is deduced from more general principles by dint of the rules of formal logic. Soon the humanists started to make fun of what they saw as Scholastic pedantry, and they found their enemy of choice in the great medieval logician Duns Scotus; the term they derived from his name, dunce, is still used to disparage uninspired morons – bean counters, slow-witted pipsqueaks who insist on following rules to the letter. Renaissance humanists thought they knew better: why follow rules when you can simply let yourself be inspired and intimately transformed by the model you are looking at? Creative imitation is artsier, flashier, and more fun than playing by the book (literally).

 The problem with creative imitation is that there is no way to teach it. Anyone can play by the book; playing by ear is a gift, which some have and some don’t. Creative imitation is not a scientific method, nor can it be reduced to one; yet it is, undeniably, one of the ways we learn. Is that not how we all learn to speak one or more mother tongues – by simply imitating the first sounds we hear, well before we learn to write or are taught the rules of grammar and syntax at school? The rise of generative AI, the ultimate imitation machine, is here to remind us that, alongside induction and deduction, heuristics and syllogistics, there is another way to learn, which is as mysterious as it is effective: imitation, the first thing we do when we hear sounds and try to emit similar ones, or see gestures and try to replicate them with our infant bodies. Once again, cats can’t do that, nor dogs – but apes can.

 So this is what we should keep in mind whenever we use generative AI – and even when we don’t: imitation is a talent, which sometimes we may have to put to work. Let’s think of our daily trade as designers: how many things do we do that we cannot deduce from rules, yet in the end turn out to work just fine? How many things do we teach that we cannot translate into laws? How often do we say – in a design studio, or in daily life – just look at me, do as I do? How often do we lead by example, and learn from models? Some time ago, anthropologists developed a theory of what they still call “tacit knowledge” – knowledge we cannot spell out, a way of doing things that is based on feeling, not thinking. In principle, that’s not a theory I like very much: a born and bred modernist, I still believe that inductive formalization is what makes us smarter than my parents’ cat Toto. Tacit knowledge – dumb imitation – is ape science. Yet it is a science we need – and sometimes, to solve some very wicked problems, such as those we often find in design, it’s the only science we have. And I find it puzzling, and to some extent exhilarating, that imitation learning should now be vindicated by the latest developments of artificial intelligence. After all, our own natural intelligence has been doing that – for a very long time.

Citation

Mario Carpo, “On Cats, Dunces, and the Uncannily Human Talents of Generative AI,” A+U 654 (March 2025): 56-59.