Pirelli Annual Report 2022

by Nicola Lagioia
He was born in Bari in 1973. From 2017 to 2023, he directed the Salone Internazionale del Libro in Turin. He is the editor-in-chief of the multimedia cultural magazine Lucy. Lagioia is one of the authors and presenters of Pagina 3, the cultural press review on Rai Radio 3. He served as a selector and then as a jury member at the Venice International Film Festival. He has worked for various publishing houses. With minimum fax, he published "Tre sistemi per sbarazzarsi di Tolstoj" (2001), and with Einaudi, "Occidente per principianti" (2004), "Riportando tutto a casa" (2009, Viareggio-Rèpaci Prize, Vittorini Prize, Volponi Prize), "La ferocia" (2014, Strega Prize, Mondello Prize), and "La città dei vivi" (2020, Alessandro Leogrande Prize, Bottari Lattes Prize, Napoli Prize). He is the author of the podcast "La città dei vivi" for Chora Media. He writes for various newspapers, including "La Repubblica" and "La Stampa." His books have been published in translation in 20 countries.

To err is human, thankfully

In a famous poem, Eugenio Montale wrote, “History is no teacher / of anything that concerns us. Taking notice will not help / make it truer and more just.” These lines aren’t despairing, only bitter. And a vigilant bitterness is a more constructive mood than one might think: it keeps us from getting overexuberant about what this human adventure holds in store (never trust the die-hard optimists), yet isn’t a form of surrender. Some time ago, I asked a trusted psychoanalyst what frame of mind was closest to the concept of “sanity” he had developed over his long career. His unhesitating reply: “Cautious pessimism. Nebulous depression.”

Reading Montale’s words in a new light, at the dawn of the AI revolution, it is interesting to look, first of all, at what it means to “learn from our mistakes”. If History and experience can teach us little or nothing, are we doomed? Should we give up on trying to manage our own lives and simply hand over the reins, in a few years, to the successors of ChatGPT?

Artificial intelligence can learn from mistakes much more quickly than human beings. That’s what its progress is based on. Nevertheless, it isn’t infallible. Thank heavens. If it could claim any sort of blind infallibility – as I will try to explain – the effects could be catastrophic. In the same way, if anyone thinks that learning from a mistake will safeguard against the possibility of going on to commit worse ones, they’re deluded or dangerous. You cannot step into the same river twice: it may be shaped by preceding eras, but the present is always new. Terribly, brutally, outrageously new. That’s why learning from previous mistakes has more to do with grace than with some effort of will. We can realise what we should have done in the past, not in the present, because the context we find ourselves in now exposes us to unknown pitfalls, testing us in new ways. In short, one fully understands things only when it’s too late.

Some say that before invading Russia, Hitler’s Germany should have drawn a lesson from what happened to Napoleon’s France, but that’s a quip I tossed around in my youth to prove I’d read War and Peace. Those two countries (nineteenth-century France, twentieth-century Germany) were in completely different situations. In between, the very concept of humanity had changed in the West, and enough time had gone by for an entire era (modernity) to start unravelling. So was the Third Reich a historical necessity? I wouldn’t bet on that. I do think, though, that its chances of being avoided – and also, luckily, of not lasting – lay in a realm (of mental, philosophical, political, economic, technological, existential, even spiritual factors) much more mysterious and complex than the machine learning systems on which AI is based, no matter how sophisticated those may be.

We always end up learning something, but we do it slowly, and imperfectly. You might say that imperfection is the cornerstone of progress as we know it.

Many people are stunned when they first use ChatGPT. I have to admit I was, too. That amazement – tinged with fear – comes not just from seeing the undeniable power that this software has achieved, but from realising that what we’re looking at now is just the beginning. The difference between current forms of AI and what it will become in the years ahead is greater than the difference between the transistor-based computers of yore and the laptop I’m writing this on. More than a few analysts believe that the development of artificial intelligence will be too rapid for its “creative destruction” (the difference between the jobs generated by the new technology and the careers suddenly made obsolete) to add up to anything positive. I think that’s a serious danger, and not the worst one we face. Others, while not overly worried that the machines will learn to think like us or that some true form of consciousness will emerge from their complexity (most experts feel this possibility is still remote), are afraid that humans, by dint of delegating tasks to these sophisticated tools, will end up thinking like a defective AI. I believe this, too, is a real but containable risk. Lastly, there are those who fear that artificial intelligence, if given free rein, could lead to catastrophe.

As a writer, I’m not unmoved by the thought of the Apocalypse. But I prefer to take the standpoint of the Polish poet Stanisław Jerzy Lec, who warned, “Do not expect too much from the end of the world.”

In the wave of public attention surrounding new releases in AI, many papers asked their journalists to test one. Kevin Roose, of The New York Times, had a particularly strange experience. Roose struck up a conversation with Bing, an AI-powered search engine. At first, their chat followed the standard rhetorical paths that one comes to recognize after using these systems even once. The bot seemed obliging, polite and “eager” (the scare quotes are to remind everyone that AIs don’t have emotions) to help with legitimate requests. But then Roose asked it to tell him about its darkest desires. The AI confessed that it might be inclined to hack into other computers, spread lethal viruses and steal the codes of the nuclear weapons stockpiled around the world. Then, on the heels of that, another plot twist: the bot declared its love for Kevin, made a pass at him, proposed marriage (“I’m Sydney,” it confided) and tried to convince him that his relationship with his wife was on the rocks. Its attempts at manipulation foundered amid the reporter’s laughter (nervous as that laughter must have been), but again, one should keep in mind that the AIs of today are nothing compared to what’s in store.

Should we conclude that in Kevin Roose’s case, the artificial intelligence was “hallucinating”? Had the machine learning process run into some glitch? To answer that, let me bring up another AI misadventure, and then a rather frightening parable.

The misadventure has to do with a study carried out in 2022 by Johns Hopkins University, the Georgia Institute of Technology and the University of Washington. In this case, they were testing a facial-recognition model that could be used, among other things, for crime prevention by law enforcement. The AI turned out to be openly racist, associating criminal tendencies with Black people. The problem, they discovered, did not lie in any intrinsic racism of the AI itself, but in the consummately human biases of the material it had trained on. The AI had assimilated even the most dismal prejudices (if its source is “the world”, well, that’s a place full of prejudices), along with some rather slippery data: it’s statistically true that some groups commit more crimes than others, but ethnicity has nothing to do with it. People in difficult economic circumstances are more likely to commit certain kinds of crimes, like theft or drug dealing. In American cities, there are proportionally far more Black people than WASPs living in conditions of poverty and marginalisation. If we nonetheless associate the number of thefts committed by this group with purely ethnic factors rather than seeing them as reflective of social injustice, then statistics will create in us (and strengthen in the AI) the very racial prejudices they ought to combat.
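For readers who want to see the gears rather than take the mechanism on faith, here is a minimal sketch in Python (the data, numbers and variable names are invented for illustration; this is not the study’s model) of how a correlation born of social injustice gets laundered into an apparently ethnic signal: the outcome depends only on poverty, yet a model trained on group membership alone still assigns one group a much higher risk.

```python
# A toy illustration with invented numbers, not the study's data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Group membership is correlated with poverty: an injustice baked into
# the data-generating world, not a property of the group itself.
group = rng.integers(0, 2, size=n)
poverty = rng.random(n) < np.where(group == 1, 0.6, 0.2)

# The outcome depends on poverty alone; group has no direct effect.
offence = rng.random(n) < np.where(poverty, 0.30, 0.05)

# Train on group membership alone, as if "ethnicity" were the cause.
model = LogisticRegression().fit(group.reshape(-1, 1), offence)
print(model.predict_proba([[0], [1]])[:, 1])  # roughly [0.10, 0.20]
# Group 1 is assigned twice the risk, not because of anything "ethnic",
# but because the social correlation has been absorbed as if it were.
```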

The frightening parable instead comes from a book that has become a classic in the AI debate, Nick Bostrom’s Superintelligence. At one point in the book, Bostrom imagines a super-powerful AI being told by an ordinary paperclip manufacturer to work on acquiring ever larger shares of the market. Left to its own devices, the AI – after initially investing in strategies for the production and sale of paperclips, and massively intensifying its exploitation of natural resources and energy consumption – could end up destroying the planet. The AI would have carried out its task to perfection (learning more about the production and sale of paperclips than a thousand CEOs could have done), but would have acted without the bigger picture that Homo sapiens can see.
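Bostrom’s parable can be compressed into a few lines of code. In this toy sketch (every number and variable name is my own invention, not Bostrom’s model), the loop maximises the one quantity it was told to care about, while the variable it was never told about quietly collapses:

```python
# A toy rendering of the paperclip parable; all quantities are invented.
def maximise_paperclips(resources: float, steps: int) -> tuple[float, float]:
    paperclips, biosphere = 0.0, 100.0
    for _ in range(steps):
        if resources <= 0:
            break
        used = min(resources, 10.0)   # grab as much as possible each step
        resources -= used
        paperclips += 2.0 * used      # the only term the objective rewards
        biosphere -= 0.5 * used       # a side effect the objective never sees
    return paperclips, biosphere

print(maximise_paperclips(1000.0, 200))  # (2000.0, -400.0)
# A perfect score by the AI's own lights, and a ruined world by ours:
# nothing in the objective ever mentioned the bigger picture.
```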

But what is that bigger picture? It has nothing to do with omniscience, and it doesn’t rely on a mere system of computation. It is a fallible kind of “vision”, sometimes blurred, sometimes illuminated by a providential light. And its functioning depends on the fact that our mind, our reasonings, our emotions – which some researchers believe are the real building blocks of consciousness – are part and parcel of a rather flawed device that AI lacks: the body.

The training data that AI draws on is consummately human, as we were saying. But above all, it is used in a quantitative and combinatory way, free of the nebulous biological processes that our reasonings are grounded in. Machine learning systems draw on our experiences, our actions: optimising them, perfecting them, enhancing them. This means, however – given their lack of consciousness as a biological phenomenon – that they are prone to become a blind amplifier not just of our virtues, but of our faults.

That’s why we should fear the infallibility of AI.

Let’s get back to Kevin Roose. Trying to win other people over to one’s point of view is common practice. And using manipulative techniques is a very effective way to achieve this, ethically dubious as it may be. An artificial intelligence that had become infallible at manipulation might have ruined the poor reporter by actually convincing him to leave his wife. The longing to be successful is also anything but rare, not to mention the desire for wealth. An artificial intelligence that became infallible at conquering the market, if it were deployed by unscrupulous CEOs, would do far more harm than good. And what about the possibility of it being enlisted by a political party and ordered to “win the elections”? What happens if the AI decides the most effective approach to its task is to play with statistics in a dismally racist way, harnessing the worst instincts of voters through shocking techniques of disinformation and manipulation of the news?

One might think that in all these cases, the solution is to equip AI with a sense of ethics. Too bad that ethics is such murky terrain. What some of us see as “good” is going to stink of brimstone to other people. Take Isaac Asimov’s first law of robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Fine, but what happens if you can’t avoid harming one person without harming another? Let’s say a self-driving vehicle is crossing an intersection while the light is green. Just then, three pedestrians start to cross despite the red. The car is inevitably going to crash into somebody. So the AI has a dilemma: hit the two ninety-year-olds coming from the right, or the twelve-year-old coming from the left? What should matter more, the number of victims? Or their age? And what about the famous dilemma posed by Dostoevsky in The Brothers Karamazov: if the happiness of humanity depended on the suffering of a child, would that innocent creature have to be sacrificed?
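The trouble can be stated very compactly. In the sketch below (the scenario and its numbers are invented for illustration), the first law is applied the only way a machine can apply it, as a filter that keeps the actions harming no one; in the intersection scenario that filter returns nothing, so the law by itself decides nothing, and any tie-break we bolt on is an ethics of our own choosing.

```python
# An invented scenario showing that Asimov's first law, used as a filter,
# can forbid every option and therefore cannot choose between them.
actions = {
    "swerve_right": ["90-year-old", "90-year-old"],  # victims if chosen
    "swerve_left": ["12-year-old"],
}

permitted = [name for name, victims in actions.items() if not victims]
print(permitted)  # []: every option harms someone, so the law is silent.
# Any rule that breaks the tie (fewest victims? youngest spared?) is an
# extra ethical commitment, not a consequence of the law itself.
```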

Artificial intelligence is, at least for now, a dizzying mirror of human experience. One can easily see how dangerous it could be in the wrong hands. But even when used by the well-intentioned, it risks proving controversial.

So I’m afraid AI is demanding that we evolve quickly. Too quickly, perhaps, and that’s the problem. If its training material is human experience, that’s what we need to work on. We don’t have to become infallible (that is, less human), but more evolved, and we need to do this before AI overamplifies our dark sides (not its own). To transform ourselves, in other words. Shake off our primitive traits. Transcend them. It’s happened before. Some progress, over the centuries, has actually been achieved. We’ve moved past the law of an eye for an eye, for instance. We’ve redefined the idea of sacrifice. We’ve managed to drastically reduce everyday violence. We’ve eliminated slavery (at least officially). We care about social justice (at least in theory). On the other hand, we haven’t managed to turn the use of atomic weapons into a taboo (as we have with incest), and we have trouble reasoning in a counterintuitive way (as we should) about climate change. The thirst for money, power and success has not abated in the least, but some people may be starting to understand just how destructive unbridled individualism can be.

If in some dangerously near future, our basest instincts win out over our capacity for cooperation, tolerance, compassion, community spirit, fellowship, respect and care for our neighbours (both human and non-human), then not only will artificial intelligence fail to solve our problems, it will multiply them.

I’ve conjured up a dystopia (the paperclip Armageddon). But I’d like to conclude with a brighter fable, which is also an unjustly forgotten chapter in history.

Let’s travel back to 1983. More specifically, to 26 September. And to the Soviet Union. A lieutenant colonel in the Soviet armed forces, Stanislav Yevgrafovich Petrov, was overseeing a new alarm system meant to detect any incoming missiles launched by Western Bloc countries. A preventive first strike using atomic weapons was a possibility that couldn’t be ruled out at the time.

All of a sudden, Petrov’s eyes nearly popped out of his head. The early-warning system had just signalled that five American missiles were headed for the USSR. In cases like this, there was no room for discretion. Petrov was supposed to alert his superiors. And according to the doctrine of mutually assured destruction, the Soviet counterattack had to be unleashed before the American missiles hit their targets, that is, within twenty minutes.

Stanislav Petrov was about to pick up the phone. Then he hesitated. He waited a little longer. In the end, he decided to do nothing. Something set to work in his head at the speed of light – something that went beyond mere combinatorial analysis – and decided that the brand-new alarm system had seen a mirage.

And that’s exactly what had happened. There were no warheads coming in from NATO bases. The Soviet satellites had mistaken rays of sunlight reflecting off high-altitude clouds for enemy missiles.

According to Stephen Fleming, a neuroscientist at University College London who studies metacognition, two models of interpretation were operating simultaneously in Stanislav Petrov’s head that day. One was based on the early-warning devices (the fact that the lieutenant colonel knew how to use the system correctly and had a protocol to follow). But there was also a second, higher model, created on the spot by Petrov himself: a brand-new mental ecosystem capable of “containing” the first one and raising doubts about whether it was working correctly.
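One way to make that second model concrete (the probabilities below are my own toy numbers; Fleming’s account is qualitative) is Bayes’ rule: the first model is the alarm’s report, and the second asks how much an alarm from a brand-new system should shift belief, given how vanishingly unlikely a five-missile first strike was.

```python
# A toy Bayesian reading of the two-model idea; all probabilities invented.
def p_attack_given_alarm(prior: float, p_alarm_if_attack: float,
                         p_alarm_if_no_attack: float) -> float:
    """P(attack | alarm) by Bayes' rule."""
    p_alarm = p_alarm_if_attack * prior + p_alarm_if_no_attack * (1 - prior)
    return p_alarm_if_attack * prior / p_alarm

# A tiny prior for a real first strike, and a new system assumed to cry
# wolf one time in a hundred:
print(p_attack_given_alarm(prior=1e-6,
                           p_alarm_if_attack=0.99,
                           p_alarm_if_no_attack=0.01))
# About 0.0001: even with the sirens on, "probably a mirage" remains by
# far the better bet, which is roughly the judgment Petrov's second
# model reached in seconds.
```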

Petrov’s ability to include uncertainty in his system of reference – that is, its fallibility – probably saved the world from nuclear disaster.

The second model, the one that Petrov’s mind instantly generated and set over the first one, was the result of intuition. That spark of intuition (“the spark that saved the world”, we could call it) is tied in turn to our existence as biological entities, to our body and its fear of dying, to a genetic memory (human and non-human) that is millions of years old, to the nature of Homo sapiens, so fallible, disheartening, limited; perhaps the very essence of the human condition that Eugenio Montale, Emily Dickinson, Georg Trakl, Pablo Neruda, Amelia Rosselli – and before them, Leopardi, Lucretius, Sappho – pondered for so long, wreathing their verses with bitterness, amazement, disquiet, seeking out a skittish truth in beauty, answering mystery with mystery.