“AI and A-bomb, same fight!”


OP-ED. Artificial intelligence, like the atomic bomb, raises questions about the ethical limits of our quest for scientific knowledge, argues digital strategy expert Morgane Soulier.

A scene from the film The Day After (1983), directed by Nicholas Meyer.
© MFF Feature Film Productions / Collection ChristopheL via AFP

The recent movie Oppenheimer has captivated global attention, earning astronomical box-office revenues and rekindling the collective memory of the crucial role this physicist played in the development of nuclear weapons. In 1942, faced with the Nazi threat and the arms race, the United States launched the Manhattan Project. Julius Robert Oppenheimer, often referred to as the “father of the atomic bomb,” led the creation of this destructive weapon, ushering in a new era of nuclear deterrence. But beyond the history, a universal question arises: how do we balance scientific progress with the moral and practical consequences it generates?

Every great technological leap is driven by an insatiable quest for knowledge, often motivated by pure scientific curiosity and the desire to explore the frontiers of the unknown. It was this passion for discovery that led Robert Oppenheimer and his team to develop the atomic bomb.

In the early days of the Manhattan Project, the primary motivation was not so much the manufacture of a weapon of mass destruction as the resolution of a major scientific challenge: how to release the energy contained in the atom. In Oppenheimer's eyes, science was an adventure, an exploration of the mysteries of the universe.

Potential hazards

Similarly, artificial intelligence (AI) is today at the forefront of scientific research. The idea of creating machines that can think, learn, and evolve autonomously has long been a dream of scientists. These advances promise immense benefits, from personalized medicine to the automation of routine tasks. But with these opportunities come risks.

Just as Robert Oppenheimer faced the terrifying realization of the practical consequences of his discoveries (the creation of a weapon that caused the deaths of hundreds of thousands of people and changed the global geopolitical landscape), the pioneers of AI now warn of the potentially dangerous implications of their work. The tools they designed, initially in a spirit of progress and of improving the human condition, could be misused for malicious purposes, threatening the very fabric of our society.

One departure highlights these concerns: in May 2023, Geoffrey Hinton, considered one of the fathers of AI and of neural-network research, left Google. He has since expressed concerns about future versions of AI, suggesting they could pose a “risk to humanity.”

AI, taken to its extreme, could surpass human capabilities, rendering many jobs obsolete and, in the darkest scenarios, posing an existential threat. Just as Robert Oppenheimer and his team found themselves facing a force they could not control, could AI researchers one day come to regret their creations?

Large-scale surveillance

The creation of the atomic bomb and advances in artificial intelligence both raise profound ethical dilemmas. Robert Oppenheimer was confronted with the fact that his research had produced a weapon of mass destruction. How can we reconcile the love of science with the practical and often tragic consequences of that same science?

In the field of AI, the dilemmas are just as profound. While intelligent machines, capable of learning and making decisions, can be used to improve society in many areas – from medicine to education – their potential for abuse is also enormous. They can be used for disinformation, large-scale surveillance or even, ultimately, for the creation of autonomous weapons.

AI challenges our very understanding of what it means to be human. If a machine can think, feel or create like a human being, what rights do we give it? Who is responsible when an AI makes a mistake or causes harm? Who decides how these technologies are deployed and regulated? And most importantly, how far can we push the boundaries of science and technology before we risk compromising the integrity and security of society as a whole?

Unintended consequences

The impact of the atomic bomb on the global political landscape was unprecedented. It not only reshaped war strategies but also ushered in a new era of geopolitical tensions, giving rise to the Cold War and the arms race between the superpowers. The possession of nuclear weapons became a symbol of power and influence, reshaping international alliances and rivalries.

Similarly, AI is redefining economic and military power on a global scale. Nations that dominate in AI hold a strategic advantage, whether in economics, defense or intelligence.

AI has the potential to disrupt labor markets, influence elections through disinformation, and even aid in mass surveillance of citizens by governments. The race for AI supremacy could well be the next big geopolitical issue, with nations competing to develop more advanced technologies.

History reminds us that technological progress can have unintended consequences. AI, like the atomic bomb, forces us to think about the ethical limits of our quest for knowledge. As we move into this new era, how can we learn the lessons of the past and act with caution and wisdom? Hinton and other researchers today recognize the responsibility inherent in their discoveries. The recent open letter calling for a pause in AI research, signed by experts including Yoshua Bengio and Elon Musk, illustrates this collective awareness.

* Morgane Soulier is a consultant and speaker, and an expert in digital strategy.

