(San Francisco) Artificial intelligence (AI) specialist Gary Marcus has spent the past few months alerting his peers, elected officials and the public to the risks surrounding the development and blazing-fast adoption of new AI tools. But the danger of human extinction is “exaggerated,” he told AFP in an interview in San Francisco.
“Personally, and for the moment, I am not very worried about it, because the scenarios are not very concrete,” explains the professor emeritus at New York University, who was in California for a conference.
“What worries me is that we’re building AI systems that we don’t control well,” he continues.
Gary Marcus designed his first AI program in high school – software to translate Latin into English – and, after years of studying child psychology, founded Geometric Intelligence, a machine learning company later acquired by Uber.
In March, he co-signed the open letter from hundreds of experts calling for a six-month pause in the development of ultra-powerful AI systems like those of the start-up OpenAI, until existing programs can be made “reliable, secure, transparent, fair […] and aligned” with human values.
But he did not sign the succinct statement from business leaders and experts that caused a stir this week.
Sam Altman, the head of OpenAI, Geoffrey Hinton, a prominent former Google engineer, Demis Hassabis, the head of DeepMind (Google) and Kevin Scott, chief technology officer of Microsoft, among others, call for fighting “the risks of extinction” of humanity “linked to AI.”
The unprecedented success of ChatGPT, OpenAI’s conversational bot capable of producing all kinds of text from a simple request in everyday language, has sparked a race among the technology giants for this so-called “generative” artificial intelligence, but also many warnings and calls to regulate the field.
Including from those who build these computer systems with the goal of achieving “general-purpose” AI, with cognitive abilities similar to those of humans.
“‘If you really think this poses an existential risk, why are you working on it?’ It’s a legitimate question,” notes Gary Marcus.
“The extinction of the human species… It’s quite complicated, actually,” he muses. “One can imagine all kinds of plagues, but people would survive.”
There are, on the other hand, realistic scenarios in which the use of AI “can cause massive damage,” he points out.
“For example, people could successfully manipulate the markets. And maybe we’d blame the Russians, and we’d attack them when they had nothing to do with it, and we could end up in an accidental, potentially nuclear, war,” he said.
In the shorter term, Gary Marcus is more concerned about democracy.
Generative AI software produces increasingly convincing fake photographs, and soon videos, at little cost. Elections are therefore likely, in his view, “to be won by the people most gifted at spreading disinformation. Once elected, they will be able to change the laws […] and impose authoritarianism.”
Above all, “democracy relies on access to the information needed to make the right decisions. If no one knows what is true or not, it’s over.”
The author of the book Rebooting AI does not, however, believe the technology should be abandoned altogether.
“There is a chance that one day we will use an AI, not yet invented, that will help us make progress in science, in medicine, in elder care […] But for the moment, we are not ready. We need regulation, and we need to make these programs more reliable.”
At a hearing before a US congressional committee in May, he argued for the creation of a national or international agency responsible for the governance of artificial intelligence.
The project is also backed by Sam Altman, who has just returned from a European tour where he urged political leaders to find the “right balance” between protection and innovation.
But the power must not be left to the companies, warns Gary Marcus: “The last few months have reminded us how much they are the ones making the important decisions, without necessarily taking into account […] the collateral effects.”