(New York) The most advanced artificial intelligence (AI) companies must accept that their employees publicly scrutinize their activities and the risks linked to AI, several OpenAI employees, speaking under cover of anonymity, demanded alongside former members of the start-up.
“As long as these companies are not subject to government oversight, current or former employees are among the few people who can hold them to account,” explain the 13 signatories of the open letter, published Tuesday.
Among them are four current OpenAI employees and seven former ones.
The signatories regret that “sweeping confidentiality clauses prevent (them) from expressing (their) concerns” publicly.
“Some of us fear retaliation (should we speak openly), given the precedents in the industry,” the letter states.
Among the risks cited are “the entrenchment of inequalities, manipulation, disinformation, and even the loss of control of autonomous AI systems, which could lead to the extinction of humanity”.
While they believe these risks can be limited “with the help of the scientific community, authorities and the public”, they warn that “AI companies have a financial incentive to escape effective supervision”.
They also call for the creation of internal channels allowing employees to anonymously report certain risks.
Asked by AFP, OpenAI said it was “proud” to have what it considers the “most powerful and safest” AI systems.
“We share the view that rigorous debate is crucial given the significance of this technology, and we will continue to engage with governments, civil society and other entities around the world,” an OpenAI spokesperson added.