OpenAI Insider Raises Concerns About AI’s Impact on Humanity

“The world isn’t ready, and we aren’t ready.” So warned Daniel Kokotajlo, a former governance researcher at OpenAI, after current and former employees of the company spoke out about being silenced on safety issues. Kokotajlo believes there is a 70 percent chance that artificial intelligence (AI) will either destroy or catastrophically harm humanity.

In an interview with The New York Times, Kokotajlo criticized OpenAI for overlooking the risks of artificial general intelligence (AGI) in its enthusiasm for the technology’s potential, arguing that the company was racing toward AGI without fully weighing the consequences.

Kokotajlo, along with other industry experts, has raised concerns about the likelihood of AI causing harm to humanity. Despite these warnings, they argue, companies such as OpenAI continue to advance AI technology without adequate safeguards in place.

After urging OpenAI’s CEO, Sam Altman, to prioritize safety measures, Kokotajlo ultimately decided to leave the company. His departure, along with other high-profile exits, signals a growing unease within the AI community about the potential dangers posed by advancing technologies.

In response to these concerns, OpenAI has emphasized its commitment to safety and its ongoing engagement with stakeholders to address the risks of AI development. However, the debate over AI’s implications for society is far from over.

As discussions about the future of AI continue, it is clear that the need for transparency and accountability in the industry is more important than ever.

**Biography: Daniel Kokotajlo**

Daniel Kokotajlo is a former governance researcher at OpenAI who raised concerns about the potential risks of artificial intelligence. He joined OpenAI in 2022 and became convinced of the significant dangers posed by AI technology. Kokotajlo’s advocacy for safety measures and his decision to leave OpenAI have sparked discussions about the ethical implications of AI development.