Experts Fear ‘Nuclear-Level Catastrophe’ Caused By AI

Despite the pleas of more than 1,100 artificial intelligence developers and industry experts who published an open letter calling for a temporary moratorium on further advancement of the rapidly evolving technology, there has been no apparent slowdown in the push to adopt more powerful forms of AI.

Now, Stanford University is sounding the alarm in the form of a new report showing that roughly one-third of the experts surveyed said AI could result in a “nuclear-level catastrophe.”

The university reached out to a group of natural language processing researchers who develop and implement code that allows machines to understand human language. Their concerns about the future of artificial intelligence are evident, both from a security standpoint and on ethical grounds.

Citing data from the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) repository, the report determined that “the number of AI incidents and controversies has increased 26 times since 2012.”

So-called “deepfake” videos, including one released last year that appeared to depict the surrender of Ukrainian President Volodymyr Zelenskyy, are particularly concerning to industry insiders and advocacy groups.

“This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities,” the report concluded.

Such concerns have already prompted action across Europe, where Italian officials imposed a temporary ban on the popular ChatGPT AI chatbot.

The European Union is reportedly considering a slate of new regulations that would restrict how AI can be used.

The authors of the aforementioned open letter, initially posted last month, posed a series of questions they said must be considered as AI continues to evolve.

“Should we let machines flood our information channels with propaganda and untruth?” they asked. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

Despite the stark nature of the letter, some experts argued that its warning did not go far enough.

The Machine Intelligence Research Institute’s Eliezer Yudkowsky asserted that it will take more than a six-month pause to determine how to ensure that AI is a benefit to humanity and not a detriment.

“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he wrote. “Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”
