Over the past year, artificial intelligence has become one of the hottest topics of discussion across mainstream media. Leading the AI conversation are generative tools like ChatGPT by OpenAI, which can answer questions and provide detailed information on nearly any subject.
However, on the other side of this trend lies a growing concern among CEOs and executives of major tech companies, including Google, OpenAI, and Microsoft, about the disruptive impact of AI on humanity.
In the absence of regulation, the rapid pace at which AI models are improving is concerning: as they become more capable, the potential for their misuse widens. This has led many experts to warn that artificial intelligence could pose a “risk of extinction” to humanity.
Experts Say AI Poses Risks of Extinction
At the Yale CEO Summit in June, 42% of the CEOs polled agreed that the outlook for AI is “pretty dark and alarming” and that the technology has the potential to destroy humanity five to ten years from now.
The survey report came weeks after more than 350 tech executives and scientists signed a statement by a nonprofit organization, the Center for AI Safety, which warned that AI posed an “extinction risk” to humanity on par with pandemics and nuclear war.
The signatories even included Sam Altman, the CEO of OpenAI, the leading AI company behind ChatGPT. Together, the executives called for open discussion about mitigating AI risks.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the organization stated.
So far, the risks AI poses to humanity remain largely speculative. Some say it could be leveraged to create bioweapons or engineer new viruses. Others say it could be used to deliberately spread misinformation, incite crises, or hack into mission-critical computer systems.
AI Systems Should be Regulated
Although some groups believe the risks of AI are overstated, regulating the technology itself remains paramount.
The European Union has already begun taking steps toward regulating AI in the region. In June, the European Parliament passed a draft of the AI Act – the world’s first comprehensive AI law – setting out guidelines for regulating AI tools across the EU based on the risks they pose.
Under the rules, the higher the risk posed by an AI system, the stricter the regulation will be. For instance, AI tools that pose unacceptable threats to people would be banned outright in the EU, while generative AI systems like ChatGPT would have to comply with transparency requirements.
Source: https://www.cryptopolitan.com/execs-warn-ai-poses-risk-of-extinction/