Nvidia CEO Jensen Huang doesn’t see a way around it. He says more AI is the only thing that can tackle AI abuse.
Speaking at a Washington event hosted by the Bipartisan Policy Center, Huang argued that artificial intelligence is both the problem and the solution.
He explained that because AI can generate fake data and false information at breakneck speed, it will take something just as fast to keep up. Huang also compared the current AI situation to cybersecurity.
He pointed out that “almost every single company” is vulnerable to attacks at any moment, and the only way to defend against these constant threats is through advanced AI-driven systems.
The same goes for AI. Better AI is needed to defend against harmful AI.
AI threats to the U.S. elections
Concerns about AI misuse are increasing in the United States as the country prepares for the federal elections in November.
With the rise of AI-generated misinformation, the public is worried about its influence on democracy.
A Pew Research Center survey found that nearly 60% of Americans are “extremely” or “very” concerned that AI will be used to spread false information about candidates.
Democrats and Republicans are equally anxious about it. What’s even more concerning is that roughly two in five people believe AI will mostly be used for bad purposes during the elections.
Only 5% were optimistic about AI’s potential.

Huang called on the U.S. government to step up its AI game. He said the government needs to become a practitioner of AI to stay ahead of the curve.
The Nvidia CEO emphasized that every department should embrace AI, especially the Departments of Energy and Defense. Huang also suggested building an AI supercomputer.
He believes scientists would eagerly work on new algorithms that could help the country advance.
Energy demands of AI to skyrocket
As AI continues to develop, it’s going to need more power. Literally. AI data centers today already use up to 1.5% of global electricity, but Huang predicts this number will increase dramatically.
Future data centers could need 10 to 20 times more energy than today’s. He explained that AI models could start teaching each other, which would drive energy use even higher.
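For a rough sense of scale (a back-of-the-envelope estimate that assumes the 10- to 20-fold growth applies to that same 1.5% baseline, which Huang did not spell out): 1.5% × 10 ≈ 15% and 1.5% × 20 ≈ 30% of today’s global electricity consumption.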
But Huang also sees a solution. He suggested building data centers near sources of excess energy that is difficult to transport elsewhere.
Since AI doesn’t care where it learns, data centers could be built in remote locations that have the energy resources to support them.
Meanwhile, there’s another battle brewing over how to regulate AI. In California, Governor Gavin Newsom vetoed a bill just last night.
The bill, SB 1047, was designed to impose mandatory safety measures on AI systems. It had drawn major resistance from Big Tech companies like OpenAI, Meta, and Google.
Newsom believes the bill would have stifled innovation while failing to protect people from the real dangers. According to him, its standards were too stringent even for basic AI functions, and its approach to regulation wasn’t the best way to tackle AI threats.
Authored by Democratic state Senator Scott Wiener, the bill would have required AI developers to implement a “kill switch” for their models and publish plans for mitigating extreme risks.
It also would have exposed developers to legal action if their systems posed an ongoing threat, such as an AI takeover of the power grid.
Source: https://www.cryptopolitan.com/huang-says-more-ai-will-fight-ai-abuse/