Recent years have brought rapid innovation in artificial intelligence (AI). AI is now found in a host of industries, and more and more companies are incorporating AI systems into their business operations to increase efficiency, broaden their reach, or expand their range of products and services. AI has also never held a more prominent place in the public eye: Microsoft’s $10-billion investment in OpenAI, the company that launched the chatbot ChatGPT in late 2022, and the way that chatbot has taken the world by storm are just the latest evidence of AI’s growing influence.
But as AI proliferates and becomes broader in scope and more powerful, it also poses a variety of dangers, ranging from the minor to, potentially, the existential. Not all of these dangers are equally likely, of course, but they are all reasons to be cautious about this new technology. Below, we take a closer look at some of the many dangers of AI.
Deepfakes and Misinformation
A deepfake is a convincing, computer-generated fake image or video. The word “deepfake” is a blend of “fake” and the AI term “deep learning,” the process by which some AI systems analyze huge amounts of data to train themselves and “learn.” Deepfakes have emerged as a concern only within the last few years.
Many deepfakes contain images or videos of celebrities, but they can also be used to create a variety of other types of misinformation or malicious content, from misleading news reports to revenge pornography and more.
The potential dangers of deepfakes are significant, and they represent one of the most visible examples of a broader category of AI risk: misinformation. AI can be used to create, and to spread widely, material that is false but looks convincingly real. Deepfakes and other AI-generated misinformation carry a host of social, political, and legal ramifications, and one of the biggest issues is that there is currently essentially no regulation governing these materials.
Privacy
AI uses huge amounts of information to train itself, and this information usually consists of real data from real people. This alone constitutes a potential invasion of privacy. And there are specific examples of AI systems, including computer vision and facial recognition tools, among others, which present even greater privacy threats. AI systems can be trained to recognize individual human faces, allowing them to potentially surveil societies and exert control over human behavior. Used by bad actors, these AI programs could play a key role in large-scale societal repression or worse.
Job Loss
One of the most common concerns about AI is that it will automate skills and processes so efficiently that it eliminates human jobs or even entire industries. In reality, experts are divided about AI’s likely impact on jobs. One recent study suggests some 85 million jobs could be lost to AI automation between 2020 and 2025; another suggests AI could create 97 million new jobs by 2025.
On one hand, AI is steadily automating many of the simple tasks found in some work settings: consider how the push toward autonomous vehicles could threaten delivery-driver jobs, for instance.
On the other hand, generative AI (AI used to create new content rather than to automate preexisting tasks) likely faces many more barriers before it can threaten human jobs, since the roles it would displace often require uniquely human characteristics like empathy and creativity.
Some experts believe that AI will lead to a jobs crisis, because the jobs it eliminates and the jobs it creates will not overlap in terms of skill sets. Workers displaced from eliminated jobs, like the delivery drivers in the example above, may be less likely to apply for the jobs AI generates, which could require more specialized skills and experience.
Bias, Discrimination, and the Issue of “Techno-Solutionism”
Society is liable to view AI through the lens of “techno-solutionism,” the belief that technology is a cure-all for a wide variety of problems when it is really only a tool. It is crucial to remember that AI systems are not perfect; in fact, they can reflect and amplify many preexisting human biases and inconsistencies.
The more AI systems are applied to decisions that involve the potential for bias and discrimination, the bigger this problem becomes. Many companies already use AI to help sort and process job applications. While AI can analyze far more applications in the same amount of time as human reviewers, that does not mean it is free from bias. AI systems used this way must be very carefully designed and monitored to make sure they do not treat some applicants unfairly.
A real-world example came in 2018, when Amazon stopped using a proprietary recruiting tool after it became clear that the AI system had been trained on data that was systematically biased against female applicants.
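To make this mechanism concrete, here is a minimal, purely illustrative sketch in Python. The synthetic data, the scikit-learn model, and all the variable names are assumptions made for illustration, not details of Amazon’s actual system; the point is simply that a screening model trained on biased historical hiring decisions can learn and reproduce that bias.

```python
# Illustrative sketch only, using synthetic data: a screening model trained on
# biased historical hiring decisions reproduces that bias. Not Amazon's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: a qualification score and a gender flag (1 = female).
qualification = rng.normal(size=n)
is_female = rng.integers(0, 2, size=n)

# Historical "hired" labels that undervalued equally qualified female applicants.
hired = (qualification - 0.8 * is_female + rng.normal(scale=0.5, size=n)) > 0

# A screening model trained naively on those labels learns the same pattern...
X = np.column_stack([qualification, is_female])
model = LogisticRegression().fit(X, hired)

# ...and scores two equally qualified applicants differently by gender.
same_qualification = np.array([[0.5, 0], [0.5, 1]])  # identical score, different gender
print(model.predict_proba(same_qualification)[:, 1])  # lower probability for the female applicant
```

Running this toy example prints a lower recommendation probability for the equally qualified female applicant, which is exactly the kind of pattern careful design, auditing, and monitoring are meant to catch.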
Bias and discrimination are also a huge potential factor in the recommendation algorithms found in search engines and on social media. Without careful design, these AI programs can, and likely will, confirm and amplify existing biases.
Financial Volatility
AI has the potential to upend the financial sector. A growing number of investment companies rely on AI to analyze and select securities to buy and sell, and many also use AI systems to execute the trades themselves. AI trading algorithms benefit from a clarity that human analysts, clouded by emotion, may lack, but they also fail to take into account broader societal impact and context. Markets rely on trust and can easily be swayed by investor fear. It is easy to imagine a situation in which AI investment tools aiming to maximize profits execute a large number of buys and sells in a short period, triggering panic among human participants in the market and leading to flash crashes or other volatility.
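As a rough illustration of that feedback loop, the toy simulation below is a hypothetical sketch, not a model of any real trading system, and every parameter in it is invented. It shows how a crowd of simple momentum-based sell rules can turn a small price dip into a self-reinforcing slide.

```python
# Toy simulation only: momentum-driven automated selling that amplifies a
# small price dip into a much larger decline. All parameters are invented.
import random

random.seed(1)

# Each hypothetical trading algorithm sells when the most recent return falls
# below its own randomly drawn threshold, ignoring any wider market context.
sell_thresholds = [random.uniform(-0.03, -0.002) for _ in range(50)]

prices = [100.0]
prices.append(prices[-1] * 0.99)  # a small external shock starts the dip

for _ in range(15):
    last_return = prices[-1] / prices[-2] - 1
    sellers = sum(1 for t in sell_thresholds if last_return < t)
    # Each seller pushes the price down a little further, which can push the
    # return below more thresholds and trigger yet more selling next step.
    prices.append(prices[-1] * (1 - 0.001 * sellers + random.gauss(0, 0.001)))

print(f"start: {prices[0]:.2f}  after shock: {prices[1]:.2f}  after cascade: {prices[-1]:.2f}")
```

Real trading systems are vastly more sophisticated, but the underlying dynamic, automated decisions reacting to one another faster than humans can intervene, is the concern described above.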
The Singularity
Perhaps the greatest threat that AI poses, though also the least understood, is the so-called “Singularity”: a moment at which artificial intelligence surpasses human intelligence. That shift could mark the point at which humans are no longer able to control AI, and AI comes to dominate how society develops (or is destroyed). Discussion of the Singularity is almost entirely theoretical, as no one knows exactly what could happen as AI continues to become more and more powerful. But the potential implications could be drastic, and some experts believe the Singularity could even lead to the extinction of the human race.
If AI surpasses human intelligence, it could be capable of developing ideas that no human has ever conceived. Such superhuman AI would still be driven to improve itself, just as today’s AI systems are designed to do, and the gap between this constantly improving AI and human beings would widen ever more quickly.
This AI could be of tremendous benefit to society, finding ways humans could not to eradicate disease, poverty, or climate change. On the other hand, it could conclude that humans are not helpful to its goals and conceive of ways to destroy society or kill people on a massive scale. The reality is unknown, but it is not completely outside our control. Experts in technological futurism like Ray Kurzweil believe the Singularity is likely to take place in the coming decades, while others expect we may be only a few years away. Many also believe that the way humans build AI now could have implications for a potential Singularity whenever it occurs. Thus, it is all the more important that society confronts questions about the ethics and dangers of AI sooner rather than later.
Cheat Sheet
- Artificial intelligence (AI) is a dominant new technology with the potential to automate tasks, maximize efficiency, and transform many industries.
- The more powerful AI becomes, the more types of threats it poses.
- Some of the major dangers of AI include misinformation (including creating convincing fake images and video known as deepfakes), privacy concerns, the loss of jobs, bias and discrimination, market and financial volatility, and a so-called singularity in which AI surpasses human intelligence.
- One study estimates that AI could make some 85 million jobs obsolete between 2020 and 2025, while another projects it could create 97 million new jobs by 2025.
- Some experts believe the Singularity could occur within the coming decades, although the implications of such an event remain largely unknown and speculative.
Source: https://decrypt.co/resources/what-are-the-dangers-of-ai