Amnesty International has announced that it will no longer use AI-generated images of Colombian protests in its reports after facing criticism for doing so. The human rights organization came under fire after it emerged that it had used digitally created images, rather than real photographs, to represent protests in the country.
Amnesty International blasted for using AI to depict Colombian protests
Amnesty International has retracted images generated by artificial intelligence (AI) that it used in a campaign to publicize police brutality during Colombia's 2021 national demonstrations.
The group was criticized for using artificial intelligence to create images for its social media accounts. On May 2, The Guardian highlighted one image in particular.
It depicts a woman being taken away by police during Colombian protests against entrenched economic and social disparities in 2021. However, a closer inspection reveals several inconsistencies in the image, including unnatural-looking faces, out-of-date police uniforms, and a protester who appears to be draped in a flag that is not the Colombian flag.
Each image does, however, carry a disclaimer at the bottom stating that it was generated by artificial intelligence.
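For readers curious about the mechanics, the snippet below is a minimal, hypothetical sketch of how such a visible disclaimer strip could be stamped onto an image using the Pillow library in Python. It is illustrative only; the function name, wording, and layout are assumptions and do not reflect Amnesty International's actual tooling.

```python
# Hypothetical sketch only: stamp an "AI-generated" disclaimer strip onto an
# image with Pillow. The wording and layout here are assumptions, not
# Amnesty International's actual labeling.
from PIL import Image, ImageDraw

def add_ai_disclaimer(input_path: str, output_path: str,
                      text: str = "This image was generated by artificial intelligence") -> None:
    """Draw a dark banner along the bottom edge and write the disclaimer on it."""
    img = Image.open(input_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    width, height = img.size

    banner_height = 28  # arbitrary strip height in pixels
    # A dark strip keeps the white text readable regardless of the image beneath it.
    draw.rectangle([(0, height - banner_height), (width, height)], fill=(0, 0, 0))
    draw.text((10, height - banner_height + 8), text, fill=(255, 255, 255))

    img.save(output_path)

# Usage (file names are placeholders):
# add_ai_disclaimer("protest_render.png", "protest_render_labeled.png")
```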
The use of AI-generated images drew criticism from journalists and activists, who argued that the practice was misleading and could undermine the credibility of Amnesty's reporting. Critics said that digitally created images give a false impression of what happened on the ground and could be seen as an attempt to manipulate the narrative of the protests.
AI-generated images have become increasingly common in recent years, with some news organizations and advertising agencies using the technology to create images and videos that can be hard to distinguish from real ones. The technology has also raised concerns about its potential use for misinformation and propaganda.
Amnesty addresses the withdrawal of AI images
Amnesty International told The Guardian that it chose to use artificial intelligence to generate images in order to shield demonstrators from potential state reprisals. Erika Guevara Rosas, Amnesty’s director for the Americas, stated:
We have removed the images from social media posts, as we don’t want the criticism for the use of AI-generated images to distract from the core message in support of the victims and their calls for justice in Colombia.
Photojournalists also criticized the use of the images, arguing that in today's highly polarized, fake-news-saturated environment, audiences are already quick to question the media's credibility.
Media scholar Roland Meyer said of the deleted images that "image synthesis reproduces and reinforces visual stereotypes almost by default," adding that they were "ultimately nothing more than propaganda."
The episode highlights the importance of transparency and accuracy in reporting on human rights issues. Amnesty's decision to heed the criticism and adjust its practices is a positive step toward keeping its work credible and trustworthy.
AI is increasingly being used to generate visual media. In late April, HustleGPT founder Dave Craige shared a video showing the Republican Party of the United States using AI imagery in its political campaigning.
The Amnesty International case in Colombia, where AI-generated images stood in for real scenes of police brutality, highlights the dangers of using AI to fabricate imagery. It underscores the need for responsible use of the technology and for accountability in how AI-generated images are deployed. Other dangers include:
1. Misinformation: AI-generated images can be used to manipulate the public’s perception of events, people, and situations. This can lead to the spread of misinformation and fake news.
2. Lack of accountability: Because AI-generated images can be difficult to distinguish from real ones, it can be challenging to hold individuals or organizations accountable for their actions based on such images.
3. Privacy concerns: AI-generated images can be used to create fake profiles, alter personal photos, or invade people’s privacy by placing them in situations they were never in.
4. Implications for journalism: If AI-generated images are widely used, it could become more challenging for journalists to verify the authenticity of images used in news reports (a rough metadata-inspection sketch follows this list).
5. Ethical concerns: The use of AI to fabricate or alter images raises ethical questions about image manipulation and its implications for society's trust in media, technology, and government.
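To illustrate point 4 above, here is a minimal, hypothetical sketch of a first-pass check a newsroom might run on a downloaded file: it simply inspects image metadata (PNG text chunks and the EXIF "Software" tag) for traces left by generation tools. The function name and behavior are assumptions for the sketch; the absence of such traces proves nothing, and this is no substitute for proper verification workflows or cryptographic provenance standards such as C2PA.

```python
# Hypothetical sketch: a crude first-pass provenance check. Some AI image
# tools embed their settings in PNG text chunks or name themselves in the
# EXIF "Software" tag; many do not, so an empty result proves nothing.
from PIL import Image

def inspect_image_provenance(path: str) -> dict:
    """Collect metadata hints about how an image file may have been produced."""
    img = Image.open(path)
    hints = {}

    # PNG text chunks (img.info): some generators write prompts/settings here.
    for key, value in img.info.items():
        if isinstance(value, str):
            hints[f"png:{key}"] = value[:200]  # truncate long values for readability

    # EXIF tag 305 ("Software") can name the program that wrote the file.
    software = img.getexif().get(305)
    if software:
        hints["exif:Software"] = software

    return hints

# Usage (file name is a placeholder):
# print(inspect_image_provenance("downloaded_image.png"))
```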
Source: https://www.cryptopolitan.com/amnesty-international-scraps-ai-images/