NIX Solutions: Impact of Generative AI – Google’s Alarming Report

Researchers from Google’s DeepMind AI lab, the Jigsaw think tank, and the company’s philanthropic arm Google.org have presented alarming findings from a new study on the impact of generative artificial intelligence (AI) on society. The report, “Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data,” is based on an examination of approximately 200 cases of misuse reported in media and academic publications between January 2023 and March 2024.

Real-World Harm and Political Implications

In contrast to some tech executives’ rhetoric about the future “existential threats” of artificial general intelligence (AGI), Google’s research focuses on the real harm that generative AI is causing today and that could worsen in the near term. The study’s main conclusion is that flooding the Internet with artificially generated text, audio, images, and video now requires no special skill from any user.


According to the authors, the vast majority of generative AI abuse cases (almost 9 out of 10 documented incidents) involve exploiting the systems’ readily accessible capabilities rather than mounting direct attacks on the models themselves. Moreover, many misuse tactics do not explicitly violate the models’ terms of use and do not appear overtly malicious.

Researchers warn that the availability of generative AI and the realism of its output make it possible to create huge volumes of content that are indistinguishable from the real thing. The use of such technologies in the political sphere is particularly dangerous: in the run-up to elections, deepfakes are increasingly passed off as genuine statements by public figures. One example is an AI-generated robocall that imitated Joe Biden’s voice and urged voters not to take part in the New Hampshire primary. We’ll keep you updated on developments in this area.

Consequences and the Need for a Multifaceted Approach

The influx of low-quality AI-generated content could undermine users’ trust in any information on the Internet. People will have to constantly question the authenticity of what they see and read, leading to information overload. A dangerous phenomenon known as the “liar’s dividend” is already being observed: public figures caught in unseemly acts defend themselves by claiming that the compromising material was generated by artificial intelligence. This shifts the burden of proof onto accusers, making fact-checking more difficult and expensive.

However, the researchers acknowledge the limitations of their methodology, which relies on the analysis of media publications. Journalists, they note, tend to focus on sensational cases or events that directly affect public opinion. As a result, the real scale of the problem may be far greater than the study reflects: many instances of AI misuse remain in the shadows, never attracting public attention or making it into newspapers and news sites.

Of particular concern is the issue of non-consensual intimate images (NCII). Despite periodic media coverage, the true scale of the phenomenon remains underestimated. The reasons lie in the sensitivity of the problem and the sheer number of individual cases, which are almost impossible to track and document in full. The researchers cite a telling example: one user on the Patreon platform created more than 53,000 explicit images of celebrities without their consent. The case was uncovered through investigative journalism, but it is just the tip of the iceberg; similar communities remain active on various messaging platforms. In January 2024, for instance, fake nude images of Taylor Swift circulated on Twitter.

Notably, the study does not mention Google’s own role as one of the largest developers and distributors of generative AI technologies, adds NIX Solutions. Instead, the authors call for a multifaceted approach to addressing the problem, including collaboration among policymakers, researchers, industry leaders, and civil society.