📲 Oxford researchers: Overblown fears about misinformation from AI
Generative AI like ChatGPT, Midjourney and DALL-E will supposedly "trigger the next misinformation nightmare." But so far the evidence doesn't support this claim, and there are good arguments as to why that won't change, write three researchers.
You've heard the warnings. Generative AI like ChatGPT, Midjourney and DALL-E will "trigger the next misinformation nightmare," people "will not be able to know what is true anymore," and we are encountering a "tech-enabled Armageddon."
But that is not the case, say researchers from the University of Oxford, the University of Zurich and PSL University.
The three researchers write that a lot of misinformation already exists online, but the average internet user consumes very little of it. Consumption of misinformation is concentrated among a small group of very active and vocal people. Misinformation only gains influence when people see it, and mostly, they don't.
But couldn't misinformation from generative AI be of higher quality and therefore more convincing? They give three reasons why this will likely not happen:
- It is already relatively easy for producers of misinformation to increase the perceived reliability of their content.
- Most people consume content from mainstream sources, typically the same handful of popular media outlets, and are not exposed, or only very marginally exposed, to misinformation.
- Generative AI could also help increase the quality of reliable news sources—for instance, by facilitating the work of journalists in some areas.
According to another study, because misleading content is so rare compared to reliable content, generative AI would have to increase the appeal of misinformation 20 to 100 times more than it increases the appeal of reliable content in order to tip the scales in favor of misinformation.
The three researchers conclude:
Our aim is not to settle or close the discussion around the possible effects of generative AI on our information environment. We also do not wish to simply dismiss concerns around the technology. Instead, in the spirit of Robert Merton’s observation that “the goal of science is the extension of certified knowledge” on the basis of “organized skepticism,” we hope to contribute to the former by injecting some of the latter into current debates on the possible effects of generative AI.