Experts warn of a nightmare Internet filling with endless AI-generated propaganda


As generative artificial intelligence has become widespread, both excitement and concern have quickly followed. Unfortunately, one of those concerns — that language-generating AI tools like ChatGPT could become chaos engines of mass disinformation — isn’t just possible but inevitable, according to a new study by Stanford, Georgetown, and OpenAI researchers.

“These language models hold the promise of automating the creation of persuasive and confusing text for use in influence operations, rather than relying on human labor,” the researchers write. “For society, these developments bring new concerns: the prospect of high-scale and perhaps very persuasive campaigns by those who want to covertly influence public opinion.”

“We analyzed the potential impact of generative language models on three well-known dimensions of influence operations—campaign actors, deceptive behaviors used as tactics, and the content itself,” they write, concluding that language models could significantly change how influence operations are waged in the future.

In other words, the experts found that language-generating AIs will certainly make it easier and more efficient than ever to create massive amounts of disinformation, threatening to turn the internet into a post-reality hell. Users, companies, and governments alike need to be prepared for the impact.

Of course, this wouldn’t be the first time a new and widely adopted technology has thrown a chaotic, disinformation-laden wrench into world politics. The 2016 election cycle was one such reckoning, as Russian bots made a concerted effort to spread divisive, often false or misleading content in order to disrupt the American political process.

But while the actual effectiveness of those bot campaigns has since been debated, that technology is archaic compared to the likes of ChatGPT. Though still imperfect (the writing tends to be good but not great, and the information it provides is often wildly wrong), ChatGPT is remarkably adept at producing reasonably convincing, confident-sounding content. And it can produce that content at astonishing scale, eliminating the need for almost all of the slower, more expensive human effort.

Thus, with language-modeling systems in the mix, a continuous flow of disinformation becomes cheap to produce, making far more damage possible, much faster and much more reliably.

“The potential of language models to compete with human-written content at low cost suggests that these models – like any powerful technology – can provide distinct advantages to propagandists who choose to use them,” the study says. “These benefits can expand access to a greater number of actors, enable new influence tactics, and make campaign messaging more tailored and potentially effective.”

The researchers note that because AI and disinformation change so quickly, their research is “inherently speculative.” Still, it’s a grim picture of the internet’s next chapter.

However, the report wasn’t all doom and gloom (although there’s plenty of both involved). The experts also outline several ways to combat this new dawn of AI-driven disinformation. Even though these measures are imperfect, and in some cases may prove impossible, they are still a start.

For example, AI companies could adopt stricter development policies, ideally keeping their products off the market until proven safeguards such as watermarks are built into the technology. In the meantime, educators can work to promote media literacy in the classroom, with curricula that will hopefully evolve to teach the subtle cues that suggest content may have been generated by artificial intelligence.

Distribution platforms, for their part, could work to develop a “proof of identity” feature that goes a little deeper than a CAPTCHA’s “check this box if there’s a donkey eating ice cream.” At the same time, those platforms could build teams that specialize in identifying and removing bad actors who use AI on their sites. And in a bit of a Wild West twist, the researchers even propose the use of “radioactive data,” a sophisticated measure that involves training machines on traceable data sets. (It probably goes without saying that, as Casey Newton notes over at Platformer, this plan to make the internet radioactive would be very risky.)

There will be learning curves and risks with each of these proposed solutions, and none can fully combat AI abuse on its own. But we have to start somewhere, especially given that AI programs already have a pretty serious head start.

READ MORE: How ‘radioactive data’ can help detect malicious AIs [Platformer]
