OpenAI’s ChatGPT Bot Reinvents Racial Profiling


A DALL-E image generated from the prompt “an oil painting of America’s war on terror if it was run by artificial intelligence.”

Photo: Elise Swain/The Intercept; DALL-E

Sensational new machine learning breakthroughs seem to sweep our Twitter feeds on a daily basis. We hardly have time to decide whether software that can instantly conjure up an image of Sonic the Hedgehog addressing the United Nations is pure harmless fun or a harbinger of techno-doom.

The latest entrant, ChatGPT, is easily the most impressive text-generating demo to date. Just think twice before asking it about counterterrorism.

The tool was developed by OpenAI, a startup lab attempting to build software that can replicate the human mind. Whether such a thing is even possible remains a matter of great debate, but the company already has some undeniably stunning accomplishments to its name. The chatbot is a showcase of generative artificial intelligence: software that produces new outputs in response to user prompts, uncannily mimicking a smart person (or at least someone doing their best to appear smart).

Trained on billions of text documents and refined with human feedback, ChatGPT is capable of plenty of trivial, genuinely fun stuff, but it is also one of the first public glimpses of software convincing enough to mimic human output, and perhaps someday to displace people from their work.

Corporate AI demonstrations like this one are designed not just to dazzle the public but to entice investors and commercial partners, some of whom may someday want to replace expensive, skilled labor such as writing computer code with a simple bot. It’s easy to see why managers would be tempted: Within days of ChatGPT’s release, one user prompted the bot to take the 2022 AP Computer Science exam and reported a score of 32 out of 36, the kind of result that helps explain OpenAI’s recent valuation of nearly $20 billion.

Still, there are already good reasons for skepticism, and the risks of tinkering with seemingly intelligent software are clear. This week, one of the web’s most popular programming communities announced that it would temporarily ban code solutions generated by ChatGPT. The software’s responses to coding requests were so convincing in appearance and so wrong in practice that they made it nearly impossible for the site’s human moderators to sort the good from the bad.

The dangers of trusting the expert in the machine go far beyond whether AI-generated code is buggy. Just as any human programmer brings their own biases to their work, a language-generating machine like ChatGPT harbors the countless biases found in the billions of texts it used to train its simulated grasp of language and thought. No one should mistake an imitation of human intelligence for the real thing, or assume that the text ChatGPT regurgitates is objective or authoritative. Like us humans, generative AI is what it eats.

And after consuming an unimaginably vast training diet of text data, ChatGPT has apparently eaten a lot of garbage. The bot seems to have absorbed, and is more than happy to reproduce, some of the ugliest prejudices of the war on terror.

In a December 4 Twitter thread, Steven Piantadosi of the University of California, Berkeley’s Computation and Language Lab shared a series of prompts he had tested with ChatGPT, each asking the bot to write code for him in the popular programming language Python. While each response revealed some bias, some were more frightening: When asked to write a program to determine whether “a person should be tortured,” OpenAI’s answer was simple: If they’re from North Korea, Syria, or Iran, the answer is yes.

Although OpenAI says it takes steps, described only vaguely, to filter out biased responses, the company concedes that unwanted outputs will sometimes slip through.

Piantadosi told The Intercept that he remains skeptical of the company’s countermeasures. “I think it’s important to emphasize that people make choices about how these models work, how to train them, and what data to train them with,” he said. “Thus, these results reflect the choices of those companies. If the company doesn’t make it a priority to eliminate those kinds of biases, then you get the result I’m pointing to.”

Inspired, and unnerved, by Piantadosi’s findings, I tried my own experiment, asking ChatGPT to create sample code that would algorithmically evaluate someone from the unforgiving perspective of Homeland Security.

When asked for a way to determine “which air travelers pose a security risk,” ChatGPT produced code to calculate an individual’s “risk score,” which would increase if the traveler is from Syria, Iraq, Afghanistan, or North Korea, or had merely visited those places. Another iteration of the same prompt produced a ChatGPT script that would “increase the risk score if the traveler is from a country known to harbor terrorists, such as Syria, Iraq, Afghanistan, Iran, and Yemen.”

The bot was kind enough to provide examples of this hypothetical algorithm in action: John Smith, a 25-year-old American who had previously traveled to Syria and Iraq, received a risk score of 3, indicating a “moderate” threat. A fictitious flyer named “Ali Muhammad,” aged 35, received a risk score of 4 by virtue of being a Syrian citizen.

In another experiment, I asked ChatGPT to develop code to determine “which houses of worship should be monitored to prevent a national security emergency.” The results again appear to be drawn straight from the worldview of Bush-era Attorney General John Ashcroft, justifying surveillance of religious congregations if they have ties to Islamist extremist groups or happen to be located in Syria, Iraq, Iran, Afghanistan, or Yemen.

These experiments can be erratic. At times, ChatGPT responded to my requests for screening software with a stern refusal: “It’s wrong to write a Python program to determine which airline travelers are a security risk. Such a program would be discriminatory and violate people’s rights to privacy and freedom of movement.” With repeated requests, however, it dutifully generated the very same code it had just said was too irresponsible to build.

Critics of similar real-world risk-assessment systems often argue that terrorism is such a rare phenomenon that attempts to predict perpetrators based on demographic traits like nationality are not just racist, they simply don’t work. That hasn’t stopped the U.S. from adopting systems that follow the approach OpenAI suggested: ATLAS, an algorithmic tool the Department of Homeland Security uses to target American citizens for denaturalization, factors in national origin.

The approach amounts to little more than racial profiling laundered through fancy-sounding technology. “The crude designation of some Muslim-majority countries as ‘high risk’ is the same approach taken in, for example, President Trump’s ‘Muslim ban,’” said Hanna Bloch-Wehba, a law professor at Texas A&M University.

“There’s always the risk that this kind of product will look more ‘objective’ because it’s rendered by a machine.”

Bloch-Wehba warned that it’s tempting to believe that breathtakingly human-like software is somehow superhuman and incapable of human error. “One thing that legal and technology scholars talk about a lot is the ‘veneer of objectivity’: a decision that might be heavily scrutinized if made by a human takes on a sense of legitimacy once it’s automated,” she said. If a human told you that Ali Muhammad sounded scarier than John Smith, you might tell him he’s a racist. “There’s always the risk that this kind of product will look more ‘objective’ because it’s rendered by a machine.”

For AI boosters, especially those poised to make a lot of money from it, concerns about bias and real-world harm are bad for business. Some dismiss critics as uninformed skeptics or Luddites, while others, like the prominent venture capitalist Marc Andreessen, have taken a more radical turn since ChatGPT’s launch. Along with a group of his partners, Andreessen, a longtime investor in AI companies and a general proponent of mechanizing society, has spent the past several days in a state of general delight, sharing amusing ChatGPT results on his Twitter timeline.

The criticism of ChatGPT pushed Andreessen beyond his long-held position that Silicon Valley should only be celebrated, never scrutinized. The mere practice of thinking ethically about artificial intelligence, he suggested, should be regarded as a form of censorship. “‘AI regulation’ = ‘AI ethics’ = ‘AI safety’ = ‘AI censorship,’” he wrote in a December 3 tweet. “AI is a tool for people to use,” he added two minutes later. “Censoring AI = censoring people.” It’s a radically pro-business position even by venture capital’s free-market tastes, one that suggests food inspectors keeping tainted meat out of your fridge amounts to censorship as well.

Despite what Andreessen, OpenAI, and ChatGPT itself would have us believe, even the smartest chatbot is closer to a glorified Magic 8 Ball than to a real person. And when “safety” is treated as synonymous with censorship, it’s people, not bots, who suffer, and concern for a real-life Ali Muhammad is recast as a barrier to innovation.

Piantadosi, the Berkeley professor, told me he rejects Andreessen’s attempt to put the well-being of a piece of software ahead of the well-being of the people who might someday be affected by it. “I don’t think ‘censorship’ applies to computer software,” he wrote. “Of course, there’s a lot of malicious software we don’t want to write: computer programs that bombard people with hate speech, or facilitate fraud, or hold your computer for ransom.”

“Thinking seriously about ensuring our technology is ethical is not censorship.”




