ChatGPT proves that artificial intelligence is finally mainstream – and things are about to get weirder

A friend messaged me earlier this week to ask what I thought of ChatGPT. I wasn’t surprised he was interested. He knows I write about AI and is the kind of guy who keeps up with whatever is trending on the internet. We talked for a while, and then I asked him: “So, what do you think of ChatGPT?” He replied, “Well, I wrote a half-decent Excel macro with it that saved me a few hours at work this morning” – and my jaw dropped.

For context: this is someone whose job involves quite a bit of database tinkering, but whom I wouldn’t describe as particularly tech-savvy. He works in higher education, studied English at university, and never formally learned to code. But here he was, not only playing with an experimental AI chatbot but also using it to get work done faster after only a few days of access.

“I asked it a few questions, asked some more, pasted the result into Excel, then did some debugging,” he said, describing the process. “It wasn’t perfect, but it was easier than Googling.”

Tools like ChatGPT have made AI accessible to the public like never before

Stories like these are piling up this week like the first drops of rain before a downpour. On social media, people are sharing stories of using ChatGPT to write code, compose blog posts, draft college essays, put together business reports, and even sharpen their dating-app chat game (okay, that last one was definitely done as a joke, but the prospect of AI-enhanced rizz is still appealing). As a reporter covering this space, it’s been nearly impossible to keep track of everything that’s going on, but one general trend has been overlooked: AI is going mainstream, and we are just beginning to see its impact on the world.

There’s a concept in artificial intelligence that I particularly like and that I think helps explain what’s going on. It’s called “capability overhang” and refers to AI’s latent capabilities: skills and abilities hidden in systems that researchers haven’t even begun to explore. You may have heard before that AI models are “black boxes” – they are so large and complex that we don’t fully understand how they work or how they arrive at specific results. This is broadly true, and it is what creates the overhang.

“Current models are far more capable than we think, and the techniques available to explore [them] are very juvenile,” is how AI policy expert Jack Clark described the concept in a recent issue of his newsletter. “What about all the possibilities we don’t know about because we haven’t thought to test for them?”

Capability overhang is a technical term, but it also perfectly describes what is happening now as AI enters the public domain. For years, researchers have been on a tear, churning out new models faster than they can be commercialized. But in 2022, a slew of new apps and programs suddenly made these capabilities available to a general audience, and things will begin to change rapidly in 2023 as we continue to push into this new territory.

Access has always been the bottleneck, as ChatGPT demonstrates. The bones of this program are not entirely new (it is based on GPT-3.5, a large language model OpenAI released this year, itself an upgraded version of GPT-3 from 2020). OpenAI previously sold access to GPT-3 as an API, but the company’s decision to tune the model for natural dialogue and then publish it online for anyone to play with has brought it to a much larger audience. No matter how creative AI researchers are in exploring a model’s strengths and weaknesses, they will never be able to match the massive and chaotic intelligence of the internet. Suddenly, the overhang becomes accessible.

The same dynamic can be seen in the rise of AI image generators. Again, these systems have been in development for years, but access had been restricted in various ways. This year, systems like Midjourney and Stable Diffusion allowed anyone to use the technology for free, and suddenly AI art was everywhere. Much of this is due to Stable Diffusion, which has an open source license that companies can build on. In fact, it’s an open secret in the AI world that whenever a company launches a new AI image-generation feature, there’s a decent chance it’s just a repackaged version of Stable Diffusion. This includes everything from the viral ‘magic avatar’ app Lensa to Canva’s AI text-to-image tool to MyHeritage’s ‘AI Time Machine’. It’s all the same technology under the hood.

As the metaphor suggests, the prospect of a capability overhang is not necessarily good news. Along with hidden and emerging capabilities come hidden and emerging threats. And these threats, like our new skills, are almost too numerous to name. For example, how will colleges adapt to the proliferation of AI-written essays? Will creative industries be upended by the spread of generative AI? Will machine learning unleash a tsunami of spam that destroys the internet forever? And what about the inability of AI language models to distinguish fact from fiction, or the proven biases of AI image generators that sexualize women and people of color? Some of these problems are well known; others are being ignored, and still more are only just beginning to gain attention. As the excitement of 2022 fades, 2023 is sure to bring some rude awakenings.

Welcome to the AI overhang. Hold on tight.