From image generators to language models, 2023 will be the year of artificial intelligence

A few years ago, I sometimes got the question: “Why does Future Perfect, which is supposed to focus on the world’s most important problems, write so much about artificial intelligence?” It was a question I had to answer fairly often.

After 2022, I don’t expect to have to answer it very often. This was the year AI went from a niche topic to the mainstream.

In 2022, powerful image generators like Stable Diffusion made it clear that the design and art industries were at risk of mass automation, prompting artists to demand answers, which meant that the details of how modern machine learning systems are trained became key questions.

Meta released both BlenderBot (which was a flop) and Cicero, its world-conquering Diplomacy-playing agent (which wasn’t).

OpenAI ended the year with the release of ChatGPT, the first widely used artificial intelligence language model, which amassed millions of users and could herald the end of the college essay, among other potential impacts.

And more is coming. Greg Brockman, president and co-founder of OpenAI, tweeted on December 31: “Prediction: 2023 will make 2022 look like a sleeper year for AI development and adoption.”

Artificial intelligence is moving from hype to reality

One of the defining characteristics of progress in artificial intelligence over the past few years is that it has happened very, very quickly. Machine learning researchers often rely on benchmarks to compare models against each other and determine the state of the art on a given task. But in AI today, a benchmark is often barely created before a model is released that beats it.

When GPT-2 came out, a lot of work went into characterizing its limitations, most of which GPT-3 then overcame. The same happened with GPT-3, and ChatGPT has already surpassed most of those limitations in turn. ChatGPT, of course, has limitations of its own, many of them the product of reinforcement learning from human feedback, which was used to tune it to say things that are less objectionable.

But I would caution people against drawing too many conclusions from these limitations; GPT-4 is expected to be released sometime this winter or spring, and is by all accounts much better.

Some artists have found comfort in the fact that current art models are too limited, but others (correctly, I think) have warned that the next generation of models will not be similarly limited.

While art and text saw big leaps in 2022, there are many other areas where machine learning techniques could be on the verge of industry-changing breakthroughs: music composition, video animation, coding, translation.

It’s hard to predict which dominoes will fall first, but by the end of this year, I don’t think artists will be alone in the fight against the sudden automation of their industry.

What to look for in 2023

I think it’s healthy for experts to make specific predictions rather than vague ones, so that you, the reader, can hold us accountable for our accuracy. So here are a few of mine.

In 2023, I think we will have image models that can depict multiple characters or objects and consistently handle more complex interactions between objects (a weakness of current systems). I doubt they will be perfect, but I suspect that many of the complaints about the limitations of current systems will no longer apply.

I think we’ll have text generators that answer almost any question you ask them better than ChatGPT does (as rated by human raters). That may already be happening — this week, the Information reported that Microsoft, which has a $1 billion investment in OpenAI, plans to integrate ChatGPT into its Bing search engine. Instead of providing links in response to search queries, a search engine powered by a language model could simply answer questions.

I think we’ll see more widespread adoption of coding assistant tools like Copilot, to the point that more than 1 in 10 software engineers will say they use them regularly. (I wouldn’t be surprised if half of software engineers used such tools habitually, but that will depend on how much the systems cost.)

I think the space of AI personal assistants and AI “friends” will take off, with at least three offerings for such uses that beat existing models like Siri or Alexa on user experience in head-to-head comparisons.

Greg Brockman knows far more about what OpenAI has under the hood than I do, and I think he expects faster progress than I do, so maybe all of the above is actually quite conservative! But these are just some of the concrete ways I think you can expect AI to change the world in the coming year — and the changes aren’t small.


Elon Musk answered Brockman’s tweet about the prospects for artificial intelligence in 2023 with one word: “Yikes.”

There’s a lot of history here, but I’ll try to give you a brief overview: In 2014 and 2015, Musk read about the potential and the enormous risks of artificial intelligence and became convinced that it was one of the biggest challenges of our time:

With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water, and he’s sure he can control the demon? It doesn’t work out.

Along with other Silicon Valley luminaries like Y Combinator’s Sam Altman, Musk founded OpenAI in 2015 to make sure AI development would benefit all of humanity. It’s a complicated mission, to say the least, because the best way to make AI go well depends a lot on what you expect to go wrong. Musk has said he fears the centralization of power under tech elites; others worry that tech elites will lose control of their creations.

Although Musk left OpenAI’s board in 2018, he continues to warn about AI, including the AIs that the company he helped found is building and introducing to the world.

I rarely find myself agreeing with Elon Musk. But that “yikes” captures some of what I feel when I read Brockman’s prediction, too. Warnings from AI experts that we are creating gods used to be easy to dismiss as hype; they are not so easy to dismiss anymore.

I stand by my prediction record, but I would love to be wrong about these. I think a slow, sleepy year on the AI front would be good news for humanity. It would give us time to adapt to the challenges AI poses, to study the models we have, and to learn how they work and how they fail.

We could make progress on the problem of understanding the goals of artificial intelligence systems and predicting their behavior. With the hype cooling, we might have time for a more serious conversation about why artificial intelligence matters so much, and how we, as a human civilization with a shared stake in it, can best make it go well.

I would like to see that. But the easiest way to go wrong with predictions is to predict what you want to see rather than where the incentives and technological trends actually point. And the incentives around AI do not point to a sleepy year.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here to subscribe!