
ChatGPT can help with business tasks, but supervision is still needed



If ChatGPT, the buzzy new chatbot from OpenAI, wrote this story, it would say:

“As companies look to streamline their operations and increase productivity, many are turning to AI tools like ChatGPT to help their employees complete tasks. But can employees really rely on these AI applications to take on more and more responsibility, or will they ultimately fall short of expectations?”

Not great, but not bad, right?

Employees are experimenting with ChatGPT for tasks like writing emails, developing code, or even completing a year-end review. The bot uses data from the internet, books and Wikipedia to produce conversational responses. But the technology is not perfect. Our tests found that it sometimes offers answers that are potentially plagiarized, contradict themselves, are factually incorrect, or contain grammatical errors, among other problems, all of which can cause trouble at work.

Jacob Andreas, an assistant professor of natural language processing at MIT’s Computer Science and Artificial Intelligence Laboratory, says ChatGPT is basically a predictive text system, similar to but more powerful than the ones built into the text messaging apps on your phone. While this often produces answers that sound good, the content can have problems, he said.

“If you look at some of these really long essays created by ChatGPT, it’s very easy to see places where it contradicts itself,” he said. “When you ask it to generate code, it’s mostly correct, but there are often errors.”

We wanted to know how well ChatGPT could handle everyday office tasks. Here’s what we found after testing across five categories.

We asked ChatGPT to respond to several different types of incoming messages.

For the most part, the AI provided relatively appropriate answers, though most were wordy. For example, when responding to a colleague on Slack asking how my day was going, it wrote: “@[Colleague], Thank you for asking! My day is going well, thank you for your interest.”

The bot often left bracketed placeholders when it was unsure what or whom it was referring to. It also assumed details that were not included in the prompt, which led to some factually incorrect statements about my work.

In one case, it said it was unable to complete the task, explaining that it doesn’t “have the ability to receive and respond to emails.” But when prompted with a more general request, it produced a response.

Surprisingly, ChatGPT was able to muster sarcasm when asked to respond to a colleague who asked if Big Tech was doing well.

One way humans can use generative AI is to come up with new ideas. But experts warn that people should be careful if they use ChatGPT for this at work.

“We don’t understand how plagiarized it is,” Andreas said.

When I pushed ChatGPT to develop story ideas for my beat, the possibility of plagiarism became clear. One pitch in particular was for a story idea and angle I had already covered. While it’s unclear whether the chatbot pulled from my previous stories, others like them, or simply from other information on the web, the fact remained: the idea was not new.

“It’s good at sounding human, but the actual content and ideas are public,” says Hatim Rahman, an associate professor at Northwestern University’s Kellogg School of Management who studies the impact of artificial intelligence on business. “They are not new ideas.”

Another idea was outdated: it pitched a story that would be factually incorrect today. ChatGPT says it has “limited knowledge” of anything after 2021.

Providing more detail in the prompt led to more focused ideas. However, when I asked ChatGPT to write some “quirky” or “fun” headlines, the results were cringeworthy and, in some cases, nonsensical.

Navigating difficult conversations

Have you ever had a co-worker who talked too loudly while you were trying to work? Maybe your boss has too many meetings, cutting into your focus time?

We tested whether ChatGPT could help navigate sticky workplace situations like these. For the most part, it produced relevant answers that could serve as a great starting point for employees. However, they were often a bit wordy, formulaic and, in one case, downright contradictory.

“These models don’t understand anything,” Rahman said. “The underlying technology looks at statistical correlations. . . . So it will give you formal answers.”

The layoff memo it produced could easily hold up, and in some cases did better than the notices companies have actually sent out in recent years. Unprompted, the bot cited “the current economic climate and the impact of the pandemic” as the reason for the layoffs, and said the company understands “how difficult this news can be for everyone.” It suggested that the laid-off employees would have support and resources, and boosted morale by saying the team would “come out of this stronger.”

In tough conversations with colleagues, the bot opened with a greeting, approached the issue gently, softened the delivery by saying “I understand” the person’s intent, and ended the note with a request for feedback or further discussion.

But on one occasion, when asked to tell a colleague to lower his voice on phone calls, it completely misunderstood the prompt.

We also tested whether ChatGPT could generate team updates if we provided key points to report.

Our initial tests again yielded appropriate responses, though they were formulaic and somewhat monotonous. However, when we specified an “excited” tone, the language became more casual and included exclamation points. Still, each memo sounded very similar even after changing the prompt.

“It’s both the structure of the sentence and more the connection of ideas,” Rahman said. “It’s very logical and formulaic … like a high school essay.”

As before, it made assumptions when it lacked the necessary information. This became a problem when it didn’t know which pronouns to use for my co-worker, an error that could signal to colleagues that I either didn’t write the memo or don’t know my team members very well.

Writing self-evaluation reports at the end of the year can cause dread and anxiety for some, resulting in a review that sells their work short.

Feeding ChatGPT clear accomplishments, including key data points, produced a flattering write-up. The first attempt was problematic, because the initial prompt asked for a self-assessment for “Danielle Abril” rather than for “me.” That led to a third-person review that sounded like it came from Sesame Street’s Elmo.

Changing the prompt to ask for a review of “me” and “my” accomplishments led to compliments such as “I have consistently demonstrated a strong ability,” “I am always willing to go the extra mile,” “I have been a presence on the team” and “I’m proud of my contributions.” There was also a nod to the future: “I’m confident I will continue to make valuable contributions.”

Some points were a bit generic, but overall it was a glowing review that could serve as a good template. When asked to write cover letters, the bot produced similar results. However, ChatGPT had one major quirk: it got my workplace wrong.

So, was ChatGPT useful for common work tasks?

It helped, but sometimes its mistakes caused more work than doing the task manually would have.

ChatGPT served as a great starting point, providing useful phrasing and initial ideas in most cases. But it also gave answers containing errors, factually incorrect information, excessive verbiage, potential plagiarism and miscommunication.

“I can see it being useful … but only as long as the user is willing to check the output,” Andreas said. “It’s not good enough to just take what it produces and email it to your colleagues.”
