“May occasionally generate incorrect information.”
That is the warning OpenAI pins to the homepage of its AI chatbot, ChatGPT; it is one of nine points that describe the system’s capabilities and limitations.
“May occasionally generate incorrect information.”
It is a warning you could attach to just about any source of information, from Wikipedia to Google to the front page of The New York Times, and it would be accurate.
“May occasionally generate incorrect information.”
And yet, when it comes to preparing people for a technology as powerful, as hyped, and as misunderstood as ChatGPT, OpenAI isn’t doing enough.
This weekend, news broke that US attorney Steven A. Schwartz had used the chatbot to find supporting cases for a lawsuit against the Colombian airline Avianca. The problem: none of the cases ChatGPT suggested appear to exist.
Schwartz claims he didn’t know that ChatGPT’s output could be false, but transcripts of his conversation with the bot show that he did try to check his research. He did so by asking ChatGPT, which misled him again, reassuring him that the fictitious case histories were legitimate.
Schwartz deserves plenty of blame in this situation, but it’s worth considering the wider picture, too.
Over the last few months, there have been numerous reports of people being misled by ChatGPT. Most cases are trivial, with little or no negative impact: the system invents a news story, an academic paper, or a book, and when someone goes looking for that source, they waste their time or end up looking foolish (or both). But it’s easy to see how ChatGPT’s mistakes could have more serious consequences.
In May, for instance, a Texas A&M professor used the chatbot to check whether his students’ essays had been written with AI. ChatGPT, ever accommodating, said yes, all of the students’ essays were AI-generated, even though it has no reliable way to make that assessment. The professor threatened to fail the class and withhold the students’ diplomas until his mistake was pointed out to him.
And in April, a law professor described how the system had generated false news articles accusing him of sexual misconduct. He only found out because a colleague conducting research alerted him to it. The professor told The Washington Post the incident was “quite chilling.” “An accusation of this type is extremely harmful.”
I don’t think cases like these invalidate the potential of ChatGPT and other chatbots. In the right scenarios and with the right safeguards, these tools can be hugely useful, and that potential includes retrieving information. There’s plenty of interesting research showing how these systems can and likely will be made more factually grounded in the future. The point is that, right now, they aren’t there yet.
The media is partly to blame. Many reports on ChatGPT and similar bots portray these systems as having human-like intelligence and emotions, and journalists often fail to stress how unreliable and contingent the information they provide really is.
Many people use ChatGPT as a search tool. OpenAI should know this and warn them in advance.
But, as I hope the introduction to this piece made clear, OpenAI could certainly help more, too. Although chatbots are being positioned as a new kind of technology, it’s clear that people use them as search engines; many are launched as search engines, after all. So of course people get confused. A generation of internet users has been trained to type questions into a box and get answers back. But while sources like Google and DuckDuckGo supply links that invite scrutiny, chatbots fold their information into regenerated text and deliver it in a chipper, authoritative tone. A sentence or two of disclaimer isn’t enough to counteract that kind of priming.
ChatGPT can search the web, but only if you pay for the Plus version and use the beta plug-ins. Its self-contained nature makes it all the more likely to mislead.
The interventions don’t need to be complicated, but they do need to be there. Why, for instance, can’t ChatGPT recognize when it’s being asked to generate factual citations and warn the user to “check their sources”? Why can’t it respond to someone asking “Is this text AI-generated?” with a clear “I’m afraid I can’t make that judgment”? We’ve reached out to OpenAI and will update this story if we hear back.
To be fair, OpenAI has improved in this area. Since launch, ChatGPT has become more upfront about its limitations, often prefacing answers with that bit of AI-speak: “As an AI language model…” But it’s also inconsistent. When I asked the bot this morning, “Can you detect AI-generated text?”, it warned that it wasn’t foolproof, but when I gave it a chunk of this story and asked the same question, it simply replied, “Yes, this was AI-generated.” Then, when I asked it for a list of book recommendations, it replied, “Certainly!” before offering ten suggestions. It was a decent list with plenty of classics, but two of the titles were entirely made up, and I wouldn’t have noticed if I hadn’t checked. Try similar tests yourself and you’ll quickly find errors.
Performance like this makes a disclaimer such as “May occasionally generate incorrect information” seem inadequate.