Sam Altman (CEO and co-founder of OpenAI) confirmed in a discussion on AI threats that the company is not currently training GPT-5, the presumptive successor to GPT-4, its AI language model released last March.
Altman was recently asked at an MIT event about an open letter that circulated in the tech industry. The letter raised concerns about future AI systems, but it has also been criticized: experts disagree both on the nature of AI’s threat (is it existential, or mundane?) and on how the industry could “pause” development.
Altman said that the letter “lacked most technical nuance regarding where we need to pause,” and that an earlier version had claimed OpenAI was currently training GPT-5. “We won’t be for some time,” he said. “In that sense, it was kind of silly.”
Altman was quick to point out that just because OpenAI isn’t working on GPT-5, that doesn’t mean the company isn’t expanding GPT-4’s capabilities or considering their safety implications. He noted that OpenAI is building other projects on top of GPT-4, and that those projects raise safety questions of their own that need to be addressed.
GPT hype and the fallacy of version numbers
Altman’s remarks are intriguing not because of what they reveal about OpenAI’s future plans, but because they highlight an important challenge in the debate over AI safety: the difficulty of measuring and tracking progress. Altman may say that OpenAI is not training GPT-5 at the moment, but that isn’t a particularly meaningful statement.
Some of the confusion comes from the fallacy of version numbering: the belief that numbered tech updates reflect linear, definite improvements. It’s a long-held misconception in the consumer technology world, where the numbers assigned to phones and operating systems aspire to the rigor of version control but are really just marketing tools. The logic runs: “well, of course the iPhone 35 will be better than the iPhone 34.” The number is larger, ipso facto the phone must be better.
Because the consumer tech world and artificial intelligence overlap, this logic is often applied to systems like OpenAI’s language models. It isn’t just hucksters posting hyperbolic Twitter threads predicting superintelligent AI in a few years because the numbers keep going up; more sophisticated and informed commentators use similar rhetoric, in part because many claims about AI superintelligence can’t be proven. These commentators draw graphs with axes labeled “progress” and “time,” plot a line going up and to the right, and present that as evidence without much critical thought.
This isn’t to dismiss concerns about AI safety, or the fact that these systems are improving rapidly and are not fully under our control. But it’s worth remembering that there are both good and bad arguments here. Just because we can put a number on something, whether a new smartphone or the concept of intelligence, doesn’t mean we have the full measure of it.
Instead, I believe the focus should be on capabilities: demonstrations of what these systems can and cannot do, and predictions of how that may change over time.
Altman’s statement that OpenAI is not developing GPT-5 will be little consolation to those worried about AI safety. The company is still expanding GPT-4’s capabilities, for example by connecting the software to the internet, and other companies are building tools that let AI systems act on users’ behalf. OpenAI could also release GPT-4.5 first, as it did with GPT-3.5. Version numbers can be misleading.
And even if governments around the world could ban new AI development, it’s clear that today’s systems already pose challenges enough. GPT-5 may not be here yet, but does that matter when GPT-4 still isn’t fully understood?