It is amazing how much a four-month difference can make.
In November, if you had asked me how AI systems were progressing, I might have shrugged. Yes, OpenAI had released DALL-E by then, and I was captivated by the creative possibilities it offered. But despite years of grandiose visions of artificial intelligence from the major platforms, very few products had ever lived up to the hype.
Then OpenAI launched ChatGPT, a chatbot that captured the world's attention with its generative capabilities. Microsoft's GPT-powered Bing, Anthropic's Claude, and Google's Bard followed in quick succession. AI-powered tools are rapidly working their way into everyday life. A picture of Pope Francis in a white puffer coat went viral this weekend; I was among those fooled into believing it was real. Eliot Higgins, founder of the open-source intelligence website Bellingcat, was banned from Midjourney after using it to create plausible images of Donald Trump being arrested. And after an influx of new signups, the company disabled free trials.
Students, copywriters, and other knowledge workers are rapidly adopting synthetic text. BuzzFeed is the latest publisher to begin experimenting with AI-written posts.
Tech platforms are also cutting members of their AI ethics teams. Meta's large language model, LLaMA, leaked onto 4chan, and soon someone figured out how to run it on a laptop.
OpenAI also released plug-ins for GPT-4, allowing the language model to access APIs and interface with the internet and sparking concerns that this could open new doors for harm. I asked OpenAI about this directly; the company didn't reply.
Against this backdrop, a group of well-known technologists is now asking the makers of these tools to slow down. Here are Cade Metz and Gregory Schmidt:
The letter marks a milestone in the march of existential AI fear into mainstream consciousness. Academics and critics have warned about the dangers these technologies pose for years. Yet even as Midjourney and DALL-E surged in popularity last fall, few people were worried about "an out-of-control race to develop ever more powerful digital minds."
The technologists' letter is not without its critics. Emily M. Bender, a University of Washington professor of linguistics and a prominent AI critic, called it a "hot mess" and argued that doomerism like this can actually benefit AI companies by making them appear more powerful than they really are. (See Max Read for more on this point.)
A number of people initially listed as signatories turned out not to have signed the letter at all, which was embarrassing for a group claiming to be concerned about AI-powered deception. And Forbes noted that the letter campaign was primarily funded by Musk, who has AI ambitions of his own.
You could also argue that speed shouldn't be our main concern. Ezra Klein argued last month that we should really be focused on these systems' business models. The danger is that ad-supported AI systems could prove more effective at manipulating our behavior than anything we currently contemplate, which would make them dangerous no matter how fast or slow we go. "Society is going to have to figure out what it's comfortable having A.I. doing, and what A.I. should not be permitted to try, before it's too late to make those decisions," Klein wrote.
These are valid and important criticisms. And whatever the open letter's flaws, I apply a very steep discount to anything Musk says these days. Still, in the end I find the signatories' collective argument persuasive: the pace of change in AI could soon overwhelm our collective ability to process it. And the change they are asking for, a brief pause in the development of language models larger than those already released, seems like a small request in the grand scheme of things.
Tech coverage tends to focus on innovation and the immediate disruptions that stem from it. It is typically less adept at assessing how new technologies could drive society-level change. Yet AI has the potential to profoundly affect the labor market, the information environment, and cybersecurity, among other areas. We should be paying attention to the bigger picture.
Aviv Ovadya, a researcher who studies the information environment and whose work I have covered here before, served on a red team for OpenAI prior to the launch of GPT-4. Red-teaming is essentially a role-playing exercise in which participants act as adversaries to a system in order to find its weaknesses. GPT-4's red team found that the language model could do many things we wouldn't want it to, such as hiring an unwitting TaskRabbit worker to solve a CAPTCHA. OpenAI was able to address this and other issues before releasing the model.
In a new article in Wired, Ovadya argues that red-teaming alone is not enough. It's not enough to know what material the model produces, he writes; you also need to know what effect the model's release could have on society at large. How will it affect schools, journalism, or military operations? Ovadya proposes bringing in experts from these fields before a model is released, both to build resilience in institutions and public goods and to determine whether the tool itself can be modified to guard against misuse. He calls this process "violet teaming."
Violet teaming, whether adopted by companies like Google and OpenAI voluntarily or at the behest of a federal agency, could better prepare us for the impact that more powerful models will have on the world.
Still, violet teams are only part of the regulation we need. There are so many fundamental questions to work through. Should models as large as GPT-4 be allowed to run on laptops? Should we limit how these models can access the wider web, as OpenAI's plug-ins now do? Will an existing government agency regulate these technologies, or do we need to create a new one? If so, how quickly can that be done?
You don't have to believe the AI hype to see that we will need answers to these questions, if not now then soon. It will take time for our sclerotic government to come up with them. And if we allow the technology to advance faster than our government can understand it, we will regret it.
Regardless, we would still be able to observe GPT-4's effects on the real world and compare them with those of its competitors, which would help us decide how and where to act. And the knowledge that no larger models would be released during the proposed pause would be a comfort to those who believe AI may be as dangerous as some fear.
If I learned one thing from covering the backlash against social media, it is that the speed of the internet often works against us. Lies spread faster than anyone can correct them. Hate speech incites violence faster than tempers can be calmed. And social networks would likely have been more resistant to malicious actors had their flaws been scrutinized before they scaled.
AI may eventually cause the chaos that alarmists have predicted. But I think these harms are more likely if the industry keeps moving at full speed.
Slowing down is not a solution to every problem. But it could give us the time to develop one.