The US and China may be fierce rivals in the technology industry, but they appear to have something in common: concern about AI misuse and accountability. Both countries announced moves to regulate AI development on Tuesday.
The National Telecommunications and Information Administration (NTIA), a division of the US Department of Commerce, issued a public request for input on how policies should be shaped to create an AI accountability ecosystem.
The request raises questions about data access, accountability, and the different approaches AI oversight may need to take in industries such as employment and healthcare.
The NTIA must receive written comments within 60 days of publication in the Federal Register, i.e., by 10 June 2023.
The news was released on the same day the Cyberspace Administration of China (CAC) announced draft measures for managing generative AI services. The CAC said providers should be held responsible for validating the data used to train generative AI software, and that steps should be taken to avoid discrimination when designing algorithms and training data sets. The government will also require firms to submit security assessments before they launch their AI tools.
Under the draft rules, if inappropriate content is generated on their platforms, companies must update the technology within three months to prevent the same content from being generated again. Providers that fail to comply face fines, suspension of their services, or criminal investigation.
The CAC stated that any content created by generative AI should be aligned with the socialist core values of the country.
China’s tech titans are well on their way to developing AI. The CAC announcement came on the same day Alibaba Cloud released Tongyi Qianwen, a new large language model that it plans to roll out as a ChatGPT-style front end for all its business apps. Last month, Chinese AI and internet giant Baidu announced Ernie Bot, a Chinese-language rival to ChatGPT.
AI Regulation vs. Innovation
Other governments are taking a different tack. While the Chinese government has set out clear regulatory guidelines, many other countries around the globe are opting for a more flexible approach.
Last month, the UK said it had chosen not to assign responsibility for AI governance to a single regulator, in order to avoid “heavy-handed legislation that could stifle innovation.” Instead, it called on existing regulators to come up with approaches that best fit the way AI is being used in their respective sectors.
Industry experts criticised this approach, arguing that existing frameworks might not be able to regulate AI effectively given the complexity and multi-layered nature of AI tools, which could lead to a conflation of different regulatory regimes.
The UK data regulator has also warned tech companies to protect personal information when developing and deploying generative AI and large language models, while Italy’s data privacy regulator banned ChatGPT over alleged privacy breaches. Separately, an open letter signed by tech industry figures has called for a six-month pause on developing systems more powerful than OpenAI’s recently launched GPT-4.
Frank Buytendijk, an analyst at Gartner, said that when it comes to technology innovation and regulation, most governments and legislators tend to follow a natural path. “When new technologies are introduced, we learn to use them responsibly by making mistakes,” he said. “That’s the situation we find ourselves in right now, with AI.”
Regulation then starts to emerge, Buytendijk said, allowing users, developers, and legal systems to learn more about responsible usage through interpretations of the law and case law, followed by a final phase in which technologies arrive with responsible use built in.
“We learn about responsible usage through these inbuilt best practices,” he added. “So it’s a continuous process.”