The Senate’s Hearing on AI Regulation Was Dangerously Friendly

    The most unusual thing about this week’s Senate AI hearing was how affable it was. The industry representatives, chiefly OpenAI CEO Sam Altman, agreed that new AI technologies should be regulated, and politicians seemed happy to delegate responsibility for drafting the rules to the companies themselves. In his opening remarks, Senator Dick Durbin said: “I cannot recall when people representing large companies or private sector entities came before us and pleaded with us for regulation.”

    This kind of chumminess makes some people nervous. Experts and industry figures say the hearing may signal a new era in AI policy, one in which the tech giants are allowed to write the rules for the technology. That, they warn, would have many negative effects, including stifling smaller firms and producing weak regulation.

    Industry capture can harm smaller companies and weaken regulation

    IBM’s Christina Montgomery was present at the hearing, as was noted AI critic Gary Marcus, who raised the issue of regulatory capture. Marcus warned that the danger is “that we make it seem as if something is being done, but nothing actually happens. We just keep the little players out.”

    Altman’s OpenAI, although still often described as a “startup,” is arguably the most influential AI firm in the world. Its launch of ChatGPT and its deals with Microsoft to remake Bing sent shockwaves through the tech industry. Altman is well positioned: he can appeal to the imaginations of both the VC class and hardcore AI supporters with grand promises of building superintelligent AI that could, perhaps one day, in his words, “capture all the future value of the universe.”

    He was less grandiose at last week’s hearing. Altman also mentioned the issue of regulatory capture but was less specific about whether licensing would apply to smaller entities, saying that he did not want to slow down smaller startups and that “we don’t want to slow down open-source efforts.”

    Sarah Myers West, managing director of the AI Now Institute, told The Verge she was skeptical of the licensing system suggested by many speakers. She said such a scheme risked becoming a superficial exercise in which companies say “yes, we’re licensed, we understand the risks and can continue with business as usual” but face no accountability when their systems go wrong.

    “Requiring licenses to train models… would further concentrate power in the hands of a select few”

    Others, including those who run their own AI firms, have highlighted the threat licensing poses to competition. Emad Mostaque, founder and CEO of Stability AI, told The Verge that regulation invariably favors incumbents and can stifle innovation. Clem Delangue, CEO of AI startup Hugging Face, tweeted a similar response: “Requiring licenses to train models is like requiring licenses to write code.” In his opinion, it would further concentrate power and slow down progress, fairness, and transparency.

    Some experts believe licensing is a good idea. Margaret Mitchell, who co-authored a paper outlining the possible harms of AI language models and was fired from Google along with Timnit Gebru, describes herself as a “proponent of some self-regulation combined with top-down regulations.” She told The Verge she could see the appeal of certification, but perhaps for individuals rather than companies.

    “You could imagine a scenario where a developer, in order to train a model above a certain threshold, would require an ML commercial developer license,” said Mitchell, now the chief ethics scientist at Hugging Face. This, she said, would be a straightforward way to incorporate “responsible” AI into a legal framework.

    Mitchell said that good regulation depends on setting standards that companies can’t easily bend to their own benefit, and that this requires nuanced knowledge of the technology being assessed. She cited the facial recognition firm Clearview AI as an example: the company claimed its algorithms were “100%” accurate. Experts say that although this sounds reassuring, the company relied on skewed tests to arrive at those figures. Mitchell added that, in general, she does not trust Big Tech to act in society’s best interest. “Tech companies [have] repeatedly demonstrated that they don’t see respecting others as part of running a business,” she said.

    Even if licensing were introduced, its effects might not be immediate. At the hearing, industry representatives often focused on hypothetical future harms and, in the process, paid scant attention to the known problems AI already enables.

    Researchers like Joy Buolamwini, for example, have repeatedly identified bias problems in facial recognition. The technology remains less accurate at identifying Black faces and has led to multiple cases of wrongful imprisonment. Yet AI-driven surveillance went unmentioned at the hearing, and facial recognition came up only in passing.

    To avoid discussing current problems, industry figures tend to focus on AI’s future harms

    AI Now’s West says this emphasis on future harms is a common rhetorical trick among AI industry figures. These individuals “position responsibility right into the future,” she said, by invoking artificial general intelligence (AGI): a hypothetical AI smarter than humans across a range of tasks. Some experts believe we are close to creating such systems, but that conclusion is strongly contested.

    This rhetorical feint was on display at the hearing. OpenAI’s Altman quietly suggested that government licensing should apply only to future systems, saying he did not think licensing schemes should be based on what today’s models can do. “But I think that as we move towards artificial general intelligence, we will need a licensing scheme.”

    Experts have compared Congress’s (and Altman’s) proposals unfavorably to the EU’s AI Act. The legislation’s current draft contains no licensing mechanisms, but it does classify AI systems by risk level, imposes varying requirements for data protection and safeguards, and clearly prohibits harmful use cases like predictive policing and mass surveillance. Digital rights experts have praised these prohibitions.

    “That’s the direction in which we need to take this conversation,” West said, “if we want any kind of meaningful accountability within this industry.”
