
    AI is All Set to Enter the Era of Corporate Control

    A report on AI progress highlights the growing dominance of industry over academia and government when it comes to deploying and safeguarding AI applications.

    The 2023 AI Index, compiled by Stanford University researchers together with AI companies such as Anthropic, Hugging Face, and Google, suggests that AI has entered a new phase of development. Over the past year, a large number of AI tools have gone mainstream, from chatbots like ChatGPT to image-generating software like Midjourney, and it is now corporate players who decide how this technology is deployed and how its risks are balanced against its opportunities.

    Cutting-edge AI research is expensive, and increasingly beyond the means of academia.

    According to the AI Index, academia led the development of state-of-the-art AI systems for many years, but industry has now taken over. The report counts 32 significant machine learning models produced by industry in 2022, compared with just three from academia. The shift is driven mainly by the ever-growing resource demands, in data, staff, and computing power, required to build these systems.

    OpenAI’s GPT-2, released in 2019, was an early large language model, or LLM, the same class of system that powers ChatGPT and Microsoft’s Bing chatbot. GPT-2 cost roughly $50,000 to develop and has 1.5 billion parameters, a metric that tracks a model’s size and serves as a rough proxy for its sophistication. Google’s state-of-the-art LLM, PaLM, was created in 2022; it cost an estimated $8 million to train and has 540 billion parameters, making it 360 times larger than GPT-2 and 160 times more expensive.
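    As a quick sanity check, the size and cost ratios above follow directly from the figures cited in this paragraph. The short Python sketch below simply divides them; the parameter counts and dollar amounts are the report's approximate estimates, not exact values.

        # Sanity check of the GPT-2 vs. PaLM comparison cited above.
        # Figures are the approximate estimates quoted in the AI Index.
        gpt2_params = 1.5e9      # GPT-2 (2019): ~1.5 billion parameters
        gpt2_cost = 50_000       # estimated development cost in USD

        palm_params = 540e9      # PaLM (2022): ~540 billion parameters
        palm_cost = 8_000_000    # estimated training cost in USD

        print(f"size ratio: {palm_params / gpt2_params:.0f}x")  # -> 360x
        print(f"cost ratio: {palm_cost / gpt2_cost:.0f}x")      # -> 160x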

    The growing resources required for AI development have shifted the balance of power firmly toward corporate hands. Experts in the field worry that the incentives of the business world could also produce dangerous outcomes, with companies rushing products to market and setting aside safety concerns in order to stay competitive. Many experts are now calling for a slowdown, or even a halt, in AI development, most visibly in the open letter signed last week by Elon Musk and others.

    Misuse cases have increased dramatically as AI systems become more widespread.

    According to the report’s authors, incidents of ethical misuse have risen as AI applications are pushed into the mainstream by industry players. The AI, Algorithmic, and Automation Incidents and Controversies Repository, an index that tracks such incidents, shows a 26-fold increase between 2012 and 2021. These incidents include fatal crashes involving Tesla’s self-driving software, the use of deepfakes in corporate scams, the creation of nonconsensual deepfake nudes, and many wrongful arrests caused by faulty facial recognition software, which is often plagued by racial bias.

    Some of this increase is to be expected: as AI tools become more common, the number of errors and malicious use cases will grow with them. On its own, that is not a sign of inadequate safeguards, but other evidence suggests a connection, such as the recent trend of companies like Google and Microsoft cutting their AI safety and ethics teams.

    The report also notes, however, that legislators and policymakers are taking a growing interest in AI regulation. An analysis of legislative records from 127 countries found that 37 AI-related bills were passed in 2022, compared with just one in 2016. Scrutiny is increasing at the state level in the US as well: five such bills were proposed in 2015, against 60 AI-related bills in 2022. This growing interest could act as a counterweight to corporate self-regulation.

    The AI Index report covers much more than this, however. You can read it in full here, or browse a few of the highlights below:

    • For the first time in a decade, private investment in AI has declined. Global private investment in AI had been rising for years, but it fell by 26.7 percent to $91.9 billion in 2022 (the 2021 total this implies is worked out in the sketch after this list).
    • Training large AI models has environmental consequences. According to a 2022 paper, training the AI language model BLOOM emitted 25 times more carbon than flying one passenger from New York City to San Francisco. By comparison, OpenAI’s GPT-3 was found to have a carbon footprint roughly 20 times greater than BLOOM’s.
    • AI could also help reduce emissions. In 2022, DeepMind, a Google subsidiary, created an AI system called BCOOLER that reduced energy consumption by 12.7 percent over a three-month trial by optimizing cooling processes. It is unclear whether Google has since deployed the system more widely.
    • Chinese respondents are more optimistic about AI than Americans. A 2022 Ipsos poll found that 78 percent of Chinese respondents agreed that AI has more benefits than drawbacks, followed by respondents from Saudi Arabia (76 percent) and India (71 percent). Only 35 percent of US respondents agreed with the statement.
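    For reference, the first bullet's figures also imply the approximate size of the 2021 total. The short Python sketch below backs it out from the two numbers quoted above; this is an inference from the report's rounded figures, not a number stated in the report itself.

        # Back out the implied 2021 investment level from the figures above.
        investment_2022 = 91.9e9   # USD, global private AI investment in 2022
        decline = 0.267            # 26.7 percent year-over-year drop

        implied_2021 = investment_2022 / (1 - decline)
        print(f"implied 2021 investment: ${implied_2021 / 1e9:.1f} billion")  # ~ $125.4 billion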
