
    The AI Revolution Is at Its Peak

    Some technologists want to halt the development of artificial intelligence systems before the technology can be misused by humans. Other computer experts believe that mistakes are inevitable and that development of AI systems must continue.

    More than 1,000 AI luminaries and technologists recently signed a petition urging the computing industry to pause for six months the training of AI systems more powerful than GPT-4. The petitioners want AI developers to create safety standards to mitigate the risks of the most dangerous AI technologies.

    The Future of Life Institute, a nonprofit organization, organized the petition, which calls for an immediate, public, and verifiable pause by all AI developers. If such a pause cannot be enacted quickly, the letter says, governments should step in and impose a moratorium. The institute reports that it has collected more than 50,000 signatures, which are going through a vetting process.

    The letter does not seek to stop all AI research. Its supporters are calling for developers to step back from the dangerous race toward “ever-larger, unpredictable black-box models with emergent capabilities.”

    The letter states that “AI research should be refocused on making today’s powerful, state-of-the-art systems more accurate and reliable, as well as safer, more interpretable, transparent, and robust. They also need to be aligned and trustworthy.”

    Support Not Universal

    John Bambenek, principal threat hunter at Netenrich, a SaaS security and operations analytics company, sees a growing recognition that consideration of the ethical implications of AI projects lags well behind the pace of development.

    “It is important to reassess our actions and their impact. We have seen many spectacular failures when it comes to ill-considered AI/ML deployments,” Bambenek told TechNewsWorld.

    Andrew Barratt, vice president at cybersecurity advisory services firm Coalfire, also sees a pause as impossible to achieve in any coordinated, global fashion.

    “AI will enable productivity in the next two generations. The danger is that it will replace search engines, and then be monetized through advertisers who intelligently place their products in the answers. It is interesting that the ‘spike in fear’ seems to have been triggered by the recent attention given to ChatGPT,” Barratt said.

    Barratt suggests that knowledge workers around the world should not pause but instead look into how they can use AI tools, which are becoming more user-friendly and will help to increase productivity. Those who do not will fall behind.

    Dave Gerry is the CEO of crowdsourced cybersecurity firm Bugcrowd. He believes that safety and privacy are top concerns for all tech companies, whether they focus on AI or not. It is important to ensure that AI models have the appropriate safeguards, feedback loops and mechanisms for highlighting potential safety issues.

    “As organizations adopt AI to reap the benefits of efficiency, productivity and democratization, it’s important that, as concerns are raised, there’s a mechanism for surfacing them, just like a vulnerability is identified and reported,” Gerry said.

    Highlighting Legitimate Concerns

    Anthony Figueroa, a machine learning expert and co-founder of Rootstrap, supports regulation but does not believe a pause on AI development would lead to meaningful change.

    Figueroa helps companies monetize their services using big data and machine learning. He is skeptical that regulators will move at the right speed or understand what they should regulate. He compares the challenge to the one social media posed two decades ago.


    “I believe the letter that they wrote is very important. We have reached a turning point and need to think about what we didn’t do before,” Figueroa said, adding that he did not believe it was possible to stop anything for six months, a year, or two years.

    AI-powered technology is the next big thing. OpenAI’s ChatGPT became a worldwide success almost overnight, and it has made people sit up and take notice of the power and potential that AI and ML technology can offer.

    We do not yet know the full implications of this technology or its risks. “We know that this double-edged sword can cause some problems,” he warned.

    Does AI Need Regulation?

    TechNewsWorld spoke further with Figueroa about the need for developers to control machine learning and the possible need for government regulation of artificial intelligence.

    The data come from different sources, both public and private. We use a technique known as deep learning, whose foundations are based on studying how the brain functions.
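    To make that idea concrete, here is a minimal sketch, not Figueroa’s system and using made-up weights purely for illustration, of what deep learning means in practice: layers of simple artificial “neurons,” each computing a weighted sum of its inputs followed by a non-linear activation, loosely inspired by how biological neurons pass signals along.

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(inputs, weights, biases):
        # One layer of artificial "neurons": weighted sum plus bias,
        # passed through a ReLU non-linearity.
        return np.maximum(0.0, inputs @ weights + biases)

    # Hypothetical sizes for illustration: 4 input features -> 8 hidden units -> 1 output.
    w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    x = rng.normal(size=(1, 4))   # one example with 4 input features
    hidden = layer(x, w1, b1)     # first layer transforms the raw inputs
    score = hidden @ w2 + b2      # a second, "deeper" layer produces a score
    print(score)

    Real deep learning systems stack many such layers and learn the weights from data rather than drawing them at random, but the basic building block is the same.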

    What impact does this have on the use of ethics and guidelines?

    Often, we don’t even understand why an AI solves a problem in a particular way; we don’t understand the system’s thinking process. This concept is called explainability: you need to be able to determine the reasoning behind a decision. With AI, it is not always clear how a decision was made.
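    As a rough illustration of what explainability looks like, the sketch below (which uses scikit-learn’s public iris dataset purely as an example and is not drawn from the article) trains a small decision tree, a model whose reasoning can be printed as plain if/else rules, in contrast to black-box systems whose decisions cannot easily be traced.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Train a small, inherently interpretable model on a public example dataset.
    iris = load_iris()
    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    clf.fit(iris.data, iris.target)

    # The model's "reasoning" can be printed as readable if/else rules,
    # which is what explainability asks for.
    print(export_text(clf, feature_names=list(iris.feature_names)))

    Large neural networks offer no comparable printout of their reasoning, which is why researchers treat explainability as an open problem rather than a built-in property.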

    How does AI differ when it comes to those factors?

    Consider a doctor treating a case of cancer, with several treatment options available. One option is completely explainable and gives the patient a 60% chance of cure. Another treatment is not explainable but, based on historical data, has an 80% cure rate, though doctors don’t really know why it works.

    The combination of drugs, the patient’s DNA, and other factors all affect the outcome. What should the patient do? It is a difficult decision.

    What is “intelligence” as it relates to AI development?

    AI will solve problems by focusing primarily on outcomes. Self-driving vehicles are a good example. What if the outcome is bad?


    The self-driving vehicle will select the outcome that is least harmful, but you can argue both sides if the AI is forced to choose between a maneuver that kills its passenger and one that kills two people who crossed the road against a red light.

    You can argue that the pedestrians were at fault, in which case the AI makes a moral judgment and sacrifices them. Or the AI can be designed to kill as few people as possible. There is no right answer.

    What are the concerns surrounding regulation?

    OpenAI’s developers are known for building their technology out in the open. They could have kept working privately and released a far more sophisticated version in two years. Instead, they decided to reveal the breakthrough to the public now so that people could start thinking about regulation and how it should be applied.

    How do you begin the assessment process?

    Beyond those first two questions, a third is also relevant: who, or what, should do the regulating? We can have a government that acts within a single country, or international bodies such as the UN, which may be powerless to deal with these situations.

    You can argue that industry self-regulation is the best option, but there will still be bad actors. Professional associations are another possibility, but that would mean more bureaucracy, and AI is advancing at a rapid pace.

    What is your best approach?

    A few companies, such as OpenAI, Microsoft, and Google, are working on the more fundamental technology; many other AI companies simply build on top of it at another level of abstraction. Those foundational players are also the most established entities.
