Humanity is on the verge of extinction because of artificial intelligence – such a warning has been issued by leading figures associated with the technology, together with scientists and other intellectuals. In their opinion, action must be taken to prevent such an apocalypse, just as action is taken to prevent pandemics or nuclear war.
“Reducing the risk of extinction due to AI should be a global priority on par with other societal threats such as pandemics and nuclear war,” reads an excerpt from an open letter on the Center for AI Safety website. It was signed by dozens of representatives of the largest companies working on artificial intelligence. The list includes, among others, two scientists called the “godfathers” of this technology – Geoffrey Hinton and Yoshua Bengio.
“Artificial intelligence can be seen as something similar to nuclear weapons. Tech companies are in an arms race right now. They are racing to develop ever more powerful systems and prioritizing that over safety. This means that we are dealing with an extremely powerful technology that has the potential to be a ticking time bomb,” explains Dan Hendrycks, director of the Center for AI Safety.
According to the signatories of the letter, artificial intelligence can, among other things, be used to build weapons, spread disinformation and increase oppression in totalitarian states. “As Max Tegmark, a well-known physicist and artificial intelligence expert, has said, we know very well that its progress may at some point exceed our cognitive abilities. The question is whether, as Tegmark claims, our relationship to what artificial intelligence becomes will turn out to be like the relationship of Neanderthals to Homo sapiens,” says Dr. Łukasz Święcicki, a signatory of the letter, from the University of Natural Sciences and Humanities in Siedlce.
Recently, thousands of businessmen, investors and AI experts – including Elon Musk – called for a six-month halt to the development of this technology. In turn, the US Congress summoned Sam Altman, the head of OpenAI, the creator of ChatGPT, to a hearing. “If this technology goes in the wrong direction, and it might, we want to talk about it. We want to work with the government to prevent that from happening,” said Altman. According to some experts and scientists, however, the fears are exaggerated. “The most common reaction of AI people to these predictions of doom is to scratch their heads,” said Professor Yann LeCun, Chief AI Scientist at Meta.
“I think too much attention is being paid right now to artificial intelligence systems taking over humanity. I think these are simply dystopian narratives that distract from some of the real problems these systems pose,” adds Professor Julia Stoyanovich, director of the Center for Responsible AI.
“It’s not about striking the most dramatic notes and preaching visions of the apocalypse of humanity or the end of the world as we know it,” explains Dr. Łukasz Święcicki. “It’s simply a matter of supervising this progress, subjecting it to some control by institutions, mainly state ones,” he adds.
The first proposals are already here. The creators of ChatGPT propose establishing a global institution to oversee the development of artificial intelligence – something like the International Atomic Energy Agency. Such an institution would protect people from the threat of “superintelligent” systems. “We need all hands on deck to control this technology. The key here should be the preparation of appropriate regulations. We need legislation,” warns Professor Julia Stoyanovich.
The European Union wants to create the first law in the world to comprehensively regulate issues related to artificial intelligence. The AI Act, according to announcements, is to be passed by the end of 2023.
Facts about the world TVN24 BiS
Main photo source: Reuters