Governments and organizations are working to regulate AI technologies and tools. Artificial intelligence researcher Prof. Aleksandra Przegalińska from the Leon Koźmiński University, artificial intelligence expert Lech C. Król and TVN24 reporter Piotr Szostak discussed the opportunities and threats of artificial intelligence in “Fakty po Faktach” on TVN24.
The rapid development of artificial intelligence (AI) has governments and international organizations racing to regulate its tools and to create the first legal frameworks for the technology. At the first dedicated global AI safety summit, held in the UK on November 2, AI developers agreed to work with governments to test new tools before making them more widely available, in order to help manage the risks of the emerging technology.
– This is a technology that is almost nine decades old, so it has a bit of a history of ups and downs. Indeed, recently it has been able to perform various tasks as well as a human, or better – said Prof. Aleksandra Przegalińska from the Leon Koźmiński University in “Fakty po Faktach” on TVN24.
The artificial intelligence researcher pointed out that the new algorithms created in recent years mean that AI “can show off” in a large number of knowledge-based tasks that we have so far assigned to people, from analyzing tables in Excel to writing poetry or creating business reports.
– In this sense, it is a rival to humans. It currently reasons, but we would not call it thinking. It has learned a lot from us. We have given it all our catalogued knowledge, so it relies on that and can also support us – she added.
“One of the concepts assumes that we are creating a god”
Artificial intelligence expert Lech C. Król emphasized that computers have learned to understand the world. – Until now, a person had to explain the world to them: someone entered the train timetable, and they produced the result using a human-made algorithm. Today they are multimodal, that is, they understand what is drawn, they understand what is written and they understand what is said – he said.
Lech C. Król admitted that opinions are divided on whether artificial intelligence itself “combines” things or merely recreates them “by finding something that was similar”.
– One of the concepts of artificial intelligence assumes that we are creating a god – he pointed out. – It could happen very quickly, even tomorrow. There is a concept called “FOOM”, in which some artificial intelligence achieves a level of intelligence similar to a human's and becomes able to improve itself a little – he said.
– It will improve itself and decide that it wants to improve even more. So it will improve once again, and because it will be smarter than before, it will improve faster and better, and with each subsequent cycle it will rapidly reach a level that exceeds human intelligence – he added.
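The compounding feedback loop he describes can be sketched as a toy numerical model. This is purely illustrative, with made-up parameters, and makes no claim about how real AI systems behave; the only point is that an improvement step proportional to current capability produces accelerating, exponential growth.

```python
# Toy illustration of the "FOOM" idea: each self-improvement cycle
# raises capability, and higher capability makes the next step larger.
def foom_cycles(capability: float, human_level: float, gain: float = 0.1) -> int:
    """Count improvement cycles until capability exceeds human_level."""
    cycles = 0
    while capability <= human_level:
        # The step is proportional to current capability, so growth
        # compounds: a "smarter" system improves itself faster.
        capability += gain * capability
        cycles += 1
    return cycles

print(foom_cycles(capability=1.0, human_level=10.0))
```

With a larger per-cycle gain the takeoff needs far fewer cycles, which is the essence of the scenario described above.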
When asked whether such a development of artificial intelligence would depend on someone connecting it to even greater memory resources on servers, from which it would draw conclusions, he replied: – The wealth of knowledge, or understanding, in neural networks will make it more powerful. – At some point, if it starts learning on its own, it will connect to those servers itself – he added.
“It’s something people believe in, but it doesn’t”
Prof. Aleksandra Przegalińska, asked on what basis artificial intelligence is able to determine that, for example, Santa Claus does not exist, replied: – In the case of any language model, this may indeed be a copy of some statement that sits in the knowledge base of the artificial intelligence. It may also be the accumulated knowledge of many people who discussed it on forums.
– Based on many statements, artificial intelligence could have concluded that it can create a synthetic whole, presented in a polite way, saying that this is something people believe in but that it does not exist – she said, explaining that this may be because its extensive database contains no large body of text asserting that he (Santa Claus) is absolutely real.
The researcher pointed out that it is possible to create a language model fed with information about the existence of elves, dwarves and Santa Claus. – Then such artificial intelligence answers this question completely differently – she added.
Threats arising from the use of artificial intelligence
Lech C. Król pointed out that both the head of Tesla, SpaceX and the X platform, Elon Musk, as well as Sam Altman, who served as OpenAI’s CEO until 2023, believe that artificial intelligence can be dangerous.
– Now, if you look at certain similarities in events, groups of scientists talk about artificial intelligence in a way very similar to how they talked about nuclear energy several decades ago – he said. – The only difference is that back then the superpowers fought for primacy, and now corporations are fighting among themselves – he added.
According to Lech C. Król, “we are playing with fire because we are creating a tool that is already better than us at many things.” – Artificial intelligence has created algorithms for matrix multiplication. For 50 years, scientists tried to find a better way, and artificial intelligence found it – he pointed out.
– The intellectual capacity of AI is higher than ours, because after several decades of work we did not manage to create such a solution, and it did the calculations in a few minutes and that was that – he said. Lech C. Król also mentioned the example of AlphaGo Zero, a model that taught itself to play Go, an old and popular Chinese board game. – Not only is it better than the best Go player, it is unimaginably better – he added.
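The matrix-multiplication discovery he refers to is presumably work in the vein of DeepMind's AlphaTensor, which searched for multiplication schemes using fewer scalar multiplications than the naive method. The underlying idea can be illustrated with the classic human-found example, Strassen's algorithm, which multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8 (the sketch below is my own illustration, not code from the broadcast):

```python
# Strassen's scheme for 2x2 matrices: 7 scalar multiplications instead
# of the naive 8. AI search systems look for schemes of this kind with
# even fewer multiplications for larger block sizes.
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices (nested lists) using 7 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```

Applied recursively to large matrices split into blocks, saving even one multiplication per block compounds into a lower asymptotic cost, which is why such schemes matter.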
Szostak: AI can fuel disinformation
In turn, Piotr Szostak, a TVN24 reporter covering topics related to artificial intelligence, said that current artificial intelligence, based on machine learning, “is primarily an achievement in the field of statistics.” – The change is enormous computing power and better algorithms for data analysis, but it is still statistics – he pointed out.
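The "it is still statistics" point can be illustrated with a minimal toy bigram model (my own illustrative sketch, not something from the broadcast): it generates text purely from word-pair counts observed in its training data, with no understanding involved.

```python
import random
from collections import defaultdict

# Toy bigram language model: generation is driven entirely by word-pair
# counts gathered from the training text, i.e. by statistics.
def train_bigrams(text):
    """Map each word to the list of successors observed after it."""
    words = text.split()
    counts = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        counts[w1].append(w2)  # duplicates encode frequency
    return counts

def generate(counts, start, length=8, seed=0):
    """Sample a word sequence by repeatedly picking an observed successor."""
    random.seed(seed)  # fixed seed so the sketch is reproducible
    out = [start]
    for _ in range(length):
        successors = counts.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))
```

Modern models replace word-pair counts with neural networks over vast corpora, but the generation principle, predicting likely continuations from data, is the same.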
– I do not share the fear that artificial intelligence may be an existential threat to the human species. I think those are science-fiction fears. What matters more is what kind of threat artificial intelligence may pose here and now, or in a few years – said the TVN24 reporter.
– Disinformation and fake news, for example, are an area where artificial intelligence will make it much easier to create this type of content. ChatGPT already writes better English, in terms of style and grammar, than many people. This translates into lower costs of creating such content – he added. All this, he explained, can lead to disinformation fueling itself.