
Sundar Pichai, head of Google: poorly implemented artificial intelligence can be very harmful



Google CEO Sundar Pichai has concerns about artificial intelligence. He even admitted that they keep him up at night. According to him, the technology can be “very harmful” if implemented incorrectly.

Speaking to CBS’s “60 Minutes,” Pichai also called for a global regulatory framework for artificial intelligence (AI) similar to nuclear arms treaties, warning that competition to produce more advanced technology could push security concerns aside. “It can be very damaging if implemented poorly, and we don’t have all the answers yet – and the technology is moving fast. So does that keep me up at night? Absolutely,” he said.

Regulations like those governing nuclear weapons

Google-owned conglomerate Alphabet Inc. launched the Bard chatbot in March, responding to the global popularity of ChatGPT, which was developed by the American technology company OpenAI and unveiled last November. Pichai said that as AI develops, governments will need to come up with a global framework for its regulation.

In March, thousands of artificial intelligence experts, researchers and supporters, including Twitter owner Elon Musk, signed a letter calling for a halt to the development of “giant” artificial intelligence systems for at least six months, amid fears that the technology’s development could spiral out of control. When asked whether regulation like that applied to nuclear weapons might be needed, Pichai said, “We would need that.”

The AI technology used in ChatGPT and Bard, known as a large language model, is trained on a huge dataset downloaded from the internet and is able to generate authoritative answers to users’ questions in a variety of formats, from poems to academic essays and software code. Pichai noted that AI can cause damage through its ability to produce misinformation. “With AI, it will be possible to easily create a video of Scott [Pelley, the CBS interviewer] saying something, or me saying something, we never said. And it could look realistic. On a societal scale, it could do a lot of damage,” he explained.


Pichai: Society doesn’t seem ready

He assured that the version of artificial intelligence Google has made public through the Bard chatbot is safe, and that the company has withheld more advanced versions of Bard for testing. Pichai admitted that Google does not fully understand how its AI technology produces certain answers. “There’s an aspect of this that we call, that everyone in the field calls, a ‘black box.’ And you can’t quite tell why it said that, or why it got it wrong,” he said. When asked why Google made Bard public if it doesn’t fully understand how it works, Pichai replied: “Let me put it this way. I don’t think we fully understand how the human mind works either.”

Pichai also suggested that society may not be ready for rapid advances in AI. He said there “seems to be a mismatch” between the rate at which society thinks about and adapts to change and the rate at which AI has been advancing. However, he added that at least people have become alert to its potential dangers sooner. “Compared to any other technology, I’ve seen more people concerned about it earlier in its life cycle. So I’m optimistic,” he said.


Main photo source: blog.google
