
OpenAI researcher resigns, claiming safety has taken “a backseat to shiny products”


Jan Leike, a key OpenAI researcher who resigned earlier this week following the departure of cofounder Ilya Sutskever, posted on X Friday morning that “safety culture and processes have taken a backseat to shiny products” at the company.

Leike’s comments came after Wired reported that OpenAI had disbanded the team dedicated to addressing long-term AI risks (known as the “Superalignment team”) altogether. Leike had been running the Superalignment team, which formed last July to “solve the core technical challenges” of implementing safety protocols as OpenAI developed AI that can reason like a human.

The original idea for OpenAI was to openly share its models with the public, hence the organization’s name, but the models have since become proprietary, with the company arguing that letting anyone access such powerful models could be potentially dangerous.

“We are long overdue in getting extremely serious about the implications of AGI. We must prioritize preparing for them as best we can,” Leike said in follow-up posts about his resignation Friday morning. “Only then can we ensure AGI benefits all of humanity.”

The Verge reported earlier this week that John Schulman, another OpenAI co-founder who backed Altman during last year’s unsuccessful board coup, will take over Leike’s responsibilities. Sutskever, who played a key role in that infamous failed coup against Sam Altman, announced his departure on Tuesday.


“Over the past years, safety culture and processes have taken a backseat to shiny products,” Leike posted.

Leike’s posts highlight a growing tension within OpenAI. As researchers race to develop artificial general intelligence while managing consumer AI products like ChatGPT and DALL-E, employees like Leike are raising concerns about the potential dangers of creating super-intelligent AI models. Leike said his team was deprioritized and couldn’t get the compute and other resources needed to perform “crucial” work.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
