
OpenAI Co-Founder Launches Company Dedicated to Safe, Superintelligent AI

Jun 20, 2024

Ilya Sutskever, a co-founder of OpenAI, has created a new company to tackle one of the most important questions in technology today – what happens when an AI system becomes smarter than humans?

This question lies at the heart of Safe Superintelligence Inc.’s (SSI) mission. In fact, the company’s website states that the creation of safe superintelligence is “the most important technical problem of our time.”

Sutskever founded SSI alongside OpenAI engineer Daniel Levy and former Y Combinator partner Daniel Gross, and the company hopes to make safety just as big a priority for AI development as overall capability.

Reining in a Powerful Tool

It’s clear that both the potential benefits and the challenges of a superintelligent AI have been on Sutskever’s mind for some time. In a 2023 OpenAI blog post co-authored with Jan Leike, he discussed the potential for AI systems to become much smarter than humans.

[Image: Ilya Sutskever, co-founder of Safe Superintelligence Inc. Credit: Stanford HAI]

As Sutskever and Leike point out, we don’t yet have a solution for controlling a potentially superintelligent AI. Right now, our best technique for aligning AI is reinforcement learning from human feedback (RLHF). Although this works for current deployments, it relies on humans directly supervising the AI.

“But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence,” the blog post stated. “We need new scientific and technical breakthroughs.”
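
To make that supervision bottleneck concrete, here is a minimal, self-contained Python sketch of the RLHF idea: humans label preferences between pairs of model outputs, a simple Bradley-Terry reward model is fit to those labels, and the “policy” is then steered toward outputs the learned model scores highly. Everything here (the one-number toy responses, the human_prefers oracle, the function names) is illustrative only, not OpenAI’s or SSI’s actual pipeline.

```python
# Illustrative sketch of RLHF's core loop, not any lab's real implementation:
# 1) collect human preference labels over pairs of outputs,
# 2) fit a reward model to those preferences,
# 3) steer the policy toward outputs the reward model scores highly.

import math
import random

random.seed(0)

# Toy "responses": each is a single number standing in for a quality
# a human judge can compare but the policy cannot observe directly.
def sample_response(policy_bias):
    return random.gauss(policy_bias, 1.0)

# Step 1: the human labeler prefers whichever response has higher quality.
def human_prefers(a, b):
    return a > b

# Step 2: fit a Bradley-Terry reward model from pairwise comparisons:
# P(a preferred over b) = sigmoid(w * (a - b)), learned by gradient ascent
# on the log-likelihood of the human labels.
def fit_reward_model(pairs, steps=500, lr=0.1):
    w = 0.0
    for _ in range(steps):
        for a, b, a_wins in pairs:
            p = 1.0 / (1.0 + math.exp(-w * (a - b)))
            grad = ((1.0 if a_wins else 0.0) - p) * (a - b)
            w += lr * grad
    return w

pairs = []
for _ in range(200):
    a, b = sample_response(0.0), sample_response(0.0)
    pairs.append((a, b, human_prefers(a, b)))

w = fit_reward_model(pairs)

# Step 3: policy improvement in miniature -- among several candidate
# responses, keep the one the learned reward model scores highest.
candidates = [sample_response(0.0) for _ in range(5)]
best = max(candidates, key=lambda r: w * r)
print(f"learned reward weight: {w:.2f}, chosen response quality: {best:.2f}")
```

The sketch also makes the fragility Sutskever and Leike describe visible: every training signal flows through the human_prefers judge, so once a system’s outputs exceed what human judges can reliably evaluate, the reward model has nothing trustworthy left to learn from.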

While that blog post went on to outline steps OpenAI intended to take, Sutskever’s departure from the company makes it clear that he wants to go further with SSI. The company’s website describes safe superintelligence as SSI’s “singular focus,” which “means no distraction by management overhead or product cycles.”

The website goes on to mention that the company is working to attract the world’s best engineers and researchers who will focus on SSI and “nothing else.”

At the moment, Sutskever and SSI are light on details about exactly how they will achieve safe superintelligence. In an interview with Bloomberg about the new venture, Sutskever said the company hopes to get there by engineering safety into the AI system itself, rather than tacking on guardrails after initial development.

That said, he seems to have a specific vision for the direction he wants this technology to take.

“By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever said to Bloomberg.

While we don’t yet know much about what SSI will look like in the near future, it’s clear that Sutskever and his co-founders are singularly devoted to implementing superintelligent AI tools safely. SSI is definitely a company to keep an eye on in the coming months and years.


#AI/ML/DL #Security #OpenAI #SafeSuperintelligenceInc. #safety #superintelligence
[Source: EnterpriseAI]
