
OpenAI’s former co-founder Ilya Sutskever starts AI venture Safe Superintelligence


Ilya Sutskever, co-founder and former chief scientist at OpenAI, is starting a venture called Safe Superintelligence Inc., focused on the safety of artificial intelligence.

The new venture has a single goal and product: safe superintelligence, according to a post on X (formerly Twitter).

“Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” said the announcement, jointly issued by Sutskever and the other co-founders, Daniel Gross and Daniel Levy.

Gross formerly led AI efforts at Apple, and Levy is a former OpenAI employee.

The firm aims to tackle safety and capabilities in tandem, treating them as technical problems to be solved through revolutionary engineering and scientific breakthroughs.

“We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” said the announcement on X.

Safe Superintelligence Inc. is based in the US, with offices in Palo Alto and Tel Aviv.

“We have deep roots and the ability to recruit top technical talent,” the announcement said.

Sutskever, who had led the effort to oust CEO Sam Altman last year, announced his departure from OpenAI in May. Shortly thereafter, AI researcher Jan Leike also resigned from OpenAI, citing safety concerns, and joined rival AI startup Anthropic.


Edited by Swetha Kannan


