A little over a month after Ilya Sutskever, the former co-founder and chief scientist of OpenAI, announced his resignation, he returned to the artificial intelligence sphere with a new company.
On Wednesday, June 19, 2024, Sutskever announced that he, alongside former Apple employee and tech investor Daniel Gross and OpenAI team lead Daniel Levy, had launched Safe Superintelligence Inc. (SSI).
“Building safe superintelligence (SSI) is the most important technical problem of our time,” the founders wrote in a post on X.
Superintelligence is an as-yet theoretical form of AI so intelligent that it surpasses any human on Earth.
SSI will be an American company based in Palo Alto, California, and Tel Aviv, Israel.
The announcement elaborated: “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.” It concluded with the frankly eerie message, “Now is the time. Join us.”
Meanwhile, pressure is rising at OpenAI. The company’s flagship product, ChatGPT, has seemingly been experiencing an increasing number of errors and disruptions, including two major outages in under a month.
On June 13, 2024, OpenAI stirred further controversy, especially among those concerned with digital privacy and surveillance, when it announced on X that it had appointed Paul M. Nakasone to its Board of Directors.
Paul M. Nakasone served as Director of the National Security Agency (NSA) from 2018 to 2024. According to OpenAI, he was appointed to help “deliver on our mission by protecting our systems from increasingly bad actors.”
Online, the reactions to Nakasone’s hiring were less than enthusiastic.
Famed whistleblower and former NSA contractor Edward Snowden stated on X:
“The intersection of AI with the ocean of mass surveillance data that's been building up over the past two decades is going to put truly terrible powers in the hands of an unaccountable few.”