Buoyed by growing confidence from Silicon Valley, Safe Superintelligence (SSI), the AI company co-founded by former OpenAI chief scientist Ilya Sutskever, has closed a $2 billion funding round. The fresh capital lifts the company’s valuation more than sixfold, from $5 billion to $32 billion, in less than a year, placing SSI firmly at the centre of the AI safety movement. SSI’s first funding round came in September 2024, when it raised $1 billion at a $5 billion valuation.
The round combines equity and strategic partnerships: Greenoaks Capital committed $500 million, with further backing from major players including Alphabet, NVIDIA, Andreessen Horowitz, Lightspeed Venture Partners, and DST Global.
Notably, Alphabet (Google’s parent company) is not only an investor but has also entered into a major infrastructure agreement with SSI, providing access to Google Cloud’s TPUs (tensor processing units)—a significant shift, as Google previously reserved these chips for internal use. This partnership positions SSI to leverage cutting-edge AI hardware beyond the industry-standard NVIDIA GPUs, which dominate over 80% of the AI chip market.
The funding will be used to rapidly scale SSI’s research and development, expand its global operations, and deepen its computing resources for developing safe and powerful AI systems. Rather than pushing out commercial products, SSI deliberately channels the capital into long-term innovation, investing in supercomputing infrastructure, safety alignment research, and a high-caliber team of AI researchers across its US and Israeli hubs.
SSI’s total funding now stands at $3 billion, making it one of the most highly valued AI startups globally before releasing any product or public roadmap.
Here’s an in-depth look at five things to know about this AI unicorn.
Exclusive focus on AI safety
SSI was founded in June 2024 with a bold mission: to create superintelligent AI that is fundamentally safe for humanity. Co-founders Ilya Sutskever (formerly OpenAI), Daniel Gross (ex-Apple), and Daniel Levy (former researcher at OpenAI) broke away from the current commercial trajectory of AI development. They aim to take a safety-first approach, designing systems where alignment and control are embedded from the ground up, not retrofitted as an afterthought. The company intentionally avoids short-term product cycles to stay focused on long-term breakthroughs.
SSI’s mission is singular: to build a “safe superintelligence” as its only product, with all resources and research directed toward this goal. The company’s website currently serves as a placeholder, emphasizing this focused mission.
Operates out of two AI talent hotspots
Operating in Palo Alto and Tel Aviv, SSI positions itself in proximity to cutting-edge research and top technical talent. Palo Alto offers access to Silicon Valley’s innovation ecosystem, while Tel Aviv is a hub for cybersecurity and AI research. SSI is building a small, elite team of engineers and researchers, attracting talent by offering opportunities to work on groundbreaking projects that prioritise global safety.
SSI’s founders launched the company shortly after Sutskever’s high-profile departure from OpenAI in May 2024, following a failed internal coup against CEO Sam Altman. This context underscores SSI’s distinct philosophical and strategic break from OpenAI’s current direction.
Doesn’t focus on building consumer products
Unlike OpenAI or Anthropic, which are racing to deploy chatbots and productivity tools, SSI is deliberately not commercialising its work in the short term. It’s focused solely on foundational research and superintelligence alignment. The founders believe that releasing AI products too early, without sufficient safeguards, risks real-world harm. Their approach could redefine norms in the AI race, prioritising long-term safety over quarterly returns.
SSI’s approach is seen as a subtle critique of the commercial and product-driven strategies of OpenAI, Anthropic, and Google DeepMind, aiming instead for a breakthrough that surpasses current language models and AGI efforts.
Real-world applications with safety protocols
SSI envisions applying its superintelligent systems in various sectors, including healthcare and education. In healthcare, for example, its AI could analyse medical records and research to provide accurate diagnoses and personalised treatment plans, while ensuring patient data privacy and ethical use. In education, superintelligent tutoring systems could offer personalised learning plans that adapt to individual styles and paces, with safety protocols to protect students’ privacy and provide equitable access.
While SSI’s product is still under development, insiders suggest the company is pursuing novel methods to scale AI systems and achieve reasoning capabilities beyond today’s models, though details remain closely guarded.
Potential to reshape AI governance
With support from Silicon Valley investors and a dedicated focus on safe superintelligence, SSI may become a key player in shaping how the world thinks about and regulates advanced AI. The company is expected to contribute significantly to open research on alignment, safety benchmarks, and best practices for governance. As global debates heat up over how to manage the risks of AI, SSI’s influence could extend far beyond the lab into policy, academia, and ethics frameworks.
SSI’s emergence and massive valuation, despite having no public product, reflect a growing recognition among investors and policymakers of the existential risks and governance challenges posed by superintelligent AI. The company’s “safety-first” branding positions it as a potential leader in the development of global AI safety standards.
Our thoughts
SSI is not just another AI unicorn; it is focused on safety in one of the most transformative technologies of our time. With deep-pocketed backers, a world-class team, and infrastructure partnerships in place, it is on a mission to build the safest possible future for AI. With $2 billion in its kitty, it has the potential to do exactly that.