Groq, a Silicon Valley-headquartered startup specialising in fast AI inference, has raised $640M in a Series D round, valuing the company at $2.8B. The investment, led by BlackRock Private Equity Partners, with participation from strategic investors including Neuberger Berman, Type One Ventures, Cisco Investments, Global Brain’s KDDI Open Innovation Fund III, and Samsung Catalyst Fund, underscores Groq’s ambition to disrupt the AI hardware market currently dominated by Nvidia.
How will it use the investment?
Groq’s mission is to democratise access to cutting-edge AI, making compute resources available to developers beyond just the largest tech companies. This inclusive approach expands the reach of AI and fosters a sense of community among developers. The funding will enable Groq to deploy over 100,000 additional LPUs into GroqCloud, further supporting this mission.
Jonathan Ross, CEO and Founder of Groq, stated, “Training AI models is solved, now it’s time to deploy these models so the world can use them. Having secured twice the funding sought, we now plan to significantly expand our talent density. We’re the team enabling hundreds of thousands of developers to build on open models, and – we’re hiring.”
Groq also announced two key additions to its leadership team. Stuart Pann, a former senior executive at HP and Intel, will join as Chief Operating Officer, bringing his extensive experience in the tech industry to the company. Yann LeCun, VP and Chief AI Scientist at Meta, will become a technical advisor, providing valuable insights and guidance in AI.
What does Groq do?
Founded in 2016 by Jonathan Ross, a former Google engineer, Groq specialises in AI inference. It develops chips (LPUs) to accelerate AI workloads, particularly natural language processing. Groq’s hardware and software platform delivers exceptional AI compute speed, quality, and energy efficiency. The company offers cloud and on-premises solutions, targeting applications demanding real-time processing, such as autonomous vehicles, advanced robotics, and large-scale data centres.
Groq has rapidly grown to over 360,000 developers building on GroqCloud, creating AI applications using open models such as Meta’s Llama 3.1, OpenAI’s Whisper Large V3, Google’s Gemma, and Mistral’s Mixtral. The new funding will expand GroqCloud’s capabilities and scale its tokens-as-a-service (TaaS) offering, a unique service that allows developers to access and use Groq’s AI compute resources. By the end of Q1 2025, Groq plans to deploy over 108,000 LPUs manufactured by GlobalFoundries, marking the largest AI inference compute deployment outside of major tech giants.
Investors’ views
Samir Menon, Managing Director at BlackRock Private Equity Partners, highlighted: “The market for AI computing is meaningful, and Groq’s vertically integrated solution is well positioned to meet this opportunity. We look forward to supporting Groq as they scale to meet demand and accelerate their innovation further.”
Similarly, Marco Chisari, Head of Samsung Semiconductor Innovation Center and EVP of Samsung Electronics, shared his vision: “We are highly impressed by Groq’s disruptive compute architecture and their software-first approach. Groq’s record-breaking speed and near-instant Generative AI inference performance leads the market.”
What do we think about Groq?
Groq’s funding and strategic partnerships position the company as a formidable challenger to Nvidia’s dominance in the AI hardware market. Its focus on AI inference, its technological advances, and its growing developer community create a strong foundation for future growth. As demand for AI-powered solutions surges, Groq is well-placed to capitalise on this opportunity.