Multiverse Computing grabs $215M to cut AI costs, challenging SandboxAQ and Classiq

Multiverse Computing team
Picture credits: Multiverse Computing

Artificial intelligence is reshaping industries, but the rapid adoption of large language models (LLMs) has exposed a critical bottleneck: the enormous computational and energy cost of training and deploying these models. Training a single LLM can cost up to $5 million and require thousands of GPUs, and infrastructure costs are doubling every year. Multiverse Computing addresses this challenge head-on.

Its technology, CompactifAI, uses quantum-inspired tensor networks to compress large language models by up to 95% without meaningful performance loss. On June 12, 2025, Multiverse Computing announced a landmark Series B funding round of $215 million, bringing the company’s total funding to about $250 million. 

The round was led by Bullhound Capital, known for backing transformative companies like Spotify, Klarna, Revolut, Slack, Unity, and Discord. Notable strategic investors included HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Quantonation, Toshiba, and Capital Riesgo de Euskadi – Grupo SPRI.

This investment represents one of the largest funding rounds in the quantum software space and reflects strong confidence in Multiverse's technology. The funding will accelerate widespread adoption of CompactifAI, allowing Multiverse to tackle the massive costs that currently limit LLM deployment and to reshape AI infrastructure globally.

Bridging science and business

Multiverse Computing was co-founded in 2019 by accomplished scientists and business leaders. CEO Enrique Lizaso Olmos holds multiple degrees, including degrees in mathematics, a PhD in Medicine (Biostatistics), and an MBA from IESE Business School. His experience as deputy CEO of Unnim Bank brings extensive banking and business expertise to the company. CTO and Chief Scientific Officer Román Orús, a professor at the Donostia International Physics Centre in San Sebastián, Spain, is a pioneer in tensor network research. His academic work laid the foundation for CompactifAI's quantum-inspired compression techniques.

The founders identified a gap between advanced quantum research and practical AI solutions. Their vision was to make quantum-inspired algorithms accessible on classical hardware, enabling AI deployment today rather than years from now. They aimed to give companies a competitive edge by offering software solutions that use quantum principles to solve complex industry problems.

What’s so special about Multiverse Computing?

CompactifAI excels at compressing leading open-source large language models, including Llama 4 Scout, Llama 3.3 70B, Llama 3.1 8B, and Mistral Small 3.1, with DeepSeek R1 and additional reasoning models coming soon; proprietary models from providers such as OpenAI are not covered. On these open models, the technology achieves up to 95% compression with only a 2–3% drop in accuracy.

Unlike conventional methods that reduce neurons or parameters, CompactifAI focuses on the model’s correlation space, replacing trainable weights with Matrix Product Operators (MPOs) through sequential Singular Value Decompositions. This approach enables exponential reduction in memory requirements while maintaining polynomial computational complexity, creating dramatically smaller, faster, and more energy-efficient models.
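
To make the idea concrete, the following is a minimal, hypothetical NumPy sketch of the underlying technique: a dense weight matrix is reshaped into a many-index tensor and split, via sequential truncated SVDs, into a chain of small MPO cores whose bond dimension caps the memory cost. The dimensions, the bond-dimension cap, and the toy weight matrix are illustrative assumptions only; this is not Multiverse's implementation, and real pipelines typically fine-tune the compressed model afterwards, which is omitted here.

```python
import numpy as np

def matrix_to_mpo(W, in_dims, out_dims, max_bond):
    """Rewrite a dense weight matrix as a Matrix Product Operator (a chain of
    4-index cores) using sequential truncated SVDs. Illustrative sketch only."""
    n = len(in_dims)
    assert W.shape == (int(np.prod(in_dims)), int(np.prod(out_dims)))
    # View W as a 2n-index tensor and pair up the input/output leg of each site.
    T = W.reshape(list(in_dims) + list(out_dims))
    T = T.transpose([a for pair in zip(range(n), range(n, 2 * n)) for a in pair])
    cores, bond = [], 1
    for k in range(n - 1):
        ik, ok = in_dims[k], out_dims[k]
        U, S, Vt = np.linalg.svd(T.reshape(bond * ik * ok, -1), full_matrices=False)
        chi = min(max_bond, len(S))               # truncate the bond dimension
        cores.append(U[:, :chi].reshape(bond, ik, ok, chi))
        T = S[:chi, None] * Vt[:chi]              # carry the remainder to the right
        bond = chi
    cores.append(T.reshape(bond, in_dims[-1], out_dims[-1], 1))
    return cores

def mpo_to_matrix(cores, in_dims, out_dims):
    """Contract the MPO back into a dense matrix (only to measure the error)."""
    n = len(in_dims)
    T = cores[0]
    for core in cores[1:]:
        T = np.tensordot(T, core, axes=([-1], [0]))
    T = T.reshape([d for pair in zip(in_dims, out_dims) for d in pair])
    T = T.transpose(list(range(0, 2 * n, 2)) + list(range(1, 2 * n, 2)))
    return T.reshape(int(np.prod(in_dims)), int(np.prod(out_dims)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dims = (4, 4, 4, 4)                            # a 256 x 256 toy "layer"
    # Highly correlated toy weights (a Kronecker product) plus weak noise;
    # real trained layers are not this clean, but they are far from random.
    W = rng.standard_normal((4, 4))
    for _ in range(3):
        W = np.kron(W, rng.standard_normal((4, 4)))
    W += 0.01 * rng.standard_normal(W.shape)
    cores = matrix_to_mpo(W, dims, dims, max_bond=8)
    W_hat = mpo_to_matrix(cores, dims, dims)
    n_mpo = sum(c.size for c in cores)
    print(f"parameters: {W.size} -> {n_mpo} ({100 * (1 - n_mpo / W.size):.1f}% fewer)")
    print(f"relative reconstruction error: "
          f"{np.linalg.norm(W - W_hat) / np.linalg.norm(W):.4f}")
```

On this toy example the MPO stores roughly 96% fewer parameters with well under 1% reconstruction error, but only because the weights were built to be highly correlated; the figures quoted in this article refer to Multiverse's own measurements on real models, not to this sketch.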

The practical impact is significant: compressed models deliver 4–12 times faster inference, reduce inference costs by 50–80%, and improve energy efficiency by 84%. The company also cites a 50% cut in training time and a 25% increase in inference speed. For example, its Llama 4 Scout Slim model costs just 10 cents per million tokens on AWS, compared with 14 cents for the original version, roughly a 29% saving.

CompactifAI models are available through cloud deployment on Amazon Web Services, on-premises licensing for enterprises requiring data sovereignty, and edge computing for devices with limited resources. Over 100 customers are already using Multiverse’s technology across 10 industries, including Iberdrola, Bosch, Bank of Canada, BBVA, and the European Tax Agency. The company has been recognised as a “Gartner Cool Vendor” for quantum software technologies in financial services and holds 160 patents in quantum and AI technologies.

Multiverse Computing operates alongside competitors such as Classiq, SandboxAQ, QpiAI, Quantum Mads, Quantum Motion, Terra Quantum, 1QBit, Zapata AI, and CogniFrame. While these companies focus on various aspects of quantum and AI integration, Multiverse stands out for its quantum-inspired compression for large language models. Traditional compression techniques typically result in 20–30% accuracy loss with 50–60% compression rates, while CompactifAI achieves up to 95% compression with only 2–3% accuracy loss.

Democratising AI and shaping the next decade

CompactifAI’s cost and efficiency improvements could democratise AI access by making powerful language models affordable for smaller organisations and enabling deployment in resource-constrained environments. The AI inference market is projected to grow from $106 billion in 2025 to $255 billion by 2030, and Multiverse is well-positioned to capture significant value in this rapidly expanding sector. 

Beyond cost and accessibility, CompactifAI addresses urgent environmental concerns by drastically reducing AI’s energy footprint, aligning with global sustainability goals. The technology’s scalability and flexibility support real-time decision-making and resource optimisation in environments with limited computational resources, such as autonomous vehicles, mobile devices, IoT applications, and remote operations.

Multiverse’s success demonstrates the practical value of quantum-inspired algorithms running on classical computers, potentially bridging the gap until fault-tolerant quantum computers become widely available. 

With $215 million in new funding and a rapidly expanding customer base, Multiverse Computing is leading a technological revolution that will fundamentally reshape AI model deployment, making it more efficient, cost-effective, and accessible across industries and applications worldwide.
