LLMs are known for their ability to generate code, giving companies virtually unlimited software engineering resources on demand. But what if you could harness that power in real time, letting LLMs write and execute code instantly in production to solve problems as they arise? That’s the bold new reality Riza is building.
Today, the company emerged from stealth with $2.7M in funding, led by Matrix Partners, with participation from 43, to enable LLMs to write and run code autonomously. This advancement allows AI product engineers to improve the capability, accuracy, and reliability of their agents and workflows.
The company plans to use the funding to expand its team, continue its work on untrusted code execution, and develop more tools to make AI code generation more reliable.
How Riza addresses the pain points of software engineers
Former Twilio, Stripe, and Retool engineers Andrew Benton and Kyle Gray founded Riza to tackle a critical challenge in AI and software development. While LLMs can generate code, safely executing it, especially when untrusted or dynamically generated, involves significant risks and complexity. Traditional infrastructure demands human review and a complex setup for secure code execution, slowing development and increasing operational overhead.
Riza’s story began with a Slack message from an old coworker, who faced a challenging problem: safely executing LLM-generated code without compromising infrastructure or getting bogged down by endless human reviews. Benton and Gray — veterans of building developer APIs and plugin systems — created a prototype in a day by extracting the WASM plugin system from their open-source project sqlc. This prototype became Riza’s foundation.
Riza provides an “AI-first infrastructure” that enables developers and AI agents to run code safely and efficiently using a sandboxed WebAssembly (WASM) runtime. This allows LLMs and applications to execute code in multiple languages (such as Python and JavaScript) in isolation, protecting the host environment. The platform is simple to set up, supports multiple languages, and frees developers from managing complex infrastructure or worrying about security vulnerabilities from untrusted code.
Through this approach, Riza enables safe execution of untrusted or LLM-generated code, reduces latency and setup time for code execution in development, CI, and production, and enhances AI agents’ capabilities by allowing them to write and run their tools.
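The core idea — never letting untrusted, LLM-generated code touch the host process directly — can be sketched with a simple local stand-in. The `run_untrusted` helper below is hypothetical and only approximates the pattern by running a snippet in a separate interpreter process with a timeout; Riza’s actual runtime instead uses a sandboxed WASM environment, which also isolates filesystem and network access.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    """Hypothetical stand-in for a sandboxed executor: run the snippet
    in a separate interpreter process with a hard timeout, so a crash
    or hang cannot take down the host application. A real sandbox
    (e.g. a WASM runtime) would go further and restrict filesystem
    and network access as well."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout

# An LLM-generated snippet runs in isolation; only its stdout
# comes back to the host application.
snippet = "print(sum(i * i for i in range(10)))"
print(run_untrusted(snippet))  # → 285
```

The point of the pattern is the boundary: the host only ever sees the snippet’s output, never shares memory with it.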
Riza’s “Just-in-Time Programming”
The Riza team coined “Just-in-Time Programming” to describe this pattern. Running unreviewed AI-generated code in production carries significant security risks, including server compromise and data exfiltration. Adopters of Just-in-Time Programming must either build their own safeguards or accept these risks.
Riza’s production-ready environment for safely running untrusted code lets companies securely harness the benefits of Just-in-Time Programming. For instance, one customer generates custom reports combining data from multiple sources. An LLM writes code to fetch, join, and analyse the data, then creates charts embedded directly in the report. While LLMs often struggle with direct data manipulation and analysis tasks, having them write and execute code produces reliable, accurate reports.
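To make the pattern concrete, here is a sketch of the kind of code an LLM might emit for such a report: aggregate records from one source, join them against a second, and flag the results. The data and field names are hypothetical; the real pipeline would fetch from the customer’s actual sources and render charts.

```python
# Hypothetical inputs standing in for data fetched from two sources.
sales = [
    {"region": "EMEA", "amount": 1200},
    {"region": "AMER", "amount": 800},
    {"region": "EMEA", "amount": 300},
]
targets = {"EMEA": 1000, "AMER": 1000}

# Aggregate sales by region.
totals: dict[str, int] = {}
for row in sales:
    totals[row["region"]] = totals.get(row["region"], 0) + row["amount"]

# Join totals against targets and flag which regions hit theirs.
report = {
    region: {"total": total, "target": targets[region], "met": total >= targets[region]}
    for region, total in totals.items()
}
print(report)
```

Arithmetic like this is exactly where LLMs falter when asked to compute answers directly in text, but the code they write for it is deterministic — which is why executing generated code yields the reliable, accurate reports described above.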
Having just announced its platform’s general availability, Riza has gained significant traction among companies implementing Just-in-Time Programming. Its customers made over 850 million code execution requests in March alone.
“We are in the midst of a generational shift in software where AI and coding agents will become the primary infrastructure users,” says Patrick Malatack, partner at Matrix. “This shift doesn’t eliminate human users but creates new needs that existing solutions can’t meet. At Riza, you have a team of engineers who have built some of the most popular developer APIs in history — applying all that knowledge to build the next generation of compute infrastructure.”