
Cambridge female founder empowering the web: Unitary raises $15M to weed out abusive online content


Unitary, a London-based AI-powered visual moderation company, has announced that it has secured a $15 million Series A investment. The round was led by Creandum (known for backing Spotify and Bolt) and also saw participation from Paladin Capital Group and Plural. The funds will be used to support Unitary as it continues to grow. With a vision of providing ethical, empathetic AI, Unitary has launched in multiple languages and is already classifying six million videos a day.

Moderation is a colossal task for platforms. The challenge is not just the volume of material that needs moderating, but also meeting the requirements of online safety legislation and avoiding the mental health harms suffered by human moderators. AI has emerged as a solution to the problem, and TFN asked Unitary’s CEO and founder Sasha Haco (also a Cambridge graduate) about her and Unitary’s journey.

From theoretical physics to practical problem-solving

Haco’s background is not like that of most founders. She started her working life as a scientist, earning her PhD researching black holes alongside Stephen Hawking. But, for Haco, being at the forefront of theoretical physics did not have the same pull as a black hole. “Eventually, I realised that I wanted to play a role in solving problems where I could see a tangible impact on the world within my lifetime,” Haco told us. Feeling that black hole research, however exciting, was like working on a small piece of an infinite puzzle, Haco looked for other options. Referring to James Thewlis, her co-founder and Unitary’s CTO, Haco said, “I joined Entrepreneur First, where I met my co-founder, James, who introduced me to the knotty universe of online safety and content moderation. It was a problem I found really inspiring, and we set up Unitary to try to play a part in tackling it.”

Unitary aims to understand content as well as a human can. “We take into account the extra information surrounding the image or video, to understand whether it’s in an appropriate context or not,” Haco explains. And that additional information makes a crucial difference: “in an image featuring a syringe and some pills, the accompanying context might suggest a medical study, or promote drug abuse.” It means that Unitary can act accordingly when the image or video content is identical but the intent is very different.

An array of moderation and safety applications

Unitary is currently focused on B2B — although Haco says a user-first experience may follow — meaning that almost anywhere you see visual content, Unitary might be playing a moderation role. “We see Unitary as providing a safety layer across all user-generated content, which means video sharing websites, messaging services, marketplaces, and more,” Haco said.

And it is an enormous task. Video content makes up 80% of internet traffic and is set to increase tenfold between 2020 and 2025. It’s a scale that is overwhelming for human moderation, but Haco is confident that AI is well placed to address the challenge. “AI is actually better set up to get it right than humans,” she said. However, she acknowledges that no system is perfect: “there is always the risk that it gets it wrong, and we work hard to mitigate those risks: both of misidentifying content as harmful, and of under-identifying content.” Unitary, therefore, works closely with customers, adjusting its moderation not just to become more accurate generally, but also to get the levels of false positives and negatives right for the platform’s needs.
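The trade-off Haco describes — between over-flagging benign content (false positives) and missing harmful content (false negatives) — is commonly tuned by moving the decision threshold applied to a classifier’s harm score. The sketch below is purely illustrative, with invented scores and labels; it does not reflect Unitary’s actual models or API.

```python
# Illustrative sketch: trading off false positives vs. false negatives
# by moving the decision threshold on a (hypothetical) harm score.

def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold.

    scores: model's estimated probability that content is harmful
    labels: ground truth (True = actually harmful)
    """
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    return fp, fn

# Invented example data: six items of content with model scores and labels.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [True, True, False, True, False, False]

# A strict (low) threshold catches more harm but over-flags benign content;
# a lenient (high) threshold does the opposite.
strict_fp, strict_fn = confusion_counts(scores, labels, 0.35)    # 1 FP, 0 FN
lenient_fp, lenient_fn = confusion_counts(scores, labels, 0.70)  # 0 FP, 1 FN
```

A video-sharing site might prefer the strict setting (accepting some over-removal), while a marketplace with milder content might choose the lenient one — which is why per-platform tuning matters.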

It’s that AI that has set Unitary apart in the moderation field, says Gemma Bloemen, a Principal at Creandum and one of Unitary’s board members: “we first met Sasha and James two years ago and have been incredibly impressed by the thoughtful way they have built their product and team since, and the way they have scaled and grown the business. Unitary has emerged as clear early leaders in the important AI field of content safety.”

That success has occurred even though Haco’s background is a long way from that of a typical founder, both as a female founder and as someone with an academic background in the abstract world of theoretical physics. However, that may have been part of her success. “Starting a startup is all about beating the odds,” she told us. “Venture capitalists say they want outliers, and as a founder, you basically have to be one if you’re going to succeed.” However, she believes that the startup scene needs to change.

“Diversity should be the norm, not the exception. It shouldn’t matter whether you’re a woman or a minority; what should count is your idea and drive,” she said. “We need to expand our view of what a ‘good’ founder looks like and offer equal opportunities to everyone, and lower the barrier to entry so that genuinely committed and driven people have the opportunity to start their own companies and solve pressing problems.”

And if there is one field where beating the odds is a necessity, it’s in moderating images and video that are not just increasing in volume, but also in complexity and type. “Our mission is to make the internet safer, and what that means is constantly evolving,” declares Haco. “We will always be fighting to tackle the biggest harms, whether that’s deep fakes, misinformation, or something that we don’t even know about yet!”
