
The future of content moderation is not separate AI and human teams. It's a truly blended solution.

The future of moderation is an integrated team working side-by-side. Enter Unitary's Virtual Moderators: AI agents, backed up by expert human judgement, that replace your moderation vendor at a fraction of your current costs, with zero integration.

Unitary

Unitary’s Virtual Moderators keep fast-growing online communities safe. Our hybrid solution delivers moderation decisions with the speed of AI and the accuracy and accountability of human judgement. 

When we started Unitary five years ago, we developed cutting-edge AI to detect harms on the world’s leading user-generated content platforms. But detection was only ever part of moderation. Deciding what belongs or doesn’t belong on a platform requires a holistic approach to content - a close look at its context and its relationship to the platform’s ethos and guidelines.

Human moderation limits platforms

The nuance of moderation plays a large part in how a community maintains trust and builds its identity. And that’s why a significant percentage of moderation on the world’s most technologically sophisticated platforms is still done by people.

But online communities that rely on people for moderation experience limitations. When you’re used to technology supporting most of your platform’s growth, the part of your operations that relies on people quickly becomes the most expensive and least scalable.

Platforms around the world spend over $25 billion a year on content moderation and employ over a million people - a workforce that, if it were a city, would be the 10th largest in the United States.

Human moderation has other limitations. It’s hard to scale a human team up or pare it down. When too much flagged content arrives at once, moderation slows down and service levels can’t be guaranteed. Headcount turnover means new people constantly require training. And when policies are updated, it takes human moderators time to adjust, which incurs additional expense.

Platforms have tackled this by auto-actioning on cases where their detection models are most confident. But there’s a ceiling on what these lightweight scanning models can accurately automate. They cannot automate cases that are challenging, contain very nuanced contextual information, or involve investigating or reasoning through multiple steps. 
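
In practice, this auto-actioning pattern usually reduces to a confidence threshold - roughly the sketch below, where the thresholds and routing labels are illustrative assumptions rather than any specific platform’s values.

```python
# Minimal sketch of confidence-threshold auto-actioning (illustrative values only).

AUTO_REMOVE_THRESHOLD = 0.98   # act automatically only when the model is very confident
AUTO_APPROVE_THRESHOLD = 0.02  # likewise for content the model is confident is benign

def route_item(harm_score: float) -> str:
    """Route a piece of content using a single classifier score."""
    if harm_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if harm_score <= AUTO_APPROVE_THRESHOLD:
        return "auto_approve"
    # Everything in between - the nuanced, contextual cases - still goes to humans.
    return "human_review_queue"
```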

On top of this, having completely separate AI and outsourced human teams gets in the way of the fast feedback loops that are essential for surfacing policy edge cases and improving the AI - especially when the outsourced human teams have no incentive to label data accurately.

Platforms are starting to experiment with LLMs, but making these work in a sandbox is very different to making them work accurately and efficiently at scale, and significant ongoing effort is needed to stay on top of constantly evolving models.

Unitary’s Virtual Moderators provide a truly blended approach

What we realised at Unitary is that the future of fast, accurate, low-cost content moderation is an integrated team of ML engineers, AI agents and expert human agents working side-by-side, iterating constantly.

Unitary’s Virtual Moderators provide exactly this, delivering moderation at human-level accuracy for a fraction of the price and with zero engineering resource required.

The Virtual Moderators solution is composed of three elements. The first element is an AI agent - powered by large language models (LLMs) and vision language models (VLMs) - that can learn nuanced policies, gather data and reason through cases just like a human would. The agent can look at content in context, even when the relevant context varies by case. It can moderate any single modality or combination of modalities - text, images, video, or live streams.

The second element is a team of expert humans that train, supervise and support the AI agent. When a customer first onboards, this team makes the majority of decisions, training the AI agent in parallel. As the agent learns, it quickly takes on more and more decisions, showing its working and knowing when to escalate the trickiest cases to the human team. As a growing share of decisions is fully automated, the average exposure time of harmful content falls.

The third element is a human quality assurance team that constantly oversees the AI and human agents to ensure the combined system’s performance matches that of a human-only moderation team.
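
To make the three-element flow concrete, here is a minimal Python sketch of how a single case might move through it - the names, threshold and sampling rate are assumptions for illustration, not Unitary’s actual implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of the hybrid flow described above; all names and values are assumptions.

@dataclass
class Decision:
    action: str        # e.g. "approve", "remove"
    rationale: str     # the agent shows its working
    confidence: float

ESCALATION_THRESHOLD = 0.9  # hypothetical cut-off below which cases go to humans
QA_SAMPLE_RATE = 0.05       # hypothetical fraction of decisions re-checked by QA

def moderate(case, ai_agent, human_team, qa_team) -> Decision:
    decision = ai_agent.decide(case)             # element 1: LLM/VLM-powered agent
    if decision.confidence < ESCALATION_THRESHOLD:
        decision = human_team.decide(case)       # element 2: expert human fallback
    if qa_team.should_sample(QA_SAMPLE_RATE):
        qa_team.review(case, decision)           # element 3: ongoing quality assurance
    return decision
```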

With sparing but precise use of human moderation and oversight, the Virtual Moderators solution is able to reach human levels of accuracy at a fraction of the price of human-only moderation.

Virtual Moderators can be implemented with zero engineering resource

You can implement the Virtual Moderators with zero engineering resources or integration. We know that creating API access and structuring data uses up scarce engineering resources and can delay time-to-value. So we’ve engineered the Virtual Moderators to work directly in the interfaces your moderators already use, logging into your tools and recording decisions exactly as a human would.

Many Trust & Safety teams don’t have ready access to engineering resources. With the Virtual Moderators, your in-house Trust & Safety team has a technology partner at its disposal to update policies and refine moderation criteria.

Pay only for what you use, and reduce costs as you grow

Whereas human moderation usually charges by resource, the Virtual Moderator charges per task. That means platforms pay for exactly what they use. Since the Virtual Moderators are available 24/7 and can be easily scaled up and down, platforms will no longer have to worry about utilisation rate during volume troughs or staffing shortages during volume peaks. The scalability of the Virtual Moderator’s hybrid approach means that you can significantly reduce the average time that harmful content stays on your platform. 

Combined with the fact that the Virtual Moderator solution comes with a commitment to human-level accuracy, this also means the solution’s incentives are completely aligned with the customer’s: delivering the most accurate moderation decisions with the highest efficiency.

On top of this, the Virtual Moderators pricing model delivers a lower price per task as volumes increase. Dispensing with the linear moderation costs that come with outsourced human teams lets you increase moderation coverage within your current budget - growing margins rather than headcount as you scale your business.
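
As a purely hypothetical illustration of how tiered per-task pricing makes cost grow sub-linearly with volume (the tiers and prices below are invented for the example and are not Unitary’s rates):

```python
# Hypothetical tiered per-task pricing, purely for illustration.
TIERS = [               # (tasks in this tier, price per task in USD) - invented numbers
    (100_000, 0.05),
    (400_000, 0.04),
    (float("inf"), 0.03),
]

def monthly_cost(tasks: int) -> float:
    """Charge each tranche of tasks at its tier's rate."""
    remaining, total = tasks, 0.0
    for tier_size, price in TIERS:
        billed = min(remaining, tier_size)
        total += billed * price
        remaining -= billed
        if remaining <= 0:
            break
    return total

# The effective price per task falls as volume grows:
print(monthly_cost(50_000) / 50_000)        # 0.05
print(monthly_cost(1_000_000) / 1_000_000)  # 0.036
```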

How the Virtual Moderators operate to deliver fast, highly accurate moderation responses

Virtual Moderators are accountable and agile

We understand that explainability and accountability are core to trust and safety. So the Virtual Moderators provide an explanation with every decision. This means we can make sure that the Virtual Moderators’ decisions are consistent with platform rules and previous moderation decisions. The explanations that accompany our moderation decisions also provide you with smarter insights to inform the evolution of your platform policy.
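
A decision record of this kind might look roughly like the sketch below; the field names and the example rationale are illustrative assumptions, not Unitary’s actual output format.

```python
# Illustrative shape of an explainable moderation decision; every field is an assumption.
decision_record = {
    "case_id": "example-123",
    "action": "remove",
    "policy_reference": "Community Guidelines 4.2 (harassment)",  # hypothetical policy clause
    "rationale": (
        "The reply targets a named user with repeated insults; in the context of the "
        "preceding thread this meets the platform's definition of harassment."
    ),
    "decided_by": "ai_agent",   # or "human_moderator" after escalation
    "escalated": False,
}
```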

The Virtual Moderator is able to quickly adapt to policy changes because of its AI-first approach and human fallback service model.

Our AI-first approach to policy adoption means that we can immediately action policy, product, or operations changes. We can incorporate policy updates quickly and with less training overhead, reducing the risk of those changes having an unplanned impact on operations. The Virtual Moderators’ human fallback means that even policy changes which do require training can be implemented in short order. And our task-based charging model means we’re incentivised to implement policy changes as soon as possible.

Virtual Moderators reduce the wellness impact on your moderators

A human-only approach to content moderation exposes moderators to a high volume of stress-inducing content. The Virtual Moderators’ hybrid approach minimises the amount of harmful content that human moderators have to review, focusing their attention on the hardest cases and thereby reducing your platform’s wellness impact.

In short, the Virtual Moderators help online communities reach their potential by providing the most scalable and cost-effective solution to content moderation, helping them to keep users safe and enact their brand values as they grow.

Want to learn more about the Virtual Moderators? Get in touch with us!