Here are answers to some frequently asked questions.
Whilst English is our primary language, we also offer capabilities in French, Spanish, Italian, Portuguese, Turkish, Russian and more. Get in touch and let us know the languages you’re looking to cover, and we’ll let you know how we can help.
Our models are state-of-the-art and, to our knowledge, we have the most accurate software available. Get in touch, and we can go through specific accuracy-related data that’s most relevant to your use case.
Speed: For video we generally target better-than-real-time processing (i.e. a 30-second video takes under 30 seconds to process). Images take around a second, and text a fraction of a second. Processing times vary with the size and length of the content.
Scale: Our platform scales dynamically to handle large batches of content and many simultaneous requests, maintaining low response times even for billions of uploads.
Yes - we are a member of GARM and support all GARM brand safety and suitability categories and risk tiers. We also offer custom categories based on your needs. We understand that while GARM has provided a useful framework, there are many issues that might fall outside of its current scope.
We don’t store any personal data, and only process content and data in line with our partners’ platform regulations.
Overall, we’re compliant with all applicable provisions of the EU GDPR, the UK GDPR, the Data Protection Act and any other applicable data protection regulations. We can also work with you to create a tailored agreement specific to your needs.
Our pricing is based on the volume of content you need to evaluate. Get in touch and we can create a package tailored to your needs.
We collaborate with leading universities to develop datasets that are wide-ranging, large-scale, and representative.
Our automated moderation models outperform human accuracy in numerous cases. We can process over 5 million minutes of video in a single day - this would take a human over a decade to watch, let alone moderate! Unitary allows human moderators to keep up with the ever-increasing volume of content.
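As a rough sanity check on that comparison (a sketch: the 5-million-minute figure comes from the answer above; the viewing schedules are illustrative assumptions):

```python
# Minutes of video processed in a single day (figure quoted above).
minutes_per_day = 5_000_000

# How long watching that much footage would take a human non-stop, 24/7.
minutes_per_year = 60 * 24 * 365
years_nonstop = minutes_per_day / minutes_per_year
print(f"{years_nonstop:.1f} years of round-the-clock viewing")  # ~9.5 years

# At a 40-hour working week, the same footage takes several times longer.
minutes_per_work_year = 40 * 60 * 52
years_working = minutes_per_day / minutes_per_work_year
print(f"{years_working:.1f} years at 40 hours per week")  # ~40 years
```

Even with no breaks at all, one day of platform throughput is the better part of a decade of human viewing; at normal working hours it is several decades.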
Multimodal algorithms allow us to interpret content as a human would, taking into account text, visuals and audio signals simultaneously, to provide an overall classification.
Our API analyses text, image or video content. We apply multimodal machine learning models to deliver a customisable output depending on the use case.
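To illustrate what calling such an API might look like, here is a minimal sketch. The endpoint URL, key, field names, and response shape are hypothetical assumptions for illustration, not Unitary's actual API:

```python
import json
import urllib.request

# Hypothetical endpoint and credentials -- illustrative assumptions only.
API_URL = "https://api.example.com/v1/classify"
API_KEY = "your-api-key"

def build_payload(content_type: str, content: str) -> bytes:
    """Encode a moderation request body as JSON (assumed field names)."""
    return json.dumps({"content_type": content_type, "content": content}).encode()

def classify(content_type: str, content: str) -> dict:
    """Submit content for moderation and return the classification response."""
    request = urllib.request.Request(
        API_URL,
        data=build_payload(content_type, content),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

In practice you would substitute the real endpoint and credentials from your agreement, and image or video content would typically be submitted by URL or file upload rather than inline.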