We are now also processing video speech in many more languages
We have a few exciting product updates to share today.
Unitary Standard classifies images, videos, and any text, such as titles and descriptions, against the following categories of harmful content:
Unitary Standard works off-the-shelf. You can start getting value out of it as soon as we give you an API key.
Unitary Standard goes beyond basic Items and Characteristics classification. It uses every modality, from speech to text within images, to determine whether a piece of content is harmful.
As a reminder, we also offer Unitary Premium, which delivers higher accuracy than Unitary Standard by fine-tuning our models on your platform's data and content guidelines. Please let us know if you’d like to explore an evaluation of Unitary Premium and see the additional classification performance it can give you.
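To give a sense of how little setup the off-the-shelf product needs, here is a rough sketch of submitting a video for classification with just an API key. The endpoint URL and request fields below are illustrative placeholders rather than the real API surface; our API documentation has the exact details:

import requests

API_KEY = "YOUR_API_KEY"

# Placeholder endpoint and request fields, for illustration only;
# the real paths and parameters are in the API documentation.
VIDEO_ENDPOINT = "https://api.example.com/v1/classify/video"

response = requests.post(
    VIDEO_ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"url": "https://cdn.example.com/uploads/1234.mp4"},
)
response.raise_for_status()
print(response.json())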
We’ve improved our API documentation to make onboarding to Unitary's API even simpler, including:
All Unitary products are now processing speech in many more languages.
Additionally, performance for English has slightly improved.
The "Include speech audio transcriptions in the API response" add-on will incorporate two new fields, language and language_probability, to help you detect the language spoken in a video. Both fields will sit under a new speech group field, which will also contain the texts field returning the transcribed speech itself, as shown in the example below:
{
  "speech": {
    "texts": ["This is speech", "in a video"],
    "language": "en",
    "language_probability": 0.7
  }
}
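As a rough sketch of how your integration might consume these fields, the snippet below parses the example response and applies an illustrative confidence threshold of 0.5; the threshold and variable names are assumptions for this sketch, not part of the API response:

import json

# The payload below matches the shape of the example above.
payload = json.loads("""
{
  "speech": {
    "texts": ["This is speech", "in a video"],
    "language": "en",
    "language_probability": 0.7
  }
}
""")

speech = payload.get("speech", {})
transcript = " ".join(speech.get("texts", []))

# Only act on the detected language above an illustrative confidence threshold.
if speech.get("language_probability", 0.0) >= 0.5:
    print(f"Detected language: {speech['language']}")
print(f"Transcript: {transcript}")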
We know that speed is important for many of your use cases, so we are now offering a synchronous response for our image classification endpoint, targeting sub-second latency. This removes the need to implement webhooks for images.
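Here is an illustrative sketch of what that looks like without webhooks: the request blocks briefly and the classification comes back directly in the HTTP response. The endpoint URL and field names are placeholders; the exact request format is described in our API documentation:

import requests

API_KEY = "YOUR_API_KEY"

# Placeholder endpoint and field names, for illustration only;
# the exact request format is in the API documentation.
IMAGE_ENDPOINT = "https://api.example.com/v1/classify/image"

with open("example.jpg", "rb") as f:
    response = requests.post(
        IMAGE_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
        timeout=5,  # the endpoint targets sub-second latency; allow headroom
    )

response.raise_for_status()

# With the synchronous option, the classification is in this response,
# so no webhook handler is needed for images.
print(response.json())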
Let us know if you’d like to hear more about any of these updates!