Digital Nirvana, a provider of leading-edge media monitoring and metadata generation services, today announced that MetadataIQ, its SaaS-based tool that automatically generates speech-to-text and video intelligence metadata, now supports Avid CTMS APIs. As a result, video editors and producers can now use MetadataIQ to extract media directly from Avid Media Composer or Avid MediaCentral Cloud UX (MCCUX) rather than having to connect with Avid Interplay first. This capability extends the benefits of MetadataIQ to broadcast networks, postproduction houses, sports organizations, houses of worship, and other Avid users that do not have Interplay in their environments.
Previously, only Avid Interplay users were able to employ MetadataIQ to extract media and insert speech-to-text and video intelligence metadata as markers within an Avid timeline. Now, all Avid Media Composer/MCCUX users will be able to do this (an illustrative sketch of the flow follows the list below). They will also be able to:
• Ingest different types of metadata (speech-to-text, facial recognition, OCR, logo detection, and object detection), each with customizable marker durations and color codes for easy identification of each metadata type.
• Submit files without having to create low-res proxies or manually import metadata files into Avid Media Composer/MCCUX.
• Automatically submit media files to Digital Nirvana’s transcription and caption service to receive the highest-quality, human-curated output.
• Submit data from MCCUX into Digital Nirvana’s Trance product to generate transcripts, captions, and translations in-house and publish files in all industry-standard formats.
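For readers curious about what this direct, Interplay-free integration might look like in practice, here is a minimal sketch in Python. The host name, endpoint paths, payload fields, and token are hypothetical placeholders; Avid’s actual CTMS APIs and MetadataIQ’s internals define the real calls and schemas.

```python
import requests

# Hypothetical base URL and credentials; the real CTMS endpoints, auth
# scheme, and payload schemas are defined by Avid and will differ.
CTMS_BASE = "https://mccux.example.com/api"
AUTH = {"Authorization": "Bearer <token>"}

def fetch_asset(asset_id: str) -> dict:
    """Pull an asset record directly from Media Composer/MCCUX
    (illustrative; note there is no Interplay hop in this workflow)."""
    resp = requests.get(f"{CTMS_BASE}/assets/{asset_id}", headers=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()

def insert_marker(asset_id: str, start_tc: str, duration_s: int,
                  text: str, marker_type: str, color: str) -> None:
    """Write one timeline marker back to the asset. Duration and color
    are customizable per metadata type, as described in the list above."""
    payload = {
        "start": start_tc,       # timecode where the marker begins
        "duration": duration_s,  # customizable per metadata type
        "text": text,            # e.g., a transcript segment
        "type": marker_type,     # e.g., "speech-to-text" or "logo"
        "color": color,          # e.g., "green" for transcript markers
    }
    resp = requests.post(f"{CTMS_BASE}/assets/{asset_id}/markers",
                         headers=AUTH, json=payload, timeout=30)
    resp.raise_for_status()

# Example: tag a transcript segment as a green speech-to-text marker.
insert_marker("asset-123", "01:02:03:00", 5,
              "...and we're live from the studio...",
              "speech-to-text", "green")
```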
These capabilities will greatly improve the workflow for media companies that use Avid Media Composer or MCCUX to produce content.
For example, media operations can ingest raw camera feeds and clips to create speech-to-text and video intelligence metadata, which editors can consume in real time if required. Editors can easily type a search term within Media Composer or MCCUX, identify the relevant clip, and start creating content.
For certain shows (reality, on-street interviews, etc.), the machine-generated or human-curated transcripts can be used in the script-generation process.
The postproduction team can submit files directly from the existing workflow to Digital Nirvana to generate transcripts, closed captions/subtitles, and translations. Then the team can either receive the output as sidecar files or ingest it directly back into Avid MCCUX as timeline markers.
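To make the sidecar option concrete, the short sketch below writes transcript segments into an SRT sidecar file. The segment data and file name are invented for the example; actual deliverables can arrive in other caption formats as well.

```python
# Invented sample segments: (index, start, end, caption text).
segments = [
    (1, "00:00:01,000", "00:00:03,500", "Welcome back to the show."),
    (2, "00:00:03,600", "00:00:06,200", "Tonight we have a special guest."),
]

# Write a minimal SRT sidecar file next to the media asset.
with open("episode_101.srt", "w", encoding="utf-8") as f:
    for idx, start, end, text in segments:
        f.write(f"{idx}\n{start} --> {end}\n{text}\n\n")
```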
If the postproduction team includes in-house transcribers, captioners, or translators, editors can automatically route the media asset from Avid to MetadataIQ to create a low-res proxy, generate a speech-to-text draft, and present it to the in-house team in Digital Nirvana’s user-friendly Trance interface. There, users get support from artificial intelligence and machine learning for efficient captioning and translation.
With timecoded logo detection metadata, sales teams can get a clearer picture of the total screen presence of each sponsor/advertiser.
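As a simple illustration of how timecoded logo detections translate into sponsor screen time (the detection tuples below are invented sample data, not actual MetadataIQ output), the detected intervals can be summed per logo:

```python
from collections import defaultdict

# Invented sample detections: (logo name, start seconds, end seconds).
detections = [
    ("AcmeCola", 12.0, 19.5),
    ("AcmeCola", 80.0, 95.0),
    ("ZenithAir", 40.0, 55.0),
]

# Total on-screen duration per sponsor, largest first.
screen_time = defaultdict(float)
for logo, start, end in detections:
    screen_time[logo] += end - start

for logo, seconds in sorted(screen_time.items(), key=lambda kv: -kv[1]):
    print(f"{logo}: {seconds:.1f} s on screen")
# AcmeCola: 22.5 s on screen
# ZenithAir: 15.0 s on screen
```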
For video on demand (VOD) and content repurposing, the abundant video intelligence metadata helps to accurately identify ad spots and assists with additional brand/product placement and replacement.
“MetadataIQ used to connect only with Avid Interplay for media extraction, which meant customers had to have Interplay in their Avid environments to use MetadataIQ. With the success of MetadataIQ and its ability to enhance content production, Digital Nirvana received a lot of requests from prospective customers to extend our services to non-Interplay users. And that’s exactly what we’ve done,” said Hiren Hindocha, CEO at Digital Nirvana. “This integration lets editors use MetadataIQ throughout the entire pipeline without having to create additional proxies, import metadata files, or anything else, and with no Interplay required.”
More information about Digital Nirvana and its products and services is available at www.digital-nirvana.com.