Digital Nirvana, a provider of leading-edge media monitoring and metadata generation services, today announced that MetadataIQ, its SaaS-based tool that automatically generates speech-to-text and video intelligence metadata, now supports Avid CTMS APIs. As a result, video editors and producers can use MetadataIQ to extract media directly from Avid Media Composer or Avid MediaCentral Cloud UX (MCCUX) rather than having to connect through Avid Interplay first. This capability enables broadcast networks, postproduction houses, sports organizations, houses of worship, and other Avid users that do not have Interplay in their environments to benefit from MetadataIQ.
Previously, only Avid Interplay users could employ MetadataIQ to extract media and insert speech-to-text and video intelligence metadata as markers within an Avid timeline. Now, all Avid Media Composer/MCCUX users will be able to do this. They will also be able to:
• Ingest different types of metadata, such as speech-to-text, facial recognition, OCR, logos, and objects, each with customizable marker durations and color codes for easy identification of metadata type.
• Submit files without having to create low-res proxies or manually import metadata files into Avid Media Composer/MCCUX.
• Automatically submit media files to Digital Nirvana’s transcription and captioning service to receive the highest-quality, human-curated output.
• Submit data from MCCUX into Digital Nirvana’s Trance product to generate transcripts, captions, and translations in-house and publish files in all industry-supported formats.