MulticoreWare Demos LipSync Technology to Automatically Detect Audio-Video Synchronization Using Deep Learning and GPUs at NAB 2017
Saratoga, CA / April 20, 2017 – MulticoreWare, developer of the x265 HEVC video encoder, is showcasing LipSync, a technology that uses deep learning and artificial intelligence to automatically detect audio-video synchronization errors in video streams and files. MulticoreWare will demo LipSync on the show floor of the 2017 National Association of Broadcasters Show (NAB 2017) in Las Vegas.
Typical causes of audio-video misalignment include transmission and transcode errors, incorrect video cuts, and incorrect frame-rate conversions. As the volume of video content, sources, transmissions, and transcodes grows, synchronization errors occur more frequently. MulticoreWare developed LipSync to detect these errors automatically, ensuring content integrity at scales where manual verification is impractical or prohibitively expensive.
LipSync combines the latest deep learning neural network techniques with statistical analysis to test videos without relying on digital fingerprinting or watermarking. It detects audio-video synchronization errors by analyzing moving lips and faces and listening for human speech patterns, much as a human viewer would. Unlike a human viewer, LipSync can process file-based content at 2-3x real time, or analyze multiple video streams in real time, using NVIDIA GPU-accelerated servers.
“We are the first to market with a machine learning-based solution,” says Arun Ramanathan, VP and GM for Machine Learning at MulticoreWare. “This was made possible by combining our expertise in video processing, GPU computing and deep learning.”
NVIDIA GPUs enabled the development of LipSync into a real-time solution. “LipSync is an impressive example of how deep learning, accelerated by NVIDIA GPUs, solves major challenges in creating and distributing video content,” said Will Ramey, Director of Developer Marketing at NVIDIA. “This innovative application addresses a pervasive problem for the entire industry.”
Telestream and MulticoreWare are partnering to make LipSync available to enterprise customers. “Identifying audio-video sync errors has long been a challenge in our industry, and Telestream is excited to offer an automated solution using deep learning technologies,” said Shawn Carnahan, CTO of Telestream. “Telestream is working closely with MulticoreWare to integrate LipSync into our products.” Telestream recently expanded its video quality-control portfolio with the acquisitions of VidCheck and IneoQuest.
MulticoreWare is currently demoing LipSync technology for new partners and licensees. Video quality-control providers, broadcasters, and content distributors can integrate LipSync into their existing software or pipelines, or use it as a stand-alone product on-premises or in the cloud. On-demand usage is supported on Amazon Web Services (AWS), Google Cloud Platform, and other GPU-accelerated cloud services running Windows or Linux. Licensing models include perpetual on-premises installations, integration licenses, and per-usage pricing.
For more information visit https://lipsync.multicorewareinc.com or MulticoreWare’s booth SU14002 at NAB 2017.
MulticoreWare, Inc. is a leading provider of machine learning technology and services, high-performance image and video processing libraries, software optimization consulting, and compilers & tools. MulticoreWare’s expertise in heterogeneous computing forms the foundation of its business, with headquarters in Silicon Valley and over 200 engineers in six global locations. For more information, please visit: https://multicorewareinc.com
This release is also available at: http://www.prweb.com/releases/2017/04/prweb14261049.htm