The digital technology and media and entertainment industries are beginning to come together to solve a common problem: how to extract, harness and make better use of the massive amounts of video content and data they produce.
Anyone who has ever appeared in or produced a movie, commercial or business video knows that much of the footage often winds up on the proverbial cutting room floor. The same is true of digital data. It is estimated that more than 80 percent of enterprise data is considered “dark”—created but never used.
What if there were a way to pick up this data equivalent of cutting room discards and turn them into new assets? What if technologies such as IBM Watson could work with video editors to unearth data treasures that benefit and excite both producers and audiences? Sounds like the plot to a blockbuster movie, but it is now more fact than fiction.
Cloud-based cognitive solutions and AI technologies at work
Digital technology companies are now busy developing cloud-based cognitive solutions to help uncover new data and insights about video content and its viewers. New AI technologies can help media companies identify meaningful content in video. Using deep learning, these same companies can identify viewers who will want to watch newly created derivative works based on this content. These solutions also can be used by companies in other industries that depend on video to communicate with employees, partners or customers.
Some companies have research projects or narrowly focused cognitive services for video. Some have even employed rudimentary cognitive technologies, able to provide simple transcriptions of what’s said in a video. But what’s really needed is an approach that uses cognitive technology to deliver new insights about both content and viewers.
A great example is a proof of concept IBM developed for the Masters Golf Tournament this April to automatically identify highlights. The system was trained to “watch” and “hear” broadcast videos of the golf tournament in real time, accurately identifying the start and end frames of exciting moments using commentator tone, players celebrating, high fives and other indicators. IBM created a dashboard that showed the latest highlight and its excitement level to prove the system’s ability to generate highlights in real time.
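IBM has not published how the proof of concept works internally, but the general approach it describes — scoring each frame on cues like commentator tone and crowd reaction, then marking the start and end of sustained high-excitement spans — can be sketched in a few lines. Everything below (the feature names, weights and threshold) is an illustrative assumption, not IBM's actual model.

```python
# Hypothetical sketch of real-time highlight detection. The feature
# names, weights, and threshold are illustrative assumptions only;
# IBM's actual Masters system is proprietary.

def excitement_score(frame_features):
    """Combine per-frame cues into one excitement score in [0, 1].

    frame_features is assumed to be a dict of already-extracted
    signals: commentator audio tone, crowd noise, and a detector flag
    for celebration gestures such as high fives.
    """
    score = (0.4 * frame_features["commentator_tone"] +
             0.4 * frame_features["crowd_noise"] +
             0.2 * frame_features["celebration_detected"])
    return min(max(score, 0.0), 1.0)


def find_highlights(frames, threshold=0.6):
    """Return (start, end) frame-index pairs where excitement stays
    above the threshold, i.e. candidate highlight clips."""
    highlights, start = [], None
    for i, feats in enumerate(frames):
        if excitement_score(feats) >= threshold:
            if start is None:
                start = i  # a highlight span begins
        elif start is not None:
            highlights.append((start, i - 1))  # span just ended
            start = None
    if start is not None:  # close a span still open at the final frame
        highlights.append((start, len(frames) - 1))
    return highlights
```

In a real broadcast pipeline the frame features would come from audio analysis and visual classifiers running on the live feed; the dashboard IBM describes would then surface the most recent span returned by a function like `find_highlights`.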
IBM Watson analyzes video
This week at the National Association of Broadcasters Show, IBM also is showcasing a new cloud-based service that will use Watson’s cognitive capabilities to provide a deep understanding of video that generates new metadata identifying keywords, concepts, visual imagery, tone and emotional context. Media companies will be able to use the detailed data to better match content and advertising with viewer interests.
The combination of using cognitive extraction technologies to understand complex video content and Watson learning methods to identify viewer preferences will be an industry breakthrough. Building a deep semantic understanding of the video provides a richer analysis of what’s in it. Using Watson to analyze a combination of viewer behaviors—what people are watching, how long they are watching and what they are saying on social media—provides a deeper understanding of what they may want to watch next and helps inform advertising and marketing teams. The two technologies together form a very powerful combination.
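The pairing described above — extracted content metadata on one side, viewer behavior on the other — can be illustrated with a toy recommender. Watson's actual models are proprietary, so this sketch simply weights a viewer's past keywords by how long they watched each video, then ranks new videos by keyword overlap; all names and data here are hypothetical.

```python
# Toy sketch of matching content metadata to viewer behavior.
# Illustrative only: Watson's real ranking models are not public.
from collections import Counter


def viewer_profile(history):
    """history: list of (keywords, fraction_watched) pairs from past
    viewing. Returns keyword weights reflecting viewer interest,
    so keywords from videos watched to the end count more."""
    profile = Counter()
    for keywords, fraction in history:
        for kw in keywords:
            profile[kw] += fraction
    return profile


def rank_videos(candidates, profile):
    """candidates: dict mapping video_id -> set of metadata keywords
    (as a cognitive extraction service might produce). Returns video
    ids sorted by affinity to the viewer profile, best first."""
    def affinity(item):
        _, keywords = item
        return sum(profile[kw] for kw in keywords)
    return [vid for vid, _ in
            sorted(candidates.items(), key=affinity, reverse=True)]
```

A production system would of course use richer signals (social media sentiment, visual imagery, emotional tone) and a learned model rather than a keyword sum, but the division of labor is the same: one component describes the content, the other describes the audience, and the ranking joins them.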
It is innovative cognitive technologies like these that will drive the future of entertainment and business video. It’s a classic win-win for both consumers and companies. These new services will improve the viewer experience by helping content providers deliver video tailored for specific audiences and their interests. And they will empower companies to unlock and better monetize the value in their video content.
Many experts have called our era the “Golden Age of Entertainment,” with more content choices than ever. Cognitive technologies are being applied to the media and entertainment industry at a crucial time to better understand and categorize this content and create an even more engaging experience for viewers.
This article is published as part of the IDG Contributor Network.