In this Streaming Media West 2009 Red Carpet interview, Peter Cervieri talks with Ben Weinberger, CEO and Co-founder of Digitalsmiths, the leading indexing and analysis company that creates advanced time-based metadata and publishing solutions for clients like Warner Bros., Telepictures and TMZ.com. Ben also spoke at the Online Video Platform Summit on the Analytics Panel - Measuring Success, about how to collect audience data and metrics, what to do with them, and how to effectively monetize video.
Digitalsmiths just announced yesterday the newest version of its flagship product, VideoSense® 2.5. The new version adds enhanced video intelligence and monetization tools including direct asset upload, advanced clip creation, YouTube integration and a new reporting dashboard for analytics. Digitalsmiths focuses exclusively on Tier 1 content (TV shows, movies and sports for Hollywood studios, broadcasters, distributors and publishers), using a variety of computer vision, speech recognition, facial recognition, scene classification and object identification algorithms to build a unique and deep metadata framework.
Digitalsmiths was founded while Ben was still in college at Southern Illinois University and was originally a Web design company. In 2002, they began indexing TV show content for TV studio clients, reporting on things like how many times Kramer mentioned Cuban cigars on "Seinfeld" or where certain scenes took place, such as Jerry's apartment or the diner. In 2005, the company began developing a system for automated analysis of time-based metadata, which became VideoSense. Their technology was developed by a team that Co-founder and CTO Matt Berry assembled, comprising computer vision scientists from places like NASA, the FBI, academia and Fortune 500 companies.
Ben offers this advice to media companies and publishers to better measure success:
"You need to come at the publishing process from a holistic approach of, it's not just broadband and it's broadband as a separate business, it's an integrated digital media workflow that you absolutely have to have data. If you don't have data, you're publishing a dumb asset. Your asset is at a a disadvantage compared to every other asset that we're involved with. So put all the data around it, look at ways to generate revenue today and tomorrow that you're going to usethat asset. And once you make a blue print of that asset you can use it over and over again. You don't have to reinvent the wheel."
How VideoSense® 2.5 works
Digitalsmiths’ suite of visual interpretation tools processes each frame of video using proprietary algorithms such as facial recognition, scene classification and object identification to build a unique metadata framework – or MetaFrame – of informed video tags. Specific time-based metatags are assigned across a rich set of variables that go beyond common descriptors like name and date, with criteria related to each frame (people, places, objects, dialogue, subject matter) or critical commercial matters such as rights management (geography, music issues, content sharing permissions, licensing concerns). This provides content owners with powerful analytics, reporting and search capabilities, bundled with publishing tools for direct revenue through syndication, ad targeting and audience building.
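To make the idea of time-based metadata concrete, here is a minimal sketch of what such a tag structure might look like. This is purely illustrative: the class names, fields and search method are assumptions for the example, not Digitalsmiths' actual MetaFrame schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class MetaTag:
    """A hypothetical time-based tag: one fact anchored to a time span."""
    start: float    # seconds from the start of the video
    end: float
    category: str   # e.g. "person", "place", "object", "dialogue"
    value: str      # e.g. "Kramer", "Jerry's apartment"

@dataclass
class MetaFrame:
    """An illustrative per-asset container of time-based tags."""
    asset_id: str
    tags: list = field(default_factory=list)

    def add_tag(self, start: float, end: float, category: str, value: str):
        self.tags.append(MetaTag(start, end, category, value))

    def search(self, category: str = None, value: str = None):
        """Return tags matching the given category and/or value substring."""
        return [t for t in self.tags
                if (category is None or t.category == category)
                and (value is None or value.lower() in t.value.lower())]

# Example: tagging a hypothetical "Seinfeld" episode as described above.
mf = MetaFrame("seinfeld_s05e01")
mf.add_tag(12.0, 45.5, "place", "Jerry's apartment")
mf.add_tag(30.2, 31.8, "dialogue", "Cuban cigars")
mf.add_tag(30.0, 60.0, "person", "Kramer")

hits = mf.search(category="dialogue", value="cigars")
```

Because each tag carries a time span rather than describing the whole file, a search can return not just *which* assets mention something but *where* in the timeline it occurs, which is what makes clip creation, ad targeting and rights reporting possible against the same data.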
Related:
- Read Ben's ReelSEO interview on Image and Speech Recognition in Video SEO: The Digitalsmith interview
- See Ben's Beet.TV interview Beet.TV: A New Frontier in Video Search: Facial and Scene Recognition Converted to Metadata
- Read the FierceOnlineVideo Leaders profile, Ben Weinberger, CEO, Digitalsmiths
- Read Ben's post-Online Video Platform Summit thoughts on the OVP market, Mapping the Wild West | www.digitalsmiths.com