AI’s role in media in 2018


Artificial intelligence and machine learning are poised to have a breakout 2018 in the world of media and entertainment.

During the recent IBC show, Adrian Drury, group technology strategy and insight director at Liberty Global, called AI “something that’s transforming our business today.”

“It will disrupt pretty much all areas of the market,” agreed David Mowrey, head of product and business development for IBM Watson Media. “We’re just scratching the surface on how transformative AI will be.”


Artificial intelligence research dates back decades. AI generally refers to any machine or computer system that exhibits cognitive functions similar to those of human beings. Machine learning is a subset of AI in which systems learn from data and adjust their own behavior, rather than being explicitly programmed by humans at every step.
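The distinction is easiest to see in a toy example (purely illustrative, not drawn from any product mentioned in this article): instead of a programmer hand-coding a rule, a simple perceptron infers the rule from labeled examples.

```python
# Toy illustration of machine learning: the OR rule below is never
# written by hand; the perceptron infers it from labeled examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a two-input binary classifier."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # nonzero only when the guess is wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labeled examples of the logical OR function.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
```

After training, `predict(w, b, 0, 1)` returns 1 and `predict(w, b, 0, 0)` returns 0: the behavior was learned from the data, not programmed in.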

According to an IABM survey of end-user organizations, about 5% of respondents said that they have already deployed AI technology, and 38% said that they are likely to do so over the next few years.

At CES last month, the hype train around AI and machine learning rolled on, with optimization and personalization emerging as two big topics surrounding the technologies.

Optimization and innovation

Quality of content is only a piece of the video delivery puzzle. Another big factor is the efficiency and consistency with which that content is presented to the end user.

Damian Mulcock, vice president of business development and service provider business at Cisco, said a big focus for AI in video use cases is the network: specifically, what can be learned about video delivery reliability from the information used to optimize the network and rectify issues that disrupt the quality of experience.

“For us, the network intelligence we can use to optimize how we’re delivering content is pretty key,” said Mulcock.

It’s somewhat similar to what firms like Conviva are doing with products such as Video AI Alerts that use machine learning to help publishers automatically find and diagnose issues with streaming video assets.
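Conviva hasn’t published how Video AI Alerts works internally, but the general idea of machine-assisted issue detection can be sketched as simple anomaly detection on a quality-of-experience metric. The function name, metric, and threshold below are assumptions for illustration only.

```python
# Generic sketch of automated streaming-issue detection (not Conviva's
# implementation): flag samples of a metric, such as the per-minute
# rebuffering ratio for one asset, that drift far from the baseline.
from statistics import mean, stdev

def find_anomalies(samples, threshold=3.0):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the series."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Twenty healthy minutes, then a rebuffering spike on one asset.
rebuffer_ratio = [0.01] * 20 + [0.5]
```

Here `find_anomalies(rebuffer_ratio)` flags only the final spike, which is the kind of event that could trigger an automatic alert for that stream.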

At CES last month, Cisco was particularly focused on demonstrating AI’s role in terms of cloud DVR. The company is using AI to watch traffic patterns in order to optimize playback for resiliency.

“You imagine the network infrastructure that’s required to record and play back cloud DVR, and the peak rates and learnings that we can get from what’s happening with both the content usage as well as the network usage, we’re actually showing how to use AI and machine learning to optimize the network and content to get a better experience for consumers,” said Mulcock.

A network optimized through AI and machine learning would also help give consumers consistent access to data-heavy, AI-assisted video innovations such as the “volumetric video” touted by Intel CEO Brian Krzanich during his CES keynote address.

According to media analyst Colin Dixon, Krzanich discussed the technology in terms of Intel’s VR project with the NFL, in which Intel surrounds the interior of an NFL stadium with 5K cameras and then uses AI to stitch those images together. He said that creates something called “voxels”—essentially pixels with depth and volume—that can be used to let viewers see the field from any angle, regardless of where the cameras are placed.
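The “pixels with depth and volume” idea can be sketched in a few lines of code. This is a simplified illustration of the data structure, not Intel’s actual pipeline: a voxel carries a 3D position plus a color, and a view is produced by re-projecting the grid from a chosen direction.

```python
# Illustrative sketch of a sparse voxel grid: each entry is a pixel
# with a 3D position, so the scene can be viewed from any angle by
# re-projecting. Only a front-facing projection is shown here.
class VoxelGrid:
    def __init__(self):
        self.voxels = {}  # (x, y, z) -> (r, g, b)

    def add(self, x, y, z, color):
        self.voxels[(x, y, z)] = color

    def front_view(self):
        """Orthographic projection along the z axis: for each (x, y)
        column, the voxel nearest the viewer (smallest z) wins."""
        image = {}
        # Iterate far-to-near so nearer voxels overwrite farther ones.
        for (x, y, z), color in sorted(self.voxels.items(),
                                       key=lambda item: -item[0][2]):
            image[(x, y)] = color
        return image
```

A view from another angle would just project along a different axis; the point is that the volumetric data itself is viewpoint-independent, which is why enormous amounts of it are generated.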

“Intel has built a line of neural networking processors and AI software which is powering this volumetric video approach,” said Dixon, who added that the process creates a huge amount of video data.

Personalization and recommendations

AI and machine learning are already in use by many within the video industry as a means of driving deeper and more meaningful recommendations and personalization for consumers.

Major SVODs like Netflix are reportedly considering applying AI technology toward presenting users with personalized trailers and previews for series and movies. Beyond hypothetical use cases, AI is already a big part of everyday functionality for many SVODs.

Ben Smith, senior vice president and head of experience at Hulu, said that AI is not just the future for the media and entertainment industry; it's the present.

“Almost everything you see in [Hulu’s] UI is delivered by AI. There are recommendation algorithms that are being updated during the course of the day as people watch things,” said Smith.
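Hulu hasn’t detailed those algorithms, but a recommender that updates “during the course of the day as people watch things” can be sketched as a co-watch counter that learns from each viewing event as it arrives, rather than in a nightly batch job. The class and its data are illustrative assumptions, not Hulu’s system.

```python
# Sketch of an incrementally updated recommender: counts of shows
# watched by the same user are updated on every watch event, so
# recommendations shift throughout the day as viewing happens.
from collections import defaultdict

class CoWatchRecommender:
    def __init__(self):
        self.history = defaultdict(set)  # user -> shows watched
        self.co_counts = defaultdict(lambda: defaultdict(int))

    def record_watch(self, user, show):
        """Update co-occurrence counts immediately on a watch event."""
        for seen in self.history[user]:
            if seen != show:
                self.co_counts[seen][show] += 1
                self.co_counts[show][seen] += 1
        self.history[user].add(show)

    def recommend(self, show, n=3):
        """Shows most often co-watched with the given show."""
        ranked = sorted(self.co_counts[show].items(),
                        key=lambda kv: -kv[1])
        return [s for s, _ in ranked[:n]]

rec = CoWatchRecommender()
for user, show in [("u1", "A"), ("u1", "B"),
                   ("u2", "A"), ("u2", "B"), ("u2", "C")]:
    rec.record_watch(user, show)
```

After those five events, `rec.recommend("A")` already ranks "B" (co-watched twice) ahead of "C" (once); the next event can reorder the list with no retraining step.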

He said Hulu is focused on two things in particular with regard to AI. The first is the negative case: giving users an added element of control by letting them indicate what they don't want to watch. The other area where Hulu is exploring AI heavily is customer support, through its partnership with Salesforce.com.

Smith said Hulu is beginning to develop chatbots and is looking at how to make them handle customer conversations about content recommendations and information as well as technical issues.

While AI is finding useful applications in terms of recommendations and improving customers’ experiences, Philo CEO Andrew McCollum cautioned that AI alone should not be relied upon for contextual recommendations.

Philo is a streaming company founded in 2009. Late in 2017, it introduced a $16-per-month live TV service offering programming from cable networks including A&E, AMC, Discovery, Scripps and Viacom. Within the service, AI has been a big factor in building the recommendation engine, but McCollum said Philo thinks about recommendations a little differently.

“I think recommendation and having the ability to do that algorithmically is really a key to the platform. But it’s also something where TV tends to work a little bit differently than other things. Because shows are really personal, because they’re a big investment of time, if you start watching a show and you watch it for a season or multiple seasons, you’re spending dozens of hours of your life watching the show. It’s really different than just, ‘Hey, Pandora, play me a song that I’ll like,’” said McCollum.

He said that shows become a big part of people's lives and conversations, which has led Philo to believe that context and meaning around recommendations matter more than a bare algorithmic match. The company wants to use algorithms, but also to show users when their friends are watching a show.

McCollum, one of the co-founders of Facebook, said that Philo’s approach is similar to something he learned in the early days of the social network. He said Facebook took a long time to implement machine-learning algorithms because so much of how the site worked was driven by the natural patterns of social groups. Even after those algorithms came into play at Facebook, he said the service stayed married to the social context as well.

“We want to give you more to grab onto rather than just spitting out a match,” said McCollum, suggesting that AI is important but that other sources of recommendation context are as well.