Peter DiMaria is Using Tech to Make You the Ultimate Playlist

How cool would it be if your favorite streaming service could sense your mood and play just the right song to complement it? Or what if it could sense that you were sad and proactively cue a cheerful, upbeat playlist to boost your spirits? Sound far-fetched? It’s not.

Welcome to the next chapter in content discovery—one that’s defined by the user. And that means listeners get something they’ve never really had before—a unique and fluid experience that’s no longer defined only by general categories like jazz and pop.

Sound complex? It is. But don’t worry. Peter DiMaria, Head of Content Architecture and Discovery at Nielsen Gracenote, is at the control panel. Not only does Peter have an extensive background in “content understanding,” he has an immense love of music and music discovery.

“Back before the digital music revolution, record stores were a great place to listen to and discover new music,” Peter says. “Today, we’ve solved the problem of affordable, instant global access to every recording ever made, but we’ve created a new problem: finding the music people will love the most when they have tens of millions of choices. That’s where Nielsen Gracenote comes in.”

Music Discovery for the Digital Age

Today, music is still often categorized by broad genres alone. For instance, the Rolling Stones are largely known as a classic rock band. But that doesn’t really tell you much about their unique sound. And it tells you even less about the stylistic variety in a catalog that stretches from 1963 to today. Many of their tracks have an R&B or blues feel, while others get into psychedelic pop, folk-tinged ballads and even disco.

The new discovery process starts with understanding what’s really going on inside a song: looking beyond traditional genre to the unique musical characteristics that reveal its true nature.

“At Nielsen, we’re using data and technology to recreate the classic record store experience online,” Peter explains. “But now, we can actually make it better. That’s because any song—whether it was released today or 50 years ago—has the potential to find a new fan. You could say that we’re helping the music find its fans.”


But Peter and his team aren’t limiting themselves to understanding music using traditional methods alone. They’re using acoustic analysis and machine learning to define the unique combinations of harmony, melody, rhythm, instrumentation and vocal character that, together, create musical styles and moods in the mind of the listener. Machine learning delves deeper to understand the essence of a song: the story it tells. It’s also scalable, global, precise and objective.
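To make that concrete, here’s a minimal sketch of the kind of acoustic analysis described above. It uses the open-source librosa library purely for illustration (the article doesn’t say which tools Gracenote actually uses), summarizing one track’s rhythm, harmony, timbre and brightness as a single feature vector.

```python
# Illustrative only: summarize a track as a small vector of acoustic descriptors.
# librosa is an assumption for this sketch, not Gracenote's stated tooling.
import librosa
import numpy as np

def extract_features(audio_path):
    """Return a fixed-length feature vector for one audio file."""
    y, sr = librosa.load(audio_path, sr=22050, mono=True)

    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)             # rhythm
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)           # harmonic / key content
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # timbre, a rough proxy for instrumentation
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # spectral brightness

    # Average time-varying features so every track becomes one comparable vector.
    return np.concatenate([
        np.atleast_1d(tempo),
        chroma.mean(axis=1),
        mfcc.mean(axis=1),
        centroid.mean(axis=1),
    ])
```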


“Supervised machine learning, for example, is a hybrid approach where we teach algorithms to make the same judgments that a human musicologist might make—but on a massively scalable basis,” Peter explains. “This allows us to create richly descriptive semantic data for each of the more than 100 million songs that have been recorded over the last century—something we could never do with traditional editorial methods alone.”
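As a rough illustration of that supervised approach, the sketch below trains an off-the-shelf scikit-learn classifier on feature vectors paired with musicologist-supplied style labels, then applies it to unlabeled tracks. The file names and data here are hypothetical placeholders, not Gracenote’s.

```python
# Sketch of supervised learning at catalog scale: learn from tracks a human
# musicologist has already labeled, then label everything else automatically.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one feature vector per track, plus the musicologist's label.
X = np.load("track_features.npy")          # shape: (n_tracks, n_features)
y = np.load("musicologist_labels.npy")     # e.g. "blues", "psychedelic pop", "disco"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Once trained, the same model can label millions of unheard tracks cheaply.
unlabeled = np.load("unlabeled_track_features.npy")  # hypothetical
predicted_styles = model.predict(unlabeled)
```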

Music Discovery and True Mood Music

So what does this mean for listeners? It means they have a world of music at their fingertips that’s personal, unique and evolving. That’s because our clients use our data and services as a key ingredient in creating engaging listening experiences that build on the premise of music discovery—just like record stores used to. Today, playlists are the new album, and the quality of a playlist can be a true differentiator.

“By understanding which attributes of a particular song make it compelling to someone, we can help present that song to people at the right time—at the right moment in the listener’s life,” Peter says. “And that can lead to some really fun music discovery, surprising you with a song that has just the essence you’re interested in, even though it may have been recorded by an artist you would never associate with that sound.”


But these capabilities are just the tip of the sonic iceberg. The Gracenote Sonic Mood system can recognize more than 400 distinct music moods, which allows us to catalog the entire universe of recorded music by mood as well as genre. But there’s far too much music, already recorded and still being produced, for humans alone to take on this project.

“We’re also using machine learning to bring this to life,” Peter explains. “For example, we might give the algorithm a large set of examples of songs that we, as humans, perceive as being melancholy. From there, the machine looks at the acoustic waveform patterns of those songs and determines the common thread. By doing this, the machine now has learned how to perceive music mood just as humans do.”
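The “common thread” idea can be sketched very simply: average the feature vectors of the human-labeled melancholy examples and score new tracks by how close they sit to that average. The real mood models are certainly richer than this; the example below only shows the shape of the approach, using random stand-in data.

```python
# Toy version of learning a mood from human-labeled examples.
import numpy as np

def mood_centroid(example_features):
    """example_features: (n_examples, n_features) array of tracks humans tagged with one mood."""
    return example_features.mean(axis=0)

def mood_score(track_features, centroid):
    """Cosine similarity between a new track and the mood's 'common thread'."""
    num = float(np.dot(track_features, centroid))
    denom = np.linalg.norm(track_features) * np.linalg.norm(centroid) + 1e-9
    return num / denom

# Hypothetical usage with random stand-in data:
melancholy_examples = np.random.rand(500, 27)   # feature vectors of human-tagged songs
centroid = mood_centroid(melancholy_examples)
new_track = np.random.rand(27)                  # an unheard track's feature vector
print("melancholy score:", mood_score(new_track, centroid))
```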

Putting Music into Context


Again, mood is just one of the areas Peter and his team are exploring. He’s also working on what he refers to as “contextual adaptation,” which focuses on building systems that are aware of contextual elements like where and when we listen to music, the time of day, the weather and even who we’re with.

“We all have preferences for different types of music that depend on all of these factors,” he says. “And contextual adaptation focuses on those factors to create a music experience that will be right for you at that moment.”
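In code, contextual adaptation might look something like the toy re-ranker below, which scores candidate tracks by how well their energy and mood fit the current hour, weather and company. Every field, rule and weight here is invented for illustration.

```python
# Hypothetical context-aware re-ranking; not Gracenote's actual system.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    energy: float   # 0 = very calm, 1 = very intense
    mood: str       # e.g. "peaceful", "upbeat", "melancholy"

@dataclass
class Context:
    hour: int            # 0-23
    weather: str         # e.g. "rainy", "sunny"
    with_company: bool   # listening alone or with others

def context_fit(track: Track, ctx: Context) -> float:
    score = 0.0
    # Late at night, favor calmer tracks; during the day, favor energetic ones.
    if ctx.hour >= 22 or ctx.hour < 6:
        score += 1.0 - track.energy
    else:
        score += track.energy
    # A deliberately simple rule: rainy days pair with mellow moods.
    if ctx.weather == "rainy" and track.mood in ("peaceful", "melancholy"):
        score += 0.5
    # With company, lean toward upbeat, broadly appealing tracks.
    if ctx.with_company and track.mood == "upbeat":
        score += 0.5
    return score

tracks = [Track("Quiet Rain", 0.2, "peaceful"), Track("Saturday Drive", 0.8, "upbeat")]
ctx = Context(hour=23, weather="rainy", with_company=False)
print(max(tracks, key=lambda t: context_fit(t, ctx)).title)  # picks the calmer track
```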

Peter’s team is also partnering with automakers to find ways that music can help make driving safer. For example, they’re researching ways to adjust the in-car music to keep drivers focused on the road, say, if the system detects that they’re sleepy or angry.
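That kind of safety logic could be as simple as the hypothetical mapping below, which nudges the target music energy up or down based on a detected driver state. It’s purely illustrative, not how Gracenote or any automaker actually implements it.

```python
# Hypothetical driver-state rule, for illustration only.
def target_energy(driver_state: str) -> float:
    """Map a detected driver state to a desired music energy level (0-1)."""
    if driver_state == "sleepy":
        return 0.8   # brighter, more rhythmic music to raise alertness
    if driver_state == "angry":
        return 0.3   # calmer music to lower arousal
    return 0.5       # neutral default

print(target_energy("sleepy"))
```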


“To me, innovation starts with observation,” Peter says. “I like finding gaps in current offerings and uncovering false limits—and then brainstorming and designing to bridge those gaps and ultimately build outside those original limits.”