Dear MEL Topic Readers,
Smart Speakers: Why your voice is a major battle in music
Smart speakers like the Amazon Echo and Google Home are becoming popular, especially in English-speaking countries. Among the many features these voice-activated devices offer, such as weather forecasts, traffic information, playing podcasts, and ordering food or diapers, the most popular request is to play music. The request can be very specific, naming a title, musician, or label, or rather indirect, describing only the type, genre, or mood of music the user wants to hear. So how does a smart speaker, rather than a human DJ at a radio station, pick songs that suit your taste or mood?
That is what the algorithm does by searching metadata. Metadata is the information embedded in an audio file that identifies the content: the title, the names of the album and artist, and the writer, producer, and publisher of the music.
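If you are curious what this looks like in practice, here is a tiny sketch in Python (my own illustration, not from the article): a made-up metadata record with the kinds of fields mentioned above, standing in for the embedded tags an audio file carries.

```python
# A made-up metadata record for one audio file, using the kinds of fields
# the article lists; real files store these as embedded tags (e.g. ID3 in MP3s).
track_metadata = {
    "title": "Example Song",
    "artist": "Example Artist",
    "album": "Example Album",
    "writer": "J. Example",
    "producer": "P. Example",
    "publisher": "Example Records",
    "genre": "pop",
    "mood": "upbeat",
}

# A voice assistant's search index is built from fields like these, so a
# request such as "play something upbeat" can be answered by filtering on them.
def matches_request(metadata: dict, field: str, value: str) -> bool:
    """Check whether one metadata field matches a word from the request."""
    return metadata.get(field, "").lower() == value.lower()

print(matches_request(track_metadata, "mood", "upbeat"))  # True
print(matches_request(track_metadata, "genre", "jazz"))   # False
```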
Those fields sound a lot like the tags on a web page. Indeed, the chance that a particular song is picked by those algorithms depends on its metadata. If the name of the musician or the title of the song is hard to remember or pronounce, it is less likely to be chosen, no matter how well it suits the listener.
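To see in a very simplified way why a hard-to-pronounce name hurts, here is another small sketch of my own: it scores how closely a speech-to-text transcription of a request matches an artist name stored in the metadata. The names and misspellings are invented, and a real assistant does far more than compare strings, but the principle is the same: if the transcription drifts too far from the metadata, the track never gets found.

```python
import difflib

def similarity(transcribed: str, catalogue_name: str) -> float:
    """Similarity score (0..1) between a speech-to-text transcription
    and an artist name stored in the track metadata."""
    return difflib.SequenceMatcher(None, transcribed.lower(),
                                   catalogue_name.lower()).ratio()

# Made-up examples: an easy name versus a stylised spelling that a
# speech engine is likely to write down differently.
print(similarity("adele", "Adele"))        # identical after lowercasing -> 1.0
print(similarity("churches", "CHVRCHES"))  # close, but no longer a perfect match
print(similarity("the x x", "The xx"))     # even the spacing lowers the score
```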
It seems that smart speakers are taking the place of radio and human DJs.
Enjoy reading and learn about the technology and the new rules of the game that the music industry and musicians have to deal with.
http://www.bbc.com/culture/story/20190416-smart-speakers-why-your-voice-is-a-major-battle-in-music