Audio Analysis with Spotify replacement for Echonest?

Hey,
Looking into audio-responsive mapping using this tutorial (see below). The included .tox uses the Echonest API component, which has recently been integrated into the Spotify API. Has anyone had luck modifying the Echonest component to get the same functionality from Spotify? If not, any tips on dissecting this .toe so I could look at updating the Echonest component would be amazing.

Echonest Component
viewtopic.php?f=22&t=4538

-Patrick

Hey,
I developed this Spotify analyzer using the Echo Nest module and the Spotify API as a base.

How to Use the SpotifyAudioAnalysisAPI component (Johanpg27)
Requirements
Follow the Python install and module import tutorial (derivative.ca/wiki088/index. … ng_Modules) to install the Python module “requests”.
Open the resonateSpotifyAPI.toe file; in the container “Spotify Analyzer” you will find the replacement for the Echo Nest module.

API Key: You will need a valid API key (an OAuth token can be requested here: developer.spotify.com/web-api/c … ysis-track).
Track ID: The Spotify song URI.
At this point, clicking Fetch queries Spotify for the full audio analysis, and the song information (artist and title) is populated automatically.
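For readers who want to see roughly what the Fetch step does under the hood, here is a minimal sketch of a request against Spotify's audio-analysis endpoint. The endpoint path and Bearer-token header are assumptions based on Spotify's Web API conventions; the actual .toe uses the “requests” module, but this sketch sticks to the standard library so it runs without extra installs.

```python
import json
import urllib.request

# Assumed endpoint for Spotify's audio-analysis Web API.
API_BASE = "https://api.spotify.com/v1/audio-analysis/"

def analysis_url(track_id):
    """Build the audio-analysis endpoint URL for a Spotify track ID."""
    return API_BASE + track_id

def fetch_audio_analysis(track_id, oauth_token):
    """Query Spotify and return the parsed audio-analysis JSON."""
    req = urllib.request.Request(
        analysis_url(track_id),
        headers={"Authorization": "Bearer " + oauth_token},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (needs a valid OAuth token and a real track ID):
# analysis = fetch_audio_analysis("YOUR_TRACK_ID", "YOUR_OAUTH_TOKEN")
# beats = analysis["beats"]   # list of {"start", "duration", "confidence"}
```

The returned JSON contains the sections the component exposes (beats, bars, segments, etc.), which is what gets pushed into the CHOPs after Fetch.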

Whoa! Thanks so much for this! Nice work!!!

Hello,

Trying to hunt down the resonateSpotifyAPI.toe file.

Doesn’t seem to be attached here.

Thanks!


There are two ways of doing audio analysis in that patch, right? The regular TD audio analysis and the Echonest one. Or are they working simultaneously and helping each other?

Interesting. How can I get segments and bar durations? I tried dividing a sound fraction by its length, but that doesn't seem to be what I want.
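For reference, the audio-analysis JSON that Fetch retrieves already carries per-bar and per-segment timing, so no division is needed. A sketch with made-up numbers (the field names follow Spotify's audio-analysis response shape; the values are placeholders):

```python
# Stand-in for a fetched analysis; real responses have many more fields.
analysis = {
    "bars": [
        {"start": 0.00, "duration": 1.93, "confidence": 0.8},
        {"start": 1.93, "duration": 1.95, "confidence": 0.7},
    ],
    "segments": [
        {"start": 0.00, "duration": 0.42, "confidence": 0.9},
    ],
}

# Each bar/segment carries its own duration in seconds.
bar_durations = [b["duration"] for b in analysis["bars"]]
segment_starts = [s["start"] for s in analysis["segments"]]
```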

Maybe someone will have some patience and give a little explanation of what is going on there.

I have some questions about:

  1. Group primitives. There is a group named groupchop("count1:beats"). I can't understand how this group is bound to that Count CHOP, or how to use this technique in general. We need two groups of primitives, one with white color, another with black. We can switch between them with the alpha channel, right? In the Echonest example the alpha is controlled by the Count OP, right?

  2. I can't understand how primScale works.
    audioCubes.toe (172 KB)

  1. This is old tscript, which hurts my brain just to look at, so I wouldn't recommend this technique. tscript is deprecated and Python is the way to do things now. What it is doing is evaluating the channel “beats” in the count1 OP and appending its value to the word “group”, so the result may be “group0”, for example. But when I run this it errors, with group0 errors, because only group1, group2, and group3 are defined in the input.
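A rough plain-Python equivalent of that tscript expression, as a sketch only: the string concatenation mirrors what Ben describes, while the fallback for the undefined group0 is my addition (not in the original patch) to show one way around the error.

```python
def beat_group(beats_value, defined=("group1", "group2", "group3")):
    """Append the count1 'beats' channel value to the word "group",
    as the old tscript expression did."""
    name = "group" + str(int(beats_value))
    # group0 is not defined in the input SOP (hence the error above);
    # fall back to the first defined group instead of erroring.
    return name if name in defined else defined[0]

beat_group(1)  # "group1"
beat_group(0)  # "group0" is undefined, so this falls back to "group1"
```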

  2. I have no comment on what primScale is ‘intended’ to do in such a large system, but currently it drives the CHOP to TOP inside the Color component it sits beside. The 43 samples in the CHOP are converted into 43 pixels of color.
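Conceptually, that CHOP-to-TOP step maps each sample to one pixel. A minimal sketch of the idea (assuming normalized 0–1 samples mapped to 8-bit grey values; TouchDesigner does this conversion internally, this is only an illustration):

```python
# 43 normalized CHOP samples (stand-in data: a simple ramp from 0 to 1)
samples = [i / 42.0 for i in range(43)]

# CHOP-to-TOP, conceptually: each sample becomes one pixel;
# here each sample is mapped to an 8-bit grey RGB triple.
pixels = [(int(s * 255),) * 3 for s in samples]

len(pixels)  # 43 pixels, one per sample
```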

Thank you, Ben, for your answers! Makes some sense =)