Parse MP3 or M3U or streaming URL for: "what's playing" info?

Sort of similar to this question but coming at it from a totally different angle, is anyone currently parsing a streaming URL and using it as a sensor? If so, would you mind sharing how?!

I want to parse for what’s on and return it as a sensor value if poss…

Not sure exactly what you're after. Are you trying to retrieve details from the MP3 track itself?
Not sure that's easily possible.
The way I retrieve the current track/artist from some of my fave radio streaming pages is with a scrape sensor, but that's far from easy to achieve and relies on the web page not changing its structure…
Here's a quick Python script that I wrote to try and test the select parameter:

# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup
import urllib2
import sys
import subprocess

def bs_get_value():
	try:
		# Fetch the page and parse it with BeautifulSoup
		content = urllib2.urlopen("").read()
		raw_data = BeautifulSoup(content, "html.parser")
		return raw_data
	except Exception as e:
		print("%s - Unable to get BS from URL" % e)


Then I created a whole bunch of sensors (artist, track, artwork) for each station and got them nicely tied into a Lovelace card:

I'm not gonna lie, that's quite a complex/advanced setup which took me months to achieve and fine-tune…
If you’re up for the task, feel free to check my config:

Cool! I ran your script and it immediately returned

"list index out of range" which I think is because that span contains

Smokebelch II (Beatless Mix)

so I changed it to another span without parentheses and got

Download 'Finally (Lenny Ibizarre mix)' on iTunes


So this really might be possible…

Hi again!

So I've made progress. I've got this Python script which returns what's on across the main BBC radio channels. I'm a bit stuck as to how to get this into Home Assistant now. I had a look at your config, but I think the bit that would be relevant (the shell scripts) isn't on GitHub? You've already been a huge inspiration, though…!

import requests
from bs4 import BeautifulSoup

url = ""

r = requests.get(url)

soup = BeautifulSoup(r.content, "lxml")

g_data = soup.find_all("div", {"class": "sc-c-network-item__bottom sc-u-truncate sc-u-truncate--expand@l gel-brevier gs-u-mt-"})

# Print the now-playing text for the first seven stations
for station in g_data[:7]:
    print(station.text)
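
One way to get a script like this into Home Assistant is a `command_line` sensor that runs it and uses its stdout as the state. This is only a sketch — the script path and sensor name below are made up for illustration, and the script would need to print a single line per sensor (HA states are capped at 255 characters):

```yaml
sensor:
  - platform: command_line
    name: BBC Radio 1 Now Playing
    # Hypothetical path — save the script wherever your config lives
    command: "python3 /config/scripts/bbc_now_playing.py"
    # Re-scrape every 5 minutes instead of the default 60 seconds
    scan_interval: 300
```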

Ah, and looking at it further, it looks like some of the data is sent over MQTT, so I'm guessing it's done on a separate server?

I’ve tried popping the deets into Home Assistant as a scrape sensor

  - platform: scrape
    name: Now Playing
    select: '.sc-c-network-item__bottom sc-u-truncate sc-u-truncate--expand@l gel-brevier gs-u-mt-'
    index: 0

and I just get:

soupsieve.util.SelectorSyntaxError: Invalid character '@' position 62
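
That error is soupsieve rejecting the literal `@` in the class name. A quick sketch of two possible ways around it (the HTML here is made up; the class names are copied from the scrape above): select on one of the classes that has no special characters, or escape the `@` with a backslash:

```python
from bs4 import BeautifulSoup

# Minimal stand-in for the BBC page markup
html = ('<div class="sc-c-network-item__bottom sc-u-truncate '
        'sc-u-truncate--expand@l gel-brevier">Now: Some Track</div>')
soup = BeautifulSoup(html, "html.parser")

# Option 1: use a class that contains no special characters
print(soup.select(".sc-c-network-item__bottom")[0].text)

# Option 2: escape the '@' so the CSS selector parser accepts it
print(soup.select(r".sc-u-truncate--expand\@l")[0].text)
```

If option 2 works for you, the same escaped selector should also be usable in the scrape sensor's `select` option.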

EDIT Hmm, no immediate panic: it looks like I can do the lookup with Node-RED and send the result to HA via MQTT… would be cool to pull in the images as well…

EDIT2 OK, it looks like I can get my BBC Radio sensors just fine at the moment using Node-RED, a Python script and MQTT. I just hope that they still work in the morning, or at the weekend. I'm not sure I'm structuring the message payload in the most sensible way, or whether the way I'm doing it will keep working throughout the week… we will see.
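
On the payload question, one tidy approach (a sketch only — the station names and topic scheme here are invented) is to publish one retained JSON message per station, then unpack it in each MQTT sensor with `value_json` in a `value_template`:

```python
import json

# Hypothetical scraped results: station -> now-playing text
stations = {
    "radio1": "Smokebelch II (Beatless Mix)",
    "radio2": "Finally (Lenny Ibizarre mix)",
}

# One message per station on a topic like bbc/<station>/now_playing,
# so each Home Assistant MQTT sensor gets its own stable topic
messages = {
    "bbc/%s/now_playing" % name: json.dumps({"now_playing": text})
    for name, text in stations.items()
}

for topic, payload in sorted(messages.items()):
    print(topic, payload)
```

A matching MQTT sensor in HA could then use `value_template: "{{ value_json.now_playing }}"`, which leaves room to add artist/artwork keys to the JSON later without changing topics.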

Flow here:-


Hi again - sorry to bother you - I just wondered if you’d ever had any problems with the generic camera platform hanging/freezing? It’s causing me all sorts of problems despite the fact I’m limiting fetch to if the URL changes. But you seem to be using it (or am I mistaken - is there a key difference I’ve missed?) without issue? Do you use DuckDNS or SSL for example? That seems to possibly be the kicker.

No issues my side, no. Sorry!