Help with scrape - crypto wallet balance

I want to read my BTC balance with the scrape functionality, but unfortunately I’m unable to get the correct outcome. The sensor is showing me unknown.

The website I’m trying to scrape (just an example of a wallet, so not mine :slight_smile: ) 3FupZp77ySr7jwoLYEJ9mwzJpvoNBXsBnE - Bitcoin Address
When I open the developer view it shows me a valid HTML page (without a cookie notification), so I think it’s OK to use.

It’s about the Balance line here:

But when I try to scrape “.price”, “.content price” or “display:inline-block”, it does not work…

If someone has another solution to get the balance of a crypto wallet, please let me know.

Look at the page using View Source, not DevTools. Can you see the data in that HTML? If not, scrape cannot help you, because the data is being dynamically inserted using (probably) AJAX.

Reload the page in DevTools with the Network tab active, and look for requests of type xhr. One of those is probably a JSON or XML data set containing the data you want. Then use a rest sensor to read that — ask for help here if needed, but you’ll need to provide the data that’s being received. Redact anything confidential without changing the data structure.
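If you do find such a request, the shape of the sensor would be something like this — a minimal sketch only, since the real endpoint URL and JSON key depend entirely on what your Network tab shows (both are placeholders here):

```yaml
# configuration.yaml — hypothetical endpoint and JSON structure,
# only to illustrate the shape of a REST sensor.
sensor:
  - platform: rest
    resource: https://example.com/api/address/balance  # placeholder URL
    scan_interval: 300
    name: BTC wallet balance
    value_template: "{{ value_json.balance }}"         # placeholder key
    unit_of_measurement: "BTC"
```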

Hi Troon, thanks for the reply. It looks like the data is still there when opening the page with View Source, but it’s not readable: all the lines are displayed next to each other like this:

At the Network tab I don’t see any XHR requests, so that makes it easier to get the data, right?

In that case,

  select: "span[itemprop='price']"
  value_template: "{{ value.replace(',','') }}"

if that is the only <span> with that itemprop setting.
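For context, those two lines sit inside a full scrape sensor in configuration.yaml; a sketch of the whole thing (the resource URL is a placeholder, since the thread doesn’t name the explorer site):

```yaml
sensor:
  - platform: scrape
    resource: https://example.com/btc/address/3FupZp77ySr7jwoLYEJ9mwzJpvoNBXsBnE  # placeholder URL
    name: BTC balance
    select: "span[itemprop='price']"
    value_template: "{{ value.replace(',','') }}"
```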

“price” is the only one in combination with that “span”, but the sensor keeps telling me unknown.

<span itemprop="price" content="602328976.13">602,328,976.13</span>

Do we need to do something with “content”?

I’ve just done some testing: the page checks which browser you’re using, so you need to set a header.

In the Headers section, type in (for example):

User-Agent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36"

Then it works:
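In configuration.yaml, that would be the same scrape sensor as before plus a `headers:` mapping (resource URL still a placeholder):

```yaml
sensor:
  - platform: scrape
    resource: https://example.com/btc/address/3FupZp77ySr7jwoLYEJ9mwzJpvoNBXsBnE  # placeholder URL
    name: BTC balance
    select: "span[itemprop='price']"
    value_template: "{{ value.replace(',','') }}"
    headers:
      User-Agent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36"
```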


I keep getting “sensor is not available” when using the Scrape integration in the HA GUI.
May I ask what you filled in here?

I’m used to the configuration.yaml; it looks easier.

Remove the quotes around the select:
