Thanks @TheOtherGlen… I didn’t want to remove it, on the assumption that it was part of the suggested code, and because everything was working so well. But I thought it might worry others like me in the future.
Have now commented the relevant lines out and restarted and all is still good. Will delete the offending lines when I understand the entity_id deprecation issue better. Have been reading lots about YAML and HA but still at the stage where I have to continually revise.
I’ve commented out all the “entity_id” lines of code - they were legacy efforts to get the Energy function working. It doesn’t seem to have had any impact on the data or results.
Firstly, rather than put my SA config directly in configuration.yaml, I’ve put mine in a package that is included in the main yaml file (see solar_analytics.yaml in the packages folder). This helps to modularise things and keep the main file a lot cleaner.
Secondly (and maybe of more interest to you), I’ve used the site_id that is retrieved from the site_list service call to build the URL for the site_data service call. This means your config doesn’t need to hard-code the site_id.
Lastly, I’ve created two “calculation sensors” (this is my terminology, not an official term… as far as I know) that hold all the logic for extracting values from the REST services and calculating any totals, most recent values, etc. The benefit of this is that it separates the complicated code and puts it all in one place and greatly simplifies the config for the sensors that are actually usable in HA.
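In case the shape of that is useful to anyone, here is a rough sketch of the pattern inside the package file (this is not the actual package - the entity names, attribute names and payload fields are illustrative, and authentication headers are left out):

# Sketch only: a "calculation sensor" that holds the messy extraction logic,
# plus a simple sensor that just reads from it.
# sensor.sa_site_id is assumed to have been populated from the site_list call.
rest:
  - resource_template: >-
      https://portal.solaranalytics.com.au/api/v2/site_data/{{ states('sensor.sa_site_id') }}
    scan_interval: 300
    sensor:
      - name: "SA Site Data Calc"
        value_template: "{{ now().isoformat() }}"   # state is just the fetch time
        json_attributes:
          - data                                    # assumed key holding the readings

template:
  - sensor:
      - name: "SA Energy Consumed Today"
        unit_of_measurement: "Wh"
        state: >-
          {{ (state_attr('sensor.sa_site_data_calc', 'data') or [])
             | map(attribute='energy_consumed') | map('float', 0) | sum | round(0) }}

The usable sensors then stay tiny, because all the Jinja lives in one place.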
Anyway, hope you can find something useful in there. Let me know if there’s anything that needs more explanation.
Thanks again to both of you for all of this. It’s been running nicely for me. And yesterday I updated it with the latest version on github.
I’ve been background reading and poring over the YAML to try to better understand it… then went down lots of dead ends before arriving at some extra sensors. I can now read the latest 5-minute data for any of the SA clamps. I got there by hacking together some of your earlier YAML and things I picked up elsewhere. It turns out it’s a lot like your sa_data_detail (which for me returns data for my stove and oven), except that it’s the most recent readings. I managed this by using an end date that should always be one day in the future (kudos to my daughter for that bit of the YAML). It’s easily adaptable to return data for any or all 6 of the clamps.
Will copy my code below… welcoming any feedback on how it could be better… there are plenty of bits I don’t truly understand. Maybe it will be helpful for someone else with the same basic skills as me.
Now I can get on with some automations using the SA clamps as triggers. I’ll also be doing some work to better understand time syntax… I noticed that SA reports timestamps in Unix format but as if they were in the local time zone, so my time sensor is not right. Maybe that’s why your sa_detail_time sensor reports the end of the 24-hour period as if it were midnight UTC rather than local time.
Substitute site ID for the #s in the resource template. And note that I am only looking at data for 3 of the attributes… but could easily get the others if needed.
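For reference, the resource_template ends up looking something like this (a sketch only, not my exact code - substitute your site ID for the #s, and the payload key name is an assumption):

# Sketch only - replace ###### with your site ID; authentication headers omitted.
# tend is set to tomorrow so the call always spans the most recent 5-minute readings.
rest:
  - resource_template: >-
      https://portal.solaranalytics.com.au/api/v2/site_data/######?gran=minute&tstart={{ now().strftime("%Y%m%d") }}&tend={{ (now() + timedelta(days=1)).strftime("%Y%m%d") }}
    scan_interval: 300
    sensor:
      - name: "SA Latest 5 Minute Data"
        value_template: "{{ now().isoformat() }}"   # state is just the fetch time
        json_attributes:
          - data                                    # assumed key holding the per-clamp readings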
Thanks for putting this up. I’m pretty new to all this but I can copy and paste.
I’m not sure if what I did is correct or not, but based on the instructions I changed the site_id 36304 in the 2 URLs as mentioned.
However, there were 2 URLs and 2 sections (Dashboard status and 5min update) that didn’t seem to get data until I also changed the 36303 (note the different number) to my site_id.
I do get data doing this: the Dashboard status section has gone from unknown to uncertain, and the 5min update section now has data.
Is this what I should have done?
Thanks, Josh
Note: mine shows GET360 as the portal, as Green Energy Tech installed this, if that makes a difference.
Sorry, it is a little confusing, I’m pretty new to all this. It’s all working fine now. The original instructions on Github are as below:
Substitute your site_id for the placeholder site_id (i.e. 36303) in the 2x resource URLs below.
Changing just those two URLs with 36303 didn’t work. I had to change another 2 URLs with 36304 to get it to work. The instructions mention 36303, but the config YAML also uses 36304.
These snippets are in the original Github configuration.yaml
# Note: site_id value is required in URLs used in gets below - e.g. 36304
# Solar Analytics - get the site status for the specified site_id
resource: https://portal.solaranalytics.com.au/api/v3/site_status/36303
# Solar Analytics - get the site raw data for the specified site_id
resource: https://portal.solaranalytics.com.au/api/v2/site_data/36304?gran=minute&raw=true
# Solar Analytics - get the daily data
resource: https://portal.solaranalytics.com.au/api/v2/site_data/36304
# Solar Analytics - get detailed 5 minute data
resource_template: 'https://portal.solaranalytics.com.au/api/v2/site_data/36303?gran=minute&tstart={{ now().strftime("%Y%m%d") }}&tend={{ now().strftime("%Y%m%d") }}'
I just wanted to check that what I did was right, and I’m also guessing, since mine is working, that the instructions should be:
Substitute your site_id for the placeholder site_id (i.e. 36303 and 36304) in the 4x resource URLs below.
Hopefully that makes a little more sense.
Thanks, Josh
Many thanks to all the contributors on this, you’re legends! Have updated my config and some early numbers have just started coming into the HA energy dashboard, so it looks like it’s working.
I hadn’t been able to play with this for a few months but I took some time recently to revamp and simplify my Solar Analytics sensors.
There were quite a few things I didn’t like about previous iterations of this config:
It was mixed in with everything else in the configuration.yaml file. The latest version is in a standalone package file that is included in the main configuration.yaml file.
It was complicated. This was partly due to the complexity of the data in the “site_data” service but also just my inexperience with rest sensors, template sensors and jinja syntax. The latest version is much simpler.
The rest sensor that called the site_data service either relied on a hardcoded site_id (which would be exposed publicly if published to a git repository) or relied on calling the site_list service first (which seemed like an unnecessary service call for something that never changes!). The latest version uses an input_text helper to store the site_id, retrieving the value from the secrets file (there’s a rough sketch after this list).
The site_data service with “raw=false” returns decent data for solar generation and total consumption, but because the data comes in 5-minute “buckets” it is not very accurate for calculating the energy imported from or exported to the grid. Fortunately, the “raw=true” data already contains these grid import/export values, so we can use them directly. But as @PeterH24x7 pointed out here, the timestamps in the raw data are a mess… though they’re not irredeemable. Thanks to @ddwdiot’s post here, I realised that the timestamps in the raw service were (kind of) valid; they were just using the wrong timezone - the timezone of the solar site instead of UTC. This meant that when the timestamps were converted using standard Unix epoch conversion rules, it looked like the time was about 10 or 11 hours ago (I’m in NSW, Australia) when the readings were really from 5 minutes ago. Once I had that worked out, it was relatively simple to work around the timestamp screwiness, and now my data, using the raw=true service, is more precise (see the sketch below for the general idea of the shift).
My sensors used the older syntax with the “sensor” integration + “rest” and “template” platforms. The latest version now uses the rest and template integrations. (Is the newer syntax any better? It’s debatable… but I think it is slightly simpler)
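To make the last three points a little more concrete, here’s a hedged sketch of the general shape - not the actual package; the entity names, the payload field names (e.g. t_stamp) and even the sign of the timestamp shift are assumptions you’d need to verify against your own data:

# Sketch only - auth headers omitted; field names like t_stamp are assumed.
input_text:
  sa_site_id:
    name: Solar Analytics site id
    initial: !secret sa_site_id          # keeps the real id out of the git repo

rest:
  - resource_template: >-
      https://portal.solaranalytics.com.au/api/v2/site_data/{{ states('input_text.sa_site_id') }}?gran=minute&raw=true
    scan_interval: 300
    sensor:
      - name: "SA Raw Site Data"
        value_template: "{{ now().isoformat() }}"   # state is just the fetch time
        json_attributes:
          - data                                    # assumed key for the raw readings

template:
  - sensor:
      - name: "SA Raw Data Timestamp"
        device_class: timestamp
        availability: "{{ state_attr('sensor.sa_raw_site_data', 'data') is not none }}"
        state: >-
          {# The raw epoch values appear to be skewed by the site's UTC offset,
             so shift them before converting. Flip the sign if your results end
             up in the future rather than ~10-11 hours in the past. #}
          {% set reported = (state_attr('sensor.sa_raw_site_data', 'data') | last)['t_stamp'] | int(0) %}
          {{ (reported + now().utcoffset().total_seconds()) | int | timestamp_local }}

If the shifted time lands within the last five minutes or so, the offset direction is right for your setup.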
If you want to install the package manually, just upload the package file to your config folder (or a subfolder) and then ensure you have a packages item in configuration.yaml that includes the package file itself (or the whole subfolder - I use the subfolder option).
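For anyone who hasn’t used packages before, the hook-up in configuration.yaml looks something like this (file and key names are just examples):

# configuration.yaml - include a single package file...
homeassistant:
  packages:
    solar_analytics: !include packages/solar_analytics.yaml

# ...or pull in every YAML file in the packages subfolder instead:
# homeassistant:
#   packages: !include_dir_named packages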
In my experience, yes.
The “raw data” sensor more closely matches the data from the Solar Analytics web page/app. From memory, the difference between the sensors is more obvious when your consumption exceeds your solar production within any 5-minute period, and when those differences are added up over a day it can become significant.
YMMV