It happened again last night, which meant neither of the two devices that should have been switched off at 0500 (as a result of the input_boolean being set to a false state) behaved correctly. Both ran for another 30 mins before being switched off at 0530.
Could it perhaps be an API/server error on the Octopus end? Is there a way we can catch/check what is being retrieved?
As far as I can tell, JSONDecodeError: Expecting value: line 1 column 1 (char 0) specifically happens with an empty string (i.e. empty response content)
I suppose it could be the problem, but I don’t know enough about Python to know how/where to start modifying the code to better trap this.
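For what it’s worth, the usual way to trap that is to check for an empty body before decoding. A minimal sketch, assuming the integration fetches the rates with requests (fetch_rates and the logging setup here are illustrative, not the integration’s actual code):

import logging
import requests

_LOGGER = logging.getLogger(__name__)

def fetch_rates(url):
    # Return decoded JSON, or None if the body is empty or not valid JSON.
    resp = requests.get(url, timeout=10)
    if not resp.text.strip():
        # An empty body is exactly what produces
        # "Expecting value: line 1 column 1 (char 0)"
        _LOGGER.warning("Empty response from %s (HTTP %s)", url, resp.status_code)
        return None
    try:
        return resp.json()
    except ValueError as err:  # JSONDecodeError is a subclass of ValueError
        _LOGGER.warning("Invalid JSON from %s: %s", url, err)
        return None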
As an aside, I’ve never really understood why the integration goes back to the API servers as often as it does. The costs are published at about 1600, so from then on it could check every 30 mins until it has them, but once it has them that’s it until the next day, isn’t it? Also, I can’t see the need to retrieve consumption figures more than once a day - but others may feel differently.
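For what it’s worth, the check being suggested there is only a few lines of logic; a rough sketch of the idea (not the integration’s actual code, and have_tomorrows_rates is a made-up flag):

from datetime import datetime, time

def should_poll_rates(now: datetime, have_tomorrows_rates: bool) -> bool:
    # Tomorrow's prices appear around 1600, so only poll from then on,
    # and stop as soon as we actually have them.
    return now.time() >= time(16, 0) and not have_tomorrows_rates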
I think that is the issue, I can see a few entries in my log too. I agree that it probably doesn’t need to poll out so much, but it’d be a fair bit of work to change so it’s unlikely to happen any time soon. (I always welcome pull requests though) There are a few things I’d do differently if I started again from scratch.
I think I should be able to fix the current issue. It seems to be happening in the update of device_times, but the exception is thrown back up a level, so the trigger of the timer (which happens after that update) doesn’t happen. The trigger of the timer doesn’t actually poll out, so I can add in some better error handling to fix it.
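The shape of that fix is roughly the following; update_device_times and trigger_timer are stand-ins for the integration’s real methods, whatever they are actually called:

import logging

_LOGGER = logging.getLogger(__name__)

def half_hourly_update(update_device_times, trigger_timer):
    # Run the API-dependent update, but never let a failure there stop
    # the purely local timer trigger from firing.
    try:
        update_device_times()  # this is the step that can raise the JSONDecodeError
    except Exception as err:
        _LOGGER.error("device_times update failed, will retry next slot: %s", err)
    trigger_timer()  # local only, so safe to run regardless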
Do you think it’s possible that this is some kind of rate limiting/DoS protection on the Octopus end? Whilst I have no idea how many people are using this plugin (perhaps you do), if everyone refreshes every half an hour with an NTP-synced box… I might try skewing the time on my HA box as a quick test.
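Rather than skewing the clock itself, the same spreading effect could come from a per-install random offset on the poll schedule. A rough sketch of that idea (this is not something the integration does today):

import random
import time

# Chosen once at startup, so every half-hourly poll lands a few minutes
# after :00/:30 instead of exactly on it.
POLL_OFFSET_SECONDS = random.randint(0, 300)

def sleep_until_next_poll(slot_seconds=1800):
    now = time.time()
    next_slot = (int(now) // slot_seconds + 1) * slot_seconds
    time.sleep(next_slot + POLL_OFFSET_SECONDS - now)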
I was playing around with a couple of manual curl calls yesterday, for around 20 seconds either side of the 00/30 minute marks, and (whilst it may have just been pot luck) it did seem to take 5 seconds or so in some cases to return the API data.
Sometimes it seems that the ‘current rate’ does not update, whilst the previous rate did/does, so the sensors can also get a little out of sync with the card data.
EDIT: Well now I’m confused… got ‘the error’ at 08:00, but the octopusagile.* entities DID update this time, including the ‘start_in’ attribute which didn’t seem to previously when I caught it:
2021-04-23 08:00:05 ERROR (SyncWorker_1) [custom_components.octopusagile] 'NoneType' object has no attribute 'attributes'
2021-04-23 08:00:20 ERROR (SyncWorker_10) [custom_components.octopusagile] Expecting value: line 1 column 1 (char 0)
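That first error looks like something reading .attributes from an entity state that doesn’t exist yet (hass.states.get() returns None in that case). A guard along these lines is the usual fix; the function name and entity id below are just for illustration:

import logging

_LOGGER = logging.getLogger(__name__)

def read_start_in(hass, entity_id="octopusagile.dishwasher"):
    state = hass.states.get(entity_id)
    if state is None:
        # Without this check you get "'NoneType' object has no attribute 'attributes'"
        _LOGGER.warning("%s not available yet, skipping this update", entity_id)
        return None
    return state.attributes.get("start_in")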
Installed via HACS last night and all OK so far (I think!).
I wanted the sensors for when to run appliances to show relative times (instead of absolute) by default.
I edited __init__.py to re-order the attributes so start_in is the first (instead of start_time) - this appears to have worked for me, but I’m conscious I may “break” something doing this…
You can do all sorts of things with template sensors.
For example, I wanted one which just showed HH:MM (no date, seconds, etc.), so I use it in conjunction with timestamp_custom:
- platform: template
  sensors:
    dishwasher_cheapest_time_to_run:
      friendly_name: Cheapest Time To Run Dishwasher
      value_template: "{{ as_timestamp(states('octopusagile.dishwasher')|timestamp_local) | timestamp_custom('%H:%M') }}"
However, to customise things like units and icons, you should use customize.yaml - so in configuration.yaml:
homeassistant:
  customize: !include customize.yaml
Then in customize.yaml you can override various things. For example, my electricity monitor is incorrectly detected as being kWh when it’s actually Wh, so I override this:
The issue I have found is that (specifically with the Octopus Agile entities) the customisations were dropped each time the sensor updated (yet if I went in to customise them again, all the customisations were still there and clicking save re-applied them).
More playing about / learning required - thanks for the pointers.
Hi @Markg, is there a guesstimate available on when a code update is going to be available to lessen the dependency of the timer trigger on a successful poll of the API servers?
The below is just from today.
Whilst I might be wrong, I have a hunch that it’s because of the number of people re-querying every half an hour.
Not ideal, but I think the below has more or less resolved the issues I had (‘start_in’ not updating due to the update failure, and not updating automatically at start-up) - I haven’t noticed any devices fail to start, or fail to stop, since adding this.
I think you’re probably right, especially if running the service again in a couple of minutes works fine. I will add some sleeps and retries in for anything I can’t get locally in the short term.
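For reference, the sort of wrapper that covers the ‘sleeps and retries’ idea is only a few lines. A minimal sketch, where fetch_rates is a stand-in for whatever call actually hits the API:

import logging
import time

_LOGGER = logging.getLogger(__name__)

def with_retries(fetch_rates, attempts=3, delay=60):
    # Try the API call a few times, sleeping between attempts, and give up
    # gracefully rather than raising into the timer logic.
    for attempt in range(1, attempts + 1):
        try:
            data = fetch_rates()
            if data:
                return data
            _LOGGER.warning("Empty result on attempt %d/%d", attempt, attempts)
        except Exception as err:
            _LOGGER.warning("Attempt %d/%d failed: %s", attempt, attempts, err)
        if attempt < attempts:
            time.sleep(delay)
    return None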
Yup! Sorry, I should’ve put something on here. There are some notes about what I’ve done so far: essentially the rate sensors and timers should now be more reliable, run_devices may still fail but will log out something useful, and you can disable the cost calculations (which may still cause errors) by setting “consumption: false” in the config yml.
Let me know how you get on!
Did the current, next, previous sensors get removed in this latest 1.1.1 update?
sensor.octopus_agile_current_rate, for example, shows as ‘unknown’ for me after upgrade - along with previous and next, although min_rate seems to be populated.
EDIT: Seems not, from checking GitHub; perhaps it just needs to run for a little while. Although I do see the below comment in the GitHub issue log:
“Need to force all_rates to update from the previous 30 min slot if the entity doesn’t already exist to avoid unknown values in the sensors on first startup.”