Hi Geoff, I appreciate the information and a chance to get stuck into coding. Taking other people's code and working out what bit does what is how I tend to learn, but sometimes I get stuck. I am still ploughing on with this as it is part of a massive project:
2 x 24v tesla batteries in parallel
2 x 1000w grid tie inverters to feed back
2 x 50 amp 24v chargers to charge up the batteries
and various components to control switch on/off and monitor voltage/current/temperature
So I am digging into your code, as I want my chargers to switch on and rapidly charge the batteries at the best times of the day. Using a fixed price means that every day I have to monitor and modify. I could use IFTTT in the cloud, but I want Home Assistant to do it all, so that if the internet goes poof for a few hours the charger and inverter continue to work offline and HA can continue to control them.
So far, I am about to switch over to Agile from Economy 7. Currently the setup charges overnight and feeds back during the day, but I am trying different automations to see if I can use your graph to switch the charger on and fill up on the cheapest juice possible.
There may be other ways through the Octopus Agile integration, but my mind is fizzing with ideas and I like an option or two…
Many thanks for your input and insights - good code you have here,
Shaggy
Can anyone point me in the right direction to fix the timezone to my local time (BST) from the UTC stored within the low-price data array?
I wrote the code some while back, and as I continue to work with JSONata I find that there is always something to learn. The issue with unsorted prices should not happen, but I think I now know what caused this. From version 2, JSONata uses threaded processes when evaluating expressions. This means that any specific order of execution within an expression cannot be relied upon, and my short-cut of sorting a resulting array using a field created during the expression is not guaranteed to perform as expected.
I am currently looking again at this code to make some changes and add extra features, so in due course I should be able to provide a full update. However, replacing the expression in 'Build & Save Tariff Array' with the following should address the issue. In code blocks such as (expression1; expression2; expression3) the order of execution is guaranteed, so first building the array and binding it to a variable, then sorting the array using the now well-defined field 'from', will always work.
(
    $array := payload.import#$i.{
        "from": valid_from,
        "upto": valid_to,
        "date": $substring(valid_from, 0, 10),
        "timefrom": $substring(valid_from, 11, 5),
        "timeupto": $substring(valid_to, 11, 5),
        "import": value_inc_vat,
        "export": %.export[$i].value_inc_vat
    };
    $array^(from)
)
This should mean that you can safely remove the sort from the card. The gradient colouring is certainly an improvement, thank you!
Ah yes, daylight saving time (DST). Bit of a DiSasTer there…
I wrote all of this back in the dark winter and GMT. The problem in the UK is that for half the year we are on GMT, which is the same as (but not equivalent to) UTC, so any UTC-to-local-time issues don't show. I did make a mental note to check what happened when we put the clocks forward, but clearly missed this one.
I only use this as a visual monitor of Agile prices, and I have moved on to looking at the latest ‘Flux’ tariff offering from Octopus as that could suit my solar PV and Battery system better.
Anyway - Octopus publish as UTC throughout the year. In general terms, systems do and should stay on UTC always. HA keeps all times internally as UTC, and converts to local time at the point of display. This works via JavaScript, which runs in the user browser and has access to the machine local settings. JS has functions for moving UTC from/to local time based on the machine local settings.
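As a small, self-contained illustration of that JS conversion (the sample timestamp and variable names here are mine, not from the flow):

```javascript
// Render a UTC ISO timestamp (as published by Octopus) in UK local time.
// toLocaleString() applies the timezone rules, including the DST offset.
const utc = "2023-06-21T10:30:00.000Z"; // sample 'valid_from' value

const local = new Date(utc).toLocaleString("en-GB", {
  timeZone: "Europe/London", // pin the zone so the result is machine-independent
  day: "2-digit",
  month: "short",
  hour: "2-digit",
  minute: "2-digit",
  timeZoneName: "short",
});

// June is BST, so the displayed time is 11:30, one hour ahead of UTC
console.log(local);
```

Leaving out the `timeZone` option makes the conversion follow the browser/machine local settings instead, which is exactly what the card does at the point of display.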
What is happening here is that the ApexCharts card uses JS to process the (UTC) timestamp, and JS converts this to local time at the point of display. The other data in my array, 'time', I simply cut from the UTC string using substring, and this will not get updated at any point. JSONata is simply not timezone aware, and I have recently been working elsewhere on trying to deal successfully with timezone and DST calculations in JSONata.
I have looked at this for a day, and have got as far as a prototype fix, so I hope in due course to update the code so as to provide both UTC and ‘local’ time within the primary tariff array, created in the first stage. This can then be rippled down through the other context as required.
To fix the ‘issue’ there are two or three possible approaches. One is to just use a bit of JS and the ‘locale’ functions to do the work.
However, since Octopus only sell UK tariffs within the UK, and since the UK only has GMT or BST, and since we know exactly when this change happens and that it is only +one hour, a simple fix is to work out if we are GMT or BST, and if BST to add one hour to the time. DST starts on the last Sunday in March at 01:00 UTC, and ends on the last Sunday in October at 01:00 UTC.
Thus:
- For the current year, get the day of the week for 31 March, and 31 October (based on 0=Sunday)
- subtract this from 31 to get the date of the last Sunday in March / October
- build an ISO format timestamp as “2023-10-29T01:00:00.000Z” for the UTC time
- turn this into Unix milliseconds, and compare with ‘now’ / each timestamp to see if GMT or BST
- if in BST, add one hour to the time
For interest, here is the JSONata code (this seems OK but I am still testing it, and I have yet to incorporate it into the array-build code).
(
    /* FUNCTION: return day of week for 'yyyy-mm-dd' where Sun=0 */
    $dayofweek := function($date) {(
        $y := $number($substring($date, 0, 4));
        $m := $number($substring($date, 5, 2));
        $d := $number($substring($date, 8, 2));
        $yr := $m < 3 ? $y - 1 : $y;
        $t := [0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4];
        ($yr + $floor($yr/4) - $floor($yr/100) + $floor($yr/400) + $t[$m-1] + $d) % 7;
    )};
    /* FUNCTION: return start and end DST timestamps for a given year yyyy */
    $dstdates := function($year) {(
        $mar := $year & "-03-";
        $oct := $year & "-10-";
        $dstart := $mar & (31 - $dayofweek($mar & "31")) & "T01:00:00.000Z";
        $dend := $oct & (31 - $dayofweek($oct & "31")) & "T01:00:00.000Z";
        $a := $toMillis($dstart);
        $b := $toMillis($dend);
        $n := $toMillis($now());
        {
            "now": $n, "year": $year,
            "start": $dstart, "BST": $a,
            "end": $dend, "GMT": $b,
            "DST": $n > $a and $n < $b,
            "offset": 3600000,
            "close": $abs($a - $n) < 3*24*3600000 or $abs($b - $n) < 3*24*3600000
        };
    )};
    $dstdates(2023)
)
And here is what it produces
{"now":1687341262110,"year":2023,"start":"2023-03-26T01:00:00.000Z","BST":1679792400000,"end":"2023-10-29T01:00:00.000Z","GMT":1698541200000,"DST":true,"offset":3600000,"close":false}
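For cross-checking, the same day-of-week and last-Sunday calculation can be sketched in JavaScript (Sakamoto's table-based algorithm, which is the method the $dayofweek function above uses; the function names here are mine):

```javascript
// Sakamoto's day-of-week algorithm (Sun = 0), the same table-based method
// as the JSONata $dayofweek function.
function dayOfWeek(y, m, d) {
  const t = [0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4];
  const yr = m < 3 ? y - 1 : y;
  return (yr + Math.floor(yr / 4) - Math.floor(yr / 100) +
          Math.floor(yr / 400) + t[m - 1] + d) % 7;
}

// March and October both have 31 days, so the last Sunday of the month
// falls on (31 minus the weekday of the 31st).
function lastSunday(year, month) {
  return 31 - dayOfWeek(year, month, 31);
}

// Build the UTC changeover timestamps for 2023.
const dstStart = "2023-03-" + lastSunday(2023, 3) + "T01:00:00.000Z";
const dstEnd = "2023-10-" + lastSunday(2023, 10) + "T01:00:00.000Z";
console.log(dstStart, dstEnd); // 26 March and 29 October for 2023
```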
The day-of-week code needs further testing, and then I need to add this to the main code. For simplicity I plan to code it so that April to September assumes BST and November to February assumes GMT, but in March and October (the last 10 days) build the changeover times, apply the test to the first/last values, and then, if the change occurs during the array block, test each timestamp and tag each one GMT/BST.
Then it is a simple case of either adding one hour to create a new 'local time' field, or simply letting the user make the decision themselves.
Of course, using Local Time (BST) rather than UTC to do any automation would cause issues twice a year, particularly in October when the period 01:00-01:30 would appear twice.
01:00-01:30 (BST)
01:30-02:00 (BST) = 01:00 UTC, hence BST->GMT
01:00-01:30 (GMT)
01:30-02:00 (GMT)
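That double appearance can be demonstrated with a short JavaScript sketch (timestamps chosen by me around the October 2023 changeover; assumes a runtime with full timezone data):

```javascript
// Two different UTC instants either side of the 01:00 UTC changeover
// render as the same local wall-clock time in Europe/London.
const fmt = (iso) =>
  new Date(iso).toLocaleTimeString("en-GB", {
    timeZone: "Europe/London",
    hour: "2-digit",
    minute: "2-digit",
  });

const beforeChange = fmt("2023-10-29T00:30:00.000Z"); // still BST: 01:30 local
const afterChange  = fmt("2023-10-29T01:30:00.000Z"); // now GMT: 01:30 local

// Both display "01:30", so local time alone cannot identify the half-hour slot.
console.log(beforeChange, afterChange);
```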
This is the challenge I face if I want to use the Octopus 'Flux' tariff. This has a fixed low-import period between 02:00 and 05:00, which would be great with a timed-charge period on my inverter. Octopus suggest that this will always be 'local time', so on 29th October 2023 my inverter, running on BST, would be one hour out relative to the tariff, now running on GMT. Since my inverter does not keep good time (it gains about four seconds every day) I have an auto-reset running via Modbus, and I will have to improve this to adjust for the DST change during the night. Interesting, as 02:00 BST (turn on inverter charge) instantly becomes 01:00 GMT (don't turn on inverter charge), so ideally the change from BST to GMT should happen before 01:00 UTC.
So much easier to stick with UTC for everything!
Updated code to follow, when I have time to write and fully test…
This is a great project but I seem to be stumbling a little. I’ve used Node Red for some time but haven’t used it to populate sensors and I’m finding it hard to follow exactly what’s happening. I’ve pasted in the Node Red code from above and get exactly the same as your diagram. However, I can’t seem to populate the prices sensor while the sequence table works fine. This is what I get when I look at the sensors in the developers tools:
I’ve debugged what’s going into the “Octopus Agile Prices” and all seems correct (and I don’t get any errors anyway). I’ve even tried duplicating exactly what’s in “Octopus Agile Sequences” and feeding it from the Table node but nothing ever goes into the prices sensor.
I don’t fully understand how these sensors are configured and used in Node Red, the documentation isn’t very clear. Any ideas what I’m doing wrong? Thanks
I have solved this by deleting the config nodes from your import and recreating them with different names. I've no idea what was wrong or why this has worked.
Good to hear that you fixed it - that saved me trying to debug the issue remotely!
The HA websocket nodes are a powerful addition to Node-RED and make full two-way communication between HA and NR possible. They are documented at https://zachowj.github.io/node-red-contrib-home-assistant-websocket/guide/
At first they do take a bit of time and effort to get to understand how it all fits together, but it is worth the effort.
The 'sensor node' works with an 'ha entity config node'. The sensor node is, in effect, just the front-end and visible node that sits in the flow. The work is done by the config node, and these can be seen if you use the 'configuration nodes' tab in the Node-RED editor right-hand debug window. The config node holds the information required to create and maintain the sensor entity in HA, and this node in turn uses the HA server configuration node (which sets up and manages the websocket connection between NR and HA).
I can only guess that your HA server node was OK, as you had the other 'agile sequence table' working, and that your other sensor node and its related configuration node were OK and happily working. When this all works it works brilliantly; however, when something goes wrong it is quite sensible to delete the configuration node, recreate it, and set it up again. This should force the new configuration node to correctly reconnect to the HA server configuration node, to correctly re-register the entity in HA, to connect to the flow sensor node, and to take the payload/attribute data from the sensor node and pass it to the entity in HA.
Sometimes just disabling the sensor & config nodes, redeploying, and then re-enabling can fix things. Sometimes a full restart of NR is required (between disabling and re-enabling), and in the extreme I have had to delete the sensor and config nodes and restart both HA and Node-RED to force HA to 'forget' the entities, and then replace everything.
Thanks for this. I confess to still not fully understanding how the sensor and config nodes fit together. The documentation explains (a little!) how to configure them but is limited on their structure (and I've worked in IT all my life, so the concepts aren't alien to me). One thing that's "interesting" is that the sensors don't persist across HA reboots - they get created by Node Red and only exist while it's running. This may affect how I develop what I'm thinking of doing (this is an evolving idea); let me explain. I wonder if Helpers would help with persistence and be another approach?!
I noticed that the Octopus Agile tariffs can be -ve (i.e. you're paid for using electricity). We don't use much electricity, but I was wondering whether I could exploit this a little through our immersion heater. However, I want to do some checks before changing tariff. So, through HA, I want to:
- collect my current tariff (I'm on the Octopus flexible tariff at the moment) and the Agile tariffs for an indefinite period, and display these both graphically and in a table so I can compare the two over any period I've collected data for
- Using my consumption data, calculate the cost of the electricity I’ve used under both tariffs so I can see if the agile tariff would be better or worse.
- Implement an NR automation that will activate if the agile tariff in the current half hour is < 0.
- Maybe have an export facility as some analysis might be better using Excel.
So far, I’ve just started getting the data from Octopus (I’ve used the Octopus Energy integration and your NR template). I now need to start thinking where I’ll store the data (I’ve not done anything in HA like this so will be Googling lots!). It’ll take me some time as I haven’t a lot of time I can dedicate to this but I’ll report back on my progress.
In looking recently at another Flex Table card I have, I realised that this can be done very easily - I can't think why it did not occur to me before.
The Flex Table card allows for JavaScript eval() action on the data values, and thus
modify: (new Date(x.from)).toLocaleString("en-GB", {day:"2-digit", month:"short", hour:"2-digit", minute:"2-digit", timeZoneName:"short"})
does the trick. The 'locale' JS functions use the machine local settings; with locale 'en-GB' it should show the current UK time, adjusting for DST.
The data fields required are array.from and array.upto, which are already both UTC ISO, and the optional format object picks out just the date/time parts for display.
Full card is now:
type: custom:flex-table-card
title: Octopus Agile Period
entities:
include: sensor.octopus_agile_sequence_table
columns:
- data: both_array
modify: x.mode
name: Mode
- data: both_array
modify: >-
(new Date(x.from)).toLocaleString("en-GB", {day:"2-digit", month:"short",
hour:"2-digit", minute:"2-digit", timeZoneName:"short"})
name: Start
- data: both_array
modify: >-
(new Date(x.upto)).toLocaleString("en-GB", {day:"2-digit", month:"short",
hour:"2-digit", minute:"2-digit", timeZoneName:"short"})
name: Stop
- data: both_array
modify: x.duration
name: Mins
- data: both_array
modify: x.average
name: Price (p)
and the result is
As promised, a report on how I’ve got on with comparing my existing costs with what they would be if I were on the agile tariff. It was certainly quite complex, more in working out how to do it than doing it. I felt it was worth a detailed explanation so I’ve published it on my substack.
I’ve used InfluxDB and Grafana in my approach and insert data into InfluxDB directly (this was the hardest part - nowhere on Google could I find anything on how to insert into InfluxDB).
Any feedback welcome.
@PedroKTFC - I'm having the same issue as you, with the Agile rates not being retrieved yet the sequence table working fine. Can you elaborate on your fix (or better still post your Node-RED JSON)? I did look for the config sections you mention but there seem to be multiple…
My fix wasn’t really a fix, it was a workaround. I deleted Geoff’s original entity config nodes. You’ll see mine listed below:
Click on the cog wheel to see them (highlighted in yellow). You can check this has worked properly by going into the Developer Tools and select the STATES tab (at the top). You can then use the Filter entities field to search for Geoff’s original entities by name (might be a good idea to do a search before and after you delete the entities to see they go).
Having deleted the nodes, Node Red should now show the Octopus Agile Sequences and Prices nodes as in error. You now need to create a config entity node. Make sure you use a different name to that originally used (I think somewhere some old config information is kept that prevents the original names being updated correctly). Check your new named sensors are present in HA using the Developer Tools.
You might want to force injection so that the sensors get populated.
Hope that’s clear and what you need. Let me know if not and we can try and work out what else is happening.
I should have said: once you click on the cog wheel, double click on the config node to open it. At the top are three buttons, including a Delete button. Use that to delete Geoff's original nodes.
Thanks for your help Pedro - I'll give this a try. I did have some luck modifying the original sequence sensor to ALSO carry the rates property… though then I don't really know what to do with it in any of the cards, i.e. how to show it.
Good luck with it. I do recommend InfluxDB and Grafana for dealing with storing and displaying this type of data. It should store and display Geoff’s data more or less automatically.
The full Agile Tariff array (96 records ie two days) is already passed to the ‘Octopus Agile Prices’ sensor entity in the attributes as ‘array’, so you really don’t need to add it to the sequence entity.
There is a limit to the size of data that can be passed in attributes (as I found out the hard way) and there is a general view that all data should be passed to HA using only state values and not attributes. However…
HA entity states can only take simple variables, so objects and arrays have to go via attributes. This therefore requires a bit more work to get at the data in HA, however displaying the full agile price table is relatively straightforward.
I use the custom flex-table-card, which builds basic tables very easily. For attribute data it requires a bit of code to modify the data. Above is a simple test card that uses the octopus-agile-prices entity, picks up the data from the ‘array’ attribute, and then uses a bit of simple code (x.‘object key’) to pull the variables from the object. Since it shows the data ‘as is’ the time is the UTC timestamp, and the full 96 records are shown, but this should give you the basics of how to build a table using entity attributes.
https://github.com/custom-cards/flex-table-card
If you don’t fancy installing and using a custom card, then the included markdown card will also work, with a bit of code.
Time Price
{%- set results = state_attr('sensor.octopus_agile_prices', 'array') %}
{%- for record in results %}
{%- set ts = as_timestamp(record.from) %}
{%- set ts_now = ((as_timestamp(now())/1800)|round(0,'floor')|int * 1800) %}
{%- if ts >= ts_now %}
{{ ts|timestamp_custom('%H:%M', local=true) }} {{ record.export }}
{%- endif %}
{%- endfor %}
Here the ‘code’ is the included template based on Jinja2. You can find the details of HA templating at
https://www.home-assistant.io/docs/configuration/templating/
and test it using the Developer Tools > Template editor in HA.
The ‘code’ here does a bit of work to pull the entity attribute, get the timestamp, and test against the current time so as to only display current and future records. The use of ‘local=true’ in the timestamp rendering filter ‘timestamp_custom’ does the dirty work of dealing with DST on the fly.
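The floor-to-half-hour test the template performs can be sketched in JavaScript (the sample records and function name are illustrative only, not part of the flow):

```javascript
// Keep only the current and future half-hour records, as the Jinja2 template
// does: floor 'now' to the start of its half-hour slot (1800 seconds) and
// drop anything earlier.
const records = [
  { from: "2023-06-21T09:30:00.000Z", import: 15.2 },
  { from: "2023-06-21T10:00:00.000Z", import: 14.1 },
  { from: "2023-06-21T10:30:00.000Z", import: 18.9 },
];

function currentAndFuture(recs, nowMs) {
  const slotStart = Math.floor(nowMs / 1000 / 1800) * 1800; // Unix seconds
  return recs.filter((r) => Date.parse(r.from) / 1000 >= slotStart);
}

// At 10:10 UTC the 10:00 slot is 'current', so the 09:30 record is dropped.
const kept = currentAndFuture(records, Date.parse("2023-06-21T10:10:00.000Z"));
console.log(kept.length); // 2
```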
Big tables of data are naturally more difficult to build, display sensibly, and to use, so it may well be more appropriate to just stick with the ‘current’ and ‘next’ values. These are available in the sensor as attributes, so can be displayed using a basic entity card. Personally I like graphs, as they show a great deal of data (and relationships) visually in a compact format.
Observation.
OK, so the Octopus Energy API is for the most part very stable and reliable.
However, in looking today I have an error in my sensor node. Digging further and looking at the return from the API call itself (which calls for the latest 96 records…) I get
The API call appears to be returning duplicate records. The normal return is the latest record first, working backwards; hence asking for 96 records this afternoon returns from tomorrow's end of period (23:00 UTC) back to yesterday. Here I can see the exact same record twice (on multiple occasions), which means that the records I am expecting for now (today) are just not there.
Not an error that can easily be recovered from, but this may explain the (recent?) issues with not seeing the Agile Prices sensor return. My usual option is to wait until tomorrow, but clearly getting unexpected duplicates on a direct API call is not easy to deal with.
If anyone is interested, out of the 96 records called for there are 52 distinct ‘valid_from’ times, so this is not just repeating every single record, just most of them.
Thanks Geoff, I was just trying to diagnose this - removed Node-RED, reinstalled, deleted the code, reinstalled the code, tried older versions of Node-RED, all with the same result. Should have checked here first…
Hopefully after the next update it will return to normal, as I like your gubbins here - it works, turning stuff on and off, and I can visually see the times it decides are best…
Regards,
Shaggy
Only spotted this by chance, and I have not seen it before. Although it looks like an Octopus API issue, it may be something to do with the Node-RED http request node. The https://agileprices.co.uk/?tariff=AGILE-22-08-31&region=L website is showing the correct figures for today (2nd) but has missing figures for yesterday (1st). My bet is on the Octopus records. Debugging stuff takes a lot of time and effort…
In the meantime, a simple patch can be applied to the 'Results' node just after the API call.
The JSONata function $distinct() returns an array with duplicates removed. At the moment this reduces the 96 records down to 52 for me, so it works fine, except between 16:00 and 23:00 immediately after the update, when the tail end of the 96 (the remainder of today) is missing, as it was yesterday up to around 20:00.
PATCH:
In the ‘Results’ change node, set the ‘to the value’ dropdown to ‘J:’ JSONata, and add the expression
$distinct(payload.results)
This should do the trick if the problem repeats, although I suspect at 16:00 the issue will have gone away.
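For anyone curious what $distinct() is doing here, a rough JavaScript equivalent (the helper name and sample records are mine) is:

```javascript
// Remove exact duplicate records, keeping the first occurrence of each -
// a whole-value comparison, like JSONata's $distinct().
function distinct(records) {
  const seen = new Set();
  return records.filter((r) => {
    const key = JSON.stringify(r); // adequate here, as field order is consistent
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}

const results = [
  { valid_from: "2023-07-02T21:30:00Z", value_inc_vat: 19.5 },
  { valid_from: "2023-07-02T21:30:00Z", value_inc_vat: 19.5 }, // duplicate
  { valid_from: "2023-07-02T21:00:00Z", value_inc_vat: 21.0 },
];
const cleaned = distinct(results); // two distinct records remain
```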
I have made several changes to this code (adding in BST local time and a binary ‘switch’ sensor) but every time I think it is good to release something else crops up, so I may have another think - one thing would be to pull down 144 records, strip out duplicates (flag an error if the 144 shrinks in size) then pick out the top 96 (and check the start-end dates are correct). As usual the error avoidance and checking code is considerably bigger than the code required to do the work.