EMHASS: An Energy Management for Home Assistant

I use the Include option in the recorder settings, so in my setup everything is excluded by default except what I really want to retain; this helps me keep the database size under control.
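
For anyone wanting to copy that pattern, a minimal sketch of such a recorder configuration could look like this (the entity ids are placeholders for your own PV, load and price sensors):

recorder:
  purge_keep_days: 21
  include:
    entities:
      # Only these entities are recorded; everything else is excluded by default.
      - sensor.pv_power
      - sensor.house_load_power
      - sensor.electricity_price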

Alternatively, could we move to long-term statistics? Ultimately, a downsampled dataset should be sufficient for EMHASS, shouldn't it?

I had the same idea and wanted to test it using a VM.
I'll let you know if I find out something interesting.

In the meantime:

Well, just the PV and load sensors on one side, and then the sensor created by EMHASS that you use to control your real switches.
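
As a rough sketch (the switch entity is a placeholder, and sensor.p_deferrable0 assumes you let EMHASS publish its default sensor for the first deferrable load), an automation along these lines can drive the real device:

automation:
  - alias: "Control deferrable load 0 from EMHASS"
    trigger:
      - platform: state
        entity_id: sensor.p_deferrable0
    action:
      - choose:
          # Turn the device on whenever EMHASS schedules power for it.
          - conditions:
              - condition: numeric_state
                entity_id: sensor.p_deferrable0
                above: 0
            sequence:
              - service: switch.turn_on
                target:
                  entity_id: switch.my_deferrable_device
        # Otherwise keep the device off.
        default:
          - service: switch.turn_off
            target:
              entity_id: switch.my_deferrable_device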

Referring to my post, does somebody have an idea?
I have now switched back to

"set_def_constant":[false]

because the timeframe is always moved to a later time, even when that is not the optimum…

Hello,
Maybe someone can explain these variables I'm receiving in the EMHASS table:

  • P_PV
  • P_Load
  • P_deferrable0
  • P_grid_pos
  • P_grid_neg
  • P_grid
  • unit_load_cost
  • unit_prod_price
  • cost_profit
  • cost_fun_cost
Or maybe share a web resource where these are clearly defined (and whether they are collected or calculated)?

Here's a breakdown of the headings in the EMHASS forecast table:

  • P_PV: Forecasted power generation from your solar panels (Watts). This helps you predict how much solar energy you will produce during the forecast period.
  • P_Load: Forecasted household power consumption (Watts). This gives you an idea of how much energy your appliances are expected to use.
  • P_deferrable0: Forecasted power consumption of deferrable loads (Watts). Deferrable loads are appliances that can be managed by EMHASS. EMHASS helps you optimise energy usage by prioritising solar self-consumption and minimizing reliance on the grid, or by taking advantage of supply and feed-in tariff volatility. You can have multiple deferrable loads, and you use this sensor in HA to control these loads via a smart switch or other IoT means at your disposal.
  • P_grid_pos: Forecasted power exported to the grid (Watts). This indicates the amount of excess solar energy you are expected to send back to the grid during the forecast period.
  • P_grid_neg: Forecasted power imported from the grid (Watts). This indicates the amount of energy you are expected to draw from the grid when your solar production is insufficient to meet your needs, or when it is advantageous to consume from the grid.
  • P_grid: Forecasted net power flow between your home and the grid (Watts). This is calculated as P_grid_pos - P_grid_neg. A positive value indicates net export, while a negative value indicates net import.
  • unit_load_cost: Forecasted cost per unit of energy you pay to the grid (typically $/kWh). This helps you understand the expected energy cost during the forecast period.
  • unit_prod_price: Forecasted price you receive for selling excess solar energy back to the grid (typically $/kWh). This helps you understand the potential income from your solar production.
  • cost_profit: Forecasted profit or loss from your energy usage for the forecast period. This is calculated as unit_load_cost * P_Load - unit_prod_price * P_grid_pos. A positive value indicates a profit, while a negative value indicates a loss.
  • cost_fun_cost: Forecasted cost associated with deferring loads to maximize solar self-consumption. This helps you evaluate the trade-off between managing the load or not, and the potential cost savings.

Web Resources:

You can find information and definitions of these headings in the following resources:

One correction: P_grid_pos and P_grid_neg should be inverted:

P_grid_pos is the power you take from the grid, i.e. power flowing from the grid into your home.

P_grid_neg is the power you inject into the grid, typically the excess power from rooftop PV.

Thank you for the answer and the link.
Also, a small question on the same topic: I do not have solar panels, but I do not understand how to switch them off in the configuration. For now I have specified a variable for PV production that is always zero, since it is mandatory.
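
For reference, a minimal sketch of that "always zero" workaround as a template sensor (the name is arbitrary, and whether there is a cleaner way to disable PV in EMHASS I can't say):

template:
  - sensor:
      - name: "Dummy PV Power"
        unique_id: dummy_pv_power
        unit_of_measurement: "W"
        device_class: power
        # Always reports zero production, just to satisfy the mandatory PV sensor.
        state: "0"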

I was about to suggest the same. I think this is the right approach and I would do the same.

I think this is the only way. Not an expert here so don't take my word for it.

Hi all,
hope anyone can help me. I'm struggling with the syntax of my shell command to run a day-ahead optimization:
Running the shell command gives a 400 Bad Request, although the syntax seems correct:

trigger_entsoe_da: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{(states('sensor.electricity_price_offtake_next24h_1')+states('sensor.electricity_price_offtake_next24h_2'))}},\"prod_price_forecast\":{{(states('sensor.electricity_price_offtake_next24h_1')+states('sensor.electricity_price_offtake_next24h_2'))}},\"pv_power_forecast\":{{states('sensor.solcast_24hrs_forecast')}}}' http://localhost:5000/action/dayahead-optim"

I escaped the double quotes.

If I enter the above syntax in HA developer tools template, it resolves correctly:

But still the shell command returns the following error:

stdout: "HTTP/1.1 400 BAD REQUEST\r\nContent-Length: 167\r\nContent-Type: text/html; charset=utf-8\r\nDate: Wed, 13 Dec 2023 17:45:45 GMT\r\nServer: waitress\r\n\r\n<!doctype html>\n\n400 Bad Request\n

Bad Request

\n

The browser (or proxy) sent a request that this server could not understand.

"
stderr: "% Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 1035 100 167 100 868 39155 198k --:--:-- --:--:-- --:--:-- 252k"
returncode: 0

See if you can run the expanded curl command directly from the command line; unfortunately, the escaping needs to change slightly.
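
For example, something along these lines run from a terminal session (the numbers are placeholders standing in for the rendered templates, and each list has to cover the whole optimization horizon), just to check whether the JSON itself is accepted before fighting the shell_command escaping:

# Interactive test: single quotes around the JSON payload, plain double quotes inside it.
curl -i -H "Content-Type: application/json" -X POST \
  -d '{"load_cost_forecast":[0.25,0.26,0.27],"prod_price_forecast":[0.05,0.05,0.06],"pv_power_forecast":[0,250,900]}' \
  http://localhost:5000/action/dayahead-optim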

Hello Community,

For simplicity let's assume it's midnight, Sunday has just started, and we are using the naive approach for load forecasting; under this scenario we take the last 24h to estimate the next 24h.

But I was thinking that, in general, I expect some periodicity in our activities based on the day of the week. For example, maybe Sunday is pizza day and, because of the oven, we can expect a consistently higher load compared to other days.

Having a look at the documentation, it seems possible to pass your own forecast for the load (excluding the deferrable loads, of course) using the dictionary key load_power_forecast.

Apart from wondering if anybody is passing some ad-hoc computed load estimation, I am interested to know whether anybody has already developed an approach to estimate tomorrow's load based on the N previous Sundays (the next day we would pass one based on the N previous Mondays, and so on).
If we want to generalize, we would have a 24h rolling-window forecast where each datapoint is the average of the corresponding datapoints from the N x 24h windows of the N past weeks.
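
If it helps, passing such a pre-computed profile is just another runtime parameter on the same call used elsewhere in this thread; a shortened sketch with placeholder values (the list needs one value per optimization timestep):

# The six values here are placeholders; in practice the list would hold the
# weekday-averaged profile for every timestep of the horizon.
curl -i -H "Content-Type: application/json" -X POST \
  -d '{"load_power_forecast":[450,430,410,820,950,600]}' \
  http://localhost:5000/action/dayahead-optim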

This approach would also mitigate the situation of having a very low load forecast for tomorrow if today you were away.

I hope I was clear enough in my explanation.
Is anybody doing something similar or thinking the same?

This is how the ML forecaster works: if you retain weeks of data via your recorder, it will detect and recreate the daily and weekly cycles for future forecast predictions.
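
For reference, a minimal sketch of fitting the forecaster through its action endpoint (parameter names as in the EMHASS machine-learning documentation; the sensor name and values are only illustrative):

# days_to_retrieve must not exceed what your recorder actually keeps,
# and var_model must be the exact entity id of your load sensor.
curl -i -H "Content-Type: application/json" -X POST \
  -d '{"days_to_retrieve": 21, "model_type": "load_forecast", "var_model": "sensor.power_load_no_var_loads", "sklearn_model": "KNeighborsRegressor", "num_lags": 48}' \
  http://localhost:5000/action/forecast-model-fit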

Thanks Mark, I understood the ML was doing something more sophisticated (of course it is), finding a pattern, but I didn't get that it was considering the specific day of the week. I came to this thought because the ML method, for me, was not performing much better than the standard one. But following what you say, maybe feeding it just 21 days is not enough.
I will study the documentation better and maybe perform some more tests (now we also have the long-term statistics from HA, which maybe can play a role in this).
Thanks.

I wanted to perform some more tests using the ML method, and today I'm also getting an error for the data I'm passing to EMHASS:

2023-12-15 10:33:17,238 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2023-12-15 10:33:17,238 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)

So I checked the load sensor and the data available and I was surprised to notice that I have some long term statistics much older than I would expect. This feature was introduced about one week ago but I can see statistics starting from June 2023.

I'm wondering how this is possible, given that my purge days setting is 21 and this feature was only launched recently…

@RT1080, did you identify what the problem was on your side?

Ok, so I tried to run the ML model fit again using just 20 days, and this time it worked.
Maybe something related to… I don't know. I'm changing the purge days to 22 to be sure I have 21 days available.
But more interestingly, and following my previous message, this makes me think the long-term statistics data is not easily accessible by other applications (or maybe it is not, without some changes to the code… I see in the history that the label shows "long term statistics", so maybe this data is accessible in a different way).

The EMHASS ML forecaster depends on recorder data being available; it currently cannot access the long-term statistics. You could extend the recorder data window for your EMHASS data to capture those weekly, monthly and yearly trends.
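
A minimal sketch of what that could look like, combined with the include filter discussed earlier in the thread so the database stays manageable:

recorder:
  # Keep roughly two months of history so daily and weekly cycles are learnable.
  purge_keep_days: 60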

Long-term statistics have always been there, but not easily accessible; 2023.12 updated the history card so it can display both recorder data and long-term statistics. The energy dashboard also accesses long-term statistics, which is why it can go back so far.