EMHASS: Energy Management for Home Assistant

Here’s a breakdown of the headings in the EMHASS forecast table:

| Heading | Explanation |
| --- | --- |
| P_PV | Forecasted power generation from your solar panels (Watts). This helps you predict how much solar energy you will produce during the forecast period. |
| P_Load | Forecasted household power consumption (Watts). This gives you an idea of how much energy your appliances are expected to use. |
| P_deferrable0 | Forecasted power consumption of deferrable loads (Watts). Deferrable loads are appliances that can be managed by EMHASS. EMHASS helps you optimise energy usage by prioritising solar self-consumption and minimising reliance on the grid, or by taking advantage of supply and feed-in tariff volatility. You can have multiple deferrable loads, and you use this sensor in HA to control these loads via a smart switch or other IoT means at your disposal. |
| P_grid_pos | Forecasted power exported to the grid (Watts). This indicates the amount of excess solar energy you are expected to send back to the grid during the forecast period. |
| P_grid_neg | Forecasted power imported from the grid (Watts). This indicates the amount of energy you are expected to draw from the grid when your solar production is insufficient to meet your needs, or when it is advantageous to consume from the grid. |
| P_grid | Forecasted net power flow between your home and the grid (Watts). This is calculated as P_grid_pos - P_grid_neg. A positive value indicates net export, while a negative value indicates net import. |
| unit_load_cost | Forecasted cost per unit of energy you pay to the grid (typically $/kWh). This helps you understand the expected energy cost during the forecast period. |
| unit_prod_price | Forecasted price you receive for selling excess solar energy back to the grid (typically $/kWh). This helps you understand the potential income from your solar production. |
| cost_profit | Forecasted profit or loss from your energy usage for the forecast period. This is calculated as unit_load_cost * P_Load - unit_prod_price * P_grid_pos. A positive value indicates a profit, while a negative value indicates a loss. |
| cost_fun_cost | Forecasted cost associated with deferring loads to maximise solar self-consumption. This helps you evaluate the trade-off between managing the load or not, and the potential cost savings. |
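As a quick numeric illustration of the cost_profit relation above, here is a minimal sketch, assuming 30-minute optimisation steps (the function name and step length are illustrative, not part of EMHASS):

```python
def step_cost_profit(p_load_w, p_grid_pos_w, unit_load_cost, unit_prod_price,
                     step_hours=0.5):
    """Per-step cost_profit as described in the table:
    unit_load_cost * P_Load - unit_prod_price * P_grid_pos,
    with average power (W) converted to energy (kWh) over the step."""
    load_kwh = p_load_w / 1000 * step_hours
    export_kwh = p_grid_pos_w / 1000 * step_hours
    return unit_load_cost * load_kwh - unit_prod_price * export_kwh

# e.g. 2 kW load, 1 kW export, $0.30/kWh import, $0.10/kWh feed-in
print(round(step_cost_profit(2000, 1000, 0.30, 0.10), 3))
```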

Web Resources:

You can find information and definitions of these headings in the following resources:


One correction: P_grid_pos and P_grid_neg should be inverted:

P_grid_pos is the power you take off the grid, so flowing from the grid into your home.

P_grid_neg is the power you inject in the grid, typically the excess power from rooftop PV


Thank you for the answer and link.
Also a small question on the same topic: I do not have solar panels, but I do not understand how to switch them off in the configuration. For now I have specified a variable for PV production which is always zero, as it is mandatory.

I was about to suggest the same. I think this is the right approach and I would do the same.

I think this is the only way. Not an expert here so don’t take my word for it.

Hi all,
hope anyone can help me. I’m struggling with the syntax of my shell command to run a day-ahead optimization:
Running the shell command gives a 400 Bad request, although the syntax seems correct:

trigger_entsoe_da: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{(states('sensor.electricity_price_offtake_next24h_1')+states('sensor.electricity_price_offtake_next24h_2'))}},\"prod_price_forecast\":{{(states('sensor.electricity_price_offtake_next24h_1')+states('sensor.electricity_price_offtake_next24h_2'))}},\"pv_power_forecast\":{{states('sensor.solcast_24hrs_forecast')}}}' http://localhost:5000/action/dayahead-optim"

I escaped the double quotes.

If I enter the above syntax in HA developer tools template, it resolves correctly:

But still the shell command returns following error:

stdout: "HTTP/1.1 400 BAD REQUEST\r\nContent-Length: 167\r\nContent-Type: text/html; charset=utf-8\r\nDate: Wed, 13 Dec 2023 17:45:45 GMT\r\nServer: waitress\r\n\r\n<!doctype html>\n400 Bad Request\nThe browser (or proxy) sent a request that this server could not understand."

stderr: "% Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 1035 100 167 100 868 39155 198k --:--:-- --:--:-- --:--:-- 252k"
returncode: 0

See if you can run the expanded curl command directly from the command line, unfortunately the escaping needs to change slightly.
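One way to narrow this down: a 400 from the EMHASS endpoint generally means the rendered request body is not valid JSON (for instance, `states()` returns strings, so `+` concatenates text rather than building a numeric list). A minimal sketch of checking a rendered payload before POSTing it; the payload below is a hypothetical rendered example, not the poster's actual output:

```python
import json

# Paste the actual output of the template (from Developer Tools > Template)
# between the triple quotes. json.loads raises a ValueError pointing at the
# offending position if the body is malformed, which is far more informative
# than curl's 400 Bad Request.
rendered = """
{"load_cost_forecast": [0.14, 0.23, 0.13],
 "prod_price_forecast": [0.04, 0.10, 0.01],
 "pv_power_forecast": [1161, 2692, 2765]}
"""

payload = json.loads(rendered)
print(sorted(payload))
```

If this parses cleanly, the remaining suspect is shell escaping, which (as noted above) differs between the YAML shell_command and a terminal.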

Hello Community,

For simplicity let’s assume it’s midnight, Sunday has just started and we are using the naive approach for load forecasting; under this scenario we take the last 24h to estimate the next 24.

But I was thinking that in general I expect some periodicity in our activities based on the day of the week. For example, maybe Sunday is pizza day, and because of the oven we can expect a consistently higher load compared to other days.

By having a look at the documentation it seems possible to pass your own forecast for the load (of course without deferrable ones) using the dictionary key load_power_forecast.

Apart from wondering if anybody is passing some ad-hoc computed load estimation, I am interested to know if anybody has already developed an approach to estimate tomorrow’s load based on N previous Sundays (the next day we should pass it based on the N previous Mondays and so on).
If we want to generalize we would have a 24h rolling window forecast where each datapoint is the average of the corresponding datapoints from Nx24h rolling windows, from N past weeks.

This approach would also mitigate the situation of having a very low load forecast for tomorrow if today you were away.
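I am not aware of an off-the-shelf option for exactly this, but the rolling same-weekday average described above is simple to sketch. A minimal illustration, assuming half-hourly history in a dict keyed by timestamp (the function and variable names are mine, not EMHASS's):

```python
from datetime import datetime, timedelta

def weekday_average_forecast(history, now, n_weeks=3, steps_per_day=48):
    """Forecast the next 24h of load: each step is the average of the
    corresponding step from the same weekday over the previous n_weeks.

    history: dict mapping datetime -> average load (W) at a fixed
    resolution (here 30-minute steps, i.e. 48 per day).
    """
    step = timedelta(minutes=24 * 60 // steps_per_day)
    forecast = []
    for i in range(steps_per_day):
        t = now + i * step
        # Same time of day, same weekday, 1..n_weeks ago
        samples = [history[t - timedelta(weeks=w)]
                   for w in range(1, n_weeks + 1)
                   if t - timedelta(weeks=w) in history]
        forecast.append(sum(samples) / len(samples) if samples else 0.0)
    return forecast
```

The resulting list could then be passed as `load_power_forecast`; missing weeks simply drop out of the average, which also covers the away-day case mentioned above.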

I hope I was clear enough in my explanation.
Is anybody doing something similar or thinking the same?

This is how the ML forecaster works, if you retain weeks of data via your recorder it will then detect and recreate the daily and weekly cycles for future forecast predictions.

Thanks Mark, I understood the ML was doing something more sophisticated (of course it is), finding a pattern, but I didn't realise it was considering the specific day of the week. I came to this thought because the ML method, for me, was not performing much better than the standard one. But following what you say, maybe feeding just 21 days is not enough.
I will study the documentation better and maybe perform some more tests (now we also have the long term statistics from HA that maybe can play a role in this).
Thanks.

I wanted to perform some more tests using the ML method and today I’m also getting an error for the data I’m passing to EMHASS

2023-12-15 10:33:17,238 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2023-12-15 10:33:17,238 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)

So I checked the load sensor and the data available and I was surprised to notice that I have some long term statistics much older than I would expect. This feature was introduced about one week ago but I can see statistics starting from June 2023.

I’m wondering how this is possible given my purge days is set to 21 and the recent launch of this feature…

@RT1080 did you identify what was the problem on your side?

Ok so I tried to run ML model fit again using just 20 days and this time it worked.
Maybe something related to… I don’t know. I’m changing the purge days to 22 to be sure I have 21 days available.
But more interesting, also following my previous message, this makes me think the long term statistics data is not easily accessible by other applications (or maybe it’s not without some changes to the code… I see in the history that the label shows “long term statistics” so maybe this data is accessible in a different way).

The EMHASS ML depends on recorder data being available; it currently cannot access the long term statistics. You could extend the recorder data window for your EMHASS data to get those weekly, monthly, yearly trends.
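For reference, the recorder window is set in `configuration.yaml` via `purge_keep_days`; a minimal sketch, using the 22-day value mentioned earlier in the thread:

```yaml
# configuration.yaml — keep enough short-term recorder history for the ML fit
recorder:
  purge_keep_days: 22  # one day more than the 21 days used to fit the model
```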

Long term statistics have always been there, but not easily accessible. 2023.12 updated the history card so it can display both recorder and long term statistics. The energy dashboard also accesses long term statistics, which is why it can go back so far.

Ah good to know. I didn’t know long-term was previously available and about energy dashboard I thought about that but never dug into it.

Nope, no clue, but 48 hours after the HA upgrade it worked again. I just moved to the .3 version of HA and relieved to say that EMHASS did not break down.

Seems like an issue with connection to the EMHASS webserver?
Ran some command in the HA OS terminal:

Check if your emhass server is online. You run the add on? On the standard port?

Running normally:


Interesting behaviour overnight? Have to disable EMHASS control over the battery and manage manually for the day until it recovers. Battery has gone to 0% as well which is unusual.

Is there a problem with def_total_hours or P_deferrable_nom being 0?
I'm disabling deferrable loads by zeroing these variables.

{
  "prod_price_forecast": {{
    ([states('sensor.cecil_st_feed_in_price')|float(0)] +
    (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list))
    | tojson 
  }},
  "load_cost_forecast": {{
    ([states('sensor.cecil_st_general_price')|float(0)] + 
    state_attr('sensor.cecil_st_general_forecast', 'forecasts') |map(attribute='per_kwh')|list) 
    | tojson 
  }},
  "pv_power_forecast": {{
    ([states('sensor.sonnenbatterie_84324_production_w')|int(0)] +
    state_attr('sensor.solcast_pv_forecast_forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list +
    state_attr('sensor.solcast_pv_forecast_forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list
    )| tojson
  }},
  "prediction_horizon": {{
    min(48, (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
  }},
  "num_def_loads": 2,
  "def_total_hours": [
    {%- if states('sensor.cecil_st_feed_in_price') | float(0) > 0 -%}
      0
    {%- elif is_state('sensor.season', 'winter') -%}
      2
    {%- elif is_state('sensor.season', 'summer') -%}
      4
    {%- else -%}
      3
    {%- endif -%},
    {%- if is_state('device_tracker.ynot_location_tracker', ['home']) -%}
      {%- if is_state('binary_sensor.ynot_charger', ['on']) -%}
        {{ ((90-(states('sensor.ynot_battery')|int(0)))/30*3)|int(0) }}
      {%- else -%} 
        0
      {%- endif -%}
    {%- else -%} 
      0
    {%- endif -%}
    ],
  "P_deferrable_nom": [1150, {{ (states('input_number.ev_amps') | int(0) * 230)|int(0) }}],
  "treat_def_as_semi_cont": [1, 0],
  "set_def_constant": [0, 0],
  "soc_init": {{ (states('sensor.sonnenbatterie_84324_state_charge_user')|int(0))/100 }},
  "soc_final": 0.03,
  "alpha": 0.25,
  "beta": 0.75
}
{
  "prod_price_forecast": [0.04, 0.1, 0.01, -0.03, -0.03, -0.03, -0.04, -0.03, 0.04, 0.32, 0.32, 0.32, 0.33, 0.22, 0.22, 0.27, 0.27, 0.3, 0.34, 0.35, 0.43, 0.18, 0.18, 0.15, 0.15, 0.15, 0.12, 0.09, 0.09, 0.09, 0.11, 0.09, 0.09, 0.09, 0.09, 0.08, 0.08, 0.09, 0.09],
  "load_cost_forecast": [0.14, 0.23, 0.13, 0.08, 0.08, 0.08, 0.07, 0.08, 0.15, 0.41, 0.41, 0.4, 0.42, 0.3, 0.3, 0.35, 0.36, 0.39, 0.43, 0.44, 0.53, 0.29, 0.29, 0.26, 0.25, 0.26, 0.23, 0.2, 0.2, 0.2, 0.21, 0.2, 0.2, 0.2, 0.19, 0.18, 0.18, 0.19, 0.19],
  "pv_power_forecast": [1161, 2692, 2765, 2683, 2461, 2322, 2186, 2236, 2380, 2455, 2504, 2513, 2517, 2393, 2145, 1782, 1345, 933, 529, 171, 41, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 73, 278, 567, 875, 1182, 1465, 1721, 1935, 2139, 2356, 2538, 2653, 2783, 2923, 3065, 3124, 3089, 3042, 2936, 2730, 2553, 2386, 1971, 1413, 944, 517, 172, 36, 0, 0, 0, 0, 0, 0, 0, 0],
  "prediction_horizon": 39,
  "num_def_loads": 2,
  "def_total_hours": [0,0],
  "P_deferrable_nom": [1150, 7360],
  "treat_def_as_semi_cont": [1, 0],
  "set_def_constant": [0, 0],
  "soc_init": 0.08,
  "soc_final": 0.03,
  "alpha": 0.25,
  "beta": 0.75
}

Hi, I don't understand what went wrong there. But it is definitely not a problem to set some of those def_total_hours to zero; I do this every day. However, did you set them all to zero at some point? That may be problematic.