EMHASS: An Energy Management for Home Assistant

IIRC the SOC_batt forecast changes every 5 minutes… in my situation I’ve seen p_batt_forecast go below -4000 W, which triggers my charging automation… but after 5 minutes (or the next MPC optimisation) p_batt_forecast will be different… hence the SOC battery forecast will be different as well…

How is this best managed, when the 5-minute interval keeps changing the values of the p_batt and SOC battery forecasts used to control charging and discharging of the battery? Not to mention when there is an error in the MPC optimisation (I’m yet to find out why… I’ve already converted to REST API calls as per your suggestion), which will mess up the soc_batt_forecast.

I have a Sungrow inverter and Sungrow battery: 9 kW of panels and a 9.6 kWh battery.
I can set the charging rate in watts and can force charge or force discharge the battery.

If you can directly set the charging rate then I would utilise that. When it changes every five minutes it will just set a new charging rate for your battery, so it should work well. I would think this would be more accurate than using soc_batt_forecast, but in reality either method should yield the same outcome. Maybe follow p_batt_forecast for a couple of days and then follow soc_batt_forecast for a couple of days, and see which one works best for you.

My p_batt_forecast does change regularly but that shouldn’t be a problem, it tracks the forecast pretty well.
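
If it helps, here is a minimal sketch of that approach using the Home Assistant REST API from plain Python. The URL, token and the Sungrow number entity are assumptions; substitute whatever your integration exposes for the forced charging power:

import requests

HA_URL = "http://localhost:8123"                  # assumption: local HA instance
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # long-lived access token

# Read the latest published battery power forecast
# (negative = charge, matching the convention discussed above).
resp = requests.get(f"{HA_URL}/api/states/sensor.p_batt_forecast", headers=HEADERS)
p_batt = float(resp.json()["state"])  # W

if p_batt < 0:
    # Hand the forecast straight to the inverter as a forced-charge setpoint.
    # "number.sungrow_forced_charge_power" is a hypothetical entity name.
    requests.post(
        f"{HA_URL}/api/services/number/set_value",
        headers=HEADERS,
        json={"entity_id": "number.sungrow_forced_charge_power", "value": abs(p_batt)},
    )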


Configured it to compare the current SOC vs the SOC forecast, with the discharge rate as per EMHASS… hopefully it runs fine tonight.

This is my loop condition for charging:

- condition: numeric_state
  entity_id: sensor.battery_level
  below: sensor.soc_batt_forecast

And for discharging:

- condition: numeric_state
  entity_id: sensor.battery_level
  above: sensor.soc_batt_forecast

After the loop it will set the inverter back to self-consumption.

Is it possible to get the cost/profit of the timeslot, to ensure that when I discharge it’s profitable?

Thanks for trying to help me but I gave up on the legacy method. I just couldn’t get it to work.
I gave the Docker method a try instead and actually got it up and running. Now I just need to configure all the parameters to get it working properly (Nordpool prices, PV forecast, etc.), but that’s the fun part!

A question on the Docker method. I assume it should be started in detached mode once I get it working properly. But even in interactive mode, what’s the best way to reload EMHASS if I update the config file? Do I need to stop, remove and start the container again?

Thanks for helping out here in the forums. Just bought you a coffee. ☕

Thanks!
Nice that you got the Docker standalone running. If you find any other issues we can treat them on the GitHub repository.
If you just updated the configuration then a simple docker restart is enough:

docker restart container_name

@davidusb it has been 7 days since the issue started occurring and it is still the same error. I tried changing the sensor name that the load forecast gets published to, but no difference. I restart HASS all the time, so I think the restart is a bit of a red herring. Have you got a good sense of what data is missing?

Hi. On my side I need to treat this in the code and perform a quick data cleaning before a prediction or a tune task. But on your side I don’t know what could possibly be going on. Why are you always restarting HASS? That could be a cause. Another very common cause of missing values is loss of communication with your sensors. I have this problem with one sensor that has a low WiFi signal, resulting in communication being lost all the time.
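
To illustrate the kind of cleaning step meant here, a minimal pandas sketch that fills short gaps left by dropped sensor readings before fitting; the file and column names are hypothetical:

import pandas as pd

# Hypothetical load history exported from HA: time-indexed power readings in W.
df = pd.read_csv("load_history.csv", index_col=0, parse_dates=True)

# Interpolate short gaps in time, then fill any leading/trailing holes.
col = "sensor.power_load_no_var_loads"
df[col] = df[col].interpolate(method="time").ffill().bfill()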

The restarts are just for config changes or upgrades. They are controlled restarts through the UI; however, the restart where the issue started was from a power outage, so maybe there’s something in that. How can I help debug here? It looks like all the sensors are working. Is it possible to tell which input data is missing? Is it the sensor for the load without the controllable loads?

How can I test the data coming from HA for the ML fit function? I’m getting the error:

The retrieved JSON is empty, check that correct day or variable names are passed

My days to retrieve is set to 9; when I run the perfect optimization it runs without a hitch.

You can both check the values that are being used with a file that is saved in the “share” folder of your HA instance. The file is called injection_dict.pkl.
You can inspect that file and look for the data.
It’s a pickle file, so you have to use Python to open it.
Here is a code snippet for that:

import pickle

# Load the dictionary that EMHASS saved to the HA "share" folder
with open('injection_dict.pkl', 'rb') as fid:
    injection_dict = pickle.load(fid)

For a prediction using the ML feature, that injection_dict is a dictionary containing a data table; you can access that table with injection_dict['table1'].
From there you can inspect your data.
You could probably use the pandas read_html functionality to convert it to a DataFrame for further analysis: https://pandas.pydata.org/docs/reference/api/pandas.read_html.html
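
Something along these lines should work, assuming the stored table is an HTML string as described:

import io
import pickle
import pandas as pd

with open('injection_dict.pkl', 'rb') as fid:
    injection_dict = pickle.load(fid)

# read_html returns a list of DataFrames; take the first one
df = pd.read_html(io.StringIO(injection_dict['table1']))[0]
print(df.head())  # quick look for gaps or odd values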


I have a little problem.

When I send my Tibber prices to load_cost_forecast I get this error in the log:

2023-06-21 19:41:57,623 - web_server - ERROR - ERROR: The passed data is either not a list or the length is not correct, length should be 48

This is the result from my template:

curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":[0.3518, 0.3794, 0.3682, 0.3369, 0.313, 0.3107, 0.2995, 0.2899, 0.285, 0.2799, 0.2902, 0.3163, 0.3342, 0.3357, 0.3073, 0.2905, 0.2807, 0.2795, 0.2796, 0.2792, 0.2796, 0.2812, 0.3022, 0.3159],\"prod_price_forecast\":[0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]}' http://localhost:5000/action/dayahead-optim

Can anyone help me?

I suspect your Tibber pricing is 24x hourly forecasts, but EMHASS is set up for 30-minute time slots (the default) and is complaining that you haven’t included 48x thirty-minute forecasts.

If that is the case you should set the EMHASS timeslot to 60 minutes, and perhaps set the prediction horizon to the number of forecasts you have (e.g. 24).

Exactly, I will just add that there is no need to set a prediction horizon for the dayahead-optim, just pass the correct length of data.

Thanks. That solved the problem.

But with PV installed, is this timeframe too long for good forecasts?

Another way would be to duplicate each Tibber price to fill the 48 half-hour slots.
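
For example, a quick sketch of that duplication in Python (dummy prices):

# 24 hourly Tibber prices (dummy values, truncated here)
hourly = [0.3518, 0.3794, 0.3682, 0.3369]

# Repeat each hourly price twice to get 48 half-hour slots
half_hourly = [p for p in hourly for _ in range(2)]
print(half_hourly[:4])  # [0.3518, 0.3518, 0.3794, 0.3794]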

Hi Mark,

How are you able to pass the current load into the forecast?

"load_power_forecast": {{
  ([states('sensor.power_load_no_var_loads')|int] +
  (states('input_text.fi_fo_buffer').split(', ')|map('multiply',1000)|map('int')|list)[1:]
  )| tojson
}}

Is there some magic in the input_text.fi_fo_buffer?

I would initially get your system running without load_power_forecast; it will just use the last 24 hours as a proxy, which is quite sufficient.

When you are ready to fine-tune your model down to exact amounts of W each minute, then you could implement the FIFO buffer.
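
As a rough idea of what such a buffer does, here is a sketch in plain Python; the output shape matches the Jinja template above, and all names and values are illustrative:

from collections import deque

# Fixed-length FIFO of recent load readings in kW, mirroring what
# input_text.fi_fo_buffer would hold in HA.
fifo = deque([0.5, 0.6, 0.4], maxlen=48)

def load_power_forecast(current_load_w: int) -> list[int]:
    # Current instantaneous load first, then the buffered history
    # (shifted by one and converted from kW to W), like the template does.
    return [current_load_w] + [int(v * 1000) for v in list(fifo)[1:]]

print(load_power_forecast(800))  # -> [800, 600, 400]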


Thanks Mark, I just remembered today, when I converted to REST API calls, that there is a section in your post about load_power_forecast.


Just getting back to this EMHASS system, working through the docs. They talk about config_emhass.yaml, but I can’t find it in the HassOS share folder after installing the add-on. Do you just take a copy from GitHub?
Thanks in anticipation.

@pasarn I use Nordpool 24-hour power prices and the EMHASS PV scraper, which gives a 24-hour PV forecast, with the dayahead optimization. EMHASS gives a very good prediction of when to switch on the water heater or other deferrables when the power price is low or the solar cells produce a lot of energy. When the price is high it predicts to switch the water heater off.

I have also used Tibber price data, via the home_assistant_tibber_data plugin. The plugin currently has some problems updating the price data every day, so I’m waiting for a fix from the developer.

Here are the shell commands I use:

shell_command:

  trigger_nordpool_forecast: "curl -i -H \"Content-Type: application/json\" -X POST -d '{
    \"load_cost_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24] }},
    \"prod_price_forecast\":{{((state_attr('sensor.nordpool_uten_avgifter', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool_uten_avgifter', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24]}},
    \"def_total_hours\":{{states('sensor.list_operating_hours_of_each_deferrable_load')}}
    }' http://localhost:5000/action/dayahead-optim"

  publish_data: "curl -i -H \"Content-Type:application/json\" -X POST -d '{\"custom_deferrable_forecast_id\": [
    {\"entity_id\": \"sensor.p_deferrable0\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmtvannsbereder\"},
    {\"entity_id\": \"sensor.p_deferrable1\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmekabel stue og kjøkken\"},
    {\"entity_id\": \"sensor.p_deferrable2\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmekabel bad1etg\"},
    {\"entity_id\": \"sensor.p_deferrable3\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmekabel bad2etg\"},
    {\"entity_id\": \"sensor.p_deferrable4\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmekabel gang\"},
    {\"entity_id\": \"sensor.p_deferrable5\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmepumpe\"},
    {\"entity_id\": \"sensor.p_deferrable6\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Easee lader\"}
    ]}' http://localhost:5000/action/publish-data"
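
As an aside, the [now().hour:][:24] slice in the templates above just drops the hours that have already passed and keeps the next 24 hourly values; in plain Python (dummy data) it would be:

from datetime import datetime

# Dummy hourly prices: today's 24 values plus tomorrow's 24 once published
raw_today = [0.30] * 24
raw_tomorrow = [0.28] * 24

# Drop the hours already past, then keep the next 24 hourly prices
prices = (raw_today + raw_tomorrow)[datetime.now().hour:][:24]
print(len(prices))  # 24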

And here is the template sensor sensor.list_operating_hours_of_each_deferrable_load, which you can read more about here: example-to-pass-data-at-runtime

template:
  - sensor:
      - name: "List operating hours of each deferrable load"
        unique_id: e4b566c1-6024-4157-8ef5-97c87bcf382c
        # Compare the temperature as a number (| float), not as a string
        state: >-
          {% set t = states("sensor.outdoor_temperature_mean_over_last_12_hours") | float %}
          {% if t < 10 %}
            {{ [6, 3, 3, 3, 3, 3, 6] | list }}
          {% elif t >= 10 and t < 15 %}
            {{ [6, 2, 2, 2, 2, 2, 6] | list }}
          {% elif t >= 15 and t < 20 %}
            {{ [6, 1, 1, 1, 1, 1, 6] | list }}
          {% elif t >= 20 and t < 25 %}
            {{ [6, 0, 0, 0, 0, 0, 6] | list }}
          {% else %}
            {{ [6, 2, 2, 2, 2, 2, 6] | list }}
          {% endif %}

If you want to update the forecast more often you can use the MPC optimization.
Here are the updated shell commands, with the total number of hours for each deferrable load inside the prediction horizon window, in my case 6 h:

Water heater: 2 hours
Floor heating: 3 hours

"def_total_hours": [2, 3, 3, 3, 3, 3, 3, 3]

shell_command:
  publish_data: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/publish-data"

  trigger_nordpool_forecast: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24] }},\"prod_price_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24]}}}' http://localhost:5000/action/dayahead-optim"

  trigger_nordpool_mpc: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24] }},\"prod_price_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24]}}, \"prediction_horizon\":6, \"def_total_hours\":[2,3,3,3,3,3,3,3]}' http://localhost:5000/action/naive-mpc-optim"

  trigger_entsoe_mpc: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{((state_attr('sensor.entsoe_average_electricity_price_today', 'prices_today') | map(attribute='price') | list + state_attr('sensor.entsoe_average_electricity_price_today', 'prices_tomorrow') | map(attribute='price') | list))[now().hour:][:24] }},\"prod_price_forecast\":{{((state_attr('sensor.entsoe_average_electricity_price_today', 'prices_today') | map(attribute='price') | list  + state_attr('sensor.entsoe_average_electricity_price_today', 'prices_tomorrow') | map(attribute='price') | list))[now().hour:][:24]}}, \"prediction_horizon\":6, \"def_total_hours\":[2,3,3,3,3,3,3,3]}' http://localhost:5000/action/naive-mpc-optim"

That configuration file is needed for the other installation methods (legacy and Docker standalone). If you are using the add-on then you don’t need to worry about it, because the configuration is set directly through the add-on’s options in the web interface.