EMHASS: An Energy Management for Home Assistant

I have a little problem.

When I send my Tibber prices to load_cost_forecast I get this error in the log:

2023-06-21 19:41:57,623 - web_server - ERROR - ERROR: The passed data is either not a list or the length is not correct, length should be 48

This is the result from my template:

curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast":[0.3518, 0.3794, 0.3682, 0.3369, 0.313, 0.3107, 0.2995, 0.2899, 0.285, 0.2799, 0.2902, 0.3163, 0.3342, 0.3357, 0.3073, 0.2905, 0.2807, 0.2795, 0.2796, 0.2792, 0.2796, 0.2812, 0.3022, 0.3159],"prod_price_forecast":[0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]}' http://localhost:5000/action/dayahead-optim

Can anyone help me?

I suspect your Tibber pricing is 24x hourly forecasts, but EMHASS is set up for 30-minute time slots (the default) and is complaining that you haven’t included 48x thirty-minute forecasts.

If that is the case you should set the EMHASS timeslot to 60 minutes and perhaps set the prediction horizon to the number of forecasts you have (e.g. 24).
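To make the length rule concrete, here is a small Python sketch (not EMHASS code, just the arithmetic) of how many forecast values the optimizer expects for a 24 h day-ahead window:

```python
# Sketch of EMHASS's length check: the day-ahead window divided by the
# optimization timestep gives the number of forecast values required.
def expected_forecast_length(timestep_minutes: int, horizon_hours: int = 24) -> int:
    """Number of forecast entries for one optimization window."""
    return (horizon_hours * 60) // timestep_minutes

print(expected_forecast_length(30))  # → 48 (default 30-minute slots)
print(expected_forecast_length(60))  # → 24 (hourly slots, matching 24 Tibber prices)
```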

Exactly. I will just add that there is no need to set a prediction horizon for dayahead-optim; just pass the correct length of data.

Thanks. That solved the problem.

But with a PV system installed, isn’t this timeframe too long for good forecasts?

Another way is to duplicate the Tibber prices.
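If you would rather keep the default 30-minute timestep, duplicating each hourly price into two half-hour slots could look like this (a sketch, not an official EMHASS helper):

```python
# Each hourly price covers two 30-minute slots, so repeat every value twice.
hourly_prices = [0.3518, 0.3794, 0.3682]  # sample Tibber values from above
half_hourly = [p for p in hourly_prices for _ in range(2)]
print(half_hourly)  # → [0.3518, 0.3518, 0.3794, 0.3794, 0.3682, 0.3682]
```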

Hi Mark,

How are you able to pass the current load into the forecast?
"load_power_forecast": {{
  ([states('sensor.power_load_no_var_loads')|int] +
  (states('input_text.fi_fo_buffer').split(', ')|map('multiply',1000)|map('int')|list)[1:]
  )| tojson
}}

Is there some magic in "input_text.fi_fo_buffer"?

I would initially get your system running without load_power_forecast; it will just use the last 24 hours as a proxy, which is quite sufficient.

When you are ready to fine-tune your model down to exact amounts of W each minute, then you could implement the FIFO buffer:
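For reference, the idea behind such a buffer can be sketched in Python (names and sizes here are hypothetical; in Home Assistant this is typically done with an input_text helper and a template, as in the snippet quoted above):

```python
from collections import deque

# Hypothetical FIFO buffer: keep the most recent N load readings and build a
# forecast by prepending the live reading and dropping the oldest entry.
N = 24  # hypothetical size: one day of hourly readings
buffer = deque([1.2, 0.9, 1.1], maxlen=N)  # stored in kW, like fi_fo_buffer

def load_power_forecast(current_load_w: int, buf: deque) -> list[int]:
    """Live reading first, then the buffered history (converted kW -> W)."""
    history_w = [int(v * 1000) for v in buf]
    return [current_load_w] + history_w[1:]

print(load_power_forecast(1500, buffer))  # → [1500, 900, 1100]
```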

Thanks Mark. I just remembered today, when I converted to REST API calls, that there is a section in your post about load_power_forecast.

Just getting back to this EMHASS system and working through the docs. They talk about config_emhass.yaml, but I can’t find it in the hassos fips share after installing the add-on. Do you just take a copy from GitHub?
Thanks in anticipation.

@pasarn I use Nordpool 24-hour power prices and the EMHASS PV scraper, which gives a 24-hour PV forecast, with the dayahead optimization. EMHASS gives very good predictions of when to switch on the water heater or other deferrables when the power price is low or the solar cells give high energy. When the price is high it predicts switching the water heater off.

I have also used Tibber price data, via the home_assistant_tibber_data plugin. The plugin currently has problems updating the price data every day, so I’m waiting for a fix from the developer.

Here are the shell commands I use:

shell_command:

  trigger_nordpool_forecast: "curl -i -H \"Content-Type: application/json\" -X POST -d '{
    \"load_cost_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24] }},
    \"prod_price_forecast\":{{((state_attr('sensor.nordpool_uten_avgifter', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool_uten_avgifter', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24]}},
    \"def_total_hours\":{{states('sensor.list_operating_hours_of_each_deferrable_load')}}
    }' http://localhost:5000/action/dayahead-optim"

  publish_data: "curl -i -H \"Content-Type:application/json\" -X POST -d '{\"custom_deferrable_forecast_id\": [
    {\"entity_id\": \"sensor.p_deferrable0\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmtvannsbereder\"},
    {\"entity_id\": \"sensor.p_deferrable1\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmekabel stue og kjøkken\"},
    {\"entity_id\": \"sensor.p_deferrable2\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmekabel bad1etg\"},
    {\"entity_id\": \"sensor.p_deferrable3\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmekabel bad2etg\"},
    {\"entity_id\": \"sensor.p_deferrable4\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmekabel gang\"},
    {\"entity_id\": \"sensor.p_deferrable5\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Varmepumpe\"},
    {\"entity_id\": \"sensor.p_deferrable6\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Easee lader\"}
    ]}' http://localhost:5000/action/publish-data"

And here is the template sensor: sensor.list_operating_hours_of_each_deferrable_load which you can read more about here: example-to-pass-data-at-runtime

template:
  - sensor:
      - name: "List operating hours of each deferrable load" 
        unique_id: e4b566c1-6024-4157-8ef5-97c87bcf382c
        state: >-
          {% set t = states("sensor.outdoor_temperature_mean_over_last_12_hours") | float(0) %}
          {% if t < 10 %}
            {{ [6, 3, 3, 3, 3, 3, 6] | list }}
          {% elif t < 15 %}
            {{ [6, 2, 2, 2, 2, 2, 6] | list }}
          {% elif t < 20 %}
            {{ [6, 1, 1, 1, 1, 1, 6] | list }}
          {% elif t < 25 %}
            {{ [6, 0, 0, 0, 0, 0, 6] | list }}
          {% else %}
            {{ [6, 2, 2, 2, 2, 2, 6] | list }}
          {% endif %}
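The same threshold logic can be sanity-checked in Python (a sketch mirroring the template above, not EMHASS code):

```python
# Map the 12-hour mean outdoor temperature to operating hours per deferrable load,
# following the same thresholds as the template sensor above.
def operating_hours(mean_temp_c: float) -> list[int]:
    if mean_temp_c < 10:
        return [6, 3, 3, 3, 3, 3, 6]   # cold: more heating hours
    elif mean_temp_c < 15:
        return [6, 2, 2, 2, 2, 2, 6]
    elif mean_temp_c < 20:
        return [6, 1, 1, 1, 1, 1, 6]
    elif mean_temp_c < 25:
        return [6, 0, 0, 0, 0, 0, 6]   # warm: mostly the fixed 6-hour loads
    return [6, 2, 2, 2, 2, 2, 6]       # fallback, matching the template's else branch

print(operating_hours(5))   # → [6, 3, 3, 3, 3, 3, 6]
```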

If you want to update the forecast more often you can use the MPC optimization.
Here are the updated shell commands, with the total number of operating hours inside the prediction horizon window (6 h in my case):
Water heater: 2 hours
Floor heating: 3 hours
"def_total_hours":[2,3,3,3,3,3,3,3]

shell_command:
  publish_data: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/publish-data"

  trigger_nordpool_forecast: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24] }},\"prod_price_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24]}}}' http://localhost:5000/action/dayahead-optim"

  trigger_nordpool_mpc: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24] }},\"prod_price_forecast\":{{((state_attr('sensor.nordpool', 'raw_today') | map(attribute='value') | list  + state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list))[now().hour:][:24]}}, \"prediction_horizon\":6, \"def_total_hours\":[2,3,3,3,3,3,3,3]}' http://localhost:5000/action/naive-mpc-optim"

  trigger_entsoe_mpc: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{((state_attr('sensor.entsoe_average_electricity_price_today', 'prices_today') | map(attribute='price') | list + state_attr('sensor.entsoe_average_electricity_price_today', 'prices_tomorrow') | map(attribute='price') | list))[now().hour:][:24] }},\"prod_price_forecast\":{{((state_attr('sensor.entsoe_average_electricity_price_today', 'prices_today') | map(attribute='price') | list  + state_attr('sensor.entsoe_average_electricity_price_today', 'prices_tomorrow') | map(attribute='price') | list))[now().hour:][:24]}}, \"prediction_horizon\":6, \"def_total_hours\":[2,3,3,3,3,3,3,3]}' http://localhost:5000/action/naive-mpc-optim"
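The `[now().hour:][:24]` slicing in these templates drops the hours that have already passed today and then caps the series at 24 values; in Python terms:

```python
from datetime import datetime

# raw_today + raw_tomorrow gives up to 48 hourly prices; slice off the hours
# already behind us, then keep at most 24 values for the optimizer.
def upcoming_prices(today: list[float], tomorrow: list[float], hour: int) -> list[float]:
    return (today + tomorrow)[hour:][:24]

today = [0.30] * 24
tomorrow = [0.25] * 24
prices = upcoming_prices(today, tomorrow, datetime.now().hour)
# Before tomorrow's prices are published, the list is shorter in the evening,
# which is why people sometimes duplicate or pad the data.
```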

That configuration file is needed for other installation methods (legacy and docker standalone). If you are using the add-on then you don’t need to care about that configuration file because the configuration is set directly on the web interface of the add-on options.

Ok, thanks for that. It’s not clear in the manual. It’s a difficult read in spots, but great work. Very appreciative of the developer.

Hey @davidusb, when I load the pkl file in table1 I see three lots of data, each 48 entries long. Nothing is missing as far as I can tell. I have tried to enable DEBUG-level logging via the add-on config UI but that does not seem to take effect. I am a bit stuck now.

Must this all be integers? Can’t it be a float?

I don’t know if I have a time zone issue or if I’m just not using dayahead-optim right. I can’t get the PV forecast sent to the function to align with actual times.

Example:
At 08:35 I call dayahead-optim with the following PV forecast:
pv_power_forecast: [1.726, 3.264, 4.613, 5.789, 6.391, 6.79, 6.71, 6.233, 5.775, 5.273, 4.226, 2.71, 1.568, 0.356, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.188, 0.454, 0.683, 0.627]

My intention here is that the forecast is 1.726 kWh between 08-09, 3.264 kWh between 9-10 and so forth. But, when I look at the optimization results table, the forecast is one hour off. For some reason, the optimization believes that the first forecast in the list is for the time 07-08.
[screenshot]

What am I missing?
Server time and timezone where both HA and EMHASS is running is correct.

~$ date -I'seconds'
2023-06-27T08:48:25+02:00

EDIT: I realize I have the same problem with price forecast passed to dayahead-optim. The question then probably is: What is the starting time of dayahead-optim? Do I need to pass a few hours of historical data to it?

Have a look at the timestamp rounding, which sets different behaviour.
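For what it’s worth, my reading of those options (an assumption based on the setting names, not the EMHASS source) is that they decide which timestep boundary the current time snaps to:

```python
from datetime import datetime, timedelta

# Hypothetical illustration of 'first' / 'last' / 'nearest' timestamp rounding
# for a 60-minute optimization timestep.
def round_timestamp(ts: datetime, step_min: int = 60, method: str = "nearest") -> datetime:
    step = timedelta(minutes=step_min)
    floor = ts - timedelta(minutes=ts.minute % step_min,
                           seconds=ts.second, microseconds=ts.microsecond)
    if method == "first":           # snap down to the slot that has started
        return floor
    if method == "last":            # snap up to the next slot boundary
        return floor + step if ts != floor else ts
    # 'nearest': whichever boundary is closer
    return floor + step if ts - floor >= step / 2 else floor

t = datetime(2023, 6, 28, 18, 33)
print(round_timestamp(t, method="first"))    # → 2023-06-28 18:00:00
print(round_timestamp(t, method="last"))     # → 2023-06-28 19:00:00
print(round_timestamp(t, method="nearest"))  # → 2023-06-28 19:00:00
```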

Thanks. It’s not intuitive what ‘first’ or ‘last’ means in terms of timestamp rounding. Maybe something got lost in translation. But I did some tests based on your pointer.

The forecast data I send is per hour. The time now is 18:33, so the first value in the forecast parameters sent to dayahead-optim is for the period 18:00-19:00.
Solcast sensor attributes look like this:

- period_start: '2023-06-28T18:00:00+02:00'
  pv_estimate: 3.4184
- period_start: '2023-06-28T19:00:00+02:00'
  pv_estimate: 1.3334
...

The PV power data sent to dayahead-optim looks like this: pv_power_forecast: [3418, 1333, 416, 113...
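As a side note, the jump from pv_estimate: 3.4184 to 3418 is just the kW-to-W conversion that the template sensor performs:

```python
# Solcast reports pv_estimate in kW; EMHASS expects W, hence the x1000 and int().
estimates_kw = [3.4184, 1.3334]
forecast_w = [int(kw * 1000) for kw in estimates_kw]
print(forecast_w)  # → [3418, 1333]
```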

With method_ts_round: 'first', dayahead-optim gives me this result:
[screenshot]

With method_ts_round: 'last', dayahead-optim gives me this result:
[screenshot]

With method_ts_round: 'nearest', dayahead-optim gives me this result, which is probably consistent since the time is now nearer to 19 than 18.
[screenshot]

The results are consistent with the setting, I believe, but why does it start two hours earlier? The time now is between 18:30 and 19:00, so ‘last’/‘nearest’ should give an optimization from 19, not from 17, and rounding to ‘first’ should give an optimization starting from 18, not from 16.

Sure, I can adapt my forecast data, but I believe the behavior is wrong. Maybe I should open an issue for it.

@davidusb can you please have a look at issue #90?
https://github.com/davidusb-geek/emhass/issues/90

You made a comment and closed it. I have replied, but I can’t reopen the issue so I’m not sure you got my reply. Please just have a look at my reply and decide whether it’s reason enough to reopen the issue or if you want to leave it closed. I will accept either, but please just review my comment.

Yes I’ve just reopened the issue. It will be treated. I’ve just had two very busy weeks but I will soon get some time to work on this and other issues.

The two hours difference is odd.

When I inject values they appear in the current timeslot.

Can you connect to your HomeAssistant container and check the timezone setting there?

Trying to catch up with all the info in this thread. Lots of good stuff to boil my head with. I’ve tried to implement everything up until actually executing automations to defer loads or charge/discharge etc. Want to make sure it’s working correctly before I go that far.

I’ve a 5 kWp system, half facing NNW and the other half facing ENE and all on one Fronius Primo 6.0-1 inverter.

So, I’ve set this up in solcast taking their advice and configuring an azimuth of 10° (halfway between the two directions my panels face) and tilt of 21°.

I’m also with Amber Electric and on the NSW AusGrid bonus trial.

I’ve installed the EMHASS add-on as a Home Assistant OS user. HA is running on a VM under Proxmox on an Intel NUC along with other things like Tuya convert and a docker instance running TeslaMate.

Configuration for EMHASS:

hass_url: empty
long_lived_token: empty
costfun: profit
logging_level: INFO
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: first
set_total_pv_sell: false
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_nocharge_from_grid: false
set_nodischarge_to_grid: false
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
load_forecast_method: naive
sensor_power_photovoltaics: sensor.sonnenbatterie_84324_production_w
sensor_power_load_no_var_loads: sensor.house_power_consumption_less_deferrables
number_of_deferrable_loads: 2
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 1500
  - nominal_power_of_deferrable_loads: 750
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 2
  - operating_hours_of_each_deferrable_load: 2
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "02:54"
  - peak_hours_periods_start_hours: "17:24"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "15:24"
  - peak_hours_periods_end_hours: "20:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 14490
list_pv_module_model:
  - pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
list_pv_inverter_model:
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
list_surface_tilt:
  - surface_tilt: 21
list_surface_azimuth:
  - surface_azimuth: 350
list_modules_per_string:
  - modules_per_string: 17
list_strings_per_inverter:
  - strings_per_inverter: 1
set_use_battery: true
battery_discharge_power_max: 3300
battery_charge_power_max: 3300
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 10000
battery_minimum_state_of_charge: 0.1
battery_maximum_state_of_charge: 0.9
battery_target_state_of_charge: 0.1

Deferrable loads:

  1. I did have an Arlec GridConnect power point for the pool pump with Tasmota firmware, but as that doesn’t monitor power usage I’ve replaced it with a Zigbee power point that does. The power measurement is being subtracted from the total consumption; see below.

  2. I’ve installed a Tuya power-monitoring plug behind the dishwasher and added the Local Tuya HACS integration for it (I hate Tuya but bit my tongue). This is also being subtracted from total consumption; see below.

  3. I’ll also install a Zigbee power point for the dryer and washing machine (when I get around to it).

  4. I’ve also installed the Tesla HACS integration for my Model Y Performance. I only have a dumb Tesla charger that came with my old Model 3, so I have to depend on load data and control via the car itself. Not using it yet.

Only the first two deferrable loads are being subtracted from total home consumption at this point:

  - platform: template
    sensors:
      dw_power:
        unique_id: dw_power
        friendly_name: "Dish Washer Power"
        value_template: "{{ states.switch.dw_switch.attributes.current_consumption | float(0) }}"
        unit_of_measurement: W

  - platform: template
    sensors:
      house_power_consumption_less_deferrables:
        unit_of_measurement: W
        unique_id: house_power_consumption_less_deferrables
        value_template: >-
          {% set consumption = states.sensor.sonnenbatterie_84324_meter_consumption_4_2_w_total.state|float(0) %}
          {% set deferrable1 = states.sensor.garage_power_point_power.state|float(0) %}
          {% set deferrable2 = states.sensor.dw_power.state|float(0) %}
          {{ (consumption - (deferrable1 + deferrable2))|float(0) }}
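In plain terms that template computes total consumption minus the deferrable loads; a quick sketch of the same arithmetic:

```python
# sensor.house_power_consumption_less_deferrables = total house load minus the
# deferrable loads EMHASS is allowed to schedule (pool pump and dishwasher here).
def load_no_var_loads(total_consumption_w: float, deferrables_w: list[float]) -> float:
    return total_consumption_w - sum(deferrables_w)

print(load_no_var_loads(2500.0, [800.0, 200.0]))  # → 1500.0
```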

So I’m getting this consumable sensor from the sonnen battery (sensor.sonnenbatterie_84324_meter_consumption_4_2_w_total).
Am I right in assuming that it should go negative if I’m selling energy into the grid?

The battery I have is a sonnen eco 9.43 10 kWh (9 kWh usable, although I think it’s getting closer to 8 now that it’s nearly 5 years old, you can upgrade them to 15 I think but not sure it’s worth it).

I have control of battery mode (Auto Self-Consumption, manual, and ToU) and of charging and discharging via the API, thanks to @julianlu.

So to integrate the solcast forecast data:

sensor:
  # Solar forecast for EMHASS
  - platform: rest
    name: "Solcast Forecast Data"
    json_attributes:
      - forecasts
    resource: https://api.solcast.com.au/rooftop_sites/SOLCAST_RESOURCE_ID/forecasts?format=json&api_key=SOLCAST_API_KEY&hours=24
    method: GET
    value_template: "{{ (value_json.forecasts[0].pv_estimate)|round(2) }}"
    unit_of_measurement: "kW"
    device_class: power
    scan_interval: 00:30
    force_update: true

  - platform: template
    sensors:
      solcast_24hrs_forecast:
        value_template: >-
          {%- set power = state_attr('sensor.solcast_forecast_data', 'forecasts') | map(attribute='pv_estimate') | list %}
          {%- set values_all = namespace(all=[]) %}
          {% for i in range(power | length) %}
          {%- set v = (power[i] | float |multiply(1000) ) | int(0) %}
          {%- set values_all.all = values_all.all + [ v ] %}
          {%- endfor %} {{ (values_all.all)[:48] }}

I see some discussion about the suitability of the ‘Solcast Forecast Data’ sensor being in kW not W, but it appears we only use solcast_24hrs_forecast, which corrects this?

Next are the shell commands:

shell_command:
  dayahead_optim: 'curl -i -H "Content-Type: application/json" -X POST -d ''{}'' http://localhost:5000/action/dayahead-optim'
  publish_data: 'curl -i -H "Content-Type: application/json" -X POST -d ''{}'' http://localhost:5000/action/publish-data'
  post_amber_forecast:
    'curl -i -H ''Content-Type: application/json'' -X POST -d ''{"prod_price_forecast":{{(
    state_attr(''sensor.amber_feed_in_forecast'', ''forecasts'')|map(attribute=''per_kwh'')|list)
    }},"load_cost_forecast":{{(
    state_attr(''sensor.amber_general_forecast'', ''forecasts'') |map(attribute=''per_kwh'')|list)
    }},"prediction_horizon":33}'' http://localhost:5000/action/dayahead-optim'
  post_emhass_forecast:
    'curl -i -H ''Content-Type: application/json'' -X POST -d ''{"prod_price_forecast":{{(
    state_attr(''sensor.amber_feed_in_forecast'', ''forecasts'')|map(attribute=''per_kwh'')|list)
    }},"pv_power_forecast":{{states(''sensor.solcast_24hrs_forecast'')}},"load_cost_forecast":{{(
    state_attr(''sensor.amber_general_forecast'', ''forecasts'') |map(attribute=''per_kwh'')|list)
    }}}'' http://localhost:5000/action/dayahead-optim'
  post_mpc_optim_solcast:
    'curl -i -H "Content-Type: application/json" -X POST -d ''{"load_cost_forecast":{{(
    ([states(''sensor.amber_general_price'')|float(0)] +
    state_attr(''sensor.amber_general_forecast'', ''forecasts'') |map(attribute=''per_kwh'')|list)[:48])
    }}, "prod_price_forecast":{{(
    ([states(''sensor.amber_feed_in_price'')|float(0)] +
    state_attr(''sensor.amber_feed_in_forecast'', ''forecasts'')|map(attribute=''per_kwh'')|list)[:48])
    }}, "pv_power_forecast":{{states(''sensor.solcast_24hrs_forecast'')
    }}, "prediction_horizon":48,"soc_init":{{(states(''sensor.sonnenbatterie_84324_state_charge_user'')|float(0))/100
    }},"soc_final":0.05,"def_total_hours":[2,0,0,0]}'' http://localhost:5000/action/naive-mpc-optim'
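The load_cost_forecast and prod_price_forecast parts of that MPC call prepend the live Amber price to the forecast list and cap it at 48 values; sketched in Python (the prices here are made up for illustration):

```python
# Build a 48-slot price series for the MPC call: current price first, then the
# forecast list, truncated to the optimizer's window.
def price_series(current: float, forecast: list[float], slots: int = 48) -> list[float]:
    return ([current] + forecast)[:slots]

series = price_series(0.25, [0.30] * 60)
print(len(series))  # → 48
print(series[0])    # → 0.25
```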

This is where I get a bit confused by all the different methods.
I understand the basic dayahead_optim is a good place to start, and that’s what I’m running now. But can I run the other post commands, since I have the forecast data to use?

Do I run them as well as dayahead_optim, or instead of it?
Do I run all three post commands?

I have the following Node-RED flows to execute these various curl commands (although I’ve left them as shell commands for the time being and call them from Node-RED).


So at the moment I’m only running

  1. dayahead_optim at 05:30 and
  2. publish_data every 5 mins

The battery controls are simply a set of POST calls linked back to 4 buttons.

I don’t actually use the 5th TOU mode, but I’ve put it in anyway.

These lead to the flows below:


Apart from the buttons that I can control manually, there are three flows that also control the battery state.

  1. The first is an afternoon charge that activates if the Amber price is “very low” and the battery is less than 80% charged. It’s a rainy-day charge to get me over the evening hump at a better price.
  2. The second is an early-morning charge, starting at 03:00 (the flow at the top), which charges the battery if it’s less than 20% charged and then lets it sit idle for 2 hours until people start to wake up and consume energy (first coffee and heaters on in winter).
  3. The last one, down the bottom, is to catch any spikes in the FiT that seem to happen from time to time, where the tariff can hit $12 per kWh. The only problem is I’m often burning everything out of the 3.3 kW output of the battery in winter, so I sometimes can’t take advantage unless I run around turning aircons and heaters off. There are 5 adults in the house at the moment, so it can be difficult.

So, these are my rules-based configurations that I will replace with flows that use the output of EMHASS when it’s working correctly.
What I’m getting so far is:


Seems to have died in the last day. Probably me mucking around too much.

Any advice greatly appreciated.