EMHASS: An Energy Management for Home Assistant

I’m having difficulties getting the ML forecast model fit to run. The perfect optimisation and the day-ahead optimisation are working well. This is the error log:

2024-01-20 09:31:46,605 - web_server - INFO - EMHASS server online, serving index.html…
2024-01-20 09:31:49,485 - web_server - INFO - Setting up needed data
2024-01-20 09:31:49,493 - web_server - INFO - Retrieve hass get data method initiated…
2024-01-20 09:31:49,519 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2024-01-20 09:31:49,520 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2024-01-20 09:31:49,521 - web_server - ERROR - Exception on /action/forecast-model-fit [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/web_server.py", line 181, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/command_line.py", line 146, in set_input_data_dict
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.11/dist-packages/emhass/retrieve_hass.py", line 150, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
                                              ^^^^^^
UnboundLocalError: cannot access local variable 'df_day' where it is not associated with a value

Any ideas? As mentioned, the linear optimisation works, but the ML optimisation doesn’t.
thanks for any help.
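
For what it’s worth, the traceback is reproducible with a minimal sketch (this is illustrative logic, not the actual EMHASS source): when the history request returns empty JSON, the variable holding that day’s data is never assigned, so the code after the error branch raises UnboundLocalError. The real fix is on the data side: the sensor names passed to the fit method must exist, and days_to_retrieve must not exceed your recorder’s retained history (purge_keep_days).

```python
# Minimal sketch (not EMHASS source) of why an empty history response
# ends in UnboundLocalError: the error branch skips the assignment of
# df_day, but the code after the branch still references it.
def get_data(days_list, fetch):
    df_final = []
    for day in days_list:
        response = fetch(day)
        if len(response) == 0:
            print("ERROR - The retrieved JSON is empty")
        else:
            df_day = response
        # on the error path df_day was never bound -> UnboundLocalError
        df_final = df_final + df_day
    return df_final
```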

What does your curl command look like?

Have you tried running the same curl command from a terminal session?

Are you using Jinja templates in the curl command, and do they resolve the way you expect in the Developer Tools template editor?

I got a rather weird error from EMHASS the other day. Right now I can’t find the logs, so if anyone can point me to where there is a log file in the docker container, I’d be very happy.
What I experienced was that after more than 30 days of zero PV power (snow on the panels), EMHASS started failing with errors related to the PV power sensor in HA. Luckily, a dramatic change in weather has caused all the snow to slide off the panels to give me just a few watts of PV power, and then EMHASS just started working again.

I don’t want to write a ticket this vague, so any pointers to finding logs after the fact would be appreciated. Mind you, I’m running a rather old version (0.4.14), so perhaps this has already been fixed. I need to update my image real soon…

I’m trying to understand the new weight_battery_discharge and weight_battery_charge configuration parameters. What do these values do to the optimization?
What does a value of 1.0 mean? What would a value of 0 or 1.5 mean?

Ah, found the reply from @markpurcell above. It’s in currency per kWh. EMHASS is currency agnostic, you can use dollars, Euros or any other local currency as you like. I have all my prices in Swedish SEK.
I assume this just adds the pre-defined amount to the sales price or purchase cost, respectively, when discharging or charging the battery to or from grid? So basically, a positive value in these parameters would cause the optimization to prefer charging from PV and discharging to household loads, instead of grid.
Do I understand this correctly?
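
If that reading is right (it is the same assumption as above, not confirmed EMHASS internals), the weights would act as extra per-kWh cost terms on battery energy, something like:

```python
# Sketch of the assumed effect of the battery weights (an interpretation,
# not EMHASS source): per-kWh penalties added to the cost function for
# energy charged into / discharged from the battery.
def battery_penalty(charge_kwh, discharge_kwh,
                    weight_battery_charge=1.0,
                    weight_battery_discharge=1.0):
    # with positive weights, cycling the battery costs extra, so the
    # optimizer prefers PV self-consumption over grid charging;
    # a weight of 0 would make battery use "free" in the cost function
    return (weight_battery_charge * charge_kwh
            + weight_battery_discharge * discharge_kwh)
```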

Hi!

A question: if I have one sensor for the power consumption from the grid, sensor.grid_power_consumption, one sensor for PV production, sensor.pv_power_production, and one sensor for my deferrable load, sensor.def_load_power, would the supplied power without the deferrable load be sensor.grid_power_consumption - sensor.pv_power_production - sensor.def_load_power?

Thanks

If your sensor sensor.grid_power_consumption includes the PV production then it should be:
sensor.grid_power_consumption + sensor.pv_power_production - sensor.def_load_power.
Note the signs.
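
If you want that combination as a single input sensor, a template sensor along these lines should work (the three source sensor names come from the question; the new entity name and the sign convention are assumptions to verify against your own meters):

```yaml
# configuration.yaml - hypothetical template sensor for the household load
# without PV and without the deferrable load; adjust signs to match your
# sensors' conventions
template:
  - sensor:
      - name: "power_load_no_var_loads"
        unit_of_measurement: "W"
        device_class: power
        state: >
          {{ states('sensor.grid_power_consumption') | float(0)
             + states('sensor.pv_power_production') | float(0)
             - states('sensor.def_load_power') | float(0) }}
```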

Hello. I have made an add-on that can control Deye inverters over WiFi, if anyone is interested.

Is there any example of how to use Solcast if you already have that integration in HA?

If you are using oziee’s HACS integration ha-solcast-solar, something like this:

"pv_power_forecast": {{
    ([states('sensor.sonnenbatterie_84324_production_w')|int(0)] +
    state_attr('sensor.solcast_pv_forecast_forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list +
    state_attr('sensor.solcast_pv_forecast_forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list
    )| tojson
  }},

sensor.sonnenbatterie_84324_production_w is the actual solar production right now, as reported by my battery.

Which produces something like this:

"pv_power_forecast": [400, 1042, 1484, 1912, 2269, 2586, 2842, 2944, 2981, 3037, 3073, 3066, 3006, 2820, 2485, 2203, 2057, 1795, 1433, 1146, 941, 647, 334, 91, 22, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 57, 220, 523, 795, 995, 1180, 1429, 1624, 1780, 1883, 1886, 1841, 1841, 1832, 1811, 1815, 1812, 1736, 1670, 1553, 1370, 1190, 1027, 753, 435, 121, 32, 0, 0, 0, 0, 0, 0, 0, 0],

Thanks, but now the next challenge is how I run this (and how often); I have the others as shell scripts:

##Nordpool
trigger_nordpool_forecast: 'curl -i -H "Content-Type: application/json" -X POST -d ''{"load_cost_forecast":{{((state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_today'') | map(attribute=''value'') | list  + state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'') | map(attribute=''value'') | list))[now().hour:][:24] }},"prod_price_forecast":{{((state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_today'') | map(attribute=''value'') | list  + state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'') | map(attribute=''value'') | list))[now().hour:][:24]}}}'' http://localhost:5000/action/dayahead-optim'
#Dayahead Optimization
dayahead_optim: 'curl -i -H "Content-Type:application/json" -X POST -d ''{}'' http://localhost:5000/action/dayahead-optim'
publish_data: 'curl -i -H "Content-Type:application/json" -X POST -d ''{}'' http://localhost:5000/action/publish-data'
##Solcast

I think most people are using either the Home Assistant Shell Command or, more appropriately, the RESTful Command.
The day-ahead method just needs to be called once a day (or re-run until it produces a result that you like), and then you publish data periodically, say every 10 minutes.
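
In practice that schedule can be driven by two automations, sketched below (the service names assume the shell_command entries shown earlier in the thread; the trigger times are arbitrary examples):

```yaml
# Hypothetical automations: day-ahead optimization once per day,
# publish-data every 10 minutes
automation:
  - alias: "EMHASS day-ahead optimization"
    trigger:
      - platform: time
        at: "05:30:00"
    action:
      - service: shell_command.dayahead_optim
  - alias: "EMHASS publish data"
    trigger:
      - platform: time_pattern
        minutes: "/10"
    action:
      - service: shell_command.publish_data
```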

Below is my naive-mpc-optim rest_command from my configuration.yaml file, but I don’t use it as I run everything in Node-RED. In Node-RED I call this rest command every minute and then publish data directly afterwards.

rest_command:
  naive_mpc_optim:
    url: http://localhost:5000/action/naive-mpc-optim
    method: POST
    content_type: 'application/json'
    payload: >-
      {
        "prod_price_forecast": {{
          ([states('sensor.cecil_st_feed_in_price')|float(0)] +
          (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list))
          | tojson 
        }},
        "load_cost_forecast": {{
          ([states('sensor.cecil_st_general_price')|float(0)] + 
          state_attr('sensor.cecil_st_general_forecast', 'forecasts') |map(attribute='per_kwh')|list) 
          | tojson 
        }},
        "pv_power_forecast": {{
          ([states('sensor.sonnenbatterie_84324_production_w')|int(0)] +
          state_attr('sensor.solcast_pv_forecast_forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list +
          state_attr('sensor.solcast_pv_forecast_forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list
          )| tojson
        }},
        "prediction_horizon": {{
          min(48, (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
        }},
        "num_def_loads": 2,
        "def_total_hours": [2,2],
        "P_deferrable_nom": [1300, 7360],
        "treat_def_as_semi_cont": [1, 0],
        "set_def_constant": [0, 0],
        "soc_init": {{(states('sensor.sonnenbatterie_84324_state_charge_user')|int(0))/100 }},
        "soc_final": 0.1
      }
  publish_data:
    url: http://localhost:5000/action/publish-data
    method: POST
    content_type: 'application/json'
    payload: '{}'

Currently using Kellerza to good effect. What were you proposing?

Found an answer to my own question, so I thought I’d post it.
Since EMHASS runs in an interactive container, I can view real-time EMHASS output on stdout by attaching my tty to it using:
sudo docker start -a emhass

But in order to view previously output logs, I can use docker’s own command:
sudo docker logs emhass
which will list all log lines since the container started. The output can be paged through less or filtered with grep, sed or similar.


Hello, and thanks for your reply. Could you please explain which sensors are which in your example? I’m having a bit of a hard time understanding which of mine to put in.


A better example for you is this Jinja template:

{
  "load_cost_forecast": {{
    ([states('sensor.cecil_st_general_price')|float(0)] + 
    state_attr('sensor.cecil_st_general_forecast', 'forecasts') |map(attribute='per_kwh')|list) 
    | tojson 
  }},
  "prod_price_forecast": {{
    ([states('sensor.cecil_st_feed_in_price')|float(0)] +
    (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list))
    | tojson 
  }},
  "pv_power_forecast": {{
    ([states('sensor.sonnenbatterie_84324_production_w')|int(0)] +
    state_attr('sensor.solcast_pv_forecast_forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list +
    state_attr('sensor.solcast_pv_forecast_forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list
    )| tojson
  }},
  "num_def_loads": 2,
  "def_total_hours": [3, 1],
  "P_deferrable_nom":  [1300, 3450],
  "treat_def_as_semi_cont": [1, 0]  
}

I use this in my Node-RED flow in a “render template” node, which returns a JSON result like the following (this one is for the day-ahead method):

{
  "load_cost_forecast": [
    0.16,
    0.15,
    0.13,
    0.13,
    0.13,
    0.39,
    0.39,
    0.37,
    0.39,
    0.34,
    0.34,
    0.33,
    0.38,
    0.34,
    0.44,
    0.54,
    0.54,
    0.19,
    0.19,
    0.19,
    0.16,
    0.19,
    0.19,
    0.2,
    0.16,
    0.16,
    0.16,
    0.16,
    0.16,
    0.16,
    0.15,
    0.15,
    0.15,
    0.15,
    0.16
  ],
  "prod_price_forecast": [
    0.04,
    0.03,
    0.02,
    0.02,
    0.02,
    0.3,
    0.3,
    0.28,
    0.3,
    0.25,
    0.25,
    0.25,
    0.29,
    0.25,
    0.35,
    0.44,
    0.44,
    0.09,
    0.09,
    0.09,
    0.07,
    0.09,
    0.09,
    0.09,
    0.07,
    0.07,
    0.07,
    0.07,
    0.07,
    0.06,
    0.05,
    0.05,
    0.05,
    0.05,
    0.06
  ],
  "pv_power_forecast": [
    314,
    1366,
    1339,
    1339,
    1380,
    1416,
    1411,
    1339,
    1278,
    1192,
    1046,
    925,
    827,
    610,
    329,
    91,
    22,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    10,
    73,
    204,
    646,
    1145,
    1686,
    2184,
    2636,
    3024,
    3323,
    3347,
    3275,
    3305,
    3334,
    3309,
    3212,
    3077,
    2891,
    2668,
    2378,
    2094,
    1822,
    1500,
    1149,
    774,
    402,
    102,
    24,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    0
  ],
  "num_def_loads": 2,
  "def_total_hours": [
    3,
    1
  ],
  "P_deferrable_nom": [
    1300,
    3450
  ],
  "treat_def_as_semi_cont": [
    1,
    0
  ]
}

The same or similar template can be used in a rest_command (or shell_command).

The sensors are:

  • sensor.cecil_st_general_price - the actual electricity supply price at this moment.
  • sensor.cecil_st_general_forecast - the forecast supply price as provided by my electricity supplier in half-hour increments. You can see I’m using the 'per_kwh' attribute out of the 'forecasts' array (see the image below). This template ( state_attr('sensor.cecil_st_general_forecast', 'forecasts') | map(attribute='per_kwh') | list ) extracts the attributes and lists them, with the actual price from the sensor above as the first element.
  • sensor.cecil_st_feed_in_price - as above, except this is the actual feed-in price.
  • sensor.cecil_st_feed_in_forecast - as above, except this is the feed-in forecast list.
  • sensor.sonnenbatterie_84324_production_w - the current actual solar production as measured by my battery. You may be able to get this from your inverter.
  • sensor.solcast_pv_forecast_forecast_today - the Solcast solar forecast for today as produced by oziee’s ha-solcast-solar HACS integration.
  • sensor.solcast_pv_forecast_forecast_tomorrow - same as above, except this is tomorrow’s solar forecast.

Maybe this discussion will help? I haven’t read it, but it seems to mention Nordpool and day-ahead. Look here. I’m sure there are many discussions in this forum about Nordpool.


Hello,

Why suddenly this error?
All I did was change the peak/non-peak prices in the config.

2024-01-27 14:27:43,526 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2024-01-27 14:27:43,526 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2024-01-27 14:27:43,527 - web_server - ERROR - Exception on /action/forecast-model-fit [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/web_server.py", line 181, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/command_line.py", line 146, in set_input_data_dict
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.11/dist-packages/emhass/retrieve_hass.py", line 150, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
                                              ^^^^^^
UnboundLocalError: cannot access local variable 'df_day' where it is not associated with a value

The forecast-model-fit method you used does not involve peak/off-peak prices, so changing those is unlikely to be the cause. The log points to the data retrieval failing instead.

Erase the load_forecast_mlf.pkl file in your share folder, then try the fit method again.

Thanks for your reply.
I tried it, but it’s the same error.
I also rebooted, but that doesn’t help either.

Hi guys,
I’m trying to add start & end times to the mpc_optim curl call, like this:
\"def_start_timestep\": [21,0],
\"def_end_timestep\": [23,0],
I also tried:
\"def_start_timestep\": [0,0],
\"def_end_timestep\": [3,0],
but my first 1-hour deferrable load doesn’t seem to move from the 2:00 am slot (it’s now 19:00).

Any ideas?
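
One thing worth checking (a guess, not a confirmed diagnosis): def_start_timestep and def_end_timestep are expressed in optimization timesteps counted from the start of the horizon, not in clock hours. For an MPC call at 19:00 with the default 30-minute timestep, a 21:00–23:00 window would be roughly timesteps 4 to 8, not [21, 23]. A small conversion helper, as a sketch under those assumptions:

```python
# Hedged helper: convert a clock-hour window into timestep indices,
# assuming timesteps are counted from "now" and the optimization
# timestep is 30 minutes (the EMHASS default); verify against your config
def window_to_timesteps(now_hour, start_hour, end_hour, step_minutes=30):
    steps_per_hour = 60 // step_minutes
    start = max(int((start_hour - now_hour) * steps_per_hour), 0)
    end = max(int((end_hour - now_hour) * steps_per_hour), 0)
    return start, end

# e.g. at 19:00, a 21:00-23:00 window:
# window_to_timesteps(19, 21, 23) -> (4, 8)
```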