EMHASS: An Energy Management for Home Assistant

I’m interested in this, too. There are loads that can simply run in the best time frame (e.g. a dishwasher). But there are others that need to run within the next x hours, or something bad happens (e.g. no hot water is available).

@davidusb I wonder if we could define the peak_hours_periods (start/end) per load in the runtimeparams? With this we could dynamically limit the time frames when running a frequent MPC. Am I right? Is it possible?

Setting the prediction_horizon doesn’t work in that case, because in my understanding this would affect all loads and not only one of the x deferrable loads.

Thanks for the help/suggestions above. Will give this a go.

One more question: is there a way to set a maximum export limit? At my house, due to the distance to the pole, I’ve been limited to a maximum export of 5 kW (even though my inverter could push slightly more).

In EMHASS I only seem to be able to define how much power I can pull from the grid (currently set to 9 kW), which it then also seems to use as the maximum I can push to the grid.

Right now, during the day, it sometimes suggests I should be pushing up to 9 kW to the grid (solar PV output + battery output).

You could do some time limiting:

"def_total_hours": [{{ iif(now() > today_at("12:00"),2,0) }}],

If it’s after 12:00, the deferrable total hours value is 2; otherwise it is 0.
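
In plain Python, that template logic amounts to the following minimal sketch (the 2-hour value and the 12:00 cutoff are just the example values from the template above):

```python
# Sketch: what iif(now() > today_at("12:00"), 2, 0) evaluates to.
from datetime import datetime, time


def def_total_hours(now: datetime) -> int:
    """Return 2 if `now` is after 12:00 local time, else 0."""
    return 2 if now.time() > time(12, 0) else 0


print(def_total_hours(datetime(2023, 10, 21, 13, 0)))  # after noon -> 2
print(def_total_hours(datetime(2023, 10, 21, 9, 0)))   # before noon -> 0
```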

That still doesn’t resolve the issue. In my example the whole day is approx. 32 cents, but for 1 hour at 23:00 it’s 25 cents. From a cost perspective it’s of course best to run at 25 cents, but sometimes you would like to force something to run within the next x hours.

Hello, the “set_def_constant” parameter doesn’t work for me. I simply appended it to the end of my optimization command…

dayahead_optim: "curl -i -H \"Content-Type:application/json\" -X POST -d '{\"solcast_rooftop_id\":\"xxxx-xxxx-xxxx-xxxx\",\"solcast_api_key\":\"xxxxxxxxxx\",\"set_def_constant\":[1,1,1]}' http://localhost:5000/action/dayahead-optim"

No error is displayed in the log, but my load is not scheduled in contiguous blocks either.

What am I doing wrong? Did I format the parameter incorrectly? Are quotation marks missing? Does it matter whether I pass “true” or “1” in the list?

Thanks

I think we need to set something like a due date/time instead of a parameter that says “run in the next x hours”. The latter wouldn’t solve the issue when running a frequent MPC, because with every run it can reschedule the loads into a future timeslot.

Hello, I am trying to run an optimization with this call:

curl -i -H 'Content-Type: application/json' -X POST -d '{
      "load_cost_forecast": [0.3, 0.3, 0.3, 0.3, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
      "pv_power_forecast": [0, 0, 0, 0, 0, 0, 0, 0, 5000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
      "num_def_loads": 1,
      "def_total_hours": [2],
      "P_deferrable_nom": [2000],
      "treat_def_as_semi_cont": [1],
      "set_def_constant": [1]
  }' http://localhost:5000/action/dayahead-optim

So with this command I have the first 4 “cheap” hours at 0.3 and the rest of my 24 h window at 0.5.
Furthermore, I have one hour with 5000 W of PV; the rest is 0.

I have 1 deferrable load of 2000 W.
I have set

 "treat_def_as_semi_cont": [true],
 "set_def_constant": [true]

My expectation is that the deferrable load will be placed in the hour with 5 kW of PV, or in one of the cheap hours.

But when I use true for “treat_def_as_semi_cont” and “set_def_constant”, it is always placed in the last hour.

I also set my history to 0, to show that this cannot be correct.

Any hints as to what is wrong?

PS: the log output is:

2023-10-21 00:25:15,984 - web_server - INFO - Performing day-ahead forecast optimization
2023-10-21 00:25:15,989 - web_server - INFO - Perform optimization for the day-ahead
2023-10-21 00:25:16,026 - web_server - INFO - Status: Optimal
2023-10-21 00:25:16,026 - web_server - INFO - Total value of the Cost function = -11.68

The way you are passing set_def_constant is correct.
Try to also explicitly pass these via the command:

  • "num_def_loads": 3
  • "P_deferrable_nom": [x, y, z]
  • "def_total_hours": [a, b, c]
  • "treat_def_as_semi_cont": [d, e, f] (use 0/1)
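
For illustration, here is a hedged Python sketch of how those lists should line up (the endpoint path is the one used in the thread; the numeric values are placeholders, not a recommendation):

```python
# Sketch: build a runtime-parameters payload with consistent list lengths
# for three deferrable loads.
import json

num_def_loads = 3
payload = {
    "num_def_loads": num_def_loads,
    "P_deferrable_nom": [3000, 750, 1100],   # W, one entry per load
    "def_total_hours": [5, 8, 10],           # hours, one entry per load
    "treat_def_as_semi_cont": [1, 1, 1],     # use 0/1, not true/false
    "set_def_constant": [1, 1, 1],
}

# Every per-load list must have exactly num_def_loads entries; a mismatch
# is what produces "IndexError: list index out of range" in the optimizer.
for key in ("P_deferrable_nom", "def_total_hours",
            "treat_def_as_semi_cont", "set_def_constant"):
    assert len(payload[key]) == num_def_loads, key

print(json.dumps(payload))
```

POST the printed JSON to `http://localhost:5000/action/dayahead-optim`, e.g. with the same curl pattern as above.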

@davidusb, looking at the changelog, does time series clustering mean being able to detect something like the duty cycle of my heat pump?

Equipment with duty cycles like that can be easily classified using basic clustering.
What I did for now is just a script that can identify the different clusters present in your load power time series. The script is there and anybody can use it; you may test it to see if it’s able to identify those HVAC clusters. However, it is not part of the add-on yet; it is work in progress on the core EMHASS code.
The goal will be to ease the configuration process of EMHASS, with no need to build a custom sensor with your load power and the deferrable powers subtracted, waiting for 2 days, etc.
If we are able to identify the deferrable load clusters, then we could build this sensor automatically inside EMHASS.
The main challenge is that we need some input from the user. The AI algorithms identify the clusters with no problem, but then the user has to label them to pick the correct deferrable load signals; this cannot be automated.
So I’m thinking about the easiest way to implement this.
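
To make the idea concrete, here is a minimal, self-contained sketch (not the actual EMHASS script) that separates a synthetic on/off duty-cycle load signal with scikit-learn’s KMeans:

```python
# Sketch: cluster a synthetic heat-pump-like power signal into idle/running
# states. The data here is fabricated for illustration only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Idle around 100 W, running around 2000 W -- well separated modes.
idle = rng.normal(100, 20, size=200)
running = rng.normal(2000, 50, size=100)
power = np.concatenate([idle, running])

# k-means on the 1-D power values (reshaped to a column vector).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    power.reshape(-1, 1))

# The cluster containing the maximum power sample is the "running" state.
high = labels[power.argmax()]
print("running samples:", int((labels == high).sum()))
```

With real household data the clusters overlap more, which is exactly why the user still has to label the identified clusters.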

This seems like a nice option to have in the future.
But you can already do this yourself by providing a customized list of load cost values. You can pass a list with your normal cost, but on top of that add a really expensive price for the range of hours where the load should not operate. This way the algorithm will converge and schedule your load into the cheapest (allowed) time frames.
But this will only work properly, and make sense, if you have just one deferrable load.
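
A sketch of that cost-list trick (the penalty value and the 6-hour window are arbitrary examples, using the tariff from the earlier post):

```python
# Sketch: overwrite the hours where the load must NOT run with a
# prohibitively expensive price, so the optimizer only considers the
# allowed window.
PENALTY = 100.0  # arbitrary large price

normal_cost = [0.32] * 24
normal_cost[23] = 0.25  # the genuinely cheap hour at 23:00

# Force the load to run within the next 6 hours (hours 0-5):
load_cost_forecast = [c if h < 6 else PENALTY
                      for h, c in enumerate(normal_cost)]

cheapest_hour = min(range(24), key=lambda h: load_cost_forecast[h])
print(cheapest_hour)  # prints 0, inside the allowed 0-5 window
```

The optimizer never sees the cheap 23:00 hour, because it now costs 100.0 in the forecast it is given.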

@davidusb my forecaster model is broken after upgrading to the latest version.

2023-10-21 23:44:32,440 - web_server - INFO -  >> Performing a machine learning forecast model predict...
/usr/local/lib/python3.9/dist-packages/sklearn/base.py:348: InconsistentVersionWarning:

Trying to unpickle estimator KNeighborsRegressor from version 1.3.0 when using version 1.3.1. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations

2023-10-21 23:44:32,452 - web_server - ERROR - Exception on /action/forecast-model-predict [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 221, in action_call
    df_pred = forecast_model_predict(input_data_dict, app.logger)
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 361, in forecast_model_predict
    predictions = mlf.predict(data_last_window)
  File "/usr/local/lib/python3.9/dist-packages/emhass/machine_learning_forecaster.py", line 211, in predict
    predictions = self.forecaster.predict(steps=self.num_lags,
  File "/usr/local/lib/python3.9/dist-packages/skforecast/ForecasterAutoreg/ForecasterAutoreg.py", line 696, in predict
    if self.differentiation is not None:
AttributeError: 'ForecasterAutoreg' object has no attribute 'differentiation'

How can I migrate the model to work with the new EMHASS version / dependencies?

Yes, but as you said, this only works with one deferrable load. I added a feature request on GitHub (Feature Request: Time windows for deferrable loads · Issue #123 · davidusb-geek/emhass · GitHub) so it doesn’t get forgotten.

Erase the currently saved *.pkl file in the share folder of your HA instance and relaunch a model fit/tune.

Just to close the loop on this: I needed to reboot the machine running HA. So if you see timeouts on your REST calls into EMHASS and high CPU usage from the add-on, give rebooting your host a try. It should have been the first thing I tried!

@davidusb do you have any idea why the program made this decision?

This CPU issue is not actually resolved for me. It takes about 10-20 runs of the MPC action before the waitress queue grows long enough to cause 99% CPU usage for the add-on container. The host is only running HA, so not much else is going on. These days I am getting a lot of infeasible statuses from the optimizer. This seems to lead to exceptions due to missing error handling. Could these exceptions be the issue?

I can see in the log file that it never returns from running the solver, as you can see below:

2023-10-23 19:20:01,855 - web_server - INFO -  >> Performing a machine learning forecast model predict...
2023-10-23 19:20:01,969 - web_server - INFO - Successfully posted to sensor.p_load_forecast_custom_model = 3346.73
2023-10-23 19:20:01,980 - web_server - INFO - Setting up needed data
2023-10-23 19:20:01,983 - web_server - INFO - Retrieve hass get data method initiated...
2023-10-23 19:20:02,171 - web_server - INFO - Retrieving weather forecast data using method = list
2023-10-23 19:20:02,175 - web_server - INFO -  >> Performing naive MPC optimization...
2023-10-23 19:20:02,175 - web_server - INFO - Performing naive MPC optimization
2023-10-23 19:20:02,179 - web_server - INFO - Perform an iteration of a naive MPC controller

I don’t see the Status: ..... line from

 self.logger.info("Status: " + self.optim_status)

I also don’t see the message that is logged if the solver throws an exception. I am a bit stuck here. This started happening after I added another deferrable load.

What am I missing? I’m manually pressing the Perfect Optimization and Day-ahead Optimization buttons in the web GUI.

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
2023-10-23 06:41:53,357 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2023-10-23 06:41:53,357 - web_server - INFO - Home Assistant data fetch will be performed using url: http://supervisor/core/api
2023-10-23 06:41:53,357 - web_server - INFO - The data path is: /share
2023-10-23 06:41:53,358 - web_server - INFO - Using core emhass version: 0.5.1
waitress   INFO  Serving on http://0.0.0.0:5000
2023-10-23 06:41:58,565 - web_server - INFO - EMHASS server online, serving index.html...
2023-10-23 06:41:58,577 - web_server - WARNING - The data container dictionary is empty... Please launch an optimization task
2023-10-23 06:42:25,484 - web_server - INFO - Setting up needed data
2023-10-23 06:42:25,509 - web_server - INFO - Retrieve hass get data method initiated...
2023-10-23 06:42:26,399 - web_server - INFO -  >> Performing perfect optimization...
2023-10-23 06:42:26,399 - web_server - INFO - Performing perfect forecast optimization
2023-10-23 06:42:26,402 - web_server - INFO - Perform optimization for perfect forecast scenario
2023-10-23 06:42:26,402 - web_server - INFO - Solving for day: 21-10-2023
2023-10-23 06:42:26,405 - web_server - ERROR - Exception on /action/perfect-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 188, in action_call
    opt_res = perfect_forecast_optim(input_data_dict, app.logger)
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 199, in perfect_forecast_optim
    opt_res = input_data_dict['opt'].perform_perfect_forecast_optim(df_input_data, input_data_dict['days_list'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/optimization.py", line 494, in perform_perfect_forecast_optim
    opt_tp = self.perform_optimization(data_tp, P_PV, P_load,
  File "/usr/local/lib/python3.9/dist-packages/emhass/optimization.py", line 157, in perform_optimization
    if self.optim_conf['treat_def_as_semi_cont'][k]:
IndexError: list index out of range
2023-10-23 06:42:27,397 - web_server - INFO - Setting up needed data
2023-10-23 06:42:27,401 - web_server - INFO - Retrieving weather forecast data using method = scrapper
2023-10-23 06:42:28,171 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3621, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Huawei_Technologies_Co-Ltd_SUN2000_10KTL_USL0_240V_'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 179, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 90, in set_input_data_dict
    P_PV_forecast = fcst.get_power_from_weather(df_weather)
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 418, in get_power_from_weather
    inverter = cec_inverters[self.plant_conf['inverter_model'][i]]
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/frame.py", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3623, in get_loc
    raise KeyError(key) from err
KeyError: 'Huawei_Technologies_Co-Ltd_SUN2000_10KTL_USL0_240V_'
2023-10-23 06:42:30,511 - web_server - INFO - Setting up needed data
2023-10-23 06:42:30,516 - web_server - INFO -  >> Publishing data...
2023-10-23 06:42:30,516 - web_server - INFO - Publishing data to HASS instance
2023-10-23 06:42:30,516 - web_server - ERROR - File not found error, run an optimization task first.

Try to replace the hyphen with an underscore in your inverter/PV panel model.
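
For reference, this looks like a name-normalization issue: the inverter database keys that the lookup expects use underscores where the datasheet name has other characters. A small hypothetical helper that mimics that normalization (an assumption; check the actual key list your EMHASS version uses):

```python
# Hypothetical helper: replace every character that is not a letter, digit,
# or underscore with "_". This mirrors the kind of sanitization applied to
# CEC inverter/module table names (assumption, not the verified EMHASS code).
import re


def sanitize_model_name(name: str) -> str:
    """Normalize a model name so it matches underscore-style table keys."""
    return re.sub(r"[^A-Za-z0-9_]", "_", name)


print(sanitize_model_name("Huawei_Technologies_Co-Ltd_SUN2000_10KTL_USL0_240V_"))
# -> Huawei_Technologies_Co_Ltd_SUN2000_10KTL_USL0_240V_
```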

It’s that simple, thank you.