EMHASS: An Energy Management for Home Assistant

I’m struggling to get the machine learning forecaster to work.
Can someone give me some advice?
My addon config

hass_url: empty
long_lived_token: empty
costfun: profit
logging_level: INFO
optimization_time_step: 30
historic_days_to_retrieve: 6
method_ts_round: nearest
set_total_pv_sell: false
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_nocharge_from_grid: false
set_nodischarge_to_grid: false
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
load_forecast_method: naive
sensor_power_photovoltaics: sensor.huidige_opbrengst
sensor_power_load_no_var_loads: sensor.huidig_verbruik_zonder_wp
number_of_deferrable_loads: 4
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 2000
  - nominal_power_of_deferrable_loads: 2000
  - nominal_power_of_deferrable_loads: 1700
  - nominal_power_of_deferrable_loads: 1100
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 2
  - operating_hours_of_each_deferrable_load: 2
  - operating_hours_of_each_deferrable_load: 2.5
  - operating_hours_of_each_deferrable_load: 1
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "12:00"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "17:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: false
  - treat_deferrable_load_as_semi_cont: false
  - treat_deferrable_load_as_semi_cont: false
  - treat_deferrable_load_as_semi_cont: false
load_peak_hours_cost: 0.1784
load_offpeak_hours_cost: 0.1684
photovoltaic_production_sell_price: 0
maximum_power_from_grid: 14000
list_pv_module_model:
  - pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
list_pv_inverter_model:
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
list_surface_tilt:
  - surface_tilt: 30
list_surface_azimuth:
  - surface_azimuth: 205
list_modules_per_string:
  - modules_per_string: 16
list_strings_per_inverter:
  - strings_per_inverter: 1
set_use_battery: false
battery_discharge_power_max: 1000
battery_charge_power_max: 1000
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 5000
battery_minimum_state_of_charge: 0.3
battery_maximum_state_of_charge: 0.9
battery_target_state_of_charge: 0.6
method: solcast

And these are my shell commands:

forecast_model_fit_load_zonder_wp: >-
  curl -i -H 'Content-Type: application/json' -X POST -d '{
    "days_to_retrieve": 15,
    "model_type": "load_zonder_wp_forecast",
    "var_model": "sensor.huidig_verbruik_zonder_wp",
    "sklearn_model": "KNeighborsRegressor",
    "num_lags": 48,
    "split_date_delta": "48h",
    "perform_backtest": "True"
    }' http://localhost:5001/action/forecast-model-fit
forecast_model_predict_load_zonder_wp: >-
  curl -i -H 'Content-Type: application/json' -X POST -d '{
    "model_type": "load_zonder_wp_forecast",
    "model_predict_publish": "True",
    "model_predict_entity_id": "sensor.p_load_zonder_wp_custom_model",
    "model_predict_unit_of_measurement": "W",
    "model_predict_friendly_name": "Warmtepompboiler custom model"
    }' http://localhost:5001/action/forecast-model-predict

When I run the first one in a terminal, the file load_zonder_wp_forecast_mlf.pkl is created.
When I run the second one:

❯ curl -i -H 'Content-Type: application/json' -X POST -d '{
    "model_type": "load_zonder_wp_forecast",
    "model_predict_publish": "True",
    "model_predict_entity_id": "sensor.p_load_zonder_wp_custom_model",
    "model_predict_unit_of_measurement": "W",
    "model_predict_friendly_name": "Warmtepompboiler custom model"
    }' http://192.168.79.54:5001/action/forecast-model-predict
HTTP/1.1 500 INTERNAL SERVER ERROR
Content-Length: 265
Content-Type: text/html; charset=utf-8
Date: Wed, 07 Jun 2023 22:49:59 GMT
Server: waitress

<!doctype html>
<html lang=en>
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

And these are the logs from the addon:

2023-06-08 00:49:31,692 - web_server - INFO - Performing a forecast model fit for load_zonder_wp_forecast
2023-06-08 00:49:31,701 - web_server - INFO - Training a KNeighborsRegressor model
2023-06-08 00:49:31,818 - web_server - INFO - Elapsed time for model fit: 0.11742901802062988
2023-06-08 00:49:31,930 - web_server - INFO - Prediction R2 score of fitted model on test data: -0.09174206116291761
2023-06-08 00:49:31,933 - web_server - INFO - Performing simple backtesting of fitted model

  0%|          | 0/13 [00:00<?, ?it/s]
 15%|█▌        | 2/13 [00:00<00:00, 19.13it/s]
 31%|███       | 4/13 [00:00<00:00, 19.06it/s]
 46%|████▌     | 6/13 [00:00<00:00, 18.76it/s]
 62%|██████▏   | 8/13 [00:00<00:00, 18.88it/s]
 77%|███████▋  | 10/13 [00:00<00:00, 18.84it/s]
 92%|█████████▏| 12/13 [00:00<00:00, 19.07it/s]
100%|██████████| 13/13 [00:00<00:00, 20.34it/s]
2023-06-08 00:49:32,577 - web_server - INFO - Elapsed backtesting time: 0.6434693336486816
2023-06-08 00:49:32,577 - web_server - INFO - Backtest R2 score: 0.1965843955066804
2023-06-08 00:49:59,873 - web_server - INFO - Setting up needed data
2023-06-08 00:49:59,880 - web_server - INFO - Retrieve hass get data method initiated...
2023-06-08 00:49:59,907 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2023-06-08 00:49:59,908 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2023-06-08 00:49:59,908 - web_server - ERROR - Exception on /action/forecast-model-predict [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 146, in set_input_data_dict
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 147, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
UnboundLocalError: local variable 'df_day' referenced before assignment

In recorder.yaml I have purge_keep_days: 15

When I use the buttons in the webui I get the same errors in the log.


You don’t have enough data.
If purge is set to 15 then set days_to_retrieve to something like 10 to be sure.
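A minimal sketch of how the two settings relate (assuming recorder.yaml is included from configuration.yaml in the usual way):

# recorder.yaml: how many days of sensor history Home Assistant keeps
purge_keep_days: 15

# The "days_to_retrieve" passed to forecast-model-fit (and the add-on's
# historic_days_to_retrieve option) should stay comfortably below this value.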

First time experiencing a negative FIT… I panicked… hahahaha, it was exporting 1.0 kW at a -5c FIT…

Time to automate Tesla Model Y charging…
Where do I start?

Should I be concerned about exporting and importing small amounts of power, especially with a negative FIT? Around 100 watts?

I think there is a lot more hype around negative FIT than it deserves. With EMHASS, if the price is that low (the general price has to be around 5c/kWh for the feed-in to be -3c/kWh) I am certainly not exporting any excess solar; in fact the reverse, I'm importing as much as I can from the grid: EV, pool, hot water, HVAC, …

Anyway, getting the Tesla vehicle into EMHASS isn't too bad; here is my configuration:


First I deleted the load_zonder_wp_forecast_mlf.pkl file, then I changed days_to_retrieve to 10 days, and I get the same problem…

    ~ 
❯ curl -i -H 'Content-Type: application/json' -X POST -d '{
    "days_to_retrieve": 10,
    "model_type": "load_zonder_wp_forecast",
    "var_model": "sensor.huidig_verbruik_zonder_wp",
    "sklearn_model": "KNeighborsRegressor",
    "num_lags": 48,
    "split_date_delta": "48h",
    "perform_backtest": "True"
    }' http://192.168.79.54:5001/action/forecast-model-fit
HTTP/1.1 201 CREATED
Content-Length: 49
Content-Type: text/html; charset=utf-8
Date: Thu, 08 Jun 2023 10:52:19 GMT
Server: waitress

EMHASS >> Action forecast-model-fit executed...

    ~                                                                                   27s 
❯ curl -i -H 'Content-Type: application/json' -X POST -d '{
    "model_type": "load_zonder_wp_forecast",
    "model_predict_publish": "True",
    "model_predict_entity_id": "sensor.p_load_zonder_wp_custom_model",
    "model_predict_unit_of_measurement": "W",
    "model_predict_friendly_name": "Warmtepompboiler custom model"
    }' http://192.168.79.54:5001/action/forecast-model-predict
HTTP/1.1 500 INTERNAL SERVER ERROR
Content-Length: 265
Content-Type: text/html; charset=utf-8
Date: Thu, 08 Jun 2023 10:53:04 GMT
Server: waitress

<!doctype html>
<html lang=en>
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

    ~ 

Logs from the addon

2023-06-08 12:52:19,062 - web_server - INFO - Setting up needed data
2023-06-08 12:52:19,068 - web_server - INFO - Retrieve hass get data method initiated...
2023-06-08 12:52:44,891 - web_server - INFO -  >> Performing a machine learning forecast model fit...
2023-06-08 12:52:44,892 - web_server - INFO - Performing a forecast model fit for load_zonder_wp_forecast
2023-06-08 12:52:44,899 - web_server - INFO - Training a KNeighborsRegressor model
2023-06-08 12:52:44,926 - web_server - INFO - Elapsed time for model fit: 0.026717424392700195
2023-06-08 12:52:45,132 - web_server - INFO - Prediction R2 score of fitted model on test data: -0.009545803507708062
2023-06-08 12:52:45,135 - web_server - INFO - Performing simple backtesting of fitted model

  0%|          | 0/8 [00:00<?, ?it/s]
 25%|██▌       | 2/8 [00:00<00:00, 17.07it/s]
 50%|█████     | 4/8 [00:00<00:00, 17.42it/s]
 75%|███████▌  | 6/8 [00:00<00:00, 16.90it/s]
100%|██████████| 8/8 [00:00<00:00, 19.05it/s]
2023-06-08 12:52:45,559 - web_server - INFO - Elapsed backtesting time: 0.42389512062072754
2023-06-08 12:52:45,560 - web_server - INFO - Backtest R2 score: 0.45701203567639437
2023-06-08 12:53:04,109 - web_server - INFO - Setting up needed data
2023-06-08 12:53:04,115 - web_server - INFO - Retrieve hass get data method initiated...
2023-06-08 12:53:04,139 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2023-06-08 12:53:04,140 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2023-06-08 12:53:04,142 - web_server - ERROR - Exception on /action/forecast-model-predict [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 146, in set_input_data_dict
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 147, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
UnboundLocalError: local variable 'df_day' referenced before assignment

And when trying the forecast-model-tune action:

❯ curl -i -H 'Content-Type: application/json' -X POST -d '{
    "model_type": "load_zonder_wp_forecast"
    }' http://192.168.79.54:5001/action/forecast-model-tune
HTTP/1.1 500 INTERNAL SERVER ERROR
Content-Length: 265
Content-Type: text/html; charset=utf-8
Date: Thu, 08 Jun 2023 11:49:13 GMT
Server: waitress

<!doctype html>
<html lang=en>
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

Errors in the addon log

2023-06-08 13:49:13,471 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2023-06-08 13:49:13,471 - web_server - ERROR - Exception on /action/forecast-model-tune [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 146, in set_input_data_dict
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 147, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
UnboundLocalError: local variable 'df_day' referenced before assignment

Thanks. Progressing here. Now I run into another issue, which I believe has to do with the timestamp formatting from HA on the last_changed or last_updated attributes. Sometimes they include microseconds, which emhass doesn't seem to appreciate. As you can see in my post above, the timestamps from the PV power sensor sometimes have the "%Y-%m-%dT%H:%M:%S.%f%z" format, so they don't match the "%Y-%m-%dT%H:%M:%S%z" format that pandas guesses from the values without microseconds (the ValueError below even suggests format='ISO8601' or format='mixed')…

> emhass --action 'dayahead-optim' --config './' --costfun 'profit'
C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pvlib\forecast.py:20: UserWarning: The forecast module algorithms and features are highly experimental. The API may change, the functionality may be consolidated into an io module, or the module may be separated into its own package.
  warnings.warn(
2023-06-08 16:28:34,558 - emhass.command_line - INFO - Setting up needed data
INFO:emhass.command_line:Setting up needed data
2023-06-08 16:28:34,665 - emhass.command_line - INFO - Retrieve hass get data method initiated...
INFO:emhass.command_line:Retrieve hass get data method initiated...
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Scripts\emhass.exe\__main__.py", line 7, in <module>
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\emhass\command_line.py", line 182, in main
    input_data_dict = setUp(config_path, costfun, logger)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\emhass\command_line.py", line 37, in setUp
    rh.get_data(days_list, var_list,
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\emhass\retrieve_hass.py", line 101, in get_data
    from_date = pd.to_datetime(df_raw['last_changed']).min()
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pandas\core\tools\datetimes.py", line 1050, in to_datetime
    values = convert_listlike(arg._values, format)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pandas\core\tools\datetimes.py", line 453, in _convert_listlike_datetimes
    return _array_strptime_with_fallback(arg, name, utc, format, exact, errors)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pandas\core\tools\datetimes.py", line 484, in _array_strptime_with_fallback
    result, timezones = array_strptime(arg, fmt, exact=exact, errors=errors, utc=utc)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pandas\_libs\tslibs\strptime.pyx", line 530, in pandas._libs.tslibs.strptime.array_strptime
  File "pandas\_libs\tslibs\strptime.pyx", line 351, in pandas._libs.tslibs.strptime.array_strptime
ValueError: time data "2023-06-07T14:07:02.986111+00:00" doesn't match format "%Y-%m-%dT%H:%M:%S%z", at position 1. You might want to try:
    - passing `format` if your strings have a consistent format;
    - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;
    - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.

In the NL we currently have a big problem where electricity prices can go extremely negative due to the massive growth of PV infeed in recent years. Last week we had prices of up to MINUS 0.60 euro per kWh (the day-ahead market reached its absolute minimum at -400 euro/MWh!), absolutely crazy, and it's going to get much worse before it gets better. For people with solar panels and a dynamic hourly energy contract this is of course terrible.

EMHASS can help with this, but what would be the best way? I prefer not to use a relay because then I would also lose the ability to consume my own PV energy. I basically need a dummy load to dump energy into whenever the price goes so far negative that it starts to cost me money (of course only after all deferrable loads have been planned).

But during that hour, wouldn't it be better to just not use PV and import from the grid instead? You wouldn't lose anything by not using your own PV energy.
We have a similar situation in Sweden, although nowhere near those prices. I've seen 0.03 €/kWh on occasion, and it's not really worth avoiding since a few hours later prices are positive again. However, I do see a point in delaying battery charging to those hours and also setting my inverter to zero export. If the price dip comes when the sun doesn't shine, I'd use grid power to charge the battery.

Yeah, I have thought about that. It makes sense on an individual basis if you purely look at the finances. But if we consider the entire network, I think it would be better if, instead of shutting off solar panels and losing that energy, we all consumed our own energy as much as possible. You might even be paid for that in the future.

I don't view this as a problem or a terrible outcome. This is a success in the transition to sustainable energy, and there are real productivity benefits for a nation as well as for individuals when the price of energy goes this low. Businesses can offer things like 'free' EV charging, and high-energy processes like water heating and HVAC can soak up this additional energy, improving quality of life for very little cost.

In AU over summer we often see the sell price for electricity go negative for 6-8 hours a day, and even the buy cost go negative over the midday solar soak.

One of my key observations from following EMHASS recommendations is: don't let your battery fill too early. If your battery is full by 1000 but the cheapest prices of the day are at 1200, then you aren't optimising. In this case EMHASS will generally delay charging the battery until maybe 1000 or 1100 so it can be filled during the cheapest time of the day.

It also helps greatly if your loads match or exceed your solar production capability. I have a 15 kW solar inverter but initially only had a battery with a 5 kW power rating; EMHASS was able to schedule my other loads to make up the shortfall: 11 kW EV charging, 5 kW pool heat pump, 10 kW HVAC. In fact, when prices go that low, EMHASS schedules those devices to consume additional energy from the grid, which actually helps everyone by soaking up the oversupply.

My final strategy when fully charged is curtailment. Home Assistant switches my SolarEdge inverter to zero-export mode when the sell price is below zero, and to zero-production mode when the buy cost is below zero. This means I'm not contributing to the issue of oversupply (which is what negative prices signal) but actually consuming the excess, which allows others to export.
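Roughly, the sell-price side of that can be a simple numeric-state automation. A sketch (the price sensor here is the Amber feed-in price used elsewhere in this thread; the select entity and option name are just placeholders for however your inverter exposes its export control):

automation:
  - alias: Inverter zero export on negative sell price
    trigger:
      - platform: numeric_state
        entity_id: sensor.amber_feed_in_price      # current sell price
        below: 0
    action:
      - service: select.select_option
        target:
          entity_id: select.solaredge_export_mode  # placeholder for your inverter's export control
        data:
          option: "Zero export"                    # placeholder option name

The zero-production case on a negative buy cost is the same pattern with the buy-price sensor and the production control instead.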


Love it Mark… great perspective!


Not working:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
2023-06-10 12:54:21,365 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2023-06-10 12:54:21,366 - web_server - INFO - Home Assistant data fetch will be performed using url: http://supervisor/core/api
2023-06-10 12:54:21,367 - web_server - INFO - The data path is: /share
2023-06-10 12:54:21,371 - web_server - INFO - Using core emhass version: 0.4.12
waitress   INFO  Serving on http://0.0.0.0:5000
2023-06-10 12:54:59,043 - web_server - INFO - EMHASS server online, serving index.html...
2023-06-10 12:54:59,056 - web_server - WARNING - The data container dictionary is empty... Please launch an optimization task
2023-06-10 12:55:42,583 - web_server - INFO - Setting up needed data
2023-06-10 12:55:42,665 - web_server - INFO - Retrieve hass get data method initiated...
2023-06-10 12:55:42,702 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2023-06-10 12:55:42,702 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2023-06-10 12:55:42,703 - web_server - ERROR - Exception on /action/perfect-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 78, in set_input_data_dict
    rh.get_data(days_list, var_list,
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 147, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
UnboundLocalError: local variable 'df_day' referenced before assignment

Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)

Also, Python is currently on either 3.10 or 3.11 in the releases. My question is why yours has not updated? (The red line in the error.)

That error has absolutely nothing to do with the Python version. Python 3.9 is still completely supported, but yes, we should update to 3.10. I just have to find the time to test that :+1:

My curl command was getting quite complex to manage.

I have now switched to a dedicated rest_command, which is still complex, but less so. I also get errors reported in the logs directly, which is helpful:

rest_command:
  naive_mpc_optim:
    url: http://localhost:5000/action/naive-mpc-optim
    method: POST
    content_type: 'application/json'
    payload: >-
      {
        "prod_price_forecast": {{
          ([states('sensor.amber_feed_in_price')|float(0)] +
          (state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list))
          | tojson 
        }},
        "load_cost_forecast": {{
          ([states('sensor.amber_general_price')|float(0)] + 
          state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list) 
          | tojson 
        }},
        "load_power_forecast": {{
          ([states('sensor.power_load_no_var_loads')|int] +
          (states('input_text.fi_fo_buffer').split(', ')|map('multiply',1000)|map('int')|list)[1:]
          )| tojson 
        }},
        "pv_power_forecast": {{
          ([states('sensor.APF_Generation_Entity')|int(0)] +
          state_attr('sensor.solcast_forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list +
          state_attr('sensor.solcast_forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list
          )| tojson
        }},
        "prediction_horizon": {{
          min(48, (state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
        }},
        "alpha": 1,
        "beta": 0,
        "num_def_loads": 6,
        "def_total_hours": {{[states('sensor.def_total_hours_pool_filter')|int(0),
                              states('sensor.def_total_hours_pool_heatpump')|int(0),
                              states('sensor.def_total_hours_ev')|int(0),
                              states('sensor.def_total_hours_hvac')|int(0),
                              states('sensor.def_total_hours_hws')|int(0),
                              states('input_number.car2_def_total_hours')|int(0)]}},
        "P_deferrable_nom": [1300, 5000, 11500, 4000, 600, 3500],
        "treat_def_as_semi_cont": [1, 1, 0, 0, 1, 0],
        "set_def_constant": [0, 0, 0, 0, 0, 0],
        "soc_init": {{ (states('sensor.filtered_powerwall_soc')|int(0))/100 }},
        "soc_final": 0.0
      }
  publish_data:
    url: http://localhost:5000/action/publish-data
    method: POST
    content_type: 'application/json'
    payload: '{}'
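Since each rest_command entry becomes a service, calling them from an automation follows the same pattern as the shell_command examples in this thread, e.g. (a minimal sketch):

automation:
  - alias: EMHASS naive MPC optimisation
    trigger:
      - platform: time_pattern
        minutes: /5
    action:
      - service: rest_command.naive_mpc_optim
      - service: rest_command.publish_data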

Done! Python 3.10 is now supported.
Possible Python versions are 3.8, 3.9 and 3.10

Hi,
I'm trying to set up EMHASS but finding the instructions difficult. I would be very appreciative of some help.

  • I have 5kW PV.
  • A Fronius Primo 6.0-1 inverter.
  • I’m using solcast to predict PV production.
  • A 10 kWh (9 kWh usable) sonnen eco 9.43 battery.
  • I use the Sonnenbatterie HACS addon by weltmeyer to obtain all the battery metrics.
  • I can use POST to switch the battery between charge, discharge, self consumption and TOU modes and probably others that I haven’t discovered yet.
  • I’m on the Australian Amber Electric service (They don’t support Sonnen).
  • I’m using the Amber Electric integration in HA to get the pricing data etc.
  • Deferrable loads might be the pool pump, dishwasher, dryer and washing machine, but I currently don't monitor power consumption for any of these loads.
  • I have one EV, Tesla model Y.
  • I’ve been using Charge HQ to divert feed-in to the car but suspect I can replace that with EMHASS.

I’ve set up the following in HA:

  • Automation
- alias: EMHASS day-ahead optimization
  trigger:
    platform: time
    at: 05:30:00
  action:
  - service: shell_command.dayahead_optim
- alias: EMHASS publish data
  trigger:
  - minutes: /5
    platform: time_pattern
  action:
  - service: shell_command.publish_data
  • configuration.yaml
shell_command:
  dayahead_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/dayahead-optim"
  publish_data: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/publish-data"
  post_amber_forecast: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"prod_price_forecast\":{{(
          state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)
          }},\"load_cost_forecast\":{{(
          state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)
          }},\"prediction_horizon\":33}' http://localhost:5000/action/dayahead-optim"
  post_emhass_forecast: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"prod_price_forecast\":{{(
          state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)
          }},{{states('sensor.solcast_24hrs_forecast')}},\"load_cost_forecast\":{{(
          state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)
          }}}' http://localhost:5000/action/dayahead-optim"
  post_mpc_optim_solcast: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{(
          ([states('sensor.amber_general_price')|float(0)] +
          state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)[:48])
          }}, \"prod_price_forecast\":{{(
          ([states('sensor.cecil_st_feed_in_price')|float(0)] +
          state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)[:48]) 
          }}, \"pv_power_forecast\":{{states('sensor.solcast_24hrs_forecast')
          }}, \"prediction_horizon\":48,\"soc_init\":{{(states('sensor.sonnenbatterie_84324_state_charge_user')|float(0))/100
          }},\"soc_final\":0.05,\"def_total_hours\":[2,0,0,0]}' http://localhost:5000/action/naive-mpc-optim"

and

# Solar forecast for EMHASS
  - platform: rest
    name: "Solcast Forecast Data"
    json_attributes:
      - forecasts
    resource: https://api.solcast.com.au/rooftop_sites/YYYYYYYYYYYYY/forecasts?format=json&api_key=XXXXXXXXXXXXXXXXXXXXXXXXXX&hours=24
    method: GET
    value_template: "{{ (value_json.forecasts[0].pv_estimate)|round(2) }}"
    unit_of_measurement: "kW"
    device_class: power
    scan_interval: 8000
    force_update: true

  - platform: template
    sensors:
      solcast_24hrs_forecast:
        value_template: >-
          {%- set power = state_attr('sensor.solcast_forecast_data', 'forecasts') | map(attribute='pv_estimate') | list %}
          {%- set values_all = namespace(all=[]) %}
          {% for i in range(power | length) %}
          {%- set v = (power[i] | float |multiply(1000) ) | int(0) %}
          {%- set values_all.all = values_all.all + [ v ] %}
          {%- endfor %} {{ (values_all.all)[:48] }}

I assume I have to change the automation to also call the Amber, EMHASS and Solcast curl shell commands? But I'm not sure if this replaces the default dayahead_optim and publish_data curl commands.

Haven’t even looked into what I do with the returned results from EMHASS. Just wanted to get this first stage set up correctly.

Do I need to measure the deferrable loads so I can subtract them from the house consumption figure I get from the battery? If so, I'll have to go out and get some power points that monitor power.

Am I going about this the right way? What changes do I need to make, and what's next?

Thanks in advance.

Looks like you have a lot of components.

I would start with a basic system.

If you can control your battery charge/discharge via a POST command, then you should be able to get the same level of functionality as SmartShift.

Your EV and pool pump will also integrate well and you should be able to get excellent optimisation results. You will need a smart socket for the pool but the EV should be able to integrate directly.

You only need to call one optimisation command and one publish command; the others just expand on that with additional functionality.

As it is set up, it will call the simple day-ahead optimisation once a day at 0530, which is good, and then publish those results every 5 minutes.

You can find my post here on how you can connect other devices:
Running Devices When Energy is Cheaper and Greener
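For a smart socket, the general pattern is an automation that follows the deferrable-load power EMHASS publishes. A sketch, assuming the first deferrable load is published as sensor.p_deferrable0 (the usual EMHASS default) and switch.pool_pump is a made-up socket entity:

automation:
  - alias: Pool pump follows EMHASS plan
    trigger:
      - platform: state
        entity_id: sensor.p_deferrable0        # published by the publish-data action
    action:
      - choose:
          - conditions:
              - condition: numeric_state
                entity_id: sensor.p_deferrable0
                above: 0
            sequence:
              - service: switch.turn_on
                target:
                  entity_id: switch.pool_pump
        default:
          - service: switch.turn_off
            target:
              entity_id: switch.pool_pump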

@davidusb did you have a chance to look at the errors I posted above in EMHASS: An Energy Management for Home Assistant - #728 by Octofinger?
Seems emhass is choking on microseconds in HA sensor timestamps.