EMHASS: An Energy Management for Home Assistant

Unfortunately, it wasn’t that simple

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
2023-10-23 08:04:40,873 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2023-10-23 08:04:40,873 - web_server - INFO - Home Assistant data fetch will be performed using url: http://supervisor/core/api
2023-10-23 08:04:40,873 - web_server - INFO - The data path is: /share
2023-10-23 08:04:40,874 - web_server - INFO - Using core emhass version: 0.5.1
waitress   INFO  Serving on http://0.0.0.0:5000
2023-10-23 08:05:30,617 - web_server - INFO - Setting up needed data
2023-10-23 08:05:30,655 - web_server - INFO - Retrieve hass get data method initiated...
2023-10-23 08:05:30,662 - web_server - ERROR - Exception on /action/perfect-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 179, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 78, in set_input_data_dict
    rh.get_data(days_list, var_list,
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 124, in get_data
    data = response.json()[0]
KeyError: 0
2023-10-23 08:05:40,298 - web_server - INFO - Setting up needed data
2023-10-23 08:05:40,302 - web_server - INFO - Retrieving weather forecast data using method = scrapper
2023-10-23 08:05:41,159 - web_server - INFO - Retrieving data from hass for load forecast using method = naive
2023-10-23 08:05:41,160 - web_server - INFO - Retrieve hass get data method initiated...
2023-10-23 08:05:41,166 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 179, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 91, in set_input_data_dict
    P_load_forecast = fcst.get_load_forecast(method=optim_conf['load_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 585, in get_load_forecast
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 124, in get_data
    data = response.json()[0]
KeyError: 0
2023-10-23 08:05:45,514 - web_server - INFO - Setting up needed data
2023-10-23 08:05:45,518 - web_server - INFO -  >> Publishing data...
2023-10-23 08:05:45,518 - web_server - INFO - Publishing data to HASS instance
2023-10-23 08:05:45,519 - web_server - ERROR - File not found error, run an optimization task first


It seems there is something “wrong” in your configuration (the key error has changed).
Try having a look at the configurations previously posted in this thread (from rcruikshank or markpurcell, for example) and see if there are substantial differences.

BTW, perfect optim is not really useful from a practical standpoint. If you want to see a forecast you should use dayahead or MPC.

You’re not retrieving any data from Home Assistant. Either you don’t have enough history data for the requested perfect optimization task, or there is a problem with your sensor name.
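The `KeyError: 0` in the traceback is consistent with that: `retrieve_hass.py` does `response.json()[0]`, which assumes the history API returned a list of record lists. If Home Assistant instead returns an error object (a dict), indexing it with `0` raises exactly that error. A minimal sketch (the error payload shown is hypothetical, not the actual HA response body):

```python
# Hypothetical error payload from the HA history API: a dict instead of
# the expected list of record lists, e.g. when the entity_id is misspelled
# or there is no history for the requested period.
payload = {"message": "Entity not found."}

try:
    data = payload[0]  # what retrieve_hass.py effectively does
except KeyError as err:
    print(repr(err))  # KeyError(0)
```

So the error is a symptom of the response shape, not of the indexing code itself: fix the sensor name or the history retention and the list comes back.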

Ok, so I have pared it back to the deferrable loads I have been running happily for years. Now I am getting infeasible results all the time. I assume this is because of the new load introduced by the pool equipment.

So my plan is to monitor this new load and remove it from the “no var loads” sensor. This should help retrain the load forecast to better reflect what the uncontrolled load will be once the pool equipment is controlled by EMHASS.

This seems like a lot of work. What am I doing wrong?

Thanks, it works now.

Edit:
No, unfortunately it only worked once. During the next optimization the blocks were separated again.
I have at least two loads, both of which require up to 5000 W and each has a running time of one hour. Both should run in whole blocks, but currently they are alternately split into 30-minute blocks.

dayahead_optim: "curl -i -H \"Content-Type:application/json\" -X POST -d '{\"solcast_rooftop_id\":\"6c3f-7329-bed7-4717\",\"solcast_api_key\":\"NlGNmlv0dauI6hzbJFrmk5gas2QSTah8\",\"num_def_loads\":\"2\",\"P_deferrable_nom\":[5000,5000],\"def_total_hours\":[1,1],\"treat_def_as_semi_cont\":[1,1],\"set_def_constant\":[1,1]}' http://localhost:5000/action/dayahead-optim"

Is it possible to prioritize the loads?
Example:
1: Water
2: Heating
3: washing machine

Hi Mark

Did you manage to make these graphs work?

Mine don’t even remotely look the same, and I can’t figure out the way to do the 30-minute utility meter calculations.

Hoping you could help with this by chance?

I suspect the reason could be something completely different, but from time to time HA crashes, and when I check the add-on log this is the first thing I always see (I suspect it’s from the latest execution).
Does this ring any bells?
Otherwise I will look for a different cause.
Thanks

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 4 (char 3)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 319, in <module>
    config_hass = response.json()
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Extra data: line 1 column 4 (char 3)
[the same traceback repeats twice more]
2023-10-25 18:28:04,883 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2023-10-25 18:28:04,885 - web_server - INFO - Home Assistant data fetch will be performed using url: http://supervisor/core/api
2023-10-25 18:28:04,886 - web_server - INFO - The data path is: /share
2023-10-25 18:28:05,004 - web_server - INFO - Using core emhass version: 0.5.1
waitress   INFO  Serving on http://0.0.0.0:5000
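For what it’s worth, `Extra data: line 1 column 4 (char 3)` means the body returned by the supervisor API at startup was not valid JSON. A plain-text reply starting with a number, e.g. `404: Not Found`, reproduces that exact message: the parser reads `404` as a JSON number and then chokes on the `:` at character 3. The body shown below is only a plausible guess, not the actual response:

```python
import json

# A plain-text HTTP body such as "404: Not Found" is not valid JSON:
# json.loads parses "404" as a number, then raises on the ":" at char 3.
try:
    json.loads("404: Not Found")
except json.JSONDecodeError as err:
    print(err)  # Extra data: line 1 column 4 (char 3)
```

That would point at the supervisor API being briefly unreachable (or answering with an error page) while HA was restarting, rather than at EMHASS itself.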

Previously I was setting 0 deferrable loads, but a few days ago it was suggested it’s better to have one with 0 W power, so I switched to this approach. Now I see something strange: how is it possible that I have negative power for a deferrable load with 0 W and 0 hours? (Additional 0 W values follow the +750 W; the snapshot is just cropped.)

Also, my battery is charging more than the grid power limit allows (-6252 W, and there is no PV production that could explain this; it can only come from the grid, which is correctly limited to 6000 W). It seems the negative power of the deferrable load is causing this: +497 W load - 750 W def_load = -253 W, which I see as additional power charging the battery.

And if I try to remove the deferrable load it goes crazy :sweat_smile:

Yes, I followed the instructions and they have been working very well for me.

There is that funny two-step process with the automation YAML file.

Can I ask for some help?

How do you do the 30-minute automation for utility cost?

My consumption graph is the only one that works, so the 30-minute reset to match the automation is the only thing I can’t make work.

For everyone using Solcast: the new HA beta breaks it…

Got a little further now, with a graph that is published anyway. However, I’m still having problems in the log. It could be that the sensor doesn’t have two days of history, but at the same time I’m doubtful whether my shell scripts are OK.

dayahead_optim: "curl -i -H \"Content-Type:application/json\" -X POST -d '{}' http://localhost:5000/action/dayahead-optim"
  publish_data: "curl -i -H \"Content-Type:application/json\" -X POST -d '{}' http://localhost:5000/action/publish-data"
  trigger_nordpool_forecast: 'curl -i -H "Content-Type: application/json" -X POST -d ''{
    "load_cost_forecast":{{((state_attr("sensor.nordpool_kwh_se3_sek_3_10_025", "raw_today") | map(attribute="value") | list  + state_attr("sensor.nordpool_kwh_se3_sek_3_10_025", "raw_tomorrow") | map(attribute="value") | list))[now().hour:][:24] }},
    "prod_price_forecast":{{((state_attr("sensor.nordpool_kwh_se3_sek_2_10_0", "raw_today") | map(attribute="value") | list  + state_attr("sensor.nordpool_kwh_se3_sek_2_10_0", "raw_tomorrow") | map(attribute="value") | list))[now().hour:][:24]}}
  }'' http://localhost:5000/action/dayahead-optim'
  ml_forecast_fit: 'curl -i -H "Content-Type:application/json" -X POST -d ''{"num_lags": 24}'' http://localhost:5000/action/forecast-model-fit'
  ml_forecast_tune: 'curl -i -H "Content-Type:application/json" -X POST -d ''{}'' http://localhost:5000/action/forecast-model-tune'
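The Jinja logic in trigger_nordpool_forecast reduces to: concatenate today’s and tomorrow’s hourly prices, drop the hours already past, and keep at most 24 values. In plain Python (the price values below are placeholders, not real Nordpool data):

```python
# Placeholder hourly prices standing in for the Nordpool sensor's
# raw_today / raw_tomorrow attributes (24 values each).
raw_today = [0.30 + 0.01 * h for h in range(24)]
raw_tomorrow = [0.28 + 0.01 * h for h in range(24)]

hour = 10  # stands in for now().hour

# Same as the template: (today + tomorrow)[now().hour:][:24]
forecast = (raw_today + raw_tomorrow)[hour:][:24]
print(len(forecast))  # 24 values, starting at the current hour
```

Note that `raw_tomorrow` is empty until Nordpool publishes next-day prices (typically early afternoon), so earlier in the day the slice yields fewer than 24 values.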
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
2023-10-26 09:35:35,963 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2023-10-26 09:35:35,963 - web_server - INFO - Home Assistant data fetch will be performed using url: http://supervisor/core/api
2023-10-26 09:35:35,963 - web_server - INFO - The data path is: /share
2023-10-26 09:35:35,964 - web_server - INFO - Using core emhass version: 0.5.1
waitress   INFO  Serving on http://0.0.0.0:5000
2023-10-26 09:37:57,069 - web_server - INFO - EMHASS server online, serving index.html...
2023-10-26 09:40:00,226 - web_server - INFO - Setting up needed data
2023-10-26 09:40:00,248 - web_server - INFO -  >> Publishing data...
2023-10-26 09:40:00,248 - web_server - INFO - Publishing data to HASS instance
2023-10-26 09:40:00,248 - web_server - ERROR - File not found error, run an optimization task first.
2023-10-26 09:45:00,246 - web_server - INFO - Setting up needed data
2023-10-26 09:45:00,249 - web_server - INFO -  >> Publishing data...
2023-10-26 09:45:00,249 - web_server - INFO - Publishing data to HASS instance
2023-10-26 09:45:00,249 - web_server - ERROR - File not found error, run an optimization task first.
2023-10-26 09:50:00,226 - web_server - INFO - Setting up needed data
2023-10-26 09:50:00,227 - web_server - INFO -  >> Publishing data...
2023-10-26 09:50:00,227 - web_server - INFO - Publishing data to HASS instance
2023-10-26 09:50:00,227 - web_server - ERROR - File not found error, run an optimization task first.
2023-10-26 09:55:00,244 - web_server - INFO - Setting up needed data
2023-10-26 09:55:00,247 - web_server - INFO -  >> Publishing data...
2023-10-26 09:55:00,248 - web_server - INFO - Publishing data to HASS instance
2023-10-26 09:55:00,248 - web_server - ERROR - File not found error, run an optimization task first.
2023-10-26 10:00:00,245 - web_server - INFO - Setting up needed data
2023-10-26 10:00:00,248 - web_server - INFO -  >> Publishing data...
2023-10-26 10:00:00,248 - web_server - INFO - Publishing data to HASS instance
2023-10-26 10:00:00,248 - web_server - ERROR - File not found error, run an optimization task first.
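The repeated “File not found error” just means publish-data ran before any optimization had saved a result under the data path (`/share` here): publish only republishes the last saved optimization result. The guard amounts to something like the sketch below (the result file name is an assumption for illustration, not EMHASS’s actual file name):

```python
from pathlib import Path

# Stands in for the add-on's data path (/share in the log above).
data_path = Path("/tmp/emhass_demo_share")
# Hypothetical name for the saved optimization result.
result_file = data_path / "opt_res_latest.csv"

if result_file.is_file():
    print("Publishing data to HASS instance")
else:
    # With no prior dayahead/MPC run, there is nothing to publish.
    print("File not found error, run an optimization task first")
```

So the fix is ordering: make sure a dayahead-optim or naive-mpc-optim call succeeds at least once before the 5-minute publish automation fires.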

But this would mean losing the pre-trained model?! E.g. once the source data is discarded I can’t train on that data anymore. Isn’t there a way to migrate the model? Otherwise we would need to hold the data in HA for as long as possible?

@davidusb figured out that a prediction_horizon of 36 sometimes produces the same issue. I wonder if there is a general recommendation for setting the prediction_horizon, or does everybody need to find their own “sweet spot”? It’s not clear to me why the prediction_horizon sometimes produces the right result while the same setting gave me split loads (many 1 W loads) instead of scheduling them the expected way (3 h x 2000 W).

Can you give a bit more detail?

Which part is actually broken?

Still don’t get it.
Running MPC with set_def_constant set to true gives me the following (it moves the load into the last possible hours, no matter which prediction_horizon was chosen):
prediction_horizon=12


prediction_horizon=24

When setting it to false, it gave me the expected (and cost-efficient) result.


But it may schedule loads multiple times when a higher prediction_horizon is chosen.

Not sure if I’m doing something wrong running an MPC every 15 min, or if this is a bug in EMHASS?

curl -i -H "Content-Type: application/json" -X POST -d '{
        "load_cost_forecast": [0.3341, 0.3259, 0.3259, 0.307, 0.307, 0.292, 0.292, 0.2822, 0.2822, 0.2895, 0.2895, 0.3118, 0.3118, 0.3141, 0.3141, 0.3277, 0.3277, 0.3286, 0.3286, 0.3052, 0.3052, 0.2804, 0.2804, 0.2766, 0.2766, 0.2728, 0.2728],
        "prediction_horizon": 24,
        "pv_power_forecast": [737, 782, 999, 1123, 1159, 717, 691, 665, 629, 592, 512, 512, 506, 386, 296, 173, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        "alpha": 0.75,
        "beta": 0.25,
        "num_def_loads": 2,
        "def_total_hours": [
            3,
            0
          ],
        "P_deferrable_nom":  [
            2750,
            2000
          ],
        "treat_def_as_semi_cont": [true, true],
        "set_def_constant": [false, false] #<<<----- or set this to true
      }' http://localhost:5000/action/naive-mpc-optim

EDIT: This is an example with prediction_horizon = 24 and, in that case, better PV forecast data, with set_def_constant set to false.


Thanks, I applied your fix here and I’m back online.
