EMHASS: An Energy Management for Home Assistant

Yes

> 
> alias: EMHASS Post MPC Solcast
> description: ""
> trigger:
>   - platform: time_pattern
>     minutes: /5
>   - platform: state
>     entity_id:
>       - sensor.amber_feed_in_price
> condition: []
> action:
>   - service: shell_command.post_mpc_optim_solcast
>     data: {}
>   - delay:
>       hours: 0
>       minutes: 0
>       seconds: 5
>       milliseconds: 0
>   - service: shell_command.publish_data
>     data: {}
> mode: single
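
For context, this automation assumes shell_command entries roughly like the ones below exist in configuration.yaml. This is only a minimal sketch: the real post_mpc_optim_solcast payload would also carry the Solcast PV forecast and any other runtime parameters you use, and port 5001 is simply the add-on port that appears later in this thread.

shell_command:
  post_mpc_optim_solcast: >-
    curl -i -H 'Content-Type: application/json' -X POST
    -d '{"prediction_horizon": 48}' http://localhost:5001/action/naive-mpc-optim
  publish_data: >-
    curl -i -H 'Content-Type: application/json' -X POST
    -d '{}' http://localhost:5001/action/publish-data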

Looks like this is my issue

[image]

Still learning HA…
value template…

I wouldn’t recommend using two triggers; the /5 minute interval should be enough. If you trigger every time the feed_in_price changes you will cause your optimisation to run very often. I.e. delete the platform state trigger and keep only the time pattern, as in the trimmed example below.
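
Something like this, i.e. the same automation as above with only the time_pattern trigger kept:

alias: EMHASS Post MPC Solcast
description: ""
trigger:
  - platform: time_pattern
    minutes: /5
condition: []
action:
  - service: shell_command.post_mpc_optim_solcast
    data: {}
  - delay:
      hours: 0
      minutes: 0
      seconds: 5
      milliseconds: 0
  - service: shell_command.publish_data
    data: {}
mode: single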


Can you tell me what this input_text is?

I’m struggling to get emhass working in legacy mode in order to easier test my parameters and view log/debug output before I intend to get the standalone docker version up and running.
But I’m running into issues with the accepted parameters. According to the documentation, the legacy version should accept all kinds of parameters, most importantly the --runtimeparams parameter, so I can pass my cost forecast to the model. I get my costs from Nordpool and then add some additional costs from my provider. But emhass objects when I add --runtimeparams, saying it’s an unrecognized argument. Not even the --version argument is accepted.

> emhass --action 'dayahead-optim' --config '.\' --runtimeparams '{"load_cost_forecast": [125.07, 123.07, 121.07, 120.07, 126.07, 125.07, 123.07, 124.07, 126.07, 131.07, 142.07, 152.07, 164.07, 152.07, 146.07, 143.07, 136.07, 133.07, 131.07, 130.07, 131.07, 131.07, 132.07, 132.07], "prod_price_forecast": [108.96, 106.96, 104.96, 103.96, 109.96, 108.96, 106.96, 107.96, 109.96, 114.96, 125.96, 135.96, 147.96, 135.96, 129.96, 126.96, 119.96, 116.96, 114.96, 113.96, 114.96, 114.96, 115.96, 115.96]}'
C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pvlib\forecast.py:20: UserWarning: The forecast module algorithms and features are highly experimental. The API may change, the functionality may be consolidated into an io module, or the module may be separated into its own package.
  warnings.warn(
usage: emhass [-h] [--action ACTION] [--config CONFIG] [--costfun COSTFUN]
emhass: error: unrecognized arguments: --runtimeparams {load_cost_forecast: [125.07, 123.07, 121.07, 120.07, 126.07, 125.07, 123.07, 124.07, 126.07, 131.07, 142.07, 152.07, 164.07, 152.07, 146.07, 143.07, 136.07, 133.07, 131.07, 130.07, 131.07, 131.07, 132.07, 132.07], prod_price_forecast: [108.96, 106.96, 104.96, 103.96, 109.96, 108.96, 106.96, 107.96, 109.96, 114.96, 125.96, 135.96, 147.96, 135.96, 129.96, 126.96, 119.96, 116.96, 114.96, 113.96, 114.96, 114.96, 115.96, 115.96]}

I wanted to inject my own calculation of the household load forecasts for the next 24 hours.

I have created a FIFO buffer containing 48 elements which is my power consumption from the last 48 time slots. Every 30 minutes I save the current power value at the back of the queue and delete the first entry.

When I run my optimisation I inject the current power value for the first entry followed by 47 elements from the FIFO queue as my power load forecast.

Unless you are running a very high-frequency MPC optimisation that needs the current power value, this FIFO buffer is probably unnecessary for most implementations of EMHASS.
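
For anyone wanting to try the same idea, here is a sketch of how such a list can be handed to EMHASS at runtime through the load_power_forecast key, assuming the add-on is reachable on localhost:5001 as in the posts further down. The shell_command name and the values are placeholders only:

shell_command:
  # placeholder values: in practice the list holds one entry per step of the
  # prediction horizon (48 here), built from the FIFO buffer described above
  post_mpc_optim_custom_load: >-
    curl -i -H 'Content-Type: application/json' -X POST -d '{
      "prediction_horizon": 48,
      "load_power_forecast": [450, 480, 520, 510]
      }' http://localhost:5001/action/naive-mpc-optim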

Hi. These old problems were normally solved by providing the path to the installed solver.

Once you are inside Python try the following (pulp needs to be imported first):

import pulp as plp
solver_list = plp.listSolvers(onlyAvailable=True)

This will normally list the solvers that are really available.

If this doesn’t work then manually install the glpk solver with:

apt-get install -y --no-install-recommends libglpk-dev glpk-utils 

And change the solver parameter accordingly in the config.yaml file
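
For example, assuming glpk-utils puts glpsol under /usr/bin, the two relevant lines would look something like this (same key names as in the add-on configuration shown further down; GLPK_CMD is PuLP's name for that solver):

lp_solver: 'GLPK_CMD'
lp_solver_path: '/usr/bin/glpsol'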

The legacy method is stable and passing arguments should work fine.

Try first to test without passing any custom data.
Just do something like this:

emhass --action 'dayahead-optim' --config '/home/user/emhass/config_emhass.yaml' --costfun 'profit'

With this result we can figure out if you have a problem with your install, your configuration file or with your secrets file.

Hi Mark,

What’s your trigger to stop charging or stop discharging?

Thanks

At the moment I don’t stop/ start.

The car starts charging when I plug it in and EMHASS then sets the correct value. When EMHASS is requesting 0W my car draws about 400W, which is close enough.

I probably should switch off the charger with my sunset routine.

What I don’t want is the contactor switching on and off lots of times during the day.

Apologies… I mean when p_batt_forecast goes below -4000. When do you stop, or what is your trigger to stop charging the in-house battery? I assume this is charging the Tesla PW?

Thanks

Oh sorry, too many batteries.

Powerwall doesn’t have fine control so I can’t set a direct charging rate.

I do set the backup reserve as a percentage, and if the battery is below this level it will charge at 1.7 kW. I can also set backup mode, which will charge at 3.3 kW.

Exporting is quite complex, playing with the TBC (Time-Based Control) settings.


Thanks for the quick replies, and sorry for filling this thread with potentially stupid questions.
First off, I run emhass in a Windows environment in a py venv. The venv is activated.
Hence, I call emhass from the folder where I have the config files, using a relative path. That’s when I ran into the first deviation from the docs: the --config parameter seems to be interpreted as a path to the directory containing the config files, not as a path to the config.yaml file itself. Forward or backward slash doesn’t seem to matter.

> emhass --action 'dayahead-optim' --config '.\config_emhass.yaml' --costfun 'profit'
C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pvlib\forecast.py:20: UserWarning: The forecast module algorithms and features are highly experimental. The API may change, the functionality may be consolidated into an io module, or the module may be separated into its own package.
  warnings.warn(
2023-06-07 08:51:43,400 - emhass.command_line - INFO - Setting up needed data
INFO:emhass.command_line:Setting up needed data
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Scripts\emhass.exe\__main__.py", line 7, in <module>
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\emhass\command_line.py", line 182, in main
    input_data_dict = setUp(config_path, costfun, logger)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\emhass\command_line.py", line 30, in setUp
    retrieve_hass_conf, optim_conf, plant_conf = get_yaml_parse(config_path)
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\emhass\utils.py", line 70, in get_yaml_parse
    with open(config_path + '/config.yaml', 'r') as file:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '.\\config_emhass.yaml/config.yaml'

Now that I’ve renamed the files to “config.yaml” and “secret.yaml”, that step passes.
Here’s my config file. It’s experimental for now, as I will use lists for the cost forecasts; for the moment I’ve just set them up as more or less constant to get emhass started. See https://pastebin.com/CjG0AdHD

Here’s what happens now. I get a frequency validation error, but can’t really understand what causes it.

> emhass --action 'dayahead-optim' --config './' --costfun 'profit'
C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pvlib\forecast.py:20: UserWarning: The forecast module algorithms and features are highly experimental. The API may change, the functionality may be consolidated into an io module, or the module may be separated into its own package.
  warnings.warn(
2023-06-07 09:33:52,531 - emhass.command_line - INFO - Setting up needed data
INFO:emhass.command_line:Setting up needed data
2023-06-07 09:33:52,645 - emhass.command_line - INFO - Retrieve hass get data method initiated...
INFO:emhass.command_line:Retrieve hass get data method initiated...
Traceback (most recent call last):
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pandas\core\arrays\datetimelike.py", line 1917, in _validate_frequency
    raise ValueError
ValueError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Scripts\emhass.exe\__main__.py", line 7, in <module>
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\emhass\command_line.py", line 182, in main
    input_data_dict = setUp(config_path, costfun, logger)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\emhass\command_line.py", line 37, in setUp
    rh.get_data(days_list, var_list,
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\emhass\retrieve_hass.py", line 113, in get_data
    self.df_final.index.freq = self.freq
    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pandas\core\indexes\datetimelike.py", line 100, in freq
    self._data.freq = value  # type: ignore[misc]
    ^^^^^^^^^^^^^^^
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pandas\core\arrays\datetimelike.py", line 1883, in freq
    self._validate_frequency(self, value)
  File "C:\Users\Ivar\Documents\Programmering\Python\emhass\emhassenv\Lib\site-packages\pandas\core\arrays\datetimelike.py", line 1928, in _validate_frequency
    raise ValueError(
ValueError: Inferred frequency None from passed values does not conform to passed frequency H

Ok, it is advancing.
In most cases this error means that you are not able to fetch data from Home Assistant.
So the main things to check are:

  • Correct names of your sensors. In your case check that these have enough and correct values: sensor.pv_power and sensor.template_emhass_no_var_load_2
  • Correct setting of the long-lived access token in Home Assistant. Did you do this? It should be placed in the secrets.yaml file (see the sketch just below).
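
For reference, a minimal sketch of what that secrets file usually contains in legacy mode; the URL is the one from the curl test below, while the token, time zone and coordinates are placeholders to replace with your own values:

hass_url: https://my.ha.server:8123/
long_lived_token: replacewithyourlonglivedtoken
time_zone: Europe/Stockholm
lat: 59.33
lon: 18.06
alt: 10.0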

Thanks.
long_lived_token: is placed in the secrets.yaml file, hass_url: has the trailing slash.
I’ve tested the access using curl with the HA URL, token and REST API URL to query both the sensors. I looked in the code for retrieve_hass.py to recreate the url using the same logic.
Of course, there might still be an issue somewhere, like a typo or something, but I got responses. I was probably expecting an exception of sorts if the HA connection failed. These sensors update very frequently. Is that an issue? Both sensors have been up and running for quite some time, so they should have sufficient history.

I’m curious to find out which value fails to conform to a certain frequency.
See snippet below with some masked stuff

$ curl -H "Authorization: Bearer xxxxxxxxxxxxxxxxxxxxxxxx" -H "Content-Type: application/json" https://my.ha.server:8123/api/history/period/2023-06-06T00:00:00+02:00?filter_entity_id=sensor.pv_power
[[{"entity_id":"sensor.pv_power","state":"0","attributes":{"state_class":"measurement","unit_of_measurement":"W","icon":"mdi:solar-power","friendly_name":"PV po
wer"},"last_changed":"2023-06-05T22:00:00+00:00","last_updated":"2023-06-05T22:00:00+00:00"},{"entity_id":"sensor.pv_power","state":"2","attributes":{"state_cla
ss":"measurement","unit_of_measurement":"W","icon":"mdi:solar-power","friendly_name":"PV power"},"last_changed":"2023-06-06T01:46:56.070474+00:00","last_updated
":"2023-06-06T01:46:56.070474+00:00"},{"entity_id":"sensor.pv_power","state":"0","attributes":{"state_class":"measurement","unit_of_measurement":"W","icon":"mdi
:solar-power","friendly_name":"PV power"},"last_changed":"2023-06-06T01:47:54.460419+00:00","last_updated":"2023-06-06T01:47:54.460419+00:00"},{"entity_id":"sen
sor.pv_power","state":"2","attributes":{"state_class":"measurement","unit_of_measurement":"W","icon":"mdi:solar-power","friendly_name":"PV power"},"last_changed
":"2023-06-06T01:48:26.766070+00:00","last_updated":"2023-06-06T01:48:26.766070+00:00"},{"entity_id":"sensor.pv_power","state":"25","attributes":{"state_class":
"measurement","unit_of_measurement":"W","icon":"mdi:solar-power","friendly_name":"PV power"},"last_changed":"2023-06-06T01:49:25.458547+00:00","last_updated":"2
023-06-06T01:49:25.458547+00:00"},{"entity_id":"sensor.pv_power","state":"38","attributes":{"state_class":"measurement","unit_of_measurement":"W","icon":"mdi:so
lar-power","friendly_name":"PV power"},"last_chan........

You are a star, thank you!

After the installation it works; now I can start diving into the functions.


Ok then define at least one deferrable load.
Leave most of the default config template as it is just to test. But of course change to your own sensor names.
For example, treat your sensors with the var_interp option; this can help (see the sketch below).
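
For illustration, the relevant block of the legacy config would look roughly like this; it is a sketch based on the default template, with the sensor names from the earlier traceback filled in purely as an example:

retrieve_hass_conf:
  - freq: 30
  - days_to_retrieve: 2
  - var_PV: 'sensor.pv_power'
  - var_load: 'sensor.template_emhass_no_var_load_2'
  - load_negative: False
  - set_zero_min: True
  - var_replace_zero:
    - 'sensor.pv_power'
  - var_interp:
    - 'sensor.pv_power'
    - 'sensor.template_emhass_no_var_load_2'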

I’m struggling to get the machine learning forecaster to work.
Can someone give me some advice?
My addon config

hass_url: empty
long_lived_token: empty
costfun: profit
logging_level: INFO
optimization_time_step: 30
historic_days_to_retrieve: 6
method_ts_round: nearest
set_total_pv_sell: false
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_nocharge_from_grid: false
set_nodischarge_to_grid: false
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
load_forecast_method: naive
sensor_power_photovoltaics: sensor.huidige_opbrengst
sensor_power_load_no_var_loads: sensor.huidig_verbruik_zonder_wp
number_of_deferrable_loads: 4
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 2000
  - nominal_power_of_deferrable_loads: 2000
  - nominal_power_of_deferrable_loads: 1700
  - nominal_power_of_deferrable_loads: 1100
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 2
  - operating_hours_of_each_deferrable_load: 2
  - operating_hours_of_each_deferrable_load: 2.5
  - operating_hours_of_each_deferrable_load: 1
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "12:00"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "17:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: false
  - treat_deferrable_load_as_semi_cont: false
  - treat_deferrable_load_as_semi_cont: false
  - treat_deferrable_load_as_semi_cont: false
load_peak_hours_cost: 0.1784
load_offpeak_hours_cost: 0.1684
photovoltaic_production_sell_price: 0
maximum_power_from_grid: 14000
list_pv_module_model:
  - pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
list_pv_inverter_model:
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
list_surface_tilt:
  - surface_tilt: 30
list_surface_azimuth:
  - surface_azimuth: 205
list_modules_per_string:
  - modules_per_string: 16
list_strings_per_inverter:
  - strings_per_inverter: 1
set_use_battery: false
battery_discharge_power_max: 1000
battery_charge_power_max: 1000
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 5000
battery_minimum_state_of_charge: 0.3
battery_maximum_state_of_charge: 0.9
battery_target_state_of_charge: 0.6
method: solcast

And these are my shell commands:

forecast_model_fit_load_zonder_wp: >-
  curl -i -H 'Content-Type: application/json' -X POST -d '{
    "days_to_retrieve": 15,
    "model_type": "load_zonder_wp_forecast",
    "var_model": "sensor.huidig_verbruik_zonder_wp",
    "sklearn_model": "KNeighborsRegressor",
    "num_lags": 48,
    "split_date_delta": "48h",
    "perform_backtest": "True"
    }' http://localhost:5001/action/forecast-model-fit
forecast_model_predict_load_zonder_wp: >-
  curl -i -H 'Content-Type: application/json' -X POST -d '{
    "model_type": "load_zonder_wp_forecast",
    "model_predict_publish": "True",
    "model_predict_entity_id": "sensor.p_load_zonder_wp_custom_model",
    "model_predict_unit_of_measurement": "W",
    "model_predict_friendly_name": "Warmtepompboiler custom model"
    }' http://localhost:5001/action/forecast-model-predict

When I run the first one in a terminal, the file load_zonder_wp_forecast_mlf.pkl is created.
When I run the second one:

❯ curl -i -H 'Content-Type: application/json' -X POST -d '{
    "model_type": "load_zonder_wp_forecast",
    "model_predict_publish": "True",
    "model_predict_entity_id": "sensor.p_load_zonder_wp_custom_model",
    "model_predict_unit_of_measurement": "W",
    "model_predict_friendly_name": "Warmtepompboiler custom model"
    }' http://192.168.79.54:5001/action/forecast-model-predict
HTTP/1.1 500 INTERNAL SERVER ERROR
Content-Length: 265
Content-Type: text/html; charset=utf-8
Date: Wed, 07 Jun 2023 22:49:59 GMT
Server: waitress

<!doctype html>
<html lang=en>
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

And these are the logs for the addon:

2023-06-08 00:49:31,692 - web_server - INFO - Performing a forecast model fit for load_zonder_wp_forecast
2023-06-08 00:49:31,701 - web_server - INFO - Training a KNeighborsRegressor model
2023-06-08 00:49:31,818 - web_server - INFO - Elapsed time for model fit: 0.11742901802062988
2023-06-08 00:49:31,930 - web_server - INFO - Prediction R2 score of fitted model on test data: -0.09174206116291761
2023-06-08 00:49:31,933 - web_server - INFO - Performing simple backtesting of fitted model

  0%|          | 0/13 [00:00<?, ?it/s]
 15%|█▌        | 2/13 [00:00<00:00, 19.13it/s]
 31%|███       | 4/13 [00:00<00:00, 19.06it/s]
 46%|████▌     | 6/13 [00:00<00:00, 18.76it/s]
 62%|██████▏   | 8/13 [00:00<00:00, 18.88it/s]
 77%|███████▋  | 10/13 [00:00<00:00, 18.84it/s]
 92%|█████████▏| 12/13 [00:00<00:00, 19.07it/s]
100%|██████████| 13/13 [00:00<00:00, 20.34it/s]
2023-06-08 00:49:32,577 - web_server - INFO - Elapsed backtesting time: 0.6434693336486816
2023-06-08 00:49:32,577 - web_server - INFO - Backtest R2 score: 0.1965843955066804
2023-06-08 00:49:59,873 - web_server - INFO - Setting up needed data
2023-06-08 00:49:59,880 - web_server - INFO - Retrieve hass get data method initiated...
2023-06-08 00:49:59,907 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2023-06-08 00:49:59,908 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2023-06-08 00:49:59,908 - web_server - ERROR - Exception on /action/forecast-model-predict [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 146, in set_input_data_dict
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 147, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
UnboundLocalError: local variable 'df_day' referenced before assignment

In recorder.yaml I have purge_keep_days: 15

When I use the buttons in the webui I get the same errors in the log.


You don’t have enough data.
If purge is set to 15 then set days_to_retrieve to something like 10 to be sure.
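
For example, the fit shell command from earlier with days_to_retrieve lowered and everything else unchanged:

forecast_model_fit_load_zonder_wp: >-
  curl -i -H 'Content-Type: application/json' -X POST -d '{
    "days_to_retrieve": 10,
    "model_type": "load_zonder_wp_forecast",
    "var_model": "sensor.huidig_verbruik_zonder_wp",
    "sklearn_model": "KNeighborsRegressor",
    "num_lags": 48,
    "split_date_delta": "48h",
    "perform_backtest": "True"
    }' http://localhost:5001/action/forecast-model-fit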

First time to experience a negative FIT… I panicked… hahahaha, it was exporting 1.0 kW with a -5c FIT…

Time to automate Tesla MY charging…
where do I start?

Should I be concerned about exporting and importing small amounts of power, especially at a negative FIT? Around 100 watts?

I find there is a lot more hype around negative FIT than it deserves. With EMHASS, if the price is that low (the general price has to be around 5c/kWh for the feed-in to be -3c/kWh), I am certainly not exporting any excess solar; in fact the reverse, I’m importing as much as I can from the grid: EV, pool, hot water, HVAC, …

Anyway, getting the Tesla vehicle into EMHASS isn’t too bad, here is my configuration:
