EMHASS: An Energy Management System for Home Assistant

I’ve created an issue on Github: Error: KeyError: 'list_hp_periods' when running optimization with CSV option · Issue #24 · davidusb-geek/emhass · GitHub
EMHASS is running in a standalone Docker container.

Port 5000 is already in use by something else on the host.
Could it be that port 5000 is already used by the electric vehicle add-on?
Thank you

Yes, this is somewhat painful. I’m struggling right now to implement ingress on the add-on and avoid this kind of issue. It is a work in progress but I’m stuck, help wanted here: Feature request: Access EMHASS web interface via ingress · Issue #22 · davidusb-geek/emhass · GitHub

Hi David, I hope this is an easy question as I am just getting started with EMHASS. I am in the southern hemisphere, so latitudes are negative. This seems to be causing issues with getting the weather forecast.

[2022-10-03 18:29:08,050] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 76, in set_input_data_dict
    df_weather = fcst.get_weather_forecast(method=optim_conf['weather_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 181, in get_weather_forecast
    raw_data.loc[count_row, col] = float(row.get_text())
ValueError: could not convert string to float: '-'

Config is

lat: -33.79843
lon: 151.224112
alt: 75

Hi, are you using the add-on or the docker standalone mode?
If using the add-on, the lat/lon values will be retrieved directly from your core HA configuration. So check that.
Otherwise, I’ve just tested that clearoutside can accept negative latitudes with no problem. So the issue is somewhere else.
See:
https://clearoutside.com/forecast/-33/151?desktop=true

I am using the add-on. I removed the lat and lon from the emhass config and picked it up from HA and I am seeing the same issue. When I put more specific lat/lon in I see the following data which has “-” in some of the table cells. I assume this is a case of incomplete data from clearoutside. Maybe I need to shift to Solcast.

Odd, I’m in the southern hemisphere and the forecast module has not missed a beat. The scraped forecast is a little inaccurate, though, so I have switched to Solcast now.
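
For anyone in the same situation: forecast values can also be passed directly in the API call instead of relying on the scraper. A minimal sketch, assuming your EMHASS version accepts the runtime pv_power_forecast key (the numbers below are placeholders and the list must cover your whole optimization horizon):

curl -i -H "Content-Type: application/json" -X POST -d '{"pv_power_forecast":[0, 0, 150, 640, 1320, 2050, 2500, 2300, 1700, 900, 250, 0]}' http://localhost:5000/action/dayahead-optim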

You don’t need to put or add any additional parameters beyond those readily available to modify on the add-on configuration pane. Please share your complete configuration file (redact the sensitive/private information) and the automation commands that you’re using to trigger the optimization task.

I’m betting that this is the problem:

I removed the lat and lon from the emhass config and picked it up from HA and I am seeing the same issue.

You’re adding some parameters that don’t need to be there.

Hey David, it is working now. It looks like it was a temporary glitch in the data from Clear Outside. In today’s data from there I see no “-” in any table cells. Thanks for your help.

You can change the port here on the configuration pane:
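
For the standalone Docker container mentioned earlier in the thread, the equivalent is remapping the host port when starting the container. A quick sketch (the image name is a placeholder, use whatever image/tag you pulled, and keep your usual volume mounts):

docker run -d --restart always -p 5001:5000 <your-emhass-image>

The container side stays on 5000; any free host port will do on the left side of the mapping.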

Hi,

after I changed the name of the load sensor without the deferrable loads, EMHASS changed its date to 27.04.2022 and is no longer accepting day-ahead forecast triggers:

Log from Node-Red, which triggers the forecast:

 payload: 'HTTP/1.1 500 INTERNAL SERVER ERROR\r\n' +
    'Content-Length: 265\r\n' +
    'Content-Type: text/html; charset=utf-8\r\n' +
    'Date: Tue, 04 Oct 2022 12:12:31 GMT\r\n' +
    'Server: waitress\r\n' +
    '\r\n' +
    '<!doctype html>\n' +
    '<html lang=en>\n' +
    '<title>500 Internal Server Error</title>\n' +
    '<h1>Internal Server Error</h1>\n' +
    '<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n',

EMHASS:

[2022-10-04 14:12:31,761] INFO in command_line: Setting up needed data
[2022-10-04 14:12:31,867] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-10-04 14:12:34,419] INFO in forecast: Retrieving data from hass for load forecast using method = naive
[2022-10-04 14:12:34,421] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-10-04 14:12:34,478] ERROR in retrieve_hass: The retrieved JSON is empty, check that correct day or variable names are passed
[2022-10-04 14:12:34,479] ERROR in retrieve_hass: Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
[2022-10-04 14:12:34,480] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 78, in set_input_data_dict
    P_load_forecast = fcst.get_load_forecast(method=optim_conf['load_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 495, in get_load_forecast
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 130, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
UnboundLocalError: local variable 'df_day' referenced before assignment

That’s the function:

{
  payload: `curl -i -H "Content-Type: application/json" -X POST -d '{"prod_price_forecast":[0.22381,0.22381,0.2252,0.2252,0.24414,0.24414,0.3288,0.3288,0.37426,0.37426,0.4733,0.4733,0.33011999999999997,0.33011999999999997,0.24231000000000003,0.24231000000000003,0.31518999999999997,0.31518999999999997,0.27918,0.27918,0.17394,0.17394,0.09236,0.09236,0.09134,0.09134,0.07735,0.07735,0.1325,0.1325,0.1852,0.1852,0.22561,0.22561,0.25519,0.25519,0.25218,0.25218,0.22204000000000002,0.22204000000000002,0.18003000000000002,0.18003000000000002,0.17619,0.17619,0.2028,0.2028,0.19917,0.19917]}' http://localhost:5000/action/dayahead-optim`,
  solcast_rooftop_id: '889c-2e66-aebc-99b8',
  solcast_api_key: 'SECRET',
  _msgid: 'SECRET'
}

Any idea why this happens?

Thanks

Ah ok sure, I have to wait till the new sensor is at least 2 days old. But is this also the reason for the completely wrong date?

Yes, totally normal, those are just the dates of the initial test results that are used to show an initial glimpse of what an optimization result should look like. If you find that this is misleading or that this creates confusion then I can contemplate eliminating those initial results. What do you think?

I’ve just released a new version of the core package v0.3.20 and a new version of the add-on v0.2.22: Release EMHASS add-on v0.2.22 · davidusb-geek/emhass-add-on · GitHub

This is a new release with a bunch of improvements, most notably added direct support for the Solar.Forecast method to forecast PV power.

Also:

  • Added more detailed examples to the forecast module documentation.
  • Improved handling of datetime indexes in DataFrames in the forecast module.
  • Added warning messages if passed list values contain non-numeric items.
  • Added missing unit tests for the forecast module, mocking the requests.get dependency with MagicMock.
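
If you want to try the new Solar.Forecast support, the only change should be the forecast method parameter. A minimal sketch, assuming the same weather_forecast_method key used for the other methods (check the documentation for the exact spelling of the value):

weather_forecast_method: 'solar.forecast'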

The Docker images to update the add-on should be available soon.

Cheers!

Hi David,

I’m planning to use EMHASS in our future solar panel and energy storage installation in our house. I’ve been attempting to set up the planned installation in the configuration tab of your add-on, but I’m struggling a bit, to say the least.

Our planned installation has 3 roofs (~east, ~west, ~south) with a total of 51 Solar-fabrik S3-N 375 Wp panels (Solar-fabrik S3-N 375Wp glas-teknik - Energiteknik i Kungälv AB).
I will use an Energyhub from Ferroamp. The roof facing east will have 27 panels, the one facing west 16, and the one facing south 8. The roof facing east will have two SSOs (Solar String Optimizers) and the other two will have one each.

Below is the configuration for EMHASS. I could not find the panels or the Energyhub. For the panels I believe I’ve found something that is close enough. For the inverter I’m a bit lost on what to choose.

hass_url: empty
long_lived_token: empty
costfun: profit
optimization_time_step: 60
historic_days_to_retrieve: 2
method_ts_round: nearest
set_total_pv_sell: false
lp_solver: GLPK_CMD
lp_solver_path: empty
sensor_power_photovoltaics: sensor.power_production_address
sensor_power_load_no_var_loads: sensor.power_address
number_of_deferrable_loads: 1
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 3000
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 4
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "02:54"
  - peak_hours_periods_start_hours: "17:24"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "15:24"
  - peak_hours_periods_end_hours: "20:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 600
load_offpeak_hours_cost: 300
photovoltaic_production_sell_price: 300
maximum_power_from_grid: 13800
list_pv_module_model:
  - pv_module_model: LG_Electronics_Inc__LG375N2T_A4
  - pv_module_model: LG_Electronics_Inc__LG375N2T_A4
  - pv_module_model: LG_Electronics_Inc__LG375N2T_A4
list_pv_inverter_model:
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_15_0_1_208_240__208V_
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_15_0_1_208_240__208V_
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_15_0_1_208_240__208V_
list_surface_tilt:
  - surface_tilt: 23
  - surface_tilt: 23
  - surface_tilt: 27
list_surface_azimuth:
  - surface_azimuth: 109
  - surface_azimuth: 289
  - surface_azimuth: 199
list_modules_per_string:
  - modules_per_string: 27
  - modules_per_string: 16
  - modules_per_string: 8
list_strings_per_inverter:
  - strings_per_inverter: 1
  - strings_per_inverter: 1
  - strings_per_inverter: 1
set_use_battery: true
battery_discharge_power_max: 8200
battery_charge_power_max: 6600
battery_discharge_efficiency: 0.96
battery_charge_efficiency: 0.96
battery_nominal_energy_capacity: 14200
battery_minimum_state_of_charge: 0.1
battery_maximum_state_of_charge: 0.9
battery_target_state_of_charge: 0.6
web_ui_url: 0.0.0.0

When I trigger a day-ahead optimization I see the following log traces:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
[2022-10-07 00:21:11,486] INFO in web_server: Launching the emhass webserver at: http://0.0.0.0:5000
[2022-10-07 00:21:11,487] INFO in web_server: Home Assistant data fetch will be performed using url: http://supervisor/core/api
[2022-10-07 00:21:11,487] INFO in web_server: The base path is: /usr/src
[2022-10-07 00:21:11,491] INFO in web_server: Using core emhass version: 0.3.20
[2022-10-07 00:21:12,414] INFO in command_line: Setting up needed data
[2022-10-07 00:21:12,499] WARNING in utils: There are non numeric values on the passed data, check for missing values (nans, null, etc)
[2022-10-07 00:21:12,500] WARNING in utils: There are non numeric values on the passed data, check for missing values (nans, null, etc)
[2022-10-07 00:21:12,506] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-10-07 00:21:15,448] INFO in forecast: Retrieving data from hass for load forecast using method = naive
[2022-10-07 00:21:15,451] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-10-07 00:21:18,952] INFO in web_server:  >> Performing dayahead optimization...
[2022-10-07 00:21:18,952] INFO in command_line: Performing day-ahead forecast optimization
[2022-10-07 00:21:18,977] INFO in optimization: Perform optimization for the day-ahead
[2022-10-07 00:21:19,380] INFO in optimization: Status: Optimal
[2022-10-07 00:21:19,381] INFO in optimization: Total value of the Cost function = 313.78

I see a warning that I don’t know what to do with:

[2022-10-07 00:21:12,499] WARNING in utils: There are non numeric values on the passed data, check for missing values (nans, null, etc)
[2022-10-07 00:21:12,500] WARNING in utils: There are non numeric values on the passed data, check for missing values (nans, null, etc)

Have I configured EMHASS correctly, or what is causing the warning?
I’m also wondering why I can’t see a difference in the PV forecast when I change the azimuth angle of the panels.

I’m hoping you can help out, because I really want to use this add-on to optimize the usage of our future investment.

Warmly, Per

Just check the passed data for non-numeric values. You are passing data that contains NaN or null values. The NaNs are handled in the code and filled with valid data, so the optimization can run without problems. However, you should still check your data integrity, hence the warning message.
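
If you want to dig into where they come from, one quick check is to export the sensor history (e.g. to CSV) and scan it with pandas. A rough sketch with a hypothetical file and column name, adjust to your export:

import pandas as pd

# hypothetical export of the load sensor history
df = pd.read_csv("sensor_power_load_history.csv")

# anything that cannot be parsed as a number ('unknown', 'unavailable', '', ...) becomes NaN
values = pd.to_numeric(df["state"], errors="coerce")
print(f"{values.isna().sum()} non-numeric rows out of {len(values)}")
print(df.loc[values.isna(), "state"].unique())  # show the offending raw states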

For the inverter model just pick one with the same nominal power as yours. That will be enough.

Hi @davidusb!

I’m trying to create a difference in price between load_cost_forecast and prod_price_forecast to account for the costs associated with transportation of the electricity by our utility supplier Ellevio when I purchase and sell electricity. I want to add 0.75 SEK to load_cost_forecast and 0.631 SEK to prod_price_forecast.

dayahead_optim:
  'curl -i -H ''Content-Type: application/json'' -X POST -d ''{"load_cost_forecast":{{(
  (state_attr(''sensor.tibber_prices'', ''tomorrow'')|map(attribute=''total'')|list)[:24])
  }},"prod_price_forecast":{{(
  (state_attr(''sensor.tibber_prices'', ''tomorrow'')|map(attribute=''total'')|list)[:24])}}}'' http://localhost:5000/action/dayahead-optim'

Lists do not appear to support just adding a float to each element. I’m struggling with the syntax and need some help.
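
One approach that might work (sketched below, untested) is building the adjusted list with a Jinja namespace loop before splicing it into the curl payload, e.g. for the load side with the 0.75 SEK fee:

{% set ns = namespace(costs=[]) %}
{% for p in (state_attr('sensor.tibber_prices', 'tomorrow') | map(attribute='total') | list)[:24] %}
  {% set ns.costs = ns.costs + [ (p | float) + 0.75 ] %}
{% endfor %}
{{ ns.costs }}

The same pattern with 0.631 would give the production price list.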

The Tibber prices sensor is defined as follows:

- platform: rest
  name: Tibber prices
  resource: https://api.tibber.com/v1-beta/gql
  method: POST
  scan_interval: 60
  payload: '{ "query": "{ viewer { homes { currentSubscription { priceInfo { today { total startsAt } tomorrow { total startsAt }}}}}}" }'
  json_attributes_path: "$.data.viewer.homes[0].currentSubscription.priceInfo"
  json_attributes:
    - today
    - tomorrow
  value_template: Ok
  headers:
    Authorization: "personal_token"
    Content-Type: application/json
    User-Agent: REST

Cheers, Per

I’m trying to use EMHASS to start and stop the dishwasher, which takes five hours to complete.
I use this config for the dishwasher:

  • nominal_power_of_deferrable_loads: 1500

  • operating_hours_of_each_deferrable_load: 5
    In the documentation it says this parameter is the total number of hours that each deferrable load should operate.

The EMHASS chart gives me schedules with less than 5 hours of running time, but my dishwasher needs the full 5 hours to finish. How can I get a schedule with 5 hours (or more) of running time?

Have a look at the set_def_constant variable.

If you set it to True, it should schedule a continuous block of 5 hours for you.

set_def_constant: Define if we should set each deferrable load as a constant fixed value variable with just one startup for each optimization task. For example:

  • False
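
For reference, in the core config_emhass.yaml it would look something like this, with one entry per deferrable load (the key names below follow the core docs and may differ from the add-on’s, so double-check against your version):

treat_def_as_semi_cont: [True]
set_def_constant: [True]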

Thank you for your help.
I am using the EMHASS add-on and in its config I do not have “set_def_constant”. But I already have treat_deferrable_load_as_semi_cont set to True. Maybe these two are the same?

  • treat_deferrable_load_as_semi_cont: Define if we should treat each deferrable load as a semi-continuous variable. Semi-continuous variables are variables that can take either their nominal value or zero.
