EMHASS: An Energy Management for Home Assistant

I’m trying to configure EMHASS on my Home Assistant installation running on a Raspberry Pi 4 (rpi4-64).
I get errors when using my selected pv_module and pv_inverter. Maybe I’m not using the right number of underscores (_), or is there a bug?

I figured out which PV module is most similar to my panels (as my panels from Bauer are not listed in the CSV file), so I chose:
Vietnam Sunergy Joint Stock Company VSUN380-120BMH

Same for the inverter, but luckily the vendor exists; the only difference is that my inverter is a 4.6KTL:
Huawei Technologies Co Ltd : SUN2000-5KTL-USL0 [240V]

When applying the config, starting EMHASS and launching a day-ahead optimization from the web UI, I get the following log entries:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
[2022-07-27 23:23:23,463] INFO in web_server: Launching the emhass webserver at: http://0.0.0.0:5000
[2022-07-27 23:23:23,464] INFO in web_server: Home Assistant data fetch will be performed using url: http://supervisor/core/api
[2022-07-27 23:23:23,466] INFO in web_server: The base path is: /usr/src
[2022-07-27 23:23:23,474] INFO in web_server: Using core emhass version: 0.3.17
[2022-07-27 23:23:32,087] INFO in web_server: EMHASS server online, serving index.html...
[2022-07-27 23:24:21,390] INFO in command_line: Setting up needed data
[2022-07-27 23:24:21,479] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-07-27 23:24:24,611] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/root/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3621, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Vietnam_Sunergy_Joint_Stock_Company_VSUN380_120BMH'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 76, in set_input_data_dict
    P_PV_forecast = fcst.get_power_from_weather(df_weather)
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 317, in get_power_from_weather
    module = cec_modules[self.plant_conf['module_model'][i]]
  File "/root/.local/lib/python3.9/site-packages/pandas/core/frame.py", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/root/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3623, in get_loc
    raise KeyError(key) from err
KeyError: 'Vietnam_Sunergy_Joint_Stock_Company_VSUN380_120BMH'

When replacing the PV module name in the config with the example from the documentation:
CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
I get this:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
[2022-07-27 23:34:08,710] INFO in web_server: Launching the emhass webserver at: http://0.0.0.0:5000
[2022-07-27 23:34:08,710] INFO in web_server: Home Assistant data fetch will be performed using url: http://supervisor/core/api
[2022-07-27 23:34:08,711] INFO in web_server: The base path is: /usr/src
[2022-07-27 23:34:08,718] INFO in web_server: Using core emhass version: 0.3.17
[2022-07-27 23:34:21,858] INFO in web_server: EMHASS server online, serving index.html...
[2022-07-27 23:34:33,264] INFO in command_line: Setting up needed data
[2022-07-27 23:34:33,353] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-07-27 23:34:35,533] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/root/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3621, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Huawei_Technologies_Co_Ltd___SUN2000_5KTL_USL0__240V_'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 76, in set_input_data_dict
    P_PV_forecast = fcst.get_power_from_weather(df_weather)
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 318, in get_power_from_weather
    inverter = cec_inverters[self.plant_conf['inverter_model'][i]]
  File "/root/.local/lib/python3.9/site-packages/pandas/core/frame.py", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/root/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3623, in get_loc
    raise KeyError(key) from err
KeyError: 'Huawei_Technologies_Co_Ltd___SUN2000_5KTL_USL0__240V_'

Hi, I’ve just answered you on github: https://github.com/davidusb-geek/emhass-add-on/issues/23

Everything should be fine, but you’ll need to find a module name from the database. The inverter name does exist in the database.
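For example, a quick way to check which model names exist (just a sketch, adapt the search string to your own panels) is to grep the CSV of available module models:

curl -sL https://github.com/davidusb-geek/emhass-add-on/files/9234460/sam-library-cec-modules-2019-03-05.csv | grep -i "sunergy"

Any line that comes back is a model present in the database; replace its spaces and special characters with underscores to get the name that goes into the configuration.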

Thanks @davidusb, I’ll respond on GitHub.

Is there a way to integrate variable tariffs in EMHASS?
In Denmark (and Scandinavia) we get hourly prices from Nordpool (Market data | Nord Pool); they are published every day at 13:00 CET for the next 24 hours, showing the hourly rate. They can be used in Home Assistant with the Nordpool integration. With the current price fluctuations it would be really useful to have EMHASS determine usage and charging based on Nordpool prices, or whatever entity sets the power prices in your area.

I do exactly this with my energy provider Amber, who changes the price forecasts every five minutes based on our national energy market.

EMHASS is ideal for this scenario as it will create an optimum schedule based on these variables, although your case of a single daily price update is simpler to implement. It will be good to have more users on board with variable pricing.

You can see some configuration snippets here: The forecast module — emhass 0.3.17 documentation

and my EMHASS forecast plan for today:

1 Like

Hi, and thanks to Mark for the fast answer.
So yes, this is totally possible. Take a look at the documentation link provided by Mark; there are several examples of how to do this. You basically just need to use templates to convert the prices provided by the Nordpool integration into a list of values, and then pass that list when calling the EMHASS optimization routine.
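For example, something along these lines (a minimal sketch only; the entity id sensor.nordpool and the 24-point list length are assumptions, adapt them to your own Nordpool sensor and optimization_time_step):

shell_command:
  post_nordpool_prices: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"load_cost_forecast\": {{ (state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list)[:24] }}}' http://localhost:5000/action/dayahead-optim"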

Thanks a lot for the replies.
This is great and just what I have been looking for. It will be fun to get this working :smile:

Hi David,

could you please add examples of how to calculate the alpha and beta values to the documentation?

Thanks
Mirek

Hi Mirek,
I’ve just updated the documentation on this part here: The forecast module — emhass 0.3.17 documentation

I hope it helps…

2 Likes

I am trying to pass the Nordpool cost prices raw_today + raw_tomorrow, but unit_load_cost does not get updated. Why doesn’t this work?


I use this template to pass the cost price values as a list:

'curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim'

and this shell_command to pass the list to emhass:

shell_command:
  load_cost_forecast: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim"

The shell_command debug says the command returned successfully:

➜  config tail -100  home-assistant.log | grep "homeassistant.components.shell_command"
2022-08-25 19:08:30.940 DEBUG (MainThread) [homeassistant.components.shell_command] Stdout of command: `curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim`, return code: 0:
2022-08-25 19:08:30.941 DEBUG (MainThread) [homeassistant.components.shell_command] Stderr of command: `curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim`, return code: 0:

I use the emhass addon version 0.2.19 with this config:

The emhass log:

[2022-08-25 19:03:08,159] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-25 19:08:49,585] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-25 19:09:29,125] INFO in command_line: Setting up needed data
[2022-08-25 19:09:29,368] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-08-25 19:09:32,306] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-25 19:09:35,093] INFO in forecast: Retrieving data from hass for load forecast using method = naive
[2022-08-25 19:09:35,096] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-08-25 19:09:58,215] INFO in web_server:  >> Performing dayahead optimization...
[2022-08-25 19:09:58,216] INFO in command_line: Performing day-ahead forecast optimization
[2022-08-25 19:09:58,227] INFO in optimization: Perform optimization for the day-ahead
[2022-08-25 19:09:58,823] INFO in optimization: Status: Optimal
[2022-08-25 19:09:58,825] INFO in optimization: Total value of the Cost function = -3.61
/usr/local/lib/python3.9/dist-packages/emhass/web_server.py:46: FutureWarning:
Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError.  Select only valid columns before calling the reduction.
[2022-08-25 19:19:45,309] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-25 19:29:24,315] INFO in web_server: EMHASS server online, serving index.html...

Hi, I just can’t see what is wrong here. This is working fine for me, and there is a specific unit test in the code for this; everything seems fine.

What is the Home Assistant log saying when you execute that shell command?

Please open a github issue to follow this more in detail there.

1 Like

@KasperEdw
I solved it! I found out I had made two errors.

The first was passing the wrong amount of data in the list. Nordpool publishes prices for every hour, so the list must have 24 data points, not 48.

You need to be careful here to send the correct amount of data in this list, i.e. a list of the correct length. For example, if the data time step is defined as 1 h and you are performing a day-ahead optimization, then this list length should be 24 data points.

In the EMHASS configuration you must also set optimization_time_step to 60 (minutes).

The price data must also be for the next day, and the price data must be in Euro; when you set up the Nordpool integration you can choose Euro as the currency.
Update: you can actually use whatever currency you want, as long as you use the same currency everywhere.

The second was using a template with errors: my template had mismatched parentheses and curly brackets. After fixing the template, the shell_command worked and load_cost_forecast and prod_price_forecast were passed successfully.

Here is the correct template for passing forecast data from Nordpool.

shell_command:
  publish_data: "curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/publish-data"
  
  post_nordpool_forecast: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"load_cost_forecast\":{{(
        (state_attr('sensor.nordpool_euro', 'raw_tomorrow')|map(attribute='value')|list)[:24])
        }},\"prod_price_forecast\":{{(
        (state_attr('sensor.nordpool_euro', 'raw_tomorrow')|map(attribute='value')|list)[:24])}}}' http://localhost:5000/action/dayahead-optim"


Hope it helps others

5 Likes

Hi guys! I am struggling with the initial config. As long as I don’t change hass_url from the default “empty”, the EMHASS UI runs but without any data from HA. With hass_url changed, I get this error message:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 241, in <module>
    config_hass = response.json()
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

With this config:

web_ui_url: 0.0.0.0
hass_url: https://xxx:8123/
long_lived_token: empty
costfun: self-consumption
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: nearest
set_total_pv_sell: false
lp_solver: PULP_CBC_CMD
lp_solver_path: empty
sensor_power_photovoltaics: sensor.pv_power
sensor_power_load_no_var_loads: sensor.house_consumption
number_of_deferrable_loads: 2
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 3000
  - nominal_power_of_deferrable_loads: 750
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 5
  - operating_hours_of_each_deferrable_load: 8
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "02:54"
  - peak_hours_periods_start_hours: "17:24"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "15:24"
  - peak_hours_periods_end_hours: "20:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 22080
list_pv_module_model:
  - pv_module_model: IBEX-132MHC-EiGER-495-500
list_pv_inverter_model:
  - pv_inverter_model: GoodWe_10K_ET_Plus+
list_surface_tilt:
  - surface_tilt: 25
list_surface_azimuth:
  - surface_azimuth: 205
list_modules_per_string:
  - modules_per_string: 6
list_strings_per_inverter:
  - strings_per_inverter: 2
set_use_battery: false
battery_discharge_power_max: 6390
battery_charge_power_max: 6390
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 10668
battery_minimum_state_of_charge: 0.2
battery_maximum_state_of_charge: 1
battery_target_state_of_charge: 0.6

Could anyone point me in the right direction, please?

Hi, you need to define the long_lived_token parameter, otherwise EMHASS won’t be able to access your HA instance data.

Even if I use the long_lived_token, the error is the same; however, I do have a supervisor, so I left it empty. I am supposed to use the token under my admin profile, right?
Same for the URL, which should not be needed with the supervisor. Maybe I am just messing around with the wrong settings. With the default supervisor config like this:

web_ui_url: 0.0.0.0
hass_url: empty
long_lived_token: empty

I get no error, but the data are not fed into EMHASS at all.

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
[2022-09-04 09:24:00,823] INFO in web_server: Launching the emhass webserver at: http://0.0.0.0:5000
[2022-09-04 09:24:00,823] INFO in web_server: Home Assistant data fetch will be performed using url: http://supervisor/core/api
[2022-09-04 09:24:00,824] INFO in web_server: The base path is: /usr/src
[2022-09-04 09:24:00,828] INFO in web_server: Using core emhass version: 0.3.18
[2022-09-04 09:25:00,166] INFO in command_line: Setting up needed data
[2022-09-04 09:25:00,318] INFO in web_server:  >> Publishing data...
[2022-09-04 09:25:00,320] INFO in command_line: Publishing data to HASS instance
[2022-09-04 09:25:00,458] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:00,572] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:00,748] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:00,860] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:01,000] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:01,123] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:57,581] INFO in web_server: EMHASS server online, serving index.html... 

I can now access the EMHASS UI, but it never gets updated; the data keeps showing the “default” values.
What could be the problem then? Thanks for any suggestions.

Why do you say that no data is being fed to the add-on? You need to set up some automations to launch the optimization tasks; have you already done this? The graph on the web UI won’t update until you launch an optimization task.
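For reference, here is a minimal sketch of what such automations can look like. The shell_command names and trigger times are only examples (the endpoints are the same /action/dayahead-optim and /action/publish-data ones used earlier in this thread), so adapt them to your own setup:

shell_command:
  dayahead_optim: "curl -i -H 'Content-Type: application/json' -X POST -d '{}' http://localhost:5000/action/dayahead-optim"
  publish_data: "curl -i -H 'Content-Type: application/json' -X POST -d '{}' http://localhost:5000/action/publish-data"

automation:
  - alias: EMHASS day-ahead optimization
    trigger:
      platform: time
      at: "05:30:00"  # example time only
    action:
      service: shell_command.dayahead_optim
  - alias: EMHASS publish data
    trigger:
      platform: time_pattern
      minutes: "/5"  # publish the optimization results every 5 minutes
    action:
      service: shell_command.publish_data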

Thanks a lot for your help. I thought that no data was being fed, so I just checked the deferrable load entities from time to time to see whether they changed or not. I also changed some details in the configuration and later discovered that the UI got updated with all the data. The P_deferrable in the UI is spot on, but I struggle to get it into HA. The reason why I said that no data is being fed is this log:

[2022-09-04 20:25:00,808] INFO in web_server:  >> Publishing data...
[2022-09-04 20:25:00,809] INFO in command_line: Publishing data to HASS instance
[2022-09-04 20:25:00,904] INFO in retrieve_hass: Successfully posted to sensor.p_pv_forecast = 1391.13
[2022-09-04 20:25:00,946] INFO in retrieve_hass: Successfully posted to sensor.p_load_forecast = 169.94
[2022-09-04 20:25:00,989] INFO in retrieve_hass: Successfully posted to sensor.p_deferrable0 = 0.0
[2022-09-04 20:25:01,033] INFO in retrieve_hass: Successfully posted to sensor.p_deferrable1 = 0.0
[2022-09-04 20:25:01,077] INFO in retrieve_hass: Successfully posted to sensor.p_grid_forecast = -1221.19
[2022-09-04 20:25:01,118] INFO in retrieve_hass: Successfully posted to sensor.total_cost_fun_value = -0.61

It just keeps publishing the “default” values, instead of the ones I am seeing in the UI.



I might be missing something totally obvious to you, but my lack of experience and language skills make it quite challenging for me. Thanks for any suggestions.

Little edit:

Just noticed that the result table does not contain data unless I manually press Perfect Optimization. I do have these lines in /config/configuration.yaml


And these lines in /config/automations.yaml

Are those the automations you were talking about? I hope I did not miss any.

Hello all,
It seems to me that I have issues with the module and inverter names. If I understood the documentation correctly (chapter Configuration File), I need to find as close a model as possible in SAM/deploy/libraries at develop · NREL/SAM · GitHub and replace the special characters with underscores.

If I use the models from the example (‘CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M’ and ‘Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_’), everything works.

However, if I try to insert the inverter ‘GoodWe_Technologies_Co___Ltd___GW9600A_ES__240V_’ or the module ‘LONGi_Green_Energy_Technology_Co__Ltd__LR5_66HBD_490M’, I get the following error message in the log after clicking on Day-Ahead Optimization (see the end of my post).

Am I missing anything? Or do I need to pass the information about the module/inverter in use in some other way?

Thank you

[2022-09-08 22:02:24,044] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3621, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'GoodWe_Technologies_Co___Ltd___GW9600A_ES__240V_'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 76, in set_input_data_dict
    P_PV_forecast = fcst.get_power_from_weather(df_weather)
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 337, in get_power_from_weather
    inverter = cec_inverters[self.plant_conf['inverter_model'][i]]
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/frame.py", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3623, in get_loc
    raise KeyError(key) from err
KeyError: 'GoodWe_Technologies_Co___Ltd___GW9600A_ES__240V_'

1 Like

Hello,
The problem is that your module and inverter models are not found in the database. This is a recurrent problem.

You should check whether your models are available. If they are not available, solution (1) is to pick another model as close as possible to yours in terms of nominal power.
The available module models are listed here: https://github.com/davidusb-geek/emhass-add-on/files/9234460/sam-library-cec-modules-2019-03-05.csv
And the available inverter models are listed here: https://github.com/davidusb-geek/emhass-add-on/files/9532724/sam-library-cec-inverters-2019-03-05.csv

Solution (2) would be to use Solcast and pass that data directly to EMHASS as a list of values from a template. Take a look at the example here: The forecast module — emhass 0.3.18 documentation

I checked your module model and there are no 490 W models or even 500 W models in the database, so you are better off going with solution (2).
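As a sketch of what solution (2) can look like at runtime: the sensor name and attribute names below are placeholders for whatever your Solcast integration actually exposes, and you should check the units (the list is expected in Watts) and the list length against your optimization_time_step.

shell_command:
  post_solcast_forecast: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"pv_power_forecast\": {{ state_attr('sensor.solcast_forecast', 'forecasts') | map(attribute='pv_estimate') | list }}}' http://localhost:5000/action/dayahead-optim"

With the PV production passed as a list like this, EMHASS does not need to look up your module and inverter models in the database at all, which is the whole point of solution (2).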

Oh, now I can see the problem I am facing: the SAM library included in EMHASS is not up to date, judging by the date in the file name (“2019-03-05”). I checked the file and did not find the module and inverter models that I can see in the link the documentation refers to:

The complete list of supported modules and inverter models can be found here: pvlib.pvsystem.retrieve_sam — pvlib python 0.10.3 documentation

This link then refers to:

Files available at
SAM/deploy/libraries at develop · NREL/SAM · GitHub

These files are not the same. The inverter Daman is talking about is listed in the original SAM libraries, but not in the EMHASS one. So I just wanted to point out to others who use the link from the documentation: it may contain the models you are looking for, but they may not work.