EMHASS: An Energy Management for Home Assistant

The perfect optimization is now running without error. It was solved when I waited 2 days (historic_days_to_retrieve). The day-ahead optimization is still giving an error, but I hope it will be solved after I wait a few hours.

[2022-07-07 16:04:14,019] INFO in web_server: EMHASS server online, serving index.html...
[2022-07-07 16:04:16,751] INFO in command_line: Setting up needed data
[2022-07-07 16:04:16,755] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-07-07 16:04:17,091] INFO in web_server:  >> Performing perfect optimization...
[2022-07-07 16:04:17,091] INFO in command_line: Performing perfect forecast optimization
[2022-07-07 16:04:17,093] INFO in optimization: Perform optimization for perfect forecast scenario
[2022-07-07 16:04:17,094] INFO in optimization: Solving for day: 5-7-2022
[2022-07-07 16:04:17,162] INFO in optimization: Status: Optimal
[2022-07-07 16:04:17,163] INFO in optimization: Total value of the Cost function = -2.31
[2022-07-07 16:04:17,171] INFO in optimization: Solving for day: 6-7-2022
[2022-07-07 16:04:17,222] INFO in optimization: Status: Optimal
[2022-07-07 16:04:17,222] INFO in optimization: Total value of the Cost function = -4.5
/usr/local/lib/python3.9/dist-packages/emhass/web_server.py:46: FutureWarning:
Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError.  Select only valid columns before calling the reduction.

Ok, nice that the perfect optimization is running; indeed, historic_days_to_retrieve has to match the data available in your HA database.
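For reference, a rough sketch of the idea (assuming the default HA recorder; option names as used by this add-on):

# configuration.yaml (Home Assistant)
recorder:
  purge_keep_days: 3            # keep at least as many days as EMHASS will request

# EMHASS add-on configuration (excerpt)
historic_days_to_retrieve: 2    # must not exceed the history actually recorded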
What is the error on the day-ahead optimization?

When I push the “Day-Ahead Optimization” button on the web page I get this: “ERROR in retrieve_hass”. But when I push the “Perfect Optimization” button I do not get the error.

Home Assistant 2022.7.0
Supervisor 2022.07.0
Operating System 8.2
Frontend 20220706.0 - latest

[2022-07-08 02:59:14,383] INFO in web_server: EMHASS server online, serving index.html...
[2022-07-08 02:59:26,345] INFO in command_line: Setting up needed data
[2022-07-08 02:59:26,372] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-07-08 02:59:27,715] INFO in forecast: Retrieving data from hass for load forecast using method = naive
[2022-07-08 02:59:27,716] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-07-08 02:59:27,724] ERROR in retrieve_hass: The retrieved JSON is empty, check that correct day or variable names are passed
[2022-07-08 02:59:27,725] ERROR in retrieve_hass: Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
[2022-07-08 02:59:27,725] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 77, in set_input_data_dict
    P_load_forecast = fcst.get_load_forecast(method=optim_conf['load_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 476, in get_load_forecast
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 130, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
UnboundLocalError: local variable 'df_day' referenced before assignment
[2022-07-08 02:59:55,840] INFO in command_line: Setting up needed data
[2022-07-08 02:59:55,843] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-07-08 02:59:56,325] INFO in web_server:  >> Performing perfect optimization...
[2022-07-08 02:59:56,325] INFO in command_line: Performing perfect forecast optimization
[2022-07-08 02:59:56,328] INFO in optimization: Perform optimization for perfect forecast scenario
[2022-07-08 02:59:56,328] INFO in optimization: Solving for day: 6-7-2022
[2022-07-08 02:59:56,417] INFO in optimization: Status: Optimal
[2022-07-08 02:59:56,418] INFO in optimization: Total value of the Cost function = -8.68
[2022-07-08 02:59:56,423] INFO in optimization: Solving for day: 7-7-2022
[2022-07-08 02:59:56,483] INFO in optimization: Status: Optimal
[2022-07-08 02:59:56,484] INFO in optimization: Total value of the Cost function = -6.87
/usr/local/lib/python3.9/dist-packages/emhass/web_server.py:46: FutureWarning:
Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError.  Select only valid columns before calling the reduction.

In your configuration you need to check that both var_PV and var_load have the correct names, matching your HA sensors, and yes, that there is enough history data.
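As a sketch, if you use the add-on, these are the options that must point at real HA entity IDs with recorded history (the entity IDs below are just the ones used in this thread):

# EMHASS add-on configuration (excerpt)
sensor_power_photovoltaics: sensor.pv_power                       # your PV power sensor
sensor_power_load_no_var_loads: sensor.power_load_no_var_loads    # household load excluding the deferrable loads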

I suspect the HA recorder does not have enough data for “sensor.power_adresse_xx” (only one and a half days of recording). I will wait and report back later.

      - name: "Power_load_no_var_loads"
        unit_of_measurement: "W"
        device_class: power
        state: > 
          {% set powerload = states('sensor.power_adresse_xx') | float(default=0) %}
          {% set vvb = states('sensor.bryter_varmvannsbereder_electric_consumed_w') | float(default=0) %}
          {% set vaskemaskin = states('sensor.strombryter_vaskemaskin_electrical_measurement') | float(default=0) %}
          {% set value = ( powerload - vvb - vaskemaskin) | round(1,default=0) %}
          {{ value }}

The “Day-Ahead Optimization” is now working. The solution was to wait for 2 days of recording.

Next question:
I have 12 solar modules (REC_Solar_REC295TP2) on my garage roof with three APsystems QS1 microinverters (Discontinued APsystems QS1 - APsystems USA | The global leader in multi-platform MLPE technology) placed in the center, each with four modules attached. So I have 12 panels and 3 QS1 microinverters. The microinverters are connected to one 220 V power cable.

On the house roof I have 6 solar modules (REC_Solar_REC295TP2). Four of the panels are attached to one QS1 and two of them are attached to an APsystems YC600, so I have 6 panels and 1 QS1 microinverter + 1 YC600 (Discontinued APsystems YC600 - APsystems USA | The global leader in multi-platform MLPE technology) per string. The microinverters are connected to one 220 V power cable.

Can you verify whether my PV configuration is correct? I know the “list_surface_tilt” and “list_surface_azimuth” values are not correct yet; I must measure them outside first.

Great!
From your description your configuration seems good to me. What you should do is validate it by comparing the PV forecast that will be produced with your actual power production. If they match very well on a clear-sky day then it is validated.
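For reference, a two-orientation setup like yours could be sketched with the add-on's list options along these lines. This is only a sketch: the module and inverter names below are placeholders that must match keys that actually exist in the PV database, the tilt/azimuth values are the ones you still need to measure, and the microinverters are approximated as a single inverter per roof:

# EMHASS add-on configuration (excerpt) -- one list entry per roof/orientation
list_pv_module_model:
  - pv_module_model: REC_Solar_REC295TP2            # placeholder: use the exact database key
  - pv_module_model: REC_Solar_REC295TP2
list_pv_inverter_model:
  - pv_inverter_model: APsystems_QS1_placeholder    # placeholder: use the exact database key
  - pv_inverter_model: APsystems_YC600_placeholder  # placeholder: use the exact database key
list_surface_tilt:
  - surface_tilt: 25                                # garage roof, to be measured
  - surface_tilt: 25                                # house roof, to be measured
list_surface_azimuth:
  - surface_azimuth: 180                            # to be measured
  - surface_azimuth: 180                            # to be measured
list_modules_per_string:
  - modules_per_string: 12                          # 12 panels on the garage roof
  - modules_per_string: 6                           # 6 panels on the house roof
list_strings_per_inverter:
  - strings_per_inverter: 1
  - strings_per_inverter: 1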


I’m trying to configure emhass on my home assistant installation running on a Raspberry Pi 4 rpi4-64.
I get errors when using my selected pv_module and pv_inverter. Maybe I am not applying the right number of underscores (_)? Or is there a bug?

I figured out which PV module is most similar to my panels (as my panels from Bauer are not listed in the csv file), so I chose:
Vietnam Sunergy Joint Stock Company VSUN380-120BMH

Same for the inverter, but luckily the vendor exists; the only difference is that my inverter is a 4.6KTL:
Huawei Technologies Co Ltd : SUN2000-5KTL-USL0 [240V]

When applying the config, starting EMHASS and launching the day-ahead optimization from the web UI, I get the following log entries:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
[2022-07-27 23:23:23,463] INFO in web_server: Launching the emhass webserver at: http://0.0.0.0:5000
[2022-07-27 23:23:23,464] INFO in web_server: Home Assistant data fetch will be performed using url: http://supervisor/core/api
[2022-07-27 23:23:23,466] INFO in web_server: The base path is: /usr/src
[2022-07-27 23:23:23,474] INFO in web_server: Using core emhass version: 0.3.17
[2022-07-27 23:23:32,087] INFO in web_server: EMHASS server online, serving index.html...
[2022-07-27 23:24:21,390] INFO in command_line: Setting up needed data
[2022-07-27 23:24:21,479] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-07-27 23:24:24,611] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/root/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3621, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Vietnam_Sunergy_Joint_Stock_Company_VSUN380_120BMH'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 76, in set_input_data_dict
    P_PV_forecast = fcst.get_power_from_weather(df_weather)
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 317, in get_power_from_weather
    module = cec_modules[self.plant_conf['module_model'][i]]
  File "/root/.local/lib/python3.9/site-packages/pandas/core/frame.py", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/root/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3623, in get_loc
    raise KeyError(key) from err
KeyError: 'Vietnam_Sunergy_Joint_Stock_Company_VSUN380_120BMH'

When I replace the PV module name in the config with the documentation example:
CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
I get this:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
[2022-07-27 23:34:08,710] INFO in web_server: Launching the emhass webserver at: http://0.0.0.0:5000
[2022-07-27 23:34:08,710] INFO in web_server: Home Assistant data fetch will be performed using url: http://supervisor/core/api
[2022-07-27 23:34:08,711] INFO in web_server: The base path is: /usr/src
[2022-07-27 23:34:08,718] INFO in web_server: Using core emhass version: 0.3.17
[2022-07-27 23:34:21,858] INFO in web_server: EMHASS server online, serving index.html...
[2022-07-27 23:34:33,264] INFO in command_line: Setting up needed data
[2022-07-27 23:34:33,353] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-07-27 23:34:35,533] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/root/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3621, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Huawei_Technologies_Co_Ltd___SUN2000_5KTL_USL0__240V_'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 76, in set_input_data_dict
    P_PV_forecast = fcst.get_power_from_weather(df_weather)
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 318, in get_power_from_weather
    inverter = cec_inverters[self.plant_conf['inverter_model'][i]]
  File "/root/.local/lib/python3.9/site-packages/pandas/core/frame.py", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/root/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3623, in get_loc
    raise KeyError(key) from err
KeyError: 'Huawei_Technologies_Co_Ltd___SUN2000_5KTL_USL0__240V_'

Hi, I’ve just answered you on github: https://github.com/davidusb-geek/emhass-add-on/issues/23

Everything should be fine, but you’ll need to find a module name from the database. The inverter name does exist in the database.

Thanks @davidusb , I’ll respond on github

Is there a way to integrate variable tariffs in EMHASS?
In Denmark (and Scandinavia) we get hourly prices from Nord Pool (Market data | Nord Pool); they are published for the next 24 hours every day at 13:00 CET, showing the hourly rate. They can be used in Home Assistant with the Nordpool integration. With the current price fluctuations it would be really useful to have EMHASS determine usage and charging based on Nord Pool prices, or whatever entity sets the power prices in your area.

I do exactly this with my energy provider Amber, who updates the price forecasts every five minutes based on our national energy market.

EMHASS is ideal for this scenario as it will create an optimum schedule based on these variables, although your case of a single daily price update is simpler to implement. It will be good to have more users on board with variable pricing.

You can see some configuration snippets here: The forecast module — emhass 0.3.17 documentation

and my EMHASS forecast plan for today:


Hi, thanks to Mark for the fast answer.
So yes, this is totally possible. Take a look at the documentation link provided by Mark; there are several examples of how to do this. You basically just need to use templates to convert the prices provided by the Nordpool integration into a list of values, and then pass that list when calling the EMHASS optimization routine.
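As a minimal sketch of the call shape (the shell_command name and the hard-coded values here are only examples to show the JSON payload; in a real call the list must contain one value per optimization time step, e.g. 24 values for 1 h steps, and a full Nordpool template is shown further down in this thread):

shell_command:
  # hypothetical example: pass the load cost forecast directly in the POST body
  dayahead_optim_with_prices: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"load_cost_forecast\": [0.18, 0.17, 0.16, 0.21]}' http://localhost:5000/action/dayahead-optim"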

Thanks a lot for the replies.
This is great and just what I have been looking for. It will be fun to get this working :smile:

Hi David,

could you please add examples how to calculate alpha and beta values to documentation?

Thanks
Mirek

Hi Mirek,
I’ve just updated the documentation on this part here: The forecast module — emhass 0.3.17 documentation

I hope it helps…


I am trying to pass the Nordpool cost prices raw_today + raw_tomorrow, but unit_load_cost does not get updated. Why doesn’t this work?

I use this template to build the list of cost price values:

'curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim'

and this shell_command to pass the list to emhass:

shell_command:
  load_cost_forecast: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim"

The shell_command debug says the command returned successful:

➜  config tail -100  home-assistant.log | grep "homeassistant.components.shell_command"
2022-08-25 19:08:30.940 DEBUG (MainThread) [homeassistant.components.shell_command] Stdout of command: `curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim`, return code: 0:
2022-08-25 19:08:30.941 DEBUG (MainThread) [homeassistant.components.shell_command] Stderr of command: `curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim`, return code: 0:

I use the emhass addon version 0.2.19 with this config:

The emhass log:

[2022-08-25 19:03:08,159] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-25 19:08:49,585] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-25 19:09:29,125] INFO in command_line: Setting up needed data
[2022-08-25 19:09:29,368] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-08-25 19:09:32,306] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-25 19:09:35,093] INFO in forecast: Retrieving data from hass for load forecast using method = naive
[2022-08-25 19:09:35,096] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-08-25 19:09:58,215] INFO in web_server:  >> Performing dayahead optimization...
[2022-08-25 19:09:58,216] INFO in command_line: Performing day-ahead forecast optimization
[2022-08-25 19:09:58,227] INFO in optimization: Perform optimization for the day-ahead
[2022-08-25 19:09:58,823] INFO in optimization: Status: Optimal
[2022-08-25 19:09:58,825] INFO in optimization: Total value of the Cost function = -3.61
/usr/local/lib/python3.9/dist-packages/emhass/web_server.py:46: FutureWarning:
Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError.  Select only valid columns before calling the reduction.
[2022-08-25 19:19:45,309] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-25 19:29:24,315] INFO in web_server: EMHASS server online, serving index.html...

Hi, I just can’t see what is wrong here. This works fine for me, and there is a specific unit test in the code for exactly this case; everything seems fine.

What is the Home Assistant log saying when you execute that shell command?

Please open a github issue to follow this more in detail there.


@KasperEdw
I solved it! I found out I had made two errors.

The first was passing the wrong amount of data in the list. Nordpool publishes prices for every hour, so the list must have 24 data points, not 48.

You need to be careful here to send the correct amount of data in this list, i.e. the correct length. For example, if the data time step is defined as 1 h and you are performing a day-ahead optimization, then the list length should be 24 data points.

In the EMHASS configuration you must also set optimization_time_step to 60 (minutes).
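As an excerpt of the add-on configuration, that setting looks like this:

# EMHASS add-on configuration (excerpt)
optimization_time_step: 60    # minutes; with 1 h steps the day-ahead lists need 24 values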

The price data must also be for the next day, and the price data must be in euros; when you set up the Nordpool integration you can choose EUR as the currency.
Update: You can use whatever currency you want as long as you use the same currency everywhere.

The second was using a template with errors. My template had the parentheses and curly brackets wrong. After fixing the template, the shell_command worked and the passing of load_cost_forecast and prod_price_forecast was successful.

Here is the correct template for passing forecast data from Nordpool.

shell_command:
  publish_data: "curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/publish-data"
  
  post_nordpool_forecast: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"load_cost_forecast\":{{(
        (state_attr('sensor.nordpool_euro', 'raw_tomorrow')|map(attribute='value')|list)[:24])
        }},\"prod_price_forecast\":{{(
        (state_attr('sensor.nordpool_euro', 'raw_tomorrow')|map(attribute='value')|list)[:24])}}}' http://localhost:5000/action/dayahead-optim"
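If you want to run this automatically once the prices for tomorrow are available (Nord Pool publishes them around 13:00 CET, as mentioned earlier in the thread), an automation along these lines could work. The trigger time and alias are just assumptions to adapt, and raw_tomorrow must already be populated when it fires:

automation:
  - alias: "EMHASS day-ahead optimization with Nordpool prices"
    trigger:
      - platform: time
        at: "13:30:00"                                   # after tomorrow's prices are published
    action:
      - service: shell_command.post_nordpool_forecast    # run the day-ahead optimization with the price lists
      - service: shell_command.publish_data              # then publish the results back to HA sensors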


Hope it helps others


Hi guys! I am struggling with the initial config. As long as I leave hass_url at the default “empty” I get the EMHASS UI running but without any data from HA. With hass_url changed I get this error message:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 241, in <module>
    config_hass = response.json()
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

With this config:

web_ui_url: 0.0.0.0
hass_url: https://xxx:8123/
long_lived_token: empty
costfun: self-consumption
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: nearest
set_total_pv_sell: false
lp_solver: PULP_CBC_CMD
lp_solver_path: empty
sensor_power_photovoltaics: sensor.pv_power
sensor_power_load_no_var_loads: sensor.house_consumption
number_of_deferrable_loads: 2
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 3000
  - nominal_power_of_deferrable_loads: 750
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 5
  - operating_hours_of_each_deferrable_load: 8
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "02:54"
  - peak_hours_periods_start_hours: "17:24"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "15:24"
  - peak_hours_periods_end_hours: "20:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 22080
list_pv_module_model:
  - pv_module_model: IBEX-132MHC-EiGER-495-500
list_pv_inverter_model:
  - pv_inverter_model: GoodWe_10K_ET_Plus+
list_surface_tilt:
  - surface_tilt: 25
list_surface_azimuth:
  - surface_azimuth: 205
list_modules_per_string:
  - modules_per_string: 6
list_strings_per_inverter:
  - strings_per_inverter: 2
set_use_battery: false
battery_discharge_power_max: 6390
battery_charge_power_max: 6390
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 10668
battery_minimum_state_of_charge: 0.2
battery_maximum_state_of_charge: 1
battery_target_state_of_charge: 0.6

Could anyone point me in the right direction, please?