EMHASS add-on: An energy management optimization add-on for Home Assistant OS and supervised

Hi guys,
I am still struggling to make the add-on work. It seems that I get no data from Home Assistant.
It is probably a noob mistake, but I couldn't find anything in the manual or in this forum about it.

Does anyone have a hint for me?

Best
T

2024-01-30 09:26:49,271 - web_server - ERROR - Variable sensor.growatt_total_enegry_usage_actual_2 was not found. This is typically because no data could be retrieved from Home Assistant

2024-01-30 09:26:49,277 - web_server - ERROR - Exception on /action/dayahead-optim [POST]

Please share your configuration so we can see if there is anything wrong. People typically get these errors when setting the data fetch URL.

Sure, it is pretty standard. I haven't changed much yet:

costfun: self-consumption
logging_level: INFO
set_total_pv_sell: false
set_nocharge_from_grid: true
set_nodischarge_to_grid: true
sensor_power_photovoltaics: sensor.growatt_pv_energy_calculated_total_actual
sensor_power_load_no_var_loads: sensor.growatt_total_enegry_usage_actual_2
number_of_deferrable_loads: 2
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 3600
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 8
list_start_timesteps_of_each_deferrable_load:
  - start_timesteps_of_each_deferrable_load: 0
  - start_timesteps_of_each_deferrable_load: 0
list_end_timesteps_of_each_deferrable_load:
  - end_timesteps_of_each_deferrable_load: 0
  - end_timesteps_of_each_deferrable_load: 0
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "05:54"
  - peak_hours_periods_start_hours: "17:54"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "09:24"
  - peak_hours_periods_end_hours: "21:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: true
list_set_deferrable_load_single_constant:
  - set_deferrable_load_single_constant: false
  - set_deferrable_load_single_constant: false
load_peak_hours_cost: 0.3507
load_offpeak_hours_cost: 0.2519
photovoltaic_production_sell_price: 0.093
maximum_power_from_grid: 22000
list_pv_module_model:
  - pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
list_pv_inverter_model:
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
list_surface_tilt:
  - surface_tilt: 30
list_surface_azimuth:
  - surface_azimuth: 205
list_modules_per_string:
  - modules_per_string: 6
list_strings_per_inverter:
  - strings_per_inverter: 2
set_use_battery: true
battery_nominal_energy_capacity: 6500
hass_url: http://192.168.1.134:8123
long_lived_token: >-
  WASCHANGENDUPUPONPOSTINGSONOTWORTHTRYINGCI6IkpXVCJ9.eyJpc3MiOiIxZWNkOGMyYTNhNzA0Y2E1YWNiNjhkNDVlNTBiNmU1OSIsImlhdCI6MTcwNjYwMjg3OSwiZXhwIjoyMDIxOTYyODc5fQ._yldHu_yWEIBSIMzpugDsX-OSGNKcP6fOaPB4saaGCU
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: nearest
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
weight_battery_discharge: 1
weight_battery_charge: 1
load_forecast_method: naive
battery_discharge_power_max: 3650
battery_charge_power_max: 3650
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_minimum_state_of_charge: 0.3
battery_maximum_state_of_charge: 0.9
battery_target_state_of_charge: 0.6

This is the problem.
Set these to:

hass_url: empty
long_lived_token: empty

You would only need to set these if you are using the standalone Docker mode. They are not needed with the add-on on HA OS.

Thanks, that brings me to a different error :frowning:

2024-01-30 11:57:20,839 - web_server - INFO - Setting up needed data
2024-01-30 11:57:20,846 - web_server - INFO - Retrieving weather forecast data using method = scrapper
2024-01-30 11:57:25,495 - web_server - INFO - Retrieving data from hass for load forecast using method = mlforecaster
2024-01-30 11:57:25,522 - web_server - INFO - Retrieve hass get data method initiated...
2024-01-30 11:57:33,088 - web_server - ERROR - The ML forecaster file was not found, please run a model fit method before this predict method
2024-01-30 11:57:33,091 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 1463, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 872, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 870, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 855, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/web_server.py", line 50, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/command_line.py", line 91, in set_input_data_dict
    P_load_forecast = fcst.get_load_forecast(method=optim_conf['load_forecast_method'])
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/forecast.py", line 616, in get_load_forecast
    forecast_out = mlf.predict(data_last_window)
                   ^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'predict'

Before running with the mlforecaster you must train the model with a fit action.
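For reference, the fit can be triggered with a POST to the add-on. This is a hypothetical sketch: the parameter names come from my reading of the mlforecaster section of the EMHASS docs, and the sensor is the one from the config above, so adjust everything to your setup. Note that `days_to_retrieve` must cover enough history for training (the config above only keeps 2 days, which is likely too little):

```shell
# Sketch: train the ML load forecaster once, before any predict/optim call.
curl -i -H 'Content-Type: application/json' -X POST \
  -d '{"days_to_retrieve": 9, "model_type": "load_forecast", "var_model": "sensor.growatt_total_enegry_usage_actual_2", "sklearn_model": "KNeighborsRegressor", "num_lags": 48}' \
  http://localhost:5000/action/forecast-model-fit
```

Once the fit has produced a model file, a dayahead-optim call with load_forecast_method set to mlforecaster should be able to find it.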

Thanks, but I tried it differently as well:

2024-01-30 13:22:16,343 - web_server - WARNING - The data container dictionary is empty... Please launch an optimization task
2024-01-30 13:22:40,193 - web_server - INFO - Setting up needed data
2024-01-30 13:22:40,395 - web_server - INFO - Retrieve hass get data method initiated...
2024-01-30 13:22:47,940 - web_server - ERROR - Exception on /action/perfect-optim [POST]
Traceback (most recent call last):

OK, thank you guys. Started working after the last update :slight_smile:


My EMHASS plan looks good but when it comes time to publish the plan everything seems to be off by 30 minutes.

I am using optimization_time_step: 60 in the dayahead and MPC calls. Do I need to pass optimization_time_step: 60 to publish-data as well?

If not, where should I start to investigate what is going wrong?

What time do you publish and what rounding are you using?

Publishing every 5 minutes. Not sure what rounding you’re referring to?

Try "first" (the method_ts_round option).
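To expand on that: with hourly steps, "nearest" can snap the publish time forward by up to half a step, which looks exactly like a 30-minute offset, while "first" snaps to the start of the current timestep. A sketch of the relevant add-on options (values here are illustrative, taken from the config discussed above):

```yaml
optimization_time_step: 60
method_ts_round: first   # one of: nearest, first, last
```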

Hi,

Anyone else having this error?
Cannot infer dst time from 2024-04-07 02:00:00, try using the 'ambiguous' argument

I was using an old version (0.4, I think), so I tried to set up the EMHASS add-on, but I am having a similar issue:

2024-04-06 14:21:32,598 - web_server - INFO - EMHASS server online, serving index.html...
2024-04-06 14:21:33,297 - web_server - INFO - Passed runtime parameters: {'prod_price_forecast': [0.03, 0.02, 0.03, 0.02, 0.02, 0.05, 0.07, 0.08, 0.11, 0.12, 0.13, 0.11, 0.12, 0.12, 0.11, 0.11, 0.11, 0.11, 0.14, 0.12, 0.11, 0.14, 0.13, 0.11, 0.11, 0.11, 0.11, 0.09, 0.09, 0.08, 0.08, 0.06, 0.02, 0.01, 0.06, 0.07, 0.06, 0.05, 0.05, 0.06, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01], 'load_cost_forecast': [0.14, 0.13, 0.27, 0.27, 0.27, 0.3, 0.33, 0.33, 0.37, 0.37, 0.38, 0.37, 0.37, 0.37, 0.22, 0.22, 0.23, 0.22, 0.26, 0.23, 0.23, 0.26, 0.24, 0.23, 0.23, 0.23, 0.22, 0.2, 0.2, 0.2, 0.19, 0.17, 0.13, 0.12, 0.17, 0.18, 0.17, 0.16, 0.16, 0.16, 0.16, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12], 'load_power_forecast': [754, 1700, 1200, 900, 900, 800, 1400, 1400, 600, 500, 600, 900, 1200, 1300, 1200, 1300, 1300, 1000, 500, 500, 500, 400, 500, 500, 500, 500, 400, 400, 400, 400, 400, 400, 400, 400, 400, 400, 800, 1000, 2000, 1300, 900, 1600, 1500, 1700, 1900, 2000, 1800, 1900], 'pv_power_forecast': [2639, 1627, 1618, 1653, 1608, 1409, 1040, 592, 184, 52, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 104, 358, 832, 1147, 1396, 1573, 1731, 1896, 1983, 1992, 2052, 2162, 2264, 2352, 2438, 2490, 2292, 1885, 1399, 987, 461, 103, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'prediction_horizon': 48, 'alpha': 1, 'beta': 0, 'num_def_loads': 0, 'soc_init': 0.63, 'soc_final': 0.05}
2024-04-06 14:21:33,297 - web_server - INFO - >> Setting input data dict
2024-04-06 14:21:33,297 - web_server - INFO - Setting up needed data
2024-04-06 14:21:33,304 - web_server - ERROR - Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 1463, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 872, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 870, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 855, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/web_server.py", line 108, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/command_line.py", line 64, in set_input_data_dict
    fcst = Forecast(retrieve_hass_conf, optim_conf, plant_conf,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/forecast.py", line 164, in __init__
    freq=self.freq).round(self.freq, ambiguous='infer', nonexistent='shift_forward')
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/pandas/core/indexes/extension.py", line 98, in method
    result = attr(self._data, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/pandas/core/arrays/datetimelike.py", line 2026, in round
    return self._round(freq, RoundTo.NEAREST_HALF_EVEN, ambiguous, nonexistent)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/pandas/core/arrays/datetimelike.py", line 2002, in _round
    return result.tz_localize(
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/pandas/core/arrays/_mixins.py", line 86, in method
    return meth(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/pandas/core/arrays/datetimes.py", line 1040, in tz_localize
    new_dates = tzconversion.tz_localize_to_utc(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pandas/_libs/tslibs/tzconversion.pyx", line 322, in pandas._libs.tslibs.tzconversion.tz_localize_to_utc
  File "pandas/_libs/tslibs/tzconversion.pyx", line 637, in pandas._libs.tslibs.tzconversion._get_dst_hours
pytz.exceptions.AmbiguousTimeError: 2024-04-07 02:00:00
2024-04-06 14:21:33,330 - web_server - INFO - Passed runtime parameters: {}
2024-04-06 14:21:33,331 - web_server - INFO - >> Setting input data dict
2024-04-06 14:21:33,331 - web_server - INFO - Setting up needed data
2024-04-06 14:21:33,335 - web_server - ERROR - Exception on /action/publish-data [POST]
Traceback (most recent call last):

EDIT 2: Now seems to be working

2024-04-06 14:32:33,293 - web_server - INFO - Passed runtime parameters: {'prod_price_forecast': [0.02, 0.02, 0.02, 0.02, 0.05, 0.07, 0.08, 0.11, 0.12, 0.13, 0.11, 0.12, 0.12, 0.11, 0.11, 0.11, 0.11, 0.14, 0.12, 0.11, 0.14, 0.13, 0.11, 0.11, 0.11, 0.11, 0.09, 0.09, 0.08, 0.08, 0.06, 0.02, 0.01, 0.06, 0.07, 0.06, 0.05, 0.05, 0.06, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01], 'load_cost_forecast': [0.13, 0.27, 0.27, 0.27, 0.3, 0.33, 0.33, 0.37, 0.37, 0.38, 0.37, 0.37, 0.37, 0.22, 0.22, 0.23, 0.22, 0.26, 0.23, 0.23, 0.26, 0.24, 0.23, 0.23, 0.23, 0.22, 0.2, 0.2, 0.2, 0.19, 0.17, 0.13, 0.12, 0.17, 0.18, 0.17, 0.16, 0.16, 0.16, 0.16, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 0.12], 'load_power_forecast': [872, 1200, 900, 900, 800, 1400, 1400, 600, 500, 600, 900, 1200, 1300, 1200, 1300, 1300, 1000, 500, 500, 500, 400, 500, 500, 500, 500, 400, 400, 400, 400, 400, 400, 400, 400, 400, 400, 800, 1000, 2000, 1300, 900, 1600, 1500, 1700, 1900, 2000, 1800, 1900, 1300], 'pv_power_forecast': [2954, 1618, 1653, 1608, 1409, 1040, 592, 184, 52, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 104, 358, 832, 1147, 1396, 1573, 1731, 1896, 1983, 1992, 2052, 2162, 2264, 2352, 2438, 2490, 2292, 1885, 1399, 987, 461, 103, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'prediction_horizon': 48, 'alpha': 1, 'beta': 0, 'num_def_loads': 0, 'soc_init': 0.66, 'soc_final': 0.05}
2024-04-06 14:32:33,295 - web_server - INFO - >> Setting input data dict
2024-04-06 14:32:33,295 - web_server - INFO - Setting up needed data
2024-04-06 14:32:33,297 - web_server - INFO - Retrieve hass get data method initiated...
2024-04-06 14:32:36,342 - web_server - INFO - Retrieving weather forecast data using method = list
2024-04-06 14:32:36,352 - web_server - INFO - >> Performing naive MPC optimization...
2024-04-06 14:32:36,353 - web_server - INFO - Performing naive MPC optimization
2024-04-06 14:32:36,369 - web_server - INFO - Perform an iteration of a naive MPC controller
2024-04-06 14:32:36,409 - web_server - WARNING - Solver default unknown, using default
Welcome to the CBC MILP Solver
Version: 2.10.3
Build Date: Dec 15 2019

command line - /usr/local/lib/python3.11/dist-packages/pulp/solverdir/cbc/linux/64/cbc /tmp/7541742d607049e0967f6fb68c9c42b2-pulp.mps -max -timeMode elapsed -branch -printingOptions all -solution /tmp/7541742d607049e0967f6fb68c9c42b2-pulp.sol (default strategy 1)
At line 2 NAME MODEL
At line 3 ROWS
At line 342 COLUMNS
At line 5815 RHS
At line 6153 BOUNDS
At line 6538 ENDATA
Problem MODEL has 337 rows, 288 columns and 5184 elements
Coin0008I MODEL read with 0 errors
Option for timeMode changed from cpu to elapsed
Continuous objective value is -0.393301 - 0.00 seconds
Cgl0003I 0 fixed, 0 tightened bounds, 71 strengthened rows, 0 substitutions
Cgl0003I 0 fixed, 0 tightened bounds, 1 strengthened rows, 0 substitutions
Cgl0004I processed model has 332 rows, 288 columns (96 integer (96 of which binary)) and 5191 elements
Cbc0038I Initial state - 37 integers unsatisfied sum - 4.91781
Cbc0038I Pass 1: suminf. 4.04198 (36) obj. 1.00374 iterations 77
Cbc0038I Pass 2: suminf. 0.69500 (12) obj. 1.14495 iterations 56
Cbc0038I Solution found of 1.14495
Cbc0038I Relaxing continuous gives 0.954144
Cbc0038I Before mini branch and bound, 39 integers at bound fixed and 115 continuous
Cbc0038I Full problem 332 rows 288 columns, reduced to 21 rows 27 columns
Cbc0038I Mini branch and bound improved solution from 0.954144 to 0.393301 (0.03 seconds)
Cbc0038I Freeing continuous variables gives a solution of 0.393301
Cbc0038I After 0.03 seconds - Feasibility pump exiting with objective of 0.393301 - took 0.01 seconds
Cbc0012I Integer solution of 0.39330141 found by feasibility pump after 0 iterations and 0 nodes (0.03 seconds)
Cbc0001I Search completed - best objective 0.3933014127418696, took 0 iterations and 0 nodes (0.03 seconds)
Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost
Cuts at root node changed objective from 0.393301 to 0.393301
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value: -0.39330141
Enumerated nodes: 0
Total iterations: 0
Time (CPU seconds): 0.03
Time (Wallclock seconds): 0.04

Option for printingOptions changed from normal to all
Total time (CPU seconds): 0.03 (Wallclock seconds): 0.04

2024-04-06 14:32:36,472 - web_server - INFO - Status: Optimal
2024-04-06 14:32:36,472 - web_server - INFO - Total value of the Cost function = -0.39
2024-04-06 14:32:36,702 - web_server - INFO - Passed runtime parameters: {}
2024-04-06 14:32:36,702 - web_server - INFO - >> Setting input data dict
2024-04-06 14:32:36,702 - web_server - INFO - Setting up needed data
2024-04-06 14:32:36,704 - web_server - INFO - >> Publishing data...
2024-04-06 14:32:36,705 - web_server - INFO - Publishing data to HASS instance
2024-04-06 14:32:36,723 - web_server - INFO - Successfully posted to sensor.p_pv_forecast = 2954
2024-04-06 14:32:36,741 - web_server - INFO - Successfully posted to sensor.p_load_forecast = 872
2024-04-06 14:32:36,742 - web_server - ERROR - P_deferrable0 was not found in results DataFrame. Optimization task may need to be relaunched or it did not converge to a solution.
2024-04-06 14:32:36,758 - web_server - INFO - Successfully posted to sensor.p_batt_forecast = -4166.4
2024-04-06 14:32:36,781 - web_server - INFO - Successfully posted to sensor.soc_batt_forecast = 86.62
2024-04-06 14:32:36,794 - web_server - INFO - Successfully posted to sensor.p_grid_forecast = 2084.4
2024-04-06 14:32:36,808 - web_server - INFO - Successfully posted to sensor.total_cost_fun_value = -0.39
2024-04-06 14:32:36,816 - web_server - INFO - Successfully posted to sensor.optim_status = Optimal
2024-04-06 14:32:36,829 - web_server - INFO - Successfully posted to sensor.unit_load_cost = 0.13
2024-04-06 14:32:36,843 - web_server - INFO - Successfully posted to sensor.unit_prod_price = 0.02

Known bug. 0.8.5 might fix it; alternatively, you'll have to wait for 24 hours, I think.

See the DST issues in the CHANGELOG.md file.
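For the curious: the underlying issue is that when the clocks go back, one wall-clock hour occurs twice, so pandas cannot localize it without an `ambiguous` policy. A minimal stdlib sketch (using the Australia/Sydney zone as an example of a timezone where DST ended on the morning of 2024-04-07; the error message above is consistent with a zone like this):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

tz = ZoneInfo("Australia/Sydney")  # clocks went back 03:00 -> 02:00 on 2024-04-07

# 02:00 local time happened twice that morning; 'fold' picks which occurrence.
first_pass = datetime(2024, 4, 7, 2, 0, tzinfo=tz)           # fold=0: still DST
second_pass = datetime(2024, 4, 7, 2, 0, fold=1, tzinfo=tz)  # fold=1: after fallback

# Same wall-clock time, two different UTC offsets -> "ambiguous time"
print(first_pass.utcoffset())   # 11:00:00
print(second_pass.utcoffset())  # 10:00:00
```

Pandas raises `AmbiguousTimeError` when it hits such a timestamp and is not told which of the two occurrences to pick, which is exactly what the traceback shows.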


Warning: I am quite annoyed at the documentation at this point.

I am guessing that I am just stupid, but could someone please explain how I am supposed to define 4 × 430 W panels with a 1.8 kW peak inverter?

And yes, it's not an actual PV installation on a roof, but a "Balkonkraftwerk", a small PV system that you can put up on your balcony. It's plug and play. No strings etc., just one inverter with four MPPT inputs and 1.8 kWp.

Why do I even have to enter it like this:

- pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M

It is, to me at least, totally unclear how I am supposed to define my panel.
I would do it like this:

- pv_module_model: SUNPROPOWER_SPDG_xxx_-N108M10
Datasheet

Which, without even trying, I can say with 10000% accuracy won't work, because it's nonsense.

And don't get me started on my inverter. Do I just write:

- pv_inverter_model: HMS-1800-4T

I read the documentation, but nowhere does it state how to actually come up with this seemingly random string. Do I make stuff up? Does it get parsed in a specific way? No clue.

It would be 1000 times easier if I could just enter the kWp and orientation of the panels plus the inverter settings. I don't know why I need to enter the exact panel. Does it compute things to milliwatt accuracy? I just have four panels with 430 W peak each. I don't need complicated multi-string configurations with 20 inverters and 4000 different panels.

Again, maybe I am stupid and just need to enter 1__222_xxxx__430Wpx4Panels__1xInverter1.8kWp___NaN_X-X-Xxx to configure it.

I feel really stupid to not get how I am supposed to set this up.

Also, on a side note, what does

"list_start_timesteps_of_each_deferrable_load: The timestep as from which each deferrable load is allowed to operate. Operation before this timestep is not allowed."

even mean? Is it the earliest timestep at which the device may start? I don't know what this is supposed to mean.

I am really sorry, but I usually have no problem with documentation; this one reads like a cryptic puzzle.

Maybe as an idea: input fields for the usual physical characteristics of the panels and inverter, then press plus (or add a new tab) for another string, and have it add all the voltages, amps, and kWp together depending on the arrangement. Also, I don't want to calculate the thermal load on the aluminum frame and the resulting performance drop. While we're at it, why not add the ability to simulate induction losses in loose copper wire from old telephone lines in the ground.

What I am trying to say is: Simpler is usually better.

Again, for anyone reaching this part: I am really sorry for ranting this much, but I really need to know whether anyone thinks this is a good way of configuring things and finds the documentation clearly written. I wanted some actual human feedback, because the usual solution of just throwing it at some GPT will definitely not work with this level of cryptic reasoning required.

There is always room to do things better, but you should keep in mind that this project is not something commercial and has grown exponentially from the initial build. David has actually done a terrific job in making this tool available to the community and developing it in what I can only assume is his spare time, with the help of just a few great contributors and experienced users available to help and guide EMHASS newbies.
I can share my personal experience: it took me at least a month before I could implement something; I had to read a lot on the forum (here as well) and deeply study the documentation, which has improved a lot over time.
If you think something should be changed or done differently, David is always open to contributions (to the code, to the documentation, …) on the GitHub page of the project.

Coming to your question about why you have to indicate the PV panels and the inverter in that funny way: at the beginning, the tool made use of a database developed by a third party, and that was how the devices were recorded. Each string corresponds to a specific model, which in turn corresponds to specific electrical parameters.
Indicating your panel and inverter models is needed if you want to use the scrapper method, because it converts the solar forecast into the PV production [W] that EMHASS needs to predict and optimize your battery usage.
In addition, as the third-party DB was pretty outdated, David kindly implemented a newer, expanded database with more models; to make our lives easier, he also released a webapp where you can filter the results and look for your model or the one closest to yours (https://emhass-pvlib-database.streamlit.app/). [doc reference: https://emhass.readthedocs.io/en/latest/forecasts.html]

So do not give up; it's not something you should expect to have up and running in minutes, or even days. If you have doubts, post a message and somebody will try to help :wink:


I have to be honest. I didn’t expect a reply (and definitely not this fast).
And since I don’t tend to log on often, this would’ve been buried in my messages at some point.

I have looked at your references as well as the webapp. Unfortunately, it seems the manufacturer is not included in the DB, which isn't too bad now that I understand this system a bit better. I guess I will have to search for a panel with similar electrical characteristics.

I must admit that, at least to me, it would make more sense to skip this extra step and just ask the user for the required parameters directly. Most people who implement this solution will presumably know their panel name and be able to look up the parameters in the datasheet.

However, I also have to admit that I have neither the knowledge, time, nor endurance to actively develop such a system from scratch, likely, as you stated, in one's free time.

So I understand that if I want to use this software, I will have to adapt to the method chosen by the developer(s). I could propose ideas to improve the usability and intuitiveness of the software, but that would require a redesign and reevaluation of the entire project, which isn't feasible for one person who has spent just a few hours with this project so far. I have to live with what other people deem "a good solution", even if it's "not a good solution" to other people.

One shouldn't complain about something that's free, which I did out of frustration, primarily driven by tiredness (it's quite late in my timezone), though that still isn't an excuse, obviously.

I have a habit of implementing solutions in under a day for all my HA-related projects, which has worked decently well up until now. This, however, will likely require reading every single word of the docs (multiple times) to work out the meaning of certain configuration options.

I don't particularly like this process, as it reminds me of desktop vs. terminal: the terminal is more powerful but requires vastly more understanding than an intuitive GUI, which takes seconds to do 60-80% of what the terminal can.

Usually at this point I would readjust my needs and settle for an overpriced pre-made solution. But since no automation solution exists that ties in well with HA without costing six figures, this is the best option.
It will require much more time than initially anticipated, though. And considering the inaccurate, infrequently updated irradiance forecasts the providers grant us in their free tiers, the whole efficiency boost presumably becomes marginal at best.

Will I still do it? Absolutely; otherwise I wouldn't have started with HA in the first place, since the time sunk into it would have sufficed to toggle a light switch manually until I'm a thousand years old.

This seems like a curse, until it works, eventually (hopefully).

Thank you for your kind and fast reply.

The scrapper works quite well in any case. I think there is a limit on how frequently you can ask the website to generate the forecast for your location (from what I understood, clearoutside.com generates the forecast for your location upon request, but won't create a new one for an hour or so, I think to keep the load low), so this is a good starting point to get some forecast coming in and to see how EMHASS works. Some of us use Solcast (but new users have a strict limit on the number of API calls); others (like me) use forecast.solar.
Forecast.solar has a free tier that gives you today's and tomorrow's forecast, which in my case is enough, but you have to do some heavy Jinja templating to rearrange the forecast buckets if you plan to run the EMHASS computation more frequently than hourly.
If you don't want to do that, forecast.solar also has a cheap (14€/year) personal tier with 30-minute resolution and a 3-day forecast, but in any case I believe you'll have to play with the API calls to download the data, format it, and pass it to EMHASS (I'm not sure this service is fully integrated in EMHASS such that you can just pass the API key, as seems possible with Solcast).

So to start with, I suggest you try the scrapper method and see how it goes.
You can insert a random PV and inverter model at first and then refine it to match yours as closely as possible (you see the results after performing a search in the webapp, and you can compare the electrical parameters with your inverter's).

Is it possible with the add-on to implement the deferrable load thermal model? I find no input for it on the add-on configuration page.
At the moment I am working with these shell commands:
trigger_tibber_solcast: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"pv_power_forecast\": {{ [states('sensor.inverter_power_kombiniert')|int(0)] | list + (state_attr('sensor.solcast_pv_forecast_prognose_heute', 'detailedHourly') | map(attribute='pv_estimate') | map('multiply', 1000) | map('round', 0) | list + state_attr('sensor.solcast_pv_forecast_prognose_morgen', 'detailedHourly') | map(attribute='pv_estimate') | map('multiply', 1000) | map('round', 0) | list)[now().hour:][:23] }}, \"load_cost_forecast\": {{ states('sensor.tibber_forecast') }}}' http://localhost:5000/action/dayahead-optim"

trigger_tibber_solcast_mpc_zwei: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"pv_power_forecast\": {{ [states('sensor.inverter_power_kombiniert')|int(0)] | list + (state_attr('sensor.solcast_pv_forecast_prognose_heute', 'detailedHourly') | map(attribute='pv_estimate') | map('multiply', 1000) | map('round', 0) | list + state_attr('sensor.solcast_pv_forecast_prognose_morgen', 'detailedHourly') | map(attribute='pv_estimate') | map('multiply', 1000) | map('round', 0) | list)[now().hour:][:23] }}, \"load_cost_forecast\": {{ states('sensor.tibber_forecast') }}, \"alpha\": 1, \"beta\": 0, \"prediction_horizon\": 24, \"soc_init\": {{ states('sensor.battery_state_of_capacity')|float(0)/100 }}, \"soc_final\": 0.1}' http://localhost:5000/action/naive-mpc-optim"
Now I want to regulate my hot-water tank (200 L with a heating element), and next month I will get a heat pump, which I also want to optimize :slight_smile:
Any suggestion on how to do this with a shell command or the add-on configuration YAML?

The thermal model works well with the add-on, but it is not configurable via the GUI; you need to adapt the shell command.
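For anyone searching later, here is a rough sketch of what such an adapted shell command could look like. All key names (def_load_config, thermal_config, heating_rate, cooling_constant, overshoot_temperature, start_temperature, desired_temperatures, outdoor_temperature_forecast) are taken from my reading of the EMHASS thermal model documentation, and every value is a placeholder, so double-check against the docs for your installed version:

```yaml
shell_command:
  # Hypothetical example: deferrable load 0 is a plain load, load 1 uses the thermal model.
  trigger_thermal_optim: >-
    curl -i -H 'Content-Type: application/json' -X POST
    -d '{"def_load_config": [
           {},
           {"thermal_config": {
              "heating_rate": 5.0,
              "cooling_constant": 0.1,
              "overshoot_temperature": 24.0,
              "start_temperature": 20.0,
              "desired_temperatures": [21, 21, 21, 21]}}],
         "outdoor_temperature_forecast": [12, 11, 10, 10],
         "prediction_horizon": 4}'
    http://localhost:5000/action/naive-mpc-optim
```

In practice you would template the temperature lists with Jinja from your own sensors, the same way the Solcast/Tibber commands above build their forecast lists.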

I have started a GitHub discussion here if you would like to share your experiences: