EMHASS add-on: An energy management optimization add-on for Home Assistant OS and supervised

In the end it seems it’s something related to how the data is recorded/stored and not the amount and/or frequency.
I passed in a sensor that samples my real sensor (which pulls data every 2 seconds) at a 1-second frequency, and it worked.
Maybe it’s related to the fact that my real sensor is a RESTful one and sometimes the pull frequency is too high and HA can’t keep the pace, so the data points are not equally spaced, while in the sampled one this is automatically fixed by HA… who knows.
But for sure the problem is happening only with sensor_power_load_no_var_loads, so I would say that maybe, by chance, HA can manage this situation while this add-on can’t for this specific input.
Anyway, thanks for the patience and support.

I believe this ends my tests for this specific problem and provides a workaround if somebody else experiences the same.
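For anyone wanting to replicate it, here is a minimal sketch of a trigger-based template sensor that re-samples the source entity once per second (the entity and sensor names are placeholders, not my exact config):

template:
  - trigger:
      - platform: time_pattern
        seconds: "/1"  # fire every second so the recorded points are evenly spaced
    sensor:
      - name: "Load power resampled"
        unit_of_measurement: "W"
        device_class: power
        # copy the last known state of the real (RESTful) sensor
        state: "{{ states('sensor.consumption_w') | float(0) }}"

Then point sensor_power_load_no_var_loads at the new sensor.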


Hi,

Getting this issue when trying to install on an RPI4

23-08-15 08:21:42 WARNING (SyncWorker_7) [supervisor.addons.validate] Add-on have full device access, and selective device access in the configuration. Please report this to the maintainer of DeskPi Pro Active Cooling

As I don’t have DeskPi Pro Active Cooling installed I’m at a bit of a loss …

Sorted it.


So what was it? Just for future reference, for other people facing this warning message.

I had a repo with that add-on in it - removed it and the problem went away.

Hello,
I have a question: how can I feed the add-on with dynamic peak hours/energy costs?
In my country, peak hours change depending on the day (Sundays/bank holidays), and the costs are potentially updated on a monthly basis (unless your contract has a fixed cost).
I previously created some helpers (input_datetime) containing the start/end times, but when I tried to use them:

list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: input_datetime.peak_energy_start
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: input_datetime.peak_energy_end

I received the error below (I assume the same would happen with dynamic costs).
Thanks!

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 191, in action_call
    opt_res = dayahead_forecast_optim(input_data_dict, app.logger)
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 228, in dayahead_forecast_optim
    df_input_data_dayahead = input_data_dict['fcst'].get_load_cost_forecast(
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 685, in get_load_cost_forecast
    list_df_hp.append(df_final[self.var_load_cost].between_time(
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/generic.py", line 7890, in between_time
    indexer = index.indexer_between_time(
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/datetimes.py", line 852, in indexer_between_time
    start_time = to_time(start_time)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/tools/times.py", line 118, in to_time
    return _convert_listlike(np.array([arg]), format)[0]
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/tools/times.py", line 98, in _convert_listlike
    raise ValueError(f"Cannot convert arg {arg} to a time")
ValueError: Cannot convert arg ['input_datetime.peak_energy_start'] to a time

Any idea why, if I use the forecast.solar PV prediction with a peak power of 8,000 Wp, the chart shows a prediction (P_PV) of up to 10,000 W while the forecast service itself stays below 6,000 W? Am I misreading the chart?


curl -i -H "Content-Type:application/json" -X POST -d '{"solar_forecast_kwp":8}' http://localhost:5000/action/dayahead-optim

You are misreading the charts: the top chart has units of instantaneous power in W, while the lower chart has units of energy in kWh (at a 30-minute optimization step, for example, 8 kW of power over one slot corresponds to 4 kWh of energy). So they are not the same units, but it is still odd that the prediction goes over 8 kW when you specified that as your peak power.
What is your configuration?

You are right about the units.
Here is my configuration.
…unless it is something related to my previous problem (“Empty server reply”). You know what? In the end I think it was related to my limited hardware (a Pi 3B+ with 1 GB of RAM), as sometimes it was failing with a resampled sensor and sometimes it was not. I think I’ve temporarily solved the problem by increasing the system swap file (a good guy created an add-on for that). Now it takes a while to process a couple of days of data, but it works in almost any situation.

Edit: FYI, if I use the scrapper forecast method, the P_PV maximum doesn’t exceed 6 kW.

hass_url: empty
long_lived_token: empty
costfun: self-consumption
logging_level: DEBUG
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: nearest
set_total_pv_sell: false
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_nocharge_from_grid: false
set_nodischarge_to_grid: true
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
load_forecast_method: naive
sensor_power_photovoltaics: sensor.production_pv_w
sensor_power_load_no_var_loads: sensor.consumption_w
number_of_deferrable_loads: 1
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 0
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 0
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "02:54"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "15:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 6000
list_pv_module_model:
  - pv_module_model: SunPower_SPR_E20_327
  - pv_module_model: SunPower_SPR_P19_395_COM
list_pv_inverter_model:
  - pv_inverter_model: SMA_America__SB3000TL_US_22__240V_
  - pv_inverter_model: SMA_America__STP50_US_40__480V_
list_surface_tilt:
  - surface_tilt: 17
  - surface_tilt: 17
list_surface_azimuth:
  - surface_azimuth: 225
  - surface_azimuth: 225
list_modules_per_string:
  - modules_per_string: 9
  - modules_per_string: 14
list_strings_per_inverter:
  - strings_per_inverter: 1
  - strings_per_inverter: 1
set_use_battery: true
battery_discharge_power_max: 7100
battery_charge_power_max: 7100
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 27000
battery_minimum_state_of_charge: 0.3
battery_maximum_state_of_charge: 0.9
battery_target_state_of_charge: 0.6

About this question:
is it possible to pass the peak start/end hours and the related prices via the curl command instead of loading them from the config?
Or is the only allowed way to pass the list using load_cost_forecast, as I see in some examples?

Yes, the only option besides the configuration file is to pass a list of values using the load_cost_forecast flag.
You should be able to build your list of values for the peak start/end hour prices using a template sensor.
Then pass that sensor’s data in the curl command, as sketched below.
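A minimal sketch of that approach, assuming a template sensor sensor.dynamic_load_cost whose prices attribute already holds the list (the sensor name and attribute are assumptions); the list length must match the optimization horizon, e.g. 48 values for 24 h at a 30-minute step:

shell_command:
  # Hypothetical command: posts dynamic load costs at runtime instead of
  # using the static peak/off-peak prices from the add-on configuration
  dayahead_optim_dynamic_prices: >
    curl -i -H "Content-Type: application/json" -X POST
    -d '{"load_cost_forecast": {{ state_attr('sensor.dynamic_load_cost', 'prices') | tojson }}}'
    http://localhost:5000/action/dayahead-optim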

@davidusb I still get this issue where, after rebooting Home Assistant, I can’t post naive-mpc-optim for two days; I just get a “ValueError: could not convert string to float: ‘NOTRUN’” error.
After two days it just starts working again. I have to use dayahead for two days.


I can’t save the configuration. Can anyone tell me why not?

Failed to save add-on configuration: not a valid value. Got {'hass_url': 'empty', 'long_lived_token': 'empty', 'costfun': 'profit', 'logging_level': 'INFO', 'optimization_time_step': 30, 'historic_days_to_retrieve': 2, 'method_ts_round': 'nearest', 'set_total_pv_sell': False, 'lp_solver': 'COIN_CMD', 'lp_solver_path': '/usr/bin/cbc', 'set_nocharge_from_grid': False, 'set_nodischarge_to_grid': False, 'set_battery_dynamic': False, 'battery_dynamic_max': 0.9, 'battery_dynamic_min': -0.9, 'load_forecast_method': 'naive', 'sensor_power_photovoltaics': 'sen...
hass_url: empty
long_lived_token: empty
costfun: profit
logging_level: INFO
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: nearest
set_total_pv_sell: false
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_nocharge_from_grid: false
set_nodischarge_to_grid: false
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
load_forecast_method: naive
sensor_power_photovoltaics: sensor.total_production_power
sensor_power_load_no_var_loads: sensor.total_consumption_power
number_of_deferrable_loads: 2
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 3000
  - nominal_power_of_deferrable_loads: 750
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 5
  - operating_hours_of_each_deferrable_load: 8
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "05:54"
  - peak_hours_periods_start_hours: 624
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "09:24"
  - peak_hours_periods_end_hours: 714
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 9000
list_pv_module_model:
  - pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
list_pv_inverter_model:
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
list_surface_tilt:
  - surface_tilt: 30
list_surface_azimuth:
  - surface_azimuth: 205
list_modules_per_string:
  - modules_per_string: 16
list_strings_per_inverter:
  - strings_per_inverter: 1
set_use_battery: false
battery_discharge_power_max: 1000
battery_charge_power_max: 1000
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 5000
battery_minimum_state_of_charge: 0.3
battery_maximum_state_of_charge: 0.9
battery_target_state_of_charge: 0.6

Maybe it’s this?
My config:

list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "10:00"
  - peak_hours_periods_start_hours: "10:00"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "16:00"
  - peak_hours_periods_end_hours: "16:00"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: false

I don’t actually use the peak-hours settings, but I think they are meant to be time strings from hour to hour, so from 10 am to 4 pm. I think there are two entries because that’s the default example and I haven’t changed them. That would explain your error: the second entry in each of your lists is a bare number (624 and 714) instead of a quoted time; see the corrected sketch below.
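A corrected version of the failing entries from the config above; the bare numbers need to be quoted HH:MM strings (reading 624 as 06:24 and 714 as 07:14 is only a guess at the intent):

list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "05:54"
  - peak_hours_periods_start_hours: "06:24"  # was 624; assumed to mean 06:24
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "09:24"
  - peak_hours_periods_end_hours: "07:14"  # was 714; assumed to mean 07:14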

Also, the semi-cont setting being true means that the load is constant and can’t be varied. In my case the first is a pool pump with a constant 1300 W load, and the second is an EV that can be charged at anywhere from 1 A to 32 A; I translate the returned wattage to amps to set the car, roughly as sketched below.
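A minimal sketch of that watts-to-amps translation as a template sensor, assuming 230 V single-phase charging (the entity name, sensor name, and voltage are assumptions):

template:
  - sensor:
      - name: "EV charge current setpoint"
        unit_of_measurement: "A"
        # P = V * I, so amps = deferrable wattage published by EMHASS / assumed voltage
        state: "{{ (states('sensor.p_deferrable1') | float(0) / 230) | round(0) }}"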

Thanks a lot. I overlooked that format.


Interestingly, I still can’t run MPC for two days after a reboot. I have to revert to dayahead for two days and then I can go back to the MPC method.

2023-11-11 09:17:51,187 - web_server - ERROR - Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 179, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 110, in set_input_data_dict
    rh.get_data(days_list, var_list,
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 140, in get_data
    df_tp = df_raw.copy()[['state']].replace(
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/generic.py", line 5920, in astype
    new_data = self._mgr.astype(dtype=dtype, copy=copy, errors=errors)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/managers.py", line 419, in astype
    return self.apply("astype", dtype=dtype, copy=copy, errors=errors)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/managers.py", line 304, in apply
    applied = getattr(b, f)(**kwargs)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/blocks.py", line 580, in astype
    new_values = astype_array_safe(values, dtype, copy=copy, errors=errors)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/dtypes/cast.py", line 1292, in astype_array_safe
    new_values = astype_array(values, dtype, copy=copy)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/dtypes/cast.py", line 1237, in astype_array
    values = astype_nansafe(values, dtype, copy=copy)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/dtypes/cast.py", line 1098, in astype_nansafe
    result = astype_nansafe(flat, dtype, copy=copy, skipna=skipna)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/dtypes/cast.py", line 1181, in astype_nansafe
    return arr.astype(dtype, copy=True)
ValueError: could not convert string to float: 'NOTRUN'
2023-11-11 09:17:51,216 - web_server - INFO - Setting up needed data
2023-11-11 09:17:51,217 - web_server - INFO -  >> Publishing data...
2023-11-11 09:17:51,217 - web_server - INFO - Publishing data to HASS instance
2023-11-11 09:17:51,237 - web_server - INFO - Successfully posted to sensor.p_pv_forecast = 0
2023-11-11 09:17:51,250 - web_server - INFO - Successfully posted to sensor.p_load_forecast = 0.0
2023-11-11 09:17:51,260 - web_server - INFO - Successfully posted to sensor.p_deferrable0 = 0.0
2023-11-11 09:17:51,270 - web_server - INFO - Successfully posted to sensor.p_deferrable1 = 0.0
2023-11-11 09:17:51,280 - web_server - INFO - Successfully posted to sensor.p_batt_forecast = 0.0
2023-11-11 09:17:51,290 - web_server - INFO - Successfully posted to sensor.soc_batt_forecast = 16.0
2023-11-11 09:17:51,299 - web_server - INFO - Successfully posted to sensor.p_grid_forecast = 0.0
2023-11-11 09:17:51,310 - web_server - INFO - Successfully posted to sensor.total_cost_fun_value = 3.79
2023-11-11 09:17:51,320 - web_server - INFO - Successfully posted to sensor.unit_load_cost = 0.06
2023-11-11 09:17:51,332 - web_server - INFO - Successfully posted to sensor.unit_prod_price = -0.03

That’s strange.
I rebooted quite often in the last few days and I’ve not experienced that.
I guess you already tried to uninstall, clean the share folder and reinstall?
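One more idea, just a sketch: the traceback shows pandas failing on the literal state 'NOTRUN' in the recorder history of one of your sensors, so you could try pointing EMHASS at a template sensor that only forwards numeric states (the entity and sensor names are placeholders):

template:
  - sensor:
      - name: "Load power numeric"
        unit_of_measurement: "W"
        # forward only numeric states; a string like 'NOTRUN' makes this sensor
        # unavailable instead of landing in the history that EMHASS retrieves
        state: "{{ states('sensor.your_load_sensor') | float(0) }}"
        availability: "{{ states('sensor.your_load_sensor') | is_number }}"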

I’m on the latest version of the add-on

No, not recently. I did do a reinstall some time ago. I suspect it’s got something to do with the complexity of the rest of my configuration. My Home Assistant must be 6 years old now, with so many projects and experiments carried out that I’m thinking I should set up a separate VM running HA just for energy management, as it’s more production-like than many of the other things going on in my old HA.

Hello @davidusb
Because of my setup (two batteries running in parallel) I would need to install two instances of the add-on, so each can control a specific battery. I can split the loads and all the rest, so it would work from that side. At the moment I’m managing everything with just one instance, but having two would allow me to optimize everything.
I could already make HAOS recognize the add-on twice, and I also checked that I can choose a different port for the EMHASS server.
My last concern is about the published results: as they have the same names, one instance would probably overwrite the results from the other when pushing to HA.
Is there any possibility to customize the names of the published values (maybe a feature request/enhancement)? Or to push them to specific sensors I can create for this purpose?
I would even be open to options such as forking the GitHub project and making some changes to the code, using it as the repository for the second instance of the add-on, as a short-term workaround.
Please let me know your thoughts and thank you!

EDIT:
Never mind. While having a look at your code I found a feature request, and then went back to the documentation. The publish_prefix option will do the job. Thanks!
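For anyone else running two instances, a sketch of how the prefix might be passed when publishing, assuming the second instance listens on port 5001 as mentioned above (the command name and prefix string are arbitrary):

shell_command:
  # second EMHASS instance publishes with its own prefix so its sensors
  # don't overwrite those of the first instance
  publish_data_batt2: >
    curl -i -H "Content-Type: application/json" -X POST
    -d '{"publish_prefix": "batt2_"}'
    http://localhost:5001/action/publish-data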

I have three batteries, but just treat them as one virtual battery with the combined energy and power capacities, which seems to work well in EMHASS.
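In config terms that just means summing the per-battery numbers; e.g., for three identical 5 kWh / 2.5 kW units (the figures are illustrative, not from my actual setup):

set_use_battery: true
battery_nominal_energy_capacity: 15000  # 3 x 5000 Wh combined
battery_discharge_power_max: 7500       # 3 x 2500 W combined
battery_charge_power_max: 7500          # 3 x 2500 W combined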

What other functionality are you seeking? Would you like to charge/discharge them at different rates?
