EMHASS: An Energy Management for Home Assistant

Hi, I just can’t see what is wrong here. This is working fine for me, there is a specific unit test in the code for this case, and everything seems fine.

What is the Home Assistant log saying when you execute that shell command?

Please open a github issue to follow this more in detail there.


@KasperEdw
I solved it! I found out I had made two errors.

The first was passing the wrong amount of data in this list. Nordpool publishes prices for every hour, so the list must have 24 data points, not 48.

You need to be careful here to send the correct number of data points in this list. For example, if the data time step is defined as 1 h and you are performing a day-ahead optimization, then the list length should be 24 data points.
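As a quick sanity check (my own sketch, not from EMHASS itself), the required list length follows directly from the optimization time step:

```python
# Sketch: expected number of data points in a day-ahead forecast list,
# given the optimization time step in minutes (hypothetical helper).
def expected_points(optimization_time_step_minutes: int) -> int:
    # One full day divided by the time step
    return (24 * 60) // optimization_time_step_minutes

print(expected_points(60))  # 24 hourly values, matching Nordpool
print(expected_points(30))  # 48 half-hourly values
```

So with a 60-minute step the Nordpool list must be truncated to exactly 24 values.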

In the EMHASS configuration you must also set optimization_time_step to 60 (minutes).

The price data must also be for the next day, and the prices must be in euros. When you set up the Nordpool addon you can choose Euro as the currency.
Update: you can use whatever currency you want, as long as you use the same currency everywhere.

The second was a template with errors: my parentheses and curly brackets were wrong. After fixing the template the shell_command worked, and load_cost_forecast and prod_price_forecast were passed successfully.

Here is the correct template for passing forecast data from Nordpool.

shell_command:
  publish_data: "curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/publish-data"
  
  post_nordpool_forecast: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"load_cost_forecast\":{{(
        (state_attr('sensor.nordpool_euro', 'raw_tomorrow')|map(attribute='value')|list)[:24])
        }},\"prod_price_forecast\":{{(
        (state_attr('sensor.nordpool_euro', 'raw_tomorrow')|map(attribute='value')|list)[:24])}}}' http://localhost:5000/action/dayahead-optim"


Hope it helps others


Hi guys! I am struggling with the initial config. As long as I do not change hass_url from the default “empty”, the EMHASS UI runs but shows no data from HA. With hass_url changed I get this error message:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 241, in <module>
    config_hass = response.json()
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

With this config:

web_ui_url: 0.0.0.0
hass_url: https://xxx:8123/
long_lived_token: empty
costfun: self-consumption
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: nearest
set_total_pv_sell: false
lp_solver: PULP_CBC_CMD
lp_solver_path: empty
sensor_power_photovoltaics: sensor.pv_power
sensor_power_load_no_var_loads: sensor.house_consumption
number_of_deferrable_loads: 2
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 3000
  - nominal_power_of_deferrable_loads: 750
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 5
  - operating_hours_of_each_deferrable_load: 8
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "02:54"
  - peak_hours_periods_start_hours: "17:24"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "15:24"
  - peak_hours_periods_end_hours: "20:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 22080
list_pv_module_model:
  - pv_module_model: IBEX-132MHC-EiGER-495-500
list_pv_inverter_model:
  - pv_inverter_model: GoodWe_10K_ET_Plus+
list_surface_tilt:
  - surface_tilt: 25
list_surface_azimuth:
  - surface_azimuth: 205
list_modules_per_string:
  - modules_per_string: 6
list_strings_per_inverter:
  - strings_per_inverter: 2
set_use_battery: false
battery_discharge_power_max: 6390
battery_charge_power_max: 6390
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 10668
battery_minimum_state_of_charge: 0.2
battery_maximum_state_of_charge: 1
battery_target_state_of_charge: 0.6

Could anyone point me in the right direction, please?

Hi, you need to define the long_lived_token parameter, otherwise EMHASS won’t be able to access your HA instance data.

Even if I use the long_lived_token the error is the same; however, I do have a supervisor, so I left it empty. I am supposed to use the token under my admin profile, right?
Same with the URL, which should not be needed with the supervisor. Maybe I am just messing around with the wrong settings. With the default supervisor config like this

web_ui_url: 0.0.0.0
hass_url: empty
long_lived_token: empty

I get no error, but the data are not fed into EMHASS at all.

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
[2022-09-04 09:24:00,823] INFO in web_server: Launching the emhass webserver at: http://0.0.0.0:5000
[2022-09-04 09:24:00,823] INFO in web_server: Home Assistant data fetch will be performed using url: http://supervisor/core/api
[2022-09-04 09:24:00,824] INFO in web_server: The base path is: /usr/src
[2022-09-04 09:24:00,828] INFO in web_server: Using core emhass version: 0.3.18
[2022-09-04 09:25:00,166] INFO in command_line: Setting up needed data
[2022-09-04 09:25:00,318] INFO in web_server:  >> Publishing data...
[2022-09-04 09:25:00,320] INFO in command_line: Publishing data to HASS instance
[2022-09-04 09:25:00,458] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:00,572] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:00,748] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:00,860] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:01,000] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:01,123] INFO in retrieve_hass: Successfully posted value in a newly created entity_id
[2022-09-04 09:25:57,581] INFO in web_server: EMHASS server online, serving index.html... 

I can now access the EMHASS UI, but it never gets updated; the data stays at the “default” values shown below.
What could be the problem then? Thanks for any suggestions.

Why do you say that no data is being fed to the add-on? You need to set up some automations to launch the optimization tasks; have you already done this? The graph on the web UI won’t update until you launch an optimization task.

Thanks a lot for your help. I thought that no data was being fed, so I just checked the deferrable load entities from time to time to see if they changed. I also changed some details in the configuration and later discovered that the UI got updated with all the data. The P_deferrable in the UI is spot on, but I struggle to get it into HA. The reason I said that no data is being fed is this log:

[2022-09-04 20:25:00,808] INFO in web_server:  >> Publishing data...
[2022-09-04 20:25:00,809] INFO in command_line: Publishing data to HASS instance
[2022-09-04 20:25:00,904] INFO in retrieve_hass: Successfully posted to sensor.p_pv_forecast = 1391.13
[2022-09-04 20:25:00,946] INFO in retrieve_hass: Successfully posted to sensor.p_load_forecast = 169.94
[2022-09-04 20:25:00,989] INFO in retrieve_hass: Successfully posted to sensor.p_deferrable0 = 0.0
[2022-09-04 20:25:01,033] INFO in retrieve_hass: Successfully posted to sensor.p_deferrable1 = 0.0
[2022-09-04 20:25:01,077] INFO in retrieve_hass: Successfully posted to sensor.p_grid_forecast = -1221.19
[2022-09-04 20:25:01,118] INFO in retrieve_hass: Successfully posted to sensor.total_cost_fun_value = -0.61

It just keeps publishing the “default” values instead of the ones I am seeing in the UI.



I might be missing something totally obvious to you, but my lack of experience and language skills makes it quite challenging for me. Thanks for any suggestions.

Little edit:

Just noticed that the results table does not contain data unless I manually press the Perfect Optimization button. I do have these lines in /config/configuration.yaml


And these lines in /config/automations.yaml

Are those the automations you were talking about? I hope I did not miss any.

Hello all,
It seems to me I have issues with the module and inverter names. If I understood the documentation correctly (chapter Configuration File), I need to find a model as close to mine as possible in SAM/deploy/libraries at develop · NREL/SAM · GitHub and replace the special characters with underscores.

If I use the models from the example ('CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M' and 'Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_'), everything works.

However, if I try to insert the inverter 'GoodWe_Technologies_Co___Ltd___GW9600A_ES__240V_' or the module 'LONGi_Green_Energy_Technology_Co__Ltd__LR5_66HBD_490M', I get the following error message in the log after clicking on Day-Ahead Optimization. See the end of my post.

Am I missing anything? Or do I need to pass the information about the used module/inverter somehow?

Thank you

[2022-09-08 22:02:24,044] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3621, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'GoodWe_Technologies_Co___Ltd___GW9600A_ES__240V_'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 76, in set_input_data_dict
    P_PV_forecast = fcst.get_power_from_weather(df_weather)
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 337, in get_power_from_weather
    inverter = cec_inverters[self.plant_conf['inverter_model'][i]]
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/frame.py", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3623, in get_loc
    raise KeyError(key) from err
KeyError: 'GoodWe_Technologies_Co___Ltd___GW9600A_ES__240V_'


Hello,
The problem is that your module and inverter models are not found in the database. This is a recurrent problem.

You should check if your models are available. If they are not available, solution (1) is to pick another model as close as possible to yours in terms of nominal power.
The available module models are listed here: https://github.com/davidusb-geek/emhass-add-on/files/9234460/sam-library-cec-modules-2019-03-05.csv
And the available inverter models are listed here: https://github.com/davidusb-geek/emhass-add-on/files/9532724/sam-library-cec-inverters-2019-03-05.csv

Solution (2) would be to use SolCast and pass that data directly to emhass as a list of values from a template. Take a look at this example here: The forecast module — emhass 0.3.18 documentation

I checked your module model, and there are no 490 W or even 500 W models in the database, so you are better off going with solution (2).

Oh, now I can see the problem I am facing: the SAM library included in EMHASS is not up to date, judging by the date in the file name, “2019-03-05”. I checked the file and did not find the module and inverter models that I can see in the libraries the documentation refers to:

The complete list of supported modules and inverter models can be found here: pvlib.pvsystem.retrieve_sam — pvlib python 0.10.3 documentation

This link then refers to :

Files available at
SAM/deploy/libraries at develop · NREL/SAM · GitHub

These files are not the same. The inverter Daman is talking about is listed in the original SAM libraries, but not in the EMHASS one. So I just wanted to point out to others who use the link from the documentation: it may contain the models you are looking for, but they may not work.

Hi, I’ve just updated this in the documentation.

Hi David,

Thank you for a very cool project!

I’ve followed the example from haraldov to get a list from Nordpool as well as the example in The forecast module — emhass 0.3.18 documentation to get SolCast forecast data.

I’ve defined these shell_commands:

post_nordpool_forecast:
  'curl -i -H ''Content-Type: application/json'' -X POST -d ''{"load_cost_forecast":{{(
  (state_attr(''sensor.nordpool'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }},"prod_price_forecast":{{(
  (state_attr(''sensor.nordpool'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])}}}'' http://localhost:5000/action/dayahead-optim'

post_mpc_optim_solcast:
  'curl -i -H ''Content-Type: application/json'' -X POST -d ''{"load_cost_forecast":{{(
  (state_attr(''sensor.nordpool'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }},"prod_price_forecast":{{(
  (state_attr(''sensor.nordpool'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }}, "pv_power_forecast":{{states(''sensor.solcast_24hrs_forecast'')
  }}}'' http://localhost:5000/action/naive-mpc-optim'

When I use the shell_command post_nordpool_forecast it seems to work and I get a list for the entire next day, but when I use post_mpc_optim_solcast the list stops at 10 am. I have a feeling it has to do with the fact that Solcast sends 48 values but Nordpool only sends 24, although I’m not sure.

Is there a way to combine Nordpool data with solcast forecast data?

Warmly, Per Takman

Hi, what do you mean by combine? Nordpool gives you load cost and production price forecasts in currency units, while Solcast gives you a PV power forecast in watts. How would you combine different quantities? Or maybe I don’t understand your problem.

The data is being truncated when using MPC because of the working principle of that algorithm and the prediction horizon parameter.

To control that you need to define additional parameters when using MPC, for example:

"prediction_horizon":10,"soc_init":0.5,"soc_final":0.6,"def_total_hours":[1,3]

…if you have a battery and two deferrable loads.
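For illustration, the runtime parameters above can be merged with the forecast lists into one JSON payload before POSTing it to /action/naive-mpc-optim. This is just a sketch: the forecast numbers are placeholders, not real prices.

```python
import json

# Sketch: assembling an MPC payload with the runtime parameters
# mentioned above. Forecast values are placeholders.
payload = {
    "load_cost_forecast": [0.15] * 10,   # placeholder prices, one per step
    "prod_price_forecast": [0.06] * 10,  # placeholder prices, one per step
    "prediction_horizon": 10,
    "soc_init": 0.5,
    "soc_final": 0.6,
    "def_total_hours": [1, 3],           # two deferrable loads
}
print(json.dumps(payload))  # body for: curl -X POST -d '<this>' .../naive-mpc-optim
```

The prediction_horizon should not exceed the number of elements in the forecast lists, otherwise you get the index errors discussed below in the thread.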

Hi! I was not referring to combining price with watts. :slight_smile: I thought something was wrong because I did not see a result that covered the entire next day. My assumption was that the 24 data points from Nordpool vs. 48 data points from Solcast caused the problem.

From your answer I take it that MPC does not offer a prediction for the entire next day. Sounds like I need to study this in more detail.

@per.takman

Nordpool publishes tomorrow’s power prices every day at 13:00. Sometimes the Nordpool plugin does not update the prices. I use an automation which triggers the shell command at 23:57 every day. I then get the prices from 00:00 the next day in EMHASS.

alias: EMHASS day-ahead optimization
description: ""
trigger:
  - platform: time
    at: "23:57:00"
condition: []
action:
  - service: shell_command.post_nordpool_forecast
    data: {}
mode: single

You can see if the tomorrow price data is available with Developer Tools->State:

If you send an empty raw_tomorrow list, the unit_load_cost column and the cost_profit on the EMHASS web page do not update. If it works, also check that the price data is at the correct time index.
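To check in Developer Tools → Template whether tomorrow’s prices have arrived yet, a template like this can be handy (it assumes the raw_tomorrow attribute exists on your Nordpool sensor; it prints 24 when the data is there and 0 when the list is still empty):

```
{{ state_attr('sensor.nordpool', 'raw_tomorrow') | map(attribute='value') | list | length }}
```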

The shell_command I use now is this one:

shell_command:
  publish_data: "curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/publish-data"

  post_nordpool_forecast:
    'curl -i -H ''Content-Type: application/json'' -X POST -d ''{"load_cost_forecast":{{(
    (state_attr(''sensor.nordpool'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
    }},"prod_price_forecast":{{(
    (state_attr(''sensor.nordpool'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])}}}'' http://localhost:5000/action/dayahead-optim'

Hi again,

Sorry about not being clear in my first post. I believe that I’ve followed the instructions now, but I still think that the fact that Nordpool publishes 24 values while Solcast publishes 48 causes a problem. Please bear with me.

After stumbling a bit on the syntax in Studio Code Server I now have the following shell commands:

dayahead_optim:
  'curl -i -H ''Content-Type: application/json'' -X POST -d ''{"load_cost_forecast":{{(
  (state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }},"prod_price_forecast":{{(
  (state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])}}}'' http://localhost:5000/action/dayahead-optim'

mpc_optim:
  'curl -i -H "Content-Type: application/json" -X POST -d ''{"load_cost_forecast":{{(
  (state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }}, "prod_price_forecast":{{(
  (state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }}, "pv_power_forecast":{{states(''sensor.solcast_24hrs_forecast'')
  }}, "prediction_horizon":48,"soc_init":{{(states(''sensor.r2_d2_battery_level'')|float(0))/100
  }},"soc_final":0.05,"def_total_hours":[2]}'' http://localhost:5000/action/naive-mpc-optim'

publish_data: "curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/publish-data"

dayahead_optim and publish_data both execute successfully, but mpc_optim generates the following log trace in the EMHASS add-on:

[2022-09-12 22:57:35,264] INFO in command_line: Setting up needed data
[2022-09-12 22:57:35,272] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-09-12 22:57:36,221] INFO in forecast: Retrieving weather forecast data using method = list
[2022-09-12 22:57:36,231] INFO in forecast: Retrieving data from hass for load forecast using method = naive
[2022-09-12 22:57:36,233] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-09-12 22:57:39,148] ERROR in app: Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 112, in set_input_data_dict
    df_input_data_dayahead = copy.deepcopy(df_input_data_dayahead)[df_input_data_dayahead.index[0]:df_input_data_dayahead.index[prediction_horizon-1]]
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 5039, in __getitem__
    return getitem(key)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/arrays/datetimelike.py", line 341, in __getitem__
    "Union[DatetimeLikeArrayT, DTScalarOrNaT]", super().__getitem__(key)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/arrays/_mixins.py", line 272, in __getitem__
    result = self._ndarray[key]
IndexError: index 47 is out of bounds for axis 0 with size 24

1. Is there a way to use MPC together with data from Nordpool? Should I be using a different service for pv_power_forecast with 60-minute intervals?
2. Is it sufficient to run dayahead_optim only once a day and then publish every 5 minutes?
3. Does anybody have a good example of how to control a battery based on sensor.p_batt_forecast?

Cheers, Per Takman

You need to ensure that the data series you pass all have the same number of elements.

Either 24 × 60-minute intervals or 48 × 30-minute intervals; you can’t mix 24 values for price with 48 values for production.
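If you later do want to keep a 30-minute step, one approach (my own suggestion, not from the thread) is to upsample the 24 hourly Nordpool prices to 48 half-hourly values by repeating each price twice, so both lists end up the same length:

```python
# Sketch: upsample hourly prices to half-hourly values by repetition.
# A short placeholder list stands in for the 24 Nordpool values.
hourly_prices = [0.10, 0.12, 0.11]

half_hourly = [p for p in hourly_prices for _ in range(2)]
print(half_hourly)  # each price appears twice, doubling the list length
```

The equivalent can be done inline in the Jinja template, but checking the lengths first in Developer Tools avoids surprises.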

I would suggest you just focus on price only and use the internal pvlib solar forecast to start.

In your case you have 24 × 60-minute price forecasts, so you need to change your time interval to 60 minutes.

Using the internal pvlib solar forecast is reasonable, but don’t worry if you don’t have exact models to match.

Once you get EMHASS working with price forecasts, then as a second stage you can look at integrating the external solar forecasts.

Hi Mark,

I’ve already set the optimization time step to 60 to make Nordpool with their 24 data points work.

Your suggestion to use the internal pvlib solar forecast sounds good. Can I do this by simply removing the line

  }}, "pv_power_forecast":{{states(''sensor.solcast_24hrs_forecast'')

in my shell command mpc_optim?

The new shell command would then be:

mpc_optim:
  'curl -i -H "Content-Type: application/json" -X POST -d ''{"load_cost_forecast":{{(
  (state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }}, "prod_price_forecast":{{(
  (state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }}, "prediction_horizon":48,"soc_init":{{(states(''sensor.r2_d2_battery_level'')|float(0))/100
  }},"soc_final":0.05,"def_total_hours":[2]}'' http://localhost:5000/action/naive-mpc-optim'

Thank you in advance!

/Per

Yes, that looks simpler, which makes it easier to debug.

You should also change prediction_horizon from 48 to 24 to match the 24 elements you are passing.
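Putting the two changes together, the adjusted command might look like this (a sketch based on Per’s mpc_optim command earlier in the thread, with pv_power_forecast dropped and prediction_horizon set to 24; untested):

```yaml
mpc_optim:
  'curl -i -H "Content-Type: application/json" -X POST -d ''{"load_cost_forecast":{{(
  (state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }}, "prod_price_forecast":{{(
  (state_attr(''sensor.nordpool_kwh_se3_sek_3_10_025'', ''raw_tomorrow'')|map(attribute=''value'')|list)[:24])
  }}, "prediction_horizon":24,"soc_init":{{(states(''sensor.r2_d2_battery_level'')|float(0))/100
  }},"soc_final":0.05,"def_total_hours":[2]}'' http://localhost:5000/action/naive-mpc-optim'
```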

It really depends on which battery you have and the controls available to you in Home Assistant.

I have a Tesla powerwall2 and use the Tesla Gateway custom integration.

One automation sets backup_reserve_percent based on the EMHASS state of charge forecast:

alias: Battery SOC Forecast
description: ""
trigger:
  - platform: state
    entity_id:
      - sensor.soc_batt_forecast
condition: []
action:
  - service: tesla_gateway.set_reserve
    data:
      backup_reserve_percent: "{{states('sensor.soc_batt_forecast')|int(0)-5}}"
mode: single

I also have an alternative method by changing the battery modes, which roughly does the right thing. This also has some alerts so I can monitor the mode changes.

alias: p_batt automation
description: ""
trigger:
  - platform: numeric_state
    entity_id: sensor.p_batt_forecast
    below: "-1000"
  - platform: numeric_state
    entity_id: sensor.p_batt_forecast
    above: "-1000"
    below: "4900"
  - platform: numeric_state
    entity_id: sensor.p_batt_forecast
    above: "4900"
condition: []
action:
  - choose:
      - conditions:
          - condition: numeric_state
            entity_id: sensor.p_batt_forecast
            below: "-1000"
        sequence:
          - service: notify.mobile_app_pixel_6
            data:
              title: p_batt alert {{states('sensor.p_batt_forecast')}} - mode:backup
              message: price:{{states('sensor.amber_general_price')}} $/kWh
          - service: tesla_gateway.set_operation
            data:
              real_mode: backup
              backup_reserve_percent: 3
      - conditions:
          - condition: numeric_state
            entity_id: sensor.p_batt_forecast
            above: "4900"
            enabled: false
        sequence:
          - service: tesla_gateway.set_operation
            data:
              real_mode: autonomous
              backup_reserve_percent: 1
            enabled: false
          - service: notify.mobile_app_pixel_6
            data:
              title: >-
                p_batt alert {{states('sensor.p_batt_forecast')}} - consider
                mode:autonomous
              message: Price:{{states('sensor.amber_general_price')}} $/kWh
    default:
      - service: tesla_gateway.set_operation
        data:
          real_mode: self_consumption
          backup_reserve_percent: 2
      - service: notify.mobile_app_pixel_6
        data:
          title: >-
            p_batt  alert {{states('sensor.p_batt_forecast')}} -
            mode:self_consumption
          message: Price:{{states('sensor.amber_general_price')}} $/kWh
mode: single