EMHASS: An Energy Management for Home Assistant

EDIT: As I finally found out in an earlier post further up this thread, naive-mpc-optim has a hard-coded limit of 24 hours of forecast. Hence my error. I will reconsider how I use the optimization.

See if you can help me out here. I'm again trying to get a frequently run naive-mpc-optim working that extends a bit further than 24 hours, so I can make full use of Nordpool prices when they are released. Since Nordpool releases hourly energy prices at 13:00 every day and the prices always extend to the end of the next day, I can make a more predictable optimization where I can opt for a certain SOC at midnight.

But. I get some errors when I send more than 24 hours of data to naive-mpc-optim. This is my REST JSON:

{ "prediction_horizon": 30,
  "soc_init": 1.0,
  "soc_target": 0.5,
  "def_total_hours": [],
  "load_cost_forecast": [3.52, 4.29, 3.51, 2.75, 2.76, 0.86, 0.86, 0.84, 0.82, 0.8, 0.79, 0.82, 0.85, 0.89, 2.47, 2.32, 2.14, 2.05, 1.92, 0.87, 0.85, 1.41, 1.91, 2.21, 2.5, 2.78, 2.64, 1.98, 0.79, 0.76],
  "prod_price_forecast": [3.21, 3.98, 3.2, 2.44, 2.45, 0.55, 0.55, 0.53, 0.51, 0.49, 0.48, 0.51, 0.54, 0.58, 2.16, 2.01, 1.83, 1.74, 1.61, 0.56, 0.54, 1.1, 1.6, 1.9, 2.19, 2.47, 2.33, 1.67, 0.48, 0.45],
  "pv_power_forecast": [875, 221, 6, 0, 0, 0, 0, 0, 0, 0, 0, 10, 149, 497, 965, 1397, 1689, 1914, 2028, 2011, 1886, 1662, 1373, 982, 446, 112, 3, 0, 0, 0]
}

And this is the error I get:

2023-08-24 17:52:49,565 - web_server - INFO - Setting up needed data
2023-08-24 17:52:49,568 - web_server - INFO - Retrieve hass get data method initiated...
2023-08-24 17:52:51,568 - web_server - INFO - Retrieving weather forecast data using method = list
2023-08-24 17:52:51,570 - web_server - INFO - Retrieving data from hass for load forecast using method = naive
2023-08-24 17:52:51,571 - web_server - INFO - Retrieve hass get data method initiated...
2023-08-24 17:52:56,482 - web_server - ERROR - Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "src/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.8/site-packages/emhass-0.4.14-py3.8.egg/emhass/command_line.py", line 127, in set_input_data_dict
    df_input_data_dayahead = copy.deepcopy(df_input_data_dayahead)[df_input_data_dayahead.index[0]:df_input_data_dayahead.index[prediction_horizon-1]]
  File "/usr/local/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 5039, in __getitem__
    return getitem(key)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/arrays/datetimelike.py", line 341, in __getitem__
    "Union[DatetimeLikeArrayT, DTScalarOrNaT]", super().__getitem__(key)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/arrays/_mixins.py", line 272, in __getitem__
    result = self._ndarray[key]
IndexError: index 29 is out of bounds for axis 0 with size 24

What's with that size-24 error? All three arrays I send in have 30 elements.
The call works fine if I set "prediction_horizon": 24, although I currently get an "Infeasible" result, which I still have to resolve, but that's another story.

EDIT: See edit at top of post.

Hello,
I wanted to share a solution I've found to pass arbitrarily long datasets (there is a 255-character limit on sensor states) to the optimization process.
I haven't seen the same solution in previous comments of the thread (markpurcell was using templates in commands, but he was putting some execution code directly there), so I'm sharing it in case it's useful, as this potentially allows you to run arbitrarily complex code without the need to store the results somewhere.

In the following example I'm passing the hourly energy purchase costs for the next 24 h at a 30-minute resolution.

In configuration.yaml I set up the shell command like this. I rely on {{templates}} to replace portions of the code and compose the command as needed.
In this example I'm passing an {{ api_endpoint }} string and a {{ load_cost_forecast }} list.
It doesn't like colons, and the solution I found is to use a small template for those as well: {{':'}}.

shell_command:
  # EMHASS component commands
  dayahead_optim: curl -i -H "Content-Type:application/json"  -X POST -d '{"solar_forecast_kwp"{{':'}}8, "load_cost_forecast"{{':'}}{{ load_cost_forecast }}}' {{ api_endpoint }}

The parameters are built within the automation. Be careful: you can't use the UI editor but have to create your own emhass_automation.yaml file, otherwise it will not work (the parameters are replaced/evaluated when the file is loaded into the configuration).
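One way to load that extra file from configuration.yaml is with labelled includes, roughly like this (just a sketch; the labels and the file name are my own choice):

automation ui: !include automations.yaml
automation emhass: !include emhass_automation.yaml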
My “test” automation, which I think is the interesting part here:

- id: 'test'
  alias: "test"
  description: "test_emhass_command"
  trigger: []
  condition: []
  action:
    - service: shell_command.dayahead_optim
      data:
        api_endpoint: "http://localhost:5000/action/dayahead-optim"
        load_cost_forecast: >
          {% set ns_forecast = namespace(forecast=[]) %}
          {% for item in states.sensor|selectattr('entity_id', 'search', 'pun_oggi_')|sort(attribute='entity_id', reverse= false )|map(attribute='entity_id')|list %} 
          {% if (now().time()) < strptime(state_attr(item,'start'),'%H:%M:%S').time() %}
            {#% set ns_forecast.forecast = ns_forecast.forecast + [item] %#}  {# this is for debugging and check which sensors I'm using in the loop - (un)comment as needed #}
            {% set ns_forecast.forecast = ns_forecast.forecast + [((states(item)|round(3)))|float] %} {# this is for 24h forecasts with 1h resolution #}
            {% set ns_forecast.forecast = ns_forecast.forecast + [((states(item)|round(3)))|float] %} {# this is for 24h forecasts with 30' resolution - (un)comment as needed #}
          {% endif %}
          {% endfor %}
          {% for item in states.sensor|selectattr('entity_id', 'search', 'pun_domani_')|sort(attribute='entity_id', reverse= false )|map(attribute='entity_id')|list %} 
          {% if (now().time()) >= strptime(state_attr(item,'start'),'%H:%M:%S').time() %}
            {#% set ns_forecast.forecast = ns_forecast.forecast + [item] %#}  {# this is for debugging and check which sensors I'm using in the loop - (un)comment as needed #}
            {% set ns_forecast.forecast = ns_forecast.forecast + [((states(item)|round(3)))|float] %} {# this is for 24h forecasts with 1h resolution #}
            {% set ns_forecast.forecast = ns_forecast.forecast + [((states(item)|round(3)))|float] %} {# this is for 24h forecasts with 30' resolution - (un)comment as needed #}
          {% endif %}
          {% endfor %}
          {{ ns_forecast.forecast }}
  mode: single

The Jinja code is run (I have 48 sensors with the hourly energy cost for today and tomorrow) and the final list (next 24 hours at 30-minute resolution, so 48 values in total) is assigned to the load_cost_forecast parameter, which is then passed to the curl command.

When the automation is launched this is the trace:

Result:

params:
  domain: shell_command
  service: dayahead_optim
  service_data:
    api_endpoint: http://localhost:5000/action/dayahead-optim
    load_cost_forecast:
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
  target: {}
running_script: false

And this is the result when testing the same code in the template section of the dev tools:

This is the EMHASS log:

2023-08-26 16:05:07,631 - web_server - INFO - Setting up needed data
2023-08-26 16:05:07,646 - web_server - INFO - Retrieving weather forecast data using method = solar.forecast
2023-08-26 16:05:08,350 - web_server - INFO - Retrieving data from hass for load forecast using method = naive
2023-08-26 16:05:08,439 - web_server - INFO - Retrieve hass get data method initiated...
2023-08-26 16:07:00,234 - web_server - INFO -  >> Performing dayahead optimization...
2023-08-26 16:07:00,244 - web_server - INFO - Performing day-ahead forecast optimization
2023-08-26 16:07:00,428 - web_server - INFO - Perform optimization for the day-ahead
2023-08-26 16:07:03,173 - web_server - INFO - Status: Optimal
2023-08-26 16:07:03,176 - web_server - INFO - Total value of the Cost function = 0.67

I hope it is useful.
Feel free to let me know what you think or if you see something weird/wrong.

Perhaps someone can share their experience with this question? Thanks!

I’d say it depends on your needs.

Can you run it with different horizons, depending on your needs?
Run it with an 8-hour horizon to see when the EV should charge, if you need it sooner than that, then schedule the EV charge accordingly.
Run the longer-horizon prediction if you don't need the EV that soon.
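For example, with a 30-minute optimization_time_step an 8-hour horizon is just 16 steps, so only prediction_horizon changes in the call; a rough sketch (the other values are placeholders for whatever your usual naive-mpc-optim payload contains):

rest_command:
  naive_mpc_optim_8h:
    url: http://localhost:5000/action/naive-mpc-optim
    method: POST
    content_type: 'application/json'
    payload: >-
      {
        "prediction_horizon": 16,
        "def_total_hours": [2],
        "soc_init": 0.5,
        "soc_final": 0.3
      }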

Maybe I was just assuming something apparent, and my apologies if so, but I'm not sure what your point is. Is it that the use of Jinja template variables allows you to construct parts of the REST JSON?
In many cases it's needed for other reasons as well. The 255-character sensor state limit you mention is one reason many sensors use attributes to store larger and more complex data, e.g. the Nordpool integration. In order to create a list from those attributes, you need a Jinja template construct. However, it's usually not required to store this in a variable, as you can create the data on the fly in the call.
But when you need to apply a modification to a list in Home Assistant templates, like adding or multiplying values, you need a more complex namespace construct, as you did here. In that case, and for readability, I agree that using variables to store data before creating the actual call is good practice.
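For the simple on-the-fly case with Nordpool, for example, the whole list can be built directly inside the call; a rough sketch (the entity id is a placeholder, and the today/tomorrow attributes and their availability depend on your Nordpool sensor):

rest_command:
  dayahead_optim_nordpool:
    url: http://localhost:5000/action/dayahead-optim
    method: POST
    content_type: 'application/json'
    payload: >-
      {
        "load_cost_forecast": {{
          (state_attr('sensor.nordpool', 'today')
           + (state_attr('sensor.nordpool', 'tomorrow') or []))
          | tojson
        }}
      }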

For anyone using the Solcast Custom Integration, please be aware it has breaking changes in v4.0 and you will need to change your template for pv_forecast. Thanks to @anon7821378 for maintaining this integration which gives EMHASS access to great solar forecasts.

Here is my updated template:

        "pv_power_forecast": {{
          ([states('sensor.solcast_pv_forecast_power_now')|int(0)] +
          state_attr('sensor.solcast_pv_forecast_forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list +
          state_attr('sensor.solcast_pv_forecast_forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',1000)|map('int')|list
          )| tojson
        }},

The sensor names have changed and the detailed forecast now reports power over 30 minutes in kW, so it now needs to be multiplied by 1000 (not 2000 as previously required).

As always, please check your own configuration in the developer tools.


Hi all,

I'm pretty new to coding and I'm more of a hobbyist trying to make a complex system work.

I've tried implementing EMHASS in my system and I'm following along with the link A real study case — emhass 0.5.0 documentation.

It's the simplest thing and I can't seem to find it, as there isn't really a step-by-step guide.

I also did the basic automation command.

  • I've filled in the config file:
hass_url: empty
long_lived_token: empty
costfun: self-consumption
logging_level: INFO
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: nearest
set_total_pv_sell: false
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_nocharge_from_grid: false
set_nodischarge_to_grid: false
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
load_forecast_method: list
sensor_power_photovoltaics: sensor.solar_total
sensor_power_load_no_var_loads: sensor.house_consumption
number_of_deferrable_loads: 1
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 3000
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 5
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "02:54"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "15:24"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 9000
list_pv_module_model:
  - pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
list_pv_inverter_model:
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
list_surface_tilt:
  - surface_tilt: 30
list_surface_azimuth:
  - surface_azimuth: 205
list_modules_per_string:
  - modules_per_string: 16
list_strings_per_inverter:
  - strings_per_inverter: 1
set_use_battery: true
battery_discharge_power_max: 11600
battery_charge_power_max: 11600
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 40000
battery_minimum_state_of_charge: 0.2
battery_maximum_state_of_charge: 0.9
battery_target_state_of_charge: 0.6
The simple part, I would assume: now I push a button and it would display graphs? This is the part where I'm now lost and I don't know where my info is, haha.

Please go easy on me, I'm new and learning.

This is my log file:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
2023-09-10 10:40:21,914 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2023-09-10 10:40:21,915 - web_server - INFO - Home Assistant data fetch will be performed using url: http://supervisor/core/api
2023-09-10 10:40:21,915 - web_server - INFO - The data path is: /share
2023-09-10 10:40:21,916 - web_server - INFO - Using core emhass version: 0.5.0
waitress INFO Serving on http://0.0.0.0:5000
2023-09-10 10:40:37,932 - web_server - INFO - EMHASS server online, serving index.html…
2023-09-10 10:40:37,937 - web_server - WARNING - The data container dictionary is empty… Please launch an optimization task
2023-09-10 10:40:40,214 - web_server - INFO - Setting up needed data
2023-09-10 10:40:40,272 - web_server - INFO - Retrieving weather forecast data using method = scrapper
2023-09-10 10:40:42,278 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 179, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 91, in set_input_data_dict
    P_load_forecast = fcst.get_load_forecast(method=optim_conf['load_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 640, in get_load_forecast
    if len(data_list) < len(self.forecast_dates) and self.params['passed_data']['prediction_horizon'] is None:
TypeError: object of type 'NoneType' has no len()
2023-09-10 10:40:51,600 - web_server - INFO - Setting up needed data
2023-09-10 10:40:51,605 - web_server - INFO - >> Publishing data…
2023-09-10 10:40:51,606 - web_server - INFO - Publishing data to HASS instance
2023-09-10 10:40:51,606 - web_server - ERROR - File not found error, run an optimization task first.

Hi
It’s always best to quote logs and code using the </> tool above, in the toolbar. This presents your data in a scrollable box and makes it easier to read.

This is where your code and logs should go .....................................................

Try going to the UI on port 5000 and selecting the optimisation buttons. What do you see in the graph below?

It doesn't do anything.

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started

OMG, IGNORE ME, it's working, haha. I turned it off and back on, reconfigured it, rebooted my device,
checked all the config was there, then booted. It's now printing the values.

Woohooo

Now it's my time to play around and put it into apex cards.

I do need to ask: how do you get the sensors from the optimisations published now?

I've got the optimisation to work but I can't find it publishing my sensor. I've found a few automations to call for the change and I believe that's working, but I can't find the sensor.

What do you want to graph exactly?

Something like this:

Yes, I want to do that.

I've done the exact shell commands.
I'm receiving these errors:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
2023-09-10 18:37:40,957 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2023-09-10 18:37:40,957 - web_server - INFO - Home Assistant data fetch will be performed using url: http://supervisor/core/api
2023-09-10 18:37:40,957 - web_server - INFO - The data path is: /share
2023-09-10 18:37:40,958 - web_server - INFO - Using core emhass version: 0.5.0
waitress   INFO  Serving on http://0.0.0.0:5000
2023-09-10 18:37:43,624 - web_server - INFO - EMHASS server online, serving index.html...
2023-09-10 18:37:50,128 - web_server - INFO - Setting up needed data
2023-09-10 18:37:50,149 - web_server - INFO - Retrieve hass get data method initiated...
2023-09-10 18:37:51,566 - web_server - INFO -  >> Performing perfect optimization...
2023-09-10 18:37:51,566 - web_server - INFO - Performing perfect forecast optimization
2023-09-10 18:37:51,569 - web_server - INFO - Perform optimization for perfect forecast scenario
2023-09-10 18:37:51,570 - web_server - INFO - Solving for day: 8-9-2023
2023-09-10 18:37:51,716 - web_server - INFO - Status: Optimal
2023-09-10 18:37:51,716 - web_server - INFO - Total value of the Cost function = 0.52
2023-09-10 18:37:51,723 - web_server - INFO - Solving for day: 9-9-2023
2023-09-10 18:37:51,864 - web_server - INFO - Status: Optimal
2023-09-10 18:37:51,864 - web_server - INFO - Total value of the Cost function = 0.8
2023-09-10 18:37:52,685 - web_server - INFO - Setting up needed data
2023-09-10 18:37:52,690 - web_server - INFO - Retrieving weather forecast data using method = scrapper
2023-09-10 18:37:54,760 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 179, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 91, in set_input_data_dict
    P_load_forecast = fcst.get_load_forecast(method=optim_conf['load_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 640, in get_load_forecast
    if len(data_list) < len(self.forecast_dates) and self.params['passed_data']['prediction_horizon'] is None:
TypeError: object of type 'NoneType' has no len()
2023-09-10 18:37:56,913 - web_server - INFO - Setting up needed data
2023-09-10 18:37:56,917 - web_server - INFO -  >> Publishing data...
2023-09-10 18:37:56,918 - web_server - INFO - Publishing data to HASS instance
2023-09-10 18:37:56,918 - web_server - ERROR - File not found error, run an optimization task first.

I do have to ask: do you need to set up your own sensors, or does the shell command do this?

I've looked through the instructions and this entire thread and it's not really mentioned; it kind of is only done through the publish-data shell command.
I've done the basics of just using:

shell_command:
  dayahead_optim: "curl -i -H \"Content-Type:application/json\" -X POST -d '{}' http://localhost:5000/action/dayahead-optim"
  publish_data: "curl -i -H \"Content-Type:application/json\" -X POST -d '{}' http://localhost:5000/action/publish-data"

This still doesn't publish the data.

This was put in the configuration.yaml file as required by the documentation.

So it looks like you are passing no data.

if len(data_list) < len(self.forecast_dates) and self.params['passed_data']['prediction_horizon'] is None:

I’m no expert at this but the {} part of the dayahead-optim POST should have the data you want the system to process.

My post command looks like this:

{"load_cost_forecast":{{(([states('sensor.cecil_st_general_price')|float(0)] + state_attr('sensor.cecil_st_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)[:48])
}},"prod_price_forecast":{{(([states('sensor.cecil_st_feed_in_price')|float(0)] + state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)[:48]) 
}},"pv_power_forecast":{{([states('sensor.sonnenbatterie_84324_production_w')|int(0)] + 
state_attr('sensor.forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list +
state_attr('sensor.forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list)[:48]| tojson
}}}

You can see I’m using a template to pass data for the following:

  • load_cost_forecast for the Load cost forecast.
  • prod_price_forecast for the PV production selling price forecast.
  • pv_power_forecast for the PV power production forecast.

This produces the following list:

{"load_cost_forecast":[0.47, 0.22, 0.28, 0.27, 0.21, 0.25, 0.26, 0.27, 0.26, 0.25, 0.23, 0.21, 0.2, 0.2, 0.2, 0.18, 0.18, 0.2, 0.2, 0.21, 0.21, 0.27, 0.28, 0.17, 0.11, 0.12, 0.19, 0.16, 0.12, 0.07, 0.04, 0.06, 0.04, 0.06, 0.06, 0.06, 0.06, 0.33, 0.35, 0.36, 0.35, 0.35, 0.43, 0.59, 0.78, 1.2, 1.2, 0.82],"prod_price_forecast":[0.37, 0.12, 0.17, 0.16, 0.11, 0.14, 0.16, 0.16, 0.15, 0.15, 0.13, 0.11, 0.09, 0.09, 0.09, 0.08, 0.08, 0.09, 0.09, 0.11, 0.11, 0.16, 0.17, 0.07, 0.02, 0.02, 0.09, 0.07, 0.03, -0.04, -0.07, -0.05, -0.07, -0.05, -0.05, -0.05, -0.05, 0.25, 0.27, 0.28, 0.27, 0.27, 0.34, 0.49, 0.66, 1.04, 1.04, 0.7],"pv_power_forecast":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 36, 333, 845, 1389, 1925, 2486, 2917, 3257, 3545, 3768, 3895, 3968, 3963, 3901, 3743, 3526, 3258, 2893, 2451, 1874, 1354, 797, 245, 5, 0, 0, 0]}

I send this data between the curly brackets in the post command.

See this part of the manual here.

Have you got any experience with jinja templates?

By the way, you may only need to list the pv_power_forecast data if your feed-in tariff and supply tariff are static and included in the EMHASS configuration file (I think).

My tariffs change every 30 minutes, hence the lists of data for load_cost_forecast and prod_price_forecast. My electricity retailer provides this data.
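In that static-tariff case the payload can shrink to just the PV forecast, for example with a rest_command along these lines (a sketch reusing my pv_power_forecast template from above; swap in your own sensor names):

rest_command:
  dayahead_optim_pv_only:
    url: http://localhost:5000/action/dayahead-optim
    method: POST
    content_type: 'application/json'
    payload: >-
      {
        "pv_power_forecast": {{
          ([states('sensor.sonnenbatterie_84324_production_w')|int(0)] +
          state_attr('sensor.forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow())|map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list +
          state_attr('sensor.forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow())|map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list
          )[:48] | tojson
        }}
      }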

Perhaps this will help. This is how I’ve configured my system:
My Setup

The only sensor you have to set up is the home consumption less any deferrable loads you want to manage. This sensor is then added to the EMHASS configuration file in the sensor_power_load_no_var_loads: line.
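If you don't already have such a sensor, a template sensor can provide it; a minimal sketch (sensor.house_consumption and sensor.pool_pump_power are placeholders for your own total-consumption and deferrable-load entities):

template:
  - sensor:
      - name: "house_consumption_no_var_loads"
        unit_of_measurement: "W"
        device_class: power
        state: >
          {{ states('sensor.house_consumption')|float(0)
             - states('sensor.pool_pump_power')|float(0) }}

The resulting entity then goes into the sensor_power_load_no_var_loads: line.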

So the major issue I'm having is that my configuration has 4 different batteries: 2 x LG Chem with SolarEdge, and 2 separate Sonnen batteries.
I'm trying to somehow make sense of it and get the sensors to work.

Just add them together so you have a total storage value and a total max charge & discharge value.
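In the EMHASS config that just means summing the figures into one virtual battery, e.g. (made-up numbers for illustration):

battery_nominal_energy_capacity: 40000   # sum of the four batteries' capacities in Wh
battery_charge_power_max: 11600          # sum of the individual max charge powers in W
battery_discharge_power_max: 11600       # sum of the individual max discharge powers in W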

So I'm going to post all my commands and setup. I've managed to do a whole lot since the weekend and my first post, including debugging my setup. (It is soooo sensitive; I'd copied someone's values and it wouldn't post anything.)

So, to start:

shell_command:
  dayahead_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/dayahead-optim"
  publish_data: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/publish-data "

rest_command:
  naive_mpc_optim:
    url: http://localhost:5000/action/naive-mpc-optim
    method: POST
    content_type: 'application/json'
    payload: >-
      {
        "prod_price_forecast": {{
          ([states('sensor.home_feed_in_price')|float(0)] +
          (state_attr('sensor.home_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list))
          | tojson 
        }},
        "load_cost_forecast": {{
          ([states('sensor.home_general_price')|float(0)] + 
          state_attr('sensor.home_general_forecast', 'forecasts') |map(attribute='per_kwh')|list) 
          | tojson 
        }},
        "pv_power_forecast": {{
          ([states('sensor.solar_total')|int(0)] +
          state_attr('sensor.solcast_pv_forecast_forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list +
          state_attr('sensor.solcast_pv_forecast_forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list
          )| tojson
        }},
        "prediction_horizon": {{
          min(48, (state_attr('sensor.home_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
        }},
        "num_def_loads": 1,
        "def_total_hours": [1,],
        "P_deferrable_nom": [1300,],
        "treat_def_as_semi_cont": [1,],
        "set_def_constant": [0,],
        "soc_init": {{(states('sensor.battery_total_2')|int(0))/100 }},
        "soc_final": 0.2
      }  

I've combined all my battery SOC sensor values into the battery total sensor. The issue is that it displays a 400% value.

hass_url: empty
long_lived_token: empty
costfun: profit
logging_level: INFO
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: first
set_total_pv_sell: false
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_nocharge_from_grid: false
set_nodischarge_to_grid: false
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
load_forecast_method: naive
sensor_power_photovoltaics: sensor.solar_total
sensor_power_load_no_var_loads: sensor.house_consumption
number_of_deferrable_loads: 1
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 1
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 5
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "10:00"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "16:00"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 20000
list_pv_module_model:
  - pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
list_pv_inverter_model:
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
list_surface_tilt:
  - surface_tilt: 22
list_surface_azimuth:
  - surface_azimuth: 75
list_modules_per_string:
  - modules_per_string: 20
list_strings_per_inverter:
  - strings_per_inverter: 1
set_use_battery: true
battery_discharge_power_max: 6600
battery_charge_power_max: 6600
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 40000
battery_minimum_state_of_charge: 0.1
battery_maximum_state_of_charge: 1
battery_target_state_of_charge: 0.1