EMHASS: An Energy Management Optimization for Home Assistant

I use the same strategy for EV charging:

    - name: def_total_hours_ev2
      state: "{{is_state('automation.p_deferable5_automation','on')|abs
             * (is_state('device_tracker.location_tracker','home')|abs) 
             * (is_state('binary_sensor.charger', 'on') | abs) 
             * (((states('number.charge_limit')|int(0) - states('sensor.battery')|int(0))/7.5/state_attr('sensor.charger_power','charger_phases')|int(0)+0.9)|int(0))|int(0)}}" 

This has the advantage as you say that the car is only scheduled for charging when it is home and plugged in. If not plugged in the optimisation can allocate the power to other loads which is efficient. It also schedules more hours based on the SOC so when the battery is getting low it gets higher priority.

The one downside I have encountered is that it schedules the car for the cheapest prices, as you would expect, but that may be in 6 or 18 hours' time. I have been thinking about a 'boost' flag to add an additional 4-6 hours to the schedule, to raise the priority and get the EV charging earlier, as the car may not be connected at exactly the cheapest time of the day.
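Something along these lines could be a starting point; this is only a sketch, assuming the sensor above is exposed as sensor.def_total_hours_ev2 and using a hypothetical input_boolean.ev_boost helper (the 4 extra hours are arbitrary):

    - name: def_total_hours_ev2_boosted
      state: >-
        {# Hypothetical boost wrapper: only add hours when the base sensor already
           wants charging (car home and plugged in) and the boost toggle is on #}
        {% set base = states('sensor.def_total_hours_ev2') | int(0) %}
        {% set boost = 4 if is_state('input_boolean.ev_boost', 'on') else 0 %}
        {{ base + boost if base > 0 else base }}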

Nope.
Amber data has dropped to 39 now.
With 39 PV elements it is still NOTRUN.
This is baffling.

Amber will keep dropping elements until 12:30 (each day the forecast only extends to 03:30), when it will then go back up to 48 elements (the full 24-hour forecast).

Have you tried adding in those additional payload elements? It sounds like your main config file is suffering from some corruption.

Conversely, take all the payload elements out and see if it will run, then return them one at a time.
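For example, a completely stripped-back call with no runtime payload at all (so EMHASS should fall back to the add-on defaults) would be something like this; if even this fails, the problem is in the configuration rather than in the payload:

curl -i -H "Content-Type: application/json" -X POST -d '{}' http://192.168.99.17:5000/action/naive-mpc-optim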


Test 1.

{
  "prod_price_forecast": {{
    ([states('sensor.cecil_st_feed_in_price')|float(0)] +
    (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list))
    | tojson 
  }},
  "load_cost_forecast": {{
    ([states('sensor.cecil_st_general_price')|float(0)] + 
    state_attr('sensor.cecil_st_general_forecast', 'forecasts') |map(attribute='per_kwh')|list) 
    | tojson 
  }},
  "pv_power_forecast": {{
    ([states('sensor.sonnenbatterie_84324_production_w')|int(0)] +
    state_attr('sensor.forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list +
    state_attr('sensor.forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list
    )| tojson
  }},
  "prediction_horizon": {{
    min(48, (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
  }},
  "alpha": 1,
  "beta": 0,
  "num_def_loads": 2,
  "def_total_hours": [2,2],
  "P_deferrable_nom":  [1300, 7360],
  "treat_def_as_semi_cont": [1, 0],
  "set_def_constant": [0, 0],
  "soc_init": {{ (states('sensor.sonnenbatterie_84324_state_charge_user')|int(0))/100 }},
  "soc_final": 0.1
}

Template output is valid JSON:

{
  "prod_price_forecast": [0.02, 0.06, 0.05, 0.02, 0.04, 0.05, -0.06, -0.06, -0.06, -0.06, 0.27, 0.27, 0.3, 0.3, 0.31, 0.37, 0.59, 0.62, 0.82, 0.82, 0.59, 0.59, 0.32, 0.32, 0.3, 0.13, 0.16, 0.31, 0.31, 0.31, 0.31, 0.12, 0.09, 0.09, 0.09, 0.07, 0.07, 0.07],
  "load_cost_forecast": [0.11, 0.15, 0.17, 0.13, 0.15, 0.16, 0.04, 0.04, 0.04, 0.04, 0.35, 0.36, 0.39, 0.39, 0.4, 0.46, 0.7, 0.74, 0.96, 0.96, 0.7, 0.7, 0.44, 0.44, 0.43, 0.23, 0.26, 0.43, 0.43, 0.43, 0.43, 0.22, 0.2, 0.2, 0.2, 0.17, 0.16, 0.16],
  "pv_power_forecast": [2833, 2776, 3245, 3490, 3649, 3735, 3743, 3686, 3561, 3352, 3086, 2681, 2160, 1677, 1136, 556, 64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 59, 420, 941, 1452, 1900, 2390, 2865, 3156, 3377, 3509, 3554, 3544, 3447, 3281, 2999, 2654, 2278, 1865, 1401, 902, 366, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  "prediction_horizon": 38,
  "alpha": 1,
  "beta": 0,
  "num_def_loads": 2,
  "def_total_hours": [2,2],
  "P_deferrable_nom":  [1300, 7360],
  "treat_def_as_semi_cont": [1, 0],
  "set_def_constant": [0, 0],
  "soc_init": 0.34,
  "soc_final": 0.1
}

Error NOTRUN when the Node-RED flow is executed.
Error when the curl command is executed from the command line.

Edit: Test 2. Tried adding each variable and got the same result. I will now reset the EMHASS configuration file to default, reconfigure and try again. After that I'll have to uninstall and reinstall EMHASS.

OK, I've reset the EMHASS config back to default and re-entered all the values to this, with debug on:

hass_url: empty
long_lived_token: empty
costfun: profit
logging_level: DEBUG
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: first
set_total_pv_sell: false
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_nocharge_from_grid: false
set_nodischarge_to_grid: false
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
load_forecast_method: naive
sensor_power_photovoltaics: sensor.sonnenbatterie_84324_production_w
sensor_power_load_no_var_loads: sensor.house_power_consumption_less_deferrable
number_of_deferrable_loads: 2
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 1300
  - nominal_power_of_deferrable_loads: 7360
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 2
  - operating_hours_of_each_deferrable_load: 1
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: "10:00"
  - peak_hours_periods_start_hours: "10:00"
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: "16:00"
  - peak_hours_periods_end_hours: "16:00"
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: false
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 14490
list_pv_module_model:
  - pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
list_pv_inverter_model:
  - pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
list_surface_tilt:
  - surface_tilt: 30
list_surface_azimuth:
  - surface_azimuth: 205
list_modules_per_string:
  - modules_per_string: 16
list_strings_per_inverter:
  - strings_per_inverter: 1
set_use_battery: true
battery_discharge_power_max: 3300
battery_charge_power_max: 3300
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 9300
battery_minimum_state_of_charge: 0.1
battery_maximum_state_of_charge: 1
battery_target_state_of_charge: 0.1

It still fails with the command-line curl:

curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast":[0.08, 0.09, 0.04, 0.04, 0.04, 0.05, 0.04, 0.35, 0.39, 0.39, 0.39, 0.39, 0.46, 0.65, 0.7, 0.7, 0.7, 0.69, 0.68, 0.43, 0.43, 0.39, 0.25, 0.23, 0.43, 0.43, 0.43, 0.29, 0.2, 0.2, 0.2, 0.2, 0.2, 0.19, 0.19],"prod_price_forecast":[-0.03, -0.02, -0.06, -0.06, -0.06, -0.06, -0.06, 0.27, 0.3, 0.3, 0.3, 0.3, 0.37, 0.54, 0.59, 0.59, 0.59, 0.57, 0.57, 0.31, 0.31, 0.27, 0.14, 0.13, 0.31, 0.31, 0.3, 0.18, 0.1, 0.09, 0.1, 0.09, 0.09, 0.09, 0.09],"pv_power_forecast":[3307, 3649, 3735, 3743, 3686, 3561, 3352, 3086, 2681, 2160, 1677, 1136, 556, 64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 59, 420, 941, 1452, 1900, 2390, 2865, 3156, 3377, 3509, 3554, 3544, 3447, 3281, 2999, 2654, 2278, 1865, 1401, 902, 366, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],"prediction_horizon":35,"soc_init":0.61,"soc_final":0.1,"def_total_hours":[2,2],"num_def_loads":2,"P_deferrable_nom":[1300,7360],"treat_def_as_semi_cont": [1, 0]}' http://192.168.99.17:5000/action/naive-mpc-optim

Logfile:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
2023-08-21 10:36:16,522 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2023-08-21 10:36:16,523 - web_server - INFO - Home Assistant data fetch will be performed using url: http://supervisor/core/api
2023-08-21 10:36:16,523 - web_server - INFO - The data path is: /share
2023-08-21 10:36:16,524 - web_server - INFO - Using core emhass version: 0.4.15
waitress   INFO  Serving on http://0.0.0.0:5000
2023-08-21 10:38:36,676 - web_server - INFO - Setting up needed data
2023-08-21 10:38:36,708 - web_server - INFO - Retrieve hass get data method initiated...
2023-08-21 10:38:36,724 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2023-08-21 10:38:36,724 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2023-08-21 10:38:36,724 - web_server - ERROR - Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 110, in set_input_data_dict
    rh.get_data(days_list, var_list,
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 147, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
UnboundLocalError: local variable 'df_day' referenced before assignment

This is a different error.

2023-08-21 10:38:36,724 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2023-08-21 10:38:36,724 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)

I assume this refers to the two configured sensors (sensor.sonnenbatterie_84324_production_w and sensor.house_power_consumption_less_deferrable), which are both OK?

Not sure what to do next. Delete the add-on and reinstall? But I'm not sure what is causing this.

I presume this isn't a new sensor and has more than 2 days of history?

Correct.

The negative figures occur when the Tesla stops charging but the Tesla integration doesn't seem to catch up; it is sometimes 10 minutes behind, reporting negative 7 kW. I've tried to address this by calling button.ynot_force_data_update before and after changes to charging_amps and before and after the Tesla charger switches on or off.
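An automation along those lines might look like this; just a sketch, where switch.ynot_charger is an assumed entity name (button.ynot_force_data_update is the button mentioned above):

- alias: "Force Tesla data update on charge events"
  trigger:
    # assumed charger switch entity; substitute whatever toggles your charging
    - platform: state
      entity_id: switch.ynot_charger
  action:
    - service: button.press
      target:
        entity_id: button.ynot_force_data_update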

OK, I've uninstalled and reinstalled EMHASS.
Reconfigured with the same settings.
Checked the original template output is valid JSON.
Set up in Node-RED the more recent template that Mark suggested and also checked its output is valid.
Added this template as a rest_command in configuration.yaml.
So I have three different MPC templates, two in Node-RED and one in configuration.yaml.
They all cause a NOTRUN error.
These are the three:

Original, i.e. what I was running yesterday without issue until I rebooted the system after installing browser-mod (which I haven't removed):

{"load_cost_forecast":{{(([states('sensor.cecil_st_general_price')|float(0)]+state_attr('sensor.cecil_st_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)[:48])
}},"prod_price_forecast":{{(([states('sensor.cecil_st_feed_in_price')|float(0)]+state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)[:48]) 
}},"pv_power_forecast":{{([states('sensor.sonnenbatterie_84324_production_w')|int(0)]+state_attr('sensor.forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list+state_attr('sensor.forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list)|tojson
}},"prediction_horizon":{{min(48, (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
}},"soc_init":{{(states('sensor.sonnenbatterie_84324_state_charge_user')|int(0))/100
}},"soc_final":0.1,"def_total_hours":[2,2],"num_def_loads":2,"P_deferrable_nom":[1300,7360],"treat_def_as_semi_cont":[1, 0]}

New format, which is much easier to read:

{
  "prod_price_forecast": {{
    ([states('sensor.cecil_st_feed_in_price')|float(0)] +
    (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list))
    | tojson 
  }},
  "load_cost_forecast": {{
    ([states('sensor.cecil_st_general_price')|float(0)] + 
    state_attr('sensor.cecil_st_general_forecast', 'forecasts') |map(attribute='per_kwh')|list) 
    | tojson 
  }},
  "pv_power_forecast": {{
    ([states('sensor.sonnenbatterie_84324_production_w')|int(0)] +
    state_attr('sensor.forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list +
    state_attr('sensor.forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list
    )| tojson
  }},
  "prediction_horizon": {{
    min(48, (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
  }},
  "num_def_loads": 2,
  "def_total_hours": [2,2],
  "P_deferrable_nom":  [1300, 7360],
  "treat_def_as_semi_cont": [1, 0],
  "set_def_constant": [0, 0],
  "soc_init": {{ (states('sensor.sonnenbatterie_84324_state_charge_user')|int(0))/100 }},
  "soc_final": 0.0
}

And the rest_command:

rest_command:
  naive_mpc_optim:
    url: http://localhost:5000/action/naive-mpc-optim
    method: POST
    content_type: 'application/json'
    payload: >-
      {
        "prod_price_forecast": {{
          ([states('sensor.cecil_st_feed_in_price')|float(0)] +
          (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list))
          | tojson 
        }},
        "load_cost_forecast": {{
          ([states('sensor.cecil_st_general_price')|float(0)] + 
          state_attr('sensor.cecil_st_general_forecast', 'forecasts') |map(attribute='per_kwh')|list) 
          | tojson 
        }},
        "pv_power_forecast": {{
          ([states('sensor.sonnenbatterie_84324_production_w')|int(0)] +
          state_attr('sensor.forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list +
          state_attr('sensor.forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow()) | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list
          )| tojson
        }},
        "prediction_horizon": {{
          min(48, (state_attr('sensor.cecil_st_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
        }},
        "num_def_loads": 2,
        "def_total_hours": [2,2],
        "P_deferrable_nom": [1300, 7360],
        "treat_def_as_semi_cont": [1, 0],
        "set_def_constant": [0, 0],
        "soc_init": {{(states('sensor.sonnenbatterie_84324_state_charge_user')|int(0))/100 }},
        "soc_final": 0.1
      }
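Once defined, the rest_command above is triggered from an automation or script like any other service; a minimal sketch for the automations file, with an arbitrary 30-minute time pattern to match the optimization timestep:

- alias: "Call EMHASS naive MPC optim"
  trigger:
    # run every 30 minutes, matching optimization_time_step
    - platform: time_pattern
      minutes: "/30"
  action:
    - service: rest_command.naive_mpc_optim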

They all produce something like this valid JSON output:

{
	"prod_price_forecast": [0.34, 0.33, 0.34, 0.45, 0.57, 0.57, 0.54, 0.45, 0.26, 0.21, 0.15, 0.11, 0.12, 0.16, 0.16, 0.13, 0.13, 0.1, 0.09, 0.09, 0.09, 0.07, 0.07, 0.07, 0.07, 0.07, 0.07, 0.07, 0.09, 0.11, 0.15, 0.09, 0.12, 0.18, 0.09, 0.07, 0.05, 0.05, 0.03, 0.02, 0.02, 0.02, 0.02, 0.02, 0.3, 0.31, 0.31, 0.3, 0.33],
	"load_cost_forecast": [0.43, 0.42, 0.43, 0.55, 0.69, 0.69, 0.65, 0.55, 0.38, 0.33, 0.25, 0.21, 0.23, 0.27, 0.27, 0.23, 0.23, 0.2, 0.19, 0.2, 0.19, 0.17, 0.16, 0.16, 0.16, 0.16, 0.17, 0.16, 0.19, 0.21, 0.25, 0.19, 0.22, 0.29, 0.19, 0.17, 0.16, 0.16, 0.15, 0.13, 0.13, 0.13, 0.13, 0.14, 0.39, 0.4, 0.4, 0.39, 0.42],
	"pv_power_forecast": [540, 739, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 59, 387, 891, 1389, 1837, 2300, 2725, 3011, 3247, 3441, 3600, 3646, 3539, 3351, 3043, 2650, 2239, 1808, 1337, 831, 305, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
	"prediction_horizon": 48,
	"num_def_loads": 2,
	"def_total_hours": [2, 2],
	"P_deferrable_nom": [1300, 7360],
	"treat_def_as_semi_cont": [1, 0],
	"set_def_constant": [0, 0],
	"soc_init": 1.0,
	"soc_final": 0.0
}

If I take this output and send it via a curl command such as this:

curl -i -H "Content-Type: application/json" -X POST -d '{  "prod_price_forecast": [0.35, 0.34, 0.34, 0.45, 0.57, 0.57, 0.54, 0.45, 0.26, 0.21, 0.15, 0.11, 0.12, 0.16, 0.16, 0.13, 0.13, 0.1, 0.09, 0.09, 0.09, 0.07, 0.07, 0.07, 0.07, 0.07, 0.07, 0.07, 0.09, 0.11, 0.15, 0.09, 0.12, 0.18, 0.09, 0.07, 0.05, 0.05, 0.03, 0.02, 0.02, 0.02, 0.02, 0.02, 0.3, 0.31, 0.31, 0.3, 0.33],  "load_cost_forecast": [0.44, 0.43, 0.43, 0.55, 0.69, 0.69, 0.65, 0.55, 0.38, 0.33, 0.25, 0.21, 0.23, 0.27, 0.27, 0.23, 0.23, 0.2, 0.19, 0.2, 0.19, 0.17, 0.16, 0.16, 0.16, 0.16, 0.17, 0.16, 0.19, 0.21, 0.25, 0.19, 0.22, 0.29, 0.19, 0.17, 0.16, 0.16, 0.15, 0.13, 0.13, 0.13, 0.13, 0.14, 0.39, 0.4, 0.4, 0.39, 0.42],  "pv_power_forecast": [504, 739, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 59, 387, 891, 1389, 1837, 2300, 2725, 3011, 3247, 3441, 3600, 3646, 3539, 3351, 3043, 2650, 2239, 1808, 1337, 831, 305, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  "prediction_horizon": 48,  "num_def_loads": 2,  "def_total_hours": [2,2],  "P_deferrable_nom":  [1300, 7360],  "treat_def_as_semi_cont": [1, 0],  "set_def_constant": [0, 0],  "soc_init": 1.0,  "soc_final": 0.0}' http://192.168.99.17:5000/action/naive-mpc-optim

I get this:

<!doctype html>
<html lang=en>
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

and logs like this from all methods of posting:

2023-08-21 16:20:08,398 - web_server - INFO - Setting up needed data
2023-08-21 16:20:08,401 - web_server - INFO - Retrieve hass get data method initiated...
2023-08-21 16:20:08,957 - web_server - ERROR - Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 110, in set_input_data_dict
    rh.get_data(days_list, var_list,
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 140, in get_data
    df_tp = df_raw.copy()[['state']].replace(
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/generic.py", line 5920, in astype
    new_data = self._mgr.astype(dtype=dtype, copy=copy, errors=errors)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/managers.py", line 419, in astype
    return self.apply("astype", dtype=dtype, copy=copy, errors=errors)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/managers.py", line 304, in apply
    applied = getattr(b, f)(**kwargs)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/blocks.py", line 580, in astype
    new_values = astype_array_safe(values, dtype, copy=copy, errors=errors)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/dtypes/cast.py", line 1292, in astype_array_safe
    new_values = astype_array(values, dtype, copy=copy)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/dtypes/cast.py", line 1237, in astype_array
    values = astype_nansafe(values, dtype, copy=copy)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/dtypes/cast.py", line 1098, in astype_nansafe
    result = astype_nansafe(flat, dtype, copy=copy, skipna=skipna)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/dtypes/cast.py", line 1181, in astype_nansafe
    return arr.astype(dtype, copy=True)
ValueError: could not convert string to float: 'NOTRUN'

@davidusb, when I uninstall EMHASS, is there a directory I should delete to clean up?

The way this system goes off the rails makes me think that it's something external to EMHASS itself. Some data irregularities that may clear up after 2 days, like the home power consumption less deferrables, which has some issues due to Tesla data not being as timely as it should be. I was getting negative values for a few minutes whenever the car stopped charging; then the sensor from the Tesla integration catches up and it goes back above the zero line again. I've rewritten the template to ignore the Tesla consumption if the calculation goes negative (hence the gap in the data).

I'm thinking this may need its own HA server, to separate it from the 5-year-old VM it's running on now. There is a lot of stuff running on the current HA server.

Thanks for your patience.

OMG, that makes so much sense… I was wondering why you had 2000.

Tomorrow's experiment is to look at the first 30 minutes and work out if I really need to wind back 29.999 minutes.


Have a look at timestamp rounding.

I use 'first' as I want my now/current values to be considered.

If you only want to take into account the future forecasts then the 'last' rounding method may work, but I haven't tried it.

What version of EMHASS should be in production?

That is the latest version


Yeah, here I have the same kind of problem, because most of the time my wife comes home at noon for a few hours, so she must charge immediately; a few hours later she goes back to work, and in the evening when she is back home it doesn't matter what time the car charges during the night.


I think it should be easy if we could set different prediction horizons for each deferrable load.

@davidusb, do you think this is possible?


Just as an update, I've left the system running dayahead-optim for 3 days with no issues, and then tested naive-mpc-optim tonight and it's working again.

I noticed that when the pool pump started or stopped drawing power, the total house consumption reported by the Zigbee power point was quickly updated with the pool consumption, but the total house consumption reported by the battery was not updated quite as quickly. This was causing a negative spike for a split second:

In the case of the EV charging it was even up to 13 minutes of negative consumption, with the Tesla integration not reporting that the car had stopped charging.

I've changed the template to avoid reporting negative house consumption:

# Calculate power consumption for Cecil St less the deferrable appliances
  - platform: template
    sensors:
      house_power_consumption_less_deferrables:
        unique_id: house_power_consumption_less_deferrables
        unit_of_measurement: W
        value_template: >-
          {% set consumption = states('sensor.sonnenbatterie_84324_consumption_w') | float(0) %}
          {% if is_state('switch.garage_power_point_l1', 'on') %}
            {# If the pool light is on subtract 11 watts #}
              {% set deferrable0 = states('sensor.garage_power_point_power') | float(0) - 11 %}
          {% else %}
              {% set deferrable0 = states('sensor.garage_power_point_power') | float(0) %}
          {% endif %}
          {% set deferrable1 = states('sensor.ynot_home_charge') | float(0) if is_state('device_tracker.ynot_location_tracker', 'home') else 0 %}
          {# the code below stops the result dropping below 0 when consumption hasn't caught up #}
          {% if (consumption - (deferrable0 + deferrable1)) | float(0) <= 0 %}
            {{ consumption | float(0) }}
          {% else %}
            {{ consumption - (deferrable0 + deferrable1) | float(0) }}
          {% endif %}

And I'm forcing updates of the Tesla integration before and after charge events.

You can increase the polling rate of the Tesla custom integration (i.e. poll more frequently) to avoid these issues.

In the above chart:

  • Car (HPWC) is a Shelly 3EM with Current Transformer (CT) clamps
  • Car (TWC) is the new Gen 3 Wall Connector that reports current and voltage via a REST interface
  • Deferrable Load 2 is the EMHASS request
  • EV power is a template computed from the voltage, current and phases reported by the car (Tesla custom integration); a sketch of such a template is shown below this list
  • M3P charger power is the power sensor from the custom integration (note this is rounded to int), hence my template for EV power.

Note they are all pretty much updated in real time as I have the Tesla custom integration set to poll every 10 seconds when connected:
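That EV power template is essentially volts x amps x phases; a rough sketch, where the three source entity IDs are placeholders rather than the actual Tesla integration names:

  - platform: template
    sensors:
      ev_power:
        unique_id: ev_power
        unit_of_measurement: W
        value_template: >-
          {# assumed entity names; power = voltage x current x number of phases #}
          {{ states('sensor.ynot_charger_voltage') | float(0)
             * states('sensor.ynot_charger_current') | float(0)
             * states('sensor.ynot_charger_phases') | int(1) }}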


Thanks for that, Mark.
That will fix the Tesla.
The battery update is still a second slow, so I may have to look at what the inverter can do for household consumption.
Thanks.

EDIT: As I finally found out in an earlier post further up this thread, naive-mpc-optim has a hard-coded limit of 24 hours of forecast. Hence my error. I will reconsider how I use the optimization.

See if you can help me out here. I am again trying to get a frequent naive-mpc-optim working that extends a bit further than 24 hours, so I can make full use of Nordpool prices when they are released. As Nordpool releases hourly energy prices at 13:00 every day and the prices always extend to the end of the next day, I can make a more predictable optimization where I can opt for a certain SOC at midnight.

But I get some errors when I send more than 24 hours of data to naive-mpc-optim. This is my REST JSON:

{ "prediction_horizon": 30,
  "soc_init": 1.0,
  "soc_target": 0.5,
  "def_total_hours": [],
  "load_cost_forecast": [3.52, 4.29, 3.51, 2.75, 2.76, 0.86, 0.86, 0.84, 0.82, 0.8, 0.79, 0.82, 0.85, 0.89, 2.47, 2.32, 2.14, 2.05, 1.92, 0.87, 0.85, 1.41, 1.91, 2.21, 2.5, 2.78, 2.64, 1.98, 0.79, 0.76],
  "prod_price_forecast": [3.21, 3.98, 3.2, 2.44, 2.45, 0.55, 0.55, 0.53, 0.51, 0.49, 0.48, 0.51, 0.54, 0.58, 2.16, 2.01, 1.83, 1.74, 1.61, 0.56, 0.54, 1.1, 1.6, 1.9, 2.19, 2.47, 2.33, 1.67, 0.48, 0.45],
  "pv_power_forecast": [875, 221, 6, 0, 0, 0, 0, 0, 0, 0, 0, 10, 149, 497, 965, 1397, 1689, 1914, 2028, 2011, 1886, 1662, 1373, 982, 446, 112, 3, 0, 0, 0]
}

And this is the error I get:

2023-08-24 17:52:49,565 - web_server - INFO - Setting up needed data
2023-08-24 17:52:49,568 - web_server - INFO - Retrieve hass get data method initiated...
2023-08-24 17:52:51,568 - web_server - INFO - Retrieving weather forecast data using method = list
2023-08-24 17:52:51,570 - web_server - INFO - Retrieving data from hass for load forecast using method = naive
2023-08-24 17:52:51,571 - web_server - INFO - Retrieve hass get data method initiated...
2023-08-24 17:52:56,482 - web_server - ERROR - Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "src/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.8/site-packages/emhass-0.4.14-py3.8.egg/emhass/command_line.py", line 127, in set_input_data_dict
    df_input_data_dayahead = copy.deepcopy(df_input_data_dayahead)[df_input_data_dayahead.index[0]:df_input_data_dayahead.index[prediction_horizon-1]]
  File "/usr/local/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 5039, in __getitem__
    return getitem(key)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/arrays/datetimelike.py", line 341, in __getitem__
    "Union[DatetimeLikeArrayT, DTScalarOrNaT]", super().__getitem__(key)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/arrays/_mixins.py", line 272, in __getitem__
    result = self._ndarray[key]
IndexError: index 29 is out of bounds for axis 0 with size 24

What's with that size-24 error? All three arrays I send in are 30 elements long.
The call works well if I set "prediction_horizon": 24, although I currently get an "Infeasible" result, which I have to try to resolve, but that's another story.

EDIT: See edit at top of post.

Hello,
I wanted to share a solution I've found to pass arbitrarily long datasets to the optimization process (there is a 255-character limit on sensor states).
I've not seen the same solution in previous comments of the thread (markpurcell was using templates in commands, but he was putting some execution code directly there), so I'm sharing it in case it's useful, as this potentially allows you to run arbitrarily complex code without needing to store the results somewhere.

In the following example I'm passing the hourly energy purchase costs for the next 24 hours at a 30-minute resolution.

In configuration.yaml I set up the shell command like this. I rely on {{templates}} to replace portions of the code and compose the command as needed.
In this example I'm passing the {{ api_endpoint }} string and the {{ load_cost_forecast }} list.
It doesn't like colons, and the solution I found is to use a small template for those as well: {{':'}}.

shell_command:
  # EMHASS component commands
  dayahead_optim: curl -i -H "Content-Type:application/json"  -X POST -d '{"solar_forecast_kwp"{{':'}}8, "load_cost_forecast"{{':'}}{{ load_cost_forecast }}}' {{ api_endpoint }}
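When rendered, the command ends up as a regular curl call; a shortened illustration of what gets executed (only three forecast values shown here):

curl -i -H "Content-Type:application/json" -X POST -d '{"solar_forecast_kwp":8, "load_cost_forecast":[0.126, 0.126, 0.103]}' http://localhost:5000/action/dayahead-optim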

The parameters are built within the automation. Be careful: you can't use the UI editor, but have to create your own emhass_automation.yaml file, otherwise it will not work (the parameters are replaced/evaluated when the file is loaded into the configuration).
My "test" automation, which I think is the interesting part here:

- id: 'test'
  alias: "test"
  description: "test_emhass_command"
  trigger: []
  condition: []
  action:
    - service: shell_command.dayahead_optim
      data:
        api_endpoint: "http://localhost:5000/action/dayahead-optim"
        load_cost_forecast: >
          {% set ns_forecast = namespace(forecast=[]) %}
          {% for item in states.sensor|selectattr('entity_id', 'search', 'pun_oggi_')|sort(attribute='entity_id', reverse= false )|map(attribute='entity_id')|list %} 
          {% if (now().time()) < strptime(state_attr(item,'start'),'%H:%M:%S').time() %}
            {#% set ns_forecast.forecast = ns_forecast.forecast + [item] %#}  {# this is for debugging and check which sensors I'm using in the loop - (un)comment as needed #}
            {% set ns_forecast.forecast = ns_forecast.forecast + [((states(item)|round(3)))|float] %} {# this is for 24h forecasts with 1h resolution #}
            {% set ns_forecast.forecast = ns_forecast.forecast + [((states(item)|round(3)))|float] %} {# this is for 24h forecasts with 30' resolution - (un)comment as needed #}
          {% endif %}
          {% endfor %}
          {% for item in states.sensor|selectattr('entity_id', 'search', 'pun_domani_')|sort(attribute='entity_id', reverse= false )|map(attribute='entity_id')|list %} 
          {% if (now().time()) >= strptime(state_attr(item,'start'),'%H:%M:%S').time() %}
            {#% set ns_forecast.forecast = ns_forecast.forecast + [item] %#}  {# this is for debugging and check which sensors I'm using in the loop - (un)comment as needed #}
            {% set ns_forecast.forecast = ns_forecast.forecast + [((states(item)|round(3)))|float] %} {# this is for 24h forecasts with 1h resolution #}
            {% set ns_forecast.forecast = ns_forecast.forecast + [((states(item)|round(3)))|float] %} {# this is for 24h forecasts with 30' resolution - (un)comment as needed #}
          {% endif %}
          {% endfor %}
          {{ ns_forecast.forecast }}
  mode: single

The Jinja code is run (I have 48 sensors with the hourly energy cost for today and tomorrow) and the final list (the next 24 hours at 30-minute resolution, so 48 values in total) is assigned to the load_cost_forecast parameter, which is then passed to the curl command.

When the automation is launched this is the trace:

Result:

params:
  domain: shell_command
  service: dayahead_optim
  service_data:
    api_endpoint: http://localhost:5000/action/dayahead-optim
    load_cost_forecast:
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.126
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
      - 0.103
  target: {}
running_script: false

And this is the result when testing the same code in the template section of the dev tools:

This is the EMHASS log:

2023-08-26 16:05:07,631 - web_server - INFO - Setting up needed data
2023-08-26 16:05:07,646 - web_server - INFO - Retrieving weather forecast data using method = solar.forecast
2023-08-26 16:05:08,350 - web_server - INFO - Retrieving data from hass for load forecast using method = naive
2023-08-26 16:05:08,439 - web_server - INFO - Retrieve hass get data method initiated...
2023-08-26 16:07:00,234 - web_server - INFO -  >> Performing dayahead optimization...
2023-08-26 16:07:00,244 - web_server - INFO - Performing day-ahead forecast optimization
2023-08-26 16:07:00,428 - web_server - INFO - Perform optimization for the day-ahead
2023-08-26 16:07:03,173 - web_server - INFO - Status: Optimal
2023-08-26 16:07:03,176 - web_server - INFO - Total value of the Cost function = 0.67

I hope it is useful.
Feel free to let me know what you think or if you see something weird/wrong.