EMHASS: Energy Management for Home Assistant

Am I wrong with this then? (alpha and beta at the end)
I don’t see an effect here:
{"pv_power_forecast": [1449.6,527.4,421.95,311.40000000000003,280.75,247.25,187,80.19999999999999,34.1,3.25,0,0,0,0,0,0,0,0,0,0,0,0,0,0], "prod_price_forecast": [0.197772,0.197772,0.20178,0.20178,0.199596,0.199596,0.20890799999999998,0.20890799999999998,0.204132,0.204132,0.20418,0.20418,0.218592,0.218592,0.19314,0.19314,0.18082800000000002,0.18082800000000002,0.16700399999999999,0.16700399999999999,0.15874799999999997,0.15874799999999997,0.126312,0.126312], "prediction_horizon":24, "alpha":0, "beta":1}

Yes that’s ok.

An effect on what? What are you expecting?

Sorry, you may need to swap them: alpha=1, beta=0 to give priority to the "now" values.
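As I understand the naive MPC, alpha and beta weight how much the most recent measured value versus the supplied forecast counts in the first time slot. A minimal sketch of that blending (the function name and the exact formula here are my assumption for illustration, not EMHASS internals):

```python
def mix_forecast(now_value, forecast, alpha, beta):
    """Blend the live sensor reading into the first forecast slot.

    alpha weights the 'now' value, beta weights the provided forecast.
    Illustrative only -- the real EMHASS mixing may differ in detail.
    """
    mixed = list(forecast)
    mixed[0] = alpha * now_value + beta * forecast[0]
    return mixed

# alpha=1, beta=0: the first slot is replaced by the live value
print(mix_forecast(1200.0, [1449.6, 527.4], alpha=1, beta=0)[0])  # 1200.0
# alpha=0, beta=1: the supplied forecast is kept untouched
print(mix_forecast(1200.0, [1449.6, 527.4], alpha=0, beta=1)[0])  # 1449.6
```

So with alpha=0, beta=1 you would see no effect from the live reading, which matches the behaviour described above.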

Crazy plan here tomorrow, in the middle of a heat wave at the tail end of summer.

Optimisation looks pretty standard, charge battery during the day over the midday solar soak, run some deferrable loads in between the gaps and then discharge the battery during the evening peak.

This is where the fun starts: our market operator has a cap on export prices of $16/kWh (yes, that isn't a typo), the national market runs out of capacity during our heat wave, giving a 5 hour price spike tomorrow night at the export cap price, and I have just upgraded my battery capacity to 43 kWh to match my solar production.

The EMHASS forecast cost for tomorrow is a credit of over $400!

In reality these high prices will likely collapse during the day, but we will probably still end up with one or two 30 minute slots as a price spike. A great use case for the benefit of EMHASS, thanks @davidusb.

Great to see it planning those crazy credits! Cheers :partying_face:

Thanks, on a first glance this looks better!

@davidusb it is about EMHASS showing strange numbers, which I didn't feed in, for PV power in the current hour.

But I have an even more important problem now:
After also including load_cost_forecast in my MPC call, EMHASS gets stuck and the Docker container's CPU usage slowly rises to 1xx % from one MPC call to the next.
My call now looks like this: {"pv_power_forecast":[{{{pv_power_forecast}}}], "prod_price_forecast":[{{{prod_price_forecast}}}], "load_cost_forecast":[{{{load_cost_forecast}}}], "prediction_horizon":{{{prediction_horizon}}}, "def_total_hours":[{{{deferrable1}}}], "alpha":1, "beta":0}

Output:

{"pv_power_forecast":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13,42.2,81.8,146.1,179.9,210.85000000000002,251.14999999999998,290.04999999999995,326.05,346.1,348.84999999999997,329.6,288.2,245,197.9,125.45,63.45,29.75,4.7,0,0,0,0,0,0,0,0,0,0,0,0], "prod_price_forecast":[0.155028,0.127824,0.127824,0.119724,0.119724,0.112392,0.112392,0.105780,0.105780,0.143676,0.143676,0.182040,0.182040,0.201396,0.201396,0.218376,0.218376,0.210600,0.210600,0.210180,0.210180,0.195408,0.195408,0.193032,0.193032,0.167748,0.167748,0.160596,0.160596,0.159216,0.159216,0.173892,0.173892,0.166356,0.166356,0.177036,0.177036,0.186252,0.186252,0.180168,0.180168,0.160272,0.160272,0.144336,0.144336,0.138288,0.138288], "load_cost_forecast":[0.261612,0.233592,0.233592,0.225249,0.225249,0.217697,0.217697,0.210886,0.210886,0.249919,0.249919,0.289434,0.289434,0.309371,0.309371,0.326860,0.326860,0.318851,0.318851,0.318418,0.318418,0.303203,0.303203,0.300756,0.300756,0.274713,0.274713,0.267347,0.267347,0.265925,0.265925,0.281042,0.281042,0.273279,0.273279,0.284280,0.284280,0.293772,0.293772,0.287506,0.287506,0.267013,0.267013,0.250599,0.250599,0.244369,0.244369], "prediction_horizon":47, "def_total_hours":[5.17], "alpha":1, "beta":0}

I already tried rounding the values of load_cost_forecast and prod_price_forecast to 6 digits, but this didn't help either. Without load_cost_forecast everything is working.

Any idea why this happens?
Thanks

EDIT:
Running it with a shorter prediction horizon (tested it with 32) works. I don’t think the power of the Synology DS916+ is the limiting factor here. @davidusb Is there a maximum length of the shell command?

@davidusb, interesting to note that over my 5 hour price spike window, which is longer than the batteries have capacity for, the EMHASS optimisation is to discharge at 100% (15 kW) for a time slot, then rest, then discharge at 100% for another time slot, instead of discharging at a constant rate (say 6 kW) for the duration.

Is this something inherent in the coding?

I do not understand this one at all.

No, it is not a problem of processing power; it is a problem of the number of linear equations to solve. The optimization problem is simply too big for our open source solvers. The solution is what you did: reduce the prediction horizon. There is a maximum length of data allowed on the shell command, but I cannot give a number because it is specific to each configuration. You have to find it, as you did, by trial and error, or just use shorter prediction horizons by default. In any case, if you launch an optimization task and the solution takes more than a few seconds to show up, this suggests that your optimization problem is too big. Too big means, as we said, a prediction horizon that is too long or too many deferrable loads.
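As a rough guard against hitting the shell's argument-length limit, one could measure the serialized payload before shelling out to curl. A sketch of the idea (the 100 kB threshold is purely a guess on my part; the real limit, ARG_MAX, varies per system and shell):

```python
import json

# Hypothetical pre-flight check: measure the JSON body that would be
# passed to `curl -d '...'` on the command line. The safe size depends
# on the system's ARG_MAX, so this threshold is an assumption to tune.
MAX_SAFE_BYTES = 100_000

payload = {
    "pv_power_forecast": [0.0] * 48,
    "prod_price_forecast": [0.2] * 48,
    "load_cost_forecast": [0.3] * 48,
    "prediction_horizon": 48,
}
body = json.dumps(payload)
print(len(body))  # size of the shell argument this payload would need
assert len(body) < MAX_SAFE_BYTES, "payload too large for a shell command"
```

If the payload is near the limit, sending the request from Python (e.g. with an HTTP client) instead of via a shell command sidesteps the ARG_MAX issue entirely, though the solver-size limit discussed above still applies.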

1 Like

Yes, it seems weird, right? Indeed it would be better to discharge at a constant rate. The optimizer just found the schedule giving the lowest possible value for the given cost function (defaults to profit) under the constraints. There are no constraints on the maximum power slope between time slots, so with the current code this is not a bug but just a possible mathematical result. This suggests that it may be interesting to add an additional constraint for a maximum slope value, i.e. power/time, for each deferrable load and for the battery power. This is an improvement that may be added in the future.
Another solution may be to filter the provided power schedule outside EMHASS, directly in Home Assistant using automations. The automations could be based on a sensor that is a filtered version of the schedule provided by EMHASS.
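A minimal sketch of such an external slope-limiting filter, which could back a template sensor or automation in Home Assistant (illustrative only; `limit_slope` is not an EMHASS function, and note that a simple clamp like this changes the total energy delivered, so it only conveys the idea):

```python
def limit_slope(schedule, max_step):
    """Clamp slot-to-slot changes of a power schedule to +/- max_step watts.

    An external smoothing filter applied to the schedule EMHASS publishes,
    not part of EMHASS itself.
    """
    out = [schedule[0]]
    for p in schedule[1:]:
        prev = out[-1]
        # move toward the requested power, but no faster than max_step
        step = max(-max_step, min(max_step, p - prev))
        out.append(prev + step)
    return out

# 15 kW bang-bang discharge smoothed with a 6 kW-per-slot ramp limit
print(limit_slope([15000, 0, 15000, 0], max_step=6000))
# -> [15000, 9000, 15000, 9000]
```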

1 Like

Thanks, I decreased the prediction_horizon to 24 and it’s working now.

My deferrable load gets scheduled as if it could modulate (green line), which my heat pump is not able to do:

Is this about using MPC? I do not remember seeing different watts for the same deferrable load in different hours (it was always at 640) when using next-day optimization.

Having a deferrable load taking different values is usually from:

treat_def_as_semi_cont: Define if we should treat each deferrable load as a semi-continuous variable. Semi-continuous variables are variables that must take a value between their minimum and maximum or zero.

Set this to true for your heat pump.

https://emhass.readthedocs.io/en/latest/config.html#optimization-configuration-parameters

1 Like

Question on MPC and prediction horizon:
Can I forecast for more than 24h?
I use 30 min intervals for MPC and it works well as long as I provide data with up to 48 intervals. I can use fewer than 48 without problem, but if I want to send data with more than 48 intervals, I always get an error.

Use case: using MPC to forecast from midday today until midnight tomorrow (i.e. 36 hours). When using 30 min intervals, I need to set prediction horizon to 72 and provide data for 72 time slots.

Here is the error I get (prediction horizon set to 50 and corresponding data):

[2023-02-05 13:44:53,672] INFO in command_line: Setting up needed data
[2023-02-05 13:44:53,679] INFO in retrieve_hass: Retrieve hass get data method initiated…
[2023-02-05 13:44:55,362] INFO in forecast: Retrieving weather forecast data using method = list
[2023-02-05 13:44:55,366] INFO in forecast: Retrieving data from hass for load forecast using method = naive
[2023-02-05 13:44:55,368] INFO in retrieve_hass: Retrieve hass get data method initiated…
[2023-02-05 13:44:58,236] ERROR in app: Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 135, in action_call
input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 119, in set_input_data_dict
df_input_data_dayahead = copy.deepcopy(df_input_data_dayahead)[df_input_data_dayahead.index[0]:df_input_data_dayahead.index[prediction_horizon-1]]
File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 5039, in __getitem__
return getitem(key)
File "/usr/local/lib/python3.9/dist-packages/pandas/core/arrays/datetimelike.py", line 341, in __getitem__
"Union[DatetimeLikeArrayT, DTScalarOrNaT]", super().__getitem__(key)
File "/usr/local/lib/python3.9/dist-packages/pandas/core/arrays/_mixins.py", line 272, in __getitem__
result = self._ndarray[key]
IndexError: index 49 is out of bounds for axis 0 with size 48

Call function:

  post_mpc_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"pv_power_forecast\":{{
    (state_attr('sensor.emhass_forecast_data', 'pv_estimate')|list)[:horizon]}}, \"load_cost_forecast\":{{
    (state_attr('sensor.emhass_forecast_data', 'tibber_prices')|list)[:horizon]}}, \"prediction_horizon\":{{
    min(horizon,(state_attr('sensor.emhass_forecast_data', 'counter_t')|int))}},\"soc_init\":{{
    states('sensor.powerwall_charge')|float(0)/100}},\"soc_final\":0.4,\"def_total_hours\":[8,3]}' http://localhost:5000/action/naive-mpc-optim"

Resulting string:

post_mpc_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"pv_power_forecast\":[1694.6, 1559.5, 1366.1000000000001, 1172.0, 821.5, 476.0, 232.4, 33.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 158.79999999999998, 601.0, 1386.6000000000001, 2066.4, 2839.6, 3807.5, 4730.2, 5490.1, 6003.2, 6239.4, 6011.9, 5176.3, 4139.7], \"load_cost_forecast\":[0.2698, 0.2731, 0.2731, 0.2809, 0.2809, 0.2914, 0.2914, 0.3181, 0.3181, 0.3465, 0.3465, 0.3459, 0.3459, 0.3399, 0.3399, 0.3254, 0.3254, 0.3279, 0.3279, 0.3141, 0.3141, 0.3151, 0.3151, 0.3135, 0.3135, 0.3155, 0.3155, 0.3116, 0.3116, 0.3092, 0.3092, 0.3197, 0.3197, 0.3515, 0.3515, 0.3813, 0.3813, 0.3964, 0.3964, 0.3813, 0.3813, 0.3627, 0.3627, 0.3565, 0.3565, 0.3433, 0.3433, 0.3343, 0.3343, 0.3375], \"prediction_horizon\":50,\"soc_init\":0.14,\"soc_final\":0.4,\"def_total_hours\":[8,3]}' http://localhost:5000/action/naive-mpc-optim"

When setting the horizon to 48 or less, everything works. Am I doing something wrong, or is there a limitation?

thanks

You have set the prediction horizon to 50 but you are passing a list with 48 elements, hence the error.
If you want to go up to 36 h then set a higher time step: instead of 30 min take 1 h, or even higher (2 h, 3 h?).
If you fix it to 1 h, then passing a list with no fewer than 36 elements and setting a prediction horizon of 36 should do fine.
If you do this, be careful to transform the data in your passed lists to a 1 h time step (if you choose that).
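Converting existing 30-min lists to a 1 h time step can be as simple as averaging consecutive pairs. A minimal pure-Python sketch (assumes the list starts on the hour; a proper time-series resampler such as pandas would be preferable in practice):

```python
def resample_to_hourly(values_30min):
    """Average consecutive pairs of 30-min values into 1-h values.

    A minimal stand-in for a real resample; assumes an even number of
    slots aligned to the hour.
    """
    assert len(values_30min) % 2 == 0, "need an even number of 30-min slots"
    return [(a + b) / 2 for a, b in zip(values_30min[::2], values_30min[1::2])]

prices_30min = [0.2698, 0.2731, 0.2731, 0.2809]  # 2 h of 30-min prices
print(resample_to_hourly(prices_30min))  # two hourly averages
```

For power forecasts in watts, averaging is also the right operation (the values are rates, not energies); for energy in kWh per slot you would sum the pairs instead.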

Thanks, it is already set to true. But how would I define the minimum and maximum?
My current conf is:

  - P_deferrable_nom: # Watts
    - 643.0
  - def_total_hours: # hours
    - 5
  - treat_def_as_semi_cont: # treat this variable as semi continuous 
    - True
  - set_def_constant: # set as a constant fixed value variable with just one startup for each 24h
    - False

In the MPC call, I overwrite def_total_hours.

David, thanks, but I'm not sure I fully follow. In my MPC call example I did set the prediction horizon to 50, and my lists for PV estimate and electricity prices each include 50 items => this should be consistent, and I expected that the optimization could be performed (as long as the number of data items passed and the prediction horizon are consistent, regardless of using 24, 36, 48, 50, 72 or 100 items).

Are you saying that there is a limitation of a maximum prediction horizon of 48? Or did I make a stupid mistake in my function call that I am missing?

Thanks

You said that you use 30 min time step. So if you fixed the prediction horizon to 50 then the provided list should have at least 100 elements.

The way these semi-continuous variables are coded is that they either have a value of 0 or a value equal to the nominal power of your deferrable load defined in your configuration.
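The rule described above can be stated as a tiny feasibility check; the function here is only illustrative, not EMHASS API:

```python
def feasible_semi_cont(power, p_nom):
    """With treat_def_as_semi_cont, each slot's deferrable power must be
    either 0 or exactly the nominal power (P_deferrable_nom).

    Illustrative helper mirroring the behaviour described above.
    """
    return power == 0 or power == p_nom

print(feasible_semi_cont(0, 643.0))      # True: load off
print(feasible_semi_cont(643.0, 643.0))  # True: load at nominal power
print(feasible_semi_cont(320.0, 643.0))  # False: modulation not allowed
```

So if intermediate values like the 640 W vs. lower watts seen above still appear, it would suggest the semi-continuous setting is not actually being applied to that load.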

Ok, but if this is already set to true, why is it still using values between min and max?

I’m unfortunately having another issue.
With the following:
{"pv_power_forecast":[865.4,749.05,534.4499999999999,542.65,789.6999999999999,1097,1036.4,768.05,482.65000000000003,305.20000000000005,174.45,73.3,15.350000000000001,0,0,0,0,0,0,0,0,0,0,0,0,0,0], "prod_price_forecast":[0.216144,0.207648,0.207648,0.195576,0.195576,0.186276,0.186276,0.187824,0.187824,0.198324,0.198324,0.212400,0.212400,0.232968,0.232968,0.250440,0.250440,0.240696,0.240696,0.217692,0.217692,0.193980,0.193980,0.185352,0.185352,0.179040,0.179040], "load_cost_forecast":[0.324561,0.315810,0.315810,0.303376,0.303376,0.293797,0.293797,0.295392,0.295392,0.306207,0.306207,0.320705,0.320705,0.341890,0.341890,0.359886,0.359886,0.349850,0.349850,0.326156,0.326156,0.301732,0.301732,0.292845,0.292845,0.286344,0.286344], "prediction_horizon":27, "def_total_hours":[1.3181818181818175], "alpha":1, "beta":0}

I get the same P_Load for each hour:

Changing alpha and beta only has an effect on the first line.

Thanks