EMHASS: Energy Management for Home Assistant

Here are some of my configurations to get EMHASS working with the Amber Electric (Australia) custom component.

@madpilot @kc_au @ThirtyDursty

The magic happens in the shell commands:

shell_command:
  dayahead_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/dayahead-optim"
  publish_data: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/publish-data "
  post_amber_forecast: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"prod_price_forecast\":{{(
          state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)
          }},\"load_cost_forecast\":{{(
          state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)
          }}}' http://localhost:5000/action/dayahead-optim"
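  # note: post_emhass_forecast below is a work in progress - it also injects the Solcast value (see the discussion further down)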
  post_emhass_forecast: "curl -i -H 'Content-Type: application/json' -X POST -d '{\"prod_price_forecast\":{{(
          state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)
          }},{{states('sensor.solcast_24hrs_forecast')}},\"load_cost_forecast\":{{(
          state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)
          }}}' http://localhost:5000/action/dayahead-optim"

  post_mpc_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{(
          ([states('sensor.amber_general_price')|float(0)] +
          state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)[:48])
          }}, \"prod_price_forecast\":{{(
          ([states('sensor.amber_feed_in_price')|float(0)] +
          state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)[:48]) 
          }}, \"prediction_horizon\":{{min(48,
          (state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
          }},\"soc_init\":{{states('sensor.powerwall_charge')|float(0)/100}},\"soc_final\":0.05,\"def_total_hours\":[8,3,2,2]}' http://localhost:5000/action/naive-mpc-optim"

But let me unpack this for you. I recommend doing a lot of the groundwork in the Developer Tools Template editor first.

{{state_attr('sensor.p_batt_forecast', 'forecasts')|map(attribute='p_batt_forecast')|list|length}}
{{state_attr('sensor.p_batt_forecast', 'forecasts')|map(attribute='p_batt_forecast')|list}}

"soc_init":{{states('sensor.powerwall_charge')|float(0)/100}}
"prediction_horizon":{{min(48,state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length+1)}}
"load_cost_forecast":{{([states('sensor.amber_general_price')|float] + 
                         state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)[:48]}}
"prod_price_forecast":{{([states('sensor.amber_feed_in_price')|float] + 
                         state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)[:48]}}
"load_now": {{(states('sensor.powerwall_load_now')|float(0)*1000)|round(0)}}
"pv_now": {{(states('sensor.powerwall_solar_now')|float(0)*1000)|round(0)}}
"def_total_hours [EV 11 kW]": {{ state_attr('sensor.duka_charging_rate_sensor','time_left')|int(2)}}
"def_total_hours [Pool Filter 1.4 kW]": 4
"def_total_hours [HVAC humidity 4 kW]": {{max(0,(state_attr('climate.family','current_humidity')|int(0) - 60)/10)|int(0)}}
"def_total_hours [HVAC temp 4 kW]": {{max(0,(state_attr('climate.bedroom','current_temperature')|float(0) -
                                        state_attr('climate.daikin_ap46401','temperature')|float))}}
"def_total_hours [HVAC No People 4 kW]": {{states('zone.home')}}
"def_total_hours [Pool Heater 5 kW]": {{states('input_number.pool_temperature_set_point')|int - 
                                   states('input_number.pool_temperature')|int}}

curl -i -H "Content-Type: application/json" -X POST -d '{"prod_price_forecast":{{(
             (state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list) )
          }},"load_cost_forecast":{{(
             (state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list) )
          }},"prediction_horizon":{{(state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)
          }},"soc_init":{{states('sensor.powerwall_charge')|float(0)/100
          }},"soc_final":0.05,"def_total_hours":[4,5,3,1]}' http://localhost:5000/action/naive-mpc-optim

  post_mpc_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"prod_price_forecast\":{{(
          [states('sensor.amber_feed_in_price')|float(0)] +
          (state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)[:47]) 
          }},\"load_cost_forecast\":{{(
          [states('sensor.amber_general_price')|float(0)] +
          (state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)[:47])
          }},\"prediction_horizon\":{{min(48,
          (state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
          }},\"soc_init\":{{states('sensor.powerwall_charge')|float(0)/100}},\"soc_final\":0.05,\"def_total_hours\":[4,5]}' http://localhost:5000/action/naive-mpc-optim"

My automations: either a single day-ahead optimisation, or MPC running every minute.

alias: EMHASS day-ahead optimization
description: ''
trigger:
  - platform: time_pattern
    minutes: '25'
    hours: '5'
condition: []
action:
  - service: shell_command.post_amber_forecast
    data: {}
  - service: shell_command.publish_data
    data: {}
mode: single
alias: EMHASS MPC optimization
description: ''
trigger:
  - platform: time_pattern
    minutes: /1
condition: []
action:
  - service: shell_command.post_mpc_optim
    data: {}
  - service: shell_command.publish_data
    data: {}
mode: single
alias: p_deferrable1 automation
description: ''
trigger:
  - platform: numeric_state
    entity_id: sensor.p_deferrable1
    above: '0.1'
    for:
      hours: 0
      minutes: 2
      seconds: 0
  - platform: numeric_state
    entity_id: sensor.p_deferrable1
    for:
      hours: 0
      minutes: 2
      seconds: 0
    below: '0.1'
condition: []
action:
  - choose:
      - conditions:
          - condition: numeric_state
            entity_id: sensor.p_deferrable1
            above: '0.1'
        sequence:
          - type: turn_on
            device_id: 03a766ec3275e2c80020c5f5a3d8e603
            entity_id: switch.pool_heater_pump
            domain: switch
          - service: notify.mobile_app_pixel_6
            data:
              message: Pool Heater on
    default:
      - type: turn_off
        device_id: 03a766ec3275e2c80020c5f5a3d8e603
        entity_id: switch.pool_heater_pump
        domain: switch
      - service: notify.mobile_app_pixel_6
        data:
          message: Pool Heater off
mode: single
alias: p_batt automation
description: ''
trigger:
  - platform: numeric_state
    entity_id: sensor.p_batt_forecast
    below: '-4000'
condition: []
action:
  - choose:
      - conditions:
          - condition: numeric_state
            entity_id: sensor.p_batt_forecast
            below: '-3000'
        sequence:
          - service: notify.mobile_app_pixel_6
            data:
              title: p_batt alert {{states('sensor.p_batt_forecast')}} - mode:backup
              message: d{{states('sensor.amber_general_price')}} $/kWh
          - service: tesla_gateway.set_operation
            data:
              real_mode: backup
      - conditions:
          - condition: numeric_state
            entity_id: sensor.p_batt_forecast
            above: '4900'
        sequence:
          - service: tesla_gateway.set_operation
            data:
              real_mode: autonomous
              backup_reserve_percent: 1
          - service: notify.mobile_app_pixel_6
            data:
              title: >-
                p_batt alert {{states('sensor.p_batt_forecast')}} -
                mode:autonomous
              message: d{{states('sensor.amber_general_price')}} $/kWh
    default:
      - service: notify.mobile_app_pixel_6
        data:
          title: >-
            p_batt alert {{states('sensor.p_batt_forecast')}} -
            mode:self_consumption
          message: d{{states('sensor.amber_general_price')}} $/kWh
      - service: tesla_gateway.set_operation
        data:
          real_mode: self_consumption
          backup_reserve_percent: 2
mode: single

Automation to get the Tesla Powerwall Gateway to discharge to the grid at 0700 and 1700 for 60 minutes (during the sunrise/sunset peaks) for maximum FIT and to help the grid when it is under stress.

I recommend getting the excellent TeslaPy library talking to your Powerwall first (you need the cache.json anyway).

Update: this can all now be controlled with the Tesla Custom Integration.

You need to do some setup first:

Tonight during the price spike event I had Home Assistant switch the PW2 to Time-Based Control and my battery started exporting at 5 kW, until 1800 when HA switched the PW2 back to Self-Powered mode and stopped exporting. This needs the following TOU settings to be pre-loaded on the PW2, but I find the highest FIT has regularly been occurring around this time, so I can automate the mode switching with HA.

I do the following to discharge the PW2 at 5 kW during peak FIT between 5 pm and 6 pm.

Set up the time-of-use settings and prices per the attached screenshot.

Switch battery to Self-Powered and it will store and consume excess solar to match household production and load.

Switch to Time-Based Control and the battery will charge from the grid during the off-peak window and discharge to the grid during peak (and sometimes mid-peak).

So at 6 am this morning TBC mode discharged the battery to get maximum FIT. At 1030 it started consuming excess solar and charging from the grid (up to 5 kW). At 1330 it (generally) stops charging from the grid but still charges off excess solar. At 1700, ideally, your battery is at 100% and it will start discharging at 5 kW as it tries to empty the battery before the next off-peak window at 1800, to gain maximum FIT for the day.

The next step is very important: before 1800, switch back to self-consumption mode, otherwise it will start charging from the grid at a very expensive price. There are some new advanced grid controls that would mitigate this.

You can edit the price schedule in the app to extend the discharging window. The best I have done is a $60 credit during a price spike event. But be aware that changing the schedule doesn't take effect immediately; I find changes take effect after the next quarter-hour block (i.e. minute 0, 15, 30 or 45).

If you are uncomfortable with what the battery is doing, put it back into self-powered mode and it should settle within seconds.

You can tell the battery to charge at any time by increasing the reserve % (albeit slower, at 3.3 kW).

I find this method extends the charging/discharging windows for longer than SmartShift, if that is what you want. It works well with Amber pricing being so different at different times.

Home Assistant/Python scripts allow you to change between the modes self_consumption, backup (100% reserve) and autonomous (time-based control), or set the reserve % directly, so you can automate the above process based on your own rules. Home Assistant doesn't allow you to change your TOU windows.
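
For illustration, a minimal Developer Tools Services sketch of such a mode change, using the same tesla_gateway.set_operation service as the automations above (the reserve value here is just an example):

service: tesla_gateway.set_operation
data:
  real_mode: self_consumption   # or backup, or autonomous (time-based control)
  backup_reserve_percent: 20    # optional: set the reserve % directly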

This is actually quite helpful, as it supplies extra capacity to the grid when the grid is under stress, and you gain access to the higher FIT and the financial benefits.


Nice setup, thanks for sharing. This can be really helpful for Amber AU users :+1:


Nice graphics with ApexCharts too! There is also the option of the Plotly card from HACS. The local UI uses the Plotly library for the Python charts… That local UI is just intended for debugging when setting up EMHASS.

Hi @markpurcell, I can't send you enough thank-yous, and if I ever meet you, beers are on me!

I will have a few questions, but here is my first:

You're calling “sensor.solcast_24hrs_forecast”. What value are you populating here? I am using the HACS Solcast integration and my options are Forecast Today, Forecast Today Remaining or Forecast Tomorrow. All return a single value, e.g. 40 kW for today. Are you deriving this value in YAML, or how are you collecting this info? Are you using the core Solcast HACS integration or another version?


Ah, you found that one…

I would focus on just getting the Amber buy/sell prices first and use the inbuilt pvlib estimation for solar production, which is pretty good (the post_amber_forecast shell command above).

You don’t need to use all of those scripts at the same time, rather they are different ways to populate the future forecasts.

I have a vision of submitting both the actual current and the forecast PV production, which is where that is going.

Unfortunately the Solcast integration only provides 60-minute resolution, and I wanted to run EMHASS on 30-minute resolution to match the Amber pricing windows.

A lot of great discussion and visualisation for SolCast here: REST API command/platform fails on POST to external URL (solcast) - #132 by safepay

Something like this grabs the Solcast data directly. I still need to convert kW to W and get it into the correct sensor (note sensor states have a 255-character limit, which can be exceeded by a full list of 24 hrs of forecasts, hence the use of attributes):

- platform: rest
  name: "Solcast Forecast Data"
  json_attributes:
    - forecasts
  resource: https://api.solcast.com.au/rooftop_sites/SOLCAST_RESOURCE_ID/forecasts?format=json&api_key=SOLCAST_API_KEY&hours=24
  method: GET
  value_template: "{{ value_json.forecasts[0].pv_estimate|round(2) }}"
  unit_of_measurement: "W"
  device_class: power
  scan_interval: 00:30
  force_update: true
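
To convert those pv_estimate values from kW to W, something along these lines should work in the Developer Tools Template editor (a sketch, assuming Home Assistant's multiply filter is available):

{{ state_attr('sensor.solcast_forecast_data', 'forecasts')
   | map(attribute='pv_estimate')
   | map('multiply', 1000)
   | map('round', 0)
   | list }}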

Today's plan. The feed-in price stays above 60¢/kWh all day (peaking at $15/kWh tonight), which is higher than the 40¢/kWh general price tomorrow morning!

EMHASS daily optimisation is saying to shift all deferrable loads to tomorrow, to maximise battery charging (to export over tonight's peak @ $10-$15/kWh) and maximise exports during the day @ 60¢/kWh.

A lot of interest in EMHASS from the Australian Amber users Facebook group as we try and optimise our way through a crazy electricity market.


Nice discussion. I'm currently working on a hopefully better load forecast module. I'm a data scientist, so now that EMHASS seems quite stable, the next step is to improve those forecasts. The naive persistence approach is basic but robust. I'm testing to see what kind of model we could use to obtain better load forecasts than the naive method. This is currently narrowed down to RNN (recurrent neural network) and LSTM (long short-term memory) models. They seem to give some nice results. I'm working on this right now.


@davidusb A small feature request would be to put totals on the columns:
cost_profit, cost_fun_profit
or a 24-hour forecast of cost/profit, so you can see how the model is going to perform as a whole for the period.


Hi, yes this can be done: an additional table that sums up the results shown in the main table. The total cost is already printed in the add-on log, though. This result is printed every time an optimization routine is launched.

It would be useful to have the sum of the cost brought through into HA, or just bring cost_profit and the others through with attributes, which could then be summed inside HA.

ApexCharts will sum data series, so it would be good to put the totals in the header.

Could I also ask you to consider bringing the other variables across, like SOC and p_grid, as they can help with control?

This can be done for p_grid.

For the SOC this is already done. Aren’t you able to see sensor.soc_batt_forecast with its future values as attributes?
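
For reference, the same Developer Tools Template pattern used earlier for p_batt_forecast should expose it (a sketch, assuming the attribute key follows the same naming):

{{state_attr('sensor.soc_batt_forecast', 'forecasts')|map(attribute='soc_batt_forecast')|list}}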

I've advanced a bit on the new load forecasting methods.

Here is a quite decent result for the N-BEATS model:


The pattern of life.

Interesting to see how each day is similar but different.

Of course the Mon-Fri pattern is often similar but different to the Sat-Sun pattern, so it would be interesting to see how the forecasting model accommodates that.

My observation of the current EMHASS forecasting is that, not surprisingly, if yesterday was abnormal (i.e. higher or lower consumption) that then flows into the current optimisation. Your approach with N-BEATS and other advanced models will address this.

One suggestion from me is to include the actual 'now' power load value (sensor.power_load_no_var_loads) in the modelling.

As you are aware, I am running the MPC optimisation updated every minute. Into that model I am injecting the buy/sell values and, using method_ts_round: first, I include the current 'now' value plus the subsequent forecast values for future timeslots.

Given EMHASS already has access to the 'now' values for load (sensor.power_load_no_var_loads) and production (sensor.power_photovoltaics), it would seem to make sense, when using method_ts_round: first, to use the actual 'now' values when developing the forecasts.

In practice I would see these 'now' values replacing the 'now' values in any EMHASS forecast and subsequent calculations. For example, I can see many cases in the optimisation results where EMHASS sets the battery power value to match the baseload or PV production values, to produce a net-zero effect. pvlib at that instant may be forecasting 3000 W of production, and EMHASS then calculates p_batt_forecast as -3000 W to balance. But sensor.power_photovoltaics may be reading 2000 W, so p_batt_forecast should be calculated to match at -2000 W, because if we set it to -3000 W we need to draw that additional 1000 W from the grid.

I am aware I can inject the 'now' values myself, as I am doing with the buy/sell values. But if I generate my own forecasts for PV production including the 'now' value, I lose access to the benefits of pvlib, and if I generate my own load forecasts including the 'now' value I would lose access to N-BEATS or other advanced modelling.

I hope that makes sense, but happy to discuss further if useful.

Oh, sorry I missed the inclusion of that one.

I am finding control of the battery a frustrating issue; nothing to do with EMHASS.

Looking at home solar battery systems like the Victron, @kc_au, it seems you can send the desired battery control value in W directly, so p_batt_forecast from EMHASS can be sent straight to the Victron and it will match it for both charging and discharging.

I have a Tesla Powerwall 2, and monitoring and control via the undocumented API is somewhat limited. I'm using the excellent TeslaPy library, which provides the best level of control possible.

The battery is capable of charging/discharging at ±5 kW, but getting it to achieve this on command is difficult.

I can set backup mode, which (usually) charges from the grid at 3.3 kW and is a close proxy for charging on demand, albeit a bit slower than maximum. So when p_batt_forecast is below -3300 I can switch to backup mode, and when it is above -3300 I switch to self_consumption mode.

In self_consumption mode excess solar (up to 5 kW) is stored in the battery and excess load (up to 5 kW) is supplied by the battery, so this is a great mode to balance peaks and troughs.

I can also set backup_reserve as the desired % minimum charge. I can easily take sensor.soc_batt_forecast and use that to set backup_reserve. When this value is over the actual SOC the battery will charge from the grid; when this value is below the SOC the battery will discharge to meet household load only, but only down to backup_reserve. I could set backup_reserve after every optimisation and that would be partially effective, but it wouldn't support the peaks and troughs, as the battery would not discharge below backup_reserve.
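
As a hedged sketch of that "set backup_reserve after every optimisation" idea, reusing the tesla_gateway.set_operation service from the automations above as an action step (the scaling is an assumption; check in Developer Tools whether soc_batt_forecast is published as a 0-1 fraction or already as a %):

  - service: tesla_gateway.set_operation
    data:
      real_mode: self_consumption
      # sketch: push the optimised SOC into the Powerwall reserve; drop the *100 if the sensor already reports a %
      backup_reserve_percent: >-
        {{ (states('sensor.soc_batt_forecast')|float(0) * 100)|round(0) }}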

So what about discharging to the grid?
Possible, but complex.

I can set autonomous mode on the battery, which takes into consideration the history of solar production and load and a set time-based control schedule, with windows for super_offpeak, offpeak, mid_peak and peak, all with set buy/sell prices for each window.

The general strategy the battery follows is to ensure there is sufficient charge to cover peak, by charging during off-peak. To maximise returns this includes discharging the battery (5 kW to the grid) during high-price sell windows (peak) and charging the battery (5 kW from the grid) during low-cost buy windows (off-peak).

I can switch to autonomous mode via a script, but cannot (as yet) use a script to change the windows or prices.

Now of course EMHASS is more flexible and might want to charge from the grid at any time conditions are right, especially when optimising for wholesale pricing that changes continuously.

Typically the times we want to discharge to the grid are when the grid is under extreme demand and needs additional capacity, so the wholesale price shoots up (in my case up to $13/kWh) to drive additional supply to meet that demand. Generally these are the sunrise and sunset morning and evening peaks.

I have set high-price peak windows in the battery for 0600-0700 and 1700-1800, and my automation is: if p_batt_forecast is greater than 4000 W and it is between these hours, I switch to autonomous mode, which sets the battery to discharge (5 kW to the grid) during the times of most need; afterwards I switch back to self_consumption mode.
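
A rough sketch of that automation (the 4000 W threshold and the tesla_gateway.set_operation calls are from the discussion above; the window times and the delay before switching back are illustrative):

alias: Powerwall peak export (sketch)
trigger:
  - platform: numeric_state
    entity_id: sensor.p_batt_forecast
    above: '4000'
condition:
  - condition: time
    after: '17:00:00'
    before: '18:00:00'
action:
  - service: tesla_gateway.set_operation
    data:
      real_mode: autonomous
  - delay: '00:55:00'   # illustrative: switch back before the 1800 off-peak window starts
  - service: tesla_gateway.set_operation
    data:
      real_mode: self_consumption
mode: single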

So possible, but complex with a Powerwall 2. I guess they are trying to limit over-cycling of the battery. But the Victron-style level of control does have its advantages.


@markpurcell I set my Grid Set Point to a value in W. So if I set it to -5000, the Victron inverter will target exporting 5000 W; if my house consumption is 2000 W, the inverter will output 7000 W to meet its target. 0 = self-consume. +5000 would be: take 5000 W from the grid, with any excess going into the battery and any shortage coming from the battery.

Just confirmation on how the Victron works. What I can't do is set the Victron to export 3 kW from the battery and ignore house loads. And it takes 5-6 seconds to get data and then 5-6 seconds to push, so I can't update it live.

Pros and cons of each. I am struggling with visualising EMHASS data at the moment, but once I can see what EMHASS is asking my battery to do, I can do the automations. Whether I take the value from EMHASS directly, or apply some logic/rules to it, I am unsure at this point.

Loving what I am seeing so far though!


The more advanced models should adapt very well to the hour of the day and the day of the week (weekdays, weekends, etc.). This is because I am using that information as encoded covariate inputs to the models. This is very interesting indeed, as these values are perfectly known in the future. The overall quality of the forecast will also depend on the quantity of data used when training the model; we should provide as much data as possible. This is multivariate, so I am also adding as inputs the historical values of weather data, for example. Something I've not done yet is to also provide occupancy data, but I will in the future. This is looking great. I am still thinking whether this should be a completely new add-on or if I will integrate it directly into EMHASS. The problem that I see is that the Docker image will be bigger and harder to compile for ARM architectures.

As for the 'now' values, these are of course already integrated in the forecast models. The models often find a direct correlation with the closest data lags of the history values. When feeding history values to the models, the 'now' values are the last historical values, so they are taken into consideration when forecasting.
I'm not sure if I'm missing something else in your analysis about this.

My apologies, I think I haven't explained myself well regarding the 'now' value.

Take as a specific example the pv_forecast over the last 30 minutes (1630-1700; sunset is 1703):

pvlib has calculated pv_forecast as 729 W and p_load as a constant 896 W, and as a consequence p_batt_forecast has been set to a constant 166 W for the full 30-minute period.

I am running mpc_optim every minute.

EMHASS already has access to the top-graph variables for load and PV, but doesn't use them when calculating the optimisation; it only looks back over the last 24 hrs, and I don't believe it uses the current value.

So p_batt should be increasing after each mpc_optim run to reflect the actual differences ('now' values) between Solar Power and Load Power.

This is just a minor example, but I sometimes have errors between forecasts and actuals of 1000 W or 2000 W, which would make a substantial difference to the published p_batt_forecast value.

Now for pv_forecast I could inject the 'now' values, with method_ts_round: first, using something like:

"pv_now": {{(states('sensor.powerwall_solar_now')|float(0))}}
"solcast_future_forecasts": {{state_attr('sensor.solcast_forecast_data', 'forecasts')|map(attribute='pv_estimate')| list}}
"pv_power_forecasts:"{{([states('sensor.powerwall_solar_now')|float(0)] + 
                       (state_attr('sensor.solcast_forecast_data', 'forecasts')|map(attribute='pv_estimate')| list))[:48]}}

pv_now would be updated and injected for each mpc_optim run and the remainder of the forecasts would be fairly stable.

"pv_now": 0.05

"solcast_future_forecasts": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.383, 2.8362, 4.7992, 6.8943, 8.5046, 10.0137, 11.0883, 12.0041, 12.6009, 12.8405, 13.0011, 12.7743, 12.5328, 11.8008, 10.8927, 9.6497, 7.8646, 6.2096, 4.2492, 2.1112, 0.3171]

"pv_power_forecasts:"[0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.383, 2.8362, 4.7992, 6.8943, 8.5046, 10.0137, 11.0883, 12.0041, 12.6009, 12.8405, 13.0011, 12.7743, 12.5328, 11.8008, 10.8927, 9.6497, 7.8646, 6.2096, 4.2492, 2.1112]

Oh yes, I do get it now. OK, this is specific to MPC. You are right, when using MPC it is just using the forecast from the day-ahead optimization. Those forecasts should be updated every time. I'll fix this.


OK, taking a deeper look into this, the day-ahead and MPC optimization implementations are just fine. The scraping method for the PV forecast and the naive method for the load consumption forecast are just what they are, with their respective limitations. To make the current/now values act as inputs to the forecast, we will need what I'm doing right now for the load forecast: train a machine learning regression model. What I realize now is that we need to do this for the PV forecast as well. We have a PV forecast obtained from scraping a website with data from the GFS model. We then use pvlib to convert from irradiance to watts. Now we can use this, along with the history of real PV production values, to train a regression model that will be more accurate because it will be fitted to the latest lags of the real data. OK, I'll put all this on the roadmap and continue working on these better machine-learning-based forecast models. So there is nothing to fix; we just need to provide better forecasts. Of course the user is always free to provide their own forecasts via the runtime passed data when using MPC.
