EMHASS: An Energy Management for Home Assistant

Penalizing the battery usage is an option.

But with the current code this is also possible by using the battery SOC min/max limitations. By using these you should be able to constrain the battery depth of discharge (DOD), which directly determines the total number of cycles a battery can deliver in its lifetime. This is the way to go.

Another option is to add a penalty on the battery's number of cycles, but this would just add more equations and more burden on the open source linear programming solvers that we rely on. So this is a no-go.

Thank you for the quick reply!

It sounds like I got my point across, but I don’t quite understand how to use the SOC min/max limitations to determine when to charge/discharge while taking the cost to the battery into account. Can you elaborate on how to achieve this so that it corresponds to what I’m after?

My understanding of the SOC that is published to HA is that I’m supposed to write automations that make sure that my SOC follows what EMHASS has found to be optimal. If the cost of usage is not taken into account when optimizing, it seems hard to know when to refrain from discharging, or am I missing something?

I agree that adding a penalty to the number of cycles does not seem like the way to go. Forgive me for making a statement without knowing the complexity of your code, but to me it seems reasonable that different sources of energy can have different cost/profit principles that should govern how they are utilized.

Let me know if you find this suggestion good enough to consider! If not, I will attempt to figure out some logic that I can use in the automations that control charge/discharge from HA to prevent non-profitable discharge cycles.

Warmly, Per Takman :slightly_smiling_face:

What I meant is that instead of applying a different cost for a battery discharge, you can directly use the available SOC limits to affect the battery ageing. As I said at the end, the battery ageing is a direct function of the DOD, which in turn depends on the SOC min/max limitations. So instead of applying a “cost” we can use these parameters to improve the battery ageing. For higher DODs the battery will provide fewer lifetime cycles; for lower DODs the battery will achieve a higher lifetime number of cycles. Fixing SOC_min = 0 and SOC_max = 100 will give you the highest possible DOD. If you want to be more conservative and save your battery life, then just change that to more constrained values, say SOC_min = 30 and SOC_max = 90. So if you want to save your battery life, constrain the DOD.
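As an illustrative sketch, constraining the DOD just means narrowing the SOC window in the EMHASS configuration. The parameter names below are assumptions following recent EMHASS versions (older versions expose them as SOCmin/SOCmax), so check your own add-on options:

# Conservative SOC window to limit depth of discharge (illustrative values)
# DOD = SOC_max - SOC_min = 0.9 - 0.3 = 60% instead of the full 100%
battery_minimum_state_of_charge: 0.3  # SOC_min = 30%
battery_maximum_state_of_charge: 0.9  # SOC_max = 90%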

I understand your point and that’s why I said that penalizing the battery usage could be an option. But, as I also said, we should focus on solutions that minimize the number of equations to solve. If we add a different cost for battery discharge then this will just add more equations to solve. Some users with more than two deferrable loads are very close to the open source solver limits, so adding more equations is really not a viable option. This would be a very different story if we relied on commercial LP solvers like CPLEX or Gurobi, but the problem is that these are REALLY expensive.

That’s correct, you can and should use some simple rules to directly control your battery based on the provided optimized schedule. You can then apply some basic automations to further constrain your battery in order to slow its ageing.
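A minimal sketch of such a rule, assuming the battery schedule is published as sensor.p_batt_forecast (positive values meaning discharge) and that your inverter exposes a hypothetical switch.battery_discharge_enable:

- alias: Follow EMHASS battery schedule (sketch)
  trigger:
    - platform: time_pattern
      minutes: "/30"
  action:
    - choose:
        # Allow discharging only when the optimized plan says to discharge
        - conditions:
            - condition: numeric_state
              entity_id: sensor.p_batt_forecast
              above: 0
          sequence:
            - service: switch.turn_on
              target:
                entity_id: switch.battery_discharge_enable
      default:
        - service: switch.turn_off
          target:
            entity_id: switch.battery_discharge_enable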

Thank you for elaborating! I will attempt to use SOC_min/max to dynamically adjust the behavior based on profitability, possibly by looking at the price delta during the day.
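As a rough, untested sketch of that idea, the day’s price delta could be computed with a template sensor, assuming a price sensor such as sensor.tibber_prices that exposes the hourly prices in a 'today' attribute:

    daily_price_delta:
      friendly_name: "Daily electricity price delta"
      value_template: >-
        {%- set prices = state_attr('sensor.tibber_prices', 'today') | map(attribute='total') | list %}
        {{ ((prices | max) - (prices | min)) | round(4) }}

An automation could then widen the SOC window only when this delta exceeds the estimated per-cycle cost of battery wear.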

I don’t have the domain knowledge required to understand the limits. But I appreciate that you’ve taken the time to make the world a better place by providing this add-on for HA.

Cheers, Per

The list of boolean values needs lowercase letters: {"set_def_constant": [true, false]}.

The shell command:

shell_command:
  publish_def_true: curl -i -X POST http://localhost:5000/action/dayahead-optim -H "Content-Type:application/json" -d '{"set_def_constant":[true,true]}'

Hi again,

I discovered a problem with my value_template that took some effort to debug. It turns out that a sensor state is limited to a maximum of 255 characters. If you don’t round off the numbers after adding/subtracting a float, you can exceed the maximum length for the list, which causes the sensor to become unavailable.

The code below takes care of that!

    electricity_production_price:
      friendly_name: "Electricity production price"
      value_template: >-
        {%- set data = state_attr('sensor.tibber_prices', 'today') | map(attribute='total') | list %}
        {%- set values = namespace(all=[]) %}
        {% for i in range(data | length) %}
          {%- set v = ((data[i] | float + 0.63 - 1.03) |round(4)) %}
          {%- set values.all = values.all + [ v ] %}
        {%- endfor %} {{ (values.all)[:24] }}
      availability_template: >
        {{states('sensor.tibber_prices') in ['Ok']}}

Hi David,

with the new forecast.solar method in 0.3.20 there are some “approximated” values which should be zeroed (the solar forecast from dusk till dawn). Current situation:

The last “correct” value is at 17:00, then it goes up on a linear trend until 8:00+, when the forecast.solar prediction begins for my coordinates.

Did I miss some new option to zero the values in forecast.solar? set_zero_min is True.

With the scrape method the values are OK.

Thanks
Mirek

Well, this seems like a bug that needs to be solved. I didn’t see this behavior when I tested this. It will be fixed in the next version. Switch to another method for now, and thanks for reporting this.

Hi and thanks for EMHASS :wink:
This might be more of a general HASS question, but is there a way to retain the data that has been generated by EMHASS?
When my system reboots after an update or whatever, the data is lost, and the way it is set up in my system I need to wait until 23:xx before I can generate a new optimization.
I guess it could be solved by a smarter list that takes the time into account, but I’m not that good.

Hi. This is a known problem. Another user posted about the same issue some time ago.
I haven’t taken the time to put together an automation that could solve this, but it seems feasible. If anyone else can put this together, that would be awesome.
One solution is to trigger an automation after a system restart to reuse the forecast values stored as attributes in the sensors published by EMHASS. This can be done with templates.
Another solution is to relaunch the optimization task after a system restart; this will regenerate the optimization results file used to publish the sensor data.
There may be other possible solutions…

OK, I have put together two possible automations to work around this problem in the easiest (and laziest) way.

A first option is just to relaunch the optimization task after a Home Assistant restart, like this:

- alias: Relaunch EMHASS tasks after HASS restart (option1)
  trigger:
    - platform: homeassistant
      event: start
  action:
  - service: shell_command.dayahead_optim

A second option, which I now use, is fitted to my case, as I have put together some automations to control my deferrable loads based on two modes of an input_select: “Auto” and “Optim”. In my case the “Auto” mode just uses a predefined manual schedule for my load, while the “Optim” mode uses the results from EMHASS. So when Home Assistant restarts I can fall back to the “Auto” mode and send myself a notification to alert me of this. Like this:

- alias: Relaunch EMHASS tasks after HASS restart (option2)
  trigger:
    - platform: homeassistant
      event: start
  action:
    - service: input_select.select_option
      target:
        entity_id: input_select.water_heater_mode
      data:
        option: Auto
    - service: notify.sms_free
      data_template:
        title: Changed water_heater_mode to Auto
        message: Home assistant restarted and automatically changed water_heater_mode to Auto mode, launch EMHASS optimization and set back to Optim mode

A more complete and neat solution would be to reuse the forecast values stored as attributes in the sensors published by EMHASS, as I said before, using templates.
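As a rough sketch of that template approach, a trigger-based template sensor could cache the published schedule, since Home Assistant restores trigger-based template entities (state and attributes) after a restart. The entity sensor.p_deferrable0 is one of the sensors published by EMHASS, but the deferrables_schedule attribute name is an assumption, so inspect the attributes on your own sensors:

template:
  - trigger:
      - platform: state
        entity_id: sensor.p_deferrable0
    sensor:
      - name: "Cached deferrable schedule"
        state: "{{ states('sensor.p_deferrable0') }}"
        attributes:
          # Attribute name assumed; check what your EMHASS sensors expose
          deferrables_schedule: "{{ state_attr('sensor.p_deferrable0', 'deferrables_schedule') }}"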

Yeah, the reuse of the forecast values, or in my case with my provider:
It lists all the values for the current and next day in two different attributes, so it should somehow be possible to write a template that takes the two lists and drops the historical values for the current day, so that the list starts at the current time.
All that is however beyond my copy-and-pasting skills :wink:
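A starting point might look like the following untested sketch, reusing the Tibber-style sensor from earlier in this thread and assuming 'today' and 'tomorrow' attributes that each hold a list of hourly prices:

    electricity_price_from_now:
      friendly_name: "Electricity price from the current hour"
      value_template: >-
        {%- set today = state_attr('sensor.tibber_prices', 'today') | map(attribute='total') | list %}
        {%- set tomorrow = state_attr('sensor.tibber_prices', 'tomorrow') | map(attribute='total') | list %}
        {{ ((today[now().hour:] + tomorrow) | map('round', 4) | list)[:24] }}

The 255-character state limit mentioned earlier in the thread still applies, hence the rounding.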

You should be able to test the forecast.solar method after the current version update to v0.2.23. Please confirm that it is now working as expected.

Hi David, I am building my own Docker container for Synology from the GitHub sources. But with the last commit on GitHub I still get the same results with forecast.solar.

And maybe one suggestion: the forecast.solar API is rate-limited to fewer than 10 GET requests per hour per IP. Could you try to implement an option for pulling the data from the forecast.solar HA integration into EMHASS, to avoid these rate limits (HA websocket {"type": "energy/solar_forecast"})?

So for me this problem only occurs when I upgrade the add-on, not when HASS restarts.
The automation that is finally useful for me is this:

- alias: Relaunch EMHASS tasks after add-on update
  trigger:
    - platform: state
      entity_id: update.emhass_update
      to: 'off'
      for:
        minutes: 10
  action:
    - service: shell_command.dayahead_optim
    - service: notify.sms_free
      data_template:
        title: Updated EMHASS and relaunched optimization
        message: The EMHASS add-on was updated and the optimization task was relaunched

Bummer!
I will look into this for the specific case of the standalone Docker container. I didn’t test it with the Docker container, presuming that my unit tests were sufficient.
As for the number of API calls, the easiest way would be to handle the forecast.solar integration yourself and build the list of values with templates, then pass them to EMHASS inside the shell command with the pv_power_forecast key.
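A sketch of what that shell command could look like, assuming a hypothetical template sensor sensor.pv_forecast_values whose state holds the list of values built from the integration’s attributes (Home Assistant renders shell_command templates before execution):

shell_command:
  dayahead_optim: >-
    curl -i -X POST http://localhost:5000/action/dayahead-optim
    -H "Content-Type:application/json"
    -d '{"pv_power_forecast":{{ states("sensor.pv_forecast_values") }}}'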

Hi David, thanks to help from you and Mark I have EMHASS running nicely. My pool is several months away from completion and my EV is in the shop for repair so I don’t have many deferrable loads at the moment.

My question is about home load forecasting. I was wondering if you had considered more “exotic” ways of generating the forecast, for example a federated learning approach that could see all those who use EMHASS leveraging a shared model that is trained locally. Here is a recent paper explaining this approach.

I would be happy to run the central server if there is interest. Bit of a left field question!

Hi. Yes, this is very interesting and one of the main tasks to further improve EMHASS. Today we are only using a very naive persistence model. I began working on this subject some months ago and trained some machine learning models to try to predict my own load at home. You can check the results in this very thread, some posts ago. I set this aside while waiting to save more historical data, because at the time I realized that my recorder was set to purge every 30 days. My goal is to propose this machine-learning-based load forecast as a new add-on or to integrate it directly into EMHASS, but I fear that the Docker image will become too big. I basically benchmarked all the models in the Darts Python module, which includes some advanced deep learning models: LSTM, N-BEATS, etc.
Now, the approach in the paper that you show is very interesting and useful to preserve data privacy. What I propose is to start by trying to obtain some nice results based on the user’s own historical data; then we can try to improve the system with the federated approach proposed in the paper. But if you put together the server and the necessary architecture to deal with the federated approach, then I will be more than happy to contribute to the project. As I said, I think that we should start simple (with not-so-simple deep learning models) and then scale up with the federated approach. What do you think?

Does anyone have a guide on how to implement Tibber prices?

I agree we should start with a model that is shipped in the add on, then perhaps support to train the model with local data. Then we can see if there is sufficient improvement to warrant the federated learning approach. I’ll do some further investigation into what the needs of the central server are.