EMHASS: An Energy Management for Home Assistant

I don’t know what happened, but it works :slight_smile:

Just one question. It looks like the UI is off by 2 hours. Estonia is GMT+2, and this is probably the reason.

It should be 15:00, but it's 13:00. The +2:00 offset doesn't bother me so much in the table.

But in the graph it would be nice to see the correct time.

In secrets_emhass.yaml, time_zone is set to Europe/Tallinn:

time_zone: Europe/Tallinn

Try to play with the rounding parameter. Have a look here: https://community.home-assistant.io/t/emhass-an-energy-management-for-home-assistant/338126/1475?u=110hs.
If you are passing the right number of elements, this should work and reflect your local time.
This is not a minor aspect: based on the time and the energy price tariff, EMHASS will tell you what to do.

EDIT:
Is the log showing the right time?

Nope, also 2h behind

Probably not rounding related either, because it's 2 h off.

Try “last” instead of “nearest”.

Maybe this is a question for those using EMHASS in a container (this is how you use it, @arva, right?): whether there are any settings for the time, date and time zone.
Because 2 h off seems too much to just be a rounding problem.

The time zone was not defined in the container; setting it solved the -2 h, but now it's one hour ahead. Ugh.

EDIT: I put back “nearest” and now it’s in sync
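
For anyone running EMHASS standalone, below is a minimal docker-compose sketch with the container time zone set explicitly; the image reference, port and volume paths are assumptions to adapt to your own setup, and TZ should match the time_zone in secrets_emhass.yaml.

services:
  emhass:
    image: ghcr.io/davidusb-geek/emhass:latest       # assumed image reference
    ports:
      - "5000:5000"
    environment:
      - TZ=Europe/Tallinn                            # container time zone, matching secrets_emhass.yaml
    volumes:
      - ./config_emhass.yaml:/app/config_emhass.yaml
      - ./secrets_emhass.yaml:/app/secrets_emhass.yaml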

Keep monitoring it, but in my experience “first” is the one that gives you the best flexibility. I started with “last”, then switched to “first” and changed my code accordingly: this way, regardless of how frequently you run the optimization, it will not switch to the next slot until it's actually time (e.g. not until half past).
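
For reference, the rounding behaviour discussed here is controlled by the method_ts_round parameter in the EMHASS configuration; a minimal excerpt (the exact section it lives in may differ between versions and between the add-on and standalone setups):

# In config_emhass.yaml, or in the add-on configuration page:
method_ts_round: 'first'   # accepted values: 'nearest', 'first', 'last'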

It would be nice if EMHASS could take in timestamps as well and match them. It would also be nice if you could send as many hours of data as you have, and it would then use as much complete data as it has for the prediction. At the moment I have set the prediction to 24 h, and as Nord Pool publishes next-day data at 13:00 (in HA more likely 15:00), there are times when I can upload 9 hours (24-15) of data and times when I can upload 33 hours. The solar forecast has data for 48 hours. Of course I can make an automation that updates electricity prices around midnight to get a consistent 24 hours, but then I lose the possibility of a longer prediction between 15:00 and 00:00.
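
One way to send "as many hours as you have" already today is to build the payload dynamically in a Home Assistant rest_command. The sketch below is only an illustration: it assumes the Nord Pool integration's today/tomorrow attributes, an example entity ID, a 60-minute optimization step, and that the production price is simply reused from the load cost; other keys (pv_power_forecast, def_total_hours, ...) would be templated the same way.

rest_command:
  emhass_naive_mpc_optim:
    url: http://localhost:5000/action/naive-mpc-optim
    method: POST
    content_type: "application/json"
    payload: >-
      {% set np = 'sensor.nordpool_kwh_ee_eur_3_10_0' %}  {# assumed Nord Pool entity id #}
      {% set prices = (state_attr(np, 'today') or []) + (state_attr(np, 'tomorrow') or []) %}
      {% set prices = prices[now().hour:] %}  {# from the current hour onwards #}
      {"load_cost_forecast": {{ prices }},
       "prod_price_forecast": {{ prices }},
       "prediction_horizon": {{ prices | length }}}

With a 30-minute optimization step you would need to repeat each hourly price twice so that the list length still matches prediction_horizon.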

Can confirm: “first” is accurate at all times.

Maybe a little bit out of energy management scope, but would it be possible to load in my heat pump data:

  • outside temperature
  • room temperature
  • heat pump power sensor

I would like to use machine learning to calculate how long the heat pump needs to work, and what electricity consumption and power are required to maintain the room temperature in relation to the outside temperature. This information could be a good input for the deferrable load and time prediction. Also, the heat pump power is not constant and can vary a bit. Here is a screenshot of the heat pump power sensor:
[screenshot: heat pump power sensor]
The heat pump is constantly in a cycle of heating and defrosting, and its power is related to the compressor frequency, which is related to degree minutes, which is related to several things like outside and inside temperature, the heat curve, etc. At the moment I control the heat pump by raising and lowering the heat curve offset when electricity is cheap or expensive. It would be nice to know if this is the optimal way, as the heat pump consumes less energy when the compressor runs at a lower frequency. But pinpointing the optimal running time and frequency to maintain the room temperature is something that ML would probably do better.

At the moment I'm using NibePI to control that. The creator of nibepi has done electricity, weather, and room temperature optimizations, but I would like to see more than a black box.

There is probably more data that a machine learning model could take in for that. I have several other sensors:


[screenshots: additional heat pump sensors]

I understand that it's a totally different machine learning model, but to me it looks like EMHASS could be a good platform to do it, because the integration and data flows with Home Assistant are already there.

And to make things even more complicated, I have a gas heater in the system too, so I can choose between two heat sources or run them together. I have done the calculations for when gas heating is cheaper than the heat pump and have some results already. At the moment I'm just collecting data.
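
For what it's worth, the usual way to express that comparison is cost per kWh of delivered heat: gas price divided by boiler efficiency versus electricity price divided by the heat pump COP. A hypothetical template binary sensor along those lines (the entity IDs, efficiency and COP values are all assumptions):

template:
  - binary_sensor:
      - name: "Gas heating cheaper than heat pump"
        state: >-
          {% set gas = states('sensor.gas_price_kwh') | float(0) %}          {# price per kWh of gas #}
          {% set elec = states('sensor.electricity_price_kwh') | float(0) %} {# price per kWh of electricity #}
          {% set boiler_eff = 0.95 %}  {# assumed boiler efficiency #}
          {% set cop = 3.0 %}          {# assumed heat pump COP #}
          {{ (gas / boiler_eff) < (elec / cop) }}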

More advanced thermodynamic models are on the future roadmap for EMHASS so your post is very much on topic.

I currently run a very simple model for my heat pump, which takes into consideration the temperature forecast for the coming 24 hours and allocates the requisite run hours based on a desirable set point.
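
As a purely hypothetical rendering of that kind of simple model (not the actual automation described here), the deferrable run hours could be derived from the forecast temperature with a template sensor; the weather entity and the breakpoints are assumptions:

template:
  - sensor:
      - name: "HVAC def_total_hours"
        state: >-
          {% set fc = state_attr('weather.home', 'forecast') or [{}] %}
          {% set t = fc[0].get('temperature', 25) | float(25) %}
          {{ 8 if t >= 35 else 6 if t >= 30 else 4 if t >= 25 else 2 }}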


It is summer here, so the solar PV production and the demand for my HVAC cooling are well aligned.


My automation takes the desired power level, sensor.p_deferrable3, and converts it into a set point for my HVAC using a simple linear formula:

0 W = 27 deg C
4000 W = 24 deg C

As you can see, sometimes my HVAC can run away to 11 kW, but I use 4000 W as an ‘average’ set point. I can adjust my p_nom_hvac dynamically using the slider; if I set it to 5000 W, the set point goes down to 23 deg C, and so on.
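
A hypothetical automation implementing that linear mapping (0 W → 27 deg C, 4000 W → 24 deg C) could look like the following; the climate entity ID is an assumption:

automation:
  - alias: "HVAC set point from EMHASS deferrable 3"
    trigger:
      - platform: state
        entity_id: sensor.p_deferrable3
    action:
      - service: climate.set_temperature
        target:
          entity_id: climate.living_room            # assumed climate entity
        data:
          temperature: >-
            {% set p = states('sensor.p_deferrable3') | float(0) %}
            {{ (27 - 3 * (p / 4000)) | round(0) }}  {# 4000 W spans the 3 deg C swing #}

With 5000 W this gives 27 - 3.75 ≈ 23 deg C, matching the behaviour described above.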

I also treat the sensor.power_load_no_var_loads slightly differently but I won’t go into that in this post.

A thermal model would be better, but this seems reasonable.

This approach doesn't work for me during winter, as the peak power demands for heating (overnight) and cheap solar PV energy (during the day) are out of sync. So I typically haven't controlled my HVAC in winter via EMHASS and just catch its consumption via sensor.power_load_no_var_loads.

A couple of notes on the graphic: you can see that when p_deferrable3 goes to 5 kW (at 11 am), the HVAC set point goes down to 23 deg C and the power consumption goes up to between 5 and 8 kW. You can also see that at 3 pm, when deferrable3 goes to 0 W, the set point goes up to 27 deg C and the power consumption goes down to 1-2 kW, so it is a reasonable approximation.


This would be a good step to start with, but I'm totally clueless about how to do this. I don't know the correlation between the outside temperature and the hours my heat pump needs to run.

I have made sensors that count how many hours the heat pump has run for heating and hot water today, and I also have sensors that calculate the daily average heating and hot water power (daily energy / running hours).

So at the moment I can send def_total_hours and P_deferrable_nom to the prediction model by the end of the day. But these are then based on yesterday's values and do not take the outside temperature into account; the next day is just modelled from the previous day.
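
For reference, a template sensor of that "daily energy / running hours" kind could look like this; both entity IDs are assumptions and the energy sensor is assumed to be in kWh:

template:
  - sensor:
      - name: "Heat pump average heating power"
        unit_of_measurement: "W"
        state: >-
          {% set energy = states('sensor.heat_pump_heating_energy_today') | float(0) %}
          {% set hours = states('sensor.heat_pump_heating_hours_today') | float(0) %}
          {{ (1000 * energy / hours) | round(0) if hours > 0 else 0 }}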

I do, however, have an outside temperature sensor as well:
[screenshot: outside temperature sensor]

Has anybody formed any opinions about which entities should be retained in the recorder, logbook or history with EMHASS in mind, i.e. cleaning up the db before it gets too big to manage?
Also, what is the common practice with historic_days_to_retrieve and purge_keep_days?
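
There doesn't seem to be an official list, but a reasonable starting point is: keep every sensor you actually pass to EMHASS (the PV power sensor, the load sensor, plus anything used by the ML methods) and make sure purge_keep_days is at least as large as historic_days_to_retrieve. A sketch with example entity IDs (note that an include block whitelists entities, so an exclude list may suit an existing setup better):

recorder:
  purge_keep_days: 10                    # keep a margin above historic_days_to_retrieve
  include:
    entities:
      - sensor.power_photovoltaics       # assumed PV power sensor name
      - sensor.power_load_no_var_loads   # load sensor passed to EMHASS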

Hi,
I’m not sure, but could there be a bug when using

"set_def_constant":[true]

?

This is the command:

curl -i -H "Content-Type: application/json" -X POST -d '{
"set_def_constant":[true],
"pv_power_forecast":[0.0, 0.0, 7.6, 44.3, 142.7, 251.0, 234.6, 110.39999999999999, 21.4, 0.0, 0.0, 0.0],
"def_total_hours":[5],
"prediction_horizon":12,
"load_cost_forecast":[23.494008, 27.6, 27.543143999999998, 25.4988, 25.040243999999998, 24.73248, 24.172572, 24.27516, 24.755964, 27.2292, 28.968252, 27.387408],
"prod_price_forecast":[12.464, 12.454, 12.444, 12.434, 12.424, 12.414, 12.404, 12.414, 12.424, 12.434, 12.444, 12.454]
}' http://localhost:5000/action/naive-mpc-optim

and here is the result:

I noticed this when using set_def_constant with true, as I wanted my heat pump to run without interruption.
For me the schedule seems to be “shifted” by half of the deferrable hours. I saw a similar result yesterday.

I use 5 days, but for some reason, after upgrading to 2023.12 my EMHASS no longer works. I get errors related to insufficient history in the sensor.

Coincidence, or did others experience the same? In the release notes there is a reference to using long-term statistics; I hope this is not the cause of the issue.

2023-12-09 21:00:06,362 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2023-12-09 21:00:06,362 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2023-12-09 21:00:06,362 - web_server - ERROR - Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 179, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 120, in set_input_data_dict
    P_load_forecast = fcst.get_load_forecast(method=optim_conf['load_forecast_method'], set_mix_forecast=True, df_now=df_input_data)
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 585, in get_load_forecast
    rh.get_data(days_list, var_list)
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 147, in get_data
    self.df_final = pd.concat([self.df_final, df_day], axis=0)
UnboundLocalError: local variable 'df_day' referenced before assignment

I use MPC and have had no problems after updating to 2023.12.1.
I also tried a model_fit and it’s still working fine.
I retain 21 days (basically needed for the ML methods which I’m not using at the moment).
Make sure you have the right number of retention days declared in your config.yaml.

You may also check if something happened to your data by plotting some sensors you are passing to EMHASS and see if the time window is covered as you expect.


I retain 21 days just in case I ever decide to switch to ML methods (not performing well for my setup at the moment). The sensors related to PV, consumption and so on are registering data every 5 seconds.
In total I have backups of about 250 MB.

My DB is 2.3 GB at the moment, and that's with 7 days retained and purged. Six years of experimenting and 2000 sensors. I have to go through them all and exclude most of them.

I just upgraded this morning with no issue, despite a huge history db. I used to have the same issue every time I upgraded or even rebooted: I had to wait 2 days before MPC would work again. But that stopped after deleting the db altogether and allowing it to rebuild.

@davidusb is there a definitive list of entities that need to be retained in the recorder db for EMHASS historic_days_to_retrieve?