EMHASS: Energy Management for Home Assistant

EMHASS has been running well for a few weeks now and suddenly I get this error again.
Where does it suddenly come from? What should I check?
The sensor has enough history.

ERROR - web_server - The retrieved JSON is empty, A sensor:sensor.power_load_no_var_loads may have 0 days of history or passed sensor may not be correct

After removing and re-adding the MPC automation and the MPC config, the logging looks better:

2024-04-29 10:00:00,097 - web_server - INFO - Passed runtime parameters: {}
2024-04-29 10:00:00,097 - web_server - INFO -  >> Setting input data dict
2024-04-29 10:00:00,097 - web_server - INFO - Setting up needed data
2024-04-29 10:00:00,106 - web_server - INFO -  >> Publishing data...
2024-04-29 10:00:00,106 - web_server - INFO - Publishing data to HASS instance
2024-04-29 10:00:00,133 - web_server - INFO - Successfully posted to sensor.p_pv_forecast = 2663.43
2024-04-29 10:00:00,137 - web_server - INFO - Successfully posted to sensor.p_load_forecast = 548.95
2024-04-29 10:00:00,141 - web_server - INFO - Successfully posted to sensor.p_deferrable0 = 0.0
2024-04-29 10:00:00,144 - web_server - INFO - Successfully posted to sensor.p_batt_forecast = 414.0
2024-04-29 10:00:00,147 - web_server - INFO - Successfully posted to sensor.soc_batt_forecast = 12.0
2024-04-29 10:00:00,150 - web_server - INFO - Successfully posted to sensor.p_grid_forecast = -2528.48
2024-04-29 10:00:00,153 - web_server - INFO - Successfully posted to sensor.total_cost_fun_value = 46.68
2024-04-29 10:00:00,155 - web_server - INFO - Successfully posted to sensor.optim_status = Optimal
2024-04-29 10:00:00,158 - web_server - INFO - Successfully posted to sensor.unit_load_cost = 19.6308
2024-04-29 10:00:00,161 - web_server - INFO - Successfully posted to sensor.unit_prod_price = 4.973
2024-04-29 10:00:00,413 - web_server - INFO - Passed runtime parameters: {'load_cost_forecast': [17.2606, 17.3126, 17.7853, 16.9649, 17.1642, 18.4542, 20.7364, 22.1451, 25.3452, 28.8538, 24.3425, 22.4737, 21.4296], 'prod_price_forecast': [2.737, 2.786, 3.232, 2.458, 2.646, 3.863, 6.016, 7.345, 10.364, 13.674, 9.418, 7.655, 6.67], 'prediction_horizon': 10, 'soc_init': 0.15, 'soc_final': 0.9}
2024-04-29 10:00:00,413 - web_server - INFO -  >> Setting input data dict
2024-04-29 10:00:00,413 - web_server - INFO - Setting up needed data
2024-04-29 10:00:00,416 - web_server - INFO - Retrieve hass get data method initiated...
2024-04-29 10:00:00,933 - web_server - INFO - Retrieving weather forecast data using method = scrapper
2024-04-29 10:00:01,761 - web_server - INFO - Retrieving data from hass for load forecast using method = naive
2024-04-29 10:00:01,762 - web_server - INFO - Retrieve hass get data method initiated...
2024-04-29 10:00:02,775 - web_server - INFO -  >> Performing naive MPC optimization...
2024-04-29 10:00:02,775 - web_server - INFO - Performing naive MPC optimization
2024-04-29 10:00:02,778 - web_server - INFO - Perform an iteration of a naive MPC controller
2024-04-29 10:00:02,779 - web_server - DEBUG - Deferrable load 0: Proposed optimization window: 0 --> 0
2024-04-29 10:00:02,779 - web_server - DEBUG - Deferrable load 0: Validated optimization window: 0 --> 0
2024-04-29 10:00:02,780 - web_server - WARNING - Solver default unknown, using default
Welcome to the CBC MILP Solver 
Version: 2.10.10 
Build Date: Sep 26 2023 

command line - /usr/local/lib/python3.11/dist-packages/pulp/solverdir/cbc/linux/arm64/cbc /tmp/65d1594368ca4738a4a8c5888ba68e39-pulp.mps -max -timeMode elapsed -branch -printingOptions all -solution /tmp/65d1594368ca4738a4a8c5888ba68e39-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 133 COLUMNS
At line 626 RHS
At line 755 BOUNDS
At line 846 ENDATA
Problem MODEL has 128 rows, 70 columns and 432 elements
Coin0008I MODEL read with 0 errors
Option for timeMode changed from cpu to elapsed
Continuous objective value is 54.4621 - 0.00 seconds
Cgl0003I 0 fixed, 0 tightened bounds, 12 strengthened rows, 0 substitutions
Cgl0003I 0 fixed, 0 tightened bounds, 1 strengthened rows, 0 substitutions
Cgl0004I processed model has 85 rows, 60 columns (20 integer (20 of which binary)) and 363 elements
Cbc0038I Initial state - 1 integers unsatisfied sum - 0.3
Cbc0038I Pass   1: suminf.    0.00000 (0) obj. 41.2483 iterations 7
Cbc0038I Solution found of 41.2483
Cbc0038I Relaxing continuous gives 41.5574
Cbc0038I Before mini branch and bound, 19 integers at bound fixed and 27 continuous
Cbc0038I Full problem 85 rows 60 columns, reduced to 2 rows 3 columns
Cbc0038I Mini branch and bound improved solution from 41.5574 to 54.4621 (0.00 seconds)
Cbc0038I After 0.00 seconds - Feasibility pump exiting with objective of 54.4621 - took 0.00 seconds
Cbc0012I Integer solution of 54.462053 found by feasibility pump after 0 iterations and 0 nodes (0.00 seconds)
Cbc0001I Search completed - best objective 54.4620528990656, took 0 iterations and 0 nodes (0.00 seconds)
Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost
Cuts at root node changed objective from 54.4621 to 54.4621
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
2024-04-29 10:00:02,788 - web_server - INFO - Status: Optimal
2024-04-29 10:00:02,788 - web_server - INFO - Total value of the Cost function = 54.46
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                54.46205290
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.00
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.01   (Wallclock seconds):       0.01

2024-04-29 10:00:02,908 - web_server - INFO - Passed runtime parameters: {}
2024-04-29 10:00:02,908 - web_server - INFO -  >> Setting input data dict
2024-04-29 10:00:02,908 - web_server - INFO - Setting up needed data
2024-04-29 10:00:02,909 - web_server - INFO -  >> Publishing data...
2024-04-29 10:00:02,909 - web_server - INFO - Publishing data to HASS instance
2024-04-29 10:00:02,923 - web_server - INFO - Successfully posted to sensor.p_pv_forecast = 3590.4
2024-04-29 10:00:02,926 - web_server - INFO - Successfully posted to sensor.p_load_forecast = 544.0
2024-04-29 10:00:02,929 - web_server - INFO - Successfully posted to sensor.p_deferrable0 = 0.0
2024-04-29 10:00:02,932 - web_server - INFO - Successfully posted to sensor.p_batt_forecast = -3046.39
2024-04-29 10:00:02,936 - web_server - INFO - Successfully posted to sensor.soc_batt_forecast = 33.68
2024-04-29 10:00:02,941 - web_server - INFO - Successfully posted to sensor.p_grid_forecast = 0.0
2024-04-29 10:00:02,945 - web_server - INFO - Successfully posted to sensor.total_cost_fun_value = 54.46
2024-04-29 10:00:02,948 - web_server - INFO - Successfully posted to sensor.optim_status = Optimal
2024-04-29 10:00:02,952 - web_server - INFO - Successfully posted to sensor.unit_load_cost = 17.2606
2024-04-29 10:00:02,957 - web_server - INFO - Successfully posted to sensor.unit_prod_price = 2.737

I do get a warning, but it seems harmless, although I don't understand why I get it.


I'm having a similar error.
If I use a very old template sensor (one that actually includes my deferrable load) it works; however, if I use a sensor that was set up a couple of days ago I get this error.

There is a difference: my sensor is months old and it's still doing that.
I'm thinking about making a new sensor to see what that does.

Have you tried passing a dummy power forecast in your curl statement? That removes the need to retrieve your power consumption history from the recorder database.

 "pv_power_forecast": [500, 500, 500, ā€¦],

If that works then it's probably something wrong with the data in the recorder database. Maybe you haven't included this entity in the recorder include statement in configuration.yaml?


# maintain database in config directory
recorder:
  #  db_url: mysql://root:[email protected]:3307/harepository?charset=utf8
  purge_keep_days: 10
  include:
    domains:
      - alarm_control_panel
      - switch
      - group
      - lock
      - light
      - update
    entity_globs:
      - sensor.sonnenbatterie_*
      - sensor.p_*
      - sensor.pb*
      - sensor.pm*
      - sensor.pp*
      - sensor.cecil_st*
      - sensor.solcast*
    entities:
      - input_text.fifo_buffer
      - sensor.optim_status
      - sensor.power_load_no_var_loads
      …

I'm still struggling with Home Assistant. What does this mean? I didn't realise I had to do this.

How big is your recorder database? I find this happens with an unwieldy recorder database. I'm running HA on an Intel NUC with an SSD (not a Pi with an SD card). I'm still using the default SQLite database, but you can switch to a better database engine if you want.

If you are not restricting what goes into your recorder database by following the practices described here, I find it can start to cause problems with retrieving the sensor_power_load_no_var_loads data, especially if you've had your system running for a few years.

If passing a dummy array works then I'd suspect the health of your recorder database, and a purge of the database may help. Otherwise, set up an include statement and start managing what you put in the recorder database.

You can also delete the recorder database altogether, wait two days, and you'll have started afresh. No problem doing that if you don't mind missing some data for a while. The default recorder retention is only 10 days anyway, I think. You can just delete it and the system will recreate a clean new database automatically.
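If you'd rather not delete the whole database, a purge can also be triggered as a service call; a minimal sketch (run it as an automation action or from Developer Tools, the keep_days value is just an example):

# Sketch: purge old recorder data without deleting the database file
service: recorder.purge
data:
  keep_days: 10   # example retention, keep whatever suits you
  repack: true    # reclaim disk space after the purge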

I no longer use this data. I do this here in Node-Red but it might be easier to do what Mark does as an automation here.

My database is only 545 MB. I run an ODROID-C4 with eMMC. My system is now around one year old.
When I try to access the statistics for sensor_power_load_no_var_loads in Developer Tools I get nothing. It does show up under States.

sensor_power_load_no_var_loads is not the entity that is being retrieved. It's the pointer to the sensor entity you created to calculate the power consumption less the deferrable loads. In my case this sensor is named sensor.house_power_consumption_less_derferrables and can be seen in the EMHASS configuration yaml file. See below:
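In other words, the add-on configuration contains something like this (sketch, other options omitted):

# The EMHASS option is only a pointer; the entity it points at is what needs the history
sensor_power_load_no_var_loads: sensor.house_power_consumption_less_derferrables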

So the two days of history that is stored in and retrieved from the recorder database is for this sensor.house_power_consumption_less_derferrables sensor. Your equivalent sensor (whatever you named it) should have two days' worth of data history available if you don't post your own data in the MPC call.

I named it sensor_power_load_no_var_loads.

Hi all,

Any idea why there is a big difference between the forecast SOC and the actual SOC?

I charge/discharge based on the Watts configured in the p_batt_forecast.

Do I need to increase the battery size for example?

This is fairly normal and reflects the lag between the forecast SOC at the end of the time period (which in your case looks to be 60 minutes) and the current SOC.

I have a 30 minute time period which looks similar.

Screenshot 2024-05-03 17.20.27

I am not sure, but you could try changing the charge and discharge efficiency of the battery in order to get a higher or lower sensor.p_batt_forecast?
This is my graph; it looks similar to yours, with efficiency set to 0.92 for both charge and discharge.
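If it helps, these are the battery efficiency options in the EMHASS configuration; roughly like this (sketch, using my 0.92 values, and assuming I remember the option names correctly):

# Sketch: EMHASS battery (plant) efficiency options
battery_charge_efficiency: 0.92
battery_discharge_efficiency: 0.92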

When I use deferrable load 1 to charge my EV, I also change the charging amps based on the output from EMHASS, so when I do that my car is not fully charged. If EMHASS is lowering the watts, shouldn't it also calculate a longer charging time?

I get a rather weird battery charge forecast from EMHASS when the electricity prices are low. It wants to alternately charge and discharge the battery during that period and I can't see why. It looks like this.
Full expected PV power in.
From 11 to 16, the energy price is at its lowest.
It wants to maximize battery charging at 12, then fully discharge at 13, then charge fully again from 14-16. There can't be an economy in this. The difference between the actual sales price and purchase price is 0.22, so I sell for slightly less than I buy. There's also a charge/discharge weight of 0.50, so I can't understand why EMHASS would want to buy and sell when the difference is less than 0.72?

There's no consumer that would need both the PV output and the battery output estimated at 13, because together that equals almost 10 kWh during that hour.

Optimization status was optimal, so this is not caused by an infeasible optimization.


I have

  - weight_battery_discharge: 0.5
  - weight_battery_charge: 0.5

any ideas?

I had the same issue. Whatever I did, I couldn't make EMHASS retrieve the load history. I got the same error when trying to do an ML fit from the web interface using the same load sensor.
I tried a curl command from within the EMHASS docker container to retrieve the sensor history from HA's RESTful API, and that worked well.
I upgraded EMHASS to the latest version, still the same error.
I ran a Python snippet to test the same kind of call to HA that EMHASS does, and it could retrieve the sensor history.
A couple of days later, the sensor just works in EMHASS again. I did nothing to fix it.

So sorry I can't help you solve it, but know that you aren't alone. By the way, are you running the EMHASS plugin or standalone Docker?

Emhass plugin.

Hello, it looks like you are from Sweden. How have you calculated the electricity buy vs. sell price? I'm not sure how to calculate that.

Write a template that adjusts the hours to suit.
I have a Tesla which I control via the Tesla HACS integration.
The entities I use to control the charging are:

  1. device_tracker.tesla_location_tracker - car location
  2. binary_sensor.tesla_charger - charger is connected to the car
  3. number.tesla_charge_limit - charge limit set in the car
  4. sensor.tesla_battery - SoC of the car battery

So in the MPC POST I first check that the car is at home and connected to the charger; if not, I return 0 hours, otherwise I calculate a number of hours depending on the difference between the current SoC and the desired SoC, as in the template below.

{%- if is_state('device_tracker.tesla_location_tracker', ['home']) -%}
  {%- if is_state('binary_sensor.tesla_charger', ['on']) -%}
    {{ ((states('number.tesla_charge_limit')|int(80)- 
     (states('sensor.tesla_battery')|int(0)))/30*3)|int(0) }}
  {%- else -%} 
    0
  {%- endif -%}
{%- else -%} 
      0
{%- endif -%}

You are right. Today everything is back to normal with no errors. I didn't do a thing to it.