EMHASS: An Energy Management System for Home Assistant

I also had it twice, but I couldn’t find any logs.
EMHASS just stopped, and a restart of the add-on fixed it.

I will temporarily set up an automation to restart EMHASS.

I’m also having the same problems.

The current add-on test image fixes the crashing issue; you can use it in the meantime. Just copy your configuration into the test image’s configuration and you are done. The fix will, of course, be included in the next release.


EMHASS 0.16.2 has been released and should fix the crashing issue.

@davidusb
I still have the issue that, on the configuration page, I can no longer switch between the “graphical” and “code-style” views. There is no reaction when I click “<>”.
I removed the 0s (zeros) from ‘operating_hours_of_each_deferrable_load’: [None, None, None, None], and the problem has existed since then (I see this in the logs).

Is there any other place (e.g. in the file system) where I can set ‘operating_hours_of_each_deferrable_load’ back to [0, 0, 0, 0]?

Thanks for your help

Yes, you can try to manually modify the options.json file inside the add-on Docker image. It depends on where you’re storing your config.json: there are two options, either inside the add-on container or in the “share” folder in Home Assistant. You’ll need the Terminal & SSH add-on.

Follow this procedure from Gemini:

Here is the procedure to modify the configuration, covering both the internal method (installing nano) and the external method (editing via the share folder), which is much easier for most users.

Option 1: Edit from the “Share” Folder (Recommended)

This is the easiest method because you can access the file directly from your computer if you have the Samba Share add-on, or simply use the File Editor add-on in Home Assistant.

  1. Locate the File:
    The file config.json is typically located in your Home Assistant’s /share directory.
  • Path: /share/config.json
  2. Edit the File:
  • Open the File Editor (or VS Code) add-on in Home Assistant.
  • Navigate to the file and open it.
  • Make your changes to the JSON parameters.
  • Save the file.
  3. Restart EMHASS:
  • Go to Settings > Add-ons > EMHASS.
  • Click Restart for the changes to take effect.

Option 2: Edit Inside the Container (Advanced)

Use this if you cannot access the file externally or need to edit files that are not exposed to the /share folder.

1. Find the EMHASS Container ID
Open the Terminal in Home Assistant and run:

docker ps | grep emhass

Copy the Container ID (e.g., a1b2c3d4e5).

2. Enter the Container

docker exec -it <ID> bash

(If bash fails, try sh)

3. Install nano
Since the container is lightweight, it likely lacks an editor. Install one:

apk update && apk add nano

(If that fails, try apt-get update && apt-get install nano)

4. Edit the File
Navigate to the data directory (typically /data or wherever the config resides in the container):

cd /data
nano config.json

  • Edit your parameters.
  • Press Ctrl+O then Enter to Save.
  • Press Ctrl+X to Exit.

5. Restart the Container
Type exit to leave the container, then run:

docker restart <ID>

:warning: Important:
If you edit config.json manually, avoid opening the “Configuration” tab in the EMHASS Add-on UI and clicking “Save”, as this may overwrite your manual changes with the old values stored in the UI form.
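As an alternative to editing by hand, the same change can be scripted. A minimal Python sketch (the file name here is illustrative; in the add-on the file is typically /data/options.json or /share/config.json, depending on your setup):

```python
import json

# Create a minimal example file so the sketch is self-contained.
# In a real setup you would point `path` at your actual EMHASS config.
path = "config.json"
with open(path, "w") as f:
    json.dump({"operating_hours_of_each_deferrable_load": [None, None, None, None]}, f)

# Load the config, reset the deferrable-load hours back to zeros, write it back.
with open(path) as f:
    cfg = json.load(f)
cfg["operating_hours_of_each_deferrable_load"] = [0, 0, 0, 0]
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)

with open(path) as f:
    print(json.load(f)["operating_hours_of_each_deferrable_load"])  # [0, 0, 0, 0]
```

The same caveat applies: restart the add-on afterwards and avoid saving from the UI, which may overwrite the file.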

Hi,
After update to the latest version I have the following problem. How can I fix this?
WARNING in optimization: MPC Prediction Horizon (56) does not match the initialized optimization window (96). This may cause shape mismatch errors in the solver

Probably the related issue and a possible solution here: Suddenly infeasible after updating to 0.16.1 · Issue #700 · davidusb-geek/emhass · GitHub

That’s it! SOLVED! Thanks a lot.

After removing the add-on from HA and installing it again, the warning in optimization is not there anymore.
But this is still there:

[2026-01-30 14:52:00,920] INFO in forecast: Retrieving weather forecast data using method = open-meteo

[2026-01-30 14:52:00,920] INFO in forecast: Loading existing cached Open-Meteo JSON file: /data/cached-open-meteo-forecast-b.json

[2026-01-30 14:52:00,921] INFO in forecast: The cached Open-Meteo JSON file is recent (age=10m, max_age=30m)

/app/.venv/lib/python3.12/site-packages/scipy/optimize/_chandrupatla.py:437: RuntimeWarning:

invalid value encountered in divide

/app/.venv/lib/python3.12/site-packages/scipy/optimize/_chandrupatla.py:437: RuntimeWarning:

invalid value encountered in divide

These seem to happen when the solar irradiance is very close to zero.
The pvlib module handles these very well, so you can safely ignore these warnings.
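For anyone curious where these come from, here is a minimal sketch (not EMHASS code) of the same class of warning: a 0/0 during near-zero irradiance produces NaN, which downstream code replaces with a safe value, so the warning is harmless.

```python
import numpy as np

# W/m^2; the last step represents nighttime or snow-covered panels.
irradiance = np.array([800.0, 50.0, 0.0])

# 0/0 -> NaN at the zero entry; errstate silences the same
# "invalid value encountered in divide" RuntimeWarning seen in the logs.
with np.errstate(invalid="ignore"):
    ratio = irradiance / irradiance

# Downstream code (as pvlib does internally) replaces NaN with a safe 0.
cleaned = np.nan_to_num(ratio, nan=0.0)
print(cleaned)  # [1. 1. 0.]
```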

Ok. Thanks.
The solar panels are currently completely covered with snow.

Hi everyone,

I’m running EMHASS with a separate Day-Ahead optimization (once per day) and a short-horizon MPC loop every 15 minutes, and I’d like to better understand the following warning that appears on every MPC run:

WARNING in optimization: MPC Prediction Horizon (16) does not match the initialized optimization window (96).
INFO in optimization: Resizing optimization problem from 96 to 16 timesteps.

Is it correct that I have to reinstall the add-on to solve this warning?
Functionally everything works fine, but I want to make sure this warning is safe to ignore and that I’m using EMHASS as intended.

Thanks a lot for your insights!

What do you mean by a short-horizon MPC run? It does not need to be short horizon; e.g., I supply it with a horizon until the end of the next day, so up to almost 48 hours × 4 (15-minute timesteps). Did you configure the timestep in the config or on the command line? In the latter case it might differ between the day-ahead and MPC runs. Perhaps you can supply the curl/REST commands.

My setup

Day-Ahead (runs once in the morning):

  • delta_forecast_daily = 1
  • optimization_time_step = 15
    → results in a 24h / 96-step optimization
    The resulting scheduled SOC is stored and later used as a target for MPC.

MPC (runs every 15 minutes):

  • prediction_horizon = 16 (4 hours)
  • optimization_time_step = 15
  • soc_init = current battery SOC
  • soc_final dynamically derived from the Day-Ahead SOC near the end of the MPC horizon

The MPC payload explicitly contains prediction_horizon = 16.

With the day-ahead SOC from the morning run, soc_final meets the day-ahead SOC at that point in time. This gives good regulation of the battery load.
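The soc_final lookup described above can be sketched in plain Python (names and data shape are illustrative, not the EMHASS API): pick the day-ahead scheduled SOC whose timestamp is closest to the end of the MPC horizon.

```python
from datetime import datetime, timedelta

def soc_final_from_dayahead(schedule, now, steps=16, step_min=15, fallback=0.5):
    """Return the day-ahead SOC (as a 0..1 fraction) nearest the MPC horizon end."""
    horizon_end = now + timedelta(minutes=steps * step_min)  # 16 * 15 min = 4 h
    if not schedule:
        return fallback  # no day-ahead data yet: fall back to a default SOC
    best = min(schedule, key=lambda e: abs(e["date"] - horizon_end))
    return best["soc"] / 100  # percent -> fraction, as soc_final expects

now = datetime(2026, 1, 30, 12, 0)
schedule = [
    {"date": datetime(2026, 1, 30, 15, 45), "soc": 40},
    {"date": datetime(2026, 1, 30, 16, 0), "soc": 55},
    {"date": datetime(2026, 1, 30, 16, 15), "soc": 60},
]
print(soc_final_from_dayahead(schedule, now))  # 0.55 (entry nearest 16:00)
```

This mirrors the nearest-timestamp loop in the `emhass_mpc_payload` template further down.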

My question

Even though the Day-Ahead optimization runs only once per day, the MPC warning appears on every MPC execution.

This suggests that:

  • the optimization problem is initially built with a 96-step window (likely due to the Day-Ahead configuration or internal defaults),
  • and then resized to 16 steps once the MPC payload is processed.

Is this expected behavior?

  • Does EMHASS always initialize the optimization window based on delta_forecast_daily / internal config before reading the MPC payload?
  • Is there a recommended way to ensure that the MPC optimization is initialized directly with the MPC horizon, without resizing (e.g. config separation, different endpoint usage, or separate instances)?

To complete the picture, here are my REST commands and sensors:

  emhass_mlforecast_fit:
    url: "http://192.168.0.38:5000/action/forecast-model-fit"
    method: POST
    headers:
      Content-Type: application/json
    payload: >
      {
        "historic_days_to_retrieve": 20,
        "model_type": "long_train_data",
        "var_model": "sensor.qcells_house_load",
        "sklearn_model": "KNeighborsRegressor",
        "num_lags":96,
        "split_date_delta": "48h",
        "perform_backtest": "False"
      }
      
  emhass_mlforecast_tune:
    url: "http://192.168.0.38:5000/action/forecast-model-tune"
    method: POST
    headers:
      Content-Type: application/json
    payload: >
      {
        "historic_days_to_retrieve": 20,
        "model_type": "long_train_data",
        "var_model": "sensor.qcells_house_load",
        "sklearn_model": "KNeighborsRegressor",
        "num_lags":96,
        "split_date_delta": "48h",
        "perform_backtest": "False"
      }
      
     
  emhass_mlforecast_predict:
    url: "http://192.168.0.38:5000/action/forecast-model-predict"
    method: POST
    headers:
      Content-Type: application/json
    payload: >
      {
        "model_type": "long_train_data",
        "model_predict_publish": true,
        "model_predict_entity_id": "sensor.p_load_forecast_custom_model",
        "model_predict_unit_of_measurement": "W",
        "model_predict_friendly_name": "Load Power Forecast custom ML model",
        "var_model": "sensor.qcells_house_load"
      }
      
  emhass_dayahead:
    url: "http://192.168.0.38:5000/action/dayahead-optim"
    method: POST
    headers:
      Content-Type: application/json
    payload: >
      {
        "pv_power_forecast":{{ state_attr('sensor.emhass_solcast_pv_forecast', 'pv_power_forecast') | to_json}},
        "delta_forecast_daily": 1,
        "optimization_time_step":15,
        "load_cost_forecast": {{ state_attr('sensor.emhass_epex_forecast', 'load_cost_forecast') | to_json}},
        "load_power_forecast": {{ state_attr('sensor.emhass_load_forecast_dict_local', 'load_forecast') | to_json}}
      }
      
  emhass_naive_mpc_optim_forecast:
    url: http://192.168.0.38:5000/action/naive-mpc-optim
    method: POST
    content_type: "application/json"
    timeout: 30
    payload: "{{ state_attr('sensor.emhass_mpc_payload','payload') }}"

  emhass_publish_data:
    url: http://192.168.0.38:5000/action/publish-data
    method: POST
    content_type: 'application/json'
    payload: >-
      {"publish_prefix": "{{ prefix | default('') }}"}

    
template:
  - sensor:
      - name: "emhass_epex_forecast"
        unique_id: emhass_epex_forecast
        icon: mdi:script
        state: "{{ now() }}"  # Any valid state; the value is ignored by EMHASS
        attributes:
          load_cost_forecast: >
            {% set items = state_attr('sensor.epex_spot_data_total_price_3', 'data') %}
            {% if not items %}
              {}
            {% else %}
              {%- set ns = namespace(out={}) -%}
              {%- for i in items -%}
                {%- set ts = as_datetime(i.start_time).strftime('%Y-%m-%d %H:%M:%S%z') -%}
                {%- set ts = ts[:-2] ~ ':' ~ ts[-2:] %} {# formats +0100 → +01:00 #}
                {%- set ns.out = ns.out | combine({ ts: i.price_per_kwh }) -%}
              {%- endfor -%}
              {{ ns.out }}
            {% endif %}

  - sensor:
      - name: "emhass_load_forecast_dict_local"
        state: "{{ now() }}"
        attributes:
          load_forecast: >
            {% set items = state_attr('sensor.p_load_forecast_custom_model', 'scheduled_forecast') %}
            {% if not items %}
              {}
            {% else %}
              {%- set ns = namespace(out={}) -%}
              {%- for i in items -%}
                {# parse original, convert to local timezone #}
                {%- set dt = as_local(as_datetime(i.date)) -%}
                {%- set ts = dt.strftime('%Y-%m-%d %H:%M:%S%z') -%}
                {%- set ts = ts[:-2] ~ ':' ~ ts[-2:] -%}
                {%- set val = (i.p_load_forecast_custom_model | float) -%}
                {%- set ns.out = ns.out | combine({ ts: val }) -%}
              {%- endfor -%}
              {{ ns.out }}
            {% endif %}    
            
  - sensor:
      - name: "emhass_solcast_pv_forecast"
        icon: mdi:solar-power
        state: "{{ now() }}"
        attributes:
          pv_power_forecast: >
            {% set heute = state_attr('sensor.solcast_pv_forecast_prognose_heute', 'detailedForecast') or [] %}
            {% set morgen = state_attr('sensor.solcast_pv_forecast_prognose_morgen', 'detailedForecast') or [] %}
            {% set items = heute + morgen %}
            {% if not items %}
              {}
            {% else %}
              {%- set ns = namespace(out={}) -%}
              {%- for i in items -%}
                {%- set dt = as_datetime(i.period_start) -%}
                {%- if dt -%}
                  {%- set ts = dt.strftime('%Y-%m-%d %H:%M:%S%z') -%}
                  {%- set ts = ts[:-2] ~ ':' ~ ts[-2:] -%} {# formats +0100 → +01:00 #}
                  {%- set ns.out = ns.out | combine({ ts: (i.pv_estimate | float |multiply(1000) ) }) -%}
                {%- endif -%}
              {%- endfor -%}
              {{ ns.out }}
            {% endif %}
 
  - sensor:
      - name: "emhass_solcast_pv_forecast_bias"
        icon: mdi:solar-power
        state: "{{ now() }}"
        attributes:
          pv_power_forecast: >
            {% set bias = states('sensor.pv_bias_faktor') | float(1) %}
            {% set heute = state_attr('sensor.solcast_pv_forecast_prognose_heute', 'detailedForecast') or [] %}
            {% set morgen = state_attr('sensor.solcast_pv_forecast_prognose_morgen', 'detailedForecast') or [] %}
            {% set items = heute + morgen %}
            {% if not items %}
              {}
            {% else %}
              {%- set ns = namespace(out={}) -%}
              {%- for i in items -%}
                {%- set dt = as_datetime(i.period_start) -%}
                {%- if dt -%}
                  {%- set ts = dt.strftime('%Y-%m-%d %H:%M:%S%z') -%}
                  {%- set ts = ts[:-2] ~ ':' ~ ts[-2:] -%}
                  {%- set pv = (i.pv_estimate | float * 1000 * bias) | round(0) -%}
                  {%- set ns.out = ns.out | combine({ ts: pv }) -%}
                {%- endif -%}
              {%- endfor -%}
              {{ ns.out }}
            {% endif %}

 
            
  - sensor:
      - name: "emhass_mpc_payload"
        state: "{{ now() }}"
        attributes:
          payload: >
            {% set pv_attr = state_attr('sensor.emhass_solcast_pv_forecast_bias', 'pv_power_forecast') or {} %}
            {% set load_attr = state_attr('sensor.emhass_load_forecast_dict_local', 'load_forecast') or {} %}
            {% set price_attr = state_attr('sensor.emhass_epex_forecast', 'load_cost_forecast') or {} %}
            {% set soc_init = (states('sensor.qcells_battery_capacity') | float(0)) / 100 %}
            {# --- dynamic soc_final depending on the day-ahead run --- #}
            {# Prediction horizon: 4h = 16 * 15min steps #}
            {% set prediction_horizon_steps = 16 %}
            {% set horizon_end_ts = as_timestamp(now()) + prediction_horizon_steps * 15 * 60 %}

            {% set dayahead = state_attr('sensor.dayaheadsoc_batt_forecast', 'battery_scheduled_soc') or [] %}
            {% set ns = namespace(best_diff=999999, best_soc=none) %}

            {% for i in dayahead %}
              {% set dt = as_timestamp(as_datetime(i.date)) %}
              {% set diff = (dt - horizon_end_ts) | abs %}
              {% if diff < ns.best_diff %}
                {% set ns.best_diff = diff %}
                {% set ns.best_soc = (i.dayaheadsoc_batt_forecast | float(50)) / 100 %}
              {% endif %}
            {% endfor %}

            {% set soc_final = ns.best_soc if ns.best_soc is not none else soc_init %}
            
            {# Build payload dict #}
            {% set payload = {
              "pv_power_forecast": pv_attr,
              "load_power_forecast": load_attr,
              "load_cost_forecast": price_attr,
              "prediction_horizon": prediction_horizon_steps,
              "soc_init": soc_init,
              "soc_final": soc_final
                          } %}
            {{ payload | tojson }}

Look here: Optimization window size mismatch in 0.16.0 · Issue #694 · davidusb-geek/emhass · GitHub

This is related to delta_forecast_daily; you can safely ignore this warning, EMHASS is still working as expected.

Thanks a lot, Ralf

It’s fixed in the latest releases.

(post deleted by author)

Hello, I’m new to EMHASS and have tried my first simulations. My question: if I run an optimization with a load sensor for the whole building (70 kW+, naive method), the optimization runs for a very long time; I restarted the whole system after 2 h. If I copy the same sensor scaled by a factor of 0.1 (so ~7 kW), the same simulation takes 2 seconds to calculate.
Is there a limitation or overflow I didn’t read about in the documentation? Otherwise I’ll have to scale every sensor…

Florian :slight_smile: