EMHASS: Energy Management for Home Assistant

Looking into optimizing my self-consumption, I came across EMHASS. However, I am struggling with the advanced setup, mainly due to missing information about how to create custom sensors and an automation to publish data to EMHASS.

  • I currently have 4 sensors for electricity prices, based on template sensors scraping the information from my energy provider. I have an off-peak and a peak price for electricity, for both production and consumption (used in Home Assistant to calculate cost).
  • In my country the weekend also falls fully in the off-peak period.
  • I'm looking at how to create my own hourly prediction template sensor in Home Assistant (template YAML).
  • I'm looking for a "blueprint" automation publishing this data to EMHASS every night for optimization.
    Are there any links or clues? I tried reading and searching but could not find what I'm looking for.
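Not a blueprint, but the peak/off-peak logic described in the bullets above can be sketched in plain Python first; the prices, the peak window, and the function name below are my own illustrative assumptions, not anything from the EMHASS docs. The same logic then translates into a Jinja template sensor or into the list passed to EMHASS at runtime:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the peak/off-peak tariff logic described above.
# The prices and the 07:00-23:00 weekday peak window are assumptions;
# substitute your provider's actual values and hours.
PEAK_PRICE = 0.30      # currency/kWh, assumed
OFF_PEAK_PRICE = 0.20  # currency/kWh, assumed

def hourly_load_cost(start: datetime, hours: int = 24) -> list[float]:
    """Build an hourly price list; weekends are fully off-peak."""
    prices = []
    for h in range(hours):
        t = start + timedelta(hours=h)
        weekend = t.weekday() >= 5                # Saturday or Sunday
        peak = (not weekend) and 7 <= t.hour < 23
        prices.append(PEAK_PRICE if peak else OFF_PEAK_PRICE)
    return prices

saturday = datetime(2025, 1, 18)                  # a Saturday
print(hourly_load_cost(saturday)[:4])             # all off-peak on a weekend
```

A list like this is the kind of thing the `load_cost_forecast` runtime parameter in the EMHASS REST call expects, as seen in the logs further down this thread.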

If you want help with why the battery isn't charging, we can't provide any insights unless you include a copy of your charts so we can see what is going on.

I have been trying to find documentation on how to format such CSV files, both for load cost and production sell price, but I'm failing. I have found the expected location of the CSV (e.g. /data/data_load_cost_forecast.csv), but not the expected format.
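For what it's worth, here is a heavily hedged sketch of what such a file might look like: a headerless timestamp/value CSV with one row per optimization time step. Both the two-column layout and the 30-minute step are my assumptions, not the documented EMHASS format, so verify against your installation before relying on it:

```python
import csv
from datetime import datetime, timedelta

# ASSUMED format: "timestamp,value" rows, no header, one row per
# optimization time step (30 minutes here). Not confirmed by the docs.
start = datetime(2025, 1, 18, 0, 0)
rows = [((start + timedelta(minutes=30 * i)).isoformat(), 0.2445)
        for i in range(48)]

with open("data_load_cost_forecast.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

print(rows[0])  # first timestamp/value pair
```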

I think this is the correct link to the EMHASS Blueprint development.

Here in Sweden, the trading and settlement period for electricity will transition from 60-minute to 15-minute intervals this summer. This change is being implemented to harmonize the electricity market with a uniform time period across Europe. The transition is regulated by law and will impact all participants in the electricity market.

How is this handled in other countries? Will it be possible to control EMHASS using 15-minute intervals? For context, I use Nordpool.

Is there anything I can prepare in advance to ensure a smooth transition to the new intervals?

Here are the charts. The electricity price is low the whole night, but no charging is planned. Discharge is planned between 07 AM and 07 PM (07-17), although the price difference is less than 1 Swedish krona and weight_battery_discharge: 1

I don't think this should be negative

What is the correct value?

0.95 not -0.95 in my case.

Thanks a lot. It may be the cause of my charging problem.

Did that solve your problem?

Svensk? (Swedish?)

Is it possible to insert variables here?

Something happened, or I did something, to break my day-ahead optimizer, but I can't figure it out. I'm sure I am missing something obvious, but in the log below you can see that if I set load_forecast_method to typical (the new default, from what I understand), I get the "No such file or directory: '/app/data/data_train_load_clustering.pkl'" error message. If I put it back to naive, EMHASS crashes. The REST command with the runtime parameters you see in the log is the same one I used before, and it worked fine...

2025-01-18 22:17:34,870 - web_server - INFO - Saved parameters from webserver
2025-01-18 22:17:40,352 - web_server - INFO - EMHASS server online, serving index.html...
2025-01-18 22:17:40,363 - web_server - INFO - The data container dictionary is empty... Please launch an optimization task
2025-01-18 22:17:57,108 - web_server - INFO -  >> Obtaining params: 
2025-01-18 22:17:57,111 - web_server - INFO - Passed runtime parameters: {'load_cost_forecast': [0.2445, 0.2445, 0.2369, 0.2369, 0.2293, 0.2293, 0.2297, 0.2297, 0.2238, 0.2238, 0.2226, 0.2226, 0.2205, 0.2205, 0.2194, 0.2194, 0.2192, 0.2192, 0.2209, 0.2209, 0.2229, 0.2229, 0.2231, 0.2231, 0.2249, 0.2249, 0.2249, 0.2249, 0.2214, 0.2214, 0.2108, 0.2108, 0.2057, 0.2057, 0.2157, 0.2157, 0.2223, 0.2223, 0.2356, 0.2356, 0.2496, 0.2496, 0.2617, 0.2617, 0.2592, 0.2592, 0.2512, 0.2512], 'prod_price_forecast': [0.1256, 0.1256, 0.1187, 0.1187, 0.1118, 0.1118, 0.1122, 0.1122, 0.1068, 0.1068, 0.1057, 0.1057, 0.1038, 0.1038, 0.1028, 0.1028, 0.1026, 0.1026, 0.1042, 0.1042, 0.1059, 0.1059, 0.1061, 0.1061, 0.1078, 0.1078, 0.1078, 0.1078, 0.1046, 0.1046, 0.095, 0.095, 0.0904, 0.0904, 0.0995, 0.0995, 0.1055, 0.1055, 0.1175, 0.1175, 0.1302, 0.1302, 0.1412, 0.1412, 0.1388, 0.1388, 0.1316, 0.1316], 'pv_power_forecast': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 128, 326, 480, 685, 954, 1240, 1502, 1618, 1582, 1495, 1353, 1164, 934, 673, 403, 90, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
2025-01-18 22:17:57,112 - web_server - INFO -  >> Setting input data dict
2025-01-18 22:17:57,112 - web_server - INFO - Setting up needed data
2025-01-18 22:17:57,243 - web_server - INFO - Retrieving weather forecast data using method = list
2025-01-18 22:17:57,260 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 1511, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 919, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 917, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 902, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/web_server.py", line 388, in action_call
    input_data_dict = set_input_data_dict(
                      ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/command_line.py", line 188, in set_input_data_dict
    P_load_forecast = fcst.get_load_forecast(
                      ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/emhass/forecast.py", line 1048, in get_load_forecast
    with open(data_path, "rb") as fid:
         ^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/app/data/data_train_load_clustering.pkl'
2025-01-18 22:18:19,161 - web_server - INFO - serving configuration.html...
2025-01-18 22:18:19,340 - web_server - INFO - Obtaining parameters from config.json:
2025-01-18 22:18:32,903 - web_server - INFO - Saved parameters from webserver
2025-01-18 22:18:41,746 - web_server - INFO -  >> Obtaining params: 
2025-01-18 22:18:41,752 - web_server - INFO - Passed runtime parameters: {'load_cost_forecast': [0.2445, 0.2445, 0.2369, 0.2369, 0.2293, 0.2293, 0.2297, 0.2297, 0.2238, 0.2238, 0.2226, 0.2226, 0.2205, 0.2205, 0.2194, 0.2194, 0.2192, 0.2192, 0.2209, 0.2209, 0.2229, 0.2229, 0.2231, 0.2231, 0.2249, 0.2249, 0.2249, 0.2249, 0.2214, 0.2214, 0.2108, 0.2108, 0.2057, 0.2057, 0.2157, 0.2157, 0.2223, 0.2223, 0.2356, 0.2356, 0.2496, 0.2496, 0.2617, 0.2617, 0.2592, 0.2592, 0.2512, 0.2512], 'prod_price_forecast': [0.1256, 0.1256, 0.1187, 0.1187, 0.1118, 0.1118, 0.1122, 0.1122, 0.1068, 0.1068, 0.1057, 0.1057, 0.1038, 0.1038, 0.1028, 0.1028, 0.1026, 0.1026, 0.1042, 0.1042, 0.1059, 0.1059, 0.1061, 0.1061, 0.1078, 0.1078, 0.1078, 0.1078, 0.1046, 0.1046, 0.095, 0.095, 0.0904, 0.0904, 0.0995, 0.0995, 0.1055, 0.1055, 0.1175, 0.1175, 0.1302, 0.1302, 0.1412, 0.1412, 0.1388, 0.1388, 0.1316, 0.1316], 'pv_power_forecast': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 128, 326, 480, 685, 954, 1240, 1502, 1618, 1582, 1495, 1353, 1164, 934, 673, 403, 90, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
2025-01-18 22:18:41,752 - web_server - INFO -  >> Setting input data dict
2025-01-18 22:18:41,752 - web_server - INFO - Setting up needed data
2025-01-18 22:18:41,904 - web_server - INFO - Retrieving weather forecast data using method = list
2025-01-18 22:18:41,917 - web_server - INFO - Retrieving data from hass for load forecast using method = naive
2025-01-18 22:18:41,921 - web_server - INFO - Retrieve hass get data method initiated...
2025-01-18 22:18:41,921 - web_server - INFO - Retrieve hass get data method initiated...
[2025-01-18 22:19:15,901] DEBUG in utils: Obtaining secrets from Home Assistant Supervisor API
2025-01-18 22:19:15,945 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000

Hi folks,

Relatively new, but I'm running into an issue. Probably a dumb mistake, but I haven't been able to solve it.

  1. I have set up EMHASS to pull from my SolCast a few times a day and save to cache... That automation and script seems to work.
  2. Then, my day-ahead optimisation automation is set to run a few times a day, with "weather_forecast_cache_only":true so that it only pulls the forecasted solar PV power from the cache, not via a new call to SolCast. That also seems to work well (until the next thing messes it up, at least).
  3. I have set my naive-MPC automation to run every 5 minutes, again using "weather_forecast_cache_only":true. However, it errors and says it is "Unable to obtain cached Solcast forecast data within the requested timeframe range", then tells me to try allowing it to make a fresh call. But it then goes ahead and deletes the existing cache file, which then breaks the day-ahead optimisation automation, and so on.

Obviously, I'd love your help with why the naive-MPC automation is having issues with the cached PV power file, because I'm a bit stumped.

Configs, automations, and logs below.

HA configuration.yaml:

# EMHASS
shell_command:

  weather_forecast_cache:
    curl -i -H 'Content-Type:application/json' -X POST -d '{
        "entity_save":true
    }' http://homeassistant.local:5000/action/weather-forecast-cache

  dayahead_optim: 
    curl -i -H 'Content-Type:application/json' -X POST -d '{
        "entity_save":true, 
        "optimization_time_step":30,
        "weather_forecast_cache_only":true
    }' http://homeassistant.local:5000/action/dayahead-optim

  naive_mpc_optim:
    curl -i -H 'Content-Type:application/json' -X POST -d '{
        "entity_save":true, 
        "optimization_time_step":5, 
        "prediction_horizon":30, 
        "weather_forecast_cache_only":true
    }' http://homeassistant.local:5000/action/naive-mpc-optim

  publish_data: 
    curl -i -H "Content-Type:application/json" -X POST -d '{
        
    }' http://homeassistant.local:5000/action/publish-data

HA Automations.yaml:

#EMHASS
- alias: SolCast get predicted PV power
  triggers:
    - trigger: time
      at:
        - "04:50:00"
        - "07:55:00"
        - "10:37:00"
        - "11:55:00"
        - "15:55:00"
        - "18:55:00"
  action:
  - service: shell_command.weather_forecast_cache

- alias: EMHASS day-ahead optimization
  triggers:
    - trigger: time
      at:
        - "04:55:00"
        - "10:38:30"
        - "11:57:00"
        - "15:57:00"
  action:
  - service: shell_command.dayahead_optim

- alias: EMHASS MPC optimisation
  triggers:
    - trigger: time_pattern
      minutes: "/5"
  action:
  - service: shell_command.naive_mpc_optim

- alias: EMHASS publish data
  triggers:
    - trigger: time_pattern
      minutes: "/6"
  action:
  - service: shell_command.publish_data

Log file (note: 10:37:00 shows Solcast call and save to cache, 10:38:30 shows day-ahead optimization run, 10:40:00 shows failed MPC run):

2025-01-19 10:36:56,069 - web_server - INFO - EMHASS server online, serving index.html...
2025-01-19 10:37:00,224 - web_server - INFO -  >> Obtaining params: 
2025-01-19 10:37:00,226 - web_server - INFO - Passed runtime parameters: {'entity_save': True}
2025-01-19 10:37:00,227 - web_server - INFO -  >> Performing weather forecast, try to caching result
2025-01-19 10:37:00,233 - web_server - DEBUG - setting`passed_data:days_to_retrieve` to 9 for fit/predict/tune
2025-01-19 10:37:00,239 - web_server - INFO - Retrieving weather forecast data using method = solcast
2025-01-19 10:37:00,979 - web_server - INFO - Saved the Solcast results to cache, for later reference.
2025-01-19 10:38:30,117 - web_server - INFO -  >> Obtaining params: 
2025-01-19 10:38:30,120 - web_server - INFO - Passed runtime parameters: {'entity_save': True, 'optimization_time_step': 30, 'weather_forecast_cache_only': True}
2025-01-19 10:38:30,120 - web_server - INFO -  >> Setting input data dict
2025-01-19 10:38:30,121 - web_server - INFO - Setting up needed data
2025-01-19 10:38:30,126 - web_server - DEBUG - setting`passed_data:days_to_retrieve` to 9 for fit/predict/tune
2025-01-19 10:38:30,157 - web_server - INFO - Retrieving weather forecast data using method = solcast
2025-01-19 10:38:30,160 - web_server - INFO - Retrieved Solcast data from the previously saved cache.
2025-01-19 10:38:30,162 - web_server - INFO - Retrieving data from hass for load forecast using method = naive
2025-01-19 10:38:30,164 - web_server - INFO - Retrieve hass get data method initiated...
2025-01-19 10:38:37,446 - web_server - INFO -  >> Performing dayahead optimization...
2025-01-19 10:38:37,446 - web_server - INFO - Performing day-ahead forecast optimization
2025-01-19 10:38:37,456 - web_server - INFO - Perform optimization for the day-ahead
2025-01-19 10:38:37,638 - web_server - DEBUG - Deferrable load 0: Proposed optimization window: 0 --> 0
2025-01-19 10:38:37,639 - web_server - DEBUG - Deferrable load 0: Validated optimization window: 0 --> 0
2025-01-19 10:38:37,664 - web_server - DEBUG - Deferrable load 1: Proposed optimization window: 0 --> 0
2025-01-19 10:38:37,664 - web_server - DEBUG - Deferrable load 1: Validated optimization window: 0 --> 0
2025-01-19 10:38:37,679 - web_server - WARNING - Solver default unknown, using default
Welcome to the CBC MILP Solver 
Version: 2.10.10 
Build Date: Sep 26 2023 
command line - /usr/local/lib/python3.11/dist-packages/pulp/solverdir/cbc/linux/arm64/cbc /tmp/abd040fd00f640a080e5f1292decc122-pulp.mps -max -timeMode elapsed -branch -printingOptions all -solution /tmp/abd040fd00f640a080e5f1292decc122-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 629 COLUMNS
At line 2738 RHS
At line 3363 BOUNDS
At line 3892 ENDATA
Problem MODEL has 624 rows, 480 columns and 1436 elements
Coin0008I MODEL read with 0 errors
Option for timeMode changed from cpu to elapsed
Continuous objective value is -1.40155 - 0.00 seconds
Cgl0003I 0 fixed, 0 tightened bounds, 22 strengthened rows, 148 substitutions
Cgl0004I processed model has 321 rows, 261 columns (195 integer (195 of which binary)) and 812 elements
Cbc0038I Initial state - 1 integers unsatisfied sum - 0.0732931
Cbc0038I Pass   1: suminf.    0.07329 (1) obj. -1.40155 iterations 1
Cbc0038I Solution found of -1.40155
Cbc0038I Relaxing continuous gives -1.40155
Cbc0038I Before mini branch and bound, 194 integers at bound fixed and 47 continuous
Cbc0038I Mini branch and bound did not improve solution (0.05 seconds)
Cbc0038I After 0.05 seconds - Feasibility pump exiting with objective of -1.40155 - took 0.00 seconds
Cbc0012I Integer solution of -1.4015451 found by feasibility pump after 0 iterations and 0 nodes (0.05 seconds)
Cbc0001I Search completed - best objective -1.40154513989406, took 0 iterations and 0 nodes (0.05 seconds)
Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost
Cuts at root node changed objective from -1.40155 to -1.40155
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Result - Optimal solution found
Objective value:                -1.40154514
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.05
Time (Wallclock seconds):       0.07
Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.06   (Wallclock seconds):       0.08
2025-01-19 10:38:37,822 - web_server - INFO - Status: Optimal
2025-01-19 10:38:37,824 - web_server - INFO - Total value of the Cost function = -1.40
2025-01-19 10:38:37,886 - web_server - INFO - Publishing data to HASS instance
2025-01-19 10:38:37,917 - web_server - INFO - Successfully posted to sensor.p_pv_forecast = 7817.2
2025-01-19 10:38:37,924 - web_server - DEBUG - Saved sensor.p_pv_forecast to json file
2025-01-19 10:38:37,935 - web_server - INFO - Successfully posted to sensor.p_load_forecast = 2762.23
2025-01-19 10:38:37,941 - web_server - DEBUG - Saved sensor.p_load_forecast to json file
2025-01-19 10:38:37,953 - web_server - INFO - Successfully posted to sensor.p_deferrable0 = 0.0
2025-01-19 10:38:37,959 - web_server - DEBUG - Saved sensor.p_deferrable0 to json file
2025-01-19 10:38:37,970 - web_server - INFO - Successfully posted to sensor.p_deferrable1 = 54.97
2025-01-19 10:38:37,976 - web_server - DEBUG - Saved sensor.p_deferrable1 to json file
2025-01-19 10:38:37,988 - web_server - INFO - Successfully posted to sensor.p_grid_forecast = -5000.0
2025-01-19 10:38:37,994 - web_server - DEBUG - Saved sensor.p_grid_forecast to json file
2025-01-19 10:38:37,999 - web_server - INFO - Successfully posted to sensor.total_cost_fun_value = -1.4
2025-01-19 10:38:38,005 - web_server - DEBUG - Saved sensor.total_cost_fun_value to json file
2025-01-19 10:38:38,008 - web_server - INFO - Successfully posted to sensor.optim_status = Optimal
2025-01-19 10:38:38,014 - web_server - DEBUG - Saved sensor.optim_status to json file
2025-01-19 10:38:38,026 - web_server - INFO - Successfully posted to sensor.unit_load_cost = 0.2943
2025-01-19 10:38:38,033 - web_server - DEBUG - Saved sensor.unit_load_cost to json file
2025-01-19 10:38:38,044 - web_server - INFO - Successfully posted to sensor.unit_prod_price = 0.052
2025-01-19 10:38:38,050 - web_server - DEBUG - Saved sensor.unit_prod_price to json file
2025-01-19 10:39:29,610 - web_server - INFO - EMHASS server online, serving index.html...
2025-01-19 10:40:00,252 - web_server - INFO -  >> Obtaining params: 
2025-01-19 10:40:00,254 - web_server - INFO - Passed runtime parameters: {'entity_save': True, 'optimization_time_step': 5, 'prediction_horizon': 30, 'weather_forecast_cache_only': True}
2025-01-19 10:40:00,255 - web_server - INFO -  >> Setting input data dict
2025-01-19 10:40:00,255 - web_server - INFO - Setting up needed data
2025-01-19 10:40:00,261 - web_server - DEBUG - setting`passed_data:days_to_retrieve` to 9 for fit/predict/tune
2025-01-19 10:40:00,303 - web_server - INFO - Retrieve hass get data method initiated...
2025-01-19 10:40:06,451 - web_server - INFO - Retrieving weather forecast data using method = solcast
2025-01-19 10:40:06,456 - web_server - ERROR - Unable to obtain cached Solcast forecast data within the requested timeframe range.
2025-01-19 10:40:06,456 - web_server - ERROR - Try running optimization again (not using cache). Optionally, add runtime parameter 'weather_forecast_cache': true to pull new data from Solcast and cache.
2025-01-19 10:40:06,457 - web_server - WARNING - Removing old Solcast cache file. Next optimization will pull data from Solcast, unless 'weather_forecast_cache_only': true

Any advice would be greatly appreciated.

My guess is that your problem is that the optimization time step in your MPC command is different from the day-ahead one. Solcast and day-ahead use 30-minute blocks (optimization_time_step), while in MPC you have it set up to use 30 five-minute blocks (so your MPC horizon is 150 minutes). Probably not what you want. If you change your MPC command to:

        "optimization_time_step":30, 
        "prediction_horizon":5,

with a prediction_horizon of 5 being the minimum for MPC, if I remember correctly, it might just do what you want it to do.
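To make the arithmetic explicit (my sketch, not anything from the EMHASS docs): the MPC window length is the step size times the number of steps, so both commands give a 150-minute window, but only the 30-minute step lines up with the cached 30-minute Solcast data.

```python
# MPC window length = optimization_time_step (minutes) x prediction_horizon (steps).
def mpc_window_minutes(optimization_time_step: int, prediction_horizon: int) -> int:
    return optimization_time_step * prediction_horizon

print(mpc_window_minutes(5, 30))   # original command: 150 min in 5-min steps
print(mpc_window_minutes(30, 5))   # suggested fix:    150 min in 30-min steps
```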

Today everything is working again with load_forecast_method naive. I don't know why, but I did a full reboot yesterday...

The error message about the missing data_train_load_clustering.pkl file when I switch load_forecast_method to "typical" is still there, but that's probably because my sensor_power_load_no_var_loads only has 5 days of history at the moment and I need 9?

Probably it is; I observe the results and they look pretty good. And yes, svensk (Swedish) :slight_smile:

@HansD you're a champion - that fixed it perfectly.

Thanks for taking the time to help. Much appreciated

Installation with Docker: Not going well

I think many of the issues I am experiencing are down to my lack of experience with Docker. When I first found EMHASS I was keen to get it running alongside HA on my Pi 3B, but each time I tried, it slowed the machine down to a crawl and really caused more issues. I never succeeded via the add-on route. This device's IP is 192.168.1.6.

That's why I decided to run it on a standalone Pi 3B running Raspberry Pi OS and Docker. It has a fixed IP address, 192.168.1.8.

Here are the issues I've found:

  1. In the installation instructions there is this docker command:
    docker run --rm -it --restart always -p 5000:5000 --name emhass-container -v ./config.json:/share/config.json -v ./secrets_emhass.yaml:/app/secrets_emhass.yaml Package emhass · GitHub

Docker simply replies with:
docker: Conflicting options: --restart and --rm.
See 'docker run --help'.

I can't believe I'm failing at such an early stage in the installation process, and with such a fundamental error. In the end I dropped the --rm option, and that at least got my container running. How do these mutually exclusive options live together in your settings?

I am new to Docker, so you're dealing with a noob when it comes to this software.

  2. It took a few iterations of my secrets_emhass.yaml file for the webserver to start running. Here is the working version:

# Use this file to store secrets like usernames and passwords.
# Learn more at Storing secrets - Home Assistant
# server_ip: 192.168.1.8
hass_url: https://mysubdomain.duckdns.org:8123/
long_lived_token: the_long_lived_token_generated_in_the_security_tab_of_my_user_profile_in_home_assistant
time_zone: Europe/London
Latitude: 52.13
Longitude: -1.19
Altitude: 203
solcast_api_key: my_key_substituted_here
solcast_rooftop_id: my-rooftop-id-here
solar_forecast_kwp: 2.8

I can go to http://192.168.1.8:5000, and see the EMHASS Energy Management home page. My console reads as follows:

emhass@emhass:~ $ sudo docker run -it --restart always -p 5000:5000 --name emhass-container -v ./config.json:/share/config.json -v ./secrets_emhass.yaml:/app/secrets_emhass.yaml Package emhass · GitHub
2025-01-13 12:40:25,972 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2025-01-13 12:40:25,973 - web_server - INFO - Home Assistant data fetch will be performed using url: https://mysubdomain.duckdns.org:8123/
2025-01-13 12:40:25,973 - web_server - INFO - The data path is: /app/data
2025-01-13 12:40:25,974 - web_server - INFO - The logging is: INFO
2025-01-13 12:40:25,985 - web_server - INFO - Using core emhass version: 0.12.2
2025-01-13 12:40:45,307 - web_server - INFO - EMHASS server online, serving index.html...
2025-01-13 12:40:45,343 - web_server - INFO - The data container dictionary is empty... Please launch an optimization task

So I go to the web page and press the Day-ahead Optimization button. After a while I get this error in the console (mirrored on the web page):

2025-01-21 12:25:09,823 - web_server - INFO - >> Obtaining params:
2025-01-21 12:25:09,825 - web_server - INFO - Passed runtime parameters: {}
2025-01-21 12:25:09,826 - web_server - INFO - >> Setting input data dict
2025-01-21 12:25:09,826 - web_server - INFO - Setting up needed data
2025-01-21 12:25:09,947 - web_server - ERROR - Unable to access Home Assistance instance, check URL
2025-01-21 12:25:09,947 - web_server - ERROR - If using addon, try setting url and token to 'empty'

I have tried combinations of URLs in place of this one: with/without port 8123, direct IP address with/without port 8123, and https:// vs http://. The error is the same. The settings are definitely being picked up and reflected in the web_server INFO messages above.

Are there any insights as to what could be wrong here? My HA Pi definitely works and is accessible with both https://mysubdomain.duckdns.org:8123/ and http://192.168.1.6:8123/.
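One way to narrow this down, as a hedged sketch: run the same URL/token pair through a direct call to the Home Assistant REST API root (`/api/` with a Bearer token, which is the standard HA REST interface) from the Pi that hosts EMHASS. The URL and token below are placeholders.

```python
import urllib.request

# Build the same kind of REST request EMHASS would make against Home
# Assistant: the /api/ root endpoint with a long-lived token as a
# Bearer header. Opening this request from the EMHASS Pi shows whether
# the URL/token pair in secrets_emhass.yaml is reachable at all.
def build_api_check(hass_url: str, token: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"{hass_url.rstrip('/')}/api/",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )

req = build_api_check("http://192.168.1.6:8123", "paste_long_lived_token_here")
print(req.full_url)  # http://192.168.1.6:8123/api/
# To actually test from the Pi (network call, so not run here):
#   with urllib.request.urlopen(req, timeout=10) as resp:
#       print(resp.status)  # 200 with {"message": "API running."} means the pair works
```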

Thanks in advance for any words of wisdom.

ChatGPT says:
The issue with the Docker command you provided is the combination of --restart always and --rm.

  • --restart always instructs Docker to automatically restart the container if it stops unexpectedly.
  • --rm tells Docker to remove the container after it exits.
    These options are inherently contradictory. If the container is removed after it exits, it cannot be restarted automatically.
    Here's how to fix the command:
  • Choose between restart and removal:
    • If you want the container to restart automatically, remove the --rm option:
      docker run -it --restart always -p 5000:5000 --name emhass-container -v ./config.json:/share/config.json -v ./secrets_emhass.yaml:/app/secrets_emhass.yaml Package emhass · GitHub

    • If you want the container to be removed after it exits, remove the --restart always option:
      docker run --rm -it -p 5000:5000 --name emhass-container -v ./config.json:/share/config.json -v ./secrets_emhass.yaml:/app/secrets_emhass.yaml Package emhass · GitHub

Additional Considerations:

  • Raspberry Pi 3B Resources: Monitor your Raspberry Pi's resources (CPU, memory, storage) to ensure the container doesn't consume too much, especially with --restart always.
  • Docker Image: Ensure the Docker image you're using (Package emhass · GitHub) is compatible with the Raspberry Pi 3B's architecture (usually ARM).
  • Volume Permissions: Verify that the volumes (./config.json, ./secrets_emhass.yaml) have the correct permissions for the Docker container to access them.
    By addressing these points, you should be able to successfully run the Docker container on your Raspberry Pi 3B.