Hass just stops sometimes - how do I find out why?

Hi. This isn’t really about configuration, but this category seems to be the catch-all for other topics.

Sometimes, maybe once a week, maybe more often, I just find hass is not running anymore and I can’t see why.
Here’s the tail of my log file from today… nothing looks like a problem to me.
Is there another log I can search for clues in?
I’m running it as an init.d daemon on Ubuntu 14.04 on a BeagleBone Black (Ninja Sphere).

17-03-05 13:07:42 INFO (Wemo HTTP Thread) [pywemo.subscribe] Received event from <WeMo Motion "Motion Hallway">(192.168.1.4) - BinaryState 0 
17-03-05 13:07:42 INFO (Wemo HTTP Thread) [homeassistant.components.binary_sensor.wemo] Subscription update for  <WeMo Motion "Motion Hallway"> 
17-03-05 13:07:51 INFO (MainThread) [homeassistant.components.http] Serving /api/services/device_tracker/see to 49.197.182.246 (auth: True) 
17-03-05 13:07:51 INFO (MainThread) [homeassistant.core] Bus:Handling <Event call_service[L] domain=device_tracker, service=see, service_call_id=3054880016-79, service_data=hostname=LWi6, gps_accuracy=65, dev_id=lwi6, battery=90, gps=[-19.281463125834, 146.798547494072], battery_status=Unplugged> 
17-03-05 13:07:51 INFO (MainThread) [homeassistant.core] Bus:Handling <Event state_changed[L] entity_id=device_tracker.lwi6, new_state=<state device_tracker.lwi6=not_home; source_type=gps, entity_picture=/local/LWheadSquareVerySmall.jpg, friendly_name=Lindsay i, gps_accuracy=65, longitude=146.798547494072, latitude=-19.281463125834, battery=90 @ 2017-03-05T09:09:17.275781+10:00>, old_state=<state device_tracker.lwi6=not_home; source_type=gps, entity_picture=/local/LWheadSquareVerySmall.jpg, friendly_name=Lindsay i, gps_accuracy=65, longitude=146.8172563781491, latitude=-19.31587725971317, battery=96 @ 2017-03-05T09:09:17.275781+10:00>> 
17-03-05 13:07:52 INFO (MainThread) [homeassistant.core] Bus:Handling <Event service_executed[L] service_call_id=3054880016-79> 
17-03-05 13:07:52 INFO (MainThread) [homeassistant.components.http] Serving /api/camera_proxy/camera.living_room to 49.197.182.246 (auth: True) 
17-03-05 13:07:53 INFO (MainThread) [homeassistant.components.http] Serving /api/camera_proxy/camera.living_room to 49.197.182.246 (auth: True) 
17-03-05 13:07:53 INFO (MainThread) [homeassistant.components.http] Serving /api/camera_proxy/camera.living_room to 49.197.182.246 (auth: True) 
17-03-05 13:07:53 INFO (MainThread) [homeassistant.components.http] Serving /api/camera_proxy/camera.living_room to 49.197.182.246 (auth: True) 

Thanks.

Any more help here? Today it’s just stopped twice in a few hours.

Try looking in /var/log/syslog and /var/log/messages. They are system log files.
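For example, something along these lines. The timestamp and search patterns are just a starting point (adjust them to roughly match the tail of your Home Assistant log):

grep -i 'Mar  5 13:0' /var/log/syslog              # entries from around the time it stopped
grep -iE 'hass|python|killed|oom' /var/log/syslog   # mentions of the process, or of the kernel killing it
dmesg | tail -n 50                                  # recent kernel messages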

Otherwise, try raising the logging level in the logger component so it records more detail.
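A rough sketch, assuming the default config location for a manual install (adjust the path to wherever your configuration.yaml actually lives), then restart Home Assistant:

# If you already have a logger: section, edit it instead of appending.
cat >> /home/hass/.homeassistant/configuration.yaml <<'EOF'

logger:
  default: debug
EOF

Debug output is very chatty, so you may only want to leave it on until you catch the next crash.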

Thanks for the reply. /var/log/messages didn’t exist on my system.
/var/log/syslog only had some records of cron running; nothing else around the time of the last log entry in my homeassistant log.

You could also try some of the ideas I put in this thread, but I think the problem here is more of a system-wide error rather than something in HA itself.

Not a fix for this, but maybe a good workaround and test method.

If you’re familiar with Docker, try running Home Assistant in Docker.
You can use the same config folder and IP; just do something like the command below after installing Docker:

RUN COMMAND:
docker run -v /docker/docker/homeassistant:/config -p 1883:1883 -p 8123:8123 --name=containername homeassistant/home-assistant:version

FOR EXAMPLE
This will download version 0.27.2 and name the container HASS-0272

docker run -v /docker/docker/homeassistant:/config -p 1883:1883 -p 8123:8123 --name=HASS-0272 homeassistant/home-assistant:0.27.2
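One thing to watch: if your existing init.d copy is still running it will be holding port 8123, so stop it before starting the container. The service name below is just a guess, use whatever your init script is actually called:

sudo service home-assistant stop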

You may need to change the volume mapping based on your setup.
In some other Docker installs I use something like “-v /volume2/docker/folder:/containerfolder”.
You can change the ports as needed.
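A nice side effect for this kind of debugging is that Docker keeps the container’s output even after it dies, so if it does stop again you can read what it printed last (container name taken from the example above):

docker ps -a                       # did the container exit, and with what status?
docker logs --tail 200 HASS-0272   # the last output before it stopped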

Thanks for the suggestion. I’ve used Docker a small amount before, but it’s a bit of a grey area for me.
What is the point? Just to see if the environment would be more stable?

Yes.

If Docker is stable, maybe stick with that, or rebuild your current non-Docker server from scratch.