RSyslog, offload log files to another server in HASS.io

Alright, I was able to do this by combining tips I found in other forums.

You need to set up a Docker container for logspout, which means you first have to disable Protection mode for your SSH add-on in Home Assistant / Hass.io.

Then run the following docker run command to start the logspout container:

docker run -d --name="logspout" \
        --volume=/var/run/docker.sock:/var/run/docker.sock \
        --publish=127.0.0.1:8000:80 \
        gliderlabs/logspout \
        syslog://192.168.1.252:8514

Pay attention to that last line: it is where you define the IP address and port of your syslog server (I use Graylog). Once I set up the input in Graylog, all of my logs were flowing from Home Assistant to Graylog, easy peasy. Let me know if you have questions.
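In case it helps anyone: logspout's routing URI also supports other transports, so you aren't stuck with UDP. This is a sketch from memory, so double-check the adapter names against the logspout README for your version:

```shell
# Same setup, but shipping over TCP instead of UDP (host/port are placeholders,
# adjust to your syslog server). TLS is also available as syslog+tls://.
docker run -d --name="logspout" \
        --volume=/var/run/docker.sock:/var/run/docker.sock \
        gliderlabs/logspout \
        syslog+tcp://192.168.1.252:8514
```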


I sort of expected remote syslog to be a built-in capability. I'm disappointed.
Would it be hard to create "something resembling sending lines to remote syslog" by spinning up Node-RED and making it send "relevant data" to IP:514?
Node-RED at least gets all the operational data and can send it to IP:port; building the filters and formatters should be possible.
As I write this, my remote syslog server is logging all the events and all the data from my HA, but there is just too much noise.

Mar  8 15:13:33	192.168.X.YY {"event_type"	0d	user	notice	{"event_type":"state_changed","entity_id":"sensor.memory_free","event":{"entity_id":"sensor.memory_free","old_state":{"entity_id":"sensor.memory_free","state":"112.0","attributes":{"state_class":"measurement","unit_of_measurement":"MiB","icon":"mdi:memory","friendly_name":"Memory free"},"last_changed":"2022-03-08T13:13:18.527059+00:00","last_updated":"2022-03-08T13:13:18.527059+00:00","context":{"id":"217e078b7e4050142bd0db8e2e44e46e","parent_id":null,"user_id":null}},"new_state":{"entity_id":"sensor.memory_free","state":"112.2","attributes":{"state_class":"measurement","unit_of_measurement":"MiB","icon":"mdi:memory","friendly_name":"Memory free"},"last_changed":"2022-03-08T13:13:33.528685+00:00","last_updated":"2022-03-08T13:13:33.528685+00:00","context":{"id":"da4d10da48c9fad17502dbd3420ef4a8","parent_id":null,"user_id":null}}},"origin":"LOCAL","time_fired":"2022-03-08T13:13:33.528685+00:00","context":{"id":"da4d10da48c9fad17502dbd3420ef4a8","parent_id":null,"user_id":null}}

… for "memory free 112.0" :slight_smile:
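For what it's worth, until proper filtering exists, even a crude grep in the forwarding pipeline cuts most of that noise. A minimal sketch — the entity ids and the stand-in event stream here are just examples, adjust the pattern to whatever is spamming your log:

```shell
#!/bin/sh
# Drop state_changed events for entities we don't care about before they
# reach the syslog server. NOISY is a grep pattern of example entity ids.
NOISY='sensor\.memory_free\|sensor\.cpu_temperature'

# Stand-in for the real event stream; in practice this would be
#   <event source> | grep -v "$NOISY" | <forwarder>
printf '%s\n' \
  '{"event_type":"state_changed","entity_id":"sensor.memory_free"}' \
  '{"event_type":"state_changed","entity_id":"binary_sensor.front_door"}' \
  | grep -v "$NOISY"
# prints only the binary_sensor.front_door line
```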

I just created a Logspout add-on to be able to send logs to a remote log management system on HASS.io / HassOS. It's probably less ideal than using something like rsyslog or a specific Docker log driver, but it seems to work quite well. Feel free to try it out. Feedback is much appreciated.


Just here to upvote this and share our requirements and use case

We need to send logs via syslog to a specified remote server with a configurable destination IP, port, and ideally protocol (UDP/TCP). Syslog facility and severity parameters should also be supported.
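For reference, the plain util-linux `logger` command already covers most of that checklist for one-off messages, which is handy for testing the remote input before wiring up Home Assistant itself. A sketch (IP, port, and tag are placeholders):

```shell
# Send a test message to a remote syslog server over UDP (-d),
# with explicit facility.severity (-p) and a tag (-t).
logger -n 192.168.1.252 -P 514 -d -p local3.notice -t hass-test \
    "Konnected entry alarm test event"
# use -T instead of -d for TCP
```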

I'm in a business context in the IT department, and we want to use Home Assistant and Konnected to create entry alarms which leverage installed sensors. Business policy requires that everything log to our remote centralized SIEM so we can correlate physical and digital events.

Thanks very much.

@bertbaron will check out the logspout add-on.

Thanks very much.

Just an FYI/tip: while I haven't had a chance to figure out a nice/permanent solution, I am getting logs off my HaOS instances in real time by using

ssh -p 22222 hassio@hassvm journalctl -f | ....

For this to work, you need SSH access to the host OS, which is described in the Home Assistant developer docs under troubleshooting. (SSH access needs to be enabled by installing an authorized_keys file. There is a mechanism to do this with a USB thumb drive. I've also figured out an alternate method if you have console access to the machine/VM.)

This lets me see logging output from the HaOS kernel and user level, as well as the running containers like homeassistant and supervisor.
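If it's useful: that same pipe can feed a remote syslog server directly by putting `logger` on the right-hand side, which gets you poor-man's forwarding without systemd-journal-remote. An untested sketch — the hostnames and IP are placeholders:

```shell
# Stream the HaOS journal over the developer SSH port and re-emit
# each line to a remote syslog server over UDP (use -T for TCP).
ssh -p 22222 hassio@hassvm journalctl -f \
    | logger -n 192.168.1.252 -P 514 -t haos-journal
```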

Unfortunately, when I looked, the host OS didn't have systemd-journal-remote or journald-remote to allow forwarding logs.

Hope this helps.

hi rct,

I'm logged in via SSH, but I don't have the journalctl command (I have named-journalprint?).
Can you post more details on how this could run on a Synology server to grab the logs and put them in the syslog server? Any ideas?
tx!

@migube - Are you running the whole Home Assistant OS in a VM on Synology, or are you running just the containers?

What I described applies to HaOS with developer SSH access enabled for the HaOS host operating system on port 22222. This is NOT the SSH add-on that runs in a container. If you are getting named-journalprint when trying to run journalctl, it sounds like you are typing that into the SSH add-on's zsh.


Hi, if there is still a need to export all HAOS logs to a remote syslog server, then GitHub - mib1185/ha-addon-syslog: Syslog Home Assistant AddOn - to send your HAOS logs to a remote syslog server might be helpful.


I really liked this idea, thank you!
I just want to get rid of a problem I've been tracking for months. I don't want to set up complex centralized logging infra at home (at least not yet), so this approach works great for me.

Just in case others are in the same boat, I took this idea one step further and created a small script that I can add to crontab on another Raspberry Pi and run every X amount of time, making sure it reconnects if the connection dies for some reason (which is bound to happen if you mean to leave this running for a long time).

#!/bin/bash
#
# Script that records the Home Assistant journal for at most
# MAX_TIME seconds, trying to reconnect if it breaks for some reason

HASS_IP=<IP here>
MAX_TIME=<seconds, e.g. 3600 for an hour or 86400 for a day>

start_time=$(date +%s)

# Keep looping to restart the connection if it fails
while true; do
  echo "Starting journalctl replication"

  # Launch the process and record the PID to track how it's going
  # No subshell here, so $! is the ssh PID and kill actually stops it
  ssh root@"$HASS_IP" -p 22222 journalctl -f > "/tmp/$(date +%s).log" 2>&1 &
  PID=$!
  echo "PID now: $PID"

  # Wait for the process to finish
  while kill -0 $PID 2>/dev/null; do
    # Check if it's time to kill it
    elapsed=$(( $(date +%s) - $start_time ))
    if [ $elapsed -gt $MAX_TIME ]; then
      echo "Finishing journalctl replication"
      kill $PID
      exit 0
    fi  

    # Wait a bit to check again
    sleep 10
  done

  echo -n "Replication finished prematurely"

  # The process ended, most likely we need to start it again but just in case
  # check if it was time to end it anyway
  elapsed=$(( $(date +%s) - $start_time ))
  if [ $elapsed -gt $MAX_TIME ]; then
    echo ". Finishing due to elapsed time"
    exit 0
  fi

  echo ", restarting"
done
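If you go the crontab route, an entry like this keeps exactly one recorder running at a time when MAX_TIME matches the interval. The path and schedule are just examples:

```
# m h dom mon dow  command  (hourly, with MAX_TIME=3600 in the script)
0 * * * * /home/pi/ha-journal-replicate.sh >> /var/log/ha-replicate.log 2>&1
```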

HTH someone