Frigate hardware-accelerated object detection on Arm devices

I’m currently running Home Assistant Container on an Odroid N2, with Frigate doing object detection for a couple of WiFi cameras. Frigate works okay for a single camera on the CPU, but I’m interested in configuring hardware acceleration, or in running Frigate on a dedicated machine that has it. I have a spare Odroid XU4 (HC1) that I think would be perfect for a dedicated Frigate NVR.

Has anyone set up Frigate with hardware acceleration on an Odroid N2 or XU4? If so, do you have any advice on the setup (e.g., which OS image to use for the XU4, which GPU drivers to install, and how to configure it in the Frigate config)?

Thanks for your advice! If this is the wrong forum, please let me know. Since Frigate integrates so well with Home Assistant, I thought this would be a good place to ask.

I set up my HC1 as a dedicated Frigate box and it works well. Object classification still runs on the CPU, but video decoding is hardware accelerated. Setup details are below in case they’re helpful for others.

You’ll need:

  • Odroid HC1 (but HC2 or XU4 would work too)
  • Power supply and Ethernet cable
  • 8-16 GB microSD card
  • 2.5" SSD

Instructions:

  1. Download and flash the latest XU4 Ubuntu minimal image from the Odroid wiki to the SD card.
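
If you’re flashing from another Linux machine, here’s a minimal sketch, assuming an .img.xz download (the filename is a placeholder; grab the current one from the wiki and double-check the target device before writing):

# Replace <image-name> and /dev/sdX with your image file and SD card device
xz -dc <image-name>.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync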

  2. Follow the Odroid installation instructions to run from SSD. Note: I did not install linux-image-xu3. An 8-16 GB root partition on the SSD is sufficient.
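
After rebooting onto the SSD, it’s worth confirming that root actually moved off the SD card:

# Root (/) should be on the SSD (e.g., /dev/sda1), not the SD card (/dev/mmcblk*)
findmnt /
lsblk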

  3. Set the timezone and locale. An example for the US is below.

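# Set the timezone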
sudo dpkg-reconfigure tzdata
# Set locale
sudo locale-gen "en_US.UTF-8"
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
sudo dpkg-reconfigure locales
# Select "en_US.UTF-8" for all of the steps.
  4. Install Docker and Docker Compose: sudo apt-get install docker.io docker-compose. (On Ubuntu, the Docker engine package is docker.io; the plain docker package is unrelated.)

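Before moving on, a quick sanity check that Docker is working, plus an optional group change so you can run docker without sudo:

# Optional: run docker without sudo (log out and back in to take effect)
sudo usermod -aG docker $USER
# Confirm both installs
docker --version
docker-compose --version
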
  5. Run Frigate using docker-compose up -d. Example docker-compose.yml and Frigate config.yml files are below. Replace any <> text with your specifics.

  6. Set up Frigate and use the UI to configure masks. Motion masks greatly reduce CPU and power usage by keeping irrelevant motion, such as the video timestamp overlay, from triggering object classification. For my setup, I found object masks unnecessary. A sketch of what a mask entry looks like follows.
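
For reference, a motion mask in config.yml is a polygon of comma-separated x,y pixel coordinates. A hypothetical example masking a timestamp overlay along the top of a 1280-wide frame (your coordinates will differ; the Frigate UI generates them for you):

motion:
  mask:
    # Mask the top 40 pixels where the camera burns in its timestamp
    - 0,0,1280,0,1280,40,0,40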

docker-compose.yml example

version: "3.6"
services:
  frigate:
    container_name: frigate
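    # privileged exposes host devices (e.g., /dev/video*) needed for hwaccel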
    privileged: true
    restart: unless-stopped
    image: blakeblackshear/frigate:stable-armv7
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - <path-to-data>/frigate/config.yml:/config/config.yml:ro
      - <path-to-data>/frigate/media:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
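          # ~1 GB RAM-backed cache for temporary recording segments (spares the SD/SSD)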
          size: 1000000000
    ports:
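      # 5000: web UI and API; 1935: RTMP restream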
      - "5000:5000"
      - "1935:1935"
    environment:
      FRIGATE_RTSP_PASSWORD: "<password>"
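
Once the container is up, you can sanity-check that its ffmpeg build includes the V4L2 M2M decoder (assuming the container name frigate from above):

docker exec frigate ffmpeg -decoders 2>/dev/null | grep v4l2m2m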

config.yml example. Save this file to the <path-to-data>/frigate/config.yml location referenced in docker-compose.yml.

mqtt:
  host: <mqtt-server-address>
cameras:
  <camera-name-here>:
    ffmpeg:
      inputs:
        - path: rtsp://<camera-parameters>
          roles:
            - detect
            - rtmp
            - record
            - clips
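      # h264_v4l2m2m does hardware H.264 decoding through the kernel's V4L2
      # mem2mem interface (backed by the Exynos MFC codec block on the XU4)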
      hwaccel_args:
        - -c:v
        - h264_v4l2m2m
    width: <camera-width>
    height: <camera-height>
    fps: <camera-fps>
    record:
      enabled: True
      retain_days: 7
    clips:
      enabled: True
    snapshots:
      enabled: True
    motion:
      mask:
        - <put-mask-parameters-here>
detectors:
  cpu1:
    type: cpu
objects:
  track:
    - person
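
Once everything is running, the web UI is at http://<frigate-host>:5000. To confirm the hardware decode is paying off, compare CPU load with and without the hwaccel_args; a quick way to watch it:

docker stats frigate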