Google Coral USB + Frigate + PROXMOX

2022-09-09 - v3 Edit: Updated to reflect final working LXC->Docker->Frigate approach. Added notes on frigate config, camera streams and frigate storage.

Background:
I had a working setup on ESXi, but alas, no PCIe slot and thus no way to pass through the USB Google Coral in such a way that the VM would recognize it.
(See this thread here for more on that struggle…)
It was suggested that it could be done via Proxmox, so after looking at many different threads, I pieced together the set of instructions below. I have been running this for a few months now and it seems reasonably stable, with a Frigate inference speed of 8-10 ms.

Known Issues:

  • After a reboot, the Coral sometimes changes from one USB port or bus to another, which requires editing the configuration file for the frigate LXC and then restarting it.
  • The docker LXC (running frigate) is privileged. Others have posted improvements to get the LXC to run as unprivileged, but I haven't tried this yet. I will update the instructions if/when I get it working.

Part 1 - Installing Proxmox 7.1

  1. Download Proxmox 7.0 ISO

  2. Download USB Imager

  3. Format the USB stick (Very important)
    If you forget to do this, you can use these steps to clean the USB drive.

  4. Install Proxmox
    Don't choose ZFS unless you have 32 GB+ of RAM. Otherwise just accept the defaults.
    The licensing pop-up is normal.

  5. Apply Updates
    Go to updates, refresh, then click upgrade.
    It's normal for some to fail, as you don't have the enterprise license.
    Leave the update window open and answer any questions that pop up.

Part 2 - Installing Home Assistant

  1. Watch this for reference:
    https://www.youtube.com/watch?v=9sKpOODLJHs

  2. Create new VM
    Under OS, choose “Do not use any media”
    Under System, for BIOS choose “OVMF (UEFI)” and for local storage choose “local-lvm”. Also ensure you uncheck “pre-enroll keys”.
    Accept defaults for everything else (unless you want to mess with # of CPUs or memory etc).

  3. Go to the hardware tab, find the “Hard Disk (scsi0)”, then click “Detach”, then “Remove”.

  4. Get the URL for the KVM/Proxmox image here:
    Alternative - Home Assistant
    Copy the download URL for the .qcow2 file.

  5. Open the command prompt for Proxmox (not for the VM itself). Run the command below to fetch the Home Assistant .qcow2 file.

    wget TheQCOW2DownloadURL

  6. Decompress the file (From the same command line):

    xz -d -v yourfilename.qcow2.xz
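If you want to rehearse the xz step before touching the real image, it can be demonstrated end-to-end on a throwaway file (the demo filename below is made up; substitute your real haos .qcow2.xz on the Proxmox host):

```shell
#!/bin/sh
# Demo of the decompress step on a throwaway file. On the real host you would
# only run the xz -d -v line against the downloaded haos_ova-*.qcow2.xz file.
cd /tmp
printf 'not a real disk image' > haos_demo.qcow2
xz -z haos_demo.qcow2          # compress, producing haos_demo.qcow2.xz
xz -d -v haos_demo.qcow2.xz    # -d decompress, -v verbose; restores haos_demo.qcow2
ls -l haos_demo.qcow2
```

Note that xz removes the .xz file after decompressing, leaving only the .qcow2.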

  7. Import the file into the VM (From the same command line):

    qm importdisk 104 ./haos_ova-7.1.qcow2 local-lvm --format qcow2

    *Substitute your own VM # and file name in the command

  8. From ProxMox UI, go back into VM to attach this new disk to your VM
    Double click on the unused disk
    Check SSD emulation, and add the disk

  9. Update VM boot options
    Move the SCSI drive to first in the boot order and enable it

  10. Start VM

  11. Open VM CLI, note the IP address displayed

Part 3 - Configure Home Assistant

  1. After you log in, go to profile (bottom left), then enable advanced mode
  2. Install file editor from add on store
  3. Install mosquitto mqtt broker
  4. Open the users area and create a user/password entry for frigate to use. (Frigate will use it to connect to Mosquitto MQTT)

Part 4 - Install Docker in LXC Container

  1. Watch this for reference:
    https://www.youtube.com/watch?v=gXuLiglJceY

  2. Download Turnkey-core template

    1. From console, run “pveam update” to refresh the list of templates
    2. Go to proxmox storage (“local (pve)”) and go to “CT Templates”
    3. Search for “Core”, select “turnkey-core” and hit “download”
      I used 16.1.1 (“Debian-10-turnkey-core_16.1-1_amd64.tar.gz”)
    4. Click “CT Templates” again and verify it shows up in your list
  3. Create LXC container

    1. Click “Create CT”
    2. General tab
      1. Hostname: “docker”
      2. Set a password and note it for later use
      3. Uncheck “unprivileged container” (make it privileged)
        (I think the container needs to be privileged for the USB to pass through correctly. Someone correct me if I'm wrong here)
    3. Template tab
      1. Storage: local
      2. Template: choose the turnkey-core template
    4. Root Disk tab
      1. 32 GB (or whatever you want)
    5. CPU tab
      1. Cores: 1 (or whatever you want)
    6. Memory tab:
      1. Memory: 2125 (or whatever you want)
    7. Network tab:
      1. IPv4: DHCP
    8. Accept all other defaults
    9. Click finish to initialize
  4. Configure the new Container

  5. Options->Features

    1. Enable “keyctl”
    2. Enable “Nesting”
  6. Install Debian Turnkey Core
    1. Start the container
    2. Open console for the container, and login with root + password noted earlier
    3. Skip first 2 prompts, then click “Install” to install security updates
    4. Once finished, CTRL+C to get out
    5. Update & upgrade debian
    1. >apt update
    2. >apt upgrade

  7. Install Docker in the LXC Container

    1. Review this for reference: Install Docker Engine on Debian | Docker Docs

    2. And review this: Setup and Install Dock... | The Homelab Wiki

    3. Run the commands below (steps 1, 2 and 3 from the “Set up the repository” section here: Install Docker Engine on Debian | Docker Docs)
      (Remove “sudo” from the commands since you are already logged in as root)

      1. Step 1:
        apt-get install \
        ca-certificates \
        curl \
        gnupg \
        lsb-release
        
      2. Step 2
        curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
        
      3. Step 3:
        echo \
        "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
        $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
        
  8. Install the docker engine:
    apt-get update && apt-get install docker-ce docker-ce-cli containerd.io

  9. Verify Docker is running:
    systemctl status docker

  10. Install Portainer (to simplify management of docker)

    1. Review this page here: Installing Docker and ... | The Homelab Wiki
    2. Run this code to install portainer on port 9000 and 8000
      docker run -d \
      --name="portainer" \
      --restart on-failure \
      -p 9000:9000 \
      -p 8000:8000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest
      
    3. Confirm container’s IP
      ip addr
    4. Open your browser and log in to portainer:
      http://yourcontaineripaddress:9000/
    5. Log in as “admin” and set the password

Part 5 - Installing Frigate in Docker using Portainer

  1. Review this for reference:
    https://www.youtube.com/watch?v=qfchptEzqdo&t=727s

  2. In the docker LXC, go to the /home folder, create a frigate folder, and create a config.yml file (/home/frigate/config.yml)

    vi config.yml
    (press i to insert, o to open a new line, Esc then :q! to quit without saving, :wq! to write and quit)

    Create your frigate configuration (you're on your own here)

    detectors:
      cpu1:
        type: cpu
    
  3. In portainer, create a new stack called “homeautomation”:

#######################FRIGATE
frigate:
  container_name: frigate
  image: blakeblackshear/frigate:stable-amd64
  restart: always
  devices:
    - /dev/bus/usb:/dev/bus/usb
  volumes:
    - /etc/localtime:/etc/localtime
    - /home/frigate/config.yml:/config/config.yml:ro
  ports:
    - 5000:5000
    - 1935:1935
  environment:
    FRIGATE_RTSP_PASSWORD: "topsecretfrigatepassword"
  4. Start the container and confirm you can access frigate on port 5000
    http://yourcontaineripaddress:5000/
  5. Confirm frigate is working using basic CPU detection

Part 6 - Configure LXC to pass through the Coral

  1. In the host, verify which USB bus the coral is on. Run “lsusb” to confirm.
    You will need to pass the entire bus through. For me it was bus 002.
    Also note whether it lists “2.0 root hub” or “3.0 root hub”. You want the Coral plugged into a USB 3.0 root hub for the best inference speed.
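A quick way to pull the bus number out of lsusb output is to grep for the Coral's two USB IDs (it shows up as 1a6e:089a before the EdgeTPU runtime first loads, and 18d1:9302 after). This helper is a hypothetical sketch, demonstrated against captured sample output rather than a live lsusb:

```shell
#!/bin/sh
# Hypothetical helper: print the 3-digit bus number of the Coral from
# lsusb-style lines on stdin. 1a6e:089a = Coral before first inference,
# 18d1:9302 = Coral after the EdgeTPU runtime has loaded.
coral_bus() {
  grep -E '1a6e:089a|18d1:9302' | awk '{print $2}' | head -n 1
}

# Sample lsusb output for demonstration; on the host you would run:
#   lsusb | coral_bus
sample='Bus 002 Device 003: ID 18d1:9302 Google Inc.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub'
echo "$sample" | coral_bus    # prints 002
```

The printed number is what goes in the lxc.mount.entry line later (e.g. /dev/bus/usb/002).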

  2. In the host, navigate to /etc/pve/lxc and edit the config file for your LXC #

    vi 101.conf

    Add the highlighted bit below to your configuration:
    (everything below the “swap: 512”)

    arch: amd64
    cores: 1
    features: keyctl=1,nesting=1
    hostname: docker
    memory: 2125
    net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=22:FA:5D:3E:BD:D9,ip=dhcp,type=veth
    ostype: debian
    rootfs: local-lvm:vm-101-disk-1,size=32G
    swap: 512
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.cgroup2.devices.allow: c 29:0 rwm
    lxc.cgroup2.devices.allow: c 189:* rwm
    lxc.apparmor.profile: unconfined
    lxc.cgroup2.devices.allow: a
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
    lxc.mount.entry: /dev/bus/usb/002 dev/bus/usb/002 none bind,optional,create=dir 0 0
    lxc.cap.drop:
    lxc.mount.auto: cgroup:rw
    

    Note I used cgroup2. You may need to use cgroup.
    (Not sure how to tell which control group is required)
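One way to tell which control group the host uses is to check what is mounted at /sys/fs/cgroup. This is a sketch; for what it's worth, Proxmox 7.x defaults to the unified cgroup v2 hierarchy:

```shell
#!/bin/sh
# Report which cgroup hierarchy the host uses, so you know whether the
# lxc.cgroup.devices.allow or lxc.cgroup2.devices.allow keys apply.
fstype=$(stat -fc %T /sys/fs/cgroup)
case "$fstype" in
  cgroup2fs) echo "cgroup v2 (unified) -> use lxc.cgroup2.devices.allow" ;;
  tmpfs)     echo "cgroup v1 (legacy)  -> use lxc.cgroup.devices.allow" ;;
  *)         echo "unrecognised filesystem type: $fstype" ;;
esac
```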

  3. If the container won't start, reboot the host (not the guest)

  4. Ensure you created the container earlier as a privileged container (uncheck “unprivileged”)

    1. If you made a mistake, you can fix it by doing a backup then restore of the LXC
  5. Log in to the LXC and confirm that the Coral USB is passed through
    (No point in going any further if it isn't detected!)

    1. apt-get install usbutils

    2. lsusb

  6. Update frigate's config.yml to switch to the Coral

    detectors:
      coral:
        type: edgetpu
        device: usb
    
  7. Check the log file (using portainer) for frigate and confirm it's working

Part 7 - Take a snapshot!

  1. This is a good time to take a snapshot within ProxMox.
    Go to the Home Assistant VM, click snapshots and take a snapshot.

Part 8 - Install Home assistant Frigate add on and integration

  1. From add on store, add this repository:
    https://github.com/blakeblackshear/frigate-hass-addons

  2. Add the “Frigate NVR Proxy” add-on

  3. Configure the add on (Enter the IP address and port of frigate)
    eg 192.168.1.210:5000

  4. Set to show in sidebar

  5. Start Frigate

  6. Confirm you can see frigate in the side bar and it works

  7. Celebrate

Appendix A - Notes on Frigate and Camera Setup

It's mentioned in the frigate documentation that Frigate doesn't need a high-res stream to perform detection, and there are diminishing returns when passing it a higher resolution stream or frame rate.
If your camera supports multiple streams, what I have found works well is to:

  • Create two RTSP streams on the camera
    • 1 high resolution & high frame rate (for NVR recording)
    • 1 low resolution & low frame rate (for detection, and for displaying in home assistant)
  • In home assistant, use the “generic camera” integration to view the low-res stream directly in home assistant (e.g., for Hikvision, the same substream URL used for detect in the example below)
  • Configure 2 inputs in frigate (one for record, one for detect)
  • Disable RTMP in frigate (There's no point in re-streaming the feed to home assistant; there will be less lag if home assistant pulls directly from the camera).
  • Frigate Config Example:
rtmp:
  enabled: false

cameras:
  driveway_cam2:
    ffmpeg:
      inputs:
        - path: rtsp://frigateuser:[email protected]:554/Streaming/Channels/101
          roles:
            - record
        - path: rtsp://frigateuser:[email protected]:554/Streaming/Channels/102
          roles:
            - detect
      output_args:
        detect: -f rawvideo -pix_fmt yuv420p
        record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -c:a aac
    detect:

Appendix B - Camera Selection

FWIW. I started out with the Unifi G3 Flex camera, and upgraded to the HikVision DS-2CD2347G2-LU.
I am extremely happy with this camera’s night vision performance. It truly does provide full color video under very low light conditions.


Appendix C - Frigate Recording Storage

In terms of storage for frigate camera recordings etc, I have found it useful to pass through a ZFS mount point. This way the recordings can be easily blown away without affecting frigate, and vice versa. Also it keeps the LXC image smaller, which simplifies snapshots, backups and restores.
Unfortunately I lost my notes on the setup, but I think it was roughly along the lines of the below:

  1. From proxmox “DataCenter”, Go to storage, and create a ZFS filesystem on a free drive.
  2. From proxmox docker LXC, Go to resources => Add => Mount Point. For “Storage”, choose your zfs file system, and enter a size in the “Disk Size” field that is smaller than the remaining available space. For the path, enter something like “/mnt/frigatedata”
  3. In portainer, ensure the mount is passed through to the container running frigate by editing the stack to include this: “/mnt/frigatedata:/media/frigate”. So effectively Proxmox is passing the ZFS partition through to the LXC as “/mnt/frigatedata”, then docker is passing it through to the container as “/media/frigate” (which is the default location used by frigate to store recordings).
#######################FRIGATE
  frigate:
    container_name: frigate
    image: blakeblackshear/frigate:stable-amd64
    restart: always
    devices:
      - /dev/bus/usb:/dev/bus/usb
    volumes:
      - /etc/localtime:/etc/localtime
      - /home/frigate/config.yml:/config/config.yml:ro
      - /mnt/frigatedata:/media/frigate
      - /run:/tmp/cache
      
    ports:
      - 5000:5000
      - 1935:1935
    environment:
      FRIGATE_RTSP_PASSWORD: "topsecretfrigatepassword"

Appendix D - Results

So far I have caught 1 night walker with this setup. The below happened at 2 am but with the hikvision it almost looks like daytime. The night-walker was detected just before he got close to the cars and immediately persuaded to take his shenanigans elsewhere.
DIY Home Security vs Night Crawler 2022-04-24 - YouTube

24 Likes

That's certainly one way to install Home Assistant OS on Proxmox, but it leaves room for error. A much easier/safer way is to run a single-line script (Home Assistant OS VM) from Proxmox Helper Scripts | Proxmox Scripts For Home Automation. Just a suggestion :wink:

6 Likes

Thanks! Looks like my inference speed is still poor. It's back to 153 :frowning_face:

If I try the link you sent and run it as an LXC, do you think that would make a difference?

Don’t know. I prefer hardware NVR

Running Frigate in Docker in Proxmox LXC with remote nas Share (cifs) · Discussion #1111 · blakeblackshear/frigate · GitHub

Installation on a virtual machine within Proxmox · Discussion #1837 · blakeblackshear/frigate · GitHub

Hope I'm not speaking too soon again… but I think I finally got it going by running frigate inside of docker, inside of an LXC.
So:

  • PROXMOX
    • VM (HassOS Docker Image from earlier as usual)
      • Docker
        • Home Assistant
    • LXC (Debian 10 Turnkey from this article here)
      • Docker
        • Portainer
        • Frigate

The LXC config I'm using is below.
Note that I'm using control group 2 (cgroup2 instead of cgroup). Not sure how to tell which is required, but it seems some people had to use cgroup and not cgroup2.
Also, I had to use lsusb to confirm that bus 002 was the one my Coral was on.

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
lxc.mount.entry: /dev/bus/usb/002 dev/bus/usb/002 none bind,optional,create=dir 0 0
lxc.cap.drop:
lxc.mount.auto: cgroup:rw

Inference speed is 45 ms with the Coral, but I'm hoping that's just because it's a USB 2.0 port on my dev environment…

I will post the full revised steps if it looks like it's stable…

@importfanatik It is because of USB 2; the Coral is data hungry. With USB 3 + docker in an LXC on PVE 7.1 I get a near-native speed of 8-10 ms. I tried USB 3 passthrough + VM, but the port kept resetting, and when it worked the inference speed was around 50 ms.

Docker in an LXC is the way to go for this. I've been using the same config you posted for about 3 months with no trouble.

1 Like

Same on my side: setting it up through a VM in Proxmox, the TPU gets stuck at some point and tries to restart.
I am wondering if I should move to full LXC with docker, but I feel like I need a full-fledged VM, even if right now I don't foresee strong reasons to support this.

How did you get frigate to show up in Home assistant? (That is my next hurdle.) Is it the “Frigate NVR” add-on, or the “Frigate NVR Proxy”? Could you share the basic steps?

If you run ha in docker then just install from HACS → Explore & Add Repositories → Frigate, CTRL+R to reload the page (otherwise newly installed integration won’t show in the list), Configuration → Integrations → Add Integration → Frigate and feed it http://[address]:[port] of the lxc container.

If you use ha os I think you need the frigate proxy, but I don’t know more about that.

I also highly suggest GitHub - dermotduffy/frigate-hass-card: A Lovelace card for Frigate in Home Assistant.

2 Likes

What's the easiest way to run frigate? I am getting errors trying to run it right now; I have HA running on my Proxmox server in an LXC. When I run frigate on my RPi it runs with no errors, but my RPi is way slower than my old PC running Proxmox.

I have updated my original post above with what I did to get it going (PROXMOX->LXC->Docker->Frigate).

I will update it again when I get the HASS add-on and integration going, and the custom card that was suggested.

Take a look here
https://github.com/tteck/Proxmox/issues/22

1 Like

Perhaps it's HW related, but I also noticed that if I pass through any other USB 3 device to this or any other VM and actually try to saturate the connection (i.e., copy a few GB of files to a USB 3 SSD enclosure), it causes the TPU port to keep resetting, with xhci errors in dmesg and TPU errors in the frigate log, the same as when I was running it in a VM instead of an LXC.

I'm using a NUC with a Xeon E-2176M, so maybe the small form factor makes some sacrifices and the USB 3 ports share capacity instead of each being able to saturate. Maybe worth mentioning as a possible troubleshooting step: try running the TPU alone first.

Updated my post with the final frigate NVR proxy step.

Anybody know where I can get my hands on a Google Coral in the US?

Hello. I have a problem. Everything works OK, but if I do a reboot, Proxmox assigns a different bus and it won't connect. What should I do?

You may have already seen this, but if you hit BUY on coral.ai it will show you the official distributors… alas, last I checked, they were all out.

Any other reseller is likely to price gouge you…

I just ran into the same issue today.

You need to go back into your LXC config and adjust the bus number.
From the host console:

  1. type “lsusb” to see which bus is assigned
  2. update your conf file (eg. /etc/pve/lxc/103.conf) and change “/dev/bus/usb/002 dev/bus/usb/002” to “/dev/bus/usb/003 dev/bus/usb/003” (or whatever the new bus is).
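Those two steps can be sketched as a small helper script (hypothetical; the conf path, container ID, and new bus number are assumptions, and the demo below operates on a sample file in /tmp rather than the live config — on a real host you would point it at /etc/pve/lxc/103.conf before starting the container):

```shell
#!/bin/sh
# Hypothetical sketch: point the lxc.mount.entry at whatever bus the Coral
# is currently on. CONF path and NEW_BUS below are placeholder assumptions.
CONF=/tmp/103.conf     # real host: /etc/pve/lxc/103.conf
NEW_BUS=003            # real host: derive this from lsusb

# Demo conf content standing in for the real file:
printf 'lxc.mount.entry: /dev/bus/usb/002 dev/bus/usb/002 none bind,optional,create=dir 0 0\n' > "$CONF"

# Rewrite every /dev/bus/usb/NNN reference (source and target) to the new bus:
sed -i -E "s#(/?dev/bus/usb/)[0-9]{3}#\1${NEW_BUS}#g" "$CONF"
cat "$CONF"
```

After patching the file, restart the container for the new mount entry to take effect.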

Does anyone know an easier fix for this?

1 Like

Here is how I have done this in an unprivileged container in Proxmox.

Create an unprivileged LXC container (Ubuntu in my case), install docker, frigate, etc.

Assumptions:

  1. LXC container uses default user/group mapping
  2. Container ID is 200
  3. Coral USB sits within /dev/bus/usb/003

Create a convenience name for the container’s root group (100000)

proxmox$ groupadd -g 100000 lxc-frigate-root

Add these lines in the LXC config file /etc/pve/lxc/200.conf

usb0: host=1a6e:089a,usb3=1 # coral ID pre-load
usb1: host=18d1:9302,usb3=1 # coral ID post-load
lxc.cgroup2.devices.allow: c 189:* rwm # usb coral
lxc.mount.entry: /dev/bus/usb/003 dev/bus/usb/003 none bind,optional,create=dir

Add this to /etc/udev/rules.d/60-mycoraltpu.rules

SUBSYSTEMS=="usb", ATTRS{idVendor}=="18d1", ATTRS{idProduct}=="9302", GROUP="lxc-frigate-root"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="1a6e", ATTRS{idProduct}=="089a", GROUP="lxc-frigate-root"
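If you're following along, something like the below writes that rule file and applies it without a reboot (a sketch: it uses /tmp for illustration, the real file belongs in /etc/udev/rules.d, and the lxc-frigate-root group must already exist from the groupadd above):

```shell
#!/bin/sh
# Sketch: write the two Coral udev rules, then (on a real host) reload udev.
# /tmp is used here for illustration only.
RULES=/tmp/60-mycoraltpu.rules
cat > "$RULES" <<'EOF'
SUBSYSTEMS=="usb", ATTRS{idVendor}=="18d1", ATTRS{idProduct}=="9302", GROUP="lxc-frigate-root"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="1a6e", ATTRS{idProduct}=="089a", GROUP="lxc-frigate-root"
EOF

# On the real host, apply without rebooting (then re-plug or re-trigger the device):
#   udevadm control --reload-rules && udevadm trigger
grep -c 'lxc-frigate-root' "$RULES"   # prints 2
```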

And it works!

Explanation: the udev rule recognises the Coral USB and assigns it to group 100000 on the Proxmox host. Group 100000 is mapped to the root group of the unprivileged container. Doing this allows the LXC root group to read/write the Coral USB on the Proxmox host.

In my system, the Coral is sometimes assigned to bus 002 rather than 003, so I added an additional line in the 200.conf file:

# usb0: host=1a6e:089a,usb3=1 # coral ID pre-load (this entry not needed)
# usb1: host=18d1:9302,usb3=1 # coral ID post-load (this entry not needed)
lxc.cgroup2.devices.allow: c 189:* rwm # usb coral
lxc.mount.entry: /dev/bus/usb/003 dev/bus/usb/003 none bind,optional,create=dir
lxc.mount.entry: /dev/bus/usb/002 dev/bus/usb/002 none bind,optional,create=dir

UPDATE: perhaps it would be more ‘correct’ to assign the Coral USB to the plugdev group and make the LXC root user (100000) a member of plugdev. On the other hand, the Coral USB is not going to be ‘shared’ amongst other VMs/containers, so it probably doesn't matter. Maybe not…

11 Likes

I cannot get this working for the life of me. Current LXC config:

arch: amd64
cores: 3
features: keyctl=1,nesting=1
hostname: docker
memory: 2125
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=6A:97:28:DE:AF:CE,ip=dhcp,type=veth
onboot: 1
ostype: debian
parent: ha20220814
rootfs: local-lvm:vm-101-disk-0,size=8G
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
lxc.mount.entry: /dev/bus/usb/001 dev/bus/usb/001 none bind,optional,create=dir 0 0
lxc.cap.drop:
lxc.mount.auto: cgroup:rw

lsusb on proxmox host

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 007: ID 8087:0aaa Intel Corp. Bluetooth 9460/9560 Jefferson Peak (JfP)
Bus 001 Device 002: ID 18d1:9302 Google Inc. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

I'm able to successfully run the test model on the Proxmox host, but not in the LXC (docker container is running). Still getting the following error. Any advice on how I might be able to get this resolved? I have tried all the posts I can find, but still nothing is helping. Any help appreciated : )

ValueError: Failed to load delegate from libedgetpu.so.1.0