Local realtime person detection for RTSP cameras

Before testing it I literally plugged it in and restarted frigate without any changes. Even tried restarting and unplugging and replugging it.

How can I check it’s not being used by something else?

Thanks

How are you running frigate? Can you post your docker command or compose file? Take a look at this comment as well: https://github.com/blakeblackshear/frigate/issues/132#issuecomment-626912694

I’m on Ubuntu 18.04 LTS on an Intel NUC i5. Docker command is:

sudo docker run --rm --shm-size=1g -v /usr/share/hassio/homeassistant/custom_components/frigate:/config:ro -v /etc/localtime:/etc/localtime:ro -p 5001:5001 blakeblackshear/frigate:stable

When I run docker exec -t frigate find /dev/bus I get

find: '/dev/bus': No such file or directory

Cheers

Please take a look at the docker run command in the readme. You are missing --privileged and -v /dev/bus/usb:/dev/bus/usb. These are required for the container to access the Coral.
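With those added, the run command from above would look roughly like this (same paths and port as in your command):

sudo docker run --rm --privileged --shm-size=1g -v /dev/bus/usb:/dev/bus/usb -v /usr/share/hassio/homeassistant/custom_components/frigate:/config:ro -v /etc/localtime:/etc/localtime:ro -p 5001:5001 blakeblackshear/frigate:stable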

Of course, how silly of me!

Thank you, working perfectly now.

Docker compose is directly compatible with portainer. In portainer, a docker compose is known as a “stack”, but you just paste your docker compose in there and off you go. Portainer is just a web front end to the standard docker functionality. Nothing different really.
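As a rough sketch (the service name and image tag here are just examples), a stack is literally the same YAML you would feed to docker-compose, pasted straight into the stack editor:

version: "3"
services:
  frigate:
    container_name: frigate
    image: blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "1g"
    ports:
      - "5000:5000"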

Ok thanks for explaining it. I’ll follow the guide and give it another try. Just as well, because I was about to return the Coral stick. I tried using it with DOODS/Deepstack but the models you need to use with it are just not as accurate as some others. Sure it’s lightning fast, but what’s the point when it recognises one car as 98% and another car in my driveway as 58% - and sometimes doesn’t recognise it at all!

It’s true that the mobile models are not as good as some of the dedicated CPU/GPU ones, but there’s a big difference in cost to get those going (theoretically I think Frigate would allow you to do this now, as CPU capability was reintroduced and I think GPU support was just added too).

I’ve found mobilenet is good enough when set up correctly that I only have ~1% false positives and it’s pretty close to impossible to be a person in camera range and not get picked up. You certainly wouldn’t be able to get close enough to do any damage without me being alerted… But I do get the occasional false positive.

For counting cars in a driveway I think you will have a hard time finding anything that would work well enough to be reliable. Even the better CPU models would sometimes lose tracking of a car or have the confidence jumping around.

The Inception model worked perfectly with DOODS but is just a tad slow to process. But I don’t think the implementation is as good as the live analysis Frigate does - so hopefully I can manage to get it up and running!

Yeah, I think I am going to give up on this, sadly. Trying to create the stack with the docker compose file from this guide produces the error below. I think it’s a privileges thing because I’m on Hassio. I suppose I either need to wait until an addon is made or until someone produces a beginner’s step-by-step guide on how to install this on an RPi 4 using Hassio.

Deployment error

Error response from daemon: error while creating mount source path '/code/02-Frigate/config': mkdir /code: read-only file system

If you are running under Portainer, the hass.io supervisor isn’t in the picture at all, so permissions are not an issue. Your error likely means that the host directory you’re mapping to /config doesn’t exist and can’t be created - you need to point it at an appropriate writable directory outside the container. Why don’t you paste your docker compose here?
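For example, something along these lines for the config volume (the host path here is just a placeholder - use any directory that actually exists and is writable on your box):

    volumes:
      - /some/writable/path/frigate/config:/config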

Are you using raspbian or just a plain hass.io installation with HassOS?

I’m using Hassio (HassOS). The docker compose file I tried to use was from the link above. I pasted it into the web editor of the stack menu. (It wouldn’t accept it until I moved rows 2-4 one column to the left, but I’ve pasted the original code from the link.)

EDIT: I don’t think I actually replied directly to you @scstraus

frigate:
    container_name: frigate
    restart: unless-stopped
    privileged: true
   shm_size: '2g' # should work for 5-7 cameras
   image: kpine/frigate-raspberrypi:latest
   volumes:
     - /dev/bus/usb:/dev/bus/usb
     - /etc/localtime:/etc/localtime:ro
     - /code/02-Frigate/config:/config
   ports:
     - "5000:5000"

Hi all,

I think this is a great project and I’ve been running this relatively successfully with 4 cameras without a Coral device (using CPU mode).
I decided to purchase an M.2 PCIe Coral device and have just connected it tonight.
I followed the instructions to install the driver as per: https://coral.ai/docs/m2/get-started#1-install-the-pcie-driver (I’m running Debian 10).
It appears Frigate is trying to use the device as I no longer see the “Falling back to CPU” message in the logs.
The docker container appears to start up successfully, loading my cameras, with the exception of one error message:

frigate    | W :122] Could not set performance expectation : 16 (Inappropriate ioctl for device)

After a period of time I see the following error messages repeating:

frigate    | E :237] HIB Error. hib_error_status = 0000000000002200, hib_first_error_status = 0000000000000200
frigate    | E :237] HIB Error. hib_error_status = 0000000000002200, hib_first_error_status = 0000000000000200
frigate    | Detection appears to be stuck. Restarting detection process
frigate    | Waiting for detection process to exit gracefully...
frigate    | Starting detection process: 181
frigate    | W :122] Could not set performance expectation : 38 (Inappropriate ioctl for device)

It doesn’t appear that any detection is occurring due to the errors above.
Has anyone seen these before?
Also, I am not sure of the correct syntax to map the PCIe version of the Coral in Docker. Without any additional entries beyond what I had when running in CPU mode, Frigate appears to start up and not use CPU mode (however I do get the errors above).
Any advice would be great.
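From what I can tell the gasket driver exposes the card as /dev/apex_0, so my guess (unverified) is that the mapping would be something like --device /dev/apex_0:/dev/apex_0 on the docker run command, or in compose:

    devices:
      - /dev/apex_0:/dev/apex_0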

P.S. Is there a way to force Frigate to use CPU mode? It’s a little difficult to remove the PCIe card each time to test things.

Thanks,

Adrian.

Re my issue above: as I get a similar set of errors (the HIB Error. hib_error_status...) trying to run the demo code on the Coral website (the one that identifies the parrot), I’m assuming it’s either driver or hardware related.
Just trying to find another PC to test in…

UPDATE: this issue was due to me not compiling the gasket driver successfully (I didn’t have the kernel headers installed).
Now I have the M.2 version of the Coral running through an M.2-to-PCIe adapter and it’s working great. It runs at around 6-7 ms inference speed with 4x 1440p/5fps cameras.
Thank you Blake for a great project, I’ll be donating soon…
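For anyone hitting the same thing, the rough sequence that sorted it for me on Debian 10 was along these lines (essentially the steps from the Coral get-started page linked above, with the kernel headers installed first):

sudo apt-get install linux-headers-$(uname -r)
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gasket-dkms libedgetpu1-std
sudo reboot
ls /dev/apex_0   # should exist after the reboot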

Hey,

I’d like to give this a go. But just wanted some advice on the best way to set it up.

Currently I have hass running in a docker container inside an Ubuntu VM on Windows Server using Hyper-V, with a Xeon E3-1226 v3.

I’m open to getting the Coral stick, but the problem is Hyper-V doesn’t allow USB passthrough. So I have a couple of options: I either get a Raspberry Pi, install usbip on it and route the USB to my Hyper-V instance and then push it through to the docker container, or I run the Frigate docker image directly on a dedicated Raspberry Pi 4 with the Coral stick. Is the Pi powerful enough for this task? Will I be able to get by on my server without a Coral stick?
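If I went the usbip route, I gather the flow would be roughly the following (the 1-1 bus ID is just a placeholder I’d have to look up with usbip list -l on the Pi):

# on the Pi (server side)
sudo modprobe usbip_host
sudo usbipd -D
sudo usbip bind -b 1-1

# on the Ubuntu VM (client side)
sudo modprobe vhci-hcd
sudo usbip attach -r <pi-ip> -b 1-1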

I currently have 3 1080p cameras, with probably another 2 or so to be added later.

Thanks!

I’m running 4 1080p cameras at 4 fps on an Intel machine with inference times of ~10-11 ms, which should mean my hardware can theoretically do something like ~100 fps.
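(The back-of-the-envelope math there: 1000 ms ÷ ~10-11 ms per inference ≈ 90-100 detections per second, assuming the Coral is kept fully busy.)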

I regularly get up to about 80-90 FPS being sent to the coral on that setup… So if you wanted to run 3 cameras at 4 fps, you’d probably want something in the ballpark of 60-70 fps capability at least.

I couldn’t find the exact inference times that people were getting from the Pi 4, but I recall being disappointed when I saw it. I’m guessing it could do something like 25-40 FPS, which would mean you’d need to run at quite a low frame rate like 2 FPS to get it to work. The Atomic Pi I think was a better option, getting up to ~40-60 fps based on the 16ms inference time I saw…

How about just shitcanning Windows Server and making that thing into a real server running Debian and Hass Supervised…? :upside_down_face: You’d probably get 100+ FPS, which would make short work of a decent framerate at 1080p.

Thanks for the reply - the Atomic Pi looks interesting. I’m willing to potentially buy some dedicated hardware for it, but I’m not sure what to go for. I don’t want to go overboard, buy something overkill and waste money - I want to keep costs down - but I equally don’t want to go for something underpowered that’s not up to the task. NUCs, for instance, I imagine would work brilliantly, but they’re getting into the hundreds. What would you recommend as a good happy medium?

Unfortunately I’m too far invested in my setup now, and have tons of other stuff set up, to redo my server as a Linux server - but yeah, if I were to start again, I’d probably go Linux-based.

What about something like:


with
https://www.mouser.co.uk/ProductDetail/Coral/G650-04527-01?qs=XeJtXLiO41SNhFZkjmCwDg%3D%3D

Would that work well?

Yes it would work fantastically well. 70-100fps. It’s actually the identical processor to the one in my Synology 918+ which is doing 4 1080p cameras @ 4fps and a ton of other tasks, and I still have plenty of headroom. I’ve never hit the limit of FPS that I’ve noticed.

Frigate takes about 30-40% of the processor currently, and even with all the Synology stuff I’m running (like 4 cameras of Surveillance Station recording and Plex), I’m averaging about 50% CPU usage, with my biggest spikes being to about 85% CPU.

You’d have power left over to run hass easily unless you went really crazy trying high framerates (but anything over about 4 fps is totally unnecessary IMO, maybe even 4 is)… I’d install Debian and Hass Supervised on it with the Portainer addon and then add Frigate via Portainer.

@jon102034050 what did you have to do to change the container to host mode? I am also using portainer in HASSIO to run this. It is connecting to MQTT so it appears to be working but I can’t for the life of me connect to port 5000 to see any debug information. Was there something you needed to enable/change in portainer to access it on port 5000?