Local realtime person detection for RTSP cameras

I am seeing some issues with my 5-camera setup; any help would be great.

  1. Some cameras stop detecting for a while, then come back randomly. I'm not sure what could cause that.
  2. I am using the HA integration with MQTT cameras, and sometimes the image goes back to an older detected image, say from yesterday… why would that be?
  3. Most of the issues I have (except #2) seem to be related to ffmpeg arguments. Is there any way to test what the ideal args would be? All my cameras work well in VLC.

Thanks in advance

Thank you for the quick response, Blake.

I couldn’t get that to work, but I noticed that if I opened a Bash terminal within the Home Assistant Docker container, I could use wget to access the photo. I did a little more googling and found the HA service call telegram_bot.send_photo.
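A minimal sketch of that kind of sanity check (the `<ip>` and `<camera_name>` placeholders are to be filled in with your own values): verifying that the snapshot endpoint actually returns a JPEG before pointing telegram_bot.send_photo at it.

```python
# Hypothetical sanity check: confirm the snapshot endpoint returns a real JPEG
# before wiring it into the notification automation.
import urllib.request

def is_jpeg(data: bytes) -> bool:
    """JPEG streams start with the SOI marker FF D8 and end with EOI FF D9."""
    return len(data) >= 4 and data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

# Usage (fill in your own host and camera name):
# url = "http://<ip>:5000/<camera_name>/person/best.jpg"
# with urllib.request.urlopen(url, timeout=5) as resp:
#     print("looks like a JPEG:", is_jpeg(resp.read()))
```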

This revised version of your HA automation seems to work when manually triggered:

automation:
  - alias: Alert me if a person is detected while armed away
    trigger: 
      platform: state
      entity_id: binary_sensor.camera_person
      from: 'off'
      to: 'on'
    condition:
      - condition: state
        entity_id: alarm_control_panel.home_alarm
        state: armed_away
    action:
      - service: telegram_bot.send_photo
        data:
          caption: 'A person was detected.'
          url: 'http://<ip>:5000/<camera_name>/person/best.jpg'

Note that both the caption and the url are surrounded by single quotes.


done!

Were you able to get the Mini PCIe Accelerator working with this? I’m looking to add this card to my Unraid box.

That’s an M-key M.2 to PCIe adapter. The Dual Edge TPU requires E-key. Maybe that will help?

I think my config issue is due to my NVR not conforming to the standard RTSP URL format.

To access my stream in HA I use this URL (private info removed):

camera: 
  - platform: ffmpeg
    name: Channel 1
    input: -rtsp_transport tcp -i rtsp://xxx.xxx.xx.xxx/user=xxxx_password=xxxxxxx_channel=1_stream=0.sdp?real_stream

Any idea how I can get these URLs to work in Frigate?

This would be the input in frigate.

rtsp://xxx.xxx.xx.xxx/user=xxxx_password=xxxxxxx_channel=1_stream=0.sdp?real_stream

That’s what I’ve been using, but it gives me the following:

Fontconfig error: Cannot load default config file
Starting detection process: 19
Attempting to load TPU as usb:0
Traceback (most recent call last):
  File "detect_objects.py", line 441, in <module>
    main()
  File "detect_objects.py", line 196, in main
    ffmpeg_input = get_ffmpeg_input(ffmpeg['input'])
KeyError: 'input'

I’ve got a very basic config at the moment just to get things going; then I was going to tweak it. As soon as I remove the cameras section, the add-on can start. Here is my camera config:

Camera:
  backdoor:
    input: rtsp://xxx.xxx.xx.xxx/user=xxxx_password=xxxxxxx_channel=1_stream=0.sdp?real_stream
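For what it’s worth, the `KeyError: 'input'` in the traceback comes from `ffmpeg['input']`, which suggests the config expects a lowercase `cameras:` key with a nested `ffmpeg:` section per camera. A sketch (untested; the exact schema may vary by Frigate version):

```yaml
cameras:
  backdoor:
    ffmpeg:
      input: rtsp://xxx.xxx.xx.xxx/user=xxxx_password=xxxxxxx_channel=1_stream=0.sdp?real_stream
```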

Which model is used by default (from here, I suppose: https://coral.ai/models/)?
MobileNet SSD v1 (COCO) or MobileNet SSD v2 (COCO)?
I was thinking of training a model to recognize whether a door is open or closed, but that seems complicated :smiley:
Thanks

I’ve got an interesting problem. It goes like this:

Install Frigate, decide it’s awesome, and order two Coral USB accelerators (used).

Install one accelerator on my main HA box (a NUC), and my inference speed is around 8.2 ms or so (I’m only testing on one camera at the moment).

Run Frigate while tuning cameras, zones, detections, etc.
Wake up the next morning and Home Assistant is unresponsive.
Check the hardware and see the light blinking on the Coral USB (odd, it was solid when I installed it…).
Power down my HA box, power it back up.
Frigate comes up and says it can’t find the TPU…
Swap the TPU out for my backup, and Frigate once again finds the TPU and starts working with it. (I’m now thinking it’s just bad luck and the TPU suffered a hardware failure.)

Fast forward two days, and it happens AGAIN. Same exact symptoms. Only this time I don’t have an extra TPU lying around.

I give up for a day, then decide to do some testing.
I spin up HA supervised on a Pi I have lying around (the NUC is my main system) and start playing around with the TPU(s) in Frigate.
This is when something odd happens: Frigate detects the TPU!
Here’s the catch. The TPU is only detected when I am using the lower-resolution substream that my camera has, and WILL NOT be detected using the original higher-quality stream that I was using on the NUC.
I return to the main system that was running Frigate originally and try again: no dice (with the original stream). I try again with the lower-resolution substream, and lo and behold, THE TPU IS DETECTED!

I have no idea what to make of this, other than to think that I somehow overstressed the TPUs (both, apparently?!) and damaged them. Has anyone seen partial hardware failures of these things? I am intrigued, to be honest.

UPDATE:

I plugged the (dead?!) TPU back into my main system yet again. I then lowered the resolution on my camera until Frigate could find the TPU (2048 x 1536 did not work; the next lowest resolution, 1280 x 960, did). Then, without making any other changes, I raised the resolution back up to 2048 x 1536, and to my utter confusion Frigate was able to find the TPU. To say that I am confused is an understatement. My testing methodology is typically pretty sound, as I do quite a lot of troubleshooting for a living, but this has me utterly and completely baffled.

Hi, I tested this some time ago and even wrote here that low res is unusable for proper detection, but I would not go above Full HD, as that would put too much load on the entire system for no real benefit. I still have some missed detections, but that is probably not related to Frigate. Maybe a more advanced detector like YOLOv5 could solve it, but in general the current system is totally usable. I have only 1 Coral and 8 cams hooked up.

The TPU can be finicky without enough power, so make sure it has a solid power source from the USB port. The blinking is normal and what happens when it is running detection. Also, the TPU can only be accessed by a single process. Frigate failing to find the TPU could be a lingering process that is still holding onto the TPU or it could have gotten into a bad state due to insufficient power. On the systems where I have seen that happen, unplugging the TPU and plugging it back in to power cycle it fixes it.

The size of the video stream has nothing to do with the TPU. Frigate pre-processes the frames before sending them to the TPU, so nothing different is passed to the TPU regardless of how many cameras you have or what resolution they are. A higher resolution stream increases the CPU load, the network IO, and the number of detections that are run per frame, and that may create a side effect that impacts the power or communication with the TPU. I would bet that unplugging and replugging the same TPU would have worked in most of these scenarios. Try your resolutions again, but unplug/replug the TPU between each stop/start of the container. It is likely you are seeing a correlation with the resolution by chance.
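One way to check, between restarts, whether the OS still sees the accelerator at all is to look for it in `lsusb` output. Per Coral's documentation, the USB Accelerator enumerates as `1a6e:089a` (Global Unichip) before the Edge TPU runtime loads its firmware and re-enumerates as `18d1:9302` (Google) afterwards. A rough sketch (the function name is mine):

```python
# Sketch: report whether a Coral USB Accelerator is visible on the USB bus,
# based on the two USB IDs it can enumerate under.
CORAL_IDS = {
    "1a6e:089a": "Coral visible (firmware not yet loaded)",
    "18d1:9302": "Coral visible (runtime initialized)",
}

def coral_states(lsusb_output: str) -> list:
    """Return a description for every Coral device line found in lsusb output."""
    return [desc
            for line in lsusb_output.splitlines()
            for usb_id, desc in CORAL_IDS.items()
            if usb_id in line]

# Usage:
# import subprocess
# out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
# print(coral_states(out) or "no Coral device visible")
```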

Blake, please explain what you mean by "nothing different is passed to the TPU regardless of how many cameras you have or what resolution they are," as this can't be true. If I add more cameras, the entire processing can slow down or the Coral can max out. Same with resolution: low res causes terrible detection.

It is a common misconception that frigate processes the entire frame from the camera, but it doesn’t. Frigate uses motion detection to only detect objects in certain areas of the frame. Each detection must be a 300x300 pixel image, so Frigate resizes the area(s) with motion from each frame to 300x300. That resizing is not done on the TPU. The TPU is just processing a queue of 300x300px images and returning detections. It is not involved in any other part of the process nor does it know anything about it. Higher resolutions allow smaller parts of your frame to be 300x300px, so it improves accuracy for detecting objects that are far away from the camera.
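As a rough illustration of that pipeline (the function name and the nearest-neighbor resize are mine, not Frigate's actual implementation): crop the motion region out of the frame, then scale it to the fixed 300x300 input the model expects, so the TPU only ever sees 300x300 images.

```python
# Sketch: crop a motion region from a frame and nearest-neighbor resize it to
# the 300x300 input size the detection model expects. Illustrative only.
import numpy as np

def region_to_model_input(frame, x, y, w, h, size=300):
    """Crop frame[y:y+h, x:x+w] and resize it to (size, size) by index sampling."""
    region = frame[y:y + h, x:x + w]
    rows = np.arange(size) * region.shape[0] // size  # source row per output row
    cols = np.arange(size) * region.shape[1] // size  # source col per output col
    return region[rows][:, cols]

# A hypothetical 2048x1536 frame with a 600x450 motion box:
frame = np.zeros((1536, 2048, 3), dtype=np.uint8)
patch = region_to_model_input(frame, x=100, y=200, w=600, h=450)
print(patch.shape)  # (300, 300, 3)
```

Whatever the camera resolution, the output shape is the same; only how much of the scene fits into one 300x300 crop changes.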


Yes, I know how deep learning works, but my tests still show that a 640x480 substream is terrible compared to 1080p. I am not at the level to explain why, but that is what we saw. Also, from what I see, the main issue is the fps going to the Coral: that kills or overheats it, so in my case all cams stream at just 2 or 4 fps. I just want to add that for simple indoor usage it doesn't matter much whether you use the sub or main stream; we use these in an outdoor farm environment, often at long distances, where HD is far better.

I could certainly see it being a power problem and contemplated purchasing a powered USB 3 hub.

To talk to each point:

Unplugging the TPU did not fix the problem. I can say this definitively. As I alluded to in my post, I used two different TPUs and switched between them (unplugging in the process). This was after the initial troubleshooting, in which, of course, the first step was “turn it off, turn it back on” multiple times.

I also rebuilt the Frigate container multiple times, deleted the config, and restarted the host PC (the NUC), going so far as to remove power completely.

I just did another test, here is exactly what I did:

Frigate working, with the USB TPU (at full camera resolution).

  1. Shut down Frigate (HA supervised).
  2. Unplug the TPU.
  3. Plug in the other TPU I have on hand (same USB 3 port).
  4. Start Frigate: TPU not detected… falling back to CPU.
  5. Stop Frigate, reduce the camera resolution (to 1280 x 960).
  6. Restart Frigate: the TPU is now detected and running inference.
  7. Stop Frigate.
  8. Increase the camera resolution back up to 2048 x 1536.
  9. Start Frigate: TPU found.

This tells me definitively that it is tied to resolution. How, I don’t know.

Another detail:
My TPUs never flash when running; the LED is always solid. The only time they have flashed is when I experienced the failure.

Edit:

I can help provide logs if this is something you want to dig into, just let me know what you need and how I can get it.

What is your fps? Mine is only 4 fps. I actually use a BTC mining motherboard with a 1000 W PSU, so I am sure my USB ports are well powered. I would play with the fps and a powered USB hub.

I’ve played with the FPS and set it anywhere from 15 all the way down to 6. The initial failure happened when running at 15 FPS, and the subsequent failures have happened anywhere between 6 and 15 FPS. I have seen no correlation relating to FPS. Although it could be tied to a process accessing the RTSP stream somehow, which is maybe broken (killed?) by changing the res on the camera…?

I set the same fps in the config file as on the cams, usually 2 or 4 fps. Most cams here are Hikvision, which luckily support many different fps settings.

From a software perspective, there is nothing that could possibly change whether or not the TPU is detected based on the resolution. In fact, the process that is dedicated to the TPU starts before the camera-based code is even executed. There is nothing in the logs at all? Is it just a single camera?