Local realtime person detection for RTSP cameras

The old version used OpenCV to read the RTSP feed, so it’s hard to compare the two. I’m sure there is some combination of input parameters that will work better. Which camera do you have again? Oftentimes you can find ffmpeg settings from ZoneMinder or Shinobi that will work for your camera.
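
In the meantime, a quick way to sanity-check that the stream itself is readable is something like this (the RTSP URL is a placeholder for your camera’s actual address):

import cv2

# placeholder URL; substitute your camera's real RTSP address
cap = cv2.VideoCapture("rtsp://user:pass@192.168.1.10:554/stream1")
ret, frame = cap.read()
if ret:
    print("Got a frame:", frame.shape)
else:
    print("Could not read a frame from the stream")
cap.release()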

That would help.
Since the update to the V3.0 beta, the camera snapshot (with bounding boxes) from Frigate into Home Assistant has stopped working.

I’ll post my config soon; in the meantime, check out this post: Local realtime person detection for RTSP cameras

update:

camera:
  - name: Person Front Yard Last
    platform: mqtt
    topic: frigate/front/person/snapshot

Asking here as a last resort before I give up on this. I’ve been using the Docker version for quite some time with great success. I ran my Docker containers on Debian 9 and 10, but a few days ago I switched my setup to VMware ESXi with an Ubuntu VM for the Docker containers. I was able to restore pretty much all of my 30+ containers on that new VM (including Home Assistant and zigbee2mqtt), except Frigate. It throws an exception about not being able to access the Edge TPU.
I did pass the USB device through to the VM, and the same setup works fine for a few other USB devices (Z-Wave/Zigbee USB sticks in Home Assistant, for example).
I tried playing with permissions and udev rules, but no luck; it just keeps complaining.
I installed python-edgetpu on the VM itself and it throws the same exception when I try to initialize the detection engine, so I know it has nothing to do with the container, but rather with the ESXi/VM setup itself.
Figured I’d ask if anybody else has a similar virtual setup.

On connect called
Traceback (most recent call last):
  File "detect_objects.py", line 134, in <module>
    main()
  File "detect_objects.py", line 83, in main
    prepped_frame_queue
  File "/opt/frigate/frigate/object_detection.py", line 17, in __init__
    self.engine = DetectionEngine(PATH_TO_CKPT)
  File "/usr/local/lib/python3.6/dist-packages/edgetpu/detection/engine.py", line 72, in __init__
    super().__init__(model_path)
  File "/usr/local/lib/python3.6/dist-packages/edgetpu/basic/basic_engine.py", line 40, in __init__
    self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
RuntimeError: Error in device opening (/sys/bus/usb/devices/4-1)!
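
This is roughly what I mean by initializing the detection engine on the VM directly (a sketch; the model path is a placeholder for any Edge TPU-compiled .tflite model):

from edgetpu.detection.engine import DetectionEngine

# this is the call that raises "Error in device opening" when the
# USB passthrough isn't working
engine = DetectionEngine("/path/to/model_edgetpu.tflite")
print("Edge TPU opened OK")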

I’ve seen people having USB device passthrough issues with ESXi. Try passing the whole USB hub through. You may need to move to a different USB port to avoid affecting other USB devices you don’t want to pass through.

I tried passing both USB controllers through (3.0 and 3.1) and basically shot myself in the foot :slight_smile: As soon as I enabled them for passthrough and went for a reboot, I realized it was a bad idea because I run ESXi off of a USB thumb drive :grinning: I had to reflash it from scratch. I may try passing through only one of the controllers, though in that case I will be left with only one controller for all the other VMs. What surprises me is that I have a bunch of other USB devices (hard drives, Z-Wave stick, Zigbee sniffer, etc.) and they all work fine. I even see the Edge TPU in my lsusb output on the VM:
Bus 004 Device 002: ID 1a6e:089a Global Unichip Corp.
Not sure what the hell is going on with this specific device :frowning:

@achurak1 The Coral device is strange in that the device ID seems to ‘change’ after it boots up, so you can’t just pass the device through based on its device ID. Instead, like @hasshoolio suggests, try passing through the port/hub where the device is plugged in.

See this post for some more info.

It doesn’t change for me. It does show up in the VM, but as Global Unichip Corp. rather than Google, which I assume is the actual board manufacturer.
Thanks for the link; I will see if there’s a way to match and replace the vendor ID in the ESXi settings to make it look like an actual Google device.
Passing through the whole controller is going to be my very last resort, or I can just use one of the spare Raspberry Pis I have lying around; I’d just prefer to keep everything in the same place for easier management.

How about a USB 3.0 PCIe add-in card? It should be less than $20 USD.

Worked perfectly, thank you. I was just getting confused about the placement of best.jpg.

From what I can tell, no one has successfully gotten it working on ESXi or any other VM. I tried and failed myself on ESXi after having the CPU version working flawlessly there. It will be interesting to see if someone can crack it.

Here is a preview of the new stats endpoint I implemented to help me track the performance implications of the changes I am making.

I will probably add more before I am done, but it should really help diagnose where bottlenecks are.

Very cool. Will you publish these over MQTT so HA can autodiscover them? I can see pushing them to InfluxDB or Prometheus.

Possibly. They change very frequently, so it is probably better to poll them rather than flooding MQTT.
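
For example, polling could be as simple as this (just a sketch; the host, port, and path are placeholders until the endpoint is finalized):

import time

import requests

while True:
    # placeholder URL; the final host/port/path may differ
    stats = requests.get("http://frigate:5000/stats").json()
    print(stats)
    time.sleep(60)  # poll once a minute instead of reacting to every change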

Yeah, if you are doing real-time I could see that. My power-company-provided smart bridge reports per-minute and instantaneous values via MQTT, but the instantaneous value is only available for 5 minutes after data is published to the topic, and it’s a single value.

The problem is, I have a GPU (for encoding/decoding) and two network cards already installed in the PCI slots. I’ll need to check if I have any left that can accept a USB controller, plus I’ll need to find some sort of PCI extension cable, if such a thing exists, because even if I do have a slot available, nothing would fit into it with all the other cards in the way.
For now I decided to use an Atomic Pi I had lying around for another Docker instance with only Frigate running on it. As expected, it works fine and also shows the Coral as a Google device. That part, the Google vs. Global Unichip vendor ID, I don’t understand at all. Some people report that it always shows up as Global Unichip first, but changes to Google after you run the detection engine for the first time.
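
For reference, a quick way to check which ID the Coral is currently presenting (assumes pyusb is installed; 1a6e:089a is the pre-initialization Global Unichip ID, and as far as I can tell it re-enumerates as 18d1:9302 Google Inc. after the runtime loads its firmware):

import usb.core

# pre-initialization ID ("Global Unichip Corp.")
if usb.core.find(idVendor=0x1a6e, idProduct=0x089a) is not None:
    print("Coral found, still in its pre-init (Global Unichip) state")
# post-initialization ID ("Google Inc.")
elif usb.core.find(idVendor=0x18d1, idProduct=0x9302) is not None:
    print("Coral found, already in its initialized (Google) state")
else:
    print("No Coral USB Accelerator visible")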

Hi Blake

Big thanks for the latest changes. Looking forward to the dynamic regions/object tracking :slight_smile:

I wondered what you thought of having ‘region labels’, which could be added to the MQTT topic?

I use the person detection to trigger my security lights, so this would allow the same camera to trigger different lights, e.g. region A would trigger the garage security light and region B the front door light. Not sure if this fits in with what you’re doing, but I thought I’d put it out there.
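
Roughly what I have in mind (just a sketch; the region names and coordinates are made up, and the box format is assumed to be x_min, y_min, x_max, y_max):

# made-up example regions for a 1280x720 frame
REGIONS = {
    "garage": (0, 0, 640, 720),
    "front_door": (640, 0, 1280, 720),
}

def region_label(x_min, y_min, x_max, y_max):
    # use the centre of the bounding box to decide which region it falls in
    cx = (x_min + x_max) / 2
    cy = (y_min + y_max) / 2
    for name, (rx1, ry1, rx2, ry2) in REGIONS.items():
        if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
            return name
    return "unknown"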

Thanks :christmas_tree:

I have a cheap Wyze v2 that I’m running the Dafang hacked firmware on. It’s definitely not the most stable or reliable camera. I purchased it to start looking into things like this and will eventually get proper cameras once I figure out placement and power.

At this point I’m pretty sure it’s the v4l video server on the camera that is dying. One by one I removed most of the ffmpeg settings and still had the stream fail. The camera logs have some errors that may be related, so I’m researching those.

As a side note, is there a way to have Frigate ignore object detection for things we don’t care about?

There will surely be a more elegant solution, but until then you could add a check to the get_current_frame_with_objects function, e.g.:

for obj in detected_objects:
    # only build a label for the object types we care about (here, person)
    if obj['name'] == "person":
        label = "{}: {}% {}".format(obj['name'], int(obj['score']*100), int(obj['area']))
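
Or, if you end up caring about more than one label, something like this (the OBJECTS_TO_TRACK set is just for illustration, not an existing config option):

# labels we want to keep; everything else is skipped
OBJECTS_TO_TRACK = {"person", "car"}

for obj in detected_objects:
    if obj['name'] in OBJECTS_TO_TRACK:
        label = "{}: {}% {}".format(obj['name'], int(obj['score']*100), int(obj['area']))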

That is a good idea. Can you open an issue on GitHub? The new dynamic regions will mostly make the current regions obsolete. Tripwires and similar features will likely be what you want.
