Local realtime person detection for RTSP cameras

Maybe someone using an Nvidia GPU can help me get this configured correctly. When I put this in my config file:

ffmpeg:
  input_args:
    - -c:v 
    - h264_cuvid

I keep getting:

ffmpeg sent a broken frame. something is wrong.

I’ve also tried it with hwaccel_args (see the snippet below), same thing. As soon as I remove those from the config (so Frigate runs on the CPU), everything works.
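
For reference, the hwaccel_args variant I tried looks like this (same h264_cuvid decoder as above):

ffmpeg:
  hwaccel_args:
    - -c:v
    - h264_cuvid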

Trying to get this to work on my GTX 1050 Ti. Running Unraid (yes, all the Nvidia arguments are passed to the container, see below; other containers are able to use the video card), in combination with an RTSP stream coming from a Ubiquiti UVC-G3 bullet cam.
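
To be concrete about “all Nvidia arguments”: the container has the usual Unraid Nvidia plugin settings, roughly like this (the UUID here is a placeholder for my actual GPU UUID):

--runtime=nvidia
NVIDIA_VISIBLE_DEVICES=<GPU UUID>
NVIDIA_DRIVER_CAPABILITIES=all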

Hopefully someone can shine some light on how to configure this properly, because it has already cost me two evenings of trial and error :frowning:

Thanks!

You can use the mjpeg streams, but those are generated from the input assigned to the detect role, not the resolution from the rtmp role. There is no way to get bounding boxes on streams other than the detect stream. The resolution can be specified with the ?h= parameter (example below), but it won’t do you any good to increase it beyond the resolution of the source feed.
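
For example, something along these lines (host and camera name are placeholders for your setup):

http://<frigate-host>:5000/api/<camera_name>?h=480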

Ok just checking out the new web gui now. The zone helper tool is just awesome. Thanks for all the hard work!

Everybody is talking about the new web GUI, but when I push Open Web GUI on the add-on page I don’t get anything, and I couldn’t find any reference to the actual address :frowning:

We’re working on it. In short, Home Assistant ingress is very cumbersome. You can get to the UI for now by visiting http://<your-home-assistant-machine-ip>:5000/

@blakeblackshear I just opened the new GUI and noticed all of my cameras’ images were frozen somewhere around 7am this morning. There were no events captured from today even though I had been out and around the cameras a few times. None of the HA device history shows any activity for today either.

In the logs I see this:

Exception in thread detected_frames_processor:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/opt/frigate/frigate/object_processing.py", line 517, in run
    camera_state.update(frame_time, current_tracked_objects, motion_boxes, regions)
  File "/opt/frigate/frigate/object_processing.py", line 355, in update
    c(self.name, removed_obj, frame_time)
  File "/opt/frigate/frigate/object_processing.py", line 439, in end
    event_data = obj.to_dict(include_thumbnail=True)
  File "/opt/frigate/frigate/object_processing.py", line 167, in to_dict
    'thumbnail': base64.b64encode(self.get_thumbnail()).decode('utf-8') if include_thumbnail else None
  File "/opt/frigate/frigate/object_processing.py", line 174, in get_thumbnail
    jpg_bytes = self.get_jpg_bytes(timestamp=False, bounding_box=False, crop=True, height=175)
  File "/opt/frigate/frigate/object_processing.py", line 186, in get_jpg_bytes
    best_frame = cv2.cvtColor(self.frame_cache[self.thumbnail_data['frame_time']], cv2.COLOR_YUV2BGR_I420)
KeyError: 1611057140.753266

RC2 is out: https://github.com/blakeblackshear/frigate/releases/tag/v0.8.0-rc2

Thanks. I added something to catch this exception and handle it more gracefully.
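
Roughly along these lines (a sketch of the idea, not the actual commit; the names come from the traceback above, and logger is assumed to exist in that module):

# fall back gracefully when the cached frame has already been evicted
frame_time = self.thumbnail_data['frame_time']
if frame_time not in self.frame_cache:
    logger.warning(f'Unable to create thumbnail, frame {frame_time} is no longer cached')
    return None
best_frame = cv2.cvtColor(self.frame_cache[frame_time], cv2.COLOR_YUV2BGR_I420)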

Thanks for the hard work, @blakeblackshear! I just installed the RC2 hassio addon and I can access the web UI via ingress now, but after clicking on “Cameras” and then on one of my cameras, the mjpeg doesn’t show.

Just noticed that too. Guess we will need an RC3. Getting closer.

Has anyone seen another addon using ingress with a working mjpeg feed? It seems like something with ingress is breaking it.

I’m seeing a different Content-Type response header between hitting Frigate directly and going through ingress. Could the missing boundary parameter be the cause?

Frigate:

Content-Type: multipart/x-mixed-replace; boundary=frame

Ingress:

Content-Type: multipart/x-mixed-replace
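
For context: with multipart/x-mixed-replace, the boundary parameter tells the browser where one jpeg frame ends and the next begins, so the stream body (using the boundary=frame value from Frigate’s header) looks roughly like:

--frame
Content-Type: image/jpeg

<jpeg bytes>
--frame
…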

I also see a bug with the preact-router not setting the default view to the list of cameras. I think this should be fixable. I’ll take a look.

That’s probably the issue.

Has anyone been able to run this on a Dev Board or Dev Board Mini? I just got one today and can’t figure out what to do.

The device shows up as /dev/cu.usbmodemmocha_tang1 and /dev/tty.usbmodemmocha_tang1 on my Mac. I’ve tried mounting both of these as volumes to /dev/bus/usb like so:

frigate:
    container_name: frigate
    restart: unless-stopped
    privileged: true
    image: blakeblackshear/frigate:stable-amd64
    volumes:
      - /dev/tty.usbmodemmocha_tang1:/dev/bus/usb
      - /etc/localtime:/etc/localtime:ro

I also tried - /dev/bus/usb:/dev/bus/usb like the documentation shows, but I didn’t see anything in /dev/bus/usb on the machine it’s running on.
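
For comparison, the docs pass the USB bus through with devices: rather than volumes: (a sketch of that variant; I don’t know whether it applies to the Dev Board’s serial device):

frigate:
    container_name: frigate
    restart: unless-stopped
    privileged: true
    image: blakeblackshear/frigate:stable-amd64
    devices:
      - /dev/bus/usb:/dev/bus/usb
    volumes:
      - /etc/localtime:/etc/localtime:ro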

Hello :slight_smile: I just swapped my whole Hass.io installation to a newer NUC and installed the Coral driver as documented on their site. I have the PCIe version, and without seeing it in the exposed PCI devices in Hass.io it is somehow working directly :slight_smile: I guess it’s Frigate exposing the PCIe device, right? :slight_smile: Great job! I now have a TPU… wow… today is a good day :slight_smile: Thanks sooo much for all your hard work!

No one has Nvidia? :stuck_out_tongue:

I confirm! Same issue here.

Hi, I have installed 0.8.0-rc2.

I can’t view clips in the web UI or in the HA media browser.

I have this error with all models of cameras I have hooked up.

There’s a bug with rc2… Look a couple of replies up…
Go back to rc1… if possible…