Local realtime person detection for RTSP cameras

Actually seems to be a bug in HA Supervisor: https://github.com/home-assistant/supervisor/issues/2669

How are people adding the GUI into Hass.io Lovelace? I guess I need to build a custom Lovelace page with all the entities…

I thought I could just embed Frigate's GUI IP…

Hi guys,

I am planning to run Frigate and have read almost the whole thread, but it still isn't clear to me what the best approach would be given my hardware.
I have a Coral stick, an RPi 3 used for general stuff (torrent, rtl_433, other small bits; CPU usage peaks at 20% max), and a mini PC with a Celeron N4100 (Gemini Lake) and 4 GB of RAM running Proxmox + Supervised Home Assistant.

Now, I understand the RPi 3 is not ideal due to its lack of USB 3, so the best option would be to use the Coral stick on my mini PC, but:

  • It's clearly not ideal/difficult to run under Proxmox or in an LXC container for performance reasons
  • I have 1.5 GB of RAM for Proxmox and 2.5 GB allocated to Home Assistant Supervised (Debian)… how much RAM does Frigate use? (5 cameras, 15 regions)

Any suggestions?

I created a camera view using the custom swipe card and picture glance cards. The default view is the IP camera itself, and subsequent swipes are the actual detection cams. Real alerts when we are away get sent to us via Telegram, and we also record clips accessed through the media browser. I have an enterprise NVR that does the real 24/7 recording of the cameras, but the Telegram notifications and media browser clips make for super quick review on the go.
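For anyone wanting similar Telegram alerts, a rough sketch of how they can be wired up — this is only an illustration, not my exact config: the presence group, notifier name, and payload fields are placeholders, and it assumes Frigate is publishing detection events on its `frigate/events` MQTT topic:

```yaml
# Hypothetical automation sketch: forward Frigate person events to Telegram
# when nobody is home. Entity and service names below are made up.
automation:
  - alias: "Frigate person alert via Telegram"
    trigger:
      - platform: mqtt
        topic: frigate/events
    condition:
      # Only fire for person detections
      - condition: template
        value_template: "{{ trigger.payload_json['after']['label'] == 'person' }}"
      # Only fire while we are away
      - condition: state
        entity_id: group.all_persons
        state: not_home
    action:
      - service: notify.telegram
        data:
          message: >
            Person detected on {{ trigger.payload_json['after']['camera'] }}
```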


Thanks for pointing this out.

How did you swipe through the captured snapshots?

Also, how can we add more objects? For example, squirrels (they keep getting detected as humans in my setup). Are they already mapped, or do I need to somehow take pictures and feed them to the AI… not sure how this works.

The Frigate "cam" that comes into HA only shows the last detection. I run detections for people and cats, just because.

In the case of my backyard camera example, Frigate brings in camera.backyard, camera.backyard_person, and camera.backyard_cat.

I then just built a swipe card that included those entities: https://github.com/bramkragten/swipe-card

Something like this:

- type: custom:swipe-card
  cards:
    - type: picture-glance
      camera_image: camera.backyard
      entities:
        - entity: light.some_light
    - type: picture-glance
      camera_image: camera.backyard_person
      entities:
        - entity: binary_sensor.backyard_person_motion
        - entity: sensor.backyard_person
    - type: picture-glance
      camera_image: camera.backyard_cat
      entities:
        - entity: light.some_light

Regarding objects like squirrels etc., you'll need to read through the Frigate documentation and research loading your own AI models vs. what is built into Frigate by default.


Anyone having issues upgrading to the 1.13 Frigate add-on?

It seems to error out because it can't find something new in the config file, but I don't see any documentation stating anything different from or additional to what I have in my config for the 1.12 version of the add-on.

I use the Nabu Casa cloud service to remotely connect to HA. Remote access to Frigate works fine if I use the sidebar to select it; however, I was trying to integrate Frigate into a Lovelace webpage card. This works great locally if I define my URL as http://xx.xx.xx.xx:5000. I even used some fun layout to provide card navigation above the Frigate display. The problem is that this doesn't work remotely.

Is there a way to display Frigate embedded in a Lovelace page with a URL that works with Nabu Casa cloud?
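For context, the local-only version of the card looks roughly like this (just a sketch — the IP is left as a placeholder, and Lovelace's webpage card is declared as type iframe). The remote case presumably needs Frigate reachable over a public HTTPS URL rather than a raw LAN address:

```yaml
# Sketch of the local-only webpage (iframe) card described above.
# xx.xx.xx.xx is a placeholder for the Frigate host's LAN IP.
type: iframe
url: http://xx.xx.xx.xx:5000
aspect_ratio: 56%
```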


I'm trying to understand how the AI works. When we pick "person" or "cat", is the comparison made locally or against some cloud data? For example, if I add a new label like "wolf", do I need to train the code to recognize the new label somehow, or will it fetch similar data already stored in some centralized DB?

It's all local. The model is trained on the COCO dataset and provided by Google. It does not learn or adapt at runtime in any way. You can train your own models and mount them in the container if you want.
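If you go that route, the mount is roughly this — a sketch only: the host paths are made up, and the in-container paths assume the default locations Frigate reads its Edge TPU model and label map from:

```yaml
# docker-compose sketch: overlay a custom-trained model and label file
# over the defaults inside the Frigate container. Host paths are hypothetical.
services:
  frigate:
    image: blakeblackshear/frigate:stable-amd64
    volumes:
      - /opt/frigate/config:/config
      - /opt/frigate/my_model.tflite:/edgetpu_model.tflite
      - /opt/frigate/my_labelmap.txt:/labelmap.txt
```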

OK, so that means there's a dataset of predefined items already… Is there a place where I can get the full list?

OK, I think I've found it: https://www.tensorflow.org/js/models

OK, it's too complex; even if I figure out how to train it, I have no clue how to add the new object into your code :slight_smile:

I will be laying the groundwork for custom models in the next release.


Hi all, just curious if anyone has gotten any sort of hardware acceleration working running Frigate in Docker on Windows with WSL2?

Nope, still trying. I read it might be related to the Windows version used. Still testing other stuff before reinstalling my Windows 10.


Has anyone run Frigate using an existing video file as an input rather than a stream? I would like to run it against a long clip from a static camera to pull person/vehicle detections, rather than skipping through it a few seconds at a time.

There are some docs in the contributing section (may not be released yet) for doing this: https://github.com/blakeblackshear/frigate/blob/0f1bc40f0077830d4dbd4a5d51e2f36cfea010ae/docs/docs/contributing.md#2-create-a-local-config-file-for-testing
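That doc essentially amounts to pointing a camera input at a file path instead of an RTSP URL; a rough sketch (camera name and file path are made up, and the input args assume ffmpeg should read the file at its native frame rate and loop it):

```yaml
# Sketch: run Frigate against a recorded clip instead of a live stream.
# -re makes ffmpeg read at native frame rate; -stream_loop -1 loops the file.
cameras:
  driveway_clip:
    ffmpeg:
      input_args: -re -stream_loop -1 -fflags +genpts
      inputs:
        - path: /media/long_clip.mp4
          roles:
            - detect
```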

I've got a working system on an x86 VM using a Coral Edge PCIe TPU, with sub-8 ms inference times. The problem I'm trying to solve is the ffmpeg load, which is much higher than I'd like.
I can't easily get a GPU into the VM, and I've explored the poor offerings from both Nvidia and AMD with regard to shared GPU access etc.

So I've picked up a Jetson Nano and installed the Coral Edge PCIe TPU there, with the expectation of using the built-in GPU in the ffmpeg commands as already documented by blakeblackshear.

The problem is that on starting Frigate, I appear to be stuck on:
peewee_migrate INFO : Starting migrations

I see a bit of load in top; nginx is consuming about 12%.

Just wondering if this is normal, or what I might be able to do to figure out whether something is wrong, and what.

Thanks.