I think that H265 is not possible.
H265 is possible, but I can only view it in Safari.
Updated to 0.9.* and made the config changes necessary for the new release. Now I get this error:
[Errno 18] Invalid cross-device link: '/media/frigate/clips/frigate.db' -> '/media/frigate/frigate.db'
[cmd] python3 exited 1
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
Does anyone know what this is about? No changes to the clips folder in Unraid, and that's where the db lives.
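For context, errno 18 is EXDEV: a plain rename cannot cross filesystem boundaries, and the migration is presumably doing something like os.rename() to move frigate.db out of the clips folder, which fails if /media/frigate/clips and /media/frigate resolve to different mounts (easy to hit with Unraid shares). A minimal Python illustration of the difference:

```python
import os
import shutil

src = "/media/frigate/clips/frigate.db"
dst = "/media/frigate/frigate.db"

try:
    # os.rename() maps to rename(2), which only works within a single
    # filesystem; across mount points it raises OSError errno 18 (EXDEV).
    os.rename(src, dst)
except OSError:
    # shutil.move() falls back to copy-then-delete, which works across devices.
    shutil.move(src, dst)
```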
Interesting. When I set up H265, the Frigate logs had errors; I don't remember which. After changing to H264, all good.
Having a weird problem where the MQTT-based entities with the latest beta integration stop updating after a few days until I restart the Frigate container. What's weird is that if I watch with MQTT Explorer, the topic shows available = online, my Mosquitto cluster sees the client as active, and checking the events in MQTT Explorer, they are showing up correctly.
On a side note, while investigating this I may have stumbled on a bug in Frigate's mqtt.py: I don't see an on_disconnect handler defined in the code. However, that doesn't seem to be my issue in this case.
Anyone else seeing anything like this?
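For reference, the pattern I'd expect is roughly the following (a minimal paho-mqtt sketch, not Frigate's actual code): log unexpected drops in on_disconnect, and re-subscribe in on_connect so subscriptions survive a reconnect.

```python
import logging

import paho.mqtt.client as mqtt

logger = logging.getLogger(__name__)

def on_connect(client, userdata, flags, rc):
    # (Re)subscribing here means subscriptions are restored after a reconnect.
    client.subscribe("frigate/#")

def on_disconnect(client, userdata, rc):
    # rc != 0 is an unexpected drop; the network loop keeps retrying
    # the connection in the background.
    if rc != 0:
        logger.warning("MQTT connection lost (rc=%s), reconnecting", rc)

client = mqtt.Client()
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.connect("localhost", 1883)
client.loop_forever()
```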
Do only admins have access to the Frigate addon web interface?
Could it be that the detect stream has to be H264? The record stream can be H265; that's how mine is set.
I see most people get by using just a Pi to run HA and adding a Coral stick to run Frigate, but what about adding facial recognition to the mix? Would something like a higher-end NVIDIA Jetson with a Coral stick work, or could even just the Pi and Coral stick handle this workflow? Trying to find an all-in-one hardware solution for facial recognition.
Doesn't seem to matter where I move the db-shm and db.wal files. I just keep getting the same error each time.
I'm assuming this is an Unraid-specific breaking change for 0.9.*
Rename the old clips folder or move it, use a clean empty clips folder, do the same for the db folder, then start Frigate.
That was it. Thank you
Sorry, just jumping in because the topic title about real-time person detection caught my eye.
I have tested Frigate/DOODS/DeepStack with both NVIDIA CUDA and Google Coral for incident detection, like somebody falling down, kids running, etc.
There is no real-time IMHO from my tests, but a 2 to 8 second delay, counted from a person jumping into the camera frame to triggering something like a light or an alert. I need something below 1 second, ideally 0.7 s of response from camera to HA to light bulb.
The problem is not with the Edge TPU, because inference time is about 75-300 ms; the real problem is too much overhead in HA/Frigate. Maybe writing your own Python script that pumps events to MQTT would do better, something like the sketch below.
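For example, a rough sketch that listens to Frigate's event topic and flips a light directly over MQTT, skipping the HA automation hop (the broker address and the zigbee2mqtt-style light topic are assumptions; adjust for your setup):

```python
import json

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("frigate/events")

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # Frigate publishes "new"/"update"/"end" events; react to new persons only.
    if event["type"] == "new" and event["after"]["label"] == "person":
        # Hypothetical zigbee2mqtt light topic; bypasses the HA automation.
        client.publish("zigbee2mqtt/hall_light/set", '{"state": "ON"}')

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_forever()
```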
Just sharing my experience.
My experience is that the lights are on after less than 1 second. I run Frigate and Home Assistant in Docker on a PC with Ubuntu. I use Node-RED for automations. I have an Intel 4770 CPU and a Coral USB, no GPU to speak of. IMHO it works great.
Interesting. Mine was tested on a Jetson Nano 4 GB (both CUDA and Coral) and an Intel i3 10th gen (Coral). I can still feel a lag of about 2-8 seconds at 1080p over an RTSP connection. My router is an Asus XT8. Is your below-1-second count from "suddenly appears in camera" to "HA trigger"? Can you try printing a person's face, suddenly showing it to the camera, and checking how fast it triggers HA (the light bulb)?
Hmm interesting point, will keep it in mind thx
When I enter the room, the light comes on, most likely faster than if I had to reach for the switch, which is right next to the door.
I have not done scientific tests measuring the delay, and unfortunately I do not have a printer to test it the way you propose. Nevertheless, for my use case it works great. It is a lot better than PIR, which may get triggered by the cats.
I use cameras for controlling the lights in three rooms.
Gradually entering a scene and running into a scene are different. In my case, my camera is at the gate and I always receive a Telegram message triggered by Frigate/HA when I unlock the lock. Frigate is doing great. I also have another cam in the kitchen, where DOODS triggers Google Home to announce the kettle temperature when a person (kid) is detected in the kitchen. It is working fine too. But neither is the real time I am looking for when detecting very fast-paced motion. You can use a blank A4 sheet to cover your face, suddenly show it to the camera, and wait for the trigger; sometimes it takes a few seconds to respond.
Again, yesterday I did another experiment: just watch Frigate on your phone and walk into the camera's view. Even without any AI detection, the live video itself is a few seconds behind.
I agree that is not real time. I would consider real time to be in the millisecond range. Frigate and Home Assistant are not there, at least not on my setup. However, in a very unscientific test with a mobile phone, it took 43 frames, or 1.4 seconds, from when I appeared on the video I filmed with the phone to when the lights were on. For my application, that is plenty fast enough.
What are you trying to do that requires very fast response/low latency? Do you run anything wireless? What kind of video system do you use? What frame rate do you use for inference? My test was not sub-1-second as I thought, but still very far from 8 seconds.
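If you want a more repeatable number than counting video frames, a rough probe can compare the wall clock against the frame_time Frigate stamps on its events (assuming the event payload includes after.frame_time as an epoch timestamp, the broker is on localhost, and the clocks agree; this measures camera-frame-to-MQTT-event, not all the way to the light):

```python
import json
import time

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("frigate/events")

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # frame_time is the epoch timestamp of the frame that produced the event.
    delay = time.time() - event["after"]["frame_time"]
    print(f"{event['after']['label']}: {delay:.2f}s from frame to MQTT event")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_forever()
```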