@blakeblackshear I have recently disabled draw zones, and still have timestamps enabled for each camera in the config file. When both are set to True, I get timestamps. With draw_zones disabled, no timestamps.
Using rc2. Not seeing anything here which could explain it: https://github.com/blakeblackshear/frigate/pull/231
Tested with the same config file on stable, and timestamps work as expected. Maybe I missed something in the release notes…
I had wanted to try your component for a long time…
Well done! Installing took a little hammering here and there (mainly due to my config), but it finally runs.
Please be aware that in the GitHub instructions there is a missing `-` in the `--name` option of the `docker run` snippet (i.e. it reads `-name` instead of `--name`).
@blakeblackshear OK, I had seen this, but for me “drawing” means the colored rectangles, not the timestamps.
@zeliant I will try what you suggested. But I have a single graphics card (the Intel GPU), and I have to virtualize it so it can be used by multiple clients; I cannot simply enable GPU passthrough. Yesterday, for example, I installed Jellyfin (great program, BTW) inside Docker running on a VM. I enabled GVT-g on this VM, and HW acceleration works fine. This is why I wanted to run frigate inside a VM and not an LXC, because I think LXC cannot enable GVT-g.
My i7-7700 gets a little stressed (3 cameras).
How can I lower CPU usage, apart from external accelerators?
I have already entered the QuickSync parameters.
Apologies, as I guess this has been asked already in this megathread.
Also, would it be possible to trigger recognition only on demand? My 3 cameras have highly reactive motion detection alarms (which give a ton of false positives), so triggering recognition from them would be a perfect solution.
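For what it's worth, beyond hardware acceleration the usual lever is to feed detection a low-resolution substream instead of the main stream, since detection cost scales with resolution and frame rate. A hedged sketch of a camera entry — the URL is hypothetical and the field names vary by frigate version:

```yaml
cameras:
  front:
    ffmpeg:
      # Channels/102 is this camera's low-res substream (hypothetical URL);
      # detecting on 640x360 instead of 1080p cuts CPU substantially.
      input: rtsp://user:pass@192.168.1.10:554/Streaming/Channels/102
    # process every 2nd frame to halve detection load (version-dependent option)
    take_frame: 2
```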
I assume your LXC is running in dom0. In that case you don’t need GVT-g; I think it should work fine as described and still coexist with other VMs using GVT-g.
Let me try to rephrase what you said:
Since the LXC runs on top of the Proxmox host, it doesn’t need virtualisation, and I can pass the GPU through to the LXC while keeping GVT-g for the VMs?
I just tried and it seems it still works. I am running zoneminder in a Docker VM which has GVT-g enabled, and ffmpeg on 3 streams @1080p seems capable of using VAAPI (while using roughly 20% CPU).
I do have an issue with detect_objects.py erroring out because it can’t find plasma_store when run from crontab at reboot; however, if I manually run the .py file from a terminal, everything works as expected.
My cron is as follows, but I have tried a multitude of variants with the same outcome.
```
On connect called
Traceback (most recent call last):
  File "/opt/frigate/detect_objects.py", line 460, in <module>
    main()
  File "/opt/frigate/detect_objects.py", line 175, in main
    plasma_process = start_plasma_store()
  File "/opt/frigate/detect_objects.py", line 68, in start_plasma_store
    plasma_process = sp.Popen(plasma_cmd, stdout=sp.DEVNULL, stderr=sp.DEVNULL)
  File "/usr/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.7/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'plasma_store': 'plasma_store'
```
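A `FileNotFoundError` for `plasma_store` under cron but not in a terminal usually points to cron's minimal `PATH` (often just `/usr/bin:/bin`), which doesn't include the directory pip installed `plasma_store` into. One hedged fix is to declare `PATH` at the top of the crontab — the paths below are assumptions; `which plasma_store` in a working terminal shows the real location:

```
# crontab -e
# cron's default PATH usually lacks /usr/local/bin, where pip tends to
# put plasma_store; declare PATH before the @reboot entry.
PATH=/usr/local/bin:/usr/bin:/bin
@reboot cd /opt/frigate && python3 detect_objects.py >> /tmp/frigate.log 2>&1
```

Alternatively, a small wrapper script that exports `PATH` before launching the Python file achieves the same thing.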
Just switched over to this project from DOODS. I had an extra 4GB RPi4 lying around, so I decided to give the new dev build a try after seeing it was available. With six substreams and a Coral, I’m only seeing ~30% CPU utilization, 18ms Coral inference speed, and 7-10fps detection speed. Most cameras look like they are detecting at around 5-8fps.
I’m just blown away at how well this performs. What an amazing project.
```
Traceback (most recent call last):
  File "detect_objects.py", line 462, in <module>
    main()
  File "detect_objects.py", line 238, in main
    frame_shape = get_frame_shape(ffmpeg_input)
  File "/opt/frigate/frigate/video.py", line 40, in get_frame_shape
    video_info = [s for s in info['streams'] if s['codec_type'] == 'video'][0]
KeyError: 'streams'
```
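That `KeyError: 'streams'` generally means ffprobe returned JSON with no `streams` key at all — typically a wrong URL, bad credentials, or an unreachable camera, rather than a bug in the parsing itself. A minimal sketch of the same lookup with a clearer error (this assumes, based on the traceback, that `get_frame_shape` parses ffprobe's JSON output; it is not frigate's actual code):

```python
import json
import subprocess as sp

def parse_frame_shape(info):
    """Extract (height, width, channels) from ffprobe's JSON dict,
    raising a descriptive error instead of a bare KeyError when
    no streams came back (wrong URL, auth failure, camera offline)."""
    if "streams" not in info:
        raise RuntimeError("ffprobe returned no streams; "
                           "check the URL/credentials and try ffprobe by hand")
    video = [s for s in info["streams"] if s["codec_type"] == "video"][0]
    return (video["height"], video["width"], 3)

def get_frame_shape(ffmpeg_input):
    # Probe the stream the same way the traceback suggests frigate does.
    p = sp.run(["ffprobe", "-v", "quiet", "-print_format", "json",
                "-show_streams", ffmpeg_input],
               capture_output=True, text=True)
    return parse_frame_shape(json.loads(p.stdout or "{}"))
```

Running `ffprobe -v quiet -print_format json -show_streams <your input url>` by hand and checking whether the output actually contains a `streams` array is a quick way to confirm the camera URL is the problem.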