Looks good! I've got the Docker container up and running, but I'm struggling to understand how to set up a camera. I've entered the default data in config.yaml, but I want to use VAAPI for my camera. How do I set this up?
Sorry you’re having issues! It should work out of the box.
What does your docker run command look like (or your docker-compose if you are using that)?
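For anyone following along, a minimal docker-compose for Viseron might look roughly like the sketch below. The roflcoopter/viseron image name and the /config and /recordings mounts follow the project README; mapping /dev/dri into the container is my assumption for what VAAPI needs, since it exposes the Intel/AMD render node.

version: "3"
services:
  viseron:
    image: roflcoopter/viseron:latest
    volumes:
      - ./config:/config          # config.yaml lives here
      - ./recordings:/recordings  # recordings are written here
    devices:
      - /dev/dri                  # expose the GPU render node for VAAPI
    restart: unless-stopped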
I am going to spend some time making it easier to troubleshoot FFmpeg; right now the errors are simply swallowed.
And this is what my config.yaml currently looks like:
# See the README for the full list of configuration options.
cameras:
  - name: FrontYardAmcrest
    host: 192.168.1.13
    port: 554
    username: <redacted>
    password: <redacted>
    path: /cam/realmonitor?channel=1&subtype=0

# MQTT is optional
#mqtt:
#  broker: <ip address or hostname of broker>
#  port: <port the broker listens on>
#  username: <if auth is enabled>
#  password: <if auth is enabled>
I’m reading the part (in README.md) that shows the following to use VAAPI hardware acceleration.
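The README snippet in question is presumably along these lines (a sketch: the hwaccel_args option name comes up later in this thread, -hwaccel vaapi and -vaapi_device are standard FFmpeg flags, and /dev/dri/renderD128 is the usual VAAPI render node):

cameras:
  - name: FrontYardAmcrest
    host: 192.168.1.13
    port: 554
    username: <redacted>
    password: <redacted>
    path: /cam/realmonitor?channel=1&subtype=0
    hwaccel_args:            # passed through to FFmpeg before the input
      - -hwaccel
      - vaapi
      - -vaapi_device
      - /dev/dri/renderD128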
Here we go. Looks like it’s working with VAAPI acceleration? Apologies, I was a bit confused; I thought I needed to somehow capture that VAAPI command in the config.yaml.
Hi, it’s nice to see so many recent projects working on ML for NVR & security around HA. I’m currently using Frigate (on a Coral USB) + Deepstack (on CPU) + HA to get mobile notifications with a snapshot & video of people detected on my cameras. I combine Frigate with Deepstack because TFLite models raise too many false positives. The Coral is great for keeping CPU load negligible but is very limited in detector capability/tuning, so using the CPU just to confirm the Coral detections keeps overall CPU usage low. Chaining multiple detectors as a pipeline makes false positives disappear for me.
At some point you mention “multiple detectors” in the future plans for Viseron: would it be possible to run multiple detectors on the same event/image to confirm a detection? It would be a big selling point, and I could jump on your bandwagon to simplify my setup.
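Purely to illustrate the idea (this is hypothetical syntax, not an existing Viseron option): a chained-detector config could let a fast first-pass detector trigger a slower second one that has to confirm the detection before an event fires.

object_detection:
  detectors:          # hypothetical option: evaluated in order, all must agree
    - type: edgetpu   # fast first pass on the Coral, keeps CPU load low
    - type: darknet   # CPU second pass to confirm and cut false positives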
Sounds like a great idea! I can’t see a problem with implementing that feature.
I would really appreciate it if you created a FR on GitHub; it’s much easier to keep track of there.
Traceback (most recent call last):
  File "viseron.py", line 163, in <module>
    main()
  File "viseron.py", line 56, in main
    mqtt_queue=mqtt_queue,
  File "/src/viseron/lib/nvr.py", line 47, in __init__
    self.ffmpeg = FFMPEGCamera(self.config, frame_buffer)
  File "/src/viseron/lib/camera.py", line 52, in __init__
    self._logger.debug(f"FFMPEG decoder command: {' '.join(self.build_command())}")
  File "/src/viseron/lib/camera.py", line 80, in build_command
    + self.config.camera.output_args
TypeError: can only concatenate list (not "str") to list
I know VERY little about FFmpeg, so I’m not sure how to troubleshoot this.
Also, a side note: would it be possible to construct the FFmpeg command for the user?
I’m wondering if you could have something like FFMPEG_TYPE: GPU,
and the specific command, including the formulated RTSP path, could be built automatically (since the user/pass/port/ip/path are all specified)?
You could also include a CUSTOM type for anyone who wants to add their own FFmpeg args (see the sketch below).
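As a sketch of that suggestion (again hypothetical syntax, not something Viseron currently supports):

cameras:
  - name: FrontYardAmcrest
    ffmpeg_type: gpu         # hypothetical: Viseron builds the full command
    # ffmpeg_type: custom    # hypothetical: supply your own arguments instead
    # custom_ffmpeg_args:
    #   - -hwaccel
    #   - vaapi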
You can pretty much piece it together using the information in the README under the Camera section.
If you pulled the viseron-cuda image it should default to using your NVIDIA GPU, if it is supported, and generate the correct FFmpeg command.
What happens if you remove your hwaccel_args from the config?
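For what it’s worth, that TypeError usually means one of the *_args options in config.yaml was written as a single string where a YAML list is expected. The traceback points at output_args, so the shape of the fix would be something like this (the argument values here are illustrative, not Viseron’s actual defaults):

# Wrong: a single string fails when Viseron concatenates argument lists
output_args: -f rawvideo -pix_fmt nv12

# Right: a YAML list, one token per item, concatenates cleanly
output_args:
  - -f
  - rawvideo
  - -pix_fmt
  - nv12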
You probably need to tune the parameters a little.
If you turn on debug logging it will print how big the motion area is and you can pick a more suitable value.
motion_detection:
  area: 100  # higher value means less sensitive
  frames: 1  # how many frames in a row motion must persist before triggering