Viseron v3.0.0b1 - Self-hosted, local only NVR and AI Computer Vision software

Viseron
Viseron is a self-hosted NVR deployed via Docker, which utilizes machine learning to detect objects and start recordings.

v3.0.0b1 was just released, featuring 24/7 recordings among other things.
To try it out, simply use the 3.0.0b1 Docker tag.
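
A minimal docker-compose sketch for trying the beta. The image name, volume paths, and the 8888 web UI port mapping here are assumptions; adjust them to your setup and check the docs for the full list of volumes and hardware acceleration devices:

services:
  viseron:
    image: roflcoopter/viseron:3.0.0b1   # the 3.0.0b1 beta tag
    container_name: viseron
    volumes:
      - /path/to/recordings:/recordings  # where recordings are stored
      - /path/to/config:/config          # where config.yaml lives
      - /etc/localtime:/etc/localtime:ro # keep container time in sync with the host
    ports:
      - "8888:8888"                      # web UI (port mapping is an assumption)
    restart: unless-stopped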

Check out the release notes:

Viseron's features include, but are not limited to, the following:

  • 24/7 Recordings
  • Timeline view of events
  • Object detection via:
    • YOLOv3, YOLOv4 and YOLOv7 Darknet using OpenCV
    • Tensorflow via Google Coral EdgeTPU
    • CodeProjectAI
  • Motion detection
  • Face recognition
  • Image Classification
  • Responsive, mobile friendly Web UI written in TypeScript React
  • MQTT support
  • Home Assistant MQTT Discovery
  • Lookback, buffers frames to record before the event actually happened
  • Supports hardware acceleration on different platforms
    • CUDA for systems with a supported GPU
    • OpenCL
    • OpenMax and MMAL on the RaspberryPi 3B+
    • video4linux on the RaspberryPi 4
    • Intel QuickSync with VA-API
    • NVIDIA video4linux2 on Jetson Nano
  • Multiplatform, should support any amd64, aarch64 or armhf machine running Linux.
    Specific images are built to support:
    • RaspberryPi 3B+
    • RaspberryPi 4
    • NVIDIA Jetson Nano
  • Zones to limit detection to a particular area to reduce false positives
  • Masks to limit where object and motion detection occurs
  • Stop/start cameras on-demand over MQTT

Check out the documentation here:

I hope you’ll find this useful!
Viseron is a project that is under active development and I appreciate any feedback or feature requests you have.

Looks good! I've got the Docker container up and running, but I'm struggling to understand how to set up a camera. I've entered the default data in config.yaml, but I want to use VAAPI for my camera. How do I set this up?

This is great, thanks for another alternative!!

Sorry you’re having issues! It should work out of the box.
What does your docker run command look like (or your docker-compose if you are using that)?

I am gonna spend some time making it easier to troubleshoot FFMPEG; right now the errors are simply swallowed.

Here’s my docker run

docker run -d --name=viseron --restart always --device /dev/dri -v /srv/dev-disk-by-label-docker/docker/viseron/recordings:/recordings:rw -v /srv/dev-disk-by-label-docker/docker/viseron/config:/config:rw -v /etc/localtime:/etc/localtime:ro roflcoopter/viseron-vaapi:latest

And this is what my config.yaml currently looks like:

# See the README for the full list of configuration options.
cameras:
  - name: FrontYardAmcrest
    host: 192.168.1.13
    port: 554
    username: <redacted>
    password: <redacted>
    path: /cam/realmonitor?channel=1&subtype=0

# MQTT is optional
#mqtt:
#  broker: <ip address or hostname of broker>
#  port: <port the broker listens on>
#  username: <if auth is enabled>
#  password: <if auth is enabled>

I’m reading the part in readme.md that shows the following for using VAAPI hardware acceleration.

ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts -stimeout 5000000 -use_wallclock_as_timestamps 1 -vsync 0 -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -rtsp_transport tcp -i rtsp://<username>:<password>@<host>:<port><path> -f rawvideo -pix_fmt nv12 pipe:1

But I’m not sure how this needs to be captured in config.yaml in order to use hardware acceleration?

Can you add

logging:
  level: debug

and send me the output of docker logs viseron (remove any sensitive information)

The ffmpeg command should be generated by default, so you don't need to add anything.
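
For reference, if you ever do want to override the generated flags, hwaccel_args is a list of individual FFmpeg arguments rather than a full command line. A sketch, reusing the camera details from your config and the VAAPI flags the image generates by default (so adding this should normally be unnecessary):

cameras:
  - name: FrontYardAmcrest
    host: 192.168.1.13
    port: 554
    path: /cam/realmonitor?channel=1&subtype=0
    # Optional override; one flag or value per list item.
    hwaccel_args:
      - -hwaccel
      - vaapi
      - -vaapi_device
      - /dev/dri/renderD128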


[2020-09-01 07:27:55] [root        ] [INFO    ] - Kill received! Sending kill to threads..
[2020-09-01 07:27:55] [lib.nvr     ] [INFO    ] - Stopping NVR thread
[2020-09-01 07:27:55] [lib.nvr     ] [INFO    ] - Exiting NVR thread
[2020-09-01 07:27:55] [lib.camera  ] [INFO    ] - FFMPEG frame grabber stopped
[2020-09-01 07:27:55] [root        ] [INFO    ] - Exiting
[2020-09-01 07:27:58] [root        ] [INFO    ] - -------------------------------------------
[2020-09-01 07:27:58] [root        ] [INFO    ] - Initializing...
[2020-09-01 07:27:58] [root        ] [INFO    ] - Starting cleanup scheduler
[2020-09-01 07:27:58] [apscheduler.scheduler] [INFO    ] - Adding job tentatively -- it will be properly scheduled when the scheduler starts
[2020-09-01 07:27:58] [apscheduler.scheduler] [INFO    ] - Added job "Cleanup.cleanup" to job store "default"
[2020-09-01 07:27:58] [apscheduler.scheduler] [INFO    ] - Scheduler started
[2020-09-01 07:27:58] [apscheduler.scheduler] [DEBUG   ] - Looking for jobs to run
[2020-09-01 07:27:58] [root        ] [INFO    ] - Running initial cleanup
[2020-09-01 07:27:58] [apscheduler.scheduler] [DEBUG   ] - Next wakeup is due at 2020-09-02 01:00:00+00:00 (in 48721.393142 seconds)
[2020-09-01 07:27:58] [lib.cleanup ] [DEBUG   ] - Running cleanup
[2020-09-01 07:27:58] [lib.detector] [INFO    ] - Initializing detection thread
[2020-09-01 07:27:58] [lib.nvr     ] [INFO    ] - Initializing NVR thread
[2020-09-01 07:27:58] [lib.camera  ] [INFO    ] - Initializing ffmpeg RTSP pipe
[2020-09-01 07:27:58] [lib.camera  ] [DEBUG   ] - Getting stream characteristics for rtsp://xxx:<redacted>@192.168.1.13:554/cam/realmonitor?channel=1&subtype=0
[2020-09-01 07:28:01] [lib.camera  ] [INFO    ] - Resolution: 3840x2160 @ 15 FPS
[2020-09-01 07:28:01] [lib.camera  ] [INFO    ] - Starting capture process
[2020-09-01 07:28:01] [lib.camera  ] [DEBUG   ] - FFMPEG decoder command: ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts -stimeout 5000000 -use_wallclock_as_timestamps 1 -vsync 0 -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -rtsp_transport tcp -i rtsp://xxx:<redacted>@192.168.1.13:554/cam/realmonitor?channel=1&subtype=0 -f rawvideo -pix_fmt nv12 pipe:1
[2020-09-01 07:28:01] [lib.camera  ] [INFO    ] - Starting decoder thread
[2020-09-01 07:28:01] [lib.recorder] [INFO    ] - Initializing ffmpeg recorder
[2020-09-01 07:28:01] [lib.recorder] [DEBUG   ] - FFMPEG encoder command: ffmpeg -hide_banner -loglevel panic -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -f rawvideo -pix_fmt nv12 -s:v <width>x<height> -r <fps> -i pipe:0 -y -c:v h264_vaapi -vf format=nv12|vaapi,hwupload <file>
[2020-09-01 07:28:01] [lib.nvr     ] [INFO    ] - NVR thread initialized
[2020-09-01 07:28:01] [lib.nvr     ] [INFO    ] - Starting main loop
[2020-09-01 07:28:01] [lib.nvr     ] [DEBUG   ] - Waiting for first frame
[2020-09-01 07:28:01] [root        ] [INFO    ] - Initialization complete
[2020-09-01 07:28:03] [lib.nvr     ] [DEBUG   ] - First frame received
[ WARN:1] global /root/opencv-master/modules/dnn/src/dnn.cpp (1404) setUpNet DNN: OpenCL target is not supported with current OpenCL device (tested with GPUs only), switching to CPU.

Here we go. Looks like it’s working with VAAPI acceleration? Apologies, I was a bit confused; I thought I needed to somehow capture that VAAPI command in config.yaml.

Great! I am going to publish a new release soon with some improvements to logging and troubleshooting.

Just published a new release to GitHub/Docker Hub.
It includes a few breaking changes, so check out the release here.

Thank you for this! Is there a way to ignore parts of the frame?

At the moment no, but there is a feature request up on GitHub for this, so I will get to it soon!

Hi, nice to see many recent projects working on the use of ML for NVR & security around HA. I’m currently using Frigate (on a Coral USB) + Deepstack (on CPU) + HA to get mobile notifications with a snapshot & video of detected people on my cameras. I combine Frigate with Deepstack because TFLite models raise too many false positives. Coral is great for keeping CPU load negligible but is very limited in detector capability/tuning; using the CPU just to confirm the Coral detections keeps CPU usage low. The idea of combining multiple detectors (as a pipe) makes false positives disappear for me.

At some point you mention “multiple detectors” in the future of Viseron: would it be possible to run multiple detectors on the same event/image to confirm a detection? It would be a big selling point and I could jump on your wagon to simplify my setup.

PS: I can create a FR on GitHub if you want.

Sounds like a great idea! I can't see a problem with implementing that feature.
I would really appreciate it if you created a FR on GitHub; it's much easier to keep track of there.

1.2.0

Changes and new Features

  • You can now make use of a secrets.yaml to substitute values in your config file (see the sketch below these notes).
  • Added some benchmarks to the README

Fixes

  • Fixes issue with default motion detection config causing decoder to crash
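
A quick sketch of how the secrets substitution is meant to be used, assuming the Home Assistant style !secret syntax (check the README for the exact details): put a secrets.yaml next to config.yaml and reference its keys from the config. The key names below are hypothetical.

# secrets.yaml
frontyard_user: admin
frontyard_pass: supersecretpassword

# config.yaml
cameras:
  - name: FrontYardAmcrest
    host: 192.168.1.13
    port: 554
    path: /cam/realmonitor?channel=1&subtype=0
    username: !secret frontyard_user
    password: !secret frontyard_pass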

Not sure if you prefer here or GitHub, but everything is working well without hwaccel_args.
Using a 1080 Ti so I want to make that work :)

hwaccel_args: ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts -stimeout 5000000 -use_wallclock_as_timestamps 1 -vsync 0 -c:v h264_cuvid -rtsp_transport tcp -i rtsp://<username>:<password>@<host>:<port><path> -f rawvideo -pix_fmt nv12 pipe:1

With my own camera info it gives me:

Traceback (most recent call last):
  File "viseron.py", line 163, in <module>
    main()
  File "viseron.py", line 56, in main
    mqtt_queue=mqtt_queue,
  File "/src/viseron/lib/nvr.py", line 47, in __init__
    self.ffmpeg = FFMPEGCamera(self.config, frame_buffer)
  File "/src/viseron/lib/camera.py", line 52, in __init__
    self._logger.debug(f"FFMPEG decoder command: {' '.join(self.build_command())}")
  File "/src/viseron/lib/camera.py", line 80, in build_command
    + self.config.camera.output_args
TypeError: can only concatenate list (not "str") to list

I know VERY little about FFMPEG so I'm not sure how to troubleshoot this.

Also, a side note: is it possible to construct the FFMPEG command for the user?
I’m wondering if you could have:
FFMPEG_TYPE: GPU
and the specific command, including the formulated rtsp path, could be generated (since the user/pass/port/ip/path are all specified)?
You could also include a CUSTOM type for anyone who wants to add their own FFMPEG args.

You can pretty much piece it together using the information in the README under the Camera section.
If you pulled the viseron-cuda image it should default to using your NVIDIA GPU if it is supported and generate the correct ffmpeg command.

What happens if you remove your hwaccel_args from the config?
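
Judging by the traceback, hwaccel_args was read as one long string while the config expects a list of separate arguments, and the full command (rtsp URL, output options, pipe) should not go in there at all, since Viseron builds that part itself. If you do want to keep a manual override, a sketch of the list form, reusing only the decoder flag from the command you pasted (the cuda image may not need it at all, since it should pick suitable defaults):

hwaccel_args:
  # one flag or value per item; Viseron adds the rtsp input and output itself
  - -c:v
  - h264_cuvid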

Ah, OK, good to know. I grabbed the line from the README.
Everything works, but I wasn't sure (even with the cuda image) that it was using the GPU.

So with trigger: false I’m getting recordings when motion is detected.

However, with trigger: true I’m not getting any object detection (even with me waving frantically at the camera).

    width: 1920
    height: 1080
    motion_detection:
      interval: 1
      trigger: true
    object_detection:
      interval: 1
      labels:
        - label: person
          confidence: 0.9
        - label: dog
          confidence: 0.9
        - label: car
          confidence: 0.9

You probably need to tune the parameters a little.
If you turn on debug logging it will print how big the motion area is and you can pick a more suitable value.

motion_detection:
  area: 100 # higher value means less sensitive
  frames: 1 # how many frames in a row motion must persist before triggering

btw you can run nvidia-smi to see if the GPU is being used
