Viseron v3.0.0b1 - Self-hosted, local-only NVR and AI Computer Vision software

Hi @roflcoopter,

I use HassOS on an Intel NUC 7th gen. I also have a Coral EdgeTPU usb device.

How can I install Viseron in this scenario? Is there a HACS component available, or should I try Portainer?

Thanks a lot, can’t wait to try this.

Hi!

I am not too familiar with HassOS, but Viseron only runs as a Docker container; it's not a custom component that you can install via HACS.

So I guess Portainer is the way to go, unless you have access to the CLI, in which case that would also be an alternative.
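
If it helps, here is a minimal docker-compose sketch of the kind you could paste into a Portainer stack or run from the CLI. The image tag, paths, and port are assumptions on my part; check the Viseron README for the exact values for your setup:

version: "2.4"
services:
  viseron:
    image: roflcoopter/viseron:latest   # assumed image name/tag, verify against the README
    container_name: viseron
    volumes:
      - /config/viseron:/config         # Viseron configuration (adjust to your path)
      - /media/recordings:/recordings   # recording storage (adjust to your path)
    devices:
      - /dev/bus/usb:/dev/bus/usb       # pass the Coral EdgeTPU USB device through
    ports:
      - "8888:8888"                     # assumed web/MJPEG port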

Thanks for the answer. I'll try Portainer or the CLI.

Another quick question: does Viseron also provide an RTMP restream of the camera that other integrations can connect to, to avoid overloading the camera with too many connections?

It does not do restreaming; however, an MJPEG stream is served.
Restreaming could definitely be added, though. If you don't mind, I would appreciate it if you could add a feature request on GitHub explaining what you need and your use cases.
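
In the meantime, if the goal is just to get the picture into Home Assistant without opening another connection to the camera, you could point HA's mjpeg camera platform at the stream Viseron serves. A sketch; the host, port, and stream path are assumptions to verify against the Viseron docs:

camera:
  - platform: mjpeg
    name: front_door_viseron
    mjpeg_url: http://<viseron host>:8888/front_door/mjpeg-stream  # assumed URL format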

I will, thanks.

I tried it out and uninstalled it, and I definitely removed everything. The MQTT integration of HA still discovers it, and it turns out HA itself is sending messages to MQTT to discover Viseron.

So I stopped HA, removed everything out of the core again, and restarted.

MQTT again finds Viseron (which isn't even there) and adds entities for it.

Any idea how to break this circle?

The messages are retained in your MQTT broker, so you need to clear those. Otherwise Home Assistant will just read them again on next startup.
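
One way to clear them is to publish an empty retained payload on each leftover discovery topic, which deletes the retained message from the broker. A sketch using Home Assistant's mqtt.publish service; the topic shown is just an example, substitute the topics you actually see:

service: mqtt.publish
data:
  topic: homeassistant/binary_sensor/viseron/front_door_object_detected/config  # example topic
  payload: ""
  retain: true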

I’m trying to get this set up and I’m having issues getting the stream to come through.

So first I was getting:

viseron  | [2021-05-09 18:42:01] [viseron.camera          ] [ERROR   ] - b''
viseron  | [2021-05-09 18:42:01] [viseron.camera.front_door] [ERROR   ] - Unable to decode frame. FFmpeg pipe seems broken

So I turned on the FFmpeg debug logging and noticed my audio codec wasn't supported:

viseron  | [segment @ 0x558ed828e600] Opening '/segments/Front door/20210509184350.mp4' for writing
viseron  | [file @ 0x558ed82cbcc0] Setting default whitelist 'file,crypto,data'
viseron  | [mp4 @ 0x558ed82c7f00] Could not find tag for codec pcm_alaw in stream #1, codec not currently supported in container
viseron  | [AVIOContext @ 0x558ed82cbdc0] Statistics: 0 seeks, 0 writeouts
viseron  | Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
viseron  | Stream mapping:
viseron  |   Stream #0:0 -> #0:0 (copy)
viseron  |   Stream #0:1 -> #0:1 (copy)
viseron  |   Stream #0:0 -> #1:0 (h264 (native) -> rawvideo (native))
viseron  |     Last message repeated 1 times
viseron  | [AVIOContext @ 0x558ed8288d00] Statistics: 0 seeks, 0 writeouts
viseron  | [2021-05-09 18:43:50] [viseron.nvr.front_door  ] [DEBUG   ] - First frame received
viseron  | [2021-05-09 18:43:50] [viseron.camera          ] [ERROR   ] - b''
viseron  | [2021-05-09 18:43:50] [viseron.camera.front_door] [ERROR   ] - Unable to decode frame. FFmpeg pipe seems broken

Is there anything I can do to make it work?

This error means that FFmpeg can't mux pcm_alaw audio into an MP4 container.
You could try mkv instead of mp4 by setting the extension under recorder, like so:

recorder:
  extension: mkv

Or you could set an audio_codec; note that this will cause FFmpeg to re-encode the audio and increase CPU usage a little:

recorder:
  audio_codec: aac

Viseron is Awesome!!!

I am testing this huge list of options and functionalities and I am very impressed.

I also have a question regarding the CUDA container version.
My setup is the following: Ubuntu 18.04 LTS in a VM (Proxmox), an Nvidia GT 730 GPU, and an RPi3-based camera with an H264 RTSP stream.
Nvidia drivers and runtime installed.
During the boot process of the CUDA container, I can see in my logs that:

OpenCL is available!
VA-API cannot be used
CUDA cannot be used

So, I cannot use CUDA.
I am using the GT 730 in my HA with another image_processing integration (with CUDA support) and it works (nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04).
Do you think the problem is due to my old Nvidia GPU?

Viseron uses the cudnn8-based images, so maybe that's why.
What do you get if you run nvidia-smi on the host?

This is what I get:
[screenshot of nvidia-smi output]

Can you get your image_processing integration to work if you use nvidia/cuda:11.1-cudnn8-devel-ubuntu20.04 instead? That is the base image that Viseron is using.

If it does work for you, I might be able to get Viseron to work.
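
As a quick sanity check that this base image can see the GPU at all, you could run nvidia-smi inside it. A minimal compose sketch, assuming the nvidia container runtime is installed on the host as described:

version: "2.4"
services:
  cuda-test:
    image: nvidia/cuda:11.1-cudnn8-devel-ubuntu20.04
    runtime: nvidia   # requires nvidia-container-runtime on the host
    command: nvidia-smi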

I had a problem with an older card when I was trying to use Shinobi's GPU-based object detection - the compute version wasn't high enough to be used. One way I could tell was that nothing showed up in nvidia-smi. My card had a compute version of 3.5 (your card is a 3.0). When I got one slightly newer (nothing fancy), it worked.

Does Viseron have any requirements for the compute version?

I agree with @jarekmor - Viseron is awesome!

How do I create an alert that will send the image when a person is detected?

I'm using the Viseron Docker image, but I can see all the topics in MQTT.

Thank you!

It's a bit strange; the table here suggests that your GPU cannot run CUDA 11.3, since that requires Compute Capability 3.5 or later.

@gniknalu

Does Viseron have any requirements for the compute version?

It does indirectly, since I am using CUDA 11.1. However, it's a piece of cake to support more versions of CUDA, so if you give me a few days I can set up one that ships with that instead.

I am using AppDaemon to trigger when Viseron starts recording.
To do that, I trigger on sensor.<client ID>_<camera name>_status = recording

You can also trigger on the binary sensor in Home Assistant named binary_sensor.<client ID>_<camera name>_object_detected_person
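
To also send the image with the alert, you could combine that binary sensor with camera.snapshot and a notifier. A sketch of an automation; the camera entity and notify service are hypothetical and need to be adapted to your setup:

automation:
  - alias: "Person detected at the front door"
    trigger:
      - platform: state
        entity_id: binary_sensor.<client ID>_front_door_object_detected_person
        to: "on"
    action:
      - service: camera.snapshot
        data:
          entity_id: camera.front_door          # hypothetical camera entity
          filename: /config/www/front_door.jpg
      - service: notify.mobile_app_my_phone     # hypothetical notify service
        data:
          message: "Person detected at the front door"
          data:
            image: /local/front_door.jpg        # /config/www is served at /local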

I was trying a few ways to get my other image processing integration (which is DOODS) to use nvidia/cuda:11.1-cudnn8-devel-ubuntu20.04, but I am not sure I am doing it correctly, given my limited experience with Docker.
I could not build a new container (errors), nor compile it (also errors).
I did change the ENVs in the container to deploy a new one using 11.1-cudnn8-devel-ubuntu20.04, and it works, but I am not sure if that is the proper way.

What ENV vars did you change?

From the container logs I can see:
2021-06-02 02:23:56.578491: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties:
pciBusID: 0000:06:10.0 name: NVIDIA GeForce GT 730 computeCapability: 3.5
coreClock: 0.9015GHz coreCount: 2 deviceMemorySize: 1.96GiB deviceMemoryBandwidth: 37.33GiB/s