Object detection for video surveillance

Still getting error messages:

Watsor    | MainThread       werkzeug                 INFO    : Listening on ('0.0.0.0', 8080)
Watsor    | MainThread       root                     INFO    : Starting Watsor on 31387a180f95 with PID 13
Watsor    | outdoorcam       FFmpegDecoder            INFO    : [h264 @ 0x55ae8828f0] corrupted macroblock 43 15 (total_coeff=-1)
Watsor    | outdoorcam       FFmpegDecoder            INFO    : [h264 @ 0x55ae8828f0] error while decoding MB 43 15
Watsor    | outdoorcam       FFmpegDecoder            INFO    : [h264 @ 0x55ae8828f0] corrupted macroblock 70 0 (total_coeff=-1)
Watsor    | outdoorcam       FFmpegDecoder            INFO    : [h264 @ 0x55ae8828f0] error while decoding MB 70 0
Watsor    | outdoorcam       DetectionSieve           ERROR   : Frame 0 missed
Watsor    | outdoorcam       FrameBuffer              WARNING : Stale frame 0 dated 39 seconds ago is in State.PUBLISH, resetting...

Do I need to install the TensorFlow Lite library as well, or is this part of the Docker container already?

Try unplugging the Coral, plugging it back in and waiting a couple of seconds before starting the Docker container. Restarting the Raspberry Pi may also help.

The TensorFlow Lite library is part of the Docker image. It makes sense to install it outside of Docker and run a classification test, just to identify whether the problem lies in the USB connection of the device, the 64-bit Raspberry Pi OS or the Docker image.

Got it back up & running.

I added a 2nd camera at 2 fps and turned up the fps for the 1st one to 10, but this seems to translate directly into CPU usage on the Pi4 itself.

Here’s the output from the metrics link:

{
    "cameras": [
        {
            "name": "outdoorcam",
            "fps": {
                "decoder": 10.3,
                "sieve": 10.3,
                "visual_effects": 10.1,
                "snapshot": 10.3,
                "mqtt": 10.3
            },
            "buffer_in": 10,
            "buffer_out": 0
        },
        {
            "name": "yardcam",
            "fps": {
                "decoder": 2.1,
                "sieve": 2.1,
                "visual_effects": 0.0,
                "snapshot": 2.1,
                "mqtt": 2.1
            },
            "buffer_in": 10,
            "buffer_out": 0
        }
    ],
    "detectors": [
        {
            "name": "Coral",
            "fps": 12.2,
            "fps_max": 58,
            "inference_time": 17.1
        }
    ]
}

I also noticed that when I increase the fps further, the Pi4 (2GB) that runs the ‘Supervised’ HA install gets very sluggish.
It seems like it gets overwhelmed by the MQTT messages from Watsor, with 3 messages per second for e.g. the ‘car’ sensor that constantly keeps updating for the 3 cars it sees.

Got it back up & running.

What was the problem and how did you solve it?

I also noticed that when I increase the fps further, the Pi4 (2GB) that runs the ‘Supervised’ HA install gets very sluggish.
It seems like it gets overwhelmed by the MQTT messages from Watsor, with 3 messages per second for e.g. the ‘car’ sensor that constantly keeps updating for the 3 cars it sees.

The slowness is not related to MQTT. The messages being sent, even if there are tens of them per second, are small, several hundred bytes at most. They hardly cause performance problems.

Most likely the Home Assistant UI causes the sluggishness. Watching the live stream of a Motion JPEG camera (and other camera integrations) in Lovelace uses a lot of resources on the machine. You may notice that closing the browser window results in a drop in CPU load. I recommend opening the live stream UI, or the direct video stream from Watsor's HTTP server, on a machine other than the one where the Home Assistant backend is running.

A large video resolution results in more CPU load, especially since hardware acceleration is not enabled in FFmpeg. If it cannot be sorted out in the camera settings, reduce the size of the video using FFmpeg scaling.
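
For example, a minimal sketch of the scale filter among the decoder arguments (the height of 480 pixels is just an illustration; -1 keeps the aspect ratio):

    - -filter:v
    -  scale=-1:480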

This! It is actually a combination of strain on the CPU and how the snapshots are being taken. If the stream is not on, then the lag and tardiness are due to the negotiation and establishment of the stream to get the one frame. If the stream is already established in a thread, then it is likely loading the CPU.

Honestly, I’m not sure.
I rebooted, unplugged and re-plugged the Coral device multiple times and noticed that once I added a 2nd camera, it was showing an image on the interface. So I rebooted the first camera and it worked there as well.

I have tried to implement the scaling, but yet again I am out of my depth; I added it to my decoder config after the -i option:

    - -i                          # camera input field will follow '-i' ffmpeg argument automatically
    - -filter:v
    -  fps=fps=15
    - -f
    -  rawvideo
    - -pix_fmt
    -  rgb24
# Test
    - -vf
    -  scale=-1:480

and changed the resolution for the camera to 640x480 as well. But it results in a single snapshot being shown at /video/mjpeg/outdoorcam, no moving picture; same result when I look at it in VLC.

It recognizes objects at startup, but will then freeze while the sensors are still active for a little longer:

{
    "cameras": [
        {
            "name": "outdoorcam",
            "fps": {
                "decoder": 73.5,
                "sieve": 23.4,
                "visual_effects": 0.0,
                "snapshot": 23.1,
                "mqtt": 23.4
            },
            "buffer_in": 20,
            "buffer_out": 0
        }
    ],
    "detectors": [
        {
            "name": "Coral",
            "fps": 29.3,
            "fps_max": 58,
            "inference_time": 17.3
        }
    ]
}

Sorry, I tried to upload a screenshot from HA but there seem to be issues pasting images, so here’s the info from MQTT Explorer:

{"fps_in": 177.7, "fps_out": 28.5, "buffer": 20}

As you can see, the frame rate, even though the limit is set to 15 fps, goes through the roof and eventually knocks my Pi4 that runs HA out cold.

I’m pretty sure it’s not the UI, because the Pi4 croaks eventually even when the UI is not open or in use, just as happened to me in this case as well:

the frame rate - even though the limit is set to 15fps - goes through the roof

When you defined the second video filter, the first one stopped being honored. Filters in FFmpeg are separated by commas. Instead, define both the frame rate and scale filters in one line as follows:

    - -filter:v
    -  fps=fps=15,scale=-1:480

The slowness is not related to MQTT.

I think I know what’s going on. The recorder integration in Home Assistant constantly saves data. By default it stores everything, from sensors to state changes. The data is saved on the SD card of the Raspberry Pi, which is a slow medium, degrading the system’s reaction time. I can assure you that this happens on a PC too with a file-based database engine such as SQLite.

Fortunately, Home Assistant allows you to customize what gets written using the include and exclude parameters of the recorder. It turns out to be easier to include what’s needed rather than to exclude the unnecessary, because too much is saved by default.

In my demo project I include only a few sensors that are rendered in History. They do not need to be recorded for Watsor to work, only if one wants to observe their measurements.

recorder:
  include:
    entities:
      - alarm_control_panel.home_alarm
      - binary_sensor.camera1_person_detected
      - binary_sensor.camera1_car_detected
      - sensor.detector1_fps
      - sensor.detector1_fps_max
      - sensor.detector1_inference_time
      - sensor.camera1_person_count

Customize the recorder and your Raspberry Pi will come to life.

Great, this worked :+1:

This reduces the CPU load quite a bit.

Will have to experiment with the recorder settings next.
I run MariaDB on my main HA device, but it’s probably the same issue that you describe with SQLite.
If this fixes it, that would be great for Watsor, but I’d need to find another solution for my Energy Meter - simply excluding the sensors from being recorded would defeat the purpose of retrieving the info in the first place.
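
Something like the following is what I have in mind, a sketch with made-up entity IDs that keeps the energy readings recorded alongside the few Watsor sensors worth a history graph:

recorder:
  include:
    entities:
      - sensor.energy_meter_power                  # hypothetical energy sensor, still recorded
      - sensor.energy_meter_total                  # hypothetical energy sensor, still recorded
      - binary_sensor.outdoorcam_person_detected   # the chatty Watsor sensors stay off this list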

I assume that working with detection zones will have a similar effect on the CPU load to what scaling down the images did.
Just need to learn how to work with GIMP now :wink:
I just downloaded and installed it but haven’t worked with it yet.

If this has a similar effect on the CPU load, Watsor will be the solution I was looking for - happy to run it on a Pi4 all by itself but I don’t want to buy a high-powered machine just so that can run it.

I’m a little disappointed by the Coral USB, though, at the moment.
It’s probably cutting the Pi usage in half but I expected it to do more than that.

Thanks again for your help @asmirnou - I’ll report back how it goes.

Quick update:

Set up all three cameras with zones that cover between 25% and 40% of the area on each one of them; I might be able to reduce them further after some more testing.
This brought the idle CPU load down to about 60% - see details below. This seems okay for me at the moment.

One observation I’ve made is that the video output stream seems to die after a few hours, which coincides with the object detection stopping. The fps values in my HA dashboard still change, but that’s all the action I can see after that until I restart the Watsor container.

{
    "cameras": [
        {
            "name": "outdoorcam",
            "fps": {
                "decoder": 3.1,
                "sieve": 3.1,
                "visual_effects": 0.0,
                "snapshot": 3.1,
                "mqtt": 3.1
            },
            "buffer_in": 0,
            "buffer_out": 0
        },
        {
            "name": "patiocam",
            "fps": {
                "decoder": 3.1,
                "sieve": 3.0,
                "visual_effects": 0.0,
                "snapshot": 3.0,
                "mqtt": 3.0
            },
            "buffer_in": 10,
            "buffer_out": 0
        },
        {
            "name": "yardcam",
            "fps": {
                "decoder": 3.1,
                "sieve": 3.0,
                "visual_effects": 0.0,
                "snapshot": 3.0,
                "mqtt": 3.0
            },
            "buffer_in": 0,
            "buffer_out": 0
        }
    ],
    "detectors": [
        {
            "name": "Coral",
            "fps": 9.0,
            "fps_max": 61,
            "inference_time": 16.5
        }
    ]
}

The metrics look OK. A zero in the decoder or the Coral detector would indicate a process that died, but 3.1 FPS means the processes are running. visual_effects is zero simply because the Motion JPEG HTTP stream wasn’t requested at that moment.
What happens when you open the Motion JPEG stream from Watsor’s home page after noticing the detection has stopped?
Could you take a look at the container logs or, better, grab the log files at /var/log/watsor and share them with me? If nothing suspicious is in the logs, it probably means the problem is somewhere on the HA side.
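
To make the log files easy to grab, one option is to bind-mount the log directory to the host. A sketch assuming a Docker Compose deployment (service name and host path are placeholders):

services:
  watsor:
    volumes:
      - ./watsor-logs:/var/log/watsor   # container logs appear on the host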

Amazing work, thanks for this!

Is support for CUDA 11/cuDNN 8 being considered?

On my Jetson it runs fine with these libraries:

   CUDA: 10.2.89
   cuDNN: 8.0.0.145
   TensorRT: 7.1.0.16

I haven’t checked CUDA 11, but if PyCUDA and TensorRT are installed, there should be no problem.

Just a bit concerned that CUDA 11 has quite a few changes. Will test it…

Sorry, the metrics I shared were from after I restarted the container; here is what’s shown when there’s no output anymore:

{
    "cameras": [
        {
            "name": "outdoorcam",
            "fps": {
                "decoder": 3.1,
                "sieve": 0.0,
                "visual_effects": 0.0,
                "snapshot": 0.0,
                "mqtt": 0.0
            },
            "buffer_in": 0,
            "buffer_out": 0
        },
        {
            "name": "yardcam",
            "fps": {
                "decoder": 10.2,
                "sieve": 0.0,
                "visual_effects": 0.0,
                "snapshot": 0.0,
                "mqtt": 0.0
            },
            "buffer_in": 0,
            "buffer_out": 0
        }
    ],
    "detectors": [
        {
            "name": "Coral",
            "fps": 0.0,
            "fps_max": 0.0,
            "inference_time": 0.0
        }
    ]
}

I’m also having huge issues when I want to watch the MJPEG streams from the Watsor home page.
On my Win10 PC the stream for any camera stops after a few seconds; on my Android tablet they run a little longer but eventually stop as well.
If I use VLC, on the other hand, the streams keep running on both devices without any issues.

Just for the record, here is my ffmpeg decoder config again:

ffmpeg:
  decoder:
    - -hide_banner
    - -loglevel
    -  error
    - -nostdin
    - -fflags
    -  nobuffer
    - -flags
    -  low_delay
    - -fflags
    -  +genpts+discardcorrupt
    - -i                          # camera input field will follow '-i' ffmpeg argument automatically
    - -filter:v
    -  fps=fps=10,scale=-1:480
    - -f
    -  rawvideo
    - -pix_fmt
    -  rgb24

Now that I figured out how to copy files off the container, I will send you logs when the recognition freezes again.

In case someone is going to deploy Watsor on Jetson Nano, here is a short guide.

Does anyone run a surveillance system on Kubernetes? Here is a Helm chart to deploy Watsor in a cluster.

Hi,
First of all, thank you for a great project that works. However, I’m having a small problem with my camera configuration. This is my log output:

unifi            FFmpegDecoder            INFO    : [h264 @ 0x55a355f045c0] error while decoding MB 41 37, bytestream -40
unifi            FFmpegDecoder            INFO    : [h264 @ 0x55a355f045c0] error while decoding MB 82 38, bytestream -12
unifi            FFmpegDecoder            INFO    : [h264 @ 0x55a355f045c0] error while decoding MB 18 37, bytestream -12
watchdog         WatchDog                 WARNING : Thread unifi (FFmpegEncoder) is not alive, restarting...
unifi            FFmpegDecoder            INFO    : [h264 @ 0x55a355f045c0] error while decoding MB 70 34, bytestream -10
unifi            FFmpegEncoder            INFO    : /usr/share/watsor/Videos/unifi.mp4: No such file or directory
unifi            FFmpegDecoder            INFO    : [h264 @ 0x55a355f045c0] error while decoding MB 75 39, bytestream -22

This is my decoder configuration:

ffmpeg:
  decoder:
    - -hide_banner
    - -loglevel
    -  error
    - -nostdin
    - -fflags
    -  nobuffer
    - -flags
    -  low_delay
    - -fflags
    -  +genpts+discardcorrupt
    - -i                          # camera input field will follow '-i' ffmpeg argument automatically
    - -filter:v
    -  fps=fps=25,scale=-1:360
    - -f
    -  rawvideo
    - -pix_fmt
    -  rgb24

I am running UniFi cameras; everything is on a virtual machine with GPU support.
This is the output of the metrics:

{
    "cameras": [
        {
            "name": "unifi",
            "fps": {
                "decoder": 25.3,
                "sieve": 24.3,
                "visual_effects": 23.8,
                "snapshot": 23.8,
                "encoder": 0.0,
                "mqtt": 24.3
            },
            "buffer_in": 20,
            "buffer_out": 10
        }
    ],
    "detectors": [
        {
            "name": "Tesla K80",
            "fps": 24.3,
            "fps_max": 83,
            "inference_time": 12.1
        }
    ]
}

Does anyone have any ideas on where these errors are coming from and how to fix them? Thank you in advance.
Andy.

Andy, thank you for being interested in this project.

/usr/share/watsor/Videos/unifi.mp4: No such file or directory

This error is easy to fix, as it is just about a missing directory. It comes from the FFmpeg encoder sub-process and results from the camera's output option being set to /Videos/unifi.mp4, assumed to be a path relative to the config file. When starting FFmpeg sub-processes, the application currently doesn't specify the working directory, so FFmpeg treats the paths as relative not to the config directory or Watsor's working directory, but to the home directory of the user, which for the Docker container is /usr/share/watsor/. That's why it tries to create the output file in the wrong place. As a workaround, set the input and output options as absolute paths, including environment variables if necessary. I'll fix the setting of the working directory of FFmpeg in the next release.
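
For instance, a sketch of the output option with an absolute path (the camera name and target directory are illustrative; make sure the directory exists and is mounted into the container):

cameras:
  - unifi:
      output: /mnt/recordings/unifi.mp4   # absolute path on a volume mounted into the container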

BTW, the encoder in the config is optional. Remove the entire ffmpeg.encoder section and the output option to disable this feature.

error while decoding MB 41 37, bytestream -40

This one happens in the pipeline camera → network → FFmpeg and is most probably due to the fact that FFmpeg, being busy decoding the video, doesn't have the time or computing resources to process all incoming data and has to drop packets.

The enabled scale filter indicates that the camera resolution is large, leading to a higher bitrate and forcing FFmpeg to decode longer. The scale filter should be a last resort, used only when the resolution cannot be changed in the camera settings. Most cameras have main and sub streams. The preferable way is to feed FFmpeg with the low-resolution sub stream, while using the main stream for watching.
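
For example, a hypothetical input pointing at a sub stream (the RTSP path, credentials and address are camera-specific; consult the camera's documentation):

cameras:
  - unifi:
      input: rtsp://user:password@192.168.1.20:554/sub   # low-resolution sub stream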

Another optimization is to enable hardware-accelerated video decoding in FFmpeg:

    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p

Which hardware acceleration device to use depends on the machine, and it has to be added to the container. Decoding is about 40% faster with hardware acceleration, so dropping packets won't be necessary.
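
As a sketch, assuming a Docker Compose deployment (the image name is an assumption; adjust to your setup), the render node can be passed in like this:

services:
  watsor:
    image: smirnou/watsor:latest    # assumed image name
    devices:
      - /dev/dri/renderD128         # VAAPI render node on the host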

If the camera outputs more than 25 FPS (otherwise why is the FPS filter applied?), you can add the -vsync drop option in FFmpeg to drop excessive frames. The amount of transmitted data can also be reduced by limiting the FPS in the camera settings.
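
For instance, added among the decoder arguments (a sketch; -vsync is an output option, so it has to follow -i):

    - -vsync
    -  drop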

As you run on a virtual machine, the CPU there may be throttled or limited to just a few cores. If the resolution and/or FPS of the camera cannot be reduced and hardware acceleration cannot be enabled, make sure the CPU resources are sufficient by tweaking the virtual machine.

Hi asmirnou,
Thanks for your reply and suggestions. I have enabled hardware acceleration, but this results in an error and no GPU acceleration or decoding takes place:

watchdog         WatchDog                 WARNING : Thread def (FFmpegDecoder) is not alive, restarting...
watchdog         WatchDog                 WARNING : Thread def1 (FFmpegDecoder) is not alive, restarting...
unifi            FFmpegDecoder            INFO    : [h264 @ 0x557d9cd58940] error while decoding MB 60 35, bytestream -22
Thread-36        werkzeug                 INFO    : 192.168.1.125 - - [20/Jul/2020 19:33:59] "GET /metrics HTTP/1.1" 200 -
unifi            FFmpegDecoder            INFO    : [AVHWDeviceContext @ 0x557d9cd9ee40] libva: va_getDriverName() failed with unknown libva error,driver_name=(null)
unifi            FFmpegDecoder            INFO    : [AVHWDeviceContext @ 0x557d9cd9ee40] Failed to initialise VAAPI connection: -1 (unknown libva error).
unifi            FFmpegDecoder            INFO    : Device creation failed: -5.
unifi            FFmpegDecoder            INFO    : Device setup failed for decoder on input stream #0:1 : Input/output error
def              FFmpegDecoder            INFO    : [AVHWDeviceContext @ 0x556908349a20] libva: va_getDriverName() failed with unknown libva error,driver_name=(null)
def              FFmpegDecoder            INFO    : [AVHWDeviceContext @ 0x556908349a20] Failed to initialise VAAPI connection: -1 (unknown libva error).
def              FFmpegDecoder            INFO    : Device creation failed: -5.
def              FFmpegDecoder            INFO    : Device setup failed for decoder on input stream #0:0 : Input/output error
def1             FFmpegDecoder            INFO    : [AVHWDeviceContext @ 0x56132f6e8180] libva: va_getDriverName() failed with unknown libva error,driver_name=(null)
def1             FFmpegDecoder            INFO    : [AVHWDeviceContext @ 0x56132f6e8180] Failed to initialise VAAPI connection: -1 (unknown libva error).
def1             FFmpegDecoder            INFO    : Device creation failed: -5.

This is the only config that gives me anything at all, but it still throws errors and there is no hardware acceleration:

ffmpeg:
  decoder:
    - -hide_banner
    - -loglevel
    -  error
    - -nostdin
    - -fflags
    -  nobuffer
    - -flags
    -  low_delay
    - -fflags
    -  +genpts+discardcorrupt
    - -i                          # camera input field will follow '-i' ffmpeg argument automatically
    - -filter:v
    -  fps=fps=10,scale=-1:360
    - -f
    -  rawvideo
    - -pix_fmt
    -  rgb24

Thanks, Andy.