Try removing these lines and commenting out all of the encoder.
Something's happening, but not a lot…
MainThread werkzeug INFO : Listening on ('0.0.0.0', 8080)
MainThread root INFO : Starting Watsor on bf72466c-watsor with PID 6
voordeur MQTT ERROR : [Errno 99] Cannot assign requested address
voordeur FFmpegDecoder INFO : [h264 @ 0x558ab2f51d60] error while decoding MB 105 18, bytestream -17
Thread-2 werkzeug INFO : 192.168.1.112 - - [19/Aug/2020 21:25:25] "GET / HTTP/1.1" 200 -
voordeur FFmpegDecoder INFO : [AVHWDeviceContext @ 0x558ab335d7a0] No VA display found for device: /dev/dri/renderD128.
voordeur FFmpegDecoder INFO : Device creation failed: -22.
voordeur FFmpegDecoder INFO : Device setup failed for decoder on input stream #0:0 : Invalid argument
watchdog WatchDog WARNING : Thread voordeur (FFmpegDecoder) is not alive, restarting…
watchdog WatchDog WARNING : Process voordeur (MQTT) is not alive, restarting…
voordeur MQTT ERROR : [Errno 99] Cannot assign requested address
voordeur FFmpegDecoder INFO : [h264 @ 0x55b882117d60] error while decoding MB 99 16, bytestream -17
voordeur FFmpegDecoder INFO : [AVHWDeviceContext @ 0x55b882741020] No VA display found for device: /dev/dri/renderD128.
voordeur FFmpegDecoder INFO : Device creation failed: -22.
voordeur FFmpegDecoder INFO : Device setup failed for decoder on input stream #0:0 : Invalid argument
watchdog WatchDog WARNING : Thread voordeur (FFmpegDecoder) is not alive, restarting…
watchdog WatchDog WARNING : Process voordeur (MQTT) is not alive, restarting…
voordeur MQTT ERROR : [Errno 99] Cannot assign requested address
voordeur FFmpegDecoder INFO : [h264 @ 0x55b7cad75d60] error while decoding MB 118 84, bytestream -9
Thread-3 werkzeug INFO : 192.168.1.112 - - [19/Aug/2020 21:25:39] "GET /snapshot/voordeur/person HTTP/1.1" 200 -
voordeur FFmpegDecoder INFO : [h264 @ 0x55b7cad75d60] error while decoding MB 107 85, bytestream -5
voordeur FFmpegDecoder INFO : [h264 @ 0x55b7cad75d60] error while decoding MB 66 87, bytestream -9
Thread-4 werkzeug INFO : 192.168.1.112 - - [19/Aug/2020 21:25:42] "GET / HTTP/1.1" 200 -
voordeur FFmpegDecoder INFO : [AVHWDeviceContext @ 0x55b7cafe09e0] No VA display found for device: /dev/dri/renderD128.
voordeur FFmpegDecoder INFO : Device creation failed: -22.
voordeur FFmpegDecoder INFO : Device setup failed for decoder on input stream #0:0 : Invalid argument
Thread-5 werkzeug INFO : 192.168.1.112 - - [19/Aug/2020 21:25:44] "GET /snapshot/voordeur/car HTTP/1.1" 200 -
Thread-6 werkzeug INFO : 192.168.1.112 - - [19/Aug/2020 21:25:47] "GET /metrics HTTP/1.1" 200 -
watchdog WatchDog WARNING : Thread voordeur (FFmpegDecoder) is not alive, restarting…
watchdog WatchDog WARNING : Process voordeur (MQTT) is not alive, restarting…
voordeur MQTT ERROR : [Errno 99] Cannot assign requested address
voordeur FFmpegDecoder INFO : [AVHWDeviceContext @ 0x55f329a836e0] No VA display found for device: /dev/dri/renderD128.
voordeur FFmpegDecoder INFO : Device creation failed: -22.
voordeur FFmpegDecoder INFO : Device setup failed for decoder on input stream #0:0 : Invalid argument
Thread-7 werkzeug INFO : 192.168.1.112 - - [19/Aug/2020 21:25:53] "GET /health HTTP/1.1" 200 -
watchdog WatchDog WARNING : Thread voordeur (FFmpegDecoder) is not alive, restarting…
watchdog WatchDog WARNING : Process voordeur (MQTT) is not alive, restarting…
voordeur MQTT ERROR : [Errno 99] Cannot assign requested address
voordeur FFmpegDecoder INFO : [h264 @ 0x55ea9e652d60] left block unavailable for requested intra4x4 mode -1
voordeur FFmpegDecoder INFO : [h264 @ 0x55ea9e652d60] error while decoding MB 0 24, bytestream 72221
Please use code blocks if you want help. I'm not sure it's connecting to your MQTT broker?
Are there any recommended FFmpeg decoder/encoder settings for an RPi4 + Coral USB?
No specific FFmpeg settings, but some general tips. A Raspberry Pi is not a good choice for this kind of problem, even though it's popular.
I am experiencing high CPU usage… What can I do to get it back down to this level?
- Reduce the camera resolution; it is best to use the camera's secondary stream with the lowest possible width and height.
- Slow the camera down by limiting FPS, preferably in the camera settings; if that is not possible, then in an FFmpeg filter or via HA MQTT.
- Configure hardware acceleration for decoding H.264 video (a sketch follows this list). Note that a 64-bit OS does not support MMAL video decoding yet, but does support OMX and V4L2 encoding.
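For the FPS limit and the hardware decoder, a per-camera decoder override along the lines of the default config can be used. This is only a sketch: the fps=fps=5 value is an example, and the h264_v4l2m2m lines are an assumption for a Pi 4, so check what your FFmpeg build actually supports with ffmpeg -decoders and ffmpeg -hwaccels before uncommenting them:
ffmpeg:
  decoder:
    - -hide_banner
    - -loglevel
    - error
    - -nostdin
    # - -c:v            # Hardware H.264 decoding, if your FFmpeg build exposes
    # - h264_v4l2m2m    # the V4L2 M2M decoder (an assumption, verify first).
    - -i                # camera input field will follow '-i' ffmpeg argument automatically
    - -filter:v
    - fps=fps=5         # drop the frame rate here if the camera cannot do it itself
    - -f
    - rawvideo
    - -pix_fmt
    - rgb24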
I am unable to record movement.
The config volume is read-only by default. Consider MPEG-TS broadcasting rather than recording to a file, as other volumes may be difficult to add to the add-on.
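MPEG-TS broadcasting is roughly what the encoder block that ships commented out in the default config does. A sketch, using software x264 (trim the options you do not need):
ffmpeg:
  encoder:
    - -hide_banner
    - -loglevel
    - error
    - -f
    - rawvideo
    - -pix_fmt
    - rgb24
    - -i              # detection output stream will follow '-i' ffmpeg argument automatically
    - -an
    - -f
    - mpegts
    - -vcodec
    - libx264
    - -pix_fmt
    - yuv420p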
I am using a Pi4 with Coral USB but having serious performance issues that are causing my HA instance to crash and fail.
You need to include in the recorder only those measurements that are really needed, to avoid degrading Home Assistant's performance. The sensor values and detection details are transmitted over MQTT very often, and the recorder integration in Home Assistant saves everything by default, from sensor values to state changes. (A sketch follows.)
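A hedged sketch of one way to do that in Home Assistant's configuration.yaml, excluding the chatty Watsor sensors; the entity names are taken from later in this thread, substitute your own:
recorder:
  exclude:
    entities:
      - sensor.driveway_detection_buffer
      - sensor.driveway_detection_fps_in
      - sensor.driveway_detection_fps_out
      - sensor.watsor_metrics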
The string is correct. A different file with the wrong syntax is probably being mounted.
Your config doesn't specify the correct MQTT broker used by Home Assistant.
mqtt:
  host: localhost
Localhost here means the local host of the Docker container, not HA. Take the host from the HA main configuration file.
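A minimal sketch, assuming the broker (for example the Mosquitto add-on) is reachable at the Home Assistant host's LAN address used elsewhere in this thread; replace the IP and credentials with your own:
mqtt:
  host: 192.168.1.142   # the machine running the broker, not "localhost"
  port: 1883
  username: !secret mqtt_username
  password: !secret mqtt_password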
Follow the general tips: reduce camera resolution and limit FPS in camera settings.
Thank you for investigating this. I need to play around with OpenCV to get more data and patterns before I can say something essential.
What bothers me a bit is that FFmpeg has many command-line options and can potentially take an input and tweak a feed from any camera, while OpenCV is configured in code, resulting in the coupling of the program to specific hardware.
I ended up doing a lot of research into this and have been able to reduce the CPU load by a factor of 4 when using GPU decoding. I still don't know why FFmpeg is so CPU hungry and suspect it has to do with it pre-encoding the stream on its own on the CPU. OpenCV does not seem to use FFmpeg the same way: it uses the library, not the binary, and does only what is needed for video processing, decoding the video stream from H.264 or H.265 to a NumPy array in one command, all on the GPU. The Home Assistant camera component itself was adding a JPEG encoding followed by a decoding and a conversion from raw to NumPy, which was another 50% reduction once I eliminated it. I am now decoding a 3 MP stream at 20 fps with <3% CPU load vs. ~15% with Watsor/FFmpeg with GPU decoding. By the way, I have switched to an OpenCV framework running the YOLOv4 ResNet model directly in Home Assistant, which significantly improved speed, sensitivity, and accuracy and reduced CPU/GPU load over Watsor, and I am considering an improved PyTorch implementation.
Hi @asmirnou. Thank you for replying - you hadn't posted in a while and I was worried you had forgotten about us!
I am already using the substreams at 640x480 @ 15fps and using your automation to reduce to 3fps. TBH, I am using your exact demo setup with very minor tweaks.
The crashes only seem to happen when movement is detected. My notification automation for detected movement fires, but the HA frontend becomes unresponsive immediately after.
I also noticed you have a sensor.camera1_person_count in your recorder include file. Can you please post the YAML for this?
thanks!
I have now excluded these entities, as I think these are the ones that will be constantly writing to the recorder. Let's hope this works!
- sensor.driveway_detection_buffer
- sensor.driveway_detection_fps_in
- sensor.driveway_detection_fps_out
- sensor.driveway_sensor
- sensor.garden_detection_buffer
- sensor.garden_detection_fps_in
- sensor.garden_detection_fps_out
- sensor.garden_sensor
- sensor.watsor_metrics
Sorry, not sure what you mean by this…?
It actually was a reference to camera1_car_count from here. I renamed the sensor and forgot to change the reference.
The syntax of that line is correct. I suspected the quote at the beginning, but then double-checked and changed my post to avoid being misleading.
The YAML configuration file is supposed to be in the Home Assistant config folder. The default path to the file, /config/watsor/config.yaml, can be changed on the Configuration tab of the add-on. Make sure there is no other file referenced on the Configuration tab in which that line has a typo.
I'm sure that's fine, there is only one file. Before I changed it according to the previous post it did not even start. After that it did.
OK, something is alive…
This is my config file:
# Optional HTTP server configuration and authentication.
http:
  port: 8080
  # username: !env_var "USERNAME john"
  # password: !env_var "PASSWORD qwerty"

# Optional MQTT client configuration and authentication.
mqtt:
  host: 192.168.1.142
  port: 1883
  username: !secret mqtt_username
  password: !secret mqtt_password

# Default FFmpeg arguments for decoding video stream before detection and encoding back afterwards.
# Optional, can be overwritten per camera.
ffmpeg:
  decoder:
    - -hide_banner
    - -loglevel
    - error
    - -nostdin
    # - -hwaccel                # These options enable hardware acceleration of
    # - vaapi                   # video de/encoding. You need to check what methods
    # - -hwaccel_device         # (if any) are supported by running the command:
    # - /dev/dri/renderD128     #     ffmpeg -hwaccels
    # - -hwaccel_output_format  # Then refer to the documentation of the method
    # - yuv420p                 # to enable it in ffmpeg. Remove if not sure.
    - -fflags
    - nobuffer
    - -flags
    - low_delay
    - -fflags
    - +genpts+discardcorrupt
    - -i                        # camera input field will follow '-i' ffmpeg argument automatically
    - -f
    - rawvideo
    - -pix_fmt
    - rgb24
  # encoder:                    # Encoder is optional, remove the entire list to disable.
  #   - -hide_banner
  #   - -loglevel
  #   - error
  #   - -hwaccel
  #   - vaapi
  #   - -hwaccel_device
  #   - /dev/dri/renderD128
  #   - -hwaccel_output_format
  #   - yuv420p
  #   - -f
  #   - rawvideo
  #   - -pix_fmt
  #   - rgb24
  #   - -i                      # detection output stream will follow '-i' ffmpeg argument automatically
  #   - -an
  #   - -f
  #   - mpegts
  #   - -vcodec
  #   - libx264
  #   - -pix_fmt
  #   - yuv420p
  #   - -vf
# - "drawtext='text=%{localtime\\:%c}': x=w-tw-lh: y=h-2*lh: fontcolor=white: box=1: [email protected]"

# Detect the following labels of the object detection model.
# Optional, can be overwritten per camera.
detect:
  - person:
      area: 20            # Minimum area of the bounding box an object should have in
                          # order to be detected. Defaults to 10% of entire video resolution.
      confidence: 60      # Confidence threshold that a detection is what it's guessed to be,
                          # otherwise it's ruled out. 50% if not set.
  - car:
      zones: [1, 3, 5]    # Limit the zones on mask image, where detection is allowed.
                          # If not set or empty, all zones are allowed.
                          # Run `zones.py -m mask.png` to figure out a zone number.
  - truck:

# List of cameras and their configurations.
cameras:
  - front:                      # Camera name
      width: 640                #
      height: 360               # Video feed resolution in pixels
      input: "rtsp://admin:[email protected]:554/Streaming/Channels/102/"
      # mask: porch.png         # Optional mask. Must be the same size as your video feed.
      detect:                   # The values below override
        - person:               # detection defaults for just
        - car:                  # this camera
  # - backyard:                 # Camera name
  #     width: 640              #
  #     height: 480             # Video feed resolution in pixels
  #     input: !ENV "rtsp://${RTSP_USERNAME}:${RTSP_PASSWORD}@192.168.0.20:554/cam/realmonitor?channel=1&subtype=2"
  #     output: !ENV "${HOME}/Videos/backyard.mp4"
      ffmpeg:                   # These values override FFmpeg defaults
        decoder:                # for just this camera
          - -hide_banner
          - -loglevel
          - error
          - -nostdin
          # - -hwaccel
          # - vaapi
          # - -hwaccel_device
          # - /dev/dri/renderD128
          # - -hwaccel_output_format
          # - yuv420p
          - -i                  # camera input field will follow '-i' ffmpeg argument automatically
          - -filter:v
          - fps=fps=15
          - -f
          - rawvideo
          - -pix_fmt
          - rgb24
        # encoder:
        #   - -hide_banner
        #   - -loglevel
        #   - error
        #   - -hwaccel
        #   - vaapi
        #   - -hwaccel_device
        #   - /dev/dri/renderD128
        #   - -hwaccel_output_format
        #   - yuv420p
        #   - -f
        #   - rawvideo
        #   - -pix_fmt
        #   - rgb24
        #   - -i                # detection output stream will follow '-i' ffmpeg argument automatically
        #   - -an
        #   - -f
        #   - mp4
        #   - -vcodec
        #   - libx264
        #   - -pix_fmt
        #   - yuv420p
        #   - -vf
# - "drawtext='text=%{localtime\\:%c}': x=w-tw-lh: y=h-2*lh: fontcolor=white: box=1: [email protected]"
# - -y
I see this in the web browser interface:
Here I'm trying to see things:
I can see that MQTT is updating (fps is going 15.3, 15.4, etc.):
Clicking the motion JPEG gives me the camera stream…
BUT
The person snapshot and the car snapshot are black. And I am sure cars are passing…
Where and what should I do to get a people/car count or an alarm in Home Assistant? Any help/guidance please.
Thank you!
Try to reduce the minimum area used to detect an object:
detect:
  - person:
      area: 0
  - car:
      area: 0
The camera may be too far away from the road, and the people/cars take up less than the default 10% of the image area.
You missed the area key.
Sorry, my fault; I edited my post where there was a typo.
Also, your front camera settings override the defaults, so put the block with area in the camera settings (a sketch follows).
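A minimal sketch of that, with a placeholder RTSP URL; because the per-camera detect list replaces the defaults, the area override has to live here:
cameras:
  - front:
      width: 640
      height: 360
      input: "rtsp://user:pass@camera-ip:554/..."   # placeholder, keep your own URL
      detect:
        - person:
            area: 0
        - car:
            area: 0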