My cameras run at the same FPS regardless of whether it's day or night. My steps above only grab a single image, so it's basically fetching only when Blue Iris alerts on motion.
I'm fairly new to Home Assistant. I installed it to complement my Blue Iris CCTV server and to set up the TensorFlow detection described here, and it works great.
The problem I have is that every time it updates on motion I get a pop sound from the PC speaker. This is quite annoying as I use the speaker for other audio.
I have looked around and tried various things, but I'm not even sure where to start with this. I'm certain it's Home Assistant that's causing it.
My setup is Windows 10 with Blue Iris; the same machine runs Ubuntu under WSL, with Docker hosting a Home Assistant container and a Node-RED container.
Everything works fine apart from this pop sound. When I add a camera to the dashboard I get the same pop every time it refreshes (approximately every 10 seconds), which is what makes me think HA is the source of the sound.
I don't have a browser or any other program left open, and the machine's audio output goes to a remote hard-wired speaker used for Blue Iris itself, so I'd like HA to be silent.
I have searched for a .wav or .mp3 in the HA container to see if I could rename it, but nothing shows up, so it might even be something that gets triggered in the host Ubuntu system?
If anyone can point me in the right direction then I would be very grateful.
It's Blue Iris that makes the noise.
Have you tried running Blue Iris as a service? Not sure if it will help, to be honest; I run the server headless and separate from everything else.
In Blue Iris: Settings → Users → Your Account → Edit → Uncheck “On login play sound”.
Thanks guys, that's exactly it TaperCrimp, and now it's sorted. I thought I knew BI very well and just assumed it couldn't be that, as I have been using it like this for six months or more and the sound only started after installing things as per this guide. Also, thanks for this write-up; I had to learn a lot, but it's working amazingly well!
Cheers for the help
No problem, glad to assist. At some point I’ll need to update the screenshots since they’ve changed in the latest version of node-red-contrib-home-assistant-websocket. Plus, you can now do regex matches against sensors so I could probably consolidate those 4 cameras down to a single node.
Thanks again for all of the feedback on this. A large part of writing this was around not exposing my Home Assistant deployment to the Internet. However, with the release of remote access via the HA cloud, I’ve been playing around with the Home Assistant iOS application. I’m about 90% sure I can get the same functionality minus the funky nuances of hard links, experimental PushOver notifications, docker instances, etc. At some point soon I’ll be putting a new post up with the details on this. The added advantage is that it can link directly to the camera instead of a snapshot. Stay tuned!
Fantastic, I just recently found this and made a few changes. I have had BI for a while now but had no idea it could do so much (with MQTT). And I now have TensorFlow working on a Pi (it's the only thing the Pi is running, and it sits at 60% memory use). Love this and look forward to more updates.
Update: I've now got a post up on using this with iOS notifications. It's also much more dynamic and does not require a separate set of flows for each camera. Hope this helps someone!
Another update: I’ve been redoing a bunch of the automations and managed to reduce all of the alerting nodes down to 6 total. It’s also just one flow that covers every single camera with some regex and a fancy change node. Much cleaner, and the alerting removes the need for any hard links and does all notification through the HA docker instance. I’ll update the instructions soon. Best of all, it’s generic enough that you can swap in whatever notification mechanism that you’d like.
That’s great work TaperCrimp, I had a quick look but will have to spend some time soon to understand your new method. Mine is quite messy with 10 cameras but at the same time it provides the flexibility to have different alert sounds for each camera or groups of cameras.
Also, I feed a negative result back in one more time to double-check that nothing is being picked up by TensorFlow. I'd like to make it slightly more clever. I know you can limit what it detects by adding 'person' to the config, but I just search on everything and it doesn't seem any slower. Later in the flow I filter by the object detected: if it matches 'person' I alert with sound, otherwise I alert without sound. It's amazing the things it picks up, but nothing seems as accurate as person detection anyway.
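For reference, the 'person' limit mentioned above maps to the `categories` option of Home Assistant's TensorFlow image-processing platform. A minimal sketch; the entity ID and model path are placeholders for your own setup:

```yaml
# Hypothetical example: restrict detection to people only.
# camera.driveway and the graph path are placeholders.
image_processing:
  - platform: tensorflow
    scan_interval: 10000  # effectively disable polling; trigger scans from an automation instead
    source:
      - entity_id: camera.driveway
    model:
      graph: /config/tensorflow/frozen_inference_graph.pb
      categories:
        - person
```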
What I would like to do is be able to limit areas in scenes that vary for different camera views, something like in this config https://github.com/arsaboo/homeassistant-config/blob/master/configuration.yaml#L415
However, I don't seem to be able to do a config with multiple `platform: tensorflow` entries!
Is this possible? Does anyone know if I can use conditions (i.e. which camera triggered) to decide, in the Home Assistant config, what it should search for and which area to examine?
Cheers
Honestly, I'm not sure on this. All of the motion detection, zones, hot spots, etc. take place on the Blue Iris side; I just use TensorFlow to detect whether there are any people in the images. I never got into the super advanced options. On a positive note, you could still use something like the updated Node-RED flow to inject different sounds based on the camera: it'd be a Change node with a modification to the template, or at least I'd assume that's what it would be.
@TaperCrimp thanks for the write-up. I was wondering if there is a way to combine this with the new stream platform (https://www.home-assistant.io/blog/2019/04/03/release-91/). I have Blue Iris cameras and use the stream platform to cast to Chromecast displays while I'm home. Stream cameras use the H.264 protocol with the generic camera platform, while your Blue Iris-created cameras use MJPEG with the mjpeg platform. I'm wondering whether your write-up could be adapted to run TensorFlow against H.264 stream cameras on the generic platform and send Pushover notifications.
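For what it's worth, a generic-platform camera with a stream source might look something like this. The host, ports, credentials, and camera short name are all placeholders; substitute your own Blue Iris endpoints:

```yaml
# Enable the stream component so cameras can expose an H.264 stream.
stream:

camera:
  - platform: generic
    name: front_door
    # Hypothetical Blue Iris URLs; adjust host, port, and short name to your server.
    still_image_url: http://192.168.1.10:81/image/frontdoor
    stream_source: rtsp://user:pass@192.168.1.10:81/frontdoor
```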
Thanks. Yeah, I have no problems on the Blue Iris side; in fact I have made it trigger more sensitively now that I'm using the TensorFlow method. It was more that there are options in the HA TensorFlow configuration to restrict the area in which it detects objects. That's fine, but it then becomes the default for images from all the other cameras fed into it. I was just wondering whether conditional arguments can be used within the HA configuration: i.e. if cam1 triggers, use area A for the object search; if cam2 triggers, use area B, where A and B are the numbers that decide where in the frame TensorFlow examines. The configuration link I posted implies it can, but it doesn't work for me.
The reason is that on some fixed cameras there is always something in the view it recognises when triggered, for example a boat on one of my cameras! It's not a show stopper, as I can filter it out later on object type, but I was just curious whether, when TensorFlow is called, it could take arguments within the configuration to decide on an area setting or object type to sense.
Yes, stream seems to work just fine. However, I have found that TensorFlow doesn't seem to like being triggered back to back. I've had it fail to detect a person approaching if the person triggers more than one camera in under a few seconds (like when I turn a corner).
You can do this by specifying a different TensorFlow config per camera instead of grouping all cameras into the same config.
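A sketch of what that might look like: two separate `platform: tensorflow` entries, each with its own source and detection area. Camera names, the model path, and the coordinates are all made-up placeholders:

```yaml
image_processing:
  # cam1: only examine the left half of the frame.
  - platform: tensorflow
    source:
      - entity_id: camera.cam1
    model:
      graph: /config/tensorflow/frozen_inference_graph.pb
      categories:
        - category: person
          area:
            top: 0.1
            left: 0.0
            bottom: 1.0
            right: 0.5
  # cam2: a different area, so a fixed object (e.g. the boat) is excluded.
  - platform: tensorflow
    source:
      - entity_id: camera.cam2
    model:
      graph: /config/tensorflow/frozen_inference_graph.pb
      categories:
        - category: person
          area:
            top: 0.0
            left: 0.3
            bottom: 0.9
            right: 1.0
```

The `area` values are fractions of the frame (top/left/bottom/right), so each camera gets its own crop without affecting the others.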
@TaperCrimp - great write up by the way. Your stated annoyance was the reason I built the component in the first place!
Is there a way to push the image through to Pushover?
I just want to stop the notification if I am home.
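One way to do that is a state condition on a person (or device_tracker) entity in the notification automation. A sketch, with placeholder entity and service names:

```yaml
automation:
  - alias: "Camera alert only when away"
    trigger:
      # Placeholder image_processing entity from the TensorFlow setup.
      - platform: state
        entity_id: image_processing.tensorflow_front_door
    condition:
      # Skip the notification while I'm home; person.me is a placeholder.
      - condition: state
        entity_id: person.me
        state: not_home
    action:
      - service: notify.pushover
        data:
          message: "Person detected on the front door camera"
```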
What happens when the Docker container gets updated with a new HA version? Does the TensorFlow install not get wiped out? @TaperCrimp
I have updated my HA many times and TensorFlow still works.