Improving Blue Iris with Home Assistant, TensorFlow, and Pushover [SIMPLIFIED!]

In Blue Iris: Settings → Users → Your Account → Edit → Uncheck “On login play sound”.

Thanks guys, that’s exactly it TaperCrimp, and now it’s sorted. I thought I knew BI very well and just assumed it couldn’t be that, since I’d been using it like this for six months or more and the problem only started after installing as per this guide. Also, thanks for this write-up; I had to learn a lot, but it’s working amazingly well!
Cheers for the help

No problem, glad to assist. At some point I’ll need to update the screenshots since they’ve changed in the latest version of node-red-contrib-home-assistant-websocket. Plus, you can now do regex matches against sensors so I could probably consolidate those 4 cameras down to a single node.

Thanks again for all of the feedback on this. A large part of writing this was around not exposing my Home Assistant deployment to the Internet. However, with the release of remote access via the HA cloud, I’ve been playing around with the Home Assistant iOS application. I’m about 90% sure I can get the same functionality minus the funky nuances of hard links, experimental PushOver notifications, docker instances, etc. At some point soon I’ll be putting a new post up with the details on this. The added advantage is that it can link directly to the camera instead of a snapshot. Stay tuned!

Fantastic, I just recently found this and made a few changes. I’ve had BI for a while now but had no idea it could do so much (with MQTT). And I now have TensorFlow working on a Pi (that’s the only thing it’s running, and it’s at 60% memory use). Love this and look forward to more updates.

Update: I’ve now got a post up on using this with iOS notifications. It’s also much more dynamic and does not require a separate set of flows for each camera. Hope this helps someone!

Another update: I’ve been redoing a bunch of the automations and managed to reduce all of the alerting nodes down to 6 total. It’s also just one flow that covers every single camera with some regex and a fancy change node. Much cleaner, and the alerting removes the need for any hard links and does all notification through the HA docker instance. I’ll update the instructions soon. Best of all, it’s generic enough that you can swap in whatever notification mechanism that you’d like.
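As a rough sketch of how a single regex-based flow might look (the node settings and entity names below are hypothetical, not the actual exported flow), one “events: state” node from node-red-contrib-home-assistant-websocket can watch every camera’s motion sensor at once:

```yaml
# Hypothetical settings for a single "events: state" node
# (node-red-contrib-home-assistant-websocket):
#
#   Entity ID:  binary_sensor\.cam_.*_motion    (ID type: regex)
#   If State:   on
#
# A downstream Change node can then derive the camera name from the
# triggering entity id (msg.topic) and build a generic notification
# payload from it, so one flow serves every camera.
```

The notification service at the end of the flow is just another node, which is what makes it easy to swap in whatever mechanism you prefer.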

That’s great work TaperCrimp. I had a quick look but will have to spend some time soon to understand your new method. Mine is quite messy with 10 cameras, but at the same time it provides the flexibility to have different alert sounds for each camera or group of cameras.

Also, I feed a false result back in one more time to double-check that nothing is being picked up by TensorFlow. I’d like to make it slightly more clever. I know you can limit what it detects by adding ‘person’ to the config, but I just search on everything and it doesn’t seem any slower. Later in the flow I filter by the object detected: if it matches ‘person’ I alert with sound, otherwise I alert without sound. It’s amazing the things it picks up, but nothing seems as accurate as person sensing anyway.
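The “sound only for people” idea can also be expressed directly in a Home Assistant automation, since Pushover accepts a per-message sound (`none` silences it). A minimal sketch, assuming the TensorFlow entity exposes a `summary` attribute keyed by category (entity names here are placeholders):

```yaml
# Hypothetical automation: always notify on a detection, but only
# play a sound when a person was among the detected objects.
automation:
  - alias: Camera alert with conditional sound
    trigger:
      - platform: state
        entity_id: image_processing.tensorflow_front_door
    condition:
      - condition: template
        value_template: "{{ trigger.to_state.state | int > 0 }}"
    action:
      - service: notify.pushover
        data_template:
          message: "Objects detected on the front door camera"
          data:
            # 'none' is Pushover's silent sound
            sound: >-
              {{ 'siren' if 'person' in trigger.to_state.attributes.get('summary', {}) else 'none' }}
```

The same branching can be done in Node-RED with a Switch node on the detected object instead of a template.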

What I would like to do is be able to limit areas in scenes that vary for different camera views, something like in this config https://github.com/arsaboo/homeassistant-config/blob/master/configuration.yaml#L415

However, I don’t seem to be able to create a config with multiple `platform: tensorflow` entries!

Is this possible to do? Does anyone know if I can use conditions (i.e. which camera triggered) to decide, in the Home Assistant config, what it should search for and which area to use?
Cheers

Honestly, I’m not sure on this. All of the motion detection, zones, hot spots, etc. take place on the Blue Iris side; I just use TensorFlow to detect whether there are any people in them. I never got into the super-advanced options. On a positive note, you could still use something like the updated Node-RED flow to inject different sounds based on the camera: it’d be a Change node with a modification to the template, or at least that’s what I’d assume.

@TaperCrimp thanks for the write-up. I was wondering if there is a way to combine this with the new stream platform (https://www.home-assistant.io/blog/2019/04/03/release-91/). I have Blue Iris cameras and use the stream platform to cast to Chromecast displays while I’m home. Stream cameras use H.264 with the generic camera platform, while the Blue Iris-created cameras use MJPEG. I’m wondering if your write-up could be adapted to run TensorFlow and send Pushover notifications against the H.264 generic-platform cameras.
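For reference, a generic H.264 camera alongside the stream component might look roughly like the sketch below. The IP, port, credentials, and stream path are placeholders; the actual URLs depend on how the Blue Iris web server is configured:

```yaml
# Hypothetical sketch: a Blue Iris H.264 stream pulled into HA via the
# generic camera platform, with the stream component enabled.
stream:

camera:
  - platform: generic
    name: front_door_h264
    still_image_url: http://192.168.1.10:81/image/frontdoor
    stream_source: rtsp://user:pass@192.168.1.10:554/frontdoor
```

Since image_processing works on snapshots from whatever camera entity it is given, a generic camera like this should in principle be usable as a TensorFlow source just like an MJPEG one.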

Thanks, yeah, I have no problems on the Blue Iris side; in fact I’ve made it much more sensitive so it triggers more now, using the TensorFlow method. It was more that there are options in the TensorFlow HA configuration to limit the area in which it detects objects. This is fine, but that area then becomes the default for all images fed in from the other cameras. I was just wondering whether conditional arguments can be used within the HA configuration, i.e. cam 1 triggers, therefore use area A for this object search; if it was cam 2, use area B, where A and B are the numbers that decide where in the frame TensorFlow examines. The configuration link I posted implies it can, but it doesn’t work for me.

The reason is that on some fixed cams there is always something in the view it recognises when triggered: for example, a boat on one camera I have! It’s not a show-stopper, as I can filter it out later on object type, but I was just curious whether, when TensorFlow is called, it could take arguments within the configuration to decide on an area setting or object type to sense.

Yes, stream seems to work just fine. However, I have found that TensorFlow doesn’t seem to like being triggered back-to-back. I’ve had it fail to detect a person approaching if the person triggers more than one camera in under a few seconds (like if I turn a corner).

You can do this by specifying a different TensorFlow config per camera instead of grouping all cameras into the same config.
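As a sketch of what that could look like (the entity ids and paths are placeholders), each camera gets its own `platform: tensorflow` entry with its own per-category `area`, expressed as fractions of the frame:

```yaml
# Hypothetical per-camera TensorFlow entries, each with a different
# detection area so fixed objects (like a moored boat) can be cropped out.
image_processing:
  - platform: tensorflow
    source:
      - entity_id: camera.driveway
    model:
      graph: /config/tensorflow/frozen_inference_graph.pb
      categories:
        - category: person
          area:
            top: 0.3
            left: 0.0
            bottom: 1.0
            right: 0.7
  - platform: tensorflow
    source:
      - entity_id: camera.dock   # exclude the right side where the boat sits
    model:
      graph: /config/tensorflow/frozen_inference_graph.pb
      categories:
        - category: person
          area:
            top: 0.0
            left: 0.0
            bottom: 1.0
            right: 0.5
```

Each entry creates its own `image_processing` entity, so downstream automations can still key off which camera fired.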

@TaperCrimp - great write up by the way. Your stated annoyance was the reason I built the component in the first place!

Is there a way to push the image through to Pushover?

I just want to stop the notification if I am home.
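Suppressing notifications while home is usually just a presence condition. A minimal sketch, assuming a hypothetical `device_tracker.my_phone` presence entity (swap in your own tracker or a `person` entity):

```yaml
# Hypothetical automation: only send the Pushover alert when nobody
# is home, based on a device_tracker state.
automation:
  - alias: Camera alert when away
    trigger:
      - platform: state
        entity_id: image_processing.tensorflow_front_door
    condition:
      - condition: state
        entity_id: device_tracker.my_phone
        state: not_home
    action:
      - service: notify.pushover
        data:
          message: "Person detected at the front door"
```

In Node-RED the equivalent is a Current State node that checks the tracker and halts the flow when you’re home.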

What happens when the Docker container gets updated with a new HA version? Doesn’t the TensorFlow install get wiped out? @TaperCrimp

I have updated my HA many times and TensorFlow still works.

Yeah, I was told TensorFlow is baked into the container. Right now my issue is that when the component tries to set up, it sends HA into a boot loop. Can’t figure out why; it doesn’t throw any errors.

One quick question: I have started writing my automation in Node-RED, and I was comparing what I have to your images above. In the “Current State” node I cannot find how to change “If State” to “Halt If State”.

Has this been changed in the palette or is there a way to change it?
Currently I am just using “If State is not 0”, but I wasn’t sure if this would have the same results.

UPDATE:
I think I found my own answer. They renamed it.
https://github.com/zachowj/node-red-contrib-home-assistant-websocket/pull/113

Thanks,
DeadEnd

I know this tutorial was intended for non-Hass.io users, but I was hoping to get help with setting it up via Hass.io.
I used this to get TensorFlow installed, and it said it installed without error:

https://github.com/hunterjm/hassio-addons/tree/master/tensorflow

But I get this error when Home Assistant boots:

Unable to locate tensorflow models or label map
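That error generally means the `tensorflow` platform can’t find the frozen graph or the label map at the paths it’s configured with. A hedged sketch of what the config expects (the paths below are placeholders; point them at wherever the add-on actually put the files):

```yaml
# Hypothetical config showing the model files the tensorflow platform
# needs to locate on the HA config volume.
image_processing:
  - platform: tensorflow
    source:
      - entity_id: camera.front_door
    model:
      graph: /config/tensorflow/frozen_inference_graph.pb
      labels: /config/tensorflow/object_detection/data/mscoco_label_map.pbtxt
```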

A few items have changed since I wrote this:

  • I switched from the vanilla HA Docker image to Hass.io, running in a VM. I’ve found it much easier than trying to maintain a bunch of disparate Docker containers.
  • Node-RED has been updated and a few of the components have changed, as @DeadEnd had noticed.
  • I switched from TensorFlow to Amazon Rekognition using this custom component by @robmarkcole. It works just as well and doesn’t require screwing around with TensorFlow files. Unfortunately it’s not a default integration, but the custom component works just fine.
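For a rough idea of the Rekognition setup, a hedged sketch is below. The keys and values are placeholders based on a typical `image_processing` custom component, so check the component’s README for the current schema:

```yaml
# Hypothetical amazon_rekognition config; credentials go in secrets.yaml.
image_processing:
  - platform: amazon_rekognition
    aws_access_key_id: !secret aws_access_key
    aws_secret_access_key: !secret aws_secret_key
    region_name: us-east-1
    target: Person
    source:
      - entity_id: camera.front_door
```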

If anyone is interested, I’ve put together a different guide that will send you a camera stream when there’s an alert. It uses the iOS application alerting.