[Integration] Android TV live cam video stream in a picture-in-picture window triggered by motion detection. AKA video doorbell

Hi ppl!
I’m working on a custom implementation of a “live video doorbell”.
My approach is from outside the HA ecosystem, and it would be great to fully integrate it.
I have a TP-Link Tapo C310, an old Samsung Galaxy Tab 3, and a TV with a Xiaomi Mi TV HDMI stick.
Thanks to this integration, the C310 works with HA. But any ONVIF cam that exposes a motion detection sensor should work with this.
Thanks to this app, my old Galaxy Tab with Android 4.4 came back from the obsolete-hardware shelf and was turned into a full ARM Linux host. It is running Ubuntu 20.04 with decent performance.
HA runs on it in the Core flavor (no Docker in this deployment).
Thanks to this app, my Android TV can receive notifications in a PiP window while I’m using it for any other task (like watching TV).
The working flow is simple:
HA detects motion on the cam and triggers a bash script that sends the camera’s live view to the PiP on the TV.
This is the way I found to make it work, but it can surely be done in a better way. So if anyone is interested in this project, please comment here and we can look for a better approach.

And, of course, a big thanks to those who develop the components used in this deployment.

Greets!


This sounds very nice. Is it just a matter of installing the apps and it works?
Any YAML scripts to be adjusted?

In my approach, Bash scripting works really well.

configuration.yaml

shell_command:
  c310notif: bash /home/homeassistant/c310notif.sh

c310notif.sh (located in the homeassistant user’s home directory)

#!/bin/bash
# Get an access token for the camera stream proxy using a long-lived access token
token=$(curl -s -H "Authorization: Bearer your-own-long-lived-token" -H "Content-Type: application/json" "http://HA.LOCAL.IP:8123/api/states/camera.c310_sd" -o - | jq -S -r '.attributes.access_token')
# Send the notification to the first Android TV
curl -s -X POST --no-keepalive -H "Content-Type: application/json" -d '{"duration": 30,"position": 2,"title": "¡ATENCION!","titleColor": "#ff00ee","titleSize": 20,"message": "ALARMA EN C310","messageColor": "#ffffff","messageSize": 14,"backgroundColor": "#80ffff00","media": { "web": { "uri": "http://HA.LOCAL.IP:8123/api/camera_proxy_stream/camera.c310_sd?token='"$token"'", "width": 480, "height": 300}}}' http://ATV1.LOCAL.IP:7979/notify -o /dev/null
# Send the notification to a second Android TV
curl -s -X POST --no-keepalive -H "Content-Type: application/json" -d '{"duration": 30,"position": 2,"title": "¡ATENCION!","titleColor": "#ff00ee","titleSize": 20,"message": "ALARMA EN C310","messageColor": "#ffffff","messageSize": 14,"backgroundColor": "#80ffff00","media": { "web": { "uri": "http://HA.LOCAL.IP:8123/api/camera_proxy_stream/camera.c310_sd?token='"$token"'", "width": 480, "height": 300}}}' http://ATV2.LOCAL.IP:7979/notify -o /dev/null
# 30 seconds of live video, so wait
sleep 30 
# Clean up ffmpeg processes; they tend not to exit on their own and consume unnecessary resources
killall -9 ffmpeg

Automation is simple:

on MOTION DETECTED in CAM DEVICE
CALL service: shell_command.c310notif
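
For reference, a minimal sketch of what that automation could look like in YAML (the motion sensor entity name here is an assumption; use whatever entity your ONVIF integration exposes):

automation:
  - alias: "C310 motion triggers PiP notification"
    trigger:
      - platform: state
        entity_id: binary_sensor.c310_motion  # assumed name of the ONVIF motion sensor
        to: "on"
    action:
      - service: shell_command.c310notif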

The bash script has all the data “hardcoded” (the destination TVs to notify, and the cam sources, with the token obtained via the API).

Here is where I need some help. The whole procedure can surely be done inside HA.
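
As a starting point, here is a minimal sketch of a rest_command that might replace the script entirely (the command name is made up and it only covers one TV; the payload fields are the ones PiPup expects in the script above, and the token is pulled from the camera entity’s access_token attribute with a template instead of curl/jq):

rest_command:
  pipup_c310_stream:
    url: "http://ATV1.LOCAL.IP:7979/notify"
    method: post
    content_type: "application/json"
    payload: >
      {"duration": 30, "position": 2,
       "title": "¡ATENCION!", "titleColor": "#ff00ee", "titleSize": 20,
       "message": "ALARMA EN C310", "messageColor": "#ffffff", "messageSize": 14,
       "backgroundColor": "#80ffff00",
       "media": {"web": {
         "uri": "http://HA.LOCAL.IP:8123/api/camera_proxy_stream/camera.c310_sd?token={{ state_attr('camera.c310_sd', 'access_token') }}",
         "width": 480, "height": 300}}}

The automation would then call rest_command.pipup_c310_stream instead of shell_command.c310notif.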

I did a detailed write-up of how I managed to display a low-latency, real-time feed from my doorbell in a picture-in-picture popup on Android TV, using WebRTC Camera and PiPup. Posting here for anyone who gets here through search.


well documented,
well done!

When you use WebRTC to reference the entity instead of pulling directly from your cameras, I would think it would still be streaming through HA and out through WebRTC. This would still do the conversions. In my testing this isn’t as reliable as connecting directly to the cameras and displaying the stream directly, rather than through a camera entity.

In your comparison photo, this becomes a little more evident.

“WebRTC vs HLS video stream comparison - 9 second difference”

The top image is actually clearer, but the words are not, which is interesting. But if you look at the shrubs/flowers, you can see a conversion difference. The bottom one isn’t as sharp; that’s the WebRTC one, which I am assuming is flowing through the entity camera.your_camera.

Thanks for the write-up though. The use of PiPup is great; I am going to give it a try. I currently use the Notifications for Android TV / Fire TV - Home Assistant integration to send still images, but would prefer a PiP of the actual stream. I’m hesitant though, knowing PiPup isn’t actively developed anymore.

Thanks for the comment Rob. To be clear, do you mean that the following two custom cards aren’t really equivalent, and the first one is going to perform better for me than the second one?

type: 'custom:webrtc-camera'
url: 'rtsp://admin:PASSWORD@CAMERA_IP:554//h264Preview_01_sub'

versus

type: 'custom:webrtc-camera'
entity: camera.front_driveway_rtsp
# where camera.front_driveway_rtsp is defined elsewhere as:
# camera:
#   - platform: generic
#     name: Front Driveway RTSP
#     stream_source: rtsp://admin:PASSWORD@CAMERA_IP:554//h264Preview_01_sub

My understanding from the WebRTC Camera docs is that referencing the camera RTSP stream via an entity is supported as I described it. If your experience says otherwise, I’ll definitely do more testing.

It’s definitely supported. My comment was that by doing so, you’re essentially funneling the stream through two layers.

ipcam ↔ HA cam proxy (HLS?) ↔ webrtc ↔ client

Vs

ipcam ↔ webrtc ↔ client

Going directly from WebRTC to the IP cam should definitely perform better and be more reliable.

I just tested this. The two methods of embedding the cards produced exactly the same streams for me in a side-by-side comparison. I also took a look at the source code. You can see on this line that if the integration finds an entity attribute in the card, it calls its utils.get_stream_source() method, which calls the HA camera.stream_source() method to retrieve the underlying RTSP stream of the camera entity.
I would have loved for you to be right!


Well… my suspicion was in fact incorrect. I just turned debugging on for webrtc in HA, loaded two different camera cards (entity vs. url), and refreshed, for two different cameras.

The log does indeed show direct connections to the cameras rather than through the camera proxy regardless of the method selected.

# tail -f home-assistant.log | grep -i  webrtc | grep GIN
2022-03-30 10:29:30 DEBUG (webrtc) [custom_components.webrtc.utils] [GIN] 2022/03/30 - 10:29:30 | 200 |  5.056587371s |       127.0.0.1 | GET      "/ws?url=rtsp://xxx:[email protected]/h264Preview_01_sub"
2022-03-30 10:29:44 DEBUG (webrtc) [custom_components.webrtc.utils] [GIN] 2022/03/30 - 10:29:44 | 200 |  33.08806091s |       127.0.0.1 | GET      "/ws?url=rtsp://xxx:[email protected]/h264Preview_01_sub"

Well, I would have loved to have been wrong, since I’d rather use the entity; it simplifies my cards. Thanks for keeping me straight.

Terrific write-up. I looked into PiPup a while back, but couldn’t get past the lack of a “close popup” feature. I’d use that feature to turn off the PiP view when motion ends. Relying on a timeout isn’t ideal if you have a lot of triggers (facing a busy city street). Right now I’m using tinyCam, triggered by Tasker, to accomplish this.

I have a Sony A80J OLED TV that runs Google TV. Any idea if this will work with it?

I’m working through your write-up and running into a few issues.

Everything goes well downloading the APK and sideloading it through ADB.

When running ./adb shell pm list packages | grep pip I get the following error:

grep : The term 'grep' is not recognized as the name of a cmdlet, function, script file, or operable program. Check
the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:32
+ ./adb shell pm list packages | grep pip
+                                ~~~~
    + CategoryInfo          : ObjectNotFound: (grep:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

When I drop the | grep pip I get a full list of packages and find the package for PiPup. And that’s where the next difference is: the package name shows as nl.rogro82.pipup.
So then I run ./adb shell appops set nl.rogro82.pipup SYSTEM_ALERT_WINDOW allow
I assume it completes, but it doesn’t give any feedback.

So I created the post.json file in the ADB directory and ran the curl command, but I get an error running that.

PS C:\Android> curl -d "@post.json" -H "Content-Type: application/json" -X POST http://IPADDRESS/notify
Invoke-WebRequest : Cannot bind parameter 'Headers'. Cannot convert the "Content-Type: application/json" value of type
"System.String" to type "System.Collections.IDictionary".
At line:1 char:25
+ curl -d "@post.json" -H "Content-Type: application/json" -X POST http ...
+                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Invoke-WebRequest], ParameterBindingException
    + FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

So for fun I tried calling the rest_command.pipup_image_on_tv service through dev tools as described. Getting a 400 error here.

2022-04-23 14:20:52 WARNING (MainThread) [homeassistant.components.rest_command] Error. Url: http://IPADDRESS/notify. Status code 400. Payload: b'{\n  "duration": 20,\n  "position": 0,\n  "title": "hey",\n  "titleColor": "red",\n  "titleSize": 10,\n  "message": "I can see you",\n  "messageColor": "#fbf5f5",\n  "messageSize": 14,\n  "backgroundColor": "#0f0e0e",\n  "media": { \n    "image": {\n      "uri": "https://mir-s3-cdn-cf.behance.net/project_modules/max_1200/cfcc3137009463.5731d08bd66a1.png",\n      "width": 640,\n      "height": 480\n    }\n  }\n}'

For reference: NVIDIA Shield running SW 9.0.2, Android 11.

Any tips?

A 400 status code generally means incorrect syntax.

Is your JSON well formed? No random characters or anything? Are you using the correct port as well, with your Shield’s IP address? IPADDRESS:7979
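
For example, the PiPup port has to be part of the URL in the rest_command. A minimal sketch (the command name, host, and image URI are placeholders; adapt them to your config):

rest_command:
  pipup_image_on_tv:
    url: "http://SHIELD.LOCAL.IP:7979/notify"  # note the :7979 port
    method: post
    content_type: "application/json"
    payload: >
      {"duration": 20, "position": 0,
       "title": "hey", "message": "I can see you",
       "media": {"image": {"uri": "https://example.com/snapshot.png", "width": 640, "height": 480}}}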

Also, grep is a Linux command for searching for text/strings. But you found the right app to grant the permission.

Is there any chance of getting this working on a Fire TV?

I don’t think the existing Notifications for Fire TV supports video streams. I followed your write-up successfully, but there isn’t actually a popup when I try the example JSON or other tests on a Firestick 4K.

I’ve simply copied/pasted the commands from the write-up, so I don’t really know if the syntax is correct. I’ve changed the IP, and the port is correct.

Seems like this is your problem:

There are a few solutions mentioned in that post on how to use curl instead of Invoke-WebRequest on Windows.

Sorry you had difficulty, Mike. In summary, I think there are three issues, one of which is my fault:

  1. As @abigdeel pointed out, grep is a Linux command. The commands I posted should work on Linux or macOS, but I haven’t used Windows in years.
  2. There was an error in my post. The APK you should have (and that I have) identifies itself as “nl.rogro82.pipup”. The “nl.begner.pipup” one in my notes must have been left over from when I was trying out a different fork/build. To be crystal clear, the correct APK is the one built by gmcmicken off madjam002’s fork of rogro82’s original.
  3. I didn’t even know Windows had a cURL command. Anyway, follow the instructions in that Stack Overflow link @abigdeel posted to either give the Windows version the kind of arguments it expects, or get your hands on curl.exe (a Windows port of the real curl command).

I’ll update my post with some of this info.

Thanks for the tip. I never even considered that it was OS-related. I just sent the curl command from a Linux box and it doesn’t give an error, but it also doesn’t give me a popup on my Android box.

@seanblanchfield Are you using an Android 11 box?

I wonder if it has to do with:

Permissions updates in Android 11 | Android Developers
Android 11 makes several changes to how apps are granted the SYSTEM_ALERT_WINDOW permission. The changes are intended to protect users by making the permission grant more intentional.

I am using Android TV 11 on an Nvidia Shield, and the popup is working. The SYSTEM_ALERT_WINDOW permission is an important piece of this.

Did you grant the SYSTEM_ALERT_WINDOW permission using the adb command I have in my write-up?

# Grant permissions for it to draw a system alert window (required on Nvidia Shield at least)
adb shell appops set nl.rogro82.pipup SYSTEM_ALERT_WINDOW allow