View Assist - Visual feedback for Assist voice assistant on an Android tablet (Install info provided on Wiki)

Here’s a video on what I call ‘View Assist’. This is a visual feedback device tied into the Assist voice assistant. The project goal was to use cheap/existing hardware to replicate the functionality of the Amazon Echo Show devices we use in our home.

The proof-of-concept was created using a 2016 Amazon Fire 7 Android tablet running Android 5 (!!!) that had been sitting in a drawer. The device feeds its audio via an Android webcam app to Stream Assist, which passes the audio over to the HA server for wake word detection. You’ll notice in the video that the ‘listening bar’ is a bit delayed in appearing, but the actual listening happens before the visual appears. Display control is via the ‘Browser Mod’ extension, which allows for changing the view being displayed on the tablet. I am also running the Fully Kiosk Browser Android app for the full screen display. Fully Kiosk also exposes the tablet as a media player device. While I used an old tablet, I’m confident that any Android device capable of running the webcam app and Fully Kiosk can be used for a similar project, for both visual and audio output, or audio only if you choose. A great way to repurpose e-junk!
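As a rough illustration of the media player piece: once Fully Kiosk is integrated into HA, the tablet can be targeted like any other media player. The entity ID and URL below are placeholders, not my actual config:

```yaml
# Hypothetical service call: play a sound on the tablet.
# media_player.fire7_tablet is a placeholder entity ID created by
# the Fully Kiosk Browser integration.
service: media_player.play_media
target:
  entity_id: media_player.fire7_tablet
data:
  media_content_type: music
  media_content_id: "http://homeassistant.local:8123/local/chime.mp3"
```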

From here, all commands are handled by ‘custom sentences’ via the HA automations page. This allows for making specialty sentences which call a wide range of services. I am able to tie into things like my chores list, shopping list, Wikipedia, launch Waze maps, send broadcast messages to other media devices, play music from Music Assistant, etc. You are really only limited by your imagination. Unfortunately, I am limited to custom sentences for actions: the underlying service calls for Assist are not yet exposed, so I can’t detect when core sentences (e.g. “turn on light”) happen, and therefore can’t create a view that shows the entity that just turned on. Hopefully this will be exposed, along with the identity of the Assist device making the call, so that I can better tailor the display to only the device being used for the voice command. This will also be useful for extending this to audio-only satellite voice assistants.
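For anyone unfamiliar with the approach, a custom-sentence automation might look roughly like the sketch below. The dashboard path, device ID, and wording are all made up for illustration; the real configuration is on the wiki:

```yaml
# Hypothetical: a sentence trigger that switches the tablet to a
# shopping list view using Browser Mod's navigate service.
alias: "Assist - show shopping list"
trigger:
  - platform: conversation
    command:
      - "show [my] shopping list"
action:
  - service: browser_mod.navigate
    data:
      path: /dashboard-viewassist/shopping  # placeholder dashboard path
    target:
      device_id: 1234abcd  # placeholder Browser Mod browser device
  - set_conversation_response: "Okay, showing your shopping list"
```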

The display portion is running a single dashboard with one view per display screen (e.g. clock). I will admit that I knew next to nothing about CSS and still know next to nothing about graphical layouts and colors. You’ll see that what I’ve done leans heavily on the Amazon look of the Echo Show devices I’m looking to replace.

I could not have done any of this without the support of the HA community both here on the forum as well as on Discord. I know I tried a lot of people’s patience with my questions but they were all extremely helpful in getting me this far. I can only imagine what this could look like in capable hands.

At any rate, if anyone is interested I’ll gladly share the details of how I did this in the ‘Share my project’ section of the forum. My hope is that this might inspire folks to expand on this idea and to do it the ‘right way’.

Thanks for watching. If you like it please click the Vote button next to the title of the post at the top.

EDIT: I’ve started to document this, and a bit is up on the Wiki page. The majority of the information has been provided, and I will work to add supplemental information and examples.

Fantastic work and dedication have gone into this project. I take my hat off to you … this has certainly been a labour of love and sheer tenacity! Good luck. A very worthy contender :+1:


Yes, please do share what you’ve done. I’d love to add this capability to my own system.

Great work and great results!
I would appreciate if you could share the details of this project. Many thanks!


Great to see the demo, well done so far :slight_smile:

I like this. A lot. :smile:


Thanks all for the kind words. I’m hoping to get things written up in the coming days.


Congratulations on what you achieved, that looks pretty epic and the performance is very impressive for an old Android 5 device!

Would be really interesting to see how you put it all together as it may be something that is perfect for the Thinksmart View that people are currently playing with: Is this the perfect standalone tablet for HA?


Thanks for the link. I will certainly check it out!

I am working on a GitHub wiki page as we speak that details the installation and configuration of the Android applications and HA extensions. I will also share the automations, scripts, custom sentences, and the Lovelace views so that folks can pick and choose what they might want.

I really hope that others will contribute and help with improving this. I’m seeing a lot of interest and believe this could be a really useful addition to Assist.


This is really, really cool!

@FelixKa just got the digital mics working within PostmarketOS for the thinksmart, so the basis for a fully open source dashboard/voice assistant will soon be in place.

An awesome community developed dash/assistant that’s built upon a failed smarthome product… somehow seems… poetic, haha :smile:

This just might be the missing link, nice work @dinki


WOW! I just gave a better look at the project and this is absolutely amazing! Can this do on device wake word detection? If so, I’m buying one today!


IIRC there are some community members doing it on the Android ROM, though it’s a bit hacky.

I’m anticipating much faster and more robust progress with voice on Linux once the ROM is stable across the different hardware variants.


I think you’re right about this being a great device for View Assist. I just purchased a Thinksmart View and will get View Assist up and running on it as soon as I receive it. Thanks for making me aware of this project! Can’t wait!


With postmarketOS I have so far only tried openWakeWord, and it works flawlessly on-device running an ONNX model (I tried the hey_jarvis_v0.1.onnx one), using around 6% CPU.
It uses quite a bit of RAM (about 700MB of the 2GB available), but I’d have to look into whether it actually is using it or just reserving it for some purpose.
I haven’t tried TFLite inferencing because there are no pre-built binaries for the Python library that work on Alpine Linux (which is what postmarketOS is based on), due to its use of musl instead of glibc. I didn’t look into whether it’s possible to build them or whether TFLite, in fact, can’t run on musl-based distros.

Right now I’m still focused on getting the hardware going properly for multiple variants. But thought I’d at least share that I did check using openWakeWord to validate that I would be able to use it for my purposes :slight_smile:


Super exciting to hear this. My HA server is fine for handling the off-device detection with streaming, but on-device would surely give a better experience.

Can’t wait to get my device in to start testing.


Another ThinkSmart View user trying to replicate my Echo Show devices here. THE STUPID ADS. I DON’T WANT YOU TO DO ANYTHING BUT SHOW ME A CLOCK.

I digress. This project looks amazing and I can’t wait to implement it on my thinksmarts. It would be especially cool if it could run in the background, so that Fully Kiosk could be running a different app, pop over to the Assist screen to do the task, then go back to what it was doing.


I’ve updated the top message with the location of the start of the wiki. I want to remind everyone that I am far from a professional so any tips/guidance/corrections will go a long way to making this project better. Thanks again for the kind words!


Just added the pertinent portions of the configuration (just the voice and feedback, not the additional dashboards) to my thinksmart and it works great. Thanks again for the documentation and working out the configuration for this to all work well.


Excellent. I appreciate the feedback and knowing what I put together worked. I hope to tidy things up and add the rest in the next few days. Let me know if you run into any issues. I tried my best to make the views responsive by using percentages, but I’m not sure how it will look on other devices.
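To give an idea of what “responsive by using percentages” means here: sizes in the views lean on relative units rather than fixed pixels. The card below is a made-up sketch (it assumes the card-mod extension is installed), not the actual wiki config:

```yaml
# Hypothetical clock card sized with relative units via card-mod
type: markdown
content: "12:45"
card_mod:
  style: |
    ha-card {
      width: 90%;        /* scales with the view, not the device */
      margin: 0 auto;
      font-size: 10vh;   /* scales with screen height */
      text-align: center;
    }
```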

My thinksmart shipped this afternoon and will certainly be my target device if I can manage to get the OS installed :slight_smile:


I have three of these Lenovo devices set up around my house doing on-device wake word detection, but they use a 3rd-party app plugin that runs Snowboy for the actual detection, not openWakeWord. Despite the 3rd-party plugin, the experience is pretty fast and seamless, though I think your method has a few advantages, such as working on earlier versions of Android and being able to use openWakeWord.

Here’s a video of one of mine in action:

See more here if you’re interested: Setting up a 100% local smart speaker on an Android tablet using Tasker and Snowboy to handle wake word detection