Yes, please do share what you’ve done. I’d love to add this capability to my own system.
Great work and great results!
I would appreciate it if you could share the details of this project. Many thanks!
Great to see the demo, well done so far
I like this. A lot.
Thanks all for the kind words. I’m hoping to get things written up in the coming days.
Congratulations on what you achieved, that looks pretty epic and the performance is very impressive for an old Android 5 device!
Would be really interesting to see how you put it all together, as it may be a perfect fit for the ThinkSmart View that people are currently playing with: Is this the perfect standalone tablet for HA?
Thanks for the link. I will certainly check it out!
I am working on a GitHub wiki page as we speak that details the installation and configuration of the Android applications and HA extensions. I’ll also share the automations, scripts, custom sentences, and the Lovelace views so that folks can pick and choose what they might want.
I really hope that others will contribute and help with improving this. I’m seeing a lot of interest and believe this could be a really useful addition to Assist.
This is really, really cool!
@FelixKa just got the digital mics working within postmarketOS for the ThinkSmart, so the basis for a fully open source dashboard/voice assistant will soon be in place.
An awesome community developed dash/assistant that’s built upon a failed smarthome product… somehow seems… poetic, haha
This just might be the missing link, nice work @dinki
WOW! I just took a closer look at the project and this is absolutely amazing! Can this do on-device wake word detection? If so, I’m buying one today!
IIRC there are some community members doing it on the Android ROM, though it’s a bit hacky.
I’m anticipating much faster and more robust progress with voice on Linux once the ROM is stable across the different hardware variants.
I think you’re right about this being a great device for View Assist. I just purchased a Thinksmart View and will get View Assist up and running on it as soon as I receive it. Thanks for making me aware of this project! Can’t wait!
With postmarketOS I have so far only tried openWakeWord, and it works flawlessly on-device running an ONNX model (I tried the hey_jarvis_v0.1.onnx one), using around 6% CPU.
It uses quite a bit of RAM (about 700 MB of the 2 GB available), but I’d have to look into whether it’s actually using that or just reserving it for some purpose.
Haven’t tried TFLite inferencing because there are no pre-built binaries for the Python library that work on Alpine Linux (which is what postmarketOS is based on), due to its use of musl instead of glibc. I didn’t look into whether it’s possible to build them or whether TFLite, in fact, can’t run on musl-based distros.
Right now I’m still focused on getting the hardware going properly for multiple variants, but I thought I’d at least share that I did test openWakeWord to validate that I’d be able to use it for my purposes.
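For anyone wanting to try the same thing, here’s a minimal Python sketch of on-device detection with openWakeWord’s ONNX backend. The sounddevice capture loop, the local model path, and the 0.5 threshold are my own assumptions for illustration, not necessarily how FelixKa has it wired up:

```python
# Minimal sketch: on-device wake word detection with openWakeWord's ONNX
# backend. Assumes a 16 kHz mono mic, the sounddevice library, and a local
# copy of hey_jarvis_v0.1.onnx -- adjust paths/devices for your hardware.
import numpy as np
import sounddevice as sd
from openwakeword.model import Model

CHUNK = 1280  # 80 ms at 16 kHz, the frame size openWakeWord works with

model = Model(
    wakeword_models=["hey_jarvis_v0.1.onnx"],  # local ONNX model file
    inference_framework="onnx",                # sidesteps the TFLite/musl issue
)

with sd.InputStream(samplerate=16000, channels=1, dtype="int16",
                    blocksize=CHUNK) as stream:
    while True:
        audio, _ = stream.read(CHUNK)
        scores = model.predict(np.squeeze(audio))
        if scores.get("hey_jarvis_v0.1", 0) > 0.5:  # tune threshold to taste
            print("Wake word detected!")
```

predict() takes raw 16-bit PCM at 16 kHz and openWakeWord handles the melspectrogram and embedding stages internally, which is why the loop stays this simple.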
Super exciting to hear this. My HA server is fine for handling the off-device detection with streaming, but on-device would surely give a better experience.
Can’t wait to get my device in to start testing.
Another ThinkSmart View user trying to replicate my Echo Show devices here. THE STUPID ADS. I DON’T WANT YOU TO DO ANYTHING BUT SHOW ME A CLOCK.
I digress. This project looks amazing and I can’t wait to implement it on my thinksmarts. It would be especially cool if it could be running in the background so that Fully Kiosk can be running a different app and still pop over to the assist screen to do the task, then go back to what it was doing.
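For the background/pop-over idea, Fully Kiosk’s Remote Admin REST API may already get you most of the way there. Here’s a hedged Python sketch; the IP, port, and password are placeholders, the Remote Admin API has to be enabled in Fully’s settings, and it’s worth confirming the command names against your Fully version’s docs:

```python
# Hedged sketch: pop Fully Kiosk (and its Assist dashboard) over whatever
# app is on screen via Fully's Remote Admin REST API, then send it back.
# Requires the Remote Admin API to be enabled in Fully's settings; the IP,
# port, and password below are placeholders.
import requests

FULLY = "http://192.168.1.50:2323"       # tablet's IP; 2323 is Fully's default port
PASSWORD = "your-remote-admin-password"  # set under Fully's Remote Administration

def fully_cmd(cmd: str, **params) -> None:
    """Send one command to Fully Kiosk's REST interface."""
    requests.get(FULLY, params={"cmd": cmd, "password": PASSWORD, **params},
                 timeout=5)

# Bring Fully to the front when Assist activates (e.g. fired from an
# HA automation or shell_command)...
fully_cmd("toForeground")

# ...and return to whatever was running once the task is done:
fully_cmd("toBackground")
```

Home Assistant’s Fully Kiosk Browser integration wraps the same API, so the equivalent could likely be done with its services straight from an automation.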
I’ve updated the top message with a link to the start of the wiki. I want to remind everyone that I am far from a professional, so any tips/guidance/corrections will go a long way to making this project better. Thanks again for the kind words!
Just added the pertinent portions of the configuration (just the voice and feedback, not the additional dashboards) to my thinksmart and it works great. Thanks again for the documentation and working out the configuration for this to all work well.
Excellent. I appreciate the feedback and knowing that what I put together worked. I hope to tidy things up and add the rest in the next few days. Let me know if you run into any issues. I tried my best to make the views responsive by using percentages, but I’m not sure how they will look on other devices.
My thinksmart shipped this afternoon and will certainly be my target device if I can manage to get the OS installed.
I have three of these Lenovo devices set up around my house doing on-device wake word detection, but they use a third-party app plugin that runs Snowboy for the actual detection, not openWakeWord. Despite the third-party plugin, the experience is pretty fast and seamless, though I think your method has a few advantages, such as working on earlier versions of Android and being able to use openWakeWord.
Here’s a video of one of mine in action:
See more here if you’re interested: Setting up a 100% local smart speaker on an Android tablet using Tasker and Snowboy to handle wake word detection
Thanks for sharing. Really cool stuff for sure. It won’t be long before everything comes together.
I’m really curious what’s going to happen when I have multiple devices listening for the wake word. Right now with the Amazon devices I have an issue with the wrong one answering, so it seems like the right one isn’t listening at all. Maybe with these, both will just be listening and hopefully the correct one (or both) will respond.
I’m excited to see how much easier the new ‘sections’ view type will eventually make creating these views. I struggle most with the CSS positioning and sizing, but once ‘sections’ matures we will really be in business!