I was looking for a “virtual pet” for home assistant. Something that could react to wake-words and be the face of my smart home. A friendly companion I could interact with instead of a dashboard.
Whilst searching, I found other people had been looking for the same thing, but none were available. So, I went ahead and made one…
Meet Macs, a playful, expressive, animated companion for Home Assistant:
What I don’t get from the description: this isn’t an assistant itself, just a card that displays an avatar I can control, e.g. from my existing assistant by exposing some of its entities and telling it what they’re for, or via automations that react to things my assistant does (like intent scripts, …)?
OK, so it really is a custom card that can be used with your existing assistant.
This is quite amazing.
We have a ThinkSmart 8" smart clock in the living room and this will definitely be used.
But I think I’ve run into a few bugs.
I configured it to react to a voice assistant and also show the text/responses.
The face animation works when I speak to the voice assistant.
But the text doesn’t update automatically after a question/response.
I have to reload the browser page.
And I also can’t seem to change the mood using the entities of the MACS device.
I’ve seen that manual mood is only available when “react to wakeword” is disabled.
So I will use my own automation to have more control here.
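Something like the following is what I have in mind. Note that the entity id is just my guess at what the MACS device might expose; adjust it to whatever you actually see:

```yaml
# Hypothetical automation: set Macs to "happy" when someone arrives home.
# The entity id select.macs_mood is a guess; check which entities
# your MACS device actually exposes and adjust accordingly.
automation:
  - alias: "Macs mood - happy on arrival"
    trigger:
      - platform: state
        entity_id: person.me
        to: "home"
    action:
      - service: select.select_option
        target:
          entity_id: select.macs_mood
        data:
          option: "happy"
```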
Weather, temperature and rain/snow also work fine.
Really cool idea.
One last question for now:
Is there a specific reason why all of this is wrapped in an iframe?
I’ve seen you use a lot of CSS vars for colors and other things.
So it would be quite cool to be able to “theme” it using card-mod.
Hi Thyraz. Thanks for all the comments. Sounds like you’ve already been able to answer most of your own questions.
In terms of the iframe, it’s because it’s a stand-alone webpage. You can pass query params in the URL to adjust mood etc. And then in the Home Assistant dashboard, it means you can just load the page in an iframe. It means that Macs isn’t dependent on Home Assistant to run, which makes development much easier.
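If it helps, the dashboard side can be as simple as a standard iframe (webpage) card; the query parameter name below is just for illustration, not the exact API:

```yaml
# Standard Home Assistant iframe (webpage) card pointing at the
# stand-alone Macs page. The "mood" query parameter is illustrative;
# the real parameter names may differ.
type: iframe
url: /local/macs/index.html?mood=happy
aspect_ratio: 75%
```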
In terms of colours, my plan is to add options to the card editor to allow different colour schemes. Only reason it hasn’t been implemented yet is because I tried a few different schemes and didn’t like any of them. Is there anything in particular you’re trying to change, or the whole thing?
I’m currently working on some other bugs (see GitHub), and have a fresh install of Home Assistant to work with. I’ll see if I can replicate the need to refresh the browser for dialogue to show.
That is really fun. I’ve been doing something much simpler - an animated gif of a mouth and eyes, changing state based on two input_booleans, with a template sensor to store and display the event history.
I’m not happy with it, as there is no lip sync, but it serves the purpose for now, although it is already kind of unnerving.
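In case it’s useful to anyone, the rough shape of my setup is below; all the entity names are just what I happen to use:

```yaml
# Two booleans toggled by my voice-assistant automations.
input_boolean:
  face_talking:
    name: Face talking
  face_listening:
    name: Face listening

# Template sensor that summarises the current face state; my real
# version also keeps a short event history.
template:
  - sensor:
      - name: "Face state"
        state: >
          {% if is_state('input_boolean.face_talking', 'on') %}
            talking
          {% elif is_state('input_boolean.face_listening', 'on') %}
            listening
          {% else %}
            idle
          {% endif %}
```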
Which would deliver a full-featured animated avatar driven by the audio response!
The model is interesting: he translates the audio (phonemes) into “visemes”, which are facial expressions, much like the cartoon studios do, and syncs the facial animation to the audio.
This, I could put on a tablet or on an ESPHome-ESP32 robot. Have you considered this approach? Like yours, it looks like it would require a server process (Node.js) on the HA server, a webhook to direct the audio response, and an iframe to display the image on a dashboard.
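To make the webhook part concrete, I imagine something along these lines on the Home Assistant side; the host, port, and payload shape are pure assumptions about the Node.js service:

```yaml
# Hypothetical rest_command that forwards the assistant's response
# (here as text) to a local lip-sync service. URL and JSON shape
# are assumptions, not a documented API.
rest_command:
  send_to_lipsync:
    url: "http://homeassistant.local:3000/speak"
    method: POST
    content_type: "application/json"
    payload: '{"text": "{{ response_text }}"}'
```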