Executive Dysfunction Automated Audio Medication Reminder Proof of Concept

About 2 weeks ago, a friend asked for my help with a project using AI language models to create an assistant robot for people with ADHD, autism, and other forms of neurodivergence. As a long-time user of Home Assistant, I saw great potential in using Home Assistant as the base for this device.

The proposal was to create a portable assistant resembling a small action figure or stuffed animal. This companion would provide audible reminders for medications, tasks, and calendar appointments. Eventually, we hope to extend its capabilities to support speech therapy and other forms of audible assistance. We aimed for hardware that could accommodate various designs, akin to a Build-A-Bear workshop.

This is still in its early stages, but progress has been swift. I have a prototype in my workshop for ongoing development, and another in active use at home with my fiancé. My fiancé has a strict medication schedule that requires eating a full meal before taking the medication. Without reminders, their ADHD brain often forgets the step of taking the medication after successfully eating the meal. The device in the video features an M5 Atom Echo with a speaker upgrade, fitted into a 3D-printed character affectionately dubbed “Boo”. I’ve also integrated a lithium battery with a 5-volt boost converter, offering about 12 hours of runtime.

Currently, it interacts with Home Assistant through a few automations. The main one handles advanced medication reminders, which provides flexibility in scheduling and logging responses. While I haven’t yet enabled full voice command functionality for Boo, a workaround uses another M5 Atom Echo as the primary voice assistant, with Boo relaying the vocalized outcomes.
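For readers curious what a reminder automation like this can look like, here is a minimal sketch. The entity IDs (`input_boolean.evening_meds_taken`, `media_player.boo_speaker`, `tts.piper`) are placeholders I've made up for illustration; the actual setup uses a far more capable blueprint with flexible scheduling and response logging.

```yaml
# Minimal medication-reminder sketch (placeholder entity IDs).
# The real setup uses a blueprint that adds scheduling flexibility
# and response logging on top of this basic pattern.
alias: "Evening medication reminder"
trigger:
  - platform: time
    at: "19:00:00"
condition:
  # Only remind if the dose hasn't already been logged today
  # (tracked with a helper toggle).
  - condition: state
    entity_id: input_boolean.evening_meds_taken
    state: "off"
action:
  - service: tts.speak
    target:
      entity_id: tts.piper
    data:
      media_player_entity_id: media_player.boo_speaker
      message: "Time to take your evening medication."
```

A helper toggle like this also gives you a simple way to mark each dose as taken from a dashboard until voice confirmation is in place.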

Looking ahead, I plan to upgrade to a more robust ESP32 for additional features like haptic feedback and gesture recognition. Firmware improvements will streamline LED animations and enable conversational responses. I believe this application of a voice assistant could greatly benefit many people, and I’m grateful for the support from the Home Assistant development team.

I’ll continue expanding on this project in the coming weeks. For now, I’m sharing the video and will soon provide links to the necessary 3D print files and code. I hope you enjoy this project and we welcome any assistance in achieving our stretch goals for this open-source venture.


This is freaking awesome, thank you for sharing.
I myself am both ASD & ADHD, as is actually everyone else in my household.
I use HA for my youngest’s medication, but for mine I have an app (although I’ve thought of switching it over to HA).
This makes me want to do it even more now, especially as I have a spare Atom Echo!


I’m glad this kind of thing can be helpful to so many people. One of the most helpful parts of this has been the advanced medication reminder blueprint that Mati24 made. Here is a link to that; it’s definitely a good start toward many of the things I’d like to see it do.

Nice entry. Have you ever thought about embedding two M5 Atoms in the same 3D-printed case, so that the device acts as both a voice assistant and a speaker/notifier?

Thanks a lot for your participation in the contest :wink:


The results of the contest are out!
They may be of interest to you :wink:
Have a look!


That thought did cross my mind, but I was hoping to find a solution that did not involve putting two M5 Echoes in one enclosure. The solution I used for the video I posted was to have a standard M5 Echo serve as the wake-word trigger, which fires automations that push the audio to the avatar device.
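To illustrate the relay pattern, a sentence trigger on the standard Echo can fire an automation that vocalizes the outcome on Boo's speaker. The entity IDs and phrasing below are placeholders for the sketch:

```yaml
# Sketch of the relay pattern: the plain M5 Echo handles the wake word
# and speech-to-text, while Boo does the talking (placeholder entities).
alias: "Relay medication confirmation to Boo"
trigger:
  - platform: conversation
    command: "I took my medication"
action:
  # Log the dose via a helper toggle.
  - service: input_boolean.turn_on
    target:
      entity_id: input_boolean.evening_meds_taken
  # Speak the outcome on the avatar device instead of the Echo.
  - service: tts.speak
    target:
      entity_id: tts.piper
    data:
      media_player_entity_id: media_player.boo_speaker
      message: "Great job! I've logged your medication."
```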

I had been looking at how a few people have set up conversational routines by putting the media speaker into listening mode after it pushes information. The scenario would play out like this: the media speaker asks whether you have taken a specific medication, then disables itself as a speaker and listens for the person to respond. This seems like it may achieve what I’m looking for, but I haven’t had time to implement and test it yet. I imagine it will only accept very deliberate commands, but then again I don’t need an elaborate conversation; a simple yes or no should do for my purposes for now.
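Here is a rough sketch of that ask-then-listen flow using `wait_for_trigger` with sentence triggers. Entity IDs and phrases are placeholders, and note one caveat: a stock voice satellite will still need its wake word before the reply unless a follow-up listening session is started, so treat this as a starting point rather than a finished implementation.

```yaml
# Sketch of an ask-then-listen exchange (placeholder entities).
# Boo asks the question, then the automation waits up to a minute
# for a matching spoken reply.
alias: "Ask about medication and wait for an answer"
trigger:
  - platform: time
    at: "19:00:00"
action:
  - service: tts.speak
    target:
      entity_id: tts.piper
    data:
      media_player_entity_id: media_player.boo_speaker
      message: "Have you taken your evening medication?"
  - wait_for_trigger:
      - platform: conversation
        command: "yes I did"
        id: "confirmed"
      - platform: conversation
        command: "not yet"
        id: "declined"
    timeout: "00:01:00"
  - choose:
      # wait.trigger is none if the timeout elapsed with no reply.
      - conditions: "{{ wait.trigger is not none and wait.trigger.id == 'confirmed' }}"
        sequence:
          - service: input_boolean.turn_on
            target:
              entity_id: input_boolean.evening_meds_taken
```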

That’s exciting, and I am very happy to be mentioned in the post. I know it was a very quick turnaround and I was a bit late to the game, but I’m glad that people found the project interesting :grinning: