Hey everyone,
If you’re like me and have a Mac Mini (or another Apple Silicon Mac) with plenty of power under the hood, maybe left over from an upgrade or just sitting underutilized, this might be a practical way to put it to work in your smart home setup. I’ve put together a detailed guide on installing Home Assistant OS (HAOS) on a Mac Mini, running everything locally with a voice assistant powered by Ollama. No cloud services involved, which keeps things private and responsive.
The guide walks through setting up HAOS in a lightweight VM using UTM/QEMU, integrating Ollama natively on the Mac for GPU-accelerated AI, and adding speech-to-text (Whisper) and text-to-speech (Piper) for voice interactions. It’s optimized for Apple Silicon, handles common pitfalls like VM configuration and networking, and includes optional steps for music control and persistent memory.
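To give a flavor of the Ollama side, it boils down to a few commands on the Mac host (the model name below is just an example, pick one that fits your memory; the repo may wire things up slightly differently):

```shell
# Install Ollama natively on macOS (outside the VM) so it gets
# Metal GPU acceleration on Apple Silicon
brew install ollama

# Run the server as a background service; it listens on localhost:11434
brew services start ollama

# Pull a chat model for the voice assistant (example model)
ollama pull llama3.1:8b

# Sanity check: list installed models via the local API
curl -s http://localhost:11434/api/tags
```

One networking note: from inside the HAOS VM, `localhost` points at the VM itself, so HA's Ollama integration has to target the Mac's LAN IP instead; the guide covers that pitfall.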
You can find the full repo here: github.com/iXanadu/haos-on-mac
It’s broken down into phases, so you can follow along step-by-step, whether you’re starting fresh or troubleshooting an existing setup. Prerequisites include at least 16GB unified memory (32GB recommended for smoother performance with larger models), and it’s tested on macOS Sequoia.
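If you want a quick sanity check on the prerequisites before starting, the standard macOS tools cover it:

```shell
# Unified memory in GB (16 GB minimum, 32 GB recommended for larger models)
echo "$(($(sysctl -n hw.memsize) / 1024 / 1024 / 1024)) GB RAM"

# Confirm Apple Silicon and macOS version (guide is tested on Sequoia)
uname -m                 # should print arm64
sw_vers -productVersion
```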
As a bonus, if you’re interested in adding semantic memory to your voice assistant (so it can remember facts in a more natural, meaning-based way rather than just keywords), I also have a companion repo for that. It uses FastAPI, PostgreSQL with pgvector, and Ollama embeddings, and integrates directly into HA via a blueprint and Pyscript. This pairs well with the Mac setup since it leverages the same Ollama instance.
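To sketch the semantic-memory flow under the hood: a fact gets embedded once via Ollama and stored, then recall is a nearest-neighbor query against pgvector. The embeddings endpoint below is stock Ollama; the embedding model, database name, and table/column names are my placeholders, so check the repo for the actual schema:

```shell
# 1) Embed a fact with Ollama's embeddings endpoint (example model)
#    Returns JSON of the shape {"embedding": [ ... ]}
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "The spare key is in the blue planter"}'

# 2) Recall: cosine-distance search in Postgres with pgvector
#    (hypothetical database "ha_memory", table "memories",
#     vector column "embedding"; <=> is pgvector's cosine distance)
psql -d ha_memory -c \
  "SELECT content FROM memories ORDER BY embedding <=> '[0.12, -0.03, ...]' LIMIT 3;"
```

This is why it retrieves by meaning rather than keywords: nearby vectors, not matching strings, decide what comes back.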
Check it out here: github.com/iXanadu/ha-semantic-memory
Both are open-source under MIT, and I’d appreciate any feedback or suggestions if you give them a try—especially from other Mac users experimenting with local HA deployments. Let me know if you run into issues or have improvements!
Thanks,
iXanadu