Finally got around to configuring my Atom Echo

I’m running HA with Docker Compose on Linux, and I admit that setting up the voice pipeline software manually as individual Docker containers was easier than I expected.
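For anyone curious what "individual containers" looks like, here's a minimal Compose sketch of the three Wyoming services HA's voice pipeline typically uses (Whisper for speech-to-text, Piper for text-to-speech, openWakeWord for wake word detection). The model and voice names are just common examples, not necessarily my exact setup, so adjust to taste:

```yaml
# Hypothetical compose fragment -- models/voices shown are examples only.
services:
  whisper:
    image: rhasspy/wyoming-whisper
    command: --model tiny-int8 --language en
    ports: ["10300:10300"]
  piper:
    image: rhasspy/wyoming-piper
    command: --voice en_US-lessac-medium
    ports: ["10200:10200"]
  openwakeword:
    image: rhasspy/wyoming-openwakeword
    ports: ["10400:10400"]
```

Each service then gets added in HA through the Wyoming integration, pointing at the host and the matching port.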

The M5Stack Atom Echo works amazingly well. I’ve even created a custom wake word for it, which only took about an hour. The speaker sucks, as expected, but I’ve seen someone else post about wiring a headphone jack to it for an external speaker, which I might give a go.

Using the HA built-in conversation agent, everything works as expected, though it’s limited and doesn’t have much real knowledge. Dropping in Llama3.2 from my Ollama install makes it ramble on and on about the simplest task, and that’s with the default prompt provided by HA. Does anyone know what the default prompt used by HA for LLM conversation agents actually is?
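While waiting to find out what HA's default prompt is, one way to experiment with taming the rambling is to hit Ollama's `/api/chat` endpoint directly with your own system prompt. A small sketch below builds the request body; the system prompt wording is my own guess at what curbs verbosity, not anything HA ships:

```python
import json

def build_chat_payload(model: str, user_text: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint with a terse
    system prompt (hypothetical wording) to discourage rambling answers."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "You are a voice assistant. Answer in one short sentence."},
            {"role": "user", "content": user_text},
        ],
    }

# To actually send it (needs a local Ollama server running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/chat",
#     data=json.dumps(build_chat_payload("llama3.2", "Turn on the lights")).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

If a short system prompt fixes the rambling here, the same text can go into the "Instructions" field of HA's Ollama integration options.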

Llama3.3 isn’t working at all in my Ollama install; it looks like a known bug and is unrelated to HA.