Cutting the Alexa cord

I’ve got a decent setup so far, but am still nowhere close to replacing Alexa. I’m using Echo Shows with LineageOS and View Assist. I am using Extended OpenAI Conversation with Qwen3 (I’ve tried 8B, 14B, 30B, and Next-80B).

One of my biggest challenges is that one of the most common use cases I need to emulate is “Hey Alexa, play xyz music,” where it plays on the device I’m speaking to without me having to specify which device. It’s not just music but pretty much any request: I need to pass in the context that the device I am speaking to is the one that should take the action.

Another challenge is question answering. Common questions for me are things like “What is X foot-pounds in newton-meters?”, “What time does Sprouts close?”, or “How long will it take me to get to the airport?”
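(For what it’s worth, the first of those is pure arithmetic — 1 ft·lbf ≈ 1.3558 N·m — so it’s the kind of question a local LLM can usually answer with no internet access at all, unlike the store-hours or traffic questions:)

```python
# Foot-pounds (force) and newton-meters are both units of torque.
# 1 ft·lbf = 0.3048 m x 4.4482216152605 N ≈ 1.3558 N·m
FT_LBF_TO_NM = 0.3048 * 4.4482216152605

def ft_lbf_to_nm(ft_lbf: float) -> float:
    """Convert a torque from foot-pounds (force) to newton-meters."""
    return ft_lbf * FT_LBF_TO_NM

print(round(ft_lbf_to_nm(100), 2))  # 100 ft·lbf ≈ 135.58 N·m
```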

Any help would be appreciated, Home Assistant is very flexible but not entirely intuitive out of the gate and I am still learning.

I’ve just been through the same thing and gotten rid of a lot of Alexas.

I made this post: A Better (and Simpler) LLM Music Assistant Script

I made a script that the LLM can use to play music with Music Assistant. The trick for me to get it to just play something was:

  • Forget about entity IDs and just use an area (e.g. “living room”).
  • Tell it that if I don’t specify an area, it should pass in its own area.
  • In the system prompt, tell it that if I just ask to “play some music” it should play the “500 Random Tracks” playlist.
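A rough sketch of what such a script can look like — the script and field names are placeholders, and it assumes the Music Assistant integration’s `music_assistant.play_media` action; the `description` fields are what the LLM reads when deciding how to call it:

```yaml
# Hypothetical script exposed to the conversation agent.
script:
  assistant_play_music:
    alias: "Play music"
    description: >-
      Play music in an area using Music Assistant. If the user does not
      name an area, pass in the area of the device you are running on.
      If the user just asks to "play some music", use the
      "500 Random Tracks" playlist as the query.
    fields:
      query:
        description: "Artist, album, track, or playlist to play"
        example: "500 Random Tracks"
      area:
        description: "Area to play in"
        example: "living room"
    sequence:
      - action: music_assistant.play_media
        target:
          area_id: "{{ area }}"
        data:
          media_id: "{{ query }}"
```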

I’m using Qwen3:32b but I’ve tested it on the 8b version and it works great too!

I’m using the Ollama integration, which exposes a lot of tool calls automatically; I couldn’t get the custom OpenAI one to work properly.

If you’re using Music Assistant, you can set a default player; when no player is specified, it will always play on that one. I have also seen workarounds like using ESPresense to detect the room and pick the media player based on either area or tags. A bit more complicated, but it works for a lot of things — like saying “change lights to blue” and having it only apply to the room you’re in. I saw a complete setup on YouTube; let me know if you want the link.

My LLM answered the first question, but the second one makes sense. An LLM doesn’t have internet access unless you specifically set it up (HA does support MCP), so it isn’t good for current info like “what’s the stock price of X” or “who won the football game last night” — it only has the data it was trained on. You can have cloud AI services use the internet, but that costs money, at least from my understanding.
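If you do want live answers without a paid cloud service, one pattern is to expose a script that wraps a `rest_command` the model can call as a tool. A rough sketch with a placeholder URL and hypothetical names — the actual API you point it at is up to you:

```yaml
# Hypothetical: let the assistant fetch live data via a REST call.
rest_command:
  web_lookup:
    url: "https://example.invalid/search?q={{ query }}"
    method: GET

script:
  assistant_web_lookup:
    alias: "Web lookup"
    description: "Look up current information (store hours, scores, prices)."
    fields:
      query:
        description: "What to search for"
    sequence:
      - action: rest_command.web_lookup
        data:
          query: "{{ query }}"
        response_variable: lookup
      # Return the response body to the conversation agent.
      - stop: "done"
        response_variable: lookup
```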
