I have set up some custom sentences with wildcards. I can verify with the sentence parser that my commands are parsed as expected. But when I try to run the same command through the voice assist pipeline, I just keep getting the error “Unexpected error during intent recognition (intent-failed)”.
What exactly is the difference between the sentence parser in the debug window and whatever Home Assistant is doing with the assist pipeline?
I know the sentence parser doesn’t actually execute actions… so the error is probably happening in the assist pipeline when it actually runs the action, but I can’t see any detail about the error, just ‘intent-failed’.
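My guess is the actual traceback lands in the Home Assistant log rather than in the pipeline debug view. A minimal sketch for configuration.yaml that should surface it (these are standard core logger namespaces):

```yaml
# configuration.yaml: turn up logging for the pieces involved in Assist
logger:
  default: warning
  logs:
    homeassistant.components.assist_pipeline: debug
    homeassistant.components.conversation: debug
    homeassistant.components.intent: debug
```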
To add some more details, I am specifically asking Assist to play music like this: “play (artist) on office speaker”
When I view this command in the sentence parser, the parsed (artist) value shows an extra trailing space, “artist ” instead of just “artist”… That’s the only thing I can see that might cause an issue…
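For reference, this is the shape of custom sentence file I’m talking about (the file name is just an example; the intent name is the one that comes up later in this thread):

```yaml
# config/custom_sentences/en/music.yaml (file name is arbitrary)
language: "en"
intents:
  MassPlayMediaOnMediaPlayer:
    data:
      - sentences:
          - "play {artist} on office speaker"
lists:
  artist:
    wildcard: true
```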
Otherwise, it should be firing off music through the Music Assistant add-on… But it’s not doing that since it’s erroring.
Anyone have thoughts or experience with this scenario?
I stopped looking into the error when the cause wasn’t obvious… I’m kind of waiting for Music Assistant / Voice Assist to get more fleshed out at this point, since my problem seems like a bug. I don’t believe I was getting the same error, though; mine wasn’t an ‘Unknown Intent’ error as far as I recall. I assume your intent ‘MassPlayMediaOnMediaPlayer’ exists? If I get a chance to try it again soon, I’ll let you know.
So now my problems are even stranger… I reinstalled the HACS integration to play around with Voice Assist some more… it’s working, kinda… It recognizes the intent and says “Okay”, which is the proper response, but then it just doesn’t actually do anything with Music Assistant… so the intent itself somehow isn’t connected to anything…
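For anyone following along, this is roughly the wiring I’d expect to need to connect the intent to an action. The service name and fields are my assumption for the HACS Music Assistant integration, so check Developer Tools → Services for the exact schema on your install:

```yaml
# configuration.yaml: a sketch wiring the custom intent to Music Assistant.
# mass.play_media and its fields are assumptions, verify against your setup.
intent_script:
  MassPlayMediaOnMediaPlayer:
    speech:
      text: "Okay"
    action:
      - service: mass.play_media
        target:
          entity_id: media_player.office_speaker
        data:
          media_id: "{{ artist }}"  # the wildcard slot from the sentence
          media_type: artist
```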
Ugh… I feel like I’m even closer, but still so far… I request “play the artist rise against on office speaker” and my atom echo repeats back “okay, playing rise against on the office speaker”… but still nothing, it’s never actually playing the media from Music Assistant… I can manually play the music from Music Assistant to my Office Speaker just fine. The name is correct, the area is correct… I’m not sure what else to look for.
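One way I can think of to narrow it down: fire the same action by hand from Developer Tools → Services (YAML mode) and see whether the speaker reacts. If the manual call works but the voice command doesn’t, the break is in the intent handling rather than in Music Assistant. Again, the service name here is my assumption:

```yaml
# Developer Tools → Services, YAML mode: bypass the voice pipeline entirely
service: mass.play_media
target:
  entity_id: media_player.office_speaker
data:
  media_id: "Rise Against"
  media_type: artist
```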
I’m experiencing the same sort of disconnect. The LLM seems to be doing the right thing in transforming my words into my actual intent… playing this or that song on this or that speaker… but something isn’t making it all the way to the end to actually play the music.
I’m still in the process of educating myself on exactly which events I should be expecting to see and where/how to listen for and inspect them.
But yeah, it seems like the breakdown is somewhere between the conversation agent and Music Assistant.
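The simplest listener I’ve found so far: subscribe to the call_service event type under Developer Tools → Events, then issue the voice command and watch whether any service call actually fires. A sketch of what a Music Assistant hit would roughly look like (the mass domain is my assumption for the HACS integration):

```yaml
# Developer Tools → Events → "Listen to events": subscribe to call_service,
# then say the command. If nothing like this shows up, the conversation
# agent never handed anything off to Music Assistant.
event_type: call_service
data:
  domain: mass            # assumed domain for the HACS integration
  service: play_media
  service_data:
    entity_id: media_player.office_speaker
    media_id: "rise against"
```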
FYI, I finally got this working in a decent state by using the official Music Assistant blueprints. I was only able to get the script blueprint to work; the LLM automation still doesn’t work for me for some reason. The “local only” option works OK, but it’s pickier about what I can say.