Expose surface trigger via google_assistant

tl;dr: When using google_assistant, is there a way for the device (e.g., a Google Home) that captures a voice command to be dynamically selected as the target that the TTS response is sent back to?

Howdy everyone!

First post so please indulge me as I begin by noting:

(1) Having a blast getting home-assistant working. It’s a joy that it’s coded in Python and so (relatively) easy to tweak.

(2) My heartfelt thanks to the developers, contributors, and user-community for pushing the project along – I also appreciate that the user-forum is a friendly, respectful place. It makes learning and problem-solving a lot easier.

Now down to the question: I’ve succeeded in wiring in google_assistant (standard Linux installation) and can say things like “OK Google, is the garage door open?”, which launches a script that checks the garage door (a brute-force solution, but one I can post later if anyone is interested). The script ends by calling the google_say TTS service, which reads back the garage door state (e.g., “The garage door is closed.”).

The problem is that I’ve had to hard-code WHERE the response is spoken; right now it always goes to a fixed group of speakers (roughly sketched below).
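For context, here’s roughly the shape of the script today. This is a minimal sketch; the entity IDs (cover.garage_door, group.downstairs_speakers) are stand-ins for my real ones:

```yaml
# Sketch of the current setup. Entity IDs are placeholders for my actual ones.
script:
  garage_door_status:
    sequence:
      - service: tts.google_say
        data_template:
          # Hard-coded: the answer always comes out of this speaker group.
          entity_id: group.downstairs_speakers
          message: >-
            {% if is_state('cover.garage_door', 'closed') %}
              The garage door is closed.
            {% else %}
              The garage door is open.
            {% endif %}
```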

What I would love to figure out is whether the device that captures the voice command (I believe Google calls this the “surface trigger”) can be passed forward, i.e., as an entity_id that google_say can use as the media_player output.
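Purely as an illustration of what I’m after (and assuming I understand script variables correctly), something like the following is what I’d love to end up with. target_player is a made-up variable name; the open question is how the Google Home that heard the command could ever supply it:

```yaml
# Hypothetical target: same script, but with the output player passed in.
script:
  garage_door_status:
    sequence:
      - service: tts.google_say
        data_template:
          # 'target_player' is a made-up variable; something would have to know
          # which device captured the command and pass its entity_id in here.
          entity_id: "{{ target_player }}"
          message: "The garage door is {{ states('cover.garage_door') }}."
```

The script could then be called with service data like target_player: media_player.kitchen_home (again a placeholder), if only something upstream could tell Home Assistant which device to name.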

I have a sense that working directly in Actions on Google would make this possible, but I don’t know whether whatever it exposes can be passed back to Home Assistant.

Any ideas/examples/solutions appreciated!

-Matt

Did you ever figure this out?

Hi Fabio! I’m sorry I missed your post – I kinda stopped checking in at the forum, as threads roll over very quickly and seem to get lost in the shuffle. :frowning:

In any case, I have not yet found an answer. One direction that seemed promising was the Dialogflow integration, which I believe may be able to recognize which device a command was initiated from.
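For what it’s worth, my (possibly mistaken) understanding is that with Dialogflow the spoken reply is returned in the webhook response and read out by whichever Assistant device asked the question, which would sidestep the media_player targeting problem entirely. A rough sketch, where GarageDoorStatus and cover.garage_door are placeholder names:

```yaml
# Rough sketch of the Dialogflow route. Intent and entity names are placeholders,
# and it assumes a Dialogflow agent whose fulfillment webhook points at this
# Home Assistant instance.
dialogflow:

intent_script:
  GarageDoorStatus:
    speech:
      text: >-
        {% if is_state('cover.garage_door', 'closed') %}
          The garage door is closed.
        {% else %}
          The garage door is open.
        {% endif %}
```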

This is speculative, but I also wonder whether the kind of HA/Google Assistant integration I’m looking for is well-aligned with HA’s long-term philosophy. In other words, I’m hoping for a degree of interoperability and two-way communication between the two platforms that may not be totally consistent with the notion of the HA user owning and completely controlling their data.

In the end, I’m fairly confident that a solution, perhaps an ugly and complicated one, is feasible. Please do post back if you have any additional ideas!

-Matt