Is there anything for native HA automations, like Node-RED applestrudel? I need HA to process spoken commands using regex.

I have contextually aware Alexa flows set up in Node-RED, so that I can say things like, “Turn on the fan,” and it will determine which fan by checking which Alexas heard it and then parsing the command.

Is there a way to do this natively?

I’ve experimented with room-aware scripts based on the last_called_summary attribute with a state-based template sensor. It works, but isn’t 100% reliable. I still use it for things that only I care about, but for things that other people in the house use it’s still more reliable (and less likely to generate error responses from Alexa) to expose entities to Alexa and add them to their respective areas in the Alexa app or to use the Routine+Script method.

  - sensor:
      - name: Alexa Last Summary
        unique_id: alexa_last_summary_0001
        state: |-
          {% set media = states.media_player | selectattr('attributes.source', 'defined')
          | selectattr('attributes.last_called', 'defined') | selectattr('attributes.last_called', 'eq', true)
          | map(attribute='entity_id') | first %}
          {{ state_attr(media, 'last_called_summary') | regex_replace(find='alexa ', replace='') }}
        attributes:
          area_name: |-
            {{ area_name(states.media_player | selectattr('attributes.source', 'defined') | selectattr('attributes.last_called', 'defined')
            | selectattr('attributes.last_called', 'eq', true) | map(attribute='entity_id') | first) }}
        availability: |-
          {{ (states.media_player | selectattr('attributes.source', 'defined') | selectattr('attributes.last_called', 'defined')
          | selectattr('attributes.last_called', 'eq', true) | map(attribute='entity_id') | first) is defined }}

In the thread I linked, it’s actually a bit more than that: applestrudel receives the text of the Alexa command, which can then be parsed with a regex like (set|turn) (the )?fan ((on|to) )?low, and that GREATLY reduces the complexity. Is there anything like that available?
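For what it’s worth, something in that direction can be sketched natively with HA’s Jinja regex tests. This is only a rough illustration, assuming the last-summary template sensor above, and the fan entity name is made up:

```yaml
# Sketch: react when the last Alexa summary matches a regex.
# sensor.alexa_last_summary and fan.office_fan are placeholder names.
- alias: Fan to low via regex
  trigger:
    - platform: state
      entity_id: sensor.alexa_last_summary
  condition:
    - condition: template
      value_template: >
        {{ trigger.to_state.state is match('(set|turn) (the )?fan ((on|to) )?low') }}
  action:
    - service: fan.set_percentage
      target:
        entity_id: fan.office_fan
      data:
        percentage: 33
```

The drawback is still the update lag and reliability of the last_called data, so treat this as an experiment rather than a drop-in replacement.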

If the intents you want to use already exist in Conversations you can use something like the following (not thoroughly tested, and kind of inflexible) automation:

alias: Process Alexa with Conversations
description: ""
trigger:
  - platform: state
    entity_id:
      - sensor.alexa_last_summary
condition: []
action:
  - service: conversation.process
    data:
      agent_id: homeassistant
      text: "{{ trigger.to_state.state }} in {{ trigger.to_state.attributes.area_name }}"
mode: queued

You can build your own custom intents and intent scripts to supplement those that are built-in:
Intent Recognition
Intent Scripts
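As a loose example of what a custom intent can look like (the sentence file path, intent name, and fan entity here are all made up):

```yaml
# config/custom_sentences/en/fan.yaml — sentence templates for a custom intent
language: "en"
intents:
  SetFanLow:
    data:
      - sentences:
          - "(set|turn) [the] fan [(on|to)] low"
```

```yaml
# configuration.yaml — handle the intent with intent_script
intent_script:
  SetFanLow:
    speech:
      text: "Setting the fan to low."
    action:
      - service: fan.set_percentage
        target:
          entity_id: fan.office_fan
        data:
          percentage: 33
```

The bracketed/alternative syntax in the sentence template plays a similar role to the regex mentioned earlier.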


Is there any way I can just monitor the Alexa text? Your sensor.alexa_last_summary sounds interesting, but where does that come from?

It’s just a template sensor I made to see if it would work… the data is available from the Alexa Media Player integration.

If you decide to go the custom intent route, it might be better to not remove the “alexa” from the last_called_summary value so you can use it as part of the intent recognition.


Just read your links on intents. Thanks! While I could just recreate things using the last_called_summary (thanks, didn’t know that was there!), I suppose I should probably take the time to learn about these conversation intents, as I have a long-term goal (like all of us) to get rid of Alexa.


Yep, that’s exactly where I am… trying to get up to date and figuring out conversations, intents, etc in hopes of phasing out Alexa once there’s a workable local option.

I set up the last_called_summary sensor a while back to see if it would work as a reliable alternative to exposing entities to Alexa and/or using Routines to trigger HA scripts. As I said earlier, it works but isn’t 100% reliable, and there’s the issue of Alexa returning an error message since she doesn’t understand the intent or doesn’t have an end device to point her actions at. The only way around that seems to be to set up routines for the command phrases… so there’s no reduction in setup to get similar results.

In the video from the last release party (2023.7), Paulus said they’ll be looking into area names for conversations, so they get templatable (is that even a word?). That would work nicely for your goal, as you could easily add the area name to the conversation.

I have something like this running in Rhasspy, but there I get the event from Rhasspy and add the area name based on the satellite that is asking (the satellite is transmitted as well). But I’m quite sure this will arrive in HA sooner rather than later. Maybe there is something coming up in the “Year of the Voice” livestream that will be broadcast on July 23…


So I dug into this a bit more. It seems that Alexa can’t be used with Assist, and the last_called_summary attribute doesn’t update very quickly.

Is there any way to process Alexa statements quickly in HA that doesn’t involve Node-RED applestrudel (which no longer works at all)?

Anyone got any ideas?

I am an absolute newbie, so please forgive me if this does not make sense.

If I were to install occupancy or motion sensors and link these to a specific Echo device, I might then be able to submit room-aware routines from Node-RED to Alexa? Any thoughts on this?

Is this likely to be a more sustainable solution, given that many existing solutions (last Alexa, remote 2, Applestrudel) all seem to no longer work?

Yes, you can use room presence/occupancy to help define which room Alexa should speak out of, but it will only be as reliable as your occupancy is accurate. Even with accurate and reliable presence detection it still might not pick the correct room if there are multiple rooms that are occupied.
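As a sketch of that idea (the entity and event names are invented, and it assumes the Alexa Media Player notify service is available):

```yaml
# Pick which Echo responds based on room occupancy.
# All entity names below are placeholders.
- alias: Announce in the occupied room
  trigger:
    - platform: event
      event_type: custom_announce   # hypothetical event carrying a message
  action:
    - service: notify.alexa_media
      data:
        target: >
          {% if is_state('binary_sensor.office_occupancy', 'on') %}
            media_player.office_echo
          {% else %}
            media_player.living_room_echo
          {% endif %}
        message: "{{ trigger.event.data.message }}"
        data:
          type: tts
```

As noted, this is only as good as the occupancy data, and it breaks down when several rooms are occupied at once.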

Personally, I’ve disabled most of my room-aware Alexa automations. They’re just too frustrating for other people in the house. HA Assist has added context-awareness and media player intents should be rolling out next week, so I’ll be putting any future effort there…

Between Amazon’s continuous efforts over the past few years to lock down and reduce traffic to and from the API, the fact that they laid off a large number of their Alexa devs, and the fact that big changes are likely in the near future when Alexa functionality is split between paid and unpaid versions… it seems pretty inevitable that they will become even less functional for our uses.

Where do I go again to see the upcoming release notes??? I’m very interested in reading this!