This method is working perfectly here as well. I enhanced this solution by adding a validity check with my motion sensors. It's necessary because when the Echo plays music, it gets updated and therefore triggers the python script. Now I only react when the last motion in that room was within the last 5 minutes.
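For anyone wanting to replicate that check, here is a minimal sketch of just the 5-minute window comparison. The function name and the wiring into an actual automation are my own placeholders, not code from this thread:

```python
# Sketch of the "last motion within 5 minutes" validity check.
# In Home Assistant this comparison would typically live in a condition
# template; here it is plain Python so the logic is easy to see.
from datetime import datetime, timedelta, timezone

MOTION_WINDOW = timedelta(minutes=5)  # only react to "fresh" motion

def motion_is_recent(last_motion, now=None):
    """Return True if the last motion event falls within MOTION_WINDOW."""
    now = now or datetime.now(timezone.utc)
    return now - last_motion <= MOTION_WINDOW
```

With this in place, the python script would simply bail out early whenever `motion_is_recent(...)` is False, so a music-triggered Echo update in an empty room is ignored.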
Thank you again for another great mention! I honestly had no idea about the Alexa Devices integration until you mentioned it. It does state that it uses the official API, so I gave it a try, and it works great.
I recently moved my Lambda functions to a self-hosted Flask Docker container on my Unraid server, and things have been running great. I don’t, however, use your python conversion script from Alexa device ID to HA echo entity; I just send the spoken command directly to the echo that made the request, through the Flask app that hosts the main python script.
Although I may still use the python conversion script if I want to send an announcement with Alexa Devices, since I currently can only send TTS to the device that invoked the skill.
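For anyone curious what the self-hosted side could look like: a rough, hypothetical sketch of such a Flask endpoint, assuming the skill posts the standard Alexa request JSON. The route name and handler are my own placeholders, not the poster's actual code:

```python
# Hypothetical Flask endpoint standing in for the AWS Lambda: it receives
# the Alexa skill's JSON payload and pulls out which Echo made the request,
# so the spoken reply can be sent straight back to that device.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/alexa", methods=["POST"])
def alexa():
    event = request.get_json(force=True)
    # The Echo's device ID lives under context.System.device.deviceId
    device_id = event["context"]["System"]["device"]["deviceId"]
    app.logger.info("Skill request from device %s", device_id)
    # ...the main python script would build and send the spoken reply here...
    return jsonify({"version": "1.0", "response": {"shouldEndSession": True}})
```

Running this in a Docker container behind HTTPS and pointing the skill's endpoint at it is the general idea; the real setup will differ in routing and authentication.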
Can anyone provide a little additional guidance on how to incorporate the R2D2 ghost attribute into HA? I added the automation and copied the python script into the python folder and restarted HA. I see the ghost attribute when looking at the echo entity state under developer tools:
Is the automation running? The automation will run on every Home Assistant restart and every time you activate an Alexa device, assuming you have the triggers right in your automation. What do the traces for the automation say? Is your input_text entity getting populated?
With the Alexa Devices (AD) integration, how would you determine in an automation which device was last called, without using Alexa Media Player (AMP)? I’m trying to figure out how to direct AD notify/speak to the Echo that called the automation.
I switched to AD because of its use of an official API.
I created my own method to use the “Last Called Device”. I did it by creating a custom Alexa skill using Amazon Web Services (AWS) and Amazon’s Developer Console. In AWS, I (with immense help from ChatGPT and Grok) created a Python Lambda script that calls a Home Assistant webhook. I also created a python_script for Home Assistant, as well as a handful of scripts, an input_text helper, a template sensor, and one master automation.
It works like this: when I invoke the custom skill with an intent through Alexa on any of my Dots or Shows, the Lambda script sends a JSON payload to my automation via webhook. My automation then parses the JSON for the specific intent invoked and for the device ID of the Alexa device I just used (using choose block conditions), fires off a script, and writes the device ID to the input_text helper. The beginning of every script runs the python_script in HA, which ties the Alexa device ID in the input_text helper to the notify speak entity of the Alexa device I last used (from the Alexa Devices integration) and writes that to the template sensor. My script then has that Alexa device give me whatever information I programmed it to give, like a laundry status, a door status, or adding/removing an item from the Home Assistant shopping list.
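If it helps to picture the AWS side, here is a rough, hypothetical sketch of a Lambda handler doing the forwarding described above. The webhook URL, webhook ID, and field names are placeholders, not the actual repository code:

```python
# Hypothetical Lambda handler: pull the invoked intent name and the Echo's
# device ID out of the Alexa skill request, then POST them as JSON to a
# Home Assistant webhook so an automation can parse them with choose blocks.
import json
import urllib.request

HA_WEBHOOK_URL = "https://your-ha.example/api/webhook/alexa_last_called"  # placeholder

def build_payload(event):
    """Extract the intent name and the requesting Echo's device ID."""
    request = event.get("request", {})
    return {
        "intent": request.get("intent", {}).get("name", ""),
        "device_id": event["context"]["System"]["device"]["deviceId"],
    }

def lambda_handler(event, context):
    payload = build_payload(event)
    req = urllib.request.Request(
        HA_WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)  # fire the webhook at Home Assistant
    # Minimal Alexa response so the skill session ends cleanly
    return {"version": "1.0", "response": {"shouldEndSession": True}}
```

On the HA side, a `webhook` trigger on the matching webhook ID would expose `trigger.json.intent` and `trigger.json.device_id` to the choose blocks.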
All of the code is available in my repository on GitHub. The skill is built just like the Alexa Smart Home skill on Home Assistant’s website, except that for this skill you have to create intents in the Alexa Developer Console (each intent requires 3 ways of asking a question). I have yet to write a tutorial for it, but the code is here -
You could, instead of using all of the code, gut it and use only part of the Lambda script, the python_script, and part of the automation to query just for your last called device.
I think the approach is very good (albeit quite complex), especially because of the various options (e.g. shopping list).
The official API is used, which in my opinion is the decisive advantage over using AMP. On the other hand, notify.alexa_media is used in the scripts. Doesn’t that come from the AMP? Are you using both?
I switched all announcements to the action notify.send_message. I created a script for that, so it was easy to switch. Regarding last_called_alexa, I switched to another solution, which is still based on the media_player entities, as this is obviously deprecated. So the media_players are only good for volume changes and for detecting a state change.
I do currently have notify.alexa_media as an action in the scripts, but it is not enabled. It’s there as a failsafe: should AD have any issues like it did when it first launched, I could fall back to AMP. AD has stabilized and remains stable at its current version. The aioamazondevices library currently has several PRs open to add new features like media controls, so I expect to fully migrate to AD and uninstall AMP completely once those are merged.
Nice work on building your own tailored skill. If I understand correctly, you’re using a custom Alexa skill and defining intents to invoke it? Since this is a custom skill, I don’t think I could use it to invoke a Home Assistant script and also receive the device_id that was used to activate the script, without also making matching intents every time I wanted to make a new script.
Do you know if I could tell Alexa in a custom routine to call a scene exposed through the regular Alexa integration and then call a custom skill to update the last active device within HA?
You can make the skill do anything you want in HA. The skill is invoked simply by using its invocation name and a programmed intent for the skill. It sends a JSON payload via webhook back to HA. You use an automation in HA to parse the JSON payload for the programmed intent and for the device_Id of the Echo device used, using choose blocks. At this point, you can have the action in the choose block do whatever you want. I have mine call scripts that are specific to each intent. The device_Id gets used because at the beginning of each of my scripts is an action that launches a python_script: a device map that ties each device_Id of my Echo devices to their respective notify.speak entities from the Alexa Devices core integration (this is how I makeshift my own version of a last_called device).
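A hypothetical sketch of such a device-map python_script follows. The device IDs and entity names are invented for illustration; the real mapping lives in the GitHub repository:

```python
# Sketch of the device-map python_script: resolve a raw Alexa device_Id
# (written to an input_text helper by the webhook automation) to the
# matching notify entity from the Alexa Devices integration.
DEVICE_MAP = {
    "amzn1.ask.device.KITCHENID": "notify.kitchen_echo_speak",  # placeholder IDs
    "amzn1.ask.device.OFFICEID": "notify.office_echo_speak",
}

def resolve(device_id):
    """Return the notify entity for a device_Id, or '' if unmapped."""
    return DEVICE_MAP.get(device_id, "")

# Inside Home Assistant's python_script sandbox (where `hass` is provided),
# the script would continue roughly like this:
#   device_id = hass.states.get("input_text.last_alexa_device_id").state
#   hass.states.set("sensor.last_called_alexa_notify", resolve(device_id))
```

The template sensor then simply mirrors that resolved entity, so every script can target "the Echo I just spoke to" without AMP's last_called attribute.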
Regarding your question: tbh, I don’t know. I find it difficult to rely on Amazon routines for anything but custom speech phrases, or for creating redundancy for HA entities that are dumb devices turned semi-smart, or for Amazon-only devices bridged into HA using binary_sensor and input_boolean helpers.
Just to be clear for anyone reading this, AMP and Alexa Devices both use the unofficial API endpoint /nexus/v1/graphql. AMP uses many others, which AD will likely access as well as it matures. “Web access” vs. Alexa Developer Console? “Web access” is a very general term; it often just refers to HTTP or HTTPS requests. The Alexa Developer Console is, well, just that: a developer console, which a user accesses to build and configure things. Amazon “anything” has a myriad of API endpoints for accessing information, some “undocumented” internal ones and some documented. So saying AMP is “web access” while AD is the Alexa Developer Console is, in my opinion, inappropriate if not just plain wrong. They both access /nexus/v1/graphql via “web access”. Note: the Alexa app does as well.
Interesting… I was going off what one of the developers had said in another post a few months ago, so I was led to believe they were accessing information differently. To their credit, they didn’t really expand on the subject other than AD is accessing the API that the Alexa App uses, but I see AMP does the same. Thanks for the correction.