Alexa talking to home assistant

I finally got my Echo when it was on sale for Black Friday, and after a lot of fighting with SSL certificates, I got it talking to Home Assistant :smiley: (with some help from this project:

I am curious about your setup. What component types do you have, and what are you accomplishing with Alexa? Obviously turning lights on and off is expected, but I am wondering if you are getting into anything more complex?

One thing I have been wanting to do is set up a test where the speech-to-text output is written out or displayed, so I can see how accurate it is. At some point I’m going to get voice going with the Echo and a home endpoint, but I am also interested in doing some things with home media. I’m pretty much already set with the lights, since they are all on Wink and Hue hubs.

I run a Subsonic server, which has an API. I could spit out a list of all artists to a file and put that into the utterances. After that, I could build a list of all albums by artist, and then all songs by album. That should keep the utterance lists below the limit. I would have to talk my way to what I want, but it dawned on me that I cannot stream music with the Echo :frowning: Or at least I have yet to find a way to do it. Surprisingly, when I Google this I don’t really see anyone talking about that specifically, or trying to work around it. Obviously she can stream music, since that’s built in, but they must be doing something special. Although, I have never actually done a test run to see if she would play a stream; I just gathered she won’t from what I have read.
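To illustrate the artists-to-utterances idea, here is a rough sketch. It assumes Subsonic’s REST `getArtists` endpoint with JSON output; the server URL, credentials, and the `PlayArtistIntent` name are all placeholders I made up, not anything from an existing skill:

```python
# Sketch: turn a Subsonic artist list into Alexa sample utterances.
# Server URL, credentials, and intent name below are hypothetical.
import json
import urllib.request

def fetch_artists(base_url, user, password):
    """Call Subsonic's getArtists REST endpoint and collect artist names."""
    url = (f"{base_url}/rest/getArtists?u={user}&p={password}"
           "&v=1.13.0&c=alexa-utterances&f=json")
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return [artist["name"]
            for index in payload["subsonic-response"]["artists"]["index"]
            for artist in index["artist"]]

def artists_to_utterances(artists, intent="PlayArtistIntent"):
    """One sample utterance per artist, in the ASK utterances-file format."""
    return [f"{intent} play songs by {name}" for name in artists]

if __name__ == "__main__":
    # Stubbed list here instead of hitting a live server:
    for line in artists_to_utterances(["Miles Davis", "Radiohead"]):
        print(line)
```

The same pattern would repeat for albums and songs, each feeding its own slot or utterance list.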

Anyway, just curious about how you are using the Echo with Home Assistant. And thanks for the link, I had not seen that particular GitHub project before.


The home-assistant stuff I’m working on is here, if you want to take a look. I also have a Wink hub, so I’m not using the Alexa/HA integration for anything the Wink hub can already do (it’s easier to say “Alexa, turn on the bedroom lights” than “Alexa, ask Home Assistant to turn on the bedroom lights”). Right now I’m using it to report location, energy usage (using the efergy component in HA), and locks. My lock is hooked up to the Wink hub, but Alexa’s native home automation doesn’t handle locks. I want to do more, but I haven’t found anything else I need it to do yet. I may get it doing scenes next.

There’s a guy in the Amazon forums who’s done a JRiver Media Center skill, but it doesn’t look like he’s released the code anywhere. There’s also this Plex project: that I think is next on my list. They’re dealing with TV shows, but you might find something useful there. If you ever do write something for Subsonic, let me know…I have one of those too!

@Miniconfig: since your work is based on providing an API endpoint that Alexa can talk to, what would you think about integrating it into Home Assistant as a component? I can guide you through the steps.

To get started, you can register a URL on the Home Assistant server like this:
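(The exact registration API has changed across Home Assistant versions, so here is only a hedged sketch of the handler side of such a component; the `LocateIntent` name and the reply text are made up, and the wiring into Home Assistant’s built-in HTTP component is left as a comment.)

```python
# Sketch of a handler such a component could register with Home Assistant's
# HTTP component (the registration call itself varies by HA version).
# Intent name and reply text are hypothetical examples.
import json

def handle_alexa_request(body: str) -> str:
    """Take an Alexa Skills Kit request body, return an ASK response body."""
    request = json.loads(body)
    intent = request["request"]["intent"]["name"]
    if intent == "LocateIntent":
        # In a real component this would be looked up from a device_tracker.
        text = "Paulus is at home."
    else:
        text = "Sorry, I do not know that intent."
    response = {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
    return json.dumps(response)

if __name__ == "__main__":
    sample = json.dumps({"request": {"intent": {"name": "LocateIntent"}}})
    print(handle_alexa_request(sample))
```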

Thank you for the additional information, miniconfig. I appreciate you sharing your work.

I see the idea of integrating this into Home Assistant has come up. I like that! I hope it is something that can be worked out.

@balloob: I’d love to, but it will probably take significant guidance. The project I started with let me avoid the whole “having to learn JavaScript” thing :confused:. That is, however, a knowledge deficit I would really like to remedy. Having it integrated into Home Assistant is a definite plus, and I’ve been thinking about ways to increase the user-friendliness, but as the ASK is today, I don’t think this will ever be a drop-in solution for anyone. Best-case scenario, you still have to create an Amazon developer account and upload the utterances, intent schema, etc., although I do think we could do some work to ease their creation.

I would also want to do it in a way that makes it easy to extend. For example, the presence stuff is fairly easy if your name happens to fall in Amazon’s list of common US names. For something like sensors, it’s a bit different. I’d like to be able to set it up to have, say, a generic intent for sensors. I could create utterance examples for common things, so that you could say (ask home assistant…) “What’s the sensor humidity?”, “What’s the value of the temperature sensor?”, etc. But then make it easier for someone who was a bit more comfortable getting their feet wet to add a “humidity_sensor” intent, so that they could more naturally ask it (ask home assistant) “What’s the humidity upstairs?”, “What’s the humidity in the basement?”
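For the generic-sensor case, something like the following could work. This is only a sketch of what the uploaded files might look like; `SensorIntent`, the slot name, and `LIST_OF_SENSORS` (a custom slot type you would fill with your own sensor names) are all hypothetical:

```
Intent schema (intents.json):
{
  "intents": [
    {
      "intent": "SensorIntent",
      "slots": [ { "name": "Sensor", "type": "LIST_OF_SENSORS" } ]
    }
  ]
}

Sample utterances (utterances.txt):
SensorIntent what's the {Sensor} sensor reading
SensorIntent what's the value of the {Sensor} sensor
```

A more specific intent like “humidity_sensor” would then just be another entry in the schema with its own, more natural utterance lines.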

Does that make any sense?

Yes, right now it will not be an easy process to get it set up, but that’s fine for an MVP! Small steps at a time.

It actually looks like it is not JavaScript that you’re uploading, just a JSON configuration file and a text file with potential sentences. I can see a basic component that maybe just supports the LocateIntent. For a version 2, we could use the new template support to work with intents, something like this:

    TemperatureIntent: "The temperature is {{ sensor.temperature.state }} degrees"

This would only be for offering information from Home Assistant. For turning devices on/off we should use the Alexa Lighting skill API, something that I know Maddox is already working on.

Well, the stuff you upload is JSON for the intents, and then a text file for sample utterances…When I mentioned JavaScript, it was because currently Amazon’s Alexa is talking to a Flask app running on my local network, and I had assumed replacing Flask with Home Assistant would require some additional JavaScript knowledge, since that’s what Home Assistant uses for the web interface. Although, from the link in your previous post, it sounds like I was wrong. I’ll start looking into it.

I had some extra time this weekend and gave Alexa a try. This is the result… :smiley:

@Balloob, can you give me the config input I have to use on the Alexa side for these intents?
I am new to intents and trying to learn. Thanks.