In DialogFlow, what does your fulfillment URL look like? Something like this?
https://yourdomain.duckdns.org/api/webhook/your_webhook_id
No, mine looks different because I use Nabu Casa. Requests are hitting my Home Assistant, though, since it tells me that the lms_dialog_intent is unknown.
If your HA installation is accessible from the internet directly via an https link (i.e. DuckDNS or some other way), why not give the tried-and-tested webhook method a shot, as per the original instructions, and see if you get the same result. The same error would rule out any issue with the nabu.casa hooks.
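Either way, you can test whether requests make it through at all by posting to the webhook yourself. A minimal sketch, assuming the DuckDNS URL above and the Dialogflow V1 payload shape that the 2019-era HA integration matches on (the action field is what gets looked up as the intent; treat the exact field names as assumptions):

    # Post a hand-rolled Dialogflow-style request straight to the HA webhook.
    # Any reply from HA, even an intent error, proves the request arrived.
    curl -i -X POST https://yourdomain.duckdns.org/api/webhook/your_webhook_id \
      -H "Content-Type: application/json" \
      -d '{"result": {"action": "lms_dialog_intent", "actionIncomplete": false, "parameters": {}}}'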
Also, turning on debug logging and reviewing what happens after a call to HA may help you understand why the intent is not being found. The following logger setting will give some insights:

    logger:
      default: critical
      logs:
        homeassistant.components.shell_command: debug
        homeassistant.components.intent_script: debug
        homeassistant.components.dialogflow: debug
I assume you did check your config using the config validation tool (https://yourdomain.duckdns.org/config/core) to ensure there are no typos.
You can also check the loaded components in HA at https://yourdomain.duckdns.org/dev-info to ensure that the intent_script component is loaded.
Sorry but that’s all I got at this time. Ynot
Thank you for your input ynot, I'll give it a go.
LMS Controls Project Updated - January 18, 2019
For those interested, the LMS Controls project for voice control of your LMS server and players via Google Home / Google Assistant (see here) has been updated to make use of a package file (an all-in-one file) that holds the bulk of the programming for the LMS Controls project.
New features / upgrades include:
- Audio feedback of query results and player status
- Created an env.sh file which contains the bulk of the shell file customization details (much less editing this way :))
- Better error checking on query results
- Support for contractions (it’s, don’t)
- The ability to handle both secured and unsecured LMS installations.
The installation and troubleshooting documents have also been updated.
Finally, based on much feedback, I created a Hass.io / Home Assistant installation document for multiple platforms to help users new to Hass.io / Home Assistant. Users who are willing to document their setup on other platforms are welcome to submit their how-to for inclusion.
The GitHub link is: https://github.com/ynot123/LMS_Controls
Thanks and enjoy.
Ynot.
January 21, 2019 - LMS Controls Project
A minor update to the shell files and the package file was required; basic details are as follows:
Shell files:
- Fixed a dangling quote / double quote in all shell files except env_var.sh. CRITICAL: this prevented proper posting of shell query results on some systems.
- Fixed a hard-coded URL in qry_player_stat.sh. CRITICAL.
Package file:
- Fixed some duplicate aliases. NON-CRITICAL.
Sorry for the inconvenience. Ynot.
Just a quick heads-up to anyone updating their Home Assistant configuration to version 0.86 (released 23 January 2019). Changes to automation triggers mean that interval triggers are now separated from the ‘time’ platform and moved to a new ‘time_pattern’ platform.
This necessitates a very simple change to LMS Controls. Open the file lmscontrols.yaml in the /packages/ subfolder and, in the automation section, change platform: time to platform: time_pattern. This will ensure that the “LMS GUI Update Player Values” trigger passes the configuration check; a one-liner for the edit is sketched below. Thanks, Ynot.
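For anyone comfortable in a shell, here is a minimal sketch of that edit, assuming the package file lives at /config/packages/lmscontrols.yaml (back it up first):

    # Back up the package file, then swap the trigger platform in place.
    # The end-of-line anchor ($) keeps the edit from touching lines that
    # already read "platform: time_pattern".
    cp /config/packages/lmscontrols.yaml /config/packages/lmscontrols.yaml.bak
    sed -i 's/platform: time$/platform: time_pattern/' /config/packages/lmscontrols.yaml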
Thanks ynot for making this happen!
I am a complete newbie at Home Assistant & GitHub but have some experience with Squeezebox and Linux.
I am keen to try out voice commands for my squeezebox system and I have a couple of questions:
- Can I use voice commands to query and play my own music library?
- I have Tidal. How much code would I have to write to interface to it so as to get similar functionality to Spotify?
I hope this makes sense,
Best
-thomas
Welcome. Hope you find this useful. Starting to get quite a few installs now.
- Yes, this will query your LMS music library and queue up songs, albums, artists, playlists, etc.
- As for Tidal, it all comes down to how the query functions can be handled by their API or other web service. My first step would be to see how it is handled by LMS and whether you can clone that method for use in HA and/or a script. For Spotify it was pretty simple: a client ID / password method, and a query for what you want returns a JSON structure with the proper links, which you queue up to the LMS server (see the sketch below).
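To make that concrete, here is a rough sketch of the Spotify-style flow using Spotify’s public Web API (the Tidal equivalents would have to be dug out of Tidal’s own documentation; the album searched for is just an example):

    # Get an access token using the client-credentials grant.
    TOKEN=$(curl -s -X POST https://accounts.spotify.com/api/token \
      -u "$CLIENT_ID:$CLIENT_SECRET" \
      -d grant_type=client_credentials | jq -r '.access_token')
    # Search for an album; the returned URI is what gets queued up to LMS.
    curl -s -H "Authorization: Bearer $TOKEN" \
      "https://api.spotify.com/v1/search?q=breakfast+in+america&type=album&limit=1" \
      | jq -r '.albums.items[0].uri'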
If I had a Tidal account, I would look into it as you’re the second or third person to ask about this one.
Good luck and let me know if you need some guidance. Ynot.
I have now gone through the installation, and I am having a strange problem: I cannot access the webpage which disables "testing". I am in contact with Dialogflow support about it.
In despair I tried to publish a beta version. Below is the response I got. You might find some of it useful.
I would like to play around with Dialogflow some more. The first exercise will be to come up with a Danish version. Next I would try to go for an English version that can handle Tidal.
I access Tidal with an LMS app. Would you know how I can pick up the CLI instructions that I would have to use? I suppose you did something equivalent when you made the Spotify solution.
Question: I listen to Radio Paradise with an LMS app (try it, it is great). Is there a way to launch apps with voice commands?
Thanks for your work - I really look forward to getting rid of the "test messages".
–
- Your privacy policy URL is invalid. Your privacy policy URL must link to a valid website containing a privacy policy specific to your Action.
- During our testing, we found that your Action would sometimes leave the mic open for the user without any prompt. Make sure that your Action always says something before leaving the mic open for the user, so that the user knows what they can say. This is particularly important when your Action is first triggered.

Thank you for choosing Actions on Google. We are sorry to inform you that your app has been rejected.

- Your privacy policy URL must link to a valid website containing a privacy policy specific to your app. Please make sure the web page or document you use contains the app name, describes how the app handles user data, and is a public document.
- After the app responds to "yes" / "no" / "set volume followed by 0", the mic remains open without a prompt back to the user. At this point, kindly consider either prompting the user with further options or closing the app. Below are some examples of how to fix the open microphone scenarios that we found:
User: "Play album breakfast in america"
App: "Playing album breakfast by america with shuffle off. Would you like me to follow up?"
User: "yes"
App: "OK, your play album request returned no match so I left the spisestuen player queue as is. Is there anything else?"

User: "Play playlist classic rock in the garage."
App: "Playing playlist classic rock on the current player. Would you like me to follow up?"
User: "no"
App: "Ok."
(App left the conversation)

User: "set volume followed by 0"
App: "Setting the volume to 0 for the current player. Is there anything else?"
Note: We recommend Direct Action (DA) for the best user experience within smart home integration compared to a Conversational Action (CA). Here is how you get started upgrading your integration to DA: https://developers.google.com/actions/smarthome/. Please contact [email protected] if you have any questions.
–
In addition, we have a few tips to help you build a higher quality app! When testing your app, we noticed the following areas where quality could be improved.
● Find different ways for your app to be triggered without using the app’s name. Varied triggers help users use and discover your app as they naturally go through their day. For example, if a user says “OK Google, I’d like to learn more about animals,” their Google Assistant would respond, “OK, here’s All About Animals.” To make your actions more discoverable and intuitive, learn more about app invocation: https://designguidelines.withgoogle.com/conversation/conversational-components/greetings.html.
● The VUI should use conversational context (instead of instructions) to help the user intuit what they can say. For example, instead of the app asking, "Would you like to learn about kittens? You can also switch to learn more about dogs, rabbits, and other animals," it could more concisely ask, "Would you like to learn about kittens?" The user understands from context what responses are reasonable to say next. For help on structuring sample user responses, learn more about conversational basics: https://designguidelines.withgoogle.com/conversation/conversation-design-process/how-do-i-get-started.html.
● Make sure to accept variations of common user responses. Accepting varied responses makes it easier for users to have a natural and enjoyable experience. For example, if an app asks “What’s your favorite pet?” it should be able to understand these as synonyms of the same thing: kitten, kitty, kitty cat, feline, etc. Learn more about turn taking: https://designguidelines.withgoogle.com/conversation/conversation-design/learn-about-conversation.html#learn-about-conversation-turn-taking.
● Consider adding helpful “No input” and “No match” error recovery messages to your app that helps the user continue the conversation. Helpful error messages help users get back on track. For example, if the app doesn’t recognize the user’s answer, it could ask, “Could you say that again?” or “Sorry, what was that?” Learn more about handling errors: https://designguidelines.withgoogle.com/conversation/conversational-components/errors.html.
Feel free to submit a new version once you address this feedback, and we’ll be happy to review your Assistant app again. If you would like additional help, check out our G+ community. If you would like to appeal the review decision, reach out to our support team.
Have questions? Check out this video we prepared on tips to pass review, or reach out directly, and we’ll be happy to help.
Additionally, you can take a moment to let us know how helpful this e-mail was to help you understand how to fix your app.
As a reminder, you can modify your registration information, and track deployment and review status in the Actions Console. Once your first Actions on Google app is approved, you’ll also be eligible for our developer community program that comes with rewards for building with Actions on Google.
Thanks,
The Actions on Google Team
Hi Thomas,
That’s quite an earful from the Google team, some of which I will look at in the future, especially the open-mic thing, but time is limited for this these days and, at least for me, the tool is fully functional.
Radio Paradise is very nice (thank you). An easy way to make it voice-controlled would be to add it to a player in the LMS GUI, save the player’s queue as a local LMS playlist (the single radio station) and name the playlist “Radio_Paradise”. Re-scan your media library in LMS to add it to the index, and then it can be invoked as follows: “Ask LMS Controls to play radio station Paradise” or “Ask LMS Controls to play radio Paradise”. See the following doc for guidance: https://github.com/ynot123/LMS_Controls/blob/master/LMS/README.md
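If you would rather skip the GUI step, the same save can be done over the LMS CLI. A sketch, assuming the default CLI port 9090 and a made-up player MAC address:

    # Tell a player to save its current queue as a playlist named Radio_Paradise.
    echo "00:04:20:ab:cd:ef playlist save Radio_Paradise" | nc yourlmsip 9090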
To get rid of the annoying “test” message, I simply published my app as Alpha using a different developer account (not the account associated with my Google Home), invited my Google Home user account to test it, and waited 6-8 hours for the actions to be published. Passing the basic test in an Alpha release does require you to fill out the privacy policy URL, for which you can use something like https://www.freeprivacypolicy.com. Further details about publishing an app can be found here: https://github.com/ynot123/LMS_Controls/blob/master/Installation%20Instructions.md
Finally, for the Tidal app, you need to review the Tidal service web API and see how to query it for the URL links that you need to queue up to LMS. If you can do it in LMS, then you can likely do it in HA using a similar approach to what I did for Spotify (a curl statement issued to their website using your Tidal credentials or Tidal security key), and this will likely return the URL that you need to queue to LMS. You can probably get an idea of what the URL link will look like by queuing up some Tidal music in LMS, saving that as a local LMS playlist, and then examining the playlist file (which is clear text); in there should be the URLs being played. Another more detailed method is to open a CLI session to your LMS server using putty at address yourlmsip:9090. Once that’s open, type "listen 1" and press Return, and you will see all CLI commands being sent back and forth to the players, etc. Now queue up some Tidal music and you will see the commands sent to the LMS players (a shell equivalent is sketched below). Unfortunately this will not likely show you how the LMS addon is querying the Tidal service.
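For reference, a quick shell equivalent of that putty session, assuming the default CLI port 9090 (queue up some Tidal music while it runs and the player commands will scroll by):

    # Subscribe to LMS CLI traffic and watch for about 60 seconds (Ctrl-C to quit).
    ( echo "listen 1"; sleep 60 ) | nc yourlmsip 9090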
If you’re really stuck, let me know and I will give it a shot if you’re willing to temporarily share access to Tidal. This may take some time, however, as my schedule is pretty tight at the moment.
Take care. Ynot.
Thanks, Ynot.
I only sent the Google stuff FYI - please don’t feel obliged to spend more time on this than you already did.
Thanks also for the suggestions on Radio Paradise, Tidal and how to circumvent the test message.
-thomas
No prob. I was interested in the Google feedback anyway. I just never got the time to submit, plus there is no real benefit unless I want to make this an official app, and that comes with too many strings attached, I think. Good luck.
Hello, and thank you for this app. I had a question before I tried using it. In the installation notes (and it’s mentioned above, but I did not see an answer there) it says:
Hass.io or Home Assistant must support the following commands in order for the shell commands to work:
- curl, jq, nc, socat (for secure LMS only). Note: if your system doesn’t support nc, netcat can be used but the shell files will need to be modified to reflect this change.
I am running Docker; do those commands need to be available on the host system or within the Docker container?
They need to be available in the Home Assistant container, I believe, as that is where the shell commands are executed. If you are running Hass.io, they are likely there by default.
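You can check from the Docker host; a sketch (the container name home-assistant is an assumption, substitute your own):

    # Report any of the required tools missing inside the running container.
    docker exec home-assistant sh -c \
      'for c in curl jq nc socat; do command -v "$c" >/dev/null || echo "$c: missing"; done'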
Ynot
I now have a functioning voice commands interface to my LMS - thanks a lot!
I have an observation:
Getting spelling of proper names right based on voice input without any context can be challenging. For example I have a playlist called “Doky” (after the two brothers Niel Lan Doky and Chris Minh Doky). When I “ask LMS controls to play playlist Doky” LMS fails to find the playlist. My guess is that Google gets the spelling wrong and subsequently LMS returns an empty search.
Do you know if it is possible to restrict the speech recognition to a “context” e.g. names of playlists in LMS?
It gets a lot worse when I try to say classical composers like Tchaikovsky, Haydn and names of Danish artists.
Again thanks a lot for your efforts ynot. I am keen to contribute if I can.
Yes, that can be a problem. Part of the beauty of using the Google AI is that a lot of the more common spelling issues are taken care of automatically. For those intents that don’t get caught properly, you can check the training section in Dialogflow to see how the utterance was interpreted.
You could, in Dialogflow, create a new entity for @artists and use that instead of the @sys.any entity (like what I did for the @mediaplayers entity). Ensure entries can be added to this entity group automatically, and then you can clean up anything that isn’t matched correctly due to enunciation or funny spelling issues by adding synonyms. I didn’t go this way as most of my tunes are picked up easily enough using the English Google AI; if one is tricky I just use the HA GUI instead.
Just a thought; I haven’t actually tried it. Ynot
Thanks. I tried it in the Docker image, and I get the error below when trying to execute some of the scripts (I’ve confirmed that “jq” and “nc” are not in the image). Is this because of the missing commands, or something else?
Error executing service <ServiceCall script.lms_cmd_pause_player (c:7e8bb776490648c687de02fc9c709469)>
Traceback (most recent call last):
File "/usr/src/app/homeassistant/core.py", line 1139, in _safe_execute
await self._execute_service(handler, service_call)
File "/usr/src/app/homeassistant/core.py", line 1152, in _execute_service
await handler.func(service_call)
File "/usr/src/app/homeassistant/components/script.py", line 123, in service_handler
context=service.context)
File "/usr/src/app/homeassistant/components/script.py", line 181, in async_turn_on
kwargs.get(ATTR_VARIABLES), context)
File "/usr/src/app/homeassistant/helpers/script.py", line 131, in async_run
await self._handle_action(action, variables, context)
File "/usr/src/app/homeassistant/helpers/script.py", line 210, in _handle_action
action, variables, context)
File "/usr/src/app/homeassistant/helpers/script.py", line 299, in _async_call_service
context=context
File "/usr/src/app/homeassistant/helpers/service.py", line 85, in async_call_from_config
domain, service_name, service_data, blocking=blocking, context=context)
File "/usr/src/app/homeassistant/core.py", line 1110, in async_call
processed_data = handler.schema(service_data)
File "/usr/local/lib/python3.6/site-packages/voluptuous/schema_builder.py", line 267, in __call__
return self._compiled([], data)
File "/usr/local/lib/python3.6/site-packages/voluptuous/schema_builder.py", line 589, in validate_dict
return base_validate(path, iteritems(data), out)
File "/usr/local/lib/python3.6/site-packages/voluptuous/schema_builder.py", line 427, in validate_mapping
raise er.MultipleInvalid(errors)
voluptuous.error.MultipleInvalid: not a valid value for dictionary value @ data['entity_id']
Hummm. That’s pretty ugly. Not sure what’s causing that error exactly. nc, socat, jq and curl are used exclusively in the shell commands, so they would not be causing that error as far as I know; the voluptuous message suggests a service call inside the script is being passed an invalid entity_id. What Docker image are you using (Hass.io or Home Assistant) and on what platform?
nc can be replaced with netcat, but the shell files where it’s used would need to be changed as well. jq is a must and should be installed in the image. socat is also necessary if using a secured version of LMS.
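One way to narrow it down is to confirm that the entity_id the script targets actually exists in HA. A sketch using the REST API, assuming a long-lived access token stored in $HA_TOKEN:

    # List every entity_id HA knows about and filter for the media players.
    curl -s -H "Authorization: Bearer $HA_TOKEN" \
      https://yourdomain.duckdns.org/api/states | jq -r '.[].entity_id' | grep media_player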
Ynot.
Thanks @ynot, I’m using the standard Home Assistant Docker image on top of an Ubuntu 18.04 host. Pretty sure “jq” and “nc” are not in the Docker image, as I’ve opened a console in there and they’re not found. Not sure how @opnordahl got it working; he/she seems to be using the same Docker image as me. The underlying host (AFAIK) should not matter.