I am not sure if this is possible, but thought I would just check.
Is there a way to connect a speaker to my Zigbee2MQTT network, so that it announces particular events?
For example:
If it senses that the back door has been opened, it will actually announce “back door open”, or if it senses that my doorbell has been pressed, it can sound a chime.
In case it is relevant, I am running HA on a RPi3 and using a ZZH CC2652R USB controller.
Thanks in advance.
Edit - Just an extra thought: could another possibility be to plug a USB speaker directly into the RPi3?
This is close to what I and quite a few others do.
I have a few Pis with USB speakers attached. I run Kodi on them, auto-started at boot; all the Pis are headless (no monitor, though one has an e-paper screen). I run MaryTTS as a container so that all my TTS is local and my system will function without any internet.
I use my speakers for air-quality changes, the garage door being open for x minutes, and the status of any open doors or windows when the lights are turned off in the room with the speaker. With only a few exceptions, the speakers only ‘speak’ when the lights in the room they are located in are on.
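For illustration, one of those automations looks roughly like the sketch below; the entity IDs are placeholders, and the TTS service name depends on your setup (mine all go through the local MaryTTS container, tts.google_say is shown here just as a common example):

# Hypothetical example: announce the back door, but only while the room light is on
automation:
  - alias: "Announce back door while kitchen light is on"
    trigger:
      - platform: state
        entity_id: binary_sensor.back_door   # placeholder sensor
        to: "on"
    condition:
      - condition: state
        entity_id: light.kitchen             # only speak when this light is on
        state: "on"
    action:
      - service: tts.google_say              # or whatever your local TTS service is
        data:
          entity_id: media_player.kitchen_speaker
          message: "Back door open"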
Thanks Glenn - really encouraging that something I thought was not possible is in fact quite the opposite.
Unfortunately, I have become stuck following the above tutorial.
I have ended up with this error when testing the service under Developer Tools:
Failed to call service tts.google_say. extra keys not allowed @ data['sequence'][0]['language']. Got 'en'
extra keys not allowed @ data['sequence'][0]['message']. Got 'Hello World!'
Any idea what might be causing this and how to remedy it?
Look at the post above about the TTS configuration. Message is not part of that. You will need to set up your general configuration first, then set up your automation for the TTS.
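Roughly, the split is: the tts: block in configuration.yaml only declares the platform and (optionally) the service name, while the message goes in the service call itself. Once that is in place and you have a media player entity, a call from Developer Tools should look something like this (media_player.mpd is just an example target):

service: tts.google_say
data:
  entity_id: media_player.mpd
  message: "Hello World!"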
Sorry - as a beginner, am finding getting to grips with HA a bit tough at the moment.
This is a copy of my current YAML configuration file:
# Configure a default setup of Home Assistant (frontend, api, etc)
default_config:

# Text to speech
tts:
  - platform: google_translate
    service_name: google_say

notify:
  - platform: tts
    name: Announcement
    tts_service: tts.google_say
    media_player: media_player.mpd

group: !include groups.yaml
automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml
However, when I go to Developer Tools > Services > Text-to-Speech (TTS): Say a TTS message with google_translate, I get nothing in the Entity dropdown box.
I think that is my problem - I need to get media_player.mpd to show up in there.
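Presumably I also need something like this in configuration.yaml to create that entity in the first place (a guess based on a default MPD install running on the same Pi; host and port may well differ):

media_player:
  - platform: mpd      # hypothetical: assumes MPD is running locally
    host: 127.0.0.1
    port: 6600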
Right - thanks to Glenn’s suggestion, I looked at the log while plugging in my speakers and nothing appeared, so I cannot use them.
I do, however, have some USB headphones, which shall do for now (I can always swap these for a USB speaker once I know everything is working properly).
I got the following:
21-11-04 20:44:27 INFO (MainThread) [supervisor.hardware.monitor] Detecting HardwareAction.ADD hardware /dev/bus/usb/001/008 - None
21-11-04 20:44:27 INFO (MainThread) [supervisor.hardware.monitor] Detecting HardwareAction.ADD hardware /dev/snd/pcmC1D0p - None
21-11-04 20:44:27 INFO (MainThread) [supervisor.hardware.monitor] Detecting HardwareAction.ADD hardware /dev/snd/pcmC1D0c - None
21-11-04 20:44:27 INFO (MainThread) [supervisor.hardware.monitor] Detecting HardwareAction.ADD hardware /dev/snd/controlC1 - /dev/snd/by-id/usb-Plantronics_Plantronics_.Audio_628_USB-00
21-11-04 20:44:27 INFO (MainThread) [supervisor.host.sound] Updating PulseAudio information
21-11-04 20:44:27 INFO (MainThread) [supervisor.hardware.monitor] Detecting HardwareAction.ADD hardware /dev/input/event0 - /dev/input/by-id/usb-Plantronics_Plantronics_.Audio_628_USB-event-if03
21-11-04 20:44:27 INFO (MainThread) [supervisor.hardware.monitor] Detecting HardwareAction.ADD hardware /dev/usb/hiddev0 - None
21-11-04 20:44:27 INFO (MainThread) [supervisor.hardware.monitor] Detecting HardwareAction.ADD hardware /dev/hidraw0 - None
Hey @GlennHA, it’s been about a year since the comment of yours that I’m replying to, and it’s about a completely different thing… I have a Sonos One, but unfortunately the new Rhasspy voice command integration seems to use only USB microphones. That’s great, but I have my HA instance in a utility room… it doesn’t really appeal to me to walk down there every time I want to tell HA to do something.
But… you say you have a few RPis with USB speakers… and there’s a YouTuber who mentions that you can do Rhasspy this way, with microphones. But HOW in the world do you connect seemingly independent RPis to HA, for the purpose of delivering a USB device to HA?
If you use the install guide for Rhasspy, it will walk you through setting up a client/server setup.
My ‘speaker’ is a USB conference speaker/mic connected to an RPi that is NOT my HA instance.
I was previously using Kodi headless on a few RPis with USB speakers that worked as just speakers, to announce whatever I wanted HA to say. With Rhasspy I can do the same, plus now I have Speech-to-Text (STT) alongside the Text-to-Speech (TTS) that I had before.
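If it helps as a starting point, a rough sketch of running a Rhasspy satellite in Docker on one of those Pis would look something like the compose file below (it follows the standard Rhasspy Docker install, so the paths, profile and port are assumptions to adjust for your own setup). The satellite and the base station then talk to each other over a shared MQTT broker, which is how it all ends up tied into HA:

# Rough docker-compose sketch for a Rhasspy satellite on a Pi with a USB speaker/mic
version: "3"
services:
  rhasspy:
    image: rhasspy/rhasspy
    restart: unless-stopped
    ports:
      - "12101:12101"          # Rhasspy web UI
    volumes:
      - ./profiles:/profiles   # keeps the satellite profile on the host
    devices:
      - /dev/snd:/dev/snd      # exposes the USB audio hardware to the container
    command: --user-profiles /profiles --profile en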
Thanks for the quick reply @GlennHA. It stands to reason that I’m an IT professional and I didn’t read the install guide for Rhasspy yet. (Clear self-deprecating sarcasm there.)
I’ll take a look. I think I have a fairly advanced HA install, purpose-built to be 100% local (except for notifications), and I’m really kind of jonesing for the Rhasspy functionality.