Raspberry Pi as a voice assistant (Chapter 5)

Thought I’d provide an update. I created my 1st kiosk based on a touch screen and RPi4. After getting that to work, I installed the mic/sound HAT & the Wyoming voice services & got them working too.

Guess what, the RPi4 voice satellite reconnects to HA automatically after powering down & back up. Go figure. :rofl:


I too am using an Anker PowerConf S330, and have the same never-ending arecord issue. Did you get past it?

For now I’ve reverted to the deprecated homeassistant-satellite build.

Sadly not.

@ignacio82 I’ve had the homeassistant-satellite build back up and running since yesterday, and it works reasonably well. I notice that it too has a continuously-present arecord process, so I now assume that’s just how it works and is not a symptom of something working incorrectly.
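If you want to verify this yourself, the mic command runs as a long-lived child process, so something like this should list it while the satellite is up:

# list running arecord processes with their full command lines
pgrep -af arecord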

However, my wyoming-satellite build did have issues:

  • initially it responded well, but while replying to my request, would concurrently reissue the awake sound and wait on a second request; this behaviour was fairly consistent, until…
  • later, it would only respond occasionally to the wake word, and either never return from my request, or return after a minutes-long pause with the “sorry, I didn’t understand” message.

These issues make the wyoming-satellite build unusable for me at present.

Hi everyone, question: is it possible to have the Wyoming satellite pass Assist’s response to another media_player? I know how to do it using ESPHome. I would like to use the Pi for just input.

I achieved this by using the synthesize-command flag to forward the generated text to a custom event in Home Assistant and picking that up with Node-RED to send it to any media_player with tts.speak.
I can send you the config when I am home later, if that approach sounds interesting to you. It is somewhat roundabout, though…


If it works, it works. Yes, I’d really like that. Thanks :+1:t2:

Sorry, totally spaced on getting back to you. So here it is:

Assuming /home/pi is where you "git clone"d the repo to, create the file
/home/pi/wyoming-satellite/examples/commands/synthesize_custom.sh
with the content:

#!/usr/bin/env sh

# read the synthesized text from stdin (wyoming-satellite pipes it in)
text="$(cat)"
echo "Text to speech text: ${text}"

# long-lived access token for Home Assistant
token='LLA_TOKEN'

# build the event payload with jq so quotes and newlines in the text are escaped safely
curlData="$(jq -n --arg text "$text" \
  '{event: "synthesize", satellite: "snapcast-livingroom", text: $text}')"
echo "$curlData"

# fire a custom satellite_tts event on the Home Assistant event bus
curl \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $token" \
  -d "$curlData" \
  https://HASS_FQDN/api/events/satellite_tts

Create yourself a long-lived access token and put it in place of LLA_TOKEN, and fill in the FQDN of your Home Assistant instance for HASS_FQDN.
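The script also needs to be executable, and it relies on jq, so (assuming Raspberry Pi OS or another Debian-based system):

# let wyoming-satellite execute the hook
chmod +x /home/pi/wyoming-satellite/examples/commands/synthesize_custom.sh
# jq is used to build the JSON payload
sudo apt-get install -y jq

Before wiring it in, you can also fire a test event by hand (substitute your real token and hostname) and watch for it under Developer Tools → Events in Home Assistant:

curl -X POST \
  -H "Authorization: Bearer LLA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"satellite": "snapcast-livingroom", "text": "test"}' \
  https://HASS_FQDN/api/events/satellite_tts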

Then, assuming you installed on an OS using systemd, make your

/etc/systemd/system/wyoming-satellite.service

look like so:

[Unit]
Description=Wyoming Satellite
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStart=/home/pi/wyoming-satellite/script/run \
  --debug \
  --name 'snapcast-livingroom' \
  --uri 'tcp://0.0.0.0:10700' \
  --mic-command 'arecord -D plughw:CARD=Device,DEV=0 -r 16000 -c 1 -f S16_LE -t raw' \
  --snd-command 'aplay -D null' \
  --synthesize-command 'examples/commands/synthesize_custom.sh'

WorkingDirectory=/home/pi/wyoming-satellite
Restart=always
RestartSec=1

[Install]
WantedBy=default.target

Note the --synthesize-command line, and that snd is sent to the null device, since this satellite only handles input.
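After editing the unit file, reload systemd and restart the satellite; with --debug on, you can watch the synthesize hook fire in the journal:

# pick up the edited unit file and restart
sudo systemctl daemon-reload
sudo systemctl restart wyoming-satellite.service
# follow the satellite's log output
journalctl -u wyoming-satellite.service -f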

Now, any time the satellite gets sent synthesized text, it should create an event in Home Assistant with the name of the satellite and the text to be spoken. I use this information within Node-RED like so:

[{"id":"c57a4a8e82851f13","type":"server-events","z":"a533baf02563f05c","name":"","server":"2452f89c.1b7828","version":3,"exposeAsEntityConfig":"","eventType":"satellite_tts","eventData":"","waitForRunning":true,"outputProperties":[{"property":"payload","propertyType":"msg","value":"","valueType":"eventData"},{"property":"topic","propertyType":"msg","value":"$outputData(\"eventData\").event_type","valueType":"jsonata"}],"x":130,"y":400,"wires":[["8a0a12e62be88756"]]},{"id":"eb18cd0b8af1bf7d","type":"api-call-service","z":"a533baf02563f05c","name":"","server":"2452f89c.1b7828","version":5,"debugenabled":false,"domain":"tts","service":"speak","areaId":[],"deviceId":[],"entityId":["tts.piper"],"data":"{\"message\":\"{{payload.event.text}}\",\"media_player_entity_id\":\"media_player.snapcast_player\"}","dataType":"json","mergeContext":"","mustacheAltTags":false,"outputProperties":[],"queue":"none","x":540,"y":400,"wires":[["50f6c0501580df66"]]},{"id":"8a0a12e62be88756","type":"switch","z":"a533baf02563f05c","name":"satellite_name","property":"payload.event.satellite","propertyType":"msg","rules":[{"t":"eq","v":"snapcast-livingroom","vt":"str"},{"t":"eq","v":"snapcast-kitchen","vt":"str"}],"checkall":"true","repair":false,"outputs":2,"x":360,"y":400,"wires":[["eb18cd0b8af1bf7d"],["b142937ea983f692"]]},{"id":"2452f89c.1b7828","type":"server","name":"Home Assistant","addon":true}]

Basically: listen for the event type “satellite_tts”, route by satellite, and call the tts.speak service on whatever compatible media_player you like.
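If you’d rather not run Node-RED, the same routing should be possible with a plain Home Assistant automation; this is an untested sketch reusing the entity IDs from the flow above:

alias: Forward satellite TTS to a media_player
trigger:
  - platform: event
    event_type: satellite_tts
    event_data:
      satellite: snapcast-livingroom
action:
  - service: tts.speak
    target:
      entity_id: tts.piper
    data:
      media_player_entity_id: media_player.snapcast_player
      message: "{{ trigger.event.data.text }}"
mode: queued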
Hope this helps you along :slight_smile:


Thanks! I’m going to try that this weekend.

So I am using a ReSpeaker and a Pi Zero 2 W. I set up everything according to the tutorial. What I am confused about is that the wake word specified in my systemd unit file is not being used. It seems to be overridden by the Assist pipeline:

[Unit]
Description=Wyoming Satellite
Wants=network-online.target
After=network-online.target
Requires=wyoming-openwakeword.service


[Service]
Type=simple
ExecStart=/home/pi/wyoming-satellite/script/run \
        --name 'kitchen-voice' \
        --uri 'tcp://0.0.0.0:10700' \
        --mic-command 'arecord -D plughw:CARD=seeed2micvoicec,DEV=0 -r 16000 -c 1 -f S16_LE -t raw' \
        --snd-command 'aplay -D plughw:CARD=seeed2micvoicec,DEV=0 -r 22050 -c 1 -f S16_LE -t raw'
        --mic-auto-gain 5 \
        --mic-noise-suppression 2 \
        --wake-uri 'tcp://127.0.0.1:10400' \
        --wake-word-name 'hey_jarvis'

WorkingDirectory=/home/pi/wyoming-satellite
Restart=always
RestartSec=1
[Install]
WantedBy=default.target

Instead of triggering on Jarvis, it triggers on Alex, which is defined in my pipeline.

What I thought was supposed to happen is that the wake word on the Pi triggers and then sends the audio over to HA. What seems to be happening is that the audio is being sent directly to HA, and HA is in charge of determining whether it should process the audio.

Any pointers?

I am not exactly sure what made the difference here. I ended up removing the satellite from HA. Then I adjusted the systemd file like this:

[Unit]
Description=Wyoming Satellite
Wants=network-online.target
After=network-online.target
Requires=wyoming-openwakeword.service
Requires=2mic_leds.service
[Service]
Type=simple
ExecStart=/home/pi/wyoming-satellite/script/run \
        --debug \
        --name 'kitchen-voice' \
        --uri 'tcp://0.0.0.0:10700' \
        --mic-command 'arecord -D plughw:CARD=seeed2micvoicec,DEV=0 -r 16000 -c 1 -f S16_LE -t raw' \
        --snd-command 'aplay -D plughw:CARD=seeed2micvoicec,DEV=0 -r 16000 -c 1 -f S16_LE -t raw' \
        --snd-command-rate 16000 \
        --snd-command-channels 1 \
        --wake-uri 'tcp://127.0.0.1:10400' \
        --wake-word-name 'hey_jarvis' \
        --event-uri 'tcp://127.0.0.1:10500'
WorkingDirectory=/home/pi/wyoming-satellite
Restart=always
RestartSec=1
[Install]
WantedBy=default.target

I did find that the event-uri has issues; systemd complains about unknown arguments.

The device is now working as I expect.

My guess is that (like me) you may have missed a system restart. Personally I am not confident with Linux stopping and starting services, so do a full reboot just to make sure :wink:

I am not using event-uri, and I guess that it is not working for you either - so try removing that argument, restart the service, and see if that removes the error messages or stops the service. The trick is to change only one thing at a time - which becomes very tedious.

FYI, to prove that my system is no longer using openWakeWord on the HA machine, I disabled it there. When I am happy with this HA Voice Assist I will re-enable it for use by my RasPi Zero satellite.

I have a Pi and an M5 Atom. On the Pi I use a local wake word, and I notice that I have to say the wake word, wait for it to activate, then say the command. With the M5 Atom Echo, I can almost flow the wake word and the request together.
I assume the local wake word on the Pi is the reason, as the M5 is always streaming and HA/Wyoming is doing the processing.

I originally thought having local wake word would end up being better/faster, but now I am not totally convinced.

Except for running it as a satellite, is there a way to get a ReSpeaker mic (I have the 4-mic array USB version) working directly connected to an RPi running HA?

Mike, do what works best for you in your situation. Local wake word reduces data over the network (more important if there are multiple satellites on Wi-Fi), but at the expense of requiring more CPU at the satellite. You didn’t mention whether you are using one of the older, slower RasPi models or one of the faster, more expensive models. There are always trade-offs.


You mean like what was discussed in the Chapter 4 blog back in October 2023, and in other forum topics?