Sure - whenever you are ready we can discuss. The good news is that Python makes it a lot easier than other languages I have done this kind of stuff in!
i had my share of problems with multiple threads speaking at the same time at first.
thats why i chose this route.
- a worker loop that checks if there is something to speak out or a sound to be played.
- you can place 10 (or as many as you like) different texts to say or sounds to play in a list, with a starttime and a priority added to each sound or text.
- if there are multiple things in the list, the loop sorts on priority and time and plays whatever is most important, until the list is empty again.
- even if a problem occurs and HA or appdaemon hangs, the list stays there until the app can play the sounds.
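The worker loop described above could be sketched roughly like this. This is my own guess at the structure, not the actual app's code; all names are invented, and I'm assuming a lower priority number means more important:

```python
import time

# Shared playlist: each entry is (priority, starttime, item).
# Assumption for this sketch: a LOWER priority number means MORE important.
playlist = []

def add_to_playlist(priority, starttime, item):
    """Any app can append a text or sound; the worker picks it up later."""
    playlist.append((priority, starttime, item))

def pick_next(now):
    """Return and remove the most important due entry, or None if idle."""
    due = [entry for entry in playlist if entry[1] <= now]
    if not due:
        return None
    due.sort(key=lambda entry: (entry[0], entry[1]))  # priority, then time
    best = due[0]
    playlist.remove(best)
    return best

def play(item):
    """Placeholder for the actual TTS / sound playback."""
    print("playing:", item)

def worker_loop():
    """Runs forever, playing one item at a time until the list is empty."""
    while True:
        entry = pick_next(time.time())
        if entry is not None:
            play(entry[2])      # blocks until this sound/text is finished
        else:
            time.sleep(1)       # list is empty: check again in a second
```

Because the list survives between loop iterations, anything queued while HA or appdaemon is busy simply waits its turn.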
Yes, you and I both took the same approach on that. Makes sense since we worked together on it.
My question was more around contention getting the text to the app from the client so that it can be converted to an MP3 file. Both ways work fine. I was just wondering if there were any pros/cons to each way so I could learn.
i think the biggest difference is that i also made it possible to play sounds in the list.
but i think that its probably the other way around than you say here:
My thought was that getting the app and having the client push it onto the priority stack would risk having two clients trying to speak at the same time, and cause problems getting to the priority stack. Sending it by event would let HA deal with any type of contention
fire 2 HA events within a second and they will both fire your app, so they will speak together.
in my app the loop will only go on AFTER it is finished playing a sound.
in the meantime many other apps can put anything in the list, but it wont be played until the sound that is playing is finished.
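The point about two quick events can be sketched like this: the handler only appends to the shared list, it never speaks, so two events fired within the same second cannot talk over each other. This is an illustration with my own names, not either poster's actual code:

```python
import threading

playlist = []                     # shared with the single worker loop
playlist_lock = threading.Lock()  # hypothetical guard around the list

def on_speak_event(text, priority=5, starttime=0):
    """Event handler sketch: it ONLY appends to the list. The single
    worker loop does the actual speaking, one entry at a time, so two
    events fired within the same second just end up queued."""
    with playlist_lock:
        playlist.append((priority, starttime, text))

# two "simultaneous" events simply become two queued entries
on_speak_event("the washer is done")
on_speak_event("front door opened", priority=1)
```

The worker then drains those entries one at a time, so ordering is decided by priority and time rather than by which event arrived a few milliseconds earlier.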
i will clean up the app (using the right varnames, taking out testlogs and all kinds of other stuff that i use) tomorrow and upload it.
my app had many tests with sounds or texts getting called at the same time, and it has been running without errors for a week now (while i wasnt there for 5 days).
i added a lot of features to it since i last posted it.
- it says texts out of appdaemon from each app
- it plays sounds out of appdaemon from each app
- it shuts off the radio and starts it again (if the radio is on, with a boolean and a volume slider)
- setting volume for each sound or text as high or low as you like (so its easy to set important messages louder)
- the hourly clock (with its own volume slider)
- i use an extra app that reads text from a website at random times
- during Christmas i had it play a random song every hour
- it plays a soft sound to my speaker (which has an automatic off-switch when no sound is played for 10 minutes) every 9 minutes.
i will try to make all things optional.
Very cool. Mine just says whatever text I send to it. When I fire an event, the event handler in the speak app creates the mp3 file and adds it to the priority list. The part that does the speaking then services it from the priority list like yours does. When I was working on the weather alerts, some of them were rather long and it just kept on saying them one at a time for about two or three minutes after I stopped testing. So like yours, it's not stepping on itself. I like that yours can play music and control the radio. You've added a lot of nice things to it. Mine talks to Alexa LOL So it's like my house is talking to itself.
i had that for a while too, but then i put Alexa on the RPI as well. so the RPI was talking to itself
be careful, you are getting close to artificial intelligence there.
in combination with my STT app (appie) i will get there
i just need to find the time to work it out.
i already have it like this:
- i say something to the RPI.
- the RPI asks me: “i heard you say: …, is that correct?”
- if i say yes the RPI saves it under correctly heard text.
- if i say no the RPI saves it under wrongly heard text and asks me if i want to repeat the try.
- no stops, and yes gives me another try.
- the saved text can be reused for TTS and for STT. so afterwards the RPI can ask me questions and knows the answer options.
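That confirm/retry flow could be sketched as one small function. Everything here is my own naming; `listen()` and `ask()` stand in for the real STT and TTS plumbing, which I have not seen:

```python
# Texts the user confirmed or rejected; reusable for TTS/STT later.
correctly_heard = []
wrongly_heard = []

def confirm_heard(listen, ask):
    """listen() returns a recognized text; ask(question) returns the
    user's spoken "yes"/"no" answer. Returns the confirmed text, or
    None if the user gives up."""
    while True:
        heard = listen()
        if ask("i heard you say: %s, is that correct?" % heard) == "yes":
            correctly_heard.append(heard)   # saved as correctly heard
            return heard
        wrongly_heard.append(heard)         # saved as wrongly heard
        if ask("do you want to repeat the try?") != "yes":
            return None                     # "no" stops the dialog
```

Keeping both lists around is what lets the app later ask questions for which it already knows the expected answer options.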
Very cool. I’m starting to work on a different aspect of it. I’m going to capture all the events and state changes I do to a database or spreadsheet somewhere. Then run some kind of analytics on it and see if there is anything I can learn about the way we are using our lights that would make the house more intelligent on its own. Big Data coming to a little Pi near you soon. LOL
I’d love to learn more about this project. I tried this with logentries and Jupyter notebooks when I was doing my laundry room project and both were dead ends for me - more trouble setting them up and trying to figure out how to access the data than actual analysis.
Of course, you’re assuming this is actually Rene you’re talking to…
Nope it’s him. He never capitalizes his “I”'s
its gonna take me a little more time.
i am rewriting the app a little with comments to make it understandable.
but i am also making these settings optional, to be set from the config:
- radio (if yes, a setting for which radio webpage to use, and a radio on/off boolean)
- volume (if yes, volume sliders for radio, clock and speech/sound)
- control if mainloop is running
- restart boolean
- default speech language
- start delay
- mainloop delay
- clock (if yes, location for soundfiles)
- separate sound log (if yes, logfile name and location)
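To give a rough idea of what such an apps.yaml section might look like, here is a guessed sketch. Every key name, entity id and path below is my assumption, not the app's actual config (the real option names are in the readme on the github):

```yaml
sound_app:
  module: sound                 # hypothetical module/class names
  class: Sound
  radio: yes
  radio_url: http://example.com/stream      # which radio webpage to use
  radio_boolean: input_boolean.radio        # radio on/off
  volume: yes
  radio_volume_slider: input_number.radio_volume
  clock_volume_slider: input_number.clock_volume
  speech_volume_slider: input_number.speech_volume
  restart_boolean: input_boolean.restart_sound
  default_language: en
  start_delay: 10               # seconds before the mainloop starts
  mainloop_delay: 1             # seconds between loop checks
  clock: yes
  clock_sound_dir: /path/to/clock/sounds
  sound_log: yes
  sound_log_file: /path/to/sound.log
```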
It has arrived!
its uploaded to my github (oh yes i have a github, so i guess i am starting to look like a programmer )
i have YAML and cfg included, and also a readme on how to implement things.
its all in one.
Radio, clock, TTS, volume, musicplaylist, etc.