Announcing MQTT/Android bridging app: Zanzito. Beta testers wanted!

I now use this for:
script
switch
light
I guess it will also work for scene.

But for automation, no, I guess:
homeassistant.turn_on only turns ON the automation, but doesn’t trigger it?
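
If I understand the docs correctly, the difference would be something like this (the automation and script names are made up):

    script:
      enable_welcome_home:
        sequence:
          # Only enables the automation; it still waits for its own triggers:
          - service: homeassistant.turn_on
            entity_id: automation.welcome_home   # made-up automation
      run_welcome_home:
        sequence:
          # Actually runs the automation's actions right away:
          - service: automation.trigger
            entity_id: automation.welcome_home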

Many thanks to @Lapatoc for his help! There was a very nasty bug in Zanzito that, under certain conditions, prevented the reception of messages. The fix will be available in the next release.

You can restore the last prefs from one device onto another; then you’d have to change the device name in the prefs, see the manual :slight_smile:
I’m working on a solution to move around custom topics only, it’ll take some time though…

Sure, but having 100 activities is a pain. And for each change, it’s difficult to track the changes for each device. The process definitely needs automating.

what happens if two devices have the same name?

That’s why there’s a %devicename% placeholder that you can put in the topics: you have to change the device name only once, in the main prefs. :wink: (That was @masterkenobi’s idea)

Two devices with the same name? I guess just a lot of confusion :smiley:

I don’t see the remaining results in the MQTT traffic. Perhaps you can make it publish all 5 messages each time an activity is triggered? For example, if the 5 guesses are…

turn on living room light
turn on living room lights
turn on living room lite
turn on living room like
turn on living room late

Then it would publish to the same topic 5 times, each message with one of these payloads…

living room light
living room lights
living room lite
living room like
living room late

Or maybe, instead of 5 messages, it could publish just 1 message with the guesses separated by a delimiter?

But that’s what the code does: command recognition and extraction of the remainder as payload are done together; it’s strange that you don’t see this.

I think publishing multiple payloads would make this function unusable for simple MQTT devices. We could add a prefs option, but then things would become a little too complicated for a normal user, don’t you think?

In general, I understand your frustration (and claudio’s) about voice recognition, but please understand that that’s why I published it as experimental: I will do everything possible to improve this functionality in Zanzito, but:

  1. I don’t want things to get too complicated to use;
  2. I doubt I have the resources to come up with a fully working natural-language voice recognizer, which after all is what you are asking for;
  3. Zanzito’s voice recognition is very limited in principle: I’m afraid it will remain so until Google makes something more powerful available to developers. For example, if I used the Google API (online service) things could go a little better but, guess what, it’s not free…

By the way, let’s keep talking :wink: Something good will come out of it!

Hi all, I set up a small support forum website for Zanzito. As much as I love discussing it here, I think this thread has become a little bit chaotic…

I’ll be happy to continue the discussion here, but at least for support requests please use that website, thanks!

gl

I find it already excellent. It works super well. I am only asking for multiple voice commands in the same activity, but I do have a workaround: creating multiple activities with the same topic but different voice commands (turn on living light, living light turn on, etc…)

Pay attention to overlapping voice commands from different activities. For example, if I have a voice cmd “Light” and another one “Light off”, Zanzito will probably always pick the first one…

You see, when I say “Jarvis turn on office fan”, this is what I see in the log…

and this is what is being sent to the broker…

It only sent the first guess, which is wrong. The correct guess is the second one.

I am hoping it will send all the guesses and let HA do the logic to increase accuracy.

I guess the voice command here is “Turn on”, which is found in the first guess, so the remainder of that guess is sent.

The problem with this approach is that we could not use this function with MQTT simple devices, like a switch: they wouldn’t know how to process the payload.
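
For example, a basic MQTT switch in Home Assistant is set up with fixed payloads, roughly like this (names and topics are just placeholders):

    switch:
      - platform: mqtt
        name: Office Fan                       # placeholder name
        state_topic: "home/office_fan/state"   # placeholder topics
        command_topic: "home/office_fan/set"
        payload_on: "ON"
        payload_off: "OFF"
    # It only understands the exact payloads "ON" and "OFF"; a payload made of
    # several guesses joined together would not match either of them.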

I see what you mean. Perhaps you can add an option on the add/edit activity page called “send all possible guesses in payload”, which is only available when you select “Get payload from vocal command”?

The guesses are separated by a separator in one topic.

Then in HA, I can loop through the list of guesses and execute the first item that matches what I am looking for.
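
Something along these lines is what I have in mind, assuming Zanzito published all the guesses pipe-separated on one topic (the topic name, separator and entities are just examples, untested):

    # Zanzito would publish e.g. "living room light|living room lights|living room lite"
    # to a single topic; HA then checks whether any guess matches an accepted phrase.
    automation:
      - alias: Match any voice guess           # example name
        trigger:
          platform: mqtt
          topic: zanzito/voice/guesses         # example topic
        condition:
          condition: template
          value_template: "{{ 'living room light' in trigger.payload.split('|') or 'living room lights' in trigger.payload.split('|') }}"
        action:
          service: light.turn_on
          entity_id: light.livingroom

The list of accepted phrases would of course grow with however many spellings the recognizer tends to produce.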

but I am Italian, I have
accendi xxxx (turn on xxxx)
spegni xxxx (turn off xxxx)

:stuck_out_tongue:

and my case is very simple: ON and OFF as payloads, with voice recognition used only to decide which activity to trigger

Hey guys, each time I start the app it freezes… dunno why, it worked with an ancient version. I am running Android 5 on a YotaPhone 2.

If you use my second method, you just need to create a total of 4 activities in Zanzito:

  1. Turn on
  2. Turn off
  3. Switch on
  4. Switch off

and then you can have multiple names for your items, such as lights, light, lamp, bulb, etc., for a single entity in HA:

          {%- elif payload == "living room light" or payload == "living room lights" or payload == "living room lamp" or payload == "living room bulb" -%}
            light.livingroom
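
A complete “Turn on” automation using a fragment like that could look roughly like this (untested sketch; the topic and the other entity names are made up):

    automation:
      - alias: Zanzito voice - turn on          # example name
        trigger:
          platform: mqtt
          topic: zanzito/voice/turn_on          # example topic for the "Turn on" activity
        condition:
          condition: template
          value_template: >-
            {{ trigger.payload in ['office fan', 'living room light',
                                   'living room lights', 'living room lamp',
                                   'living room bulb'] }}
        action:
          service: homeassistant.turn_on
          data_template:
            entity_id: >-
              {%- if trigger.payload == "office fan" -%}
                switch.office_fan
              {%- elif trigger.payload == "living room light" or trigger.payload == "living room lights" or trigger.payload == "living room lamp" or trigger.payload == "living room bulb" -%}
                light.livingroom
              {%- endif -%}

The condition just keeps the service call from firing with an empty entity_id when none of the known names match.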

The first method is messy and hard to maintain.

I am Italian; the first method works great for multiple language variants of a single command.
The second method I need to check.

I will think about how to adapt it to Italian and give it a try.

How do you brighten or dim?

How do I detect motion? Can someone help me?