Trying to use ChatGPT as a backup when HA Cloud doesn't understand the request

I’ve been trying to get ChatGPT working as a backup for when the Home Assistant system doesn’t understand the request, BUT I haven’t been able to get it working. So I asked for help in the Facebook group and was told that it’s currently not possible – even though I could’ve sworn that a recent release party video indicated it COULD BE done.

So I submitted a feature request ([Use ChatGPT (or other conversation agent) as a backup when the first conversation agent doesn't understand the request])

One of the moderators provided a link indicating that it’s already possible and closed the FR thread. But I’m still unable to get it working.

I have Home Assistant Cloud set as the preferred Conversation Agent (CA) and ChatGPT also listed.

When I ask my VPE to “turn off the lights in the office”, it says “sorry, I don’t understand”. But if I use the HA Companion app to activate Assist, select ChatGPT as the CA and ask the SAME QUESTION, it responds by turning off the lights in the office and by saying that it did so.

Here’s a link to a short video recorded using my phone’s camera when I did the VPE test mentioned above.
https://youtube.com/shorts/-ND_nbKBsDo?si=E29EfsZA8uS5K9u5

Here’s a link to a short video that is a screen recording in which I performed the test mentioned above using the Companion app. Excuse the mess in the background – I’m in the middle of a remodel and there’s dust everywhere right now along with stray papers and other stuff.

https://youtube.com/shorts/6p0Q5KdPT5A?si=4iU3xxw1eJ1dpe6C

This is what the Voice Assistant setting looks like.

I’ll show you ANY setting you want to see. I just don’t know what else to show you. I’m sure I’m missing SOMETHING but I’m at a loss as to what it could be.

You have to use the ChatGPT assistant, not the local one.
And enable the setting to prefer local fallback there.

Then only commands that can’t be solved locally will be sent out to OpenAI
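Conceptually, the fallback works something like the sketch below. This is just an illustrative outline, not the actual Home Assistant code – the function and agent names here are made up:

```python
# Hypothetical sketch of the "prefer handling commands locally" routing.
# Names are illustrative only; this is not the real HA implementation.

def route_command(text, local_agent, llm_agent):
    """Try the local conversation agent first; fall back to the LLM."""
    result = local_agent(text)
    if result is not None:           # local agent recognized the intent
        return ("local", result)
    return ("llm", llm_agent(text))  # otherwise hand it to OpenAI

# Toy stand-in agents for illustration:
def local_agent(text):
    known = {"turn off the lights": "Lights off."}
    return known.get(text.lower())

def llm_agent(text):
    return f"LLM answer for: {text}"

print(route_command("Turn off the lights", local_agent, llm_agent))
# -> ('local', 'Lights off.')
print(route_command("Tell me a joke", local_agent, llm_agent))
```

The point is that the routing happens *inside* the ChatGPT pipeline: the local agent gets first crack, and only unmatched commands go out to OpenAI.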

That’s also what the moderator and the docs tried to tell you (and even can be seen in the screenshots in the docs).

So I guess you were just too deep into the idea that you have to start with a local assistant. :wink:


I thought maybe it was something as simple as that – but I just wasn’t seeing it.

But I just set ChatGPT to be preferred and left it set to “prefer local handling”. Is HA Cloud not seen as “local handling” for this purpose?

Now EVERY request is being routed through ChatGPT.


“What’s the temp” is a custom sentence set up in the automations. It worked even before ChatGPT was configured. Yet the ChatGPT debug shows that it was not “processed locally”.

Well, I’m not using Assist in English, so I’m not sure.
But is “What’s the temp” really a sentence that the local conversation agent can understand? :stuck_out_tongue:

If not, it did exactly the right thing: Hand it over to the LLM.

edit: didn’t see that this is a custom sentence in your setup.


Yep.

description: ""
mode: single
triggers:
  - trigger: conversation
    command: What's the Temp
    id: Get Temperatures
conditions: []
actions:
  - choose:
      - conditions:
          - condition: trigger
            id:
              - Get Temperatures
        sequence:
          - set_conversation_response: >
              The current temperature is {{
              state_attr('climate.t6_pro_z_wave_programmable_thermostat','current_temperature')
              }} degrees and the thermostat is set to {% if
              states('climate.t6_pro_z_wave_programmable_thermostat') ==
              'heat_cool' %} Heat Cool to keep the temperature between {{
              state_attr('climate.t6_pro_z_wave_programmable_thermostat','target_temp_low')
              }} and {{
              state_attr('climate.t6_pro_z_wave_programmable_thermostat','target_temp_high')
              }}.

              {% elif states('climate.t6_pro_z_wave_programmable_thermostat') ==
              'off' %} off.

              {% elif states('climate.t6_pro_z_wave_programmable_thermostat') ==
              'heat' %} heat to keep the temperature above {{
              state_attr('climate.t6_pro_z_wave_programmable_thermostat','temperature')
              }}.

              {% elif states('climate.t6_pro_z_wave_programmable_thermostat') ==
              'cool' %} cool to keep the temperature below {{
              state_attr('climate.t6_pro_z_wave_programmable_thermostat','temperature')
              }}.

              {% endif %}

Still having issues. I set ChatGPT as the preferred agent and set it to use the local agent when possible. But when I make a request that I KNOW HA Cloud can handle, it’s still running through ChatGPT. As noted in my last reply, “What’s the Temp” is a custom sentence defined in the automations. It was working even before I ever attempted to add ChatGPT. Does it not see HA Cloud as “local” because it goes through the Internet?

I want to use ChatGPT only when I have to. The majority of what I have my voice assistants do can be handled by HA Cloud. But I still want ChatGPT to handle requests that it doesn’t understand.

I don’t use automations with fixed voice commands, so I’m not sure what’s wrong here.

In my setup I use custom intents/intent_scripts, which use the language files to tell Assist what to listen for. This works fine with the LLM + local fallback option here.

Edit: which is also Nabu Casa Cloud, so that shouldn’t be the cause for your problems.

Ok. That makes it even more strange.

I thought I’d remove the custom sentence issue from the equation and just use a request that I knew worked without any custom sentences and before ChatGPT was added. THAT was processed locally with ChatGPT set as preferred.

So I’m not just asking @Thyraz, I’m asking anyone out there who might know: why does it need ChatGPT to process a command that the local option DOES KNOW HOW TO HANDLE, given that it handles it fine when ChatGPT is not configured as Preferred? I set HA Cloud as the preferred CA again and asked the same custom sentence. It handled it FINE.

This appears to be a bug. So I added an issue in the GitHub bug tracker.

https://github.com/home-assistant/hassil/issues/202

Usually the issue is a mismatch between the pattern and the phrase the system receives after ASR.
Have you tried copying “What’s the Temp” from the automation and pasting it into the Assist terminal?
For example, in one of the threads a person had been looking for a problem for a long time, and it turned out to be ' (U+0027) vs ’ (U+2019) – these are different characters.
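To illustrate the apostrophe pitfall: the straight and typographic apostrophes look identical on screen but compare unequal. A quick check (just a demonstration in Python, not part of HA):

```python
# The straight apostrophe (U+0027) and the "smart" apostrophe (U+2019)
# look alike but are different characters, so a pattern match can fail.
pattern = "What's the Temp"        # typed with U+0027
heard   = "What\u2019s the Temp"   # ASR output often contains U+2019

print(pattern == heard)            # False: the apostrophes differ

# Normalizing curly apostrophes before comparing removes the mismatch:
normalized = heard.replace("\u2019", "'")
print(pattern == normalized)       # True
```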

Where in the process is the decision made to send these characters? The ONLY difference between the two scenarios is that ChatGPT is or is not set to be the preferred CA.

I make the same request of the SAME VPE. When I look at the debug, both of them indicate that they heard the same request. But one is processed locally and one is not.

Edit: I just went back and looked. Speech to text is being handled by HA Cloud in both cases. Just to be sure it wasn’t simply a matter of me saying the phrase slightly differently, I recorded myself saying “Hey Jarvis {pause} What’s the Temp”.

Then I laid my phone down next to the VPE and played the recording. When I checked the debug, it was handled by ChatGPT. Then I set HA Cloud to be preferred, put my phone in the same spot and played the recording again.

When I check the debug, it WAS processed locally.

First, let’s define that we are only working with the ChatGPT pipeline (one pipeline cannot be a backup for another pipeline). The “Prefer handling commands locally” option allows you to handle built-in and custom intents without ChatGPT.

Like I said, copy the phrase from the automation and check it in the interface. You can also check it in Developer Tools → Assist.
After that, do the reverse operation: speak the phrase into the voice terminal, and if it is processed incorrectly, take the text from the debug menu and check it in the previous tools.

I hope I can shed some light on this situation.

Sentence triggers are ONLY supported when using the “Home Assistant” conversation agent in your Assist settings.

If you are using Claude or ChatGPT, then your sentence triggers simply will never fire. The “Prefer handling commands locally” toggle has no effect on this.

Refer to the documentation for the OpenAI or Anthropic Integrations

Why is this the case? I’m not sure, I haven’t dug into it.

But imo it is definitely a significant drawback to using the LLM agents :frowning:


There are many forks of this integration (for changing endpoints and other improvements), and no such problem there. Is this information up to date?
Unfortunately, I can’t check the functionality in the original integration.

If the OpenAI integration doesn’t support sentence triggers how is ChatGPT accurately processing the requests?

Please be clear: I’m not trying to be argumentative. I’m sincerely trying to understand this.

Edit: I also have one more question about that idea.

Since HA Cloud is the Speech-to-text engine, why isn’t HA checking to see if the local option CAN process it before even passing it to the OpenAI integration? If the local option can process it, the integration shouldn’t even know about it.

This is the text that’s in the automation. Where do I paste it?

It's currently {{ state_attr('climate.t6_pro_z_wave_programmable_thermostat','current_temperature') }} degrees and the thermostat is set to {% if states('climate.t6_pro_z_wave_programmable_thermostat') == 'heat_cool' %} Heat Cool to keep the temperature between {{ state_attr('climate.t6_pro_z_wave_programmable_thermostat','target_temp_low') }} and {{ state_attr('climate.t6_pro_z_wave_programmable_thermostat','target_temp_high') }} degrees.
{% elif states('climate.t6_pro_z_wave_programmable_thermostat') == 'off' %} off.
{% elif states('climate.t6_pro_z_wave_programmable_thermostat') == 'heat' %} heat to keep the temperature above {{ state_attr('climate.t6_pro_z_wave_programmable_thermostat','temperature') }} degrees.
{% elif states('climate.t6_pro_z_wave_programmable_thermostat') == 'cool' %} cool to keep the temperature below {{ state_attr('climate.t6_pro_z_wave_programmable_thermostat','temperature') }} degrees.
{% endif %}


If you're talking about using the Assist option in the dashboard menu, I've tried that. But unless I'm mistaken, that has an option to specify which CA it'll use. Or are you suggesting that it would/should let the local agent handle it if possible?


As I can see, you have everything working fine, but you are confused by the “false” value. Just ignore it. It’s internal information, and the command processing happened locally, according to your custom sentences.

If custom sentences are created via the GUI, the value will be false.
If via YAML, it will be true; for built-in intents it is also true.

If instead of a custom sentence, I just ask it to turn on the light (something that it also can handle locally), it gives a true value.

So I’m finding it confusing

Yeah, that sounds like a bug. But it is not related to the HassIL repository. You should probably create an issue in Core if you want to.
With a title like this:

Custom sentences created in the automations section have a false value of “Processed locally” in the debug menu.


It’s not processing a sentence trigger. It’s just reading you the state of your thermostat entity. That works because the agent has access to your exposed entities, so it can read their states (and change them if you flip the toggle).

That’s you from the beginning of this thread, and it’s what I was referring to – and what the documentation is referring to – about sentence triggers.
