I don’t use that service, but here is how I added multiple recipients:
Multiple user notifications
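One common way to fan a notification out to several people is a notify group; here is a minimal sketch, assuming two mobile_app notify services (the service names are placeholders for your own):

notify:
  - platform: group
    name: everyone
    services:
      - service: mobile_app_phone_a   # placeholder notify service
      - service: mobile_app_phone_b   # placeholder notify service

The group can then be targeted as notify.everyone from a function spec or script.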
Works perfectly, thank you very much.
Hi! Thank you for this. I got this working with gpt-4-1106-preview, but as soon as I changed back to gpt-3.5-turbo-1106 (way cheaper) I got this error message:
Should it work with gpt-3.5-turbo-1106?
Hello!
Weird, I haven't tested it for a while, but I have the impression that OpenAI no longer wants to do anything with my equipment, for example turning on the lights:
It says OK, but nothing happens.
Reading a sensor works, though…
There is no error in the log.
EDIT:
I found it! Maybe it can help someone:
In the conversation agent properties, “Maximum function calls per conversation” was set to 0 … !!
Hi @Dino-Tech,
in your automation, you use a service called “timer.set_duration” but it doesn’t exist for me. Am I missing something?
Many apologies, I use the Spook integration, which provides that service. I have been using it since it was released and forgot it was exclusive to that integration. Spook - Not your homie
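For anyone else with Spook installed, a call to it looks roughly like this (the timer entity and duration are placeholders, and the data field name is my assumption, so double-check the Spook docs):

service: timer.set_duration
target:
  entity_id: timer.air_conditioner   # placeholder timer entity
data:
  duration: "00:30:00"               # assumed field name; verify against the Spook docs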
ok no problem
I love this integration. One thing I was struggling with is having the AI start/run my scripts. I get errors that it can’t find the script name. Will I need to add in a spec and define each script?
It is normally not necessary unless you want to use such scripts with a specific template/variable.
For simple script.turn_on calls, you should ensure that the script's entity name, friendly name, or alias matches your request to gpt (or at least is understandable by gpt).
Trial and error is your friend here.
Also, if you are using voice, double check the speech to text output to make sure it got your query right.
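As a sketch of what that means in practice, a script with a descriptive alias like the one below (names invented here) is usually easy for gpt to match from a plain request such as "movie mode in the living room":

script:
  movie_mode:
    alias: Movie mode in the living room   # hypothetical; the alias is what gpt tends to match on
    sequence:
      - service: light.turn_off
        target:
          entity_id: light.living_room     # placeholder entity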
I have several weather entities exposed that I use for different needs. I’ve been having issues with retrieving accurate and consistent weather queries and after many different attempts, I ended up with this:
- spec:
    name: get_current_weather
    description: Get the current weather
    parameters:
      type: object
      properties:
        location:
          type: string
          description: Infer this from home location, (e.g. San Francisco, CA)
        format:
          enum:
            - current forecast
            - hourly forecast
          description: The type of weather forecast information to search for and use.
        entity_id:
          type: string
          description: entity_id
      required:
        - location
        - format
        - entity_id
  function:
    type: template
    value_template: "{{states[entity_id]}}"
If anyone has any cleaner suggestions, please feel free to ping me, but this seems to have solved my issues so far.
Hi guys,
I just wanted to ask: everything is working pretty well for me, the lights, the sensors, the air conditioner, everything except the windows. Most of the time I get the following error:
Something went wrong: Service cover.open not found.
And sometimes it opens. Does anyone know how I can fix this?
Thank you very much
Since the execute_service function requires one of entity_id, area_id, or device_id, you can't call a script that doesn't take one of them.
If you want to call a script that doesn't require an entity_id, you may try something like this.
- spec:
    name: run_script
    description: Run script of Home Assistant
    parameters:
      type: object
      properties:
        service:
          type: string
          description: The service
          enum:
            - script.livingroom_light_on
            - script.livingroom_light_off
      required:
        - service
  function:
    type: script
    sequence:
      - service: "{{service}}"
However, if a script requires any arguments, the LLM doesn't know which arguments are needed for each script.
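One possible workaround, sketched below with an invented script name and variable, is to declare the arguments in the spec itself and forward them via script.turn_on variables:

- spec:
    name: run_goodnight_script
    description: Run the goodnight script with a configurable delay (hypothetical example)
    parameters:
      type: object
      properties:
        delay_minutes:
          type: integer
          description: Minutes to wait before turning everything off
      required:
        - delay_minutes
  function:
    type: script
    sequence:
      - service: script.turn_on
        target:
          entity_id: script.goodnight   # hypothetical script that reads a delay_minutes variable
        data:
          variables:
            delay_minutes: "{{delay_minutes}}"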
Try adding the following to the prompt:
If service cover.open is to be used, use cover.open_cover instead.
See issue
In theory, a script can be an entity_id for the script.turn_on service, can it not?
In practice, I have had gpt run my scripts reproducibly just by asking it to run them without mentioning any more variables. I suspect sticking to the script name/alias is important in that case.
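For reference, the underlying call is just the standard one below (the script name is invented here); the friendly name or alias seems to be what gpt latches onto:

service: script.turn_on
target:
  entity_id: script.goodnight   # placeholder; keep the name/alias close to how you phrase the request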
To go along with the add_item_to_list function, I have the following working to retrieve list items:
- spec:
    name: get_items_from_list
    description: Use this function to get list of items in a ToDo list.
    parameters:
      type: object
      properties:
        list:
          type: string
          description: the entity id of the list to get the items from
          enum:
            - todo.shopping_list
            - todo.house_todo
      required:
        - list
  function:
    type: script
    sequence:
      - service: todo.get_items
        target:
          entity_id: '{{list}}'
        data:
          status: needs_action
        response_variable: _function_result
Be sure to update this part for your lists:
enum:
  - todo.shopping_list
  - todo.house_todo
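The add_item_to_list spec referred to above isn't shown in this thread; a minimal sketch following the same pattern (adjust the list entity ids to your own) might look like:

- spec:
    name: add_item_to_list
    description: Use this function to add an item to a ToDo list.
    parameters:
      type: object
      properties:
        item:
          type: string
          description: the item to add
        list:
          type: string
          description: the entity id of the list to add the item to
          enum:
            - todo.shopping_list
            - todo.house_todo
      required:
        - item
        - list
  function:
    type: script
    sequence:
      - service: todo.add_item
        target:
          entity_id: '{{list}}'
        data:
          item: '{{item}}'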
I found adding this to the Prompt section greatly improved weather reporting.
For weather information, be sure to check the weather.munro_home entity.
For example, prior to adding that it would often say it didn’t have access to weather information for next week. I’d then say “how about looking at the Munro home weather entity” and it would apologise for missing it and then give me the forecast.
Maybe try suggesting something in the prompt so you don’t have to guide it each time it misses the sensor.
Thanks for the tip. However, for me, adding the get_attributes function as documented in the (recently updated) GitHub readme does the trick for the weather.
It also has the advantage of being a very generic function that can serve many requests so more possibilities for the same number of input api tokens.
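For anyone who hasn't looked at the readme yet, the spec there is roughly along these lines (this is from memory, so check the readme itself for the current version):

- spec:
    name: get_attributes
    description: Get attributes of any Home Assistant entity
    parameters:
      type: object
      properties:
        entity_id:
          type: string
          description: entity_id
      required:
        - entity_id
  function:
    type: template
    value_template: "{{states[entity_id]}}"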
Token pricing at OpenAI has just been reduced by half, by the way, and new versions of GPT-3.5 and GPT-4 should be available next week as well.
That was in reply to the query from @yahav. get_attributes worked for me too, but about 30% of the time it would tell me that it didn't have access to data it actually did have access to. In other words, it wasn't 100% reliable.
The weather was just my example, but I think it would apply to a boiler or other devices that were not being accessed reliably, probably due to naming. Adding a guideline in the prompt made it 100% reliable for me.
Thanks for the response. I added
If service cover.open is to be used, use cover.open_cover instead.
to the prompt, but it still gives me the same problem.