This one’s working for me. With it I play Amazon Music songs on my Echo devices using the Alexa Media integration:
- spec:
    name: play_music
    description: Play music on Echo devices. If I only give you a song name, you must look up the most probable artist who sings the original song and confirm with me that I want to listen to it.
    parameters:
      type: object
      properties:
        media_player_id:
          type: array
          items:
            type: string
          description: These are the Echo device entity IDs
        media_id:
          type: string
          description: The query to Alexa with the name of the artist, song, album or playlist. If I only give you the song name, you must put the name of the artist inside the query. Remember, I will always want the artist who sings the original song.
        media_type:
          type: string
          optional: true
          enum: ["artist", "album", "playlist", "track", "radio"]
          description: This is the media service to be used, which in this case must always be AMAZON_MUSIC
      required:
        - media_player_id
        - media_id
  function:
    type: script
    sequence:
      - service: media_player.play_media
        target:
          entity_id: "{{ media_player_id }}"
        data:
          media_content_id: "\"{{ media_id }}\""
          media_content_type: AMAZON_MUSIC
Remember to fill in the list of your Echo device entity IDs in the description of media_player_id.
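If the function never seems to fire, it can help to test the underlying service call on its own in Developer Tools → Services, independently of the LLM. A minimal sketch (the entity ID and query below are placeholders, substitute your own):

```yaml
service: media_player.play_media
target:
  entity_id: media_player.living_room_echo  # placeholder - one of your Echo entity IDs
data:
  media_content_id: "Daft Punk"  # placeholder query
  media_content_type: AMAZON_MUSIC
```

If this call works but the assistant version doesn’t, the problem is on the function-calling side rather than in Alexa Media.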
Hello everybody, and many thanks for this great work.
I must have forgotten something, because it doesn’t want to work properly here…
I’m using LM Studio, which is OpenAI-compatible; the service is able to connect, and Assist gives me answers to my questions (sometimes like a m***, but it answers).
LM Studio (I tried a few models: openchat-3.5, Mistral, Luna AI) is installed on my own computer (Win10 Pro) and HA is virtualized with Hyper-V, but I don’t think this really matters, as it usually works very well.
I tried LocalAI on Linux in Hyper-V, but it’s really, really slow with CPU processing, whereas my 4070 Ti is much faster (unfortunately I’m not able to use it in Hyper-V).
I exposed a few entities (too many at the beginning, so I reduced them to my 4 lights, the temperature and humidity sensors, and myself), and the system is able to tell me the temperature or which light is on or off.
But when I ask it to switch a light, the answer is “OK, I’m doing it”, but nothing happens…
The built-in Home Assistant Assist is able to control all my exposed entities.
This must be really simple, but I’ve been on it since yesterday and don’t know what else to try…
Any help would be really appreciated.
(and sorry for all the English mistakes… another Frenchie online…)
Is it possible to make sequential or nested calls to the OpenAI API? I’m trying to improve the list management capabilities, and would love to add the ability to have the assistant read an existing list and intelligently merge new items based on quantity and potential name variants.
For now, I have a modified list update function that merges adding, removing, and checking off an arbitrary number of items. Calls can include any combination of those functions and new or existing items.
- spec:
    name: add_items_to_shopping_cart
    description: Add, remove, or check off items in the shopping cart
    parameters:
      type: object
      properties:
        items_to_add:
          type: array
          items:
            type: string
          description: List of items to be added to the cart
        items_to_remove:
          type: array
          items:
            type: string
          description: List of items to be removed from the cart
        items_to_check:
          type: array
          items:
            type: string
          description: List of items to be checked off on the cart
      required:
        - items_to_add
        - items_to_remove
        - items_to_check
  function:
    type: script
    sequence:
      - repeat:
          count: "{{ items_to_add | length }}"
          sequence:
            - service: todo.add_item
              data:
                item: "{{ items_to_add[repeat.index - 1] }}"
              target:
                entity_id: todo.shopping_list
      - repeat:
          count: "{{ items_to_remove | length }}"
          sequence:
            - service: todo.remove_item
              data:
                item: "{{ items_to_remove[repeat.index - 1] }}"
              target:
                entity_id: todo.shopping_list
      - repeat:
          count: "{{ items_to_check | length }}"
          sequence:
            - service: todo.update_item
              data:
                item: "{{ items_to_check[repeat.index - 1] }}"
                status: "completed"
              target:
                entity_id: todo.shopping_list
    mode: single
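As a side note, the count-based loops above can be written more compactly with `repeat: for_each`, which Home Assistant scripts have supported since 2022.5. An equivalent sketch for the add step (untested, but it should behave the same as the indexed version):

```yaml
- repeat:
    for_each: "{{ items_to_add }}"  # iterate the list directly
    sequence:
      - service: todo.add_item
        data:
          item: "{{ repeat.item }}"  # current element, no index arithmetic
        target:
          entity_id: todo.shopping_list
```

The same pattern applies to the remove and check-off loops.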
You may want to create a custom spec for that. I altered this one to use my weather sensors; the same could be done for yours. You could alter it to report room conditions, for example.
- spec:
    name: get_current_weather
    description: Get the current weather using sensor.openweathermap_temperature and sensor.openweathermap_condition
    parameters:
      type: object
      properties:
        condition:
          type: string
          description: The weather condition.
        unit:
          type: string
          enum:
            - fahrenheit
  function:
    type: template
    value_template: The temperature is {{unit}} and it is {{condition}}.
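A room-conditions variant might look like the sketch below. It assumes your sensors follow a `sensor.<room>_temperature` / `sensor.<room>_humidity` naming scheme; those entity IDs and the function name are placeholders, so adjust them to match your setup:

```yaml
- spec:
    name: get_room_conditions
    description: Get the current temperature and humidity of a room
    parameters:
      type: object
      properties:
        room:
          type: string
          description: The room to report on, e.g. living_room
      required:
        - room
  function:
    type: template
    value_template: >-
      The temperature in {{ room }} is
      {{ states('sensor.' ~ room ~ '_temperature') }} degrees and the humidity is
      {{ states('sensor.' ~ room ~ '_humidity') }} %.
```

The template reads the sensor states directly, so the model only has to supply the room name.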
I just installed Extended OpenAI Conversation and have been debugging for a considerable time. It’s extremely impressive and I would really like to get it working. At the moment ChatGPT can give me information on devices, but it cannot control them. I am using version 1.0.0 and the latest Home Assistant (core 2024.1.3, supervisor 2023.12.0, OS 11.4).
One observation is that I have no functions box in the options section. The options contain the prompt template, the GPT version, the maximum tokens, top P, and temperature, and nothing else. I have no idea why the ‘Maximum Function Calls Per Conversation’ and ‘Functions’ entries do not exist.
I am using HACS and have checked that I am downloading Extended OpenAI Conversation rather than OpenAI Conversation.
This integration (extended_openai_conversation) relies primarily on OpenAI’s function-calling behavior.
LocalAI is one of the local LLM servers that supports function calling.
Even if LM Studio supports function calling, your model might not.
Please check that.
Did you add Extended OpenAI Conversation from devices and services?
It seems you’re using OpenAI Conversation rather than Extended OpenAI Conversation.
Thanks for your answer Jekalmin.
I’m beginning to think that LM Studio doesn’t support this feature. I have tested different LLMs (openchat-3.5, Luna AI, Mistral Instruct v0.2, etc.) and the result is always the same.
LocalAI is working on a VM, but it is soooo slow (even with 12 cores and 10 GB of RAM)… I’ll try a dual boot on a test machine with LocalAI and CUDA drivers.
Just to know: LocalAI is working, but has anyone tried Ollama? It also seems to be very powerful.
Great work, I am impressed, and I cannot code a line. I have tried using ChatGPT to create a function to add events to the calendar but cannot get it to work.
Has anyone tried to code one?
Function to add an event to the local calendar
spec:
  name: add_event
  description: Add an event to the local calendar
  parameters:
    type: object
    properties:
      title:
        type: string
        description: The title of the event
      start_time:
        type: string
        description: The start time of the event in ISO 8601 format (e.g., 2024-01-17T10:00:00)
      end_time:
        type: string
        description: The end time of the event in ISO 8601 format (e.g., 2024-01-17T11:00:00)
      description:
        type: string
        description: The description of the event
    required:
      - summary
      - start_date_time
      - end_start_time
      - description
function:
  type: script
  sequence:
Have you tried running that service call in developer tools? It seems to be missing the target entity.
There might be indentation issues as well, we cannot tell from your post as it is not formatted at all.
I fixed a few typos.
As @Rafaille said, you should replace “YOUR_CALENDAR_ENTITY” with your existing calendar.
- spec:
    name: add_event
    description: Add an event to the local calendar
    parameters:
      type: object
      properties:
        title:
          type: string
          description: The title of the event
        start_time:
          type: string
          description: The start time of the event in ISO 8601 format (e.g., 2024-01-17T10:00:00)
        end_time:
          type: string
          description: The end time of the event in ISO 8601 format (e.g., 2024-01-17T11:00:00)
        description:
          type: string
          description: The description of the event
      required:
        - title
        - start_time
        - end_time
        - description
  function:
    type: script
    sequence:
      - service: calendar.create_event
        data:
          summary: "{{title}}"
          start_date_time: "{{start_time}}"
          end_date_time: "{{end_time}}"
          description: "{{description}}"
        target:
          entity_id: YOUR_CALENDAR_ENTITY
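Before wiring this up to the assistant, it’s worth confirming the raw service call works in Developer Tools → Services, which rules out problems outside the LLM. A sketch with placeholder values (the calendar entity and event details are examples only):

```yaml
service: calendar.create_event
target:
  entity_id: calendar.personal  # placeholder - your calendar entity
data:
  summary: Dentist
  start_date_time: "2024-01-17T10:00:00"
  end_date_time: "2024-01-17T11:00:00"
  description: Annual check-up
```

If the event appears in the calendar here but not via the assistant, the issue is in the function spec or the model’s function calling, not the service.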
Thanks for reading anyway.
I don’t know how to format code here; I created my account today.
I am still learning, but will do my best to have the correct format for my next post.
I wanted to ask, is there anything I can do to improve its accuracy?
For example, sometimes when I ask it what the temperature in my boiler is, it gives me the correct answer. But sometimes it says it doesn’t have that information. The same thing happens with other lights or sensors in my home.
I am willing to check anything I need to in order to solve the problem.
I also wanted to mention that I have blind people living in my home, and your product is absolutely perfect for them.