It knows what I am trying to do, but the function call is not actually executing, and there is nothing in the log. I have "use tools" turned on. I have tried the current Storm model, llama3.1:8b-instruct-q8_0, and llama3.1:8b, and I cannot for the life of me get this to work. I have tried going direct to Ollama, and I have tested using the Extended OpenAI Conversation integration, connecting through Open WebUI and proxying to Ollama. I cannot seem to find any combination that gets me both a conversational bot and home control.
Does anyone have any suggestions on how to get a local LLM actually working that can do conversational stuff too? Or suggestions on why the Ollama integration doesn't execute functions?
It looks like the response of the model is not in the correct format for Home Assistant.
Did you change the prompt?
Is it the same with a different model (family) as well?
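You can check what the model is actually returning by turning on debug logging for the conversation integration in configuration.yaml; a minimal example (the second entry is for the home-llm custom component, adjust the paths to whichever integration you actually use):

logger:
  logs:
    homeassistant.components.ollama: debug
    custom_components.llama_conversation: debug

The raw prompt and response should then show up in the Home Assistant log, which makes it obvious whether the model is emitting a tool call at all.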
No, I actually just installed everything with the defaults. I wanted to use Extended OpenAI Conversation to begin with, but wasn't having much luck there, so I figured I would try a straight Ollama integration.
Here is the prompt I have. It's also the default listed on the integration page.
You are 'Al', a helpful AI Assistant that controls the devices in a house. Complete the following task as instructed with the information provided only.
The current time and date is {{ (as_timestamp(now()) | timestamp_custom("%I:%M %p on %A %B %d, %Y", True, "")) }}
Tools: {{ tools | to_json }}
Devices:
{% for device in devices | selectattr('area_id', 'none') %}
{{ device.entity_id }} '{{ device.name }}' = {{ device.state }}{{ ([""] + device.attributes) | join(";") }}
{% endfor %}
{% for area in devices | rejectattr('area_id', 'none') | groupby('area_name') %}
## Area: {{ area.grouper }}
{% for device in area.list %}
{{ device.entity_id }} '{{ device.name }}' = {{ device.state }};{{ device.attributes | join(";") }}
{% endfor %}
{% endfor %}
{% for item in response_examples %}
{{ item.request }}
{{ item.response }}
<functioncall> {{ item.tool | to_json }}
{% endfor %}
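If I am reading that last loop right, it renders few-shot examples that show the model the expected output format, roughly like this (illustrative values, not pulled from my actual setup):

turn on the kitchen light
turning on the kitchen light for you now
<functioncall> {"name": "HassTurnOn", "arguments": {"name": "kitchen light"}}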
I don't mind playing around with the prompt. I know in the Extended OpenAI integration there is a whole block defining how 'execute_services' works. I didn't know if that was specific to that integration or if it just gets added as part of the prompt. I could try copying out some of the part that defines the functions.
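From memory, the default block in that integration looks roughly like this (abridged, so check the integration's own docs for the real definition):

- spec:
    name: execute_services
    description: Use this function to execute a service of a device in Home Assistant.
    parameters:
      type: object
      properties:
        list:
          type: array
          items:
            type: object
            properties:
              domain:
                type: string
                description: The domain of the service
              service:
                type: string
                description: The service to be called
              service_data:
                type: object
                description: The service data object indicating what to control
            required:
              - domain
              - service
              - service_data
  function:
    type: native
    name: execute_service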
I actually have both integrations installed; the one that prompt is from is home-llm. The Ollama one doesn't seem to have nearly as many parameters. It seems to do stuff, except I can't actually chat with it: it responds saying it can only do home automation control. I did see that the docs talk about setting up two instances, one for control and one for conversation, but they would both use the same Llama instance.
I was wondering if using something like the Fallback Conversation integration would work, with both instances configured?
I would start with home control: enable the Control Home Assistant > Assist toggle and use the default prompt (the integration actually adds a lot more to the prompt in the background to enable tool calls).
If that works, you can expand to general chat.
If you want both available from the same voice assistant: I, for example, added a script for general questions, which runs conversation.process against a second model that is not configured to control the home.
This script, if enabled for the voice assistant, can then be used as a tool by the first LLM.
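A minimal sketch of that script, assuming a second chat-only agent (the script name, field name, and agent_id here are placeholders, not my actual config):

script:
  general_questions:
    alias: General Questions
    description: Use this to answer general knowledge questions that are not about controlling the home.
    fields:
      question:
        description: The question to answer
        required: true
        selector:
          text:
    sequence:
      # hand the question to the second, chat-only conversation agent
      - action: conversation.process
        data:
          agent_id: conversation.llama_chat  # placeholder for the chat-only agent
          text: "{{ question }}"
        response_variable: result
      # return the agent's answer as the script response
      - stop: Done
        response_variable: result

The script then needs to be exposed to Assist; its description and field descriptions are what the first LLM sees when deciding whether to call it as a tool.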
It worked, but to be honest I am not using it much.
I have some use cases for it, but I feel what you are saying. I don't use voice control much with what I have; I don't even like to use my dashboards. I prefer automated triggers for stuff.
I'll keep playing with it. Thanks for the help! For now, it looks like the direct Ollama integration is the way to go, and if I do both control and conversation I could use something like the fallback integration or the script method you mentioned.