I believe a temperature setting for the Ollama integration would allow more control over LLM hallucinations. The current Ollama integration does not expose the model's temperature value at all.
I agree - the ChatGPT integration can modify top_p and temperature, so I don't understand why the Ollama integration can't. It would also be useful to be able to set the seed, in order to get more consistent responses. I'm using various VLMs and LLMs to recognise which refuse bins have been put out, and the inconsistency between runs means I have to run the same task multiple times to get a reliable result. The AI task settings also need the ability to change the system prompt: telling a VLM or LLM that it's part of Home Assistant causes it to imagine all kinds of scenarios, when you really just want it to concentrate on the task at hand.
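For reference, this is roughly how those options look if you call Ollama directly with its Python client - the host, model name and values here are just placeholders, not what the integration actually does:

```python
# Rough sketch of the sampling options discussed above, using the ollama Python client.
# Host, model name and option values are placeholders - adjust for your own setup.
from ollama import Client

client = Client(host="http://localhost:11434")

response = client.chat(
    model="llama3.2-vision",  # any local model/VLM you have pulled
    messages=[
        # A task-focused system prompt instead of a generic "you are part of Home Assistant" one
        {"role": "system", "content": "You are an image analyst. Answer only the question asked."},
        {"role": "user", "content": "Are the refuse bins out on the driveway?"},
    ],
    options={
        "temperature": 0.1,  # lower = less creative, fewer hallucinations
        "top_p": 0.9,        # nucleus sampling cut-off
        "seed": 42,          # fixed seed for more repeatable responses
    },
)
print(response["message"]["content"])
```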
As I don't think this is going to be fixed any time soon, I've made a Python script that patches the relevant Ollama integration files so you can control the temperature, top_p and seed for any Ollama instance. It has some default settings you can change before applying it, and make sure you back up the files in /usr/src/homeassistant/homeassistant/components/ollama before you run it. NB: it needs to be run from within the Home Assistant docker container - if you don't know how to do that, find that out first.
Anyhoo, I can now set temperature, top_p and seed in the UI for any Ollama agent. Let me know if you need the code or any help getting it working.
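Whatever version of the patch you end up running, copy the integration directory aside first - roughly something like this (run inside the HA container; the actual patching isn't shown because the substitutions depend on your Home Assistant version):

```python
# Minimal sketch: back up the Ollama integration files before patching them.
# Run inside the Home Assistant docker container. The patch itself is not shown
# here - what you substitute depends on your Home Assistant version.
import shutil
from datetime import datetime
from pathlib import Path

OLLAMA_DIR = Path("/usr/src/homeassistant/homeassistant/components/ollama")

def backup_component(src: Path) -> Path:
    """Copy the component directory to a timestamped backup next to it."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = src.with_name(f"{src.name}.bak-{stamp}")
    shutil.copytree(src, dest)
    return dest

if __name__ == "__main__":
    backup = backup_component(OLLAMA_DIR)
    print(f"Backed up {OLLAMA_DIR} to {backup}")
```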
