Hi, I have spent days going round and round in circles with AI, with no cigar. The built-in HA assistant controls things fine, but when I try to use a local LLM running on my server I get GGGGGGGGGGGGGGG as a response. I can use the LLM when I SSH into the machine it's on (a GMKtec K8+). I am using ROCm for GPU acceleration, with 64 GB RAM and 16 GB dedicated to VRAM. Everything functions fine in their respective containers; the problem appears to be HA sending requests to the K8.
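For reference, here is roughly how I test the model when SSH'd in. This is a minimal sketch assuming the default Ollama port 11434 and a model name of llama3 (swap in whatever the HA integration is actually configured with); running it from the HA host with the K8's LAN IP in place of localhost should reproduce the request HA sends:

```python
import json
import urllib.request

# Assumption: default Ollama endpoint; replace localhost with the K8's LAN IP
# when running this from the HA host instead of on the K8 itself.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "llama3",  # assumption: substitute the model HA is configured to use
    "prompt": "Say hello in one sentence.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.loads(resp.read())
        print(body.get("response", ""))
except OSError as exc:
    # A connection error here points at networking/firewall between HA and
    # the K8; garbage output instead would point at the model/ROCm side.
    print(f"Request failed: {exc}")
```

If this returns sensible text from the HA host but HA itself still gets GGGG..., the network path is fine and the problem is elsewhere.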
Can anyone help me with this?
I think it's trying to say:
I am Groot
Seriously I have no idea, just trying to make you feel better.
Glad my Ollama is still working.

