Ollama integration (official one): How to send data to the LLM?

Again talking to myself, but I finally found out that there was a known issue with the response_variable not being returned to the LLM. Apparently there was a topic and a bug report about this; it seems to be marked as solved and the topic is closed.

However, I am still unable to get the response_variable through to my LLM (and yes, it does contain the results: I can view them via notify.file and write them to an input_text).
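For reference, here is a minimal sketch of what I understand should work: a script exposed to Assist that ends with a `stop` action carrying a `response_variable`, which the conversation agent is then supposed to receive. The script name, entity IDs, and the weather service are just placeholders from my setup:

```yaml
# scripts.yaml — a script exposed to Assist; the LLM should receive
# the data returned via the final stop's response_variable.
get_forecast:
  alias: Get weather forecast
  description: Returns the daily forecast so the LLM can read it out.
  sequence:
    - action: weather.get_forecasts   # 'service:' on older HA versions
      target:
        entity_id: weather.home       # placeholder entity
      data:
        type: daily
      response_variable: forecast_data
    - stop: "Returning forecast"
      response_variable: forecast_data
```

And this is how I verify the data is actually there before it disappears on the way to the LLM:

```yaml
    # appended to the sequence above, purely for debugging
    - action: notify.file
      data:
        message: "{{ forecast_data }}"
    - action: input_text.set_value
      target:
        entity_id: input_text.llm_debug   # placeholder helper
      data:
        # truncate so we stay under the input_text length limit
        value: "{{ forecast_data | string | truncate(100) }}"
```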

This is getting really disheartening, since every single thing and workaround I have tried seems to end in nothing. So far I have learned:

Ollama:

- can’t view the response_variable (or is it a timing issue? It does say the call was a success, but that the actual result is an empty object, which it is not)
- can only view the initial state of an input_text at the start of the conversation
- cannot run the get_state() command or anything similar, so it is not able to ‘see’ what’s in an input_text or even a template sensor beyond their state at the start of the conversation
- the same seems to apply to the template sensor, and it is also not able to see any custom attributes on that entity
- cannot see exposed automations
- says it can’t list entities?
- does not receive any information when I try the conversation.process action (see the code pasted previously, and the sketch after this list) with the correct model agent_id, since there seem to be multiple ‘entities’ for this agent_id?
- is not able to send me more than four lines of text, because beyond that my speech-to-text stops (it would be nice if this amount could be changed); to clarify, I use the voice assistant on my phone
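
And this is the kind of conversation.process call I mean; the agent_id below is just my guess at the Ollama conversation entity’s ID, yours may be named differently:

```yaml
# A script step that asks the Ollama agent directly and captures its answer.
- action: conversation.process
  data:
    text: "What is the temperature in the living room?"
    agent_id: conversation.ollama   # assumption: the Ollama agent's entity_id
  response_variable: ollama_reply
# The spoken answer should then be available at:
#   ollama_reply.response.speech.plain.speech
```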

Do I just keep talking to myself here, or should I open an issue on GitHub? How can I move forward?