Using custom logic to turn on lights in both locally handled and LLM-handled requests

Hello

I got a Voice PE and an Atom Echo working just fine: STT provided by Google, local preferred, fallback to LLM (currently Gemini 2.0 Flash).

I can control basic stuff just fine, but I want to create a script that turns on a light with a brightness and color temperature based on the time of day. I already know how to handle this with Home Assistant automations triggered by ‘regular’ triggers (in fact, all my switches already work that way), but I’m unsure how to structure it so that it also works when triggered by voice, particularly when the voice input is handed off to an LLM.
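To illustrate, this is roughly what my existing switch automations look like today (entity IDs, time thresholds, and values are just placeholders from my setup, not a recommendation):

```yaml
# automations.yaml — sketch of a 'regular'-trigger automation with time-of-day logic
- alias: "Living room switch: time-aware light"
  trigger:
    - platform: state
      entity_id: binary_sensor.living_room_switch  # placeholder
      to: "on"
  action:
    - service: light.turn_on
      target:
        entity_id: light.living_room  # placeholder
      data:
        # daytime: bright and cool; otherwise dim and warm (example values)
        brightness_pct: >
          {{ 100 if 7 <= now().hour < 18 else 40 }}
        color_temp_kelvin: >
          {{ 4500 if 7 <= now().hour < 18 else 2700 }}
```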

In other words:

  1. Local execution: I presume I could use a standard Home Assistant automation (as per Automation Trigger - Home Assistant) containing the logic to turn on the lights
  2. LLM execution: I need a way for the external LLM to interpret and execute the same logic (e.g., when the request is phrased a bit differently and local Assist hands it off to the LLM)

I understand I cannot use automations for 2., correct? So what is the proper way to do this so it works in both cases, ideally keeping the logic in just one place?
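For reference, the single-place version I have in mind is roughly a script like the one below, which I imagine could be called from my automations and, if exposed to Assist with a description, maybe also invoked by the LLM (again, entity IDs and values are placeholders; I am not sure this is the right approach, which is what I am asking):

```yaml
# scripts.yaml — one script holding the time-of-day logic
smart_light_on:
  alias: "Turn on living room light (time-aware)"
  description: >
    Turns on the living room light with brightness and color temperature
    appropriate for the current time of day.
  sequence:
    - service: light.turn_on
      target:
        entity_id: light.living_room  # placeholder
      data:
        brightness_pct: >
          {{ 100 if 7 <= now().hour < 18 else 40 }}
        color_temp_kelvin: >
          {{ 4500 if 7 <= now().hour < 18 else 2700 }}
```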

Maybe a related point (as this will likely be needed for testing): where can I see the actual full prompt sent to the LLM and the response it returns? Even enabling debug logging on the LLM integration (Google Generative AI in this case) does not seem to write it to the log.

Thanks a lot for any pointers!