Feature Request: Integrate with Docker Model Runner for Local AI Models

Hi everyone :wave:

I wanted to share an idea for running AI models locally with Home Assistant without relying on external services like Ollama. The concept is to package a model runtime (e.g., via Docker Model Runner) directly alongside Home Assistant in Docker Compose.

This setup would allow Home Assistant to:
• Run AI models locally for automations, text processing, or computer vision
• Keep all data private and self-contained
• Avoid the need for separate integrations or cloud dependencies

Essentially, the model runtime would be just another container in the Compose stack, communicating with Home Assistant via an API. This could simplify experimentation with local AI models while maintaining a clean, reproducible setup.
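To make the idea concrete, here's a minimal sketch of what such a Compose stack might look like. It uses Ollama as a stand-in for the model runtime (Docker Model Runner's Compose wiring differs by Docker version), and the service names, image tags, volume paths, and ports are illustrative rather than a tested configuration:

```yaml
# docker-compose.yml — illustrative only; adjust images, paths, and ports to your setup
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./config:/config
    network_mode: host          # typical for Home Assistant so device discovery works
    restart: unless-stopped

  ollama:                        # stand-in for whatever local model runtime you pick
    image: ollama/ollama:latest
    volumes:
      - ./ollama:/root/.ollama   # persists downloaded models between restarts
    ports:
      - "11434:11434"            # Ollama's default API port
    restart: unless-stopped
```

With host networking on the Home Assistant container, it can reach the runtime at http://localhost:11434; if both services instead share a user-defined bridge network, the service name (http://ollama:11434) works as the URL.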

Please move this here:

Feature requests aren't picked up from this part of the forum.

You can run Ollama in a local Docker container and connect that to HA.
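For the "connect" part, Home Assistant has an Ollama integration that you point at the server URL through the UI. If you just want to call the model's API directly from automations, a rest_command is another option. A hedged sketch, assuming Ollama is reachable at localhost:11434 (the command name and model are placeholders):

```yaml
# configuration.yaml — illustrative only
rest_command:
  ask_local_model:
    url: "http://localhost:11434/api/generate"
    method: post
    content_type: "application/json"
    payload: >
      {"model": "llama3.2", "prompt": "{{ prompt }}", "stream": false}
```

An automation can then call rest_command.ask_local_model and pass a prompt in the service call data.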

Are you asking for something different?