Hi everyone!
I wanted to share an idea for running AI models locally with Home Assistant without relying on a separately managed service like Ollama. The concept is to package a model runtime (e.g., via Docker Model Runner) directly alongside Home Assistant in a Docker Compose stack.
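A minimal sketch of what that stack could look like is below. The `model-runner` image name, port, and volume paths are placeholders for whatever runtime you choose, not a specific Docker Model Runner configuration; only the Home Assistant image is the official one.

```yaml
# Sketch only: Home Assistant plus a local model runtime in one Compose stack.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./ha-config:/config        # Home Assistant configuration on the host
    ports:
      - "8123:8123"
    restart: unless-stopped

  model-runner:
    # Placeholder image: any runtime that serves an HTTP API for local models
    image: example/local-llm-server:latest
    volumes:
      - ./models:/models           # keep model weights on the host
    restart: unless-stopped
```

On the stack's default network, Home Assistant could then reach the runtime at `http://model-runner:8080` (assuming the runtime listens on port 8080).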
This setup would allow Home Assistant to:
• Run AI models locally for automations, text processing, or computer vision
• Keep all data private and self-contained
• Avoid the need for separate integrations or cloud dependencies
Essentially, the model runtime would be just another container in the Compose stack, communicating with Home Assistant via an API. This could simplify experimentation with local AI models while maintaining a clean, reproducible setup.
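For the API side, Home Assistant's built-in `rest_command` integration is one way to send a prompt to the runtime. This is just an illustration and assumes the runtime exposes an OpenAI-compatible `/v1/chat/completions` endpoint on port 8080; the service name, model name, and endpoint are placeholders.

```yaml
# In Home Assistant's configuration.yaml: post a prompt to the local runtime.
# Endpoint, port, and model name are assumptions, not fixed values.
rest_command:
  ask_local_model:
    url: "http://model-runner:8080/v1/chat/completions"
    method: post
    content_type: "application/json"
    payload: >-
      {"model": "llama3",
       "messages": [{"role": "user", "content": "{{ prompt }}"}]}
```

An automation could then call `rest_command.ask_local_model` with a `prompt` field, and, depending on your Home Assistant version, capture the response data for use in later steps.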