Hey everyone!
I’m building an MCP (Model Context Protocol) server that integrates with Home Assistant so AI assistants (local LLMs or others) can safely interact with HA through a clean tool interface.
Why?
I wanted to go beyond “if-this-then-that” and build a kind of “Safety Officer” agent that can monitor hundreds of sensors, detect anomalies, and trigger workflows - all without any cloud in sight.
What?
- Real-time updates via SSE: subscribe to state changes, automations, service calls
- Device control: lights, climate, covers, media players, fans, locks, etc.
- Automation management API: list/enable/disable/create/delete/trigger + traces
- State querying: getStates, getState, history, search, domain summary
- Security features: token auth + rate limiting
- Transport modes: stdio, SSE, Streamable HTTP
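For the rate limiting mentioned above, here’s a minimal token-bucket sketch - the class name and limits are illustrative, not the repo’s actual implementation:

```typescript
// Sketch of per-client rate limiting: a simple token bucket.
// Names and numbers are examples, not the server's real code.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,     // max burst size
    private refillPerSec: number, // tokens added back per second
  ) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  /** Returns true if the call is allowed, false if rate-limited. */
  tryConsume(): boolean {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// One bucket per bearer token keeps clients isolated from each other:
const buckets = new Map<string, TokenBucket>();
function allowRequest(authToken: string): boolean {
  if (!buckets.has(authToken)) buckets.set(authToken, new TokenBucket(10, 5));
  return buckets.get(authToken)!.tryConsume();
}
```

The nice property of a token bucket over a fixed window is that it tolerates short bursts (up to `capacity`) while still capping sustained throughput.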
How?
My current local stack:
- Home Assistant (300+ sensors)
- Ubuntu machine with a GeForce RTX GPU (16 GB VRAM):
  - MCP Server (TypeScript)
  - Ollama (working - local LLM inference)
  - OpenWebUI (what project is this without a UI?!)
  - LocalAI (in progress)
- Network Attached Storage:
  - Backups
  - Logging
  - Docker containers (TP-Link Omada network controller, Cloudflare, n8n for orchestration + retries + notifications, Grafana)
Architecture:
Home Assistant → MCP Server (tools/policy) → Ollama (inference) → n8n (workflows) → Actions / Notifications
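The MCP Server → Ollama hop in that chain can be sketched as a function that packages a state change into an Ollama `/api/chat` request body - the model name and prompt wording are placeholders, not what the repo ships:

```typescript
// Hypothetical sketch of the MCP Server -> Ollama hand-off: turn a
// Home Assistant state change into a chat request for the local model.
interface StateChange {
  entityId: string;
  oldState: string;
  newState: string;
}

function buildOllamaRequest(change: StateChange) {
  // Shape follows Ollama's POST /api/chat body; "llama3.1" is a placeholder.
  return {
    model: "llama3.1",
    stream: false,
    messages: [
      {
        role: "system",
        content:
          "You are a Safety Officer for a smart home. Assess risk and suggest actions.",
      },
      {
        role: "user",
        content: `Entity ${change.entityId} changed from "${change.oldState}" to "${change.newState}". Is this an anomaly?`,
      },
    ],
  };
}

// The payload would then be POSTed to http://localhost:11434/api/chat,
// and the model's reply forwarded to n8n for workflow handling.
```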
Example use cases
- Proactive safety: water leak / smoke / gas / CO detection → emergency notification + suggested actions (optionally shutoff)
- Security: door open while armed_away → critical alert + escalation logic
- Energy anomalies: huge power spikes → identify likely loads + recommend changes
- Anomaly detection: compare against baseline and alert only if persistent (to reduce spam)
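The “alert only if persistent” idea boils down to requiring N consecutive out-of-baseline readings before firing - a sketch, with illustrative names and thresholds:

```typescript
// Sketch of persistence-gated anomaly alerting: a single spiky reading
// (e.g. a sensor glitch) never alerts; only sustained deviations do.
class PersistentAnomalyDetector {
  private streak = 0;

  constructor(
    private baseline: number,       // expected value from history
    private tolerance: number,      // allowed deviation from baseline
    private requiredStreak: number, // consecutive anomalous readings before alerting
  ) {}

  /** Feed one reading; returns true only once the anomaly has persisted. */
  observe(value: number): boolean {
    const anomalous = Math.abs(value - this.baseline) > this.tolerance;
    this.streak = anomalous ? this.streak + 1 : 0;
    return this.streak >= this.requiredStreak;
  }
}
```

One normal reading resets the streak, so brief glitches get filtered out while a genuinely stuck-high sensor still pages you within `requiredStreak` samples.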
I’d love to hear from you:
- If anyone else is doing this already
- Best practices for safe tool permissions (locks/alarm/etc.)
- SSE subscription patterns that scale well
- Automation API edge cases (traces, reloads, etc.)
- Anyone wanting to test with their own HA instance + local LLMs
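On safe tool permissions specifically, the direction I’m leaning toward is a deny-by-default allowlist with a confirmation step for risky domains - just a sketch, with illustrative names:

```typescript
// Illustrative deny-by-default policy check for HA service calls.
// Risky domains (locks, alarm panels) additionally require explicit
// user confirmation before the tool call goes through.
type Policy = {
  allowedDomains: Set<string>;
  requireConfirmation: Set<string>; // risky domains
};

const defaultPolicy: Policy = {
  allowedDomains: new Set(["light", "climate", "cover", "media_player", "fan"]),
  requireConfirmation: new Set(["lock", "alarm_control_panel"]),
};

function checkServiceCall(
  domain: string,
  userConfirmed: boolean,
  policy: Policy = defaultPolicy,
): "allow" | "confirm" | "deny" {
  if (policy.requireConfirmation.has(domain)) {
    // Never let the model unlock a door or disarm an alarm unattended.
    return userConfirmed ? "allow" : "confirm";
  }
  return policy.allowedDomains.has(domain) ? "allow" : "deny";
}
```

Curious whether others gate on domain, on specific services, or per entity.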
https://github.com/coffeerunhobby/mcp-ha-connect
Right now you can use the MCP server in a couple of ways:
- add it as a tool in ChatGPT, Claude Desktop, or LM Studio - or run it the way the project is intended: with on-premise Ollama and n8n

If there’s interest, I can share setup steps, example workflows, and the “Safety Officer” implementation pattern (couldn’t think of a better name - suggestions welcome).
For me it’s working as intended as we speak: it already detected that my NAS temperature was spiking - without me even mentioning the NAS.
Thanks and happy new year!