Hi,
I recently submitted my entry to the Nvidia RTX home automation hackathon.
The plugin is available at:
The GitHub page can be viewed via the homepage link on the mod.io page above.
It uses your RTX graphics card to run LLMs that interpret your natural-language chat/voice requests to query the state of, or send commands to, entities, lists, etc… on your Home Assistant server.
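For context, Home Assistant exposes a local REST API that a plugin like this can use to read entity state. The sketch below is illustrative only — the server URL, token, and entity ID are placeholders, and this is not necessarily how Homie implements it:

```python
import json
import urllib.request

# Hypothetical values -- substitute your own Home Assistant URL,
# long-lived access token, and entity ID.
BASE_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

def state_url(base_url: str, entity_id: str) -> str:
    """Build the Home Assistant REST endpoint for one entity's state."""
    return f"{base_url.rstrip('/')}/api/states/{entity_id}"

def get_entity_state(base_url: str, token: str, entity_id: str) -> dict:
    """Fetch an entity's current state, e.g. a thermostat temperature."""
    req = urllib.request.Request(
        state_url(base_url, entity_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a reachable Home Assistant server):
# state = get_entity_state(BASE_URL, TOKEN, "climate.thermostat")
# print(state["state"], state.get("attributes", {}))
```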
All data related to your Home Assistant server and your requests stays on your computer, since everything runs locally.
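This works because Ollama serves models over a local HTTP endpoint (http://localhost:11434 by default), so inference requests never leave the machine. A minimal sketch of such a call — the model name and prompt are placeholders, not Homie's actual internals:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its response text."""
    data = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires Ollama running with the model pulled):
# print(generate("llama3", "How many of these lights are on? ..."))
```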
You can for example ask the following:
- hey homie, what’s the thermostat temperature?
- hey homie, open the router dashboard
- hey homie, will it rain soon?
- hey homie, how many lights are on?
It uses Ollama to manage the AI models. You don't need to do anything special on your Home Assistant server to run Homie. Assuming you already have a Home Assistant server up and running, you need the following:
- Nvidia RTX GPU
- Nvidia App (Windows only) installed so that you can install Nvidia Project G-Assist
- Ollama installed with the desired AI models downloaded (run download_default_ollama_models.bat to get up and running quickly)
- The Homie plugin's config.json edited with the Home Assistant server info and the Ollama server info (more info in the project description)
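For reference, the config.json pairing would look something like the sketch below; the key names here are assumptions for illustration, so check the project description for the real schema:

```json
{
  "home_assistant": {
    "url": "http://homeassistant.local:8123",
    "token": "YOUR_LONG_LIVED_ACCESS_TOKEN"
  },
  "ollama": {
    "url": "http://localhost:11434",
    "model": "llama3"
  }
}
```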
Hopefully this plugin makes interacting with Home Assistant more natural and easier for everyday use.
I would love to hear your thoughts and feedback.