Llama 2 for voice assistant

Hello

I had the idea of using Llama 2 as a voice assistant that can turn my lights in the house on and off. Does anyone have an idea how to do this?

Bye fynn

There are a number of missing details, like

  • why do you want an LLM? are you doing anything other than light control?
  • how are you going to start talking to the LLM? your phone? a dedicated device?

There’s already a lot of good infrastructure for voice, and there’s more in the making, but to apply it you have to figure out the specifics. If all you want is to be able to say something like “home assistant, turn on the lights”, read through the $13 voice assistant for Home Assistant guide (make sure you set up all the prereqs first, though).

I want to use Llama 2 on my tablet and phone (Android), for now only to turn on my lights.

And it has to be free

If all you want is to turn on the lights, HA already has an intent-based Assist for you to use. It won’t use an LLM, but an LLM is probably overkill for that. Just go to your HA app’s settings and find the option to set it as the default assistant.
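Whatever front end you end up with (the built-in Assist, an LLM, or a plain script), turning on a light ultimately comes down to calling a Home Assistant service. Here's a minimal sketch using HA's REST API, assuming a long-lived access token; the host name, token, and `light.living_room` entity ID are placeholders you'd replace with your own:

```python
# Minimal sketch of calling Home Assistant's REST API to toggle a light.
# The host, token, and entity_id are placeholders -- substitute your own.
import json
import urllib.request

def build_light_call(host: str, token: str, entity_id: str, on: bool = True):
    """Return (url, headers, body) for HA's /api/services light endpoint."""
    service = "turn_on" if on else "turn_off"
    url = f"http://{host}:8123/api/services/light/{service}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"entity_id": entity_id}).encode()
    return url, headers, body

def call_service(url, headers, body):
    # Fires the actual request -- only works against a reachable HA instance.
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    url, headers, body = build_light_call(
        "homeassistant.local", "YOUR_LONG_LIVED_TOKEN", "light.living_room"
    )
    print(url)  # http://homeassistant.local:8123/api/services/light/turn_on
```

Anything that can make an HTTP request (including an LLM tool-calling layer) can sit in front of this.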

But I also want it to be able to answer questions and reply in full sentences. And with the included one I always have to write exactly the same thing, and if I make a mistake it won’t work. That’s one of the reasons I want Llama 2 or another AI like ChatGPT.
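The brittleness described above (one typo and the command fails) can be illustrated, and slightly softened, with a few lines of stdlib fuzzy matching. This is only a sketch of the gap an LLM would close, not anything Home Assistant actually ships; the command table and service names are made up for the example:

```python
# Toy command matcher: exact matching is brittle (like a rigid intent),
# while difflib's fuzzy matching tolerates small typos. An LLM generalizes
# much further; this just illustrates the gap. The command table and
# service names are invented for the example.
from difflib import get_close_matches
from typing import Optional

COMMANDS = {
    "turn on the lights": "light.turn_on",
    "turn off the lights": "light.turn_off",
}

def match_command(text: str) -> Optional[str]:
    text = text.lower().strip()
    if text in COMMANDS:  # exact hit, the only thing a rigid matcher allows
        return COMMANDS[text]
    # fall back to the closest known sentence, if it's close enough
    close = get_close_matches(text, COMMANDS, n=1, cutoff=0.8)
    return COMMANDS[close[0]] if close else None

print(match_command("trun on the lights"))  # typo still resolves: light.turn_on
print(match_command("play some music"))     # unrelated text: None
```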

You can already switch to an OpenAI-powered conversation agent, but that’s probably not quite what you want: you get an LLM, but not the ability to control lights out of the box (there are custom integrations that let it call services, but it’s still OpenAI rather than Llama 2).

You can already write a custom integration to do that, but you probably don’t want to figure out how all of that works from scratch.

For now, I’d suggest checking out Building a fully local LLM voice assistant to control my smart home | Hacker News. tl;dr: it’s possible now with custom components, but later in the year you might see this officially in Home Assistant!

Thanks, I will check it out.

You should check out:

FutureProofHomes on YouTube. He used a custom integration from GitHub to do what you are seeking.

You get $5.00 of free tokens from OpenAI, but after that you have to pay for them. You can limit your token use by changing the prompts you feed ChatGPT.

He also set up a local pipeline with Whisper and Piper and an open-source local LLM, but it runs much slower.
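Conceptually, that local pipeline is just three stages chained together: speech-to-text, an LLM, then text-to-speech. Here's a sketch of the plumbing with the engines passed in as callables; Whisper, a local Llama 2 (e.g. via llama.cpp), and Piper would be the real engines, but the `fake_*` stand-ins below are invented so the flow runs without any models installed. This mirrors the idea, not Home Assistant's actual Assist pipeline API:

```python
# Wiring for a local voice pipeline: STT -> LLM -> TTS. Each engine is a
# plain callable, so you could plug in Whisper (audio -> text), a local
# llama.cpp model (text -> reply), and Piper (reply -> audio). The fake_*
# engines below are stand-ins so the sketch runs without any models.
from typing import Callable

def run_pipeline(
    audio_in: bytes,
    stt: Callable[[bytes], str],
    llm: Callable[[str], str],
    tts: Callable[[str], bytes],
) -> bytes:
    transcript = stt(audio_in)   # e.g. Whisper transcription
    reply = llm(transcript)      # e.g. Llama 2 writing a full sentence
    return tts(reply)            # e.g. Piper synthesizing speech

fake_stt = lambda audio: "turn on the lights"
fake_llm = lambda text: "Sure, turning on the lights." if "lights" in text else "Sorry?"
fake_tts = lambda text: text.encode()

out = run_pipeline(b"\x00", fake_stt, fake_llm, fake_tts)
print(out.decode())  # Sure, turning on the lights.
```

Swapping a stand-in for a real engine doesn't change the wiring, which is why the local setup is slower but otherwise interchangeable with the cloud one.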