Local AI & LLM on Home Assistant Yellow with Llama 3, Phi 3, Gemma 2, and TinyLlama

With the recent release of state-of-the-art large language models (LLMs), there is increased focus on deploying them on-device and on embedded hardware. There is also an opportunity for Home Assistant (HA) to leverage these advancements. Below are results from testing and experiments that deployed these models to an HA Yellow kit with a Raspberry Pi Compute Module 4, which validated that they can be reliably deployed and integrated. The HA Green kit should also work; please let me know if you are able to test it.

One of the main usability concerns is the performance of the LLMs in terms of tokens per second. The rates of the tested models, along with their descriptions, are provided below. The sample sizes (number of tests per LLM) were too small for descriptive statistics but are sufficient for a proof of concept. Subjectively, the 1B and 2B models (TinyLlama and Gemma 2B) were the most fun, while the 3.8B model (Phi 3) had the best balance of accuracy and performance (tokens/s). If we can extrapolate, tokens per second decreases roughly logarithmically as the number of parameters increases.

The verbose flag was used to capture metrics such as the eval rate (tokens per second) reported here. The prompts tested were "Please write a haiku" and "Please write a Python function that implements bubble sort on a list of integers." All responses from all models were accurate.
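For anyone who wants to reproduce these numbers once the Docker setup described below is in place, the same statistics can be printed by passing the verbose flag to `ollama run` (tinyllama is just the example model here):

```bash
# Print generation statistics (including "eval rate" in tokens/s) after each response
docker exec -it ollama ollama run tinyllama --verbose
```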

| Model ID | # of Parameters | Size | Overview |
|---|---|---|---|
| llama3 | 8B | 4.7 GB | The latest LLM from Meta, in its 8B version, which has been shown to run on a Raspberry Pi. |
| llama3:8b-instruct-q2_k | 8B | 3.2 GB | The same model quantized to 2 bits instead of 4. The average tokens per second is slightly higher, and this technique could be applied to other models. |
| tinyllama | 1.1B | 637 MB | At about 5 tokens per second, this was the most performant model, and it still provided impressive responses. |
| phi3 | 3.8B | 2.3 GB | At a little more than 1 token per second, performance was satisfactory and accuracy was high. |
| gemma:2b-instruct | 2B | 1.6 GB | At the threshold of being fun to use, and ranked higher than tinyllama on some leaderboards. |

The following graphs visualize the processor and memory load on the Raspberry Pi when launching tinyllama and asking it a programming question (the model had already been downloaded). Processor utilization spiked to 100% and stayed there until the end of the response. Total RAM utilization was lower than expected and stayed under 3 GB throughout testing, even for the largest model tested, llama3 8B.
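To watch similar utilization live on your own device, one simple option (assuming shell access to the Docker host) is `docker stats` on the Ollama container:

```bash
# Live CPU and memory usage of the running Ollama container
docker stats ollama
```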

Click here to demo tinyllama

I want to thank home-llm for their existing work in this area. However, that setup recommends or requires a GPU, whereas the intent of this project is to use embedded systems like the ones traditionally found in Home Assistant setups and to provide a plug-and-play solution.

This specific solution runs Ollama on a Raspberry Pi, deployed through Docker containers. There were unique challenges, including the relative lack of LLM tooling for ARM systems. Based on my testing, here are the requirements to recreate the results:

  • SSH access to the machine
  • 1.5 GHz quad-core processor
  • 4 GB of RAM
  • 10 GB of free disk space

:warning: This involves running (currently) unsupported software :warning:

An overview of the steps taken:

  1. Gain SSH access to a terminal with the ability to execute Docker commands
  2. Run `docker pull --platform linux/arm64 ollama/ollama` to install Ollama, the software that runs the LLMs
  3. Run `docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama` to start Ollama in the background
  4. Run `docker exec -it ollama ollama run tinyllama` to play with a performant LLM

You should now be able to run a chat interface through your terminal. An important consideration is how to use these LLMs responsibly; this solution will not actively support models that lack standard content filtering.
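Because step 3 publishes port 11434, other services on the network can also reach Ollama through its HTTP API, which is what a future Home Assistant integration could build on. As a rough sketch (the model and prompt are only examples), it can be tested with curl:

```bash
# Query the Ollama HTTP API exposed in step 3 (non-streaming).
# Replace localhost with the address of the HA Yellow if calling from another machine.
curl -s http://localhost:11434/api/generate -d '{
  "model": "tinyllama",
  "prompt": "Please write a haiku",
  "stream": false
}'
```

The JSON response contains the generated text along with timing counters similar to the ones printed by the verbose flag.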


Next Steps

  • Add-ons for Home Assistant use a similar containerized environment (Docker). This makes it feasible to provide an opinionated add-on for running LLMs with a web UI.
  • The LLM will be fed the state of Home Assistant entities and other pertinent information, which the user can then query in natural language (a rough sketch of this idea follows this list).
  • Multilingual support will also be tested.
  • Test out LLaVA so that image inputs can also be provided.
  • Analyze whether the LLMs should have write access to the machine.
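As a rough sketch of the second item above (feeding Home Assistant state to the LLM), Home Assistant's REST API can supply entity states that are then embedded into a prompt. Everything specific here is an assumption rather than part of the tested setup: the entity ID, the long-lived access token in `HA_TOKEN`, and the use of `jq` for safe JSON quoting.

```bash
# Hypothetical sketch: fetch one entity's state from Home Assistant and ask the
# local LLM about it. Assumes a long-lived access token in HA_TOKEN, jq installed,
# and Ollama reachable on port 11434 as set up above.
HA_URL="http://homeassistant.local:8123"
STATE=$(curl -s -H "Authorization: Bearer $HA_TOKEN" \
  "$HA_URL/api/states/sensor.living_room_temperature")

curl -s http://localhost:11434/api/generate \
  -d "$(jq -n --arg state "$STATE" \
    '{model: "tinyllama",
      prompt: ("Here is a Home Assistant sensor state as JSON: " + $state +
               " Answer in one sentence: what is the current reading?"),
      stream: false}')"
```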