HA core in termux 2025? (solved)

Edit: solved! Ended up using termux-udocker and a container.
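For anyone landing here later: I haven't re-verified the exact commands, but the udocker route generally looks something like the sketch below. The Home Assistant image tag and the setup details are assumptions on my part; check the termux-udocker README for the real bootstrap steps.

```shell
# Rough sketch of the container route (names are assumptions, see the
# termux-udocker README). udocker runs containers in user space, so no
# root is needed on the phone.
udocker pull homeassistant/home-assistant:2025.7.3
udocker create --name=ha homeassistant/home-assistant:2025.7.3
# In proot execution mode the container shares the host network, so the
# web UI should come up on http://localhost:8123 after a few minutes.
udocker run ha
```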

Hello folks,

I am trying to install Home Assistant 2025.7.3 on Termux. So far every dependency has installed without hassle except uv, which runs into a maturin build error due to a sys_info quirk. (https://www.reddit.com/r/termux/comments/1elghcf/uv_in_termux/)
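For what it's worth, one possible way to sidestep the uv/maturin issue entirely (untested on my side) is to let plain pip do the install, since Home Assistant is published on PyPI. The rust and binutils packages from the list below are still needed for native wheels that ship no aarch64 Android builds.

```shell
# Hypothetical workaround: skip uv and install straight from PyPI.
# Expect a long build, since several wheels compile from source on Android.
pip install homeassistant==2025.7.3
# First run creates the config directory and starts the web UI on :8123.
hass -c ~/.homeassistant
```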

pkg update
pkg upgrade
pkg install python
pkg install nano
pkg install mosquitto
pkg install nodejs
pkg install openssh
pkg install termux-api
pkg install make
pkg install binutils-is-llvm
pkg install rust
pkg install libjpeg-turbo
pkg install python-greenlet

Anybody with more success? I’m trying to avoid using proot if possible, just for the giggles.

Sorry to ask, but what’s the point of installing a server in termux?
Unless it’s “because I can (could)” ofc :wink:

Ye partially because I can :wink:

I am just starting with HA! I was planning to buy a thin client until I saw a broken Galaxy Z Flip for $60. It has more connectivity than a thin client, and its Adreno GPU runs small models on llama.cpp + OpenCL pretty well (this part I already got working). The hope is that I can build a super ghetto setup that runs both HA and a local voice assistant for under $100 :crossed_fingers:.
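(For anyone curious about the llama.cpp + OpenCL part, a build of that kind typically goes roughly like this. The package names and the GGML_OPENCL flag are assumptions from memory; double-check against the llama.cpp OpenCL backend docs for your version.)

```shell
# Sketch: building llama.cpp with the OpenCL (Adreno) backend in Termux.
pkg install git cmake clang ocl-icd opencl-headers
git clone https://github.com/ggml-org/llama.cpp
cmake -S llama.cpp -B llama.cpp/build -DGGML_OPENCL=ON
cmake --build llama.cpp/build -j
# -ngl 99 offloads all layers to the GPU; model.gguf is a placeholder.
llama.cpp/build/bin/llama-cli -m model.gguf -ngl 99 -p "hello"
```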

For just a bit more than $100 you can get an N100 on Ali, but hey, it would be hypocritical of me to point fingers for doing unnecessary crazy stuff just for fun :slight_smile:

I know, I just want to have the most ghetto setup possible :slight_smile: The Snapdragon can run Llama 3.2 1B at 9 tokens/s, which is pretty decent for Home Assistant. The tricky part is making the integration work…

Not sure ‘ghetto’ is gonna get you very far. For an Assist LLM, a decent-sized context window and tool use are important, else it’s just a storyteller. I’m targeting 100 TOPS or better with 20 tok/s on a tool-use model that supports an 8k context (even quantized that’s probably 8–12 GB, maybe 4 if you’re reeeeeeeally judicious about what you expose…)

(The context window is probably gonna be your Achilles heel. How much RAM have you got for context? Remember you’ve gotta describe all this stuff to your LLM, and that context is as important as, or possibly more important than, what model you’re running. Context space eats RAM fast.)
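To put rough numbers on the context-vs-RAM point: the KV cache grows linearly with context length. Assuming a Llama-3.2-1B-style config (16 layers, 8 KV heads, head dim 64, fp16 values; these figures are my assumption, not from the thread), an 8k context costs roughly:

```shell
# Back-of-envelope KV-cache size: 2 (K and V) x layers x kv_heads
# x head_dim x context_length x bytes per fp16 value.
layers=16; kv_heads=8; head_dim=64; ctx=8192; bytes=2
kv=$((2 * layers * kv_heads * head_dim * ctx * bytes))
echo "KV cache: $((kv / 1024 / 1024)) MiB"   # prints "KV cache: 256 MiB"
```

So on a 1B-class model the weights, not the context, dominate RAM; a bigger tool-use model changes that picture quickly.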

Hey guys, it’s just a fun project. I don’t exactly have a 4-bedroom mansion, and I plan to run very simple automations at the start. I just want to see how far you can push $60. Yes, I have a GPU machine and I work with Linux servers, but that would be less of a conversation starter :wink:

Oh, I feel ya man, I see what you’re doing. Just be realistic: that’s what I’m finding to be functional for LLM work, basically the equivalent of a 3090 Ti / 12 GB. Unfortunately it’s a floor, so below that an LLM is barely functional, not because of speed but because of RAM… because of context. So temper your expectations against that, and have a great time doing it.

Not wanting you to fail, just be real about the expectations when you get there. There’s a minspec you gotta hit. I’d plan on running Ollama on that GPU you have, called over the network from this phone. That seems way more realistic…

I know. I’m running Llama 3.2 1B. It can respond in ~10 s in llama.cpp using OpenCL and interact with HA through home-llm on the phone. There are simpler models that acon96 made, based on TinyLlama and tuned for HA tool use, that can respond in ~5 s and handle basic tasks. Being able to turn a bulb on and off as a party trick is good enough for me. I’m used to dumb assistants like Google Assistant and am not expecting it to work with a large context or remember complex automations for me.

@lankef - Hi, I am also trying to install an HA server on an old Android device and am also stuck on this error. I came across this post and it seems you were able to get it working with termux-udocker and a container. Could you please share the steps you followed to get it working?
P.S. I am relatively new to Termux but can follow the steps once a little bit of help is provided.