Hi everyone, I am a complete beginner and honestly most of this stuff just flies over my head. I have a desktop PC with a 9950X CPU, 32 GB of RAM at 6000 MT/s, and an RTX 5080 GPU, and I want to use it to learn how to integrate AI reasoning with Home Assistant. However, I don't want to keep it on all the time just for Home Assistant, so I'd like to buy a dedicated piece of hardware such as the Jetson Thor to handle everything AI-related. Has anyone had any success with the older versions of the Jetson? Any pointers on where to start, please? Many thanks.
I had a Spark on pre-order and abandoned it for an eGPU. Yes, the power in the Spark would have been nice, but the raw price simply wasn't worth it. What was it, last I checked, around $3-4K USD?
Putting an eGPU with your favorite Nvidia iron on a NUC capable of TB4 or OCuLink gives a way better price/perf curve IMHO, plus a simple Proxmox-capable bonus farm for the homelab. (I put things like Mealie and Grocy in a VM, driving the barely used CPU cores on my inference box.)
Make it an AI NUC and experiment with AMD or Intel inference at the same time (a sliding memory window can make for a big-context, slow inference engine) on the same box. That's what mine actually runs on, a nuc14ai.
Right now your BIGGEST PROBLEM won't be the inference silicon. Your problem is available memory…
Personally I'd scoop up the biggest-memory GPU I could grab at the cheapest price, because memory prices aren't going down anytime soon and memory is the single biggest roadblock to inference.
For me it all boiled down to this: the Spark or a Thor would have been nice, but for less than half the price I could get way more flexible inference and save the rest of that cash for VRAM. If you go Thor or Spark you're just throwing a big hammer at the problem. If I had the Spark or Thor I'd probably be able to run 100% local now on a 120b model. But I'd also be $2000 poorer right now, and I can ALMOST run completely local as it is. I'll run the delta on OAI for the half year I think it'll take me to optimize my setup to take the final bit local too…
Note I've not said anything about those older rigs? If you're talking about Sparks or Thors, you want something that runs oss20b or better; heck, with a Spark, 120b or better. You won't be happy with something that can only run a 1-4b model at best. Not at all.
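For anyone wondering why model size translates so directly into hardware cost, here's a back-of-the-envelope sketch. The 1.2x overhead factor (for KV cache and runtime) is a rough assumption, and real numbers vary with quantization and context length:

```python
# Rough VRAM estimate for running a locally quantized model.
# The 1.2x overhead factor (KV cache + runtime) is an assumption, not a benchmark.

def vram_estimate_gb(params_billion, bits_per_weight=4, overhead=1.2):
    """Weights at the given quantization width, plus ~20% overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for name, size in [("4b", 4), ("20b", 20), ("120b", 120)]:
    print(f"{name}: roughly {vram_estimate_gb(size):.0f} GB at 4-bit")
```

By that rough math a 20b-class model wants something like 12 GB and a 120b-class model north of 70 GB, which is why the unified memory on a Spark/Thor is tempting, but also why a big-VRAM discrete GPU gets you most of the way there for less.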
So I guess the biggest question is what you expect to get out of it before you decide to throw a chunk of cash at it. Or, if you're just throwing money around, I can DM you my email address…
Is your concern the power usage of your desktop? Because that is already a beast of a machine to run some good models.
Personally I have an Orin NX 16GB (non-Super, it can't handle the heat) and it runs well, but I'm sometimes limited.
Know that everything in Nvidia's lineup of NPUs is a pain in the **s to maintain: custom kernels; you can't install it like a normal PC, it needs to be flashed; updating via APT might break dependencies; and the system is ARM, so keep that in mind when using software or containers.
The Thor would be complete overkill for what I'm using it for, and probably for you too if you're only going to use it for HA.
Only if it could completely replace my home server would I even kind of consider it.
But as Nathan said: if you have the money…