Struggling with NUC install


I’ve been using Home Assistant for years on a Pi and have decided to make the jump to a NUC.

I installed the generic x86 image on the NUC without issue and the machine boots fine; however, I can’t access the web interface.

homeassistant.local resolves fine; I’ve also tried the IP address of the machine, with no luck.

I can ping the machine, and there are no firewalls or anything else involved.

I ran nmap and can’t see any open ports on the machine, and when I “login” from the CLI I can’t see any ports mapped when I do a docker ps.

Is this normal? Have I missed something?
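For anyone hitting the same wall, the checks described above can be scripted roughly like this (192.168.1.236 is the address from this thread; adjust to your own NUC). Note that an empty ports column in `docker ps` is actually expected on HAOS, since its containers use host networking rather than published ports:

```shell
#!/usr/bin/env bash
# Quick reachability probe without nmap, using bash's built-in /dev/tcp.
# The address below is the one from this thread -- adjust to your NUC.

port_open() {  # usage: port_open HOST PORT -> exit 0 if something listens
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open 192.168.1.236 8123; then
  echo "8123 open - the web UI should load"
else
  echo "8123 closed/filtered - Core is not up (or still installing)"
fi
```

On the NUC itself (type `login` at the HAOS console), `netstat -tln` is the better check for whether anything is bound to 8123, since `docker ps` will show no port mappings either way.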

Just to be sure: you ARE trying to reach HA on port 8123?
So:
http://192.168.1.236:8123
or:
homeassistant.local:8123

Absolutely.

I’ve logged in from the CLI and looked at the Docker logs; the Home Assistant container has this…

Uff… a bunch of errors…
I’d start over and follow the INSTRUCTIONS. Note that the NUC must have internet access in order to complete the HA installation.

I’m on my third install; the NUC can ping google.com, so internet access is confirmed.

Did you wait long enough? It can take as much as 15-20 minutes for HA to complete installation; some have reported even longer.
I guess the SSD is OK…?

The SSD is OK; I left it overnight and still no joy.

Considering going back to the Pi at the moment, or possibly using Docker; I really want HAOS over Docker though.

Well, since you have a NUC, which is “total overkill” for HA alone (as they say…), one option is to install Proxmox (a VM and container manager), then install HA in a VM inside it. Then you can use your NUC for other things as well (tons of stuff; HERE is a list of scripts).
I run Proxmox on my NUC and besides HA I also run an ad blocker, a separate ESPHome instance, a Part-DB inventory, and a second HA as a test instance.

Your screenshots are almost impossible to read, but it looks like your issue is in the first image, which shows you have no IPv4 address. Use Ethernet only; Wi-Fi is strongly discouraged for the Home Assistant server host computer.

Most Intel NUCs boot from an M.2 SSD. The easiest way to install Home Assistant on a NUC is to put the SSD into a USB adapter, flash the generic x86 image to the SSD, and reinstall the SSD into the NUC. Done.

I don’t understand the fascination with Proxmox.
I run HAOS on bare metal.

  1. Flash the HAOS image to the boot drive.
  2. Reboot.
  3. That’s it. Done.

No learning curve for Proxmox, Docker, or VMs. No USB or network issues. No managing disk or memory allocations.
The downside of bare metal? Your Home Assistant host computer is just that: dedicated to one task. It just works. I measure my uptime in years.

If the user needs to run other programs on their Home Assistant server that aren’t available as an add-on, migrating to Proxmox can always be a solution later.


Sorry about the image; it’s hard to get a clear one with this setup.

It has an IPv4 address (192.168.1.236) on the first Ethernet adapter.

I did exactly this: I booted from a live Ubuntu USB image and imaged the M.2 NVMe with HAOS.

The thing is on the network but can’t be accessed from the network; it’s like the port is not mapped in Docker. I’m keen to run bare metal, and I’d also rather swerve Proxmox etc.
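For reference, the “image it from a live Ubuntu stick” route boils down to something like this. The image filename and device node below are examples only; double-check the target with `lsblk`, because `dd` will happily destroy the wrong disk:

```shell
#!/usr/bin/env bash
# flash_haos IMAGE TARGET: decompress a HAOS .img.xz straight onto a disk.
# Run as root on the live system; both arguments here are examples only.

flash_haos() {
  xz -dc "$1" | dd of="$2" bs=4M conv=fsync 2>/dev/null && sync
}

# On the live Ubuntu system, something like:
#   lsblk                                             # confirm the NVMe node
#   flash_haos haos_generic-x86-64.img.xz /dev/nvme0n1
```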

Well, you also don’t buy a bus for 45 people and then drive to work and to the store alone in it, do you? As much as I also like bare-metal HAOS, it’s really a waste, since the CPU on a NUC runs at 1-2%… if you have the power, why not use it?
It’s not ‘fascination’, it’s plain usefulness. And if I didn’t update regularly (some updates require a reboot), I could also count my uptime in years…


I think I’ve nailed the problem. It seems it was a permissions issue on my main machine: Firefox didn’t have permission to “discover local network devices” or something. I ran curl -v homeassistant.local:8123 and got the 302 redirect to the onboarding setup.
I then installed Chrome, granted it all the permissions, and finally managed to load the page…
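The curl check that cracked it generalizes nicely: if the server answers at all, the HTTP status code tells you whether the problem is the server or the client. A sketch (the URL is the one from this thread):

```shell
#!/usr/bin/env bash
# http_status URL: print just the HTTP status code (curl prints 000 if
# the host is unreachable).

http_status() {
  curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1"
}

# Against a freshly installed HA:
#   http_status http://homeassistant.local:8123/
# A 302 (redirect to onboarding) or 200 means Core is serving fine and any
# remaining trouble is client-side, e.g. browser permissions.
```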

Thank you all for your input, I really appreciate it…


Great! Glad you’ve got it going. And thank you for posting what you found, so others stumbling onto this thread can learn from it!

Speaking of others stumbling onto this thread, I have a couple of observations which may put some folks’ minds at ease.

First of all, I’m with @stevemann on the value of having a dedicated production HA-only device. Nothing against virtualization, it’s just that there are also good reasons to separate functionality. Hardware is cheap. The big cost is in our time administering it, and the risk of downtime for other, unrelated systems while we’re tinkering with one.

Here’s where I’m starting to diverge from the “conventional wisdom.” I used to believe “wired is always better.” But I really can’t justify that any more.

Wireless throughput and reliability can be every bit as good as wired, at least for the sort of demands a typical HA implementation will place on it. My TVs are on WiFi these days. I’m typing this now on my WiFi-connected laptop. I’m a few paces from a network switch with open ports. I could string a wire to it in less than a minute. But in the past 10 years, I’ve never found a single reason to do so.

I’ll add a caveat that, if you plan on running a lot of streaming video or a lot of audio through your HA box, you still might want to go wired. But there are systems dedicated to that sort of functionality, both commercial and open-source, which do it much better than HA. Just keep them separate. That way you’re not building in dependencies and adding points of failure.

You really can’t compare your PC with HA. Your PC can quite easily and without much damage (even without you knowing it) lose its connection for, say, 1 second or 5 seconds, while if HA loses its connection even for 1/10 of a second, some event or automation result can fail. Imagine that Mr. Murphy comes to visit and at that exact moment HA turns on your heating. If a Wi-Fi glitch occurs, the heating won’t be turned on and you won’t know unless you look at the HA screen.

True, but electricity is not. Not that I watch every Wh consumed, but some Intel NUCs can be quite hungry (my old Skull Canyon is…).

The second good thing is that I can restart the whole HAOS remotely (by restarting the VM). Although very rarely, it can happen that HA locks up to the point of needing to restart the whole machine (HAOS). It happened to me once in my HA lifetime: after numerous power dropouts, Proxmox came back up, but HA didn’t. And of course Mr. Murphy was present, so I was 200 km away…
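A hedged sketch of what “restart the VM remotely” looks like in practice, assuming SSH access to the Proxmox host and that the HA guest is VMID 100 (both the hostname and the VMID are examples, not from this thread):

```shell
#!/usr/bin/env bash
# remote_reset_ha PROXMOX_HOST VMID: hard-reset a hung HAOS guest from afar.
# qm is Proxmox's VM management CLI; 'qm reset' acts like the power button.

remote_reset_ha() {
  ssh "root@$1" "qm status $2 && qm reset $2"
}

# e.g. from a laptop 200 km away:
#   remote_reset_ha proxmox.lan 100
```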

And, just for info: I once compared running HA standalone and inside a Proxmox VM (I had two identical Skull Canyon NUCs at home) and the speed difference is negligible.

Like almost everything we do, our HA implementations are going to always involve compromises. Discussions of them (like this) have some value, not only to beginners but to anyone pondering their next move. In that spirit, let me respond…

This is a very real consideration, and one I don’t take lightly. Basically, HA can fail. For a moment, or for good. In my climate, I can’t give HA total control of, say, my heating system or my sump pumps. Those critical systems should operate on their own, with as few moving parts, and as few dependencies on electronics, as possible.

HA is great for monitoring these systems, and for convenience. As long as HA is up, HA is connected to the LAN, the LAN is connected to the WAN, and everything in the stack is working, I can put the temperature up a degree or two from my couch, or from across the globe. But that’s not critical.

Another good point. These days power consumption should be a factor. But to be honest, it’s not the only factor. If HA catches one of my sump pumps stuck “on” for over a minute and shuts it down, I’ve probably saved a year’s worth of electricity to run even your power-hungry NUC. And if it allows me to run my HVAC system more efficiently, the overall energy saved could be even more.

Oh, and on the environmental side, my HA is currently running on a relative’s old laptop, which would otherwise have been destined for a landfill. That’s gotta count for something.

This is probably the biggest reason I considered going with a VM instead of HAOS. I like the idea of being able to play with other VMs on the same hardware, but then I remind myself that HA is a production system, and dedicated hardware makes sense for that reason.

Still, being able to restart HA remotely is also a huge reliability gain, although it comes at the cost of complexity and additional points of failure. Not to mention administrative overhead. Again, compromises.

I suppose it’s also possible to install one of those remote reboot devices they sell for servers and remote desktop hosts. Some can even reboot the host machine automatically if the system goes unresponsive. So many choices.


Can’t you? It sounds strange that your heating system fails if HA fails. Does it not work independently? If my heat pumps or underfloor heating disconnect from HA, that does not stop them supplying heat. I can adjust the temperature with a few button presses on each device; Home Assistant just adds convenience and an automation platform for them.
Does your sump pump (we don’t have them here, so genuinely I don’t know) have a mechanical fallback?

As for the Wi-Fi-vs-cabled discussion you have brought up before: sure, in a home environment you will probably not notice packets being resent or higher latency, but the two technologies are by no means on a par with each other.

Exactly. In my case I have climate units (for cooling) and Sonoff TRVs (for heating), both controlled via HA: an external sensor across the room (away from cooling or heating elements and drafts from them) provides the actual temperature. But if HA (or the external sensor) fails, both systems are designed to fall back automatically to “local” control, using their internal sensors. Sure, it’s not as accurate as before, but at least it works.

I try to do similar for pretty much all the important things in my house. As you said: HA very much adds convenience, but we must design the system to be failsafe.


As a Proxmox user: the advantage is that it requires virtually no maintenance. At least, that’s been my experience. You can, of course, update it if you feel the need :slight_smile: Like any Linux system, it’s also quite resilient. I’m “taking care” of my brother-in-law’s installation on an HP T430: Proxmox with HA and a container with WireGuard that connects our networks.

He’s constantly building, renovating, and frequently turning off the power. He treats the HA terminal like a router: he unplugs it. Despite this, everything has been working reliably for a year and a half now. And I keep forgetting to add a button on his dashboard to shut down Proxmox in a civilized manner :slight_smile:

And the added options of snapshots, high availability (HA), disaster recovery (DR), and so on are also a big plus.
