HAOS on Proxmox: experience with Intel NIC stability on NUC?

Wondering if anyone has experience with reported Intel NIC driver stability issues that appear to be specific to Proxmox…

I’m about to rebuild my HA server - I’m currently running HA supervised, which is no longer officially supported. My intention is to move to HAOS in a VM, so that I can run other Docker containers outside the VM but still be considered supported. I’m attracted to Proxmox, given the awesome community guides here and here, which seem to feature many happy customers.

I’ve noticed consistent online complaints about certain Intel NICs hanging when used with Proxmox. There appears to be a well-known fix, but many people report mixed success with it.

My HA server is an Intel NUC (NUC8I3BEH 8th Gen Core i3), which includes the offending NIC (I219-V). From what I can tell, Intel NUCs are the biggest users of this NIC. Given the Intel NUC is an ideal fit for an HA server, I’m hoping there are lots of people who have experience with running HAOS on Proxmox on an Intel NUC.

I really don’t love the idea of moving my completely-stable-for-six-years system to something that could be flaky, so I’m also considering KVM or other options.

In any case, would be great to hear if people have had positive / negative experiences.

This is not specific to Proxmox VE and affects many other distributions.
Personally I’d just use a USB 3+ NIC and forget about it. I do that anyway for some nodes to get 2.5G without needing a PCI(e) slot.

Would you mind sharing these command outputs from your current system?

lspci -vnnk | awk '/Ethernet/{print $0}' RS=

ls -l /sys/class/net/*/device/driver

journalctl -b0 -kg "hardware unit hang|e1000"

uname -a

Proof for the many others

Hey thanks @Impact for the quick reply. I have to confess I had wondered whether it was bigger than Proxmox.

As requested:

% sudo lspci -vnnk | awk '/Ethernet/{print $0}' RS=

00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (6) I219-V [8086:15be] (rev 30)
	DeviceName:  LAN
	Subsystem: Intel Corporation Ethernet Connection (6) I219-V [8086:2074]
	Flags: bus master, fast devsel, latency 0, IRQ 130, IOMMU group 12
	Memory at c0b00000 (32-bit, non-prefetchable) [size=128K]
	Capabilities: [c8] Power Management version 3
	Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
	Kernel driver in use: e1000e
	Kernel modules: e1000e

% ls -l /sys/class/net/*/device/driver
lrwxrwxrwx 1 root root 0 Oct 10 12:26 /sys/class/net/eno1/device/driver -> ../../../bus/pci/drivers/e1000e
lrwxrwxrwx 1 root root 0 Oct 10 12:26 /sys/class/net/wlp0s20f3/device/driver -> ../../../bus/pci/drivers/iwlwifi

% sudo journalctl -b0 -kg "hardware unit hang|e1000"
-- Journal begins at Mon 2025-09-22 05:44:08 AEST, ends at Fri 2025-10-10 16:12:59 AEDT. --
-- No entries --

% uname -a
Linux mars 5.10.0-35-amd64 #1 SMP Debian 5.10.237-1 (2025-05-19) x86_64 GNU/Linux

Note that the wifi interface is inactive. I can also confirm that current journal entries go back to 22 September (25 days uptime).

Yeah unfortunately you would probably be affected by this. You seem to be on bullseye right now. I haven’t been following the PVE threads about it but from what I can remember it only affects newer kernels and perhaps certain hardware/firmware revisions of the NIC. My USB NICs use the r8152 driver and work fine for me if you want to go that way and need a suggestion.


Rats. As you say, I’ve been on bullseye for a while. I also had the impression this issue shows up in newer kernels (which suggests if it can be broken it can also be fixed…)

In any case, it would be great to know which USB NIC you are using - it occurs to me that selecting a USB NIC that itself isn’t riddled with buggy drivers is not necessarily straightforward.

I use two types (USB A) but only this one is still available.

I also use this udev rule file so it uses the right driver.
Some context about why
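For anyone curious what such a rule typically looks like: Realtek’s RTL815x USB adapters expose a generic CDC configuration first, so a udev rule switches them to the vendor-specific configuration so the r8152 driver binds instead of cdc_ether. A hedged sketch (the 0bda:8153 IDs are for the RTL8153 and are an assumption about the specific adapter):

```
# /etc/udev/rules.d/50-usb-realtek-net.rules (sketch; IDs assumed: 0bda:8153 = RTL8153)
# Force the vendor-specific USB configuration so r8152 binds instead of cdc_ether
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="0bda", ATTR{idProduct}=="8153", ATTR{bConfigurationValue}!="1", ATTR{bConfigurationValue}="1"
```

After adding the file, replugging the adapter (or `udevadm trigger`) should make `ls -l /sys/class/net/*/device/driver` show r8152 for it.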

I’m sure there are better ones - that’s why I didn’t link it before. It was just the cheapest I could find at the time 🙂
Yeah, you basically have to find a NIC whose chip has native support in the kernel. Installing Debian’s non-free driver package can still make sense.


Just an FYI/datapoint…
I checked and see that I have a NIC using the e1000e driver (on an old HP desktop). However, I’m using Ubuntu 24.04 (kernel 6.8.0-85-generic) with libvirt/QEMU/KVM to run my HAOS VM, plus a kernel bridge to connect the VM to the host as well as the NIC. I’ve been running this setup for a few years now (including on Ubuntu 20.x and 22.x) and have not come across the problem described… but who knows, a few upgrade cycles from now the kernel version may become an issue for me as well.

Thanks again @Impact. I get why you didn’t immediately share a link to the adapter you’re using - I have to confess I would prefer to go with a known brand if anyone has suggestions.

As an update, I went ahead and rebuilt my HA server over the weekend using Proxmox (9). It all went smoothly; I didn’t apply any of the ethtool fixes (other than disabling ASPM settings in BIOS) and I had stability for two days, including during periods of intense NIC activity.

But this morning it suddenly hung, exactly as described by others, and I had to reboot the machine to get it back up. Interestingly, there really wasn’t much going on networking-wise at the time. I’ve now applied the following minimalist fix, which at least one Proxmox forum user has reported early success with:

# ethtool -K eno1 tso off
# ethtool -K vmbr0 tso off

I’ll keep poking around and report back here how I go, but I have little patience for this kind of nonsense, so I’ll be finding a USB adapter as quickly as I can. Other suggestions for good Ethernet adapters welcome (USB-A connector required, 1 Gbps is enough, known brands preferred).

I was seeing crashes on Proxmox during heavy use, and adding these commands to my network startup has resulted in a much more stable system:

iface vmbr0 inet static
        address x.x.x.x/24
        gateway x.x.x.x
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        post-up /sbin/ethtool -K eno1 tso off gso off
        post-up /sbin/ethtool -K vmbr0 tso off gso off
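One way to confirm the post-up hooks actually took effect after `ifreload -a` or a reboot (a diagnostic sketch, assuming the same eno1/vmbr0 names as above - run on the PVE host):

```
# Both lines should report "off" for the segmentation offloads once the fix is active
ethtool -k eno1  | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'
ethtool -k vmbr0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'
```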

Thanks for the note @CO_4X4. I found that tso off alone was not enough - I was still getting NIC hangs. Updating it to tso off gso off gro off stopped the hangs. But I have to confess I’ve taken the easy way out and bought a cheap USB adapter based on the Realtek RTL8153 chipset. So far so good.
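For anyone following along, the persistent version of that wider fix is just CO_4X4’s post-up lines with gro off appended (a sketch for /etc/network/interfaces, assuming the same eno1/vmbr0 names):

```
        post-up /sbin/ethtool -K eno1 tso off gso off gro off
        post-up /sbin/ethtool -K vmbr0 tso off gso off gro off
```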

My next challenge is recognising that 8 GB of RAM is not enough for two VMs (HAOS + Debian for Docker) plus the Proxmox hypervisor. It was plenty for HA supervised + a few containers (no Proxmox / VMs), but now I’m getting stability issues as Proxmox does its best to balance RAM between the two VMs. Firing up the VS Code remote server usually ends in tears…
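If you stay on 8 GB, one mitigation is to give each VM a guaranteed minimum so ballooning can’t squeeze HAOS too hard. A hedged sketch using the standard qm CLI on the PVE host (the VM ID 100 and the sizes are assumptions to adapt):

```
# Allow up to 4 GiB for the VM, but never balloon it below 2 GiB
qm set 100 --memory 4096 --balloon 2048
```

Setting --balloon 0 instead disables ballooning for that VM entirely, at the cost of flexibility.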

FYI, the well-known tteck scripts (now maintained as the Proxmox VE Community Scripts) include a script that makes the necessary config changes to address this driver problem:

https://community-scripts.github.io/ProxmoxVE/scripts?id=nic-offloading-fix&category=Proxmox+%26+Virtualization
