I have been running HA in Proxmox for about 2 years now, but run it in an LXC container. It uses hardly any resources, but you get full access as if it were a full-blown VM.
Not seen that. Been running it like this for years - it is the same as a full-blown install, only with a smaller footprint, being an LXC container rather than a VM. I have installed it in both an Ubuntu container and a CentOS one with no issues.
Edit: Granted, an LXC isn't necessarily for a noob, and you need to understand Linux. If you know what you are doing it works a treat.
Edit 2: By the way, I am not using Docker or the supervisor, but a local install of HA. Maybe that is where you are getting confused?
I had been using the Whiskerz script and was experiencing strange issues. I googled it and came across a post from Frenck in an issue, stating not to use the Whiskerz script because it causes problems. I found a better solution and have been using it ever since - never looked back.
In the meantime it could be that Whiskerz has made some improvements, and I was using what is now an old script, as @DavidFW1960 indicates.
I have HA Core in a virtualenv running nicely in an LXC, and have separate LXCs for Node-RED, MQTT, MariaDB, Pi-hole, Plex, ZoneMinder, NGINX, etc., with zero issues.
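For anyone wanting to try the same, the manual HA Core install inside the container is essentially the standard venv procedure from the Home Assistant docs. A rough sketch for an Ubuntu/Debian-based LXC - the `/srv/homeassistant` path and `homeassistant` user are conventional choices, not requirements:

```shell
# Inside the LXC (Ubuntu/Debian): install Python and build prerequisites.
# Depending on the release you may also need python3-dev, libffi-dev, libssl-dev.
sudo apt-get update
sudo apt-get install -y python3-venv python3-pip

# Dedicated system user and directory for Home Assistant.
sudo useradd -rm homeassistant
sudo mkdir -p /srv/homeassistant
sudo chown homeassistant:homeassistant /srv/homeassistant

# Create the virtualenv and install HA Core into it.
sudo -u homeassistant -H bash -c '
  python3 -m venv /srv/homeassistant
  source /srv/homeassistant/bin/activate
  pip install wheel homeassistant
  hass   # first run generates the config under ~/.homeassistant
'
```

From the container's point of view this is identical to the documented Pi/Ubuntu install, which is why it works without any LXC-specific steps.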
I have searched the forum and cannot find any evidence to support your claim.
Can you explain why an LXC is bad news?
I have been running LXCs for years and support them in our production systems at work, and I cannot see how HA would be troublesome (it's not).
Edit: just seen the posts about install scripts. I didn't use them - I installed from scratch, as if HA Core were on a Debian machine.
Unless your information is current and has a point of reference, please don't advise people in this installation guide not to use this method.
I have used the script and tested my guide on a number of different machines before posting and have had zero issues, along with many, many other users.
It hasn't been improved. There are simply 2 different scripts - one for a VM and one for an LXC installation. It's only the LXC one that isn't supported or recommended.
Not off the top of my head… use search and you will find it. It is not a recommended or supported install method, and the devs have said it will cause problems.
(NOTE: if you're running Core and not Home Assistant OS (formerly hass.io) it's probably OK… I don't know, but it's not a recommended or documented installation method)
But I installed it as if it were a native Ubuntu server, so I didn't use any scripts?
You can treat an LXC, and install things in it, as if it were a physical machine. I just installed HA the same way I did on my Pi, in a virtualenv.
I followed this to the letter - no deviation and it works fine.
As I said, it isn't for noobs, and it helps if you are good with Linux.
I have installed it a few times this way and I would recommend it - same with a colleague of mine. I tried a version in Docker once, but missed the control over the install.
You say you can't remember where it wasn't recommended… that is because it was never said. I have searched.
Maybe the automated scripts off GitHub have issues, as one size doesn't always fit all, but installing Core in an Ubuntu 18.04 LXC, just as you would on native Ubuntu, is fine.
HA is just a Python app, and LXCs are ideal for that.
I'm recommending it.
EDIT: Not knocking the good work I have seen in the automated scripts on GitHub - some clever stuff going on there.
It was said in the context of weird things people do when installing HA - but not HA Core, which you say you installed, as I indicated above. It is only an issue with HA Supervised.
In any case, it was a recommendation in a post by the devs. It's not my job to find things like that for you - I am not doing unsupported installs, so I don't care. In fact, when I played with Proxmox I avoided the LXC install because it was unsupported.
I have never had any real issues with most things in an LXC - at work I have a 5-node Proxmox cluster that runs about 35 different LXCs and 15 KVMs on one site, and a 4-node cluster with similar machines in our failover datacentre, all in production with no issues.
On my home setup, the only server I couldn't get to work easily in an LXC was deCONZ - I had moved it off a Pi because it was unreliable, but then hit all sorts of issues with USB passthrough and AppArmor, so in the end I spun up an Ubuntu Server VM and ran it in that.
Therein lies one of the fiddly things about LXC: protecting the host kernel from the guest (LXC isolation is built on Linux cgroups) can cause problems when the guest needs access to the kernel, and it is sometimes easier to just use a KVM with its own independent kernel.
I have a dev LXC with Docker installed - nested virtualisation - and that definitely causes weird stuff, so I wouldn't recommend it.
A step-by-step install of HA on Ubuntu isn't 'unsupported', so IMO a step-by-step install in an LXC isn't either. But we will leave it there, as I feel we will always disagree on this.
All I can say is: if you spin up an LXC, SSH to it, and follow the Ubuntu/Raspberry Pi step-by-step install, it will all work fine.
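Spinning up the container on the Proxmox side is only a couple of commands. A hedged sketch, assuming an Ubuntu 18.04 template on `local` storage, an `local-lvm` rootfs, and a free VMID of 110 - the exact template filename and storage names will vary on your host:

```shell
# On the Proxmox host: fetch a container template and create an unprivileged LXC.
# VMID 110, the storage names, and the template filename are assumptions.
pveam update
pveam available | grep ubuntu        # find the current template name
pveam download local ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz

pct create 110 local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz \
  --hostname homeassistant \
  --cores 2 --memory 1024 --swap 512 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1

pct start 110
pct enter 110   # then follow the step-by-step HA Core install inside the container
```

Once inside, the container behaves like any other Ubuntu server, which is the whole point being made above.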
I think that's the key here. HA Core in a venv in an LXC probably is fine. You can pretty much run HA Core on anything you can run Python on. Home Assistant is pickier about what it will run on, due to it being a Docker-based setup. Since this community guide is focused on Home Assistant on Proxmox, not HA Core, it's probably best to keep the discussion in this thread to that, so that people coming here for the guide don't get confused.
I don't think I disagreed with you, Mark. As mentioned, the issue seems to be with HA Supervised in an LXC container, not the HA Core that you are running.
Has anyone attempted this on a Mac Mini? I have a macmini4,1 (mid 2010) that I am trying to set up with Proxmox. I'm testing everything on an external HDD in a SATA enclosure until I get the hang of it, and to be sure I won't screw up the internal SSD. I've tried multiple times to get the proxmox-ve installer to boot from a USB stick, but every time I try, I just get a blank grey screen. As soon as I reboot and disconnect the USB stick, everything boots up normally. I've tried using Etcher, copying the ISO with dd, and a few other methods. They all result in the same blank grey screen. I even went as far as installing the rEFInd boot manager to see if that might help. It didn't.
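For anyone else fighting this, a typical way to write the installer ISO to a stick with dd on macOS looks like the following - the disk identifier and ISO filename here are placeholders, so check `diskutil list` first (and note the grey screen on a 2010 Mac may well be the firmware's EFI quirks rather than a badly written stick):

```shell
# Write the Proxmox ISO to a USB stick from macOS.
# /dev/disk2 and the ISO name are placeholders - verify with diskutil list.
diskutil list
diskutil unmountDisk /dev/disk2
sudo dd if=proxmox-ve_6.2-1.iso of=/dev/rdisk2 bs=1m   # rdisk = raw device, much faster
diskutil eject /dev/disk2
```

If dd and Etcher produce identical results, the stick itself is probably fine and the firmware is the more likely suspect.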
I ended up installing Debian 10 and installing Proxmox VE on top of that. It was a messy process and a steep learning curve. I mostly got hung up on partitioning the hard drive for Proxmox. I read mixed suggestions: one was to use a single ext4 partition for data plus a swap partition; the other was to create an LVM partition with a volume group named pve, an ext4 logical volume for root (/), an ext4 logical volume for data (/var/lib/vz), and a swap logical volume.
Per Proxmox VE Administration Guide v6.2 section 2.3.1:
I ended up using the second method for partitioning. I created a 250GB LVM partition, created the pve volume group, a root ext4 logical volume (32GB) mounted at /, a data ext4 logical volume (190GB) mounted at /var/lib/vz, and a swap logical volume (6GB) with no mount point, and left 32GB of free space in the volume group for snapshot creation.
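In LVM terms, that layout corresponds to something like the following - assuming the 250GB LVM partition came up as /dev/sdb1, which is purely a placeholder for whatever your external drive's partition is:

```shell
# Rough reconstruction of the layout described above.
# /dev/sdb1 is an assumed device name - substitute your own.
pvcreate /dev/sdb1
vgcreate pve /dev/sdb1

lvcreate -L 32G  -n root pve   # 32GB root, mounted at /
lvcreate -L 190G -n data pve   # 190GB data, mounted at /var/lib/vz
lvcreate -L 6G   -n swap pve   # 6GB swap, no mount point
# ~32GB of the volume group is left unallocated for snapshots

mkfs.ext4 /dev/pve/root
mkfs.ext4 /dev/pve/data
mkswap /dev/pve/swap
```

Leaving free extents in the volume group is what makes LVM snapshots possible later, so that part of the plan is sound.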
I'm not 100% sure I did this the right way. I have Proxmox VE installed on top of a fresh install of Debian 10 on the external HDD. But now I have an install of Debian as well as Proxmox VE, and I'm not sure that is the best way to do this.
If anyone has any tips or suggestions, I'm all ears. I am new to Proxmox and would like to get this right before I move everything over to the Mac Mini's SSD.