Supervised Install on Proxmox

You need to enable VM disk images on the “local” storage so it can hold them. That should address your issues.
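
If that is the problem, it can be done in the GUI under Datacenter → Storage → local → Edit → Content, or from the host shell; a minimal sketch, assuming the default directory storage named “local” and its default content types:

# allow VM disk images on "local", keeping its default ISO/template/backup content
pvesm set local --content images,iso,vztmpl,backup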

So I got the qcow image installed and working. Just wondering what the login is for the VM, so that I can update the VM when needed.

Just wanted to add, since I missed the line about extracting on the desktop and nothing worked for me: I downloaded the file from the shell and extracted it with the following command before I imported it (I also removed the hard disk first, so I only had one disk to keep track of):

gunzip hassos_ova-4.6.qcow2.gz
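
The import step afterwards looked roughly like this (a sketch rather than the guide’s exact commands; the VM ID 100 and the storage name local-lvm are placeholders for whatever you use):

# import the extracted qcow2 as a new, initially unused disk on the VM
qm importdisk 100 hassos_ova-4.6.qcow2 local-lvm

The imported disk then shows up as an unused disk in the VM’s Hardware tab; attach it and move it to the top of the boot order before starting the VM.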

Hi all

anyone upgraded to Proxmox 6.2 yet?

I did, and now I’m having an issue when I reboot the VM.
The VM does not boot anymore; it seems there is an issue with UEFI and OVMF.
I tried some solutions from Proxmox threads, but none of them fixed it.

Anyone else already on Proxmox 6.2?

Thanks

The systems in my cluster are all on 6.2 and it’s working fine for me.

I used this guide as my baseline when I shifted from an Ubuntu VM to their image (don’t trust that the Generic Server Supervised install won’t be deprecated again in the future).

Here are some details, in case it might help you out:

Thanks @tjhart85

I’ve installed using the https://github.com/ guide and it was working fine until today. Yesterday I upgraded Proxmox.

These are my HW and Options in PVE

[two screenshots: the VM’s Hardware and Options pages in PVE]

and I don’t have a cluster.

I searched the Proxmox forums for the error I see in the VM console, BdsDxe: failed to load Boot0002 "UEFI QEMU HARDDISK, and found some similar issues, but the solutions there don’t work for me…

I just posted a message there and hope to get some help…

I wouldn’t think my having a cluster would make a difference, but I mentioned it just in case. The whiskerz script didn’t work for me, and it’s possible that the cluster (or my using NFS-based locations for the drive images) was the issue, but I dunno.

Try changing your display to the VMware-compatible one … for whatever reason mine wouldn’t boot properly when I omitted that step. I don’t think it was the same error you got, but I can’t quite remember either.
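
If it helps, that display change can also be made from the host shell (a sketch; 100 is a placeholder VM ID):

# switch the VM's display adapter to the VMware-compatible type
qm set 100 --vga vmware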

I also don’t know if this makes a difference, or if it’s just part of the way the script works, but your drives don’t have an extension listed … verify that they really don’t have extensions on that storage device [based on your screenshot, that should be ‘local’ for you]. It could just be the way the script or LVM-based thin images work, but it’s an easy enough thing to check.
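
One way to check what is actually sitting on that storage is from the host shell (a sketch; substitute your own storage name for “local”):

# list the disk volumes Proxmox sees on the "local" storage, including their formats
pvesm list local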

@haxxa
I disagree with not running scripts as well. Scripts are designed to automate tasks rather than having you manually type them in each time.
Whiskerz’s script runs a script from his GitHub and performs checks to ensure IDs are not duplicated, as well as the steps you describe, but we just call one line to run the script and it creates the VM flawlessly. Have you looked at the Whiskerz007 GitHub repo and the script to see what it actually does? (Just curious, not being rude.)
It is certainly the way a lot of enterprise automation is done: creating a script and running it instead of manually performing each task along the way. Not that Whiskerz’s script is designed for enterprise, just making a point in support of scripts :slight_smile:.
Obviously, if there are glaring holes in Whiskerz’s script, then they should probably be addressed.
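
As an illustration of the kind of check such a script does (this is not the actual whiskerz007 code, just a sketch of the idea), Proxmox can hand out the next free VM ID so a new VM can never collide with an existing one:

# ask Proxmox for the next unused VM ID
VMID=$(pvesh get /cluster/nextid)

# create the VM with that ID (the name and sizes here are only example values)
qm create "$VMID" --name hassos --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0 --bios ovmf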

I think Whiskerz’s script is designed for a standalone system, not a cluster, from what I have read of it.
It’s aimed more at the average home user who just wants to put HA in a VM on a more powerful desktop machine, or something of the like.
It works flawlessly for me with that use case.
Mine will boot even if the display is set to none, but obviously you can’t use the console view if the display is set to none.
I have changed the machine type to q35 now, though, as it does seem to be quicker, even though it also works as i440fx.
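
For reference, switching the machine type is just another VM option (a sketch; 100 is a placeholder VM ID, and the VM should be powered off first):

# change the emulated chipset from the default i440fx to q35
qm set 100 --machine q35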


The GitHub rate error was not a script issue; it happened because the Home Assistant image was in the process of being updated. A few people had that issue, and it was on the HA side, not the script. The issue is now resolved.

Just as a note:
If you don’t use the name “local” for storing your VM images, then instead of “local” use “local-lvm” (the default if Proxmox is installed with defaults) or whatever storage name you use for your images. Everyone has their preference, and some will not call it local at all; just change the word “local” to whatever you actually use.
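
If you are not sure what your storage is actually called, you can list the configured storages on the host (a sketch):

# show every configured storage, its type, and whether it is active
pvesm status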

Hello,

I did the described steps for QCOW2 version 4.10 and it worked :slight_smile:

I just wanted to say thank you.

I am running Proxmox on a NUC. Before, Home Assistant was installed in an LXC container with the whiskerz007/proxmox_hassio_lxc script.
But it was super unstable and sometimes it just crashed; I don’t know why.

Now I migrated successfully and hope for a more stable future of my home assistant.

I also had success with QCOW2 in Proxmox but with v 4.11.

Thanks HA team!

Wanted to see what the difference is compared to the Docker version with HACS. I followed your guide and everything went perfectly. Good job!

Running HA on Proxmox with a NUC, is there any way to use the NUC’s audio as output for HA?

Hi, can someone help me? I keep getting this error:
Can’t install homeassistant/qemux86-64-homeassistant:0.117.6 -> 500 Server Error:

Internal Server Error ("Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)").
Error on Home Assistant installation. Retry in 30sec
Updating image homeassistant/qemux86-64-homeassistant:landingpage to homeassistant/qemux86-64-homeassistant:0.117.6
Downloading docker image homeassistant/qemux86-64-homeassistant with tag 0.117.6
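
That “Client.Timeout exceeded” message means the request to the Docker registry never got a reply, so it is worth checking DNS and basic connectivity first (a sketch, assuming you have a shell on the VM or the host and the usual tools are available):

# can the registry's name be resolved and its endpoint reached at all?
nslookup registry-1.docker.io
curl -I https://registry-1.docker.io/v2/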

Hi,

The “Supervised installation” is no longer deprecated, as per the blog post below.

If confirmed, post #1 needs to be updated.

Then I will try this installation method.

Thank you

EDIT: I may be wrong (my apologies); this is not a Supervised installation in Proxmox as indicated in the title, but a manual HA installation in Proxmox, because with the Supervised installation method, Home Assistant should run on top of a Debian VM.

Did you manage to fix this issue? I just upgraded from Proxmox 6.2 to 6.3 and encountered the same problem, as also described here: https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries
When following the instructions for adding a boot option, the OVMF file explorer does not find any EFI bootables on the HA drive. Not sure where to go from here.
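
One thing that may be worth checking (an idea only, not a confirmed fix) is whether the VM has an EFI vars disk at all, because without one OVMF cannot persist manually added boot entries across reboots. A sketch, with 100 and local-lvm as placeholders:

# check whether the VM config already has an efidisk0 entry
qm config 100 | grep -i efidisk

# if it does not, add one (the VM's BIOS must be set to OVMF for it to be used)
qm set 100 --efidisk0 local-lvm:1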

I’ve moved to HassOS on Proxmox, and it’s working fine.
I’ve not tested this type of installation anymore…

Ok, maybe I misunderstood this thread. I’m also running HassOS on Proxmox, and it had been running fine until I upgraded Proxmox 6.2-15 to 6.3-3 yesterday. I thought your issue was exactly the same as mine (and I still don’t understand the difference, TBH). I’m sure it would boot again if I were able to point the VM to the correct OVMF/EFI boot file.

All these different HA versions and ways to install are very confusing for a new user like me.

I would not mind a fresh install but I have a couple of scripts on the HA drive that I would like to keep…

Now I am on Proxmox 6.3.2 and all is OK, with HassOS updated to the latest stable release.
So now I’m worried about updating to 6.3.3 :open_mouth: