Cryout - please add a warning to the docs

@flamingm0e So now I get this:


Setting up libguestfs-hfsplus:amd64 (1:1.36.13-1ubuntu3.2) ...
Setting up python-guestfs (1:1.36.13-1ubuntu3.2) ...
Setting up libguestfs-xfs:amd64 (1:1.36.13-1ubuntu3.2) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for linux-image-4.15.0-46-generic (4.15.0-46.49) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-4.15.0-46-generic
/sbin/ldconfig.real: Warning: ignoring configuration file that cannot be opened: /etc/ld.so.conf.d/x86_64-linux-gnu_EGL.conf: No such file or directory
/etc/kernel/postinst.d/zz-update-grub:
Sourcing file `/etc/default/grub'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-46-generic
Found initrd image: /boot/initrd.img-4.15.0-46-generic
Found linux image: /boot/vmlinuz-4.15.0-39-generic
Found initrd image: /boot/initrd.img-4.15.0-39-generic
Found linux image: /boot/vmlinuz-4.4.0-137-generic
Found initrd image: /boot/initrd.img-4.4.0-137-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
Found Mac OS X on /dev/sdc3
done
Errors were encountered while processing:
 update-notifier-common
 update-notifier
 ubuntu-desktop
 update-manager
 ubuntu-release-upgrader-gtk

Also, the line where it found Mac OS X on /dev/sdc3 is just… what? It's an SSD with 1 partition and Ubuntu on it.

Did you once use this system as a Hackintosh? It sounds like it found a UEFI entry for Mac OS X.

I have Ubuntu 18.04 installed and run HA in a Python environment (Ubuntu is installed as a virtual machine as well). It has worked fine since I started using it. Tbh I have never tried Hassio, but I don't need Hassio as it has nothing that HA can't do.

I make backups to GitHub so that in the case of a massive failure (like my 10-year-old server dying) I can still get my stuff back from GitHub. I also make backups of my Ubuntu images, so if only the VM fails or gets corrupted for whatever reason I would only need to restore the image. I use a Hyper-V server, but any other VM package will do (e.g. VMware). You could also use Docker, as said before in this thread, but I have never worked with Docker before so I can't tell you if it is any good.

But tbh, I am not a Linux user at all; I only started touching Linux when I started with HA (and in the past I used some Linux web servers). The problem you state doesn't seem like a command-line error, but something else. Maybe conflicting IPs? Conflicting ports? You could also try "downgrading" to the last version that worked for you, as it might be a Home Assistant breaking change (this is mostly the case if you haven't updated in a while).

But seeing as it looks like you completely messed up your system, a new install wouldn't be so bad. Just don't forget to back it up (mainly the YAML config files).

@flamingm0e Never. Before turning it into a NAS I wiped the system drive and dedicated it to Ubuntu only. Both other drives were also wiped before using them in the NAS. I did have a USB drive with Proxmox on it, and this (new) Sony USB drive did not work for me as a bootable drive. After trying both Etcher and Rufus I just used another (trusty) USB stick and it worked. My only assumption is that maybe it detected this Proxmox USB as Mac in some way… dunno.

Anyway, Proxmox is up and running by now, trying to see what I can do on it as host and what not.

Since I plan to let multiple VMs use the same drive, and from what I know it's better to share it via a Samba/NFS share rather than passing it to multiple VMs… should I just create this share on the Proxmox host (I can't find it in the GUI yet, but I suppose it can be done through the CLI; rough sketch below), or should I really only use the host to create VMs and let one of the VMs handle sharing the drive with the other VMs?

Either:

Proxmox Samba Host <=> VM1 VM2 VM3

Or: VM1 Samba host <=> VM2 VM3, and Proxmox just passes the HDD through to VM1?
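For the first option, I guess on the Proxmox host (Debian underneath) it would be something roughly like this; untested, and the share path and user name are just placeholders I made up:

apt install samba
# add a share section to /etc/samba/smb.conf, e.g.:
[storage]
    path = /mnt/storage
    browseable = yes
    read only = no
    valid users = myuser
# then create the Samba user and restart the service
smbpasswd -a myuser
systemctl restart smbd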

@jimz011 I just liked how Hassio already has everything nicely set up: one-click updates, one-click addon installations, one click and Node-RED is up and working. I'm a tinkerer, but I always value good UI/UX as it saves a lot of precious time. As much as I don't like Apple's policies, for example, I always appreciate how much they focus on just making stuff work and making it as easy and quick for the user as possible.

Certainly ports and IPs have nothing to do with it :slight_smile: If you're talking about my OS dying, it's not related to Hass at all; it's related to something that caused the apt install command to wipe a lot of the system libraries. It's like deleting half of the system32 folder in Windows.

If you were talking about the DNS, it's a known issue with Ubuntu + Docker + Hassio and the way Hassio is set up. I know what's causing it, and I've seen all the fixes, but after trying everything manually I figured I'd just reinstall and see if that helps. I figured it maybe needed to recreate the containers from scratch to pick up the changes properly. I believe I was on the right path; the mistake was that I ran that command… but I'm not sure why it did what it did.

In general, when working in the terminal it's very easy to break a Linux installation. It's usually much easier to fix than Windows, and I believe it's almost impossible to break it so badly that, with some experience, you won't be able to fix it. But right now I feel it's just better to start anew.

Since Proxmox is not a NAS, it does not have such options in the GUI.

Since I have a NAS, I use that for storage for my Proxmox host and all the guest VMs/LXCs

Got it. I suppose it would be logical to create a 'virtual NAS' then: a separate VM that's only responsible for storage, and connect everything else to it.

I suppose there's no point in running OMV in a VM if all I need is just a Samba/NFS share?

However, I also saw someone install OMV alongside the Proxmox host. Not sure how good of an idea that is.

I use Proxmox to run the HassOS VMDK. I also like some of the built-in functionality that comes with Hassio for simplicity, even though I could definitely manage to do it all myself.

Proxmox also comes with templates for a bunch of TurnKey Linux LXC containers ("Create CT" rather than "Create VM"). One of those is a File Server, which I've been using to make my own sort of DIY NAS.
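If you'd rather do it from the Proxmox shell, it's roughly this; the exact template file name changes with releases, and the container ID and storage names below are just placeholders:

pveam update
pveam available | grep fileserver
pveam download local turnkey-fileserver_15.0-1_amd64.tar.gz
pct create 110 local:vztmpl/turnkey-fileserver_15.0-1_amd64.tar.gz \
    --hostname fileserver \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --rootfs local-lvm:8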

Ooh, right! I forgot that there's a full HassOS that can be run as a VM. Cool, thanks for reminding me.

And thanks! I was just hurting my head trying to find the best approach to this. Was about to start drawing diagrams to show the options I had in mind for discussion :smiley:

So I suppose I'll use a file server CT to share storage, HassOS in one VM, another VM for other Docker containers (Plex, Medusa, etc.), and another container for non-Docker stuff, including my own scripts and things.

Also… nginx? I suppose nginx should be on one of these VMs and proxy_pass to the other VMs' IPs and ports, right? And then I'll need to forward ports 443 and 80 from that nginx VM to the host, right?

Use Docker.
You don't need to forward anything to the host. You should be using bridged networking on your VMs and containers, so that they get their own IP addresses on your network. Forward 80/443 to the NGINX instance and let it handle authentication, certificates, and reverse proxying.
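Each site in NGINX is just a server block that proxies to the VM's LAN IP. Rough example; the hostname, backend IP, and port are placeholders for whatever you actually run (8123 is Home Assistant's default):

server {
    listen 80;
    server_name ha.example.com;

    location / {
        proxy_pass http://192.168.1.50:8123;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Home Assistant also needs websocket upgrades passed through
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}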

You mean run nginx as a Docker container?

What are the benefits of giving them full IPs on the LAN? Why not just use internal IPs within the host?

And thanks a lot for all the help!

Yes.

Why deal with a double-NAT issue? Put the VMs on your network. A flat network makes it easier.
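On Proxmox that just means attaching the VM's NIC to the default bridge, e.g. (the VM ID is whatever yours is):

qm set 100 --net0 virtio,bridge=vmbr0
# inside the guest, use DHCP or a static address from your router's subnet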

As long as there are enough IPs :smiley: I'm not sure if my ASUS router can handle multiple subnets. Not that I have that many devices (yet). But that's another topic for research!

And nginx in Docker sounds good… However, is it worth it? I mean the overhead of the container?

I will preface this by saying I do things the lazy way and there are probably better solutions.

You can actually install the Portainer addon for Hassio and add whatever Docker containers you want that way, if you want things that aren't available as addons. As far as I can tell, there's really not that much of a difference between that and running Ubuntu Server with Docker, except that it's HassOS instead of Ubuntu.

I prefer Caddy over NGINX and just use an addon for that. You can set up additional vhosts to point to your other VMs with their own IPs, as flamingm0e said. The obvious downside to this is that your Docker containers need their ports exposed (or maybe not, I haven't checked), and if you shut down the HassOS VM you lose your proxies.

I do have another server set up where I use caddy-docker-proxy to do the proxying based on container labels and such as well. It works really well. I just haven't bothered trying to apply that to the HassOS VM and its addons yet.
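In case it helps, the label-driven setup looks roughly like this (this is the newer caddy-docker-proxy syntax; the label names have changed between plugin versions, and the hostname and image here are just examples):

# docker-compose.yml fragment
services:
  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"

caddy-docker-proxy watches the Docker socket and regenerates the Caddyfile from those labels whenever containers start or stop.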

I have recently switched to Traefik

I actually used Traefik a couple of years ago. It's what led me to look for caddy-docker-proxy, which works in much the same way. I went with Caddy because there was an addon already available at the time. It looks like there's a Traefik addon now, but it works a bit differently and hasn't been updated in a while (maybe it doesn't need to be).

I haven't done a comparison to see what advantages one might have over the other though.

I don't use Hassio, so I don't know how the add-ons are configured. I manage my own Traefik on another server, separate from my Home Assistant host. It's actually in a completely different state. Lol.

I rely on IIS, so nginx is out of the question, no? But I am guessing I could do the same with IIS, right? I can't get rid of IIS because I have a Windows environment with Active Directory, and I am pretty sure that core functionality of this relies on IIS. Has anyone tried it with IIS by any chance?

Active Directory doesn't rely on IIS at all.

Sorry, you are right. I use Server Essentials, which does rely on IIS if I want to use its features like Anywhere Access. But I am guessing I could do the same with IIS as with nginx?

I checked out both Caddy and Traefik and then looked at benchmarks, and it looks like nginx is way ahead of them in performance. That, combined with the fact that I'm familiar with nginx and that SSL is trivial with the certbot --nginx command, made me decide to just stick with nginx.
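For reference, the whole certbot side is basically this (the domain is a placeholder; the plugin package is python3-certbot-nginx on recent Ubuntu releases, older ones may call it python-certbot-nginx):

apt install certbot python3-certbot-nginx
certbot --nginx -d ha.example.com
certbot renew --dry-run

certbot --nginx finds the matching server_name block, obtains the certificate, and rewrites the block to listen on 443 with the new cert.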

@flamingm0e So I slept on it and now have another question. You said that it's better to give VMs their own IPs on the network. But in my case I am going to use this same PC for file storage, so I don't think it's logical to send all the traffic through the 1 Gbit connection to the router; it would cripple performance not only within this machine but also across the whole LAN.

Is there any way to get the best of both worlds? Have LAN IPs for each VM, but also allow direct access to the data share?