Most of the guides I see for installing on proxmox are using a VM. I’m curious if this is required, or, if not, what the pros/cons of using a VM vs a linux container (LXC) are. The impression I’ve gotten from setting up other services on my server is that an LXC is generally preferred as it has faster start up times and is more resource efficient compared to a VM.
While there are guides for Docker containers, that doesn't appear to be quite the same thing as an LXC.
I’m sort of at the level of expertise where I’m generally good enough to do most things/follow most guides, but I don’t often understand the underlying “why” of choices like VM/LXC/Docker for a given application (and most guides don’t go into that much depth).
The only supported way to run HAOS is in a VM. All other methods (Supervised, Core, etc.) are possible in a CT, but running Docker inside a CT is not recommended. A VM is therefore the safest way to run HA on Proxmox.
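For reference, a HAOS VM can be built by hand with the Proxmox `qm` CLI. This is only a rough sketch: the VM ID (100), bridge name (vmbr0), storage name (local-lvm), and HAOS release (11.1) are assumptions you'd adjust for your own host, and most people just use a community helper script instead.

```shell
# Sketch: importing the HAOS disk image into a Proxmox VM (run on the PVE host).
# Check the HAOS releases page for the current version number.
wget https://github.com/home-assistant/operating-system/releases/download/11.1/haos_ova-11.1.qcow2.xz
xz -d haos_ova-11.1.qcow2.xz

# HAOS needs UEFI (OVMF) and a q35 machine type.
qm create 100 --name haos --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --bios ovmf --machine q35 --ostype l26
qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0

# Import the qcow2 and attach it; the disk name importdisk reports may differ
# (e.g. vm-100-disk-1) depending on what already exists on the storage.
qm importdisk 100 haos_ova-11.1.qcow2 local-lvm
qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-100-disk-1 --boot order=scsi0
qm start 100
```

These commands only make sense on a Proxmox host with root access, so treat them as a template rather than something to paste blindly.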
Hi, I’m running HA in an LXC because my server is at its limit on RAM.
I struggled to get my Zigbee stick working, and it finally ended up connected to an RPi.
At some point, I will give it another go to connect it to my server.
Apart from that, I’m pleased with my setup.
Hi all. Fairly new to this technology (Proxmox), so please forgive the silly questions.
I have tried both VM and LXC. LXC seems incredibly efficient and runs quite happily on around 500MB.
I would like to create Proxmox failover instances using node groups. I know a failover completely shuts an LXC container down before restarting it on another node; that’s OK.
The only problem I see is the physical USB connection of my Zigbee dongle.
Does anyone have experience with USB over IP? Does it work reliably with our Zigbee dongles?
Thanks!
USB over IP is elegant but there are rough edges. In my opinion the tech is not production ready. As hardware/firmware stacks change you might find it is not a viable long term solution.
One way around this is to use IoT tech based on WiFi/Ethernet, or IP-based protocols that abstract away the hardware layer, i.e. you can make simple hardware changes without affecting the network/application stack.
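For anyone who wants to try it anyway, the Linux kernel ships a USB-over-IP transport (the `usbip` tool, usually packaged in linux-tools). A minimal sketch, assuming a bus ID of 1-1.2 and a hostname of zigbee-host.lan (both placeholders you'd replace with your own values):

```shell
# --- On the machine the Zigbee dongle is physically plugged into ---
modprobe usbip_host
usbipd -D                       # start the USB/IP daemon
usbip list -l                   # list local devices to find the dongle's bus ID
usbip bind -b 1-1.2             # export the dongle over the network

# --- On the Proxmox node (client side) ---
modprobe vhci-hcd
usbip attach -r zigbee-host.lan -b 1-1.2
lsusb                           # the dongle now shows up as a local USB device
```

The catch, as noted above, is resilience: the attachment does not survive reboots or network drops on its own, which is exactly the kind of rough edge that makes it hard to recommend for an HA setup.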
I’ve migrated my servers from Windows to Proxmox, and I’m currently using the Home Assistant Container LXC from TTECK.
Home Assistant Container LXC:
This Proxmox container has been exceptionally trouble-free for me. It seamlessly handles USB passthrough, Z2M (Zigbee to MQTT), RFXCOM, and Z-Wave integration.
I did experiment with running Home Assistant within a virtual machine (VM) on Proxmox, but the experience left much to be desired.
So, here’s what I’ve set up now:
Distributed Server Setup:
I’ve taken the approach of splitting various services across separate servers to enhance reliability. This means that if something goes wrong with one server, other services remain unaffected. For instance:
The databases for Grafana and Home Assistant, such as MariaDB, are now hosted on a dedicated server. This setup ensures data integrity and reliability.
I’ve documented a comprehensive tutorial on setting up a new Proxmox server. This guide covers optimizing HDD usage and other settings to ensure everything works seamlessly.
Here’s a brief overview of my setup:
I’ve distributed most of the necessary containers across three servers.
My server specifications: a quad-core T630 CPU, 30GB of RAM, and a storage configuration with a 250GB HDD, a 128GB HDD, and network-attached storage (NAS) for backups.
This setup has proven to be a robust and scalable solution for my needs.
I’m not sure I completely understand your point. Are you saying a virtual USB transport over IP is not mature enough to consider using in HA (High Availability, not Home Assistant!) distributed environments?
I haven’t really put much effort into this research wise, but wondered if anyone else out there figured this out and has the battle scars to prove it!
A number of storage types, and the QEMU image format qcow2, support thin provisioning. With thin provisioning activated, only the blocks that the guest system actually uses are written to the storage.
Say for instance you create a VM with a 32GB hard disk, and after installing the guest system OS, the root file system of the VM contains 3 GB of data. In that case only 3GB are written to the storage, even if the guest VM sees a 32GB hard drive. In this way thin provisioning allows you to create disk images which are larger than the currently available storage blocks. You can create large disk images for your VMs, and when the need arises, add more disks to your storage without resizing the VMs’ file systems.
All storage types which have the “Snapshots” feature also support thin provisioning.
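The idea is easy to see in miniature with a plain sparse file, which behaves just like a thin-provisioned disk: a large apparent size, but almost no real blocks consumed until data is written. (The file name here is arbitrary.)

```shell
# Thin provisioning in miniature: apparent size vs. blocks actually allocated.
truncate -s 32G thin-demo.img    # the "32GB disk" as the guest would see it
ls -lh thin-demo.img             # apparent size: 32G
du -h thin-demo.img              # actual blocks used: ~0 until data is written

# qemu-img does the same thing for qcow2 images:
#   qemu-img create -f qcow2 disk.qcow2 32G
rm thin-demo.img
```

This is also why thin provisioning lets you over-commit: the 32G is a promise, not an allocation, so you must monitor real usage and add storage before the guests collectively call the promise in.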
Local-LVM storage typically provides better performance compared to standard file-based storage, as it uses block devices directly. This can lead to faster read and write operations, which is especially important for I/O-intensive workloads.
With Local-LVM, you can isolate storage for each virtual machine or container using logical volumes. This helps prevent one VM or container from impacting others in terms of storage I/O or running out of disk space.
Using LVM helps reduce file system fragmentation, which can improve performance and storage efficiency.
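Concretely, "local-lvm" on a stock install is an LVM thin pool named `data` in the `pve` volume group, which hands out a dedicated block device per guest. A sketch of what that looks like (the 100G pool size and the volume name `vm-101-disk-0` are illustrative; on an existing install the pool is already there):

```shell
# Inspect the stock layout: you should see root, swap, and the "data" thin pool.
lvs pve

# How such a thin pool is created (illustrative; do NOT run on a host
# that already has pve/data):
lvcreate -L 100G --thinpool data pve

# Each VM/CT disk is then a thin volume carved out of the pool,
# giving the per-guest isolation described above:
lvcreate -V 32G --thin -n vm-101-disk-0 pve/data
```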
@tteck, I understand what you’re saying.
I really appreciate all your hard work on your scripts, but step 3, Resize the HDD (Single HDD), works perfectly and I’ve never had an issue. I’ve sold many units (348) set up this way and not one has come back with a problem, for a year now.
I myself run three T630 systems and five big servers with all kinds of VMs, so I disagree with you.
I set them all up like that…
Otherwise I would not have made this tutorial and posted it here.
Even my old tutorial has been proven and used by many YouTubers.
I did some testing on the old Proxmox first, and back then I released a small tutorial on Reddit a long while ago…
It’s even featured in a NetworkChuck video; I just saw it while searching along those lines.
Look at the Proxmox forum: there are so many threads, even on Reddit, discussing this, and not one person says you’d be better off not using it.
My step 3, Resize the HDD (Single HDD), is simply safe to use.
People can research the code and look it up; they even talk about it on the Proxmox forums…
In that case, we appear to have differing opinions. Could you please clarify your reasons for removing local-lvm, or are you in a similar position to Network Chuck, lacking insight? My concern lies in ensuring that users receive accurate information and are not led astray.
Chuck was just an example (and no, I don’t even like the guy); there are more YouTubers doing this.
I just did a search on it and came across him and a lot of Reddit and Proxmox forum posts.
Check the link from 2017; in that forum there is another link.
Believe me, I’ve sold many, many systems and never had any issue. And of course we have different opinions, and that’s OK.
The main reasons (reclaiming storage above all):
Changing storage configurations
Reclaiming storage
Simplifying configuration
I added a warning to step 3 so people do some extra research first.
Performance: Local-LVM storage can offer better performance for I/O-intensive workloads due to its direct block-level access. This can be important for applications with high storage demands.
Isolation: Local-LVM allows you to create logical volumes for each VM or container, providing isolation and preventing one VM’s or container’s storage from impacting others.
Reduced Fragmentation: By using LVM, you can reduce file system fragmentation, which can lead to improved performance and storage efficiency.
Considerations for Removing Local-LVM Storage:
Usage Profile: If you’re running less I/O-intensive workloads or don’t require the specific benefits of Local-LVM storage, you may not notice a significant performance difference.
Management: Removing Local-LVM storage simplifies your storage management, which can be advantageous for environments where complexity is a concern. This might be the case for smaller or less complex setups.
Resource Reallocation: If you’re removing Local-LVM storage to free up resources for other storage solutions or for a different configuration, ensure that the changes align with your current and future needs.
In the end, the choice to use or remove Local-LVM storage depends on your use case and your performance and management requirements. If you have been successfully operating without it and don’t require the performance benefits, it might be a valid decision to remove it.
If you decide to remove Local-LVM storage, it’s crucial to have a proper backup of any VMs or containers hosted on that storage.
You are not reclaiming storage; you’re just moving it to standard file-based storage, which, as I explained, is not the best way to go.
I provided crystal-clear justifications for retaining local-LVM, so what’s the true rationale behind getting rid of it? Just because Network Chuck said so or because I stumbled upon it on the internet?
Yes, I am reclaiming storage.
By running the full code afterwards, as it’s written in step 3.
I mean, look at the Proxmox forums and Reddit; people talk about how to reclaim extra space…
Reclaiming storage: if you’re running low on storage space and don’t need the “local-lvm” storage, removing it can free up space for other purposes.
For example, they talk about resizing/reclaiming storage; that’s how the rest of the code in step 3 works.
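For context, the single-HDD procedure being debated here generally boils down to a few LVM commands. A sketch, assuming the stock `pve` volume group with an ext4 root; this destroys everything on local-lvm, so it is only for reference, with backups in hand:

```shell
# Sketch of the "remove local-lvm, grow root" reclaim (run on the PVE host,
# AFTER backing up any guests stored on local-lvm -- this deletes them):
lvremove /dev/pve/data                  # delete the thin pool backing local-lvm
lvresize -l +100%FREE /dev/pve/root     # grow the root LV over the freed space
resize2fs /dev/mapper/pve-root          # grow the ext4 filesystem to match

# Finally, remove the now-dangling "local-lvm" entry under
# Datacenter -> Storage in the GUI (or edit /etc/pve/storage.cfg).
```

This is the sense in which space is "reclaimed": the blocks reserved for the thin pool are folded into the file-based `local` storage on root, which is exactly the trade-off discussed above.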
As it says in my post here, that’s the correct information.
I think what 𝙩𝙩𝙚𝙘𝙠𝙨𝙩𝙚𝙧 is politely making you aware of is that you are following a treacherous path. The reason you haven’t hit the wall yet is that your user base is most likely underutilising their systems, and your systems haven’t been battle-tested for 5-10 years, which is why 𝙩𝙩𝙚𝙘𝙠𝙨𝙩𝙚𝙧 is advising you to reconsider your future actions.
You can argue your case all you want, but at the end of the day you are not following the right path. Just some friendly advice: there are good reasons why certain technologies exist and are highly recommended.