I accept that this is solely my fault, carelessness, and responsibility. However, hear me out.
I had been struggling with a DNS resolution issue in Hass.io for half a year, and I finally came to the point where I decided to just reinstall hassio. So I backed up the Hass data (stupid me for not backing up the whole system drive image), stopped and removed the containers, and went to follow the steps described here all over again: https://www.home-assistant.io/hassio/installation/#alternative-install-on-generic-linux-server
And… it destroyed my server. Specifically this command: apt-get install -y apparmor-utils apt-transport-https avahi-daemon ca-certificates curl dbus jq network-manager socat software-properties-common. I'm stupid for not checking it out properly. It went on and removed a lot of packages and then broke in the middle. At first nginx stopped, then there were errors while installing it; somehow I managed to get it back up, but ownCloud was clearly missing packages. Plex and other stuff were still working, until a reboot. After that the system just did not recover. No SSH connection. It was an install with a GUI, so I connected a monitor to it. It loaded up, let me pick the user, and then showed the Ubuntu wallpaper. No desktop other than that. And a notification kept popping up, "Connected to network", followed shortly by "Disconnected from network", on and on in a loop.
I went into a tty, tried some things, but then I just… gave up. I'm going to get a beer, take a few days off from work, and do a clean install of everything now. From scratch. I will try out Proxmox; hopefully it will make it easy to make backups, so I won't need to worry about breaking stuff. Maybe it will also allow me to completely isolate Hass and other parts of the system, so I can break stuff without breaking everything.
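For anyone else following that guide: in hindsight, a dry run would have shown the removals coming before anything was touched. A rough sketch with the same package list (apt-get's -s flag only prints the plan):

```bash
# Preview what apt would do -- -s (--simulate) prints the install/remove plan
# without changing anything on the system.
sudo apt-get -s install apparmor-utils apt-transport-https avahi-daemon \
    ca-certificates curl dbus jq network-manager socat software-properties-common

# If the plan looks sane, run it without -y so apt stops and asks
# before removing anything.
sudo apt-get install apparmor-utils apt-transport-https avahi-daemon \
    ca-certificates curl dbus jq network-manager socat software-properties-common
```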
If the packages were already installed, running that command again will do literally nothing.
I can guarantee simply running apt install again will NOT render your system useless.
Proxmox is merely a wrapper for KVM and LXC. If you can mess up a simple desktop, you will no doubt be able to mess up a virtualization host.
Docker already does that…
What you experienced has nothing to do with running apt install on packages that are already installed. Your issue is something beyond HA, and has nothing to do with the script.
apt-get lets you manage software on your computer
Install is the command to install software. It does NOTHING else.
-y just skips the "Are you sure?" prompt by answering "Yes".
The rest is just a list of software packages to install, along with any dependencies.
That command will only install a package or overwrite an older version. Something else (perhaps coincidentally) may have happened.
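If you want to convince yourself of that, you can check what is already on the system before re-running anything. A quick sketch, using package names from the command above:

```bash
# Show installed status and versions for the packages in question.
dpkg -l apparmor-utils avahi-daemon network-manager socat jq

# Compare the installed version of one package against the repo candidate.
apt-cache policy network-manager
```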
I recently experienced a power outage at home. When the power came back, my Bluetooth mouse did not work. Apparently it chose that moment to have dead batteries, unrelated to the home power outage.
Well, guys, say what you will, but this is what happened. I'm well aware of what apt does and how, and that is exactly why I did not consider this command to be dangerous in any way. But when I ran it, it started to remove packages, supposedly to update them, and then failed to update them. I don't know what else could have happened.
@flamingm0e Running it in VMs will allow me to easily create full system backups on a schedule, so no matter what happens I'll be able to recover easily. Docker, at least with hassio, does not provide full isolation, otherwise I would not be required to run any other apt commands. I'm talking about full isolation.
@anon34565116 I'm aware of how apt works and what it does. What else could've caused this? I literally just ran these commands:
And it started to remove a lot of packages, including nginx and other things, and then started installing them anew. Before that, the system had been running fine for about 2 years. Ideas?
Assumption based on what? One case of breaking an Ubuntu server by running a perfectly safe command? Also, I won't have to do much with the host. All the testing and experimenting will happen in VMs, which is much safer than doing it on a bare-metal OS that runs lots of non-isolated services.
Docker with hassio is full isolation. You didn't need to run any apt commands. The only apt commands you need to run are when you install. FULL STOP.
DOCKER IS FULL ISOLATION.
Why would you try to RE-ENABLE a repo and REINSTALL packages that are already working?
I was unsure if I had run them properly the first time, and assumed that if something was not installed it would be added. As you said, I assumed these commands would merely check whether those packages are installed and only install the missing ones. Maybe update them, though there's a separate flag for that.
So I'll repeat my question: what else do you think might've gone wrong? A post before yours said that apt install can't destroy the system. Now you're asking why I ran it again. Well, if it can't hurt, then why even ask?
Yes and no. In the case of HASSIO, the hassio_install script does stuff with the host OS, like adding systemd services and editing the Docker config. It adds Google DNS to it, for example, which, by the way, is not what I need; I need my router to be the DNS server for everything on my network.
This is slightly incorrect. The install script creates and starts a systemd service to make sure the SUPERVISOR is running so that it can manage hassio and the add-ons. Nothing more than that.
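If you want to see exactly what it set up, the unit is easy to inspect from the host. A sketch, assuming hassio-supervisor is the service name the generic Linux installer uses; adjust if yours differs:

```bash
# Show the supervisor service the install script created (name assumed).
systemctl status hassio-supervisor
systemctl cat hassio-supervisor

# And the Docker daemon config it touches.
cat /etc/docker/daemon.json
```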
No idea, but running a single apt install command would not have caused your entire system to be unusable unless you modified repos that affected the base system.
I was asking why you were running it again, because obviously it DOESN'T need to be run again after it has been run once. If HASSIO was working at all, then all the packages were installed correctly. It's literally the first step of the script to check for those packages.
Modify the script or the JSON file yourself then. I think this is done because people tend to rely on their ISP's DNS unknowingly, and a lot of those don't work.
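For the DNS part, the setting ends up in /etc/docker/daemon.json and can point at whatever you like. A minimal sketch, assuming the router is 192.168.1.1 and that daemon.json has no other keys you need to keep (merge it by hand if it does):

```bash
# Point Docker's DNS at the local router instead of Google DNS.
# 192.168.1.1 is an assumption -- use your router's address.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "dns": ["192.168.1.1"]
}
EOF
sudo systemctl restart docker
```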
In my understanding this is still not full isolation. I'd like to just avoid any chance of what happened and isolate it even more. MORE ISOLATION!
Did not. Other than adding the one required by hassio and some others like Docker CE. But you're trying to argue with the fact of what happened. I'm not making this up. I SSHed in and ran those commands. Then I saw packages starting to get removed and so on. If there had been no -y flag, maybe I'd have been able to stop it before it happened. But when I saw 50+ "Removing" lines, I decided it was better not to touch it and let it do its thing, assuming it would reinstall them, maybe newer versions. It did not.
True, but as you said, it should not matter if I ran them again or not.
That's what I did, but did not get to that.
Just because you never ran into the issue does not mean it does not exist. You and I and anyone else can say a hundred times that "apt install can't ruin your system", but that's what happened. I'm unsure why it started to remove anything at all. But it did.
Then run straight Docker and manage Home Assistant on your own. Stop using HASSIO.
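If you drop the supervisor, plain Home Assistant is a single container. A rough sketch, where the image tag, host path, and timezone are assumptions rather than a recommendation:

```bash
# Plain Home Assistant in Docker -- no supervisor, no add-on management.
# /srv/homeassistant is an assumed host path for the config volume.
docker run -d \
  --name home-assistant \
  --restart unless-stopped \
  --network host \
  -e TZ=Europe/Berlin \
  -v /srv/homeassistant:/config \
  homeassistant/home-assistant:stable
```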
The only time this will happen is when you have conflicting packages, or packages have been REPLACED.
I am a 20-year Linux user… never seen apt remove everything just for shits and giggles. There is a reason. Something was modified to allow it to happen.
sudo do-release-upgrade actually upgrades the OS version.
sudo apt upgrade merely upgrades package versions
sudo apt update updates the local "database", informing the system what newer versions of packages are out there.
sudo apt dist-upgrade upgrades all installed packages, adding or removing dependencies as needed, including the kernel.
The only one to ever be wary of is sudo do-release-upgrade. I run the others on my production servers all the time during maintenance. I have never had any of them fail.
I am guessing that by trying to force new versions of some installed packages without doing an upgrade, you caused incompatibility issues between packages. Run sudo apt update && sudo apt dist-upgrade && sudo apt autoremove -y and it should get you back to a running system. You will need to install some packages to get your desktop back if you don't have one now, but that should just be sudo apt install ubuntu-desktop.
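Spelled out, with a dry run first so you can see the plan before it touches anything:

```bash
# Refresh the package lists, then preview the full upgrade plan.
sudo apt update
sudo apt-get -s dist-upgrade

# If the plan looks reasonable, apply it and clean up orphaned packages.
sudo apt dist-upgrade
sudo apt autoremove -y

# Only if you want the desktop back (assumes the stock Ubuntu desktop).
sudo apt install ubuntu-desktop
```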
That can cause incompatibilities between some of the newer packages, or with existing running services. Quite often running apt upgrade broke my Python services, for example because it also updated some of Python's modules, those modules changed their API, and I had to go in and fix the code of my services. Sure, I could run them in a virtualenv. I probably should. But still.
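If I do go the virtualenv route, pinning the versions is the part that would actually protect against this. A sketch, with a made-up path and service purely for illustration:

```bash
# Keep the service's Python deps out of the system site-packages so an
# apt upgrade can't swap module versions underneath it.
# /opt/myservice is a made-up path for illustration.
python3 -m venv /opt/myservice/venv

# Install from a requirements file with pinned versions, e.g. "somelib==1.2.3".
/opt/myservice/venv/bin/pip install -r /opt/myservice/requirements.txt

# Run the service with the venv's interpreter, e.g. in its systemd unit:
#   ExecStart=/opt/myservice/venv/bin/python /opt/myservice/service.py
```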
Thanks, I'll try that. I am currently in the process of making a backup image of the system drive, so I still have it in its current state.
I don't need the desktop, tbh; I only use it headless by now. As long as everything else works I don't care about the desktop, might as well remove it.
One of the reasons I wanted to run Proxmox, however, is to also have an easy(ish) GUI for managing VMs, because I have thought about running some game servers on it as well. I figured VMs might work better for that than Docker, especially if it's a Windows-only server. Or not.
I did not expect them to change their API so much. It was the socketio client.
Because it's been working for 2 years and I did not bother changing it.
Just realized the problem with running apt update: the network can't connect. It shows that the connection is established and then disconnected right after. I know that lots of stuff has been wrong with this OS for a while, so maybe a clean install is a good thing at this point. It was also upgraded from 16.04 to 18.04 a few months ago.
EDIT: Oh no, it's working even though it shows it's disconnecting lol
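If anyone hits the same connect/disconnect loop, these are roughly the things worth checking from a tty, assuming NetworkManager is handling the interface (the install pulls in network-manager):

```bash
# See which device NetworkManager thinks is connected, and to what.
nmcli device status
nmcli connection show

# Watch NetworkManager's log for the connect/disconnect loop since boot.
journalctl -u NetworkManager -b --no-pager | tail -n 50
```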
Why not Proxmox then? I thought its GUI is quite nice for managing the host. Cockpit and Webmin are not quite there in terms of VM and disk management. I can do stuff in the terminal; it's just that I like GUIs, they are often faster or easier to work with.
I run Docker in an Ubuntu VM. Using a VM does help simplify backups, especially as the images are stored on my NAS via NFS. It also means a 30-minute job to build a new server and point it at the images if I have a hardware issue.
The config of each VM is very standard, just enough to run Docker and add some security. Other than that, everything is in Docker: all my home automation on one VM, management stuff on another, test stuff on another.
I've never found the need for a GUI to manage KVM; it's really just a few simple commands once things are installed. I used Portainer to help get started with Docker but have pretty much moved to docker-compose and the command line now.
I find this setup keeps all my containers independent and makes everything very manageable.
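For reference, the day-to-day loop once everything is in compose is only a couple of commands (docker-compose v1 here, and the stack path is an assumption):

```bash
# Bring the stack up, or apply changes to docker-compose.yml.
cd /srv/stacks/home-automation   # assumed path to the compose project
docker-compose up -d

# Routine update: pull newer images, recreate what changed, prune old images.
docker-compose pull
docker-compose up -d
docker image prune -f
```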
I use the hell out of Proxmox at home. I have been running Proxmox servers for over 10 years. They are great if you actually want a virtual host. If you want to deal with virtualization and networking, then itās great. If all you want is a simple server, Ubuntu server works great.
At this point, with all I'm doing with it, it's growing out of "a simple server". I mean, if it were just media storage, a media server, a downloader, and Hass, sure, but it's also a development server and I plan to run some game servers. Running all of that on a bare Ubuntu host is clearly a straight road into trouble. That's why I've been putting all of that off until I have more space for other hardware and/or time to recreate everything from scratch.
At this point, even for a media server, I find the idea of being able to make a quick backup and a quick restore of the whole thing very appealing. Basically, what @eggman said.
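For the backup part specifically, Proxmox can schedule those from the GUI, and the same thing from the shell is a one-liner with vzdump. A sketch, where the VMID (100), storage name, and dump path are all assumptions:

```bash
# Snapshot-mode backup of VM 100 to a storage named "backups".
vzdump 100 --mode snapshot --compress lzo --storage backups

# Restore onto a fresh host later (the actual dump filename will differ).
qmrestore /mnt/backups/dump/vzdump-qemu-100-2019_01_01-00_00_00.vma.lzo 100
```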
Tried running those commands; ubuntu-desktop encountered errors. Some stuff seems to be back up and running. Also noticed that I had used the wrong Ethernet port, so the router assigned the wrong IP to it (banging my head against the wall).
Panic is never a good thing. Even if I managed to get it back up, I'd still like to move to Proxmox one day.