I re-fetched from upstream and I can see your latest mods now…
While testing, I make changes in a crude/easy way just to make sure things work; before production, I try to tidy them up.
You already made the modifications I would have made, except one…
I think I see it now…
if [ "$STORAGE_TYPE" == "zfspool" ]; then
  CT_FEATURES="fuse=1,keyctl=1,mknod=1,nesting=1"
else
  CT_FEATURES="nesting=1"
fi
pct create "$CTID" "$TEMPLATE_STRING" -arch "$ARCH" -features "$CT_FEATURES" \
  -hostname "$HOSTNAME" -net0 name=eth0,bridge=vmbr0,ip=dhcp -onboot 1 -cores 2 -memory 2048 \
  -ostype "$OSTYPE" -rootfs "$ROOTFS,size=$DISK_SIZE" -storage "$STORAGE" >/dev/null
I like it! Thank you!
The other two were like that one, but you already implemented them.
Podman also has the automatic ZFS installation. Not yet tested, but it should be the same as Docker.
I still have to study Podman a little bit… I read it's compatible with docker-compose files, right?
Another question: what does the autodev script do, exactly? I'm having problems making NUT work in a container, and I think it's all due to USB passthrough. I can add the USB device via lxc-device, and I can see it in the container after I add it, but for some strange reason the NUT driver doesn't see the UPS.
It's just a USB device hooks script. It adds USB devices (majors 188:* and 189:*) shared (passed through) with the LXC.
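For context, a sketch of roughly what such a hook amounts to in the container config. This is illustrative only, not the actual script: the device majors come from the comment above (188 is usb-serial, 189 is the USB bus), the path is the standard Proxmox per-container config, and `lxc.cgroup2.*` applies to current cgroup-v2 hosts (older setups use `lxc.cgroup.*`):

```
# /etc/pve/lxc/<CTID>.conf — illustrative snippet, not the actual autodev script
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir
```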
Sorry for the delay, wife needed help.
I've heard a few people mention that the autodev script worked when other means didn't. Inexplicable.
Solved it: installed NUT directly on PVE.
I centralized all my Docker containers into one LXC container, and it seems to be working (14 containers so far). HASS is in its own LXC container… do you advise keeping it separate?
One of the cons of the separation/isolation approach is updating all the apps. How are you managing that, manually? For Docker I installed Watchtower, which should manage updates automatically (I didn't test it yet), but for all the other types, are you managing updates manually?
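For reference, a minimal compose sketch of a Watchtower service. `containrrr/watchtower` is the commonly used image; the cleanup option shown is just one example setting, not something from this thread:

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Watchtower manage the other containers
    environment:
      - WATCHTOWER_CLEANUP=true   # example option: remove old images after updating
    restart: unless-stopped
```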
Another thing: in the repo you put an icon when a script changes; could you add a one-line note about the change? Just to understand whether something has to be updated on instanced containers. Thanks.
no, you can make it #15
I use Webmin for auto OS security updates, but all of my installs are bare-metal with the exception being HA. I prefer to manually update after reading any breaking changes. Besides, I only have 8 total, very simple setup.
Good idea! Thanks
Good. Is there a way to back up the existing HASS docker container and migrate it to another docker? I haven't yet looked at this…
I now have 2 dockers: HASS, and the main central docker with 14 containers (Traefik); the other 10 are bare-metal.
Copy everything from /var/lib/docker/volumes/hass_config/_data
Edit: Including hidden folders
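A runnable sketch of that copy. The real source would be `/var/lib/docker/volumes/hass_config/_data` and the destination the new volume's `_data` directory; temp dirs stand in here so the example runs anywhere. The trailing `/.` is what makes `cp` include the hidden folders:

```shell
# Demo: copy a directory's contents including hidden files/folders.
SRC=$(mktemp -d)   # stands in for /var/lib/docker/volumes/hass_config/_data
DST=$(mktemp -d)   # stands in for the new volume's _data dir
echo "token" > "$SRC/.hidden"                 # a dotfile, like HASS's .storage
mkdir -p "$SRC/sub" && echo "x" > "$SRC/sub/file"
# -a = archive mode: recursive, preserves mode/owner/timestamps;
# the trailing "/." copies the directory's contents, dotfiles included.
cp -a "$SRC/." "$DST/"
ls -A "$DST"
```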
Is frigate bare-metal? If so, tell me how!
No, sorry, that's the last one I want to move to the central docker. I left it there because I struggled to get GPU acceleration working in that container. Now it's working fine and I'm a little scared to move it.
For HASS: I'm creating a docker-compose file, I will create a new container in the central docker, and then I'll copy the hass_config data manually. I found some cool docker commands to back up and move the container to another instance, but I prefer to do it manually…
I stopped HA, then used tar -zcvpf to back up /var/lib/docker/volumes/hass_config/_data and restored it to the new named volume.
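The same flow, sketched with temp dirs standing in for the real paths so it runs anywhere. In practice the source is `/var/lib/docker/volumes/hass_config/_data`, the target is the new named volume's `_data` directory, and you would stop/start the container around it (e.g. `docker stop homeassistant`):

```shell
# tar backup/restore roundtrip for a volume's data, as described above.
SRC=$(mktemp -d)            # stands in for the old volume's _data dir
DST=$(mktemp -d)            # stands in for the new volume's _data dir
BACKUP="$(mktemp -u).tar.gz"
echo "config" > "$SRC/configuration.yaml"
mkdir -p "$SRC/.storage" && echo "auth" > "$SRC/.storage/auth"
# -z gzip, -c create, -p preserve permissions, -f archive file;
# -C changes into the dir first, so archive paths are relative
# and hidden entries like .storage are included via ".".
tar -zcpf "$BACKUP" -C "$SRC" .
tar -zxpf "$BACKUP" -C "$DST"
```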
Created this docker-compose.yaml based on the run parameters in your ha_setup.sh, but I had to comment out those two lines, otherwise it wouldn't start. I can understand network_mode: host, but I don't know why privileged: true doesn't work, or whether it's really needed for HASS.
HASS is running on the central docker now, and it seems to be working fine…
version: '3.7'
services:
  homeassistant:
    container_name: homeassistant
    image: homeassistant/home-assistant:stable
    #network_mode: "host"
    #privileged: true
    volumes:
      - /dev:/dev
      - ./hass_config:/config
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8123"]
      interval: 30s
      timeout: 10s
      retries: 6
    networks:
      - proxy
    ports:
      - "8123:8123"
networks:
  proxy:
    external: true
I try to do everything as written in the official docs.