To Proxmox or not to Proxmox

I re-fetched from upstream and I can see your latest mods now… :slight_smile:


While testing, I make changes in a crude/easy way to make sure it works; before production, I try to tidy things up.

You already made the modifications I would have made, except one…:slight_smile:

I think I see it now… :man_facepalming:

if [ "$STORAGE_TYPE" == "zfspool" ]; then
  CT_FEATURES="fuse=1,keyctl=1,mknod=1,nesting=1"
else
  CT_FEATURES="nesting=1"
fi
pct create "$CTID" "$TEMPLATE_STRING" -arch "$ARCH" -features "$CT_FEATURES" \
  -hostname "$HOSTNAME" -net0 name=eth0,bridge=vmbr0,ip=dhcp -onboot 1 -cores 2 -memory 2048 \
  -ostype "$OSTYPE" -rootfs "$ROOTFS,size=$DISK_SIZE" -storage "$STORAGE" >/dev/null

I Like! Thank you!


The other two were like that one, but you already implemented them. :slight_smile:

Podman also gets the automatic ZFS installation. Not tested yet, but it should be the same as Docker :crossed_fingers:

I still have to study Podman a little bit…I read it’s compatible with docker-compose files, right?

Another question: what does the autodev script do exactly? I’m having problems making NUT work in a container, and it’s all due to USB passthrough I think. I can add the USB device via lxc-device, I can see it in the container after I add it, but for some strange reason the NUT driver doesn’t see the UPS. :frowning:

It’s just a USB device hook script: it recreates the USB device nodes (majors 188:* = USB serial, 189:* = the USB bus) inside the LXC so the shared (passthrough) devices are visible there.
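For reference, a typical `lxc.hook.autodev` script looks roughly like the sketch below. This isn’t the exact script from the repo; the device names, major/minor numbers, and the ownership shift are illustrative and must match what `ls -l /dev/ttyUSB0` etc. show on your host.

```sh
#!/bin/sh
# Sketch of an LXC autodev hook (lxc.hook.autodev).
# LXC exports LXC_ROOTFS_MOUNT, the container's root while it is being set up.
# The hook recreates USB device nodes (major 188 = USB serial, 189 = USB bus)
# inside the container's /dev so passed-through devices appear on every start.
cd "${LXC_ROOTFS_MOUNT}/dev" || exit 1
mkdir -p bus/usb/001

# Example nodes; the majors/minors must match the host's devices.
mknod -m 660 ttyUSB0 c 188 0
mknod -m 660 bus/usb/001/002 c 189 1

# For unprivileged containers, shift ownership to the mapped uid/gid range
# (100000:100020 is only an example; use your container's mapping).
chown 100000:100020 ttyUSB0
```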

Sorry for the delay, wife needed help. :man_shrugging:

I’ve heard a few people mention that the autodev script worked where other means didn’t. Unexplainable.

Solved it: installed NUT directly on PVE. :wink:

I centralized all dockers into one LXC container, and it seems to be working (14 containers so far). HASS is in its own LXC container… do you advise keeping it separate?

One of the CONs of the separation/isolation approach is updating all the apps: how are you managing that, manually? For Docker I installed Watchtower, which should handle the updates automatically (I haven’t tested it yet), but for all the other types, are you managing updates manually?
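For anyone else reading along, Watchtower is usually just another service in the same compose stack. A minimal fragment, assuming the stock `containrrr/watchtower` image (the schedule and `--cleanup` flag are optional choices, not requirements):

```yaml
# Hypothetical addition under services: in the central stack's docker-compose.yaml.
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # needed to inspect/update other containers
    command: --cleanup --schedule "0 0 4 * * *"     # check daily at 04:00, remove old images
    restart: unless-stopped
```

Check the Watchtower docs for the exact flags you want; by default it updates all containers it can see via the Docker socket.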

Another thing: in the repo you put an icon when a script changes; could you also add a one-line note about what changed? Just to understand whether something has to be updated on already-instanced containers. Thanks. :slight_smile:


no, you can make it #15 :slightly_smiling_face:

I use Webmin for auto OS security updates, but all of my installs are bare-metal with the exception being HA. I prefer to manually update after reading any breaking changes. Besides, I only have 8 total, :neutral_face: very simple setup.

Good idea! Thanks

Good. Is there a way to back up the existing hass docker container and migrate it to another docker host? I haven’t looked at this yet…

I now have 2 dockers: HASS, and the main central docker with 14 containers (behind Traefik); the other 10 apps are bare-metal. :slight_smile:


Copy everything from /var/lib/docker/volumes/hass_config/_data

Edit: Including hidden folders
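The hidden-folders part matters because a plain `cp src/*` skips dotfiles. A small sketch of a safe copy (the volume paths are just examples from this thread; adjust to your setup):

```shell
# Copy a directory's entire contents, dotfiles included, preserving
# permissions and ownership (cp -a = archive mode).
copy_volume_data() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  # "src/." means "the contents of src", which includes hidden entries;
  # "src/*" would silently leave .storage, .cloud, etc. behind.
  cp -a "$src/." "$dst/"
}

# e.g.:
# copy_volume_data /var/lib/docker/volumes/hass_config/_data /path/to/new/_data
```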

Is frigate bare-metal? If so, tell me how!

no, sorry, that’s the last one I want to move to the central docker. I left it there because I struggled to get GPU acceleration working in that container. Now it’s working fine and I’m a little bit scared to move it. :slight_smile:

For HASS: I’m creating a docker-compose, will create a new container in the central docker, and then copy hass_config data manually. I found some cool docker commands to backup and move the container to another instance, but I prefer to do it manually…

I stopped HA, then used tar -zcvpf to backup /var/lib/docker/volumes/hass_config/_data and restored to the new named volume.
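Spelled out, that tar round-trip looks roughly like this (paths are examples; stop Home Assistant first so the data is consistent, and note that `-C dir .` keeps archive paths relative and includes dotfiles):

```shell
# Sketch of the tar backup/restore described above.
backup_volume() {   # usage: backup_volume <data-dir> <archive.tar.gz>
  tar -zcvpf "$2" -C "$1" .
}

restore_volume() {  # usage: restore_volume <archive.tar.gz> <data-dir>
  mkdir -p "$2"
  tar -zxvpf "$1" -C "$2"
}

# e.g.:
# backup_volume  /var/lib/docker/volumes/hass_config/_data /root/hass_config.tar.gz
# restore_volume /root/hass_config.tar.gz /var/lib/docker/volumes/new_hass_config/_data
```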

Created this docker-compose.yaml based on your ha_setup.sh run parameters, but I had to comment out those two lines, otherwise it wouldn’t start. The network_mode: host one I can understand (host networking can’t be combined with the networks: and ports: settings below), but I don’t know why privileged: true doesn’t work, or whether HASS really needs it.

HASS is running on the central docker now, and it seems to be working fine…

version: '3.7'

services:
  homeassistant:
    container_name: homeassistant
    image: homeassistant/home-assistant:stable
    #network_mode: "host"
    #privileged: true
    volumes:
      - /dev:/dev
      - ./hass_config:/config
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8123"]
      interval: 30s
      timeout: 10s
      retries: 6
    networks:
      - proxy
    ports:
      - "8123:8123"

networks:
  proxy:
    external: true

I try to do everything as written in the official docs.