Strategies for keeping separate stable and dev environments

I’m just curious about other workflows for people who want to have both a stable and a development environment on the same machine, and switch from one to the other easily. First post here, by the way.

I’m running hass on an RPi 3, which does the job well enough for day-to-day running, but takes more time than I would like to (rough commands sketched after the list):

  • update
  • run script/setup
  • rebuild config/deps/
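
For reference, that cycle boils down to roughly the following on the Pi (paths, remote name, and venv location here are just what I happen to use, so treat this as a sketch rather than exact commands):

    # update: pull the latest source
    cd ~/home-assistant
    git pull upstream dev
    # reinstall hass and its dev dependencies into the venv
    source venv/bin/activate
    script/setup
    # config/deps/ is then rebuilt by hass itself on the next start,
    # which is the slow part on the Pi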

I do occasional dev work on it, especially since I use some 433 MHz RF gadgets as one of my primary home automation tools, and those are only attached to this RPi.

For day-to-day use, I run vanilla homeassistant installed by pip into a virtualenv under a dedicated user with fairly restricted permissions (e.g. it can read/write the necessary files and database in config/, but is read-only for most everything else). I figure this should be fairly stable.
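
Concretely, the stable side looks something like this (user name and paths are just mine, and the permission tightening is more involved in practice):

    # dedicated, locked-down user for the stable install
    sudo useradd -rm homeassistant
    # venv + vanilla hass from PyPI, owned by that user
    sudo -u homeassistant -H python3 -m venv /home/homeassistant/venv
    sudo -u homeassistant -H /home/homeassistant/venv/bin/pip install homeassistant
    # the user gets read/write on its own config/ and database, little else
    sudo chown -R homeassistant:homeassistant /home/homeassistant/config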

For development, I’ve been wiping venv/ completely, recreating it, and reinstalling via script/setup after pulling the latest changes into my dev branch. Then I run on dev for a few days pending PR merges and such, eventually delete the venv and go back to the PyPI version once everything is merged, and wait for PyPI to be updated to take advantage of any changes.
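
In shell terms, that wipe-and-rebuild routine is roughly (branch and directory names are mine):

    cd ~/home-assistant              # clone of the main repo
    git checkout dev
    git pull upstream dev            # grab the latest changes
    rm -rf venv                      # throw the old venv away entirely
    python3 -m venv venv
    source venv/bin/activate
    script/setup                     # reinstall hass and dev deps from source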

Lately, I’m thinking that this sounds ridiculous, and I should just create a separate venv for development with standard permissions for my regular user. I can run it interactively for testing, and when I’m done for the day, resume running my separate stable installation (kept alive as a systemd service). The only minor issues I see with this:

  • duplicated usage of drive space
  • keeping changes to the config up to date between the two (perhaps with a symlink? see the sketch after this list)
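
For the record, the stable instance is kept alive by a unit roughly along these lines (the contents here are a guess at what mine would look like, so adjust names and paths):

    # /etc/systemd/system/home-assistant.service
    [Unit]
    Description=Home Assistant (stable, installed from PyPI)
    After=network-online.target

    [Service]
    User=homeassistant
    ExecStart=/home/homeassistant/venv/bin/hass -c /home/homeassistant/config
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

and for the shared-config question, maybe something as simple as:

    # let the dev config reuse the stable configuration.yaml
    # (assumes my regular user can read it)
    ln -s /home/homeassistant/config/configuration.yaml ~/hass-dev/config/configuration.yaml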

Another strategy I considered was tinkering in my separate dev branch, and just dealing with the time sink of git stash && git checkout master && script/setup when I’m done messing around for the day, with the disadvantage of likely having to rebuild dependencies in config/deps/ every time.
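
If I went that route, I’d probably wrap the end-of-day switch in a tiny script so it’s one command (purely hypothetical helper; name and paths made up):

    #!/usr/bin/env bash
    # back-to-stable.sh: park dev work and return the checkout to master
    set -euo pipefail
    cd ~/home-assistant
    git stash push -m "wip $(date +%F)"   # stash whatever I was hacking on
    git checkout master
    script/setup                          # rebuild deps for master
    # config/deps/ will still be rebuilt by hass on the next start,
    # which is the time sink mentioned above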

Anyway, for anyone else who likes to run stable day-to-day but also does some development – especially if running on an underpowered machine – what do you do?

Thanks in advance


Ok, showing my ignorance here. But I thought the whole purpose behind the virtual environments was to have everything encapsulated there so that you could copy it around at will. Simply have one virtual environment for dev and one for prod. Shut down prod when you want to play in dev and when you are done, bring prod back online. Yes it uses twice the disk space, but disk space is cheap these days especially when it’s just a memory stick.
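
Something like: pause prod, play in dev, bring prod back when you’re done (assuming prod runs as a service; the names here are just an example):

    # play time: stop prod, run dev in the foreground
    sudo systemctl stop home-assistant
    source ~/hass-dev/venv/bin/activate
    hass -c ~/hass-dev/config

    # done for the day: bring prod back online
    deactivate
    sudo systemctl start home-assistant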

No, you’re right – that’s why I’m asking how people are managing it.

Part of the issue is that the config folder has dependencies that get installed outside of the venv (config/deps/). Otherwise, you could run separate venvs, point them both at the same config/ directory, and that would work great. But as is, you also have to keep separate config directories, and then would likely have to manually copy over your configuration.yaml as well as all the sub-configs (your !include something.yaml files) every time you wanted to do some dev work.
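
One way to cut down on the manual copying might be to sync everything except the per-install bits, e.g. with rsync (flags and paths are just a sketch):

    # pull the yaml and all the !include'd sub-configs into the dev config,
    # but leave each side's deps/ and database alone
    rsync -av --exclude 'deps/' --exclude '*.db' \
        /home/homeassistant/config/ ~/hass-dev/config/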

I use two different users with entirely separate everything.

Just a quick update,

Since posting this, I’ve found a strategy that works okay for me – simply have a separate venv/ and venv-dev/, and a separate config/ and config-dev/, in a root hass directory under version control. The main homeassistant repo is cloned into that root folder as well, and pointed at config-dev/ when I’m running manually / testing. HomeAssistant for “production use” is installed from PyPI into the other venv/.
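
For reference, the layout ends up looking like this (the root can live wherever; I’ll call it ~/hass here):

    hass/                    # root directory (kept under version control)
      venv/                  # stable: pip install homeassistant (PyPI)
      venv-dev/              # dev: populated by script/setup
      config/                # used by the stable install / systemd service
      config-dev/            # used when running the dev checkout by hand
      home-assistant/        # clone of the main repo

and running the dev side is then just:

    source ~/hass/venv-dev/bin/activate
    hass -c ~/hass/config-dev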

I manually copy over my configuration.yaml when hacking on something new, which is a slight hassle, but otherwise works okay.
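
That copy is just a one-liner, for what it’s worth (paths per the layout above):

    cp ~/hass/config/configuration.yaml ~/hass/config-dev/configuration.yaml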