I wanted to share my configuration for storing hass secrets in HashiCorp Vault. This is configured and running on a k8s cluster, so I don't know how useful this will be to many people, but it's been a fun challenge nonetheless.
The general idea:
- secrets are stored securely in vault
- when a pod starts, secrets are injected into the pod
- secrets are converted to a format that hass understands (secrets.yaml)
- do not break hass. Everything should continue working as before
I have 2 hass deployments in k8s - one of them I use for testing, etc. This also means that deployment connects to different postgres and influxdb databases and runs on a different port than the standard 8123.
I am using the k8s@home helm chart, so all of the configuration below is based on that.
1. Injecting secrets into the container
NOTE: I am skipping a lot of vault-specific configuration like kubernetes auth, policies, setting up a kv store, etc. here.
- Create a secret in vault for common values (those shared between the 2 hass deployments). Example:
secret/hass/common
- Create a secret in vault for individual deployments. Example:
secret/hass/prod
and secret/hass/dev
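For example, with the vault CLI (the key names and values here are made up - use whatever your configuration actually needs):
# shared between prod and dev
vault kv put secret/hass/common latitude=51.0 longitude=4.0 mqtt_password=supersecret
# per-deployment values
vault kv put secret/hass/prod db_url="postgresql://hass:changeme@postgres-prod/hass" http_port=8123
vault kv put secret/hass/dev db_url="postgresql://hass:changeme@postgres-dev/hass" http_port=8124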
- Create a service account for each deployment that will have access to retrieve secrets from vault, and set that service account in the helm chart. The 2 accounts are associated with individual policies in vault which grant them access only to the resources they need. Example - the service account for prod has access to the common and prod secrets, but not the dev secrets.
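For prod, the policy and kubernetes auth role look roughly like this (this assumes a KV v2 mount at secret/, hence the data/ segment; the service account name, namespace and TTL are placeholders):
# hass-prod.hcl - read-only access to common + prod, nothing else
path "secret/data/hass/common" {
  capabilities = ["read"]
}
path "secret/data/hass/prod" {
  capabilities = ["read"]
}

# load the policy and bind it to the service account via the kubernetes auth method
vault policy write hass-prod hass-prod.hcl
vault write auth/kubernetes/role/hass-prod \
    bound_service_account_names=hass-prod \
    bound_service_account_namespaces=home-automation \
    policies=hass-prod \
    ttl=24h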
- Configure the helm chart to use the vault agent injector:
podAnnotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/role: "hass-prod"
  vault.hashicorp.com/agent-inject-default-template: "json"
  vault.hashicorp.com/agent-inject-file-common: "common.json"
  vault.hashicorp.com/agent-inject-secret-common: "secret/hass/common"
  vault.hashicorp.com/agent-inject-file-env: "env.json"
  vault.hashicorp.com/agent-inject-secret-env: "secret/hass/prod"
- Now, when we scale up this deployment, we should see a vault-agent container in the hass pod. You can also attach to the hass container and examine the /vault/secrets folder. If everything is configured correctly, there will be 2 json files: one containing the common values, one containing the environment-specific values.
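To double-check from outside the container, something like this works (the deployment and container names will be whatever yours are called; the json below is roughly the shape the injector's json template produces for a KV v2 secret):
kubectl exec deploy/home-assistant -c home-assistant -- ls /vault/secrets
# common.json  env.json
kubectl exec deploy/home-assistant -c home-assistant -- cat /vault/secrets/common.json
# {
#   "data": { "latitude": "51.0", "mqtt_password": "supersecret" },
#   "metadata": { "created_time": "2024-01-01T00:00:00Z", "version": 1 }
# }
That data/metadata wrapper is why the script below only picks out the data key.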
2. Passing those secrets to hass
Home Assistant expects a secrets.yaml file. The steps below are how I make sure the injected json files stored in /vault/secrets can be used by Home Assistant.
The idea is simple:
- we already have python in the hass container, so let's use python to a) convert json to yaml and b) combine the common and the environment-specific values into 1 file.
- Once we convert the values from the json files into a secrets.yaml file, we also need to make sure hass can access that file. Here I decided to use a symlink pointing /config/secrets.yaml (where hass expects it) to /vault/secrets/secrets.yaml (where the secrets are really located).
My python script looks like this:
"""parse secrets"""
import os
import json
import yaml
secrets_distination = "/config/secrets.yaml"
secrets_source = "/vault/secrets/secrets.yaml"
common_json_file = json.load(open("/vault/secrets/common.json", encoding="utf-8"))
env_json_file = json.load(open("/vault/secrets/env.json", encoding="utf-8"))
with open(secrets_source, "w", encoding="utf-8") as f:
yaml.dump(env_json_file["data"], f, allow_unicode=True)
with open(secrets_source, "a", encoding="utf-8") as f:
yaml.dump(common_json_file["data"], f, allow_unicode=True)
if not os.path.islink(secrets_distination):
print(f"Creating symlink for {secrets_distination}")
os.symlink(secrets_source, secrets_distination)
else:
print(f"{secrets_distination} exists")
The only thing left to do is make sure this script executes before hass starts. For that I am going back to my k8s helm chart to modify the command.
command:
  - "/bin/ash"
args: ["-c", "python init_secrets.py && /init"]
And this is it. I have also been using the same method to inject secrets into ESPHome with great success for a few months now.