Thanks! Yes, it will be the Hassbian Docker image for the Pi.
As you can tell, I'm too much of a noob. I've been dealing with Cloud Foundry too much and am ready to bring containers into my life.
Good day, I'm looking for a bit of nginx guidance. I got it working and can see the HA instance. This is the nginx.conf I used: https://hastebin.com/qarevajizo.coffeescript
How do I add additional proxy_pass entries over SSL for other components, such as Configurator, so that I can see them within HA?
Thank you.
I don't use Configurator or Hass.io, so I don't know what else is needed.
I'm not using Hass.io. How would additional components be passed through? For instance, in your configuration you have Portainer; if you want to see it in HA, you need to add another proxy_pass.
How would I see Portainer in my HA?
You can add more blocks to your server confs to get more proxy passes, so I'm not sure what you're asking.
For instance, on one of the servers I run in a datacenter, I have Portainer running behind a reverse proxy:
upstream portainer {
    server <IPADDRESS>:9000;
}

location /portainer/ {
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://portainer/;
}
It's that simple, really.
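For context on how a path proxied like that typically shows up inside Home Assistant: via a panel_iframe entry in configuration.yaml. A minimal sketch, assuming a hypothetical hostname for the proxy (adjust to your own):

```yaml
# configuration.yaml — example.duckdns.org is a placeholder for your proxy's address
panel_iframe:
  portainer:
    title: "Portainer"
    icon: mdi:docker
    url: "https://example.duckdns.org/portainer/"
```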
Can you then use Portainer as an iframe in Home Assistant and still access Home Assistant externally with the iframe working?
I have no idea. I have zero use for that type of configuration.
I have no need to access my Docker containers externally. If I need to restart one, I have all my containers added as switches in HA that I can toggle on/off. If something really fails, I have VPN access into my network and can get into my server.
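One way containers can be wired up as HA switches like that is the command_line switch platform calling the Docker CLI. A rough sketch, not the poster's actual setup; the container name node-red is just an example, and it assumes HA can reach the Docker daemon:

```yaml
# configuration.yaml — sketch only; assumes the HA process can run `docker`
switch:
  - platform: command_line
    switches:
      node_red:
        command_on: "docker start node-red"
        command_off: "docker stop node-red"
        command_state: "docker inspect -f '{{.State.Running}}' node-red"
        value_template: "{{ value == 'true' }}"
```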
I was using this blog for ideas; it shows how to set things up nicely, but I need to learn how to modify my image so that the Python MQTT library is installed.
https://blog.luciow.pl/automation/2018/02/10/dont-reinvent-the-wheel/
Nice. I'm trying to do this but can't get it to work. I can only access the Node-RED iframe locally; when trying externally, it can't find Node-RED.
@sjofel - think of the iframe as just a mini-browser: whatever "website" you are displaying in the iframe ALSO has to be accessible from wherever you are accessing HA. In other words, you would also have to have Node-RED's port exposed (forwarded) on your router.
If you do this, make sure you set up SSL and a password on Node-RED! Otherwise, it's wide open for anyone to hack.
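Concretely, Node-RED ships with no authentication, and both can be enabled in its settings.js. A hedged sketch; the certificate paths and the bcrypt hash are placeholders (Node-RED's docs suggest generating the hash with `node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8))" your-password`):

```javascript
// settings.js — sketch; replace cert paths and the password hash with your own
module.exports = {
    https: {
        key:  require("fs").readFileSync("privkey.pem"),
        cert: require("fs").readFileSync("fullchain.pem")
    },
    adminAuth: {
        type: "credentials",
        users: [{
            username: "admin",
            password: "$2a$08$...replace-with-your-bcrypt-hash...",
            permissions: "*"
        }]
    }
};
```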
Yeah, I thought so. I was hoping I could use it without exposing Node-RED.
I am in the process of converting Home Assistant from a virtual environment on a Raspberry Pi to a Portainer environment.
I got it up and running with the default Home Assistant container. However, after replacing the configuration with my existing one, Home Assistant fails to start, saying a secret doesn't exist, even though the secrets.yaml file exists.
Are you sure the supposedly missing secret is present in the YAML file and spelled exactly the same as in the call?
It was spelled the same; the issue was more benign: I had named the file secrets.yml instead of secrets.yaml.
Now it's failing to start due to ecobee and Z-Wave errors, but I can look into those.
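For anyone hitting the same thing: the file must be named secrets.yaml and sit in the same directory as configuration.yaml, with entries referenced via !secret. A minimal sketch (the key name and value are just examples):

```yaml
# secrets.yaml — same directory as configuration.yaml
http_password: mysupersecret

# configuration.yaml — reference the secret by key name
http:
  api_password: !secret http_password
```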
Inspired by this, I'm just starting to test putting my home automation stack in docker/docker-compose. It looks awesome having several programs all described textually in version control. I am facing one issue, though: I mount the configuration directory to a host directory (a git repo), and all new files created in the container end up owned by root. Is this by design? Any best practices here?
By default, most Docker images run as the root user inside the container. I don't know if you can pass through a UID/user. Since my NUC is dedicated to only running home automation tasks, I don't have a problem running as root.
You could utilize groups rather than users to access/modify the data.
Personally, since I don't have to edit the files directly on the NUC, and I don't log in to the host unless it's time for an upgrade, it hasn't caused me any issues.
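For what it's worth, Docker does accept a user setting, so files written into a bind mount come out owned by that UID. A compose sketch; the image, paths, and UID are illustrative, and note the Home Assistant image may still expect root for some integrations, so test before relying on it:

```yaml
# docker-compose.yml fragment — illustrative only
services:
  homeassistant:
    image: homeassistant/home-assistant
    user: "1000:1000"        # UID:GID that should own files in the bind mount
    volumes:
      - ./config:/config     # new files in /config are then created as 1000:1000
```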
If you use Syncthing, your directory is synced automatically and without user intervention.
What does your influxdb.conf look like? I've been borrowing from your stack pretty heavily, but InfluxDB isn't working.
reporting-disabled = false
bind-address = "0.0.0.0:8088"

[meta]
  dir = "/var/lib/influxdb/meta"
  retention-autocreate = true
  logging-enabled = true

[data]
  dir = "/var/lib/influxdb/data"
  index-version = "inmem"
  wal-dir = "/var/lib/influxdb/wal"
  wal-fsync-delay = "0s"
  query-log-enabled = true
  cache-max-memory-size = 1073741824
  cache-snapshot-memory-size = 26214400
  cache-snapshot-write-cold-duration = "10m0s"
  compact-full-write-cold-duration = "4h0m0s"
  max-series-per-database = 1000000
  max-values-per-tag = 100000
  max-concurrent-compactions = 0
  trace-logging-enabled = false

[coordinator]
  write-timeout = "10s"
  max-concurrent-queries = 0
  query-timeout = "0s"
  log-queries-after = "0s"
  max-select-point = 0
  max-select-series = 0
  max-select-buckets = 0

[retention]
  enabled = true
  check-interval = "30m0s"

[shard-precreation]
  enabled = true
  check-interval = "10m0s"
  advance-period = "30m0s"

[monitor]
  store-enabled = true
  store-database = "_internal"
  store-interval = "10s"

[subscriber]
  enabled = true
  http-timeout = "30s"
  insecure-skip-verify = false
  ca-certs = ""
  write-concurrency = 40
  write-buffer-size = 1000

[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = false
  log-enabled = true
  write-tracing = false
  pprof-enabled = true
  https-enabled = false
  https-certificate = "/etc/ssl/influxdb.pem"
  https-private-key = ""
  max-row-limit = 0
  max-connection-limit = 0
  shared-secret = ""
  realm = "InfluxDB"
  unix-socket-enabled = false
  bind-socket = "/var/run/influxdb.sock"
  max-body-size = 25000000

[ifql]
  enabled = false
  log-enabled = true
  bind-address = ":8082"

[[graphite]]
  enabled = true
  bind-address = ":2003"
  database = "graphite"
  retention-policy = ""
  protocol = "tcp"
  batch-size = 5000
  batch-pending = 10
  batch-timeout = "1s"
  consistency-level = "one"
  separator = "."
  udp-read-buffer = 0

[[collectd]]
  enabled = false
  bind-address = ":25826"
  database = "collectd"
  retention-policy = ""
  batch-size = 5000
  batch-pending = 10
  batch-timeout = "10s"
  read-buffer = 0
  typesdb = "/usr/share/collectd/types.db"
  security-level = "none"
  auth-file = "/etc/collectd/auth_file"
  parse-multivalue-plugin = "split"

[[opentsdb]]
  enabled = false
  bind-address = ":4242"
  database = "opentsdb"
  retention-policy = ""
  consistency-level = "one"
  tls-enabled = false
  certificate = "/etc/ssl/influxdb.pem"
  batch-size = 1000
  batch-pending = 5
  batch-timeout = "1s"
  log-point-errors = true

[[udp]]
  enabled = false
  bind-address = ":8089"
  database = "udp"
  retention-policy = ""
  batch-size = 5000
  batch-pending = 10
  read-buffer = 0
  batch-timeout = "1s"
  precision = ""

[continuous_queries]
  log-enabled = true
  enabled = true
  query-stats-enabled = false
  run-interval = "1s"
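On the Home Assistant side, the matching influxdb entry in configuration.yaml is fairly minimal. A sketch, assuming the conf above (HTTP on 8086 with auth disabled, so no credentials needed); the host is a placeholder, and home_assistant is HA's default database name:

```yaml
# configuration.yaml — host is an example; adjust to where InfluxDB runs
influxdb:
  host: 192.168.1.10
  port: 8086
  database: home_assistant
```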
That worked great!
Thanks for your quick response.