Installing apps in a running container is not the “Docker way”, as I understand it. Normally you would build an image with the needed application installed and create a container from that.
To answer your question: yes, you can likely install it, but I'm not sure it will be persistent. The best way is to build an image that includes it.
From the Docker container you would SSH into the host (or remote server) and then execute the command on the host.
To test:
Log in to the Docker container: “docker exec -it yourcontainer /bin/bash”
SSH to the server you're trying to control: “ssh user@IPaddress”
Have the script installed there and run it there: “/config/turn_off_pc.sh”
So you'd likely have a script in the HA Docker container that SSHes in and runs the script on the host (remote server).
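Something along these lines, as a rough sketch (the user, IP and script names are placeholders, with the remote script being the /config/turn_off_pc.sh example above):

#!/bin/bash
# Hypothetical /config/shutdown_remote.sh inside the HA container.
# SSH to the host/remote server (placeholder user and IP) and run the
# script that lives over there.
ssh user@192.168.1.2 '/config/turn_off_pc.sh'

That script is then what Home Assistant would call (e.g. via a shell_command).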
EDIT
The Docker container should be treated as a separate host with its own IP, assuming --net=host was not used (I believe).
I tested this (manually, no scripts) and it worked OK.
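If you want to confirm the container's own IP, I think something like this on the Docker host will show it (container name is the same placeholder as above):

# print the IP address Docker assigned to the container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' yourcontainer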
I did some testing and ran into a problem instantly: my Docker container had changed its name because Watchtower had updated it. This happens every time a new Home Assistant update comes out, which would make the script useless every time, wouldn't it?
But I did try what you said, and I managed to connect to it with the root user, though I got a few questions first:
root@nas:/usr/src/app# ssh root@192.168.1.2
The authenticity of host ‘192.168.1.2 (192.168.1.2)’ can’t be established.
ECDSA key fingerprint is XX:XX:XX:XX:XX:XX. (redacted the real fingerprint, in case it is important)
Are you sure you want to continue connecting (yes/no)? y
Please type ‘yes’ or ‘no’: yes
Warning: Permanently added ‘192.168.1.2’ (ECDSA) to the list of known hosts.
root@192.168.1.2’s password:
But then I was connected. So how do I solve the problem of the container changing name?
And how would I write the script to make all this work?
EDIT: I could replace the container name with the name of the Docker container itself, which is Homeassistant, so that will work every time!
Got some more testing done. I was confused; I don't actually need the container's name itself, that was only for testing.
My trouble now is how to write the script itself.
“ssh user@IPaddress” asks for a password. How do I put this in the script, so that it will automatically connect over SSH and run net rpc shutdown -f -t 0 -I 192.168.1.3 -U username%password?
Try setting up an SSH key from the Docker container to the host (remote server). That way no user/password is needed.
You may be able to keep this persistent by mounting the folder holding the key into the Docker container.
Other than that it may require some googling (how to ssh from a bash script).
Using the “echo” command may be possible, but I think I tried echo before with no success. A key is probably the best solution.
Great idea, I tried it now.
I managed to make the key; I did this:
ssh-copy-id root@192.168.1.2
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed – if you are prompted now it is to install the new keys
root@192.168.1.2’s password:
Number of key(s) added: 1
Now try logging into the machine, with: “ssh ‘root@192.168.1.2’”
Tested the command above, and it worked.
So now I edited the container and added this:
For the host: /root/.ssh/ and for the container: /root/.ssh/, which seems to be the standard location.
Then I restarted the container, but no success. So I think it might not have copied over to the host?
But I checked the host's /root/.ssh/ folder and I think the files were not updated; only one had been changed today. So how do I copy all the files, and can I overwrite the existing files on the host? Because this looks like the standard folder for this.
Not my area of expertise, but on the host you should ADD the public key you generated. The website provides a method for doing this.
In the container, the private key should have been saved in the location that was mounted. I think you used /root/.ssh.
The /root/.ssh folder should have been mounted before running the commands. So it looks like you created the keys, tested, and then went back and made a container with /root/.ssh mounted (or mapped; basically the “-v folder:folder” option used when making the container).
I think you need to rerun all the commands again, because the private key does not exist in the container any more. You will need to recreate the keys and save the public key to the host again, then test.
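In other words, the order matters: create the container with /root/.ssh mounted first, then generate the keys inside it. Roughly like this (container/image names and paths are just examples, not your exact setup):

# run the container with /root/.ssh mapped to a folder on the host
docker run -d --name homeassistant \
  -v /home/user/ha/ssh:/root/.ssh \
  -v /home/user/ha/config:/config \
  homeassistant/home-assistant
# then generate the key pair and copy the public key from inside the container
docker exec -it homeassistant ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
docker exec -it homeassistant ssh-copy-id root@192.168.1.2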
I had to use “bash /pathtoscript/script.sh”.
Without bash it wouldn't work.
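(In case it helps anyone else: I believe adding a #!/bin/bash shebang as the first line of the script and setting the execute bit lets you call it without the leading bash; same placeholder path as above.)

# make the script executable (it should start with #!/bin/bash)
chmod +x /pathtoscript/script.sh
# then it can be called directly
/pathtoscript/script.sh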
Your script worked great!
And I restarted the container; it was given a new ID and SSH with root still works, so I hope an update doesn't break it.
I really like Watchtower, it has worked flawlessly for me so far. That's why I wanted to keep Home Assistant on Docker: smooth updates and it runs great, apart from this workaround.
Thank you for the great help, I wouldn't ever have made it without you!
EDIT: I figured it out. I was trying to store the keys in /.ssh in the config directory, but I realized the host wasn't looking there, so it wouldn't persist over restarts. I changed to the default -v /root/.ssh directory and it seems to be working now.
I wanted to thank you guys for this discussion. I've recently migrated to Docker and used your discussion as a guide to setting up the SSH keys between the container and the host.
The commentary about persisting keys in a mounted volume was very helpful.
One question: on other real machines (as opposed to containers) I often find a known_hosts file generated after connection and the ‘accept keys?’ prompt. Are you seeing this file? From reading https://www.ssh.com/ssh/host-key it seems unnecessary since we are generating our own key pairs.
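(From what I've read, the prompt and the known_hosts handling can also be controlled on the client side with options like these; user and host are placeholders, and I believe accept-new needs a reasonably recent OpenSSH:)

# accept and record the host key automatically on first connection
ssh -o StrictHostKeyChecking=accept-new user@192.168.1.2 'uptime'
# or skip known_hosts entirely (less safe, but avoids stale entries)
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null user@192.168.1.2 'uptime'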
I wanted to circle back around to this topic after a recent rebuild.
I've found that upon starting a new container, I need to redo the key setup steps in the container to get the SSH keys copied again, and I wonder if there is a good way to automate this when the container comes up.
What I'm going to do is mount my container's /root/.ssh directory to where I have my working SSH key pairs and see if this eliminates any manual intervention. Otherwise, I'd have to shell-script the procedure every time the container comes up. I deleted the known_hosts file; it seems to work fine with that missing, and it eliminates that step.
Those steps are obviated by doing the mount, so far so good. So I added a volume mapping to my docker-compose, where the config directory is my backed-up Home Assistant configuration folder and in it there are SSH keys that will allow the container to SSH into the host. Note that permissions need to be 700 or more restrictive or SSH will reject them.
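Concretely, by “700 or more restrictive” I mean something like this on the host side of the mount (paths are placeholders for wherever the keys live in the config folder):

# the .ssh directory itself
chmod 700 /path/to/config/.ssh
# the private key must not be readable by anyone else
chmod 600 /path/to/config/.ssh/id_rsa
# the public key can stay world-readable
chmod 644 /path/to/config/.ssh/id_rsa.pub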
However, when I try this using commands defined in shell_commands I get a 255 error.
Checking on the docker.lan host I get:
Apr 21 14:10:13 docker sshd[3884119]: Failed password for derek from 192.168.1.14 port 41396 ssh2
Apr 21 14:10:13 docker sshd[3884119]: Failed password for derek from 192.168.1.14 port 41396 ssh2
Apr 21 14:10:13 docker sshd[3884119]: Connection closed by authenticating user derek 192.168.1.14 port 41396 [preauth]
When I try exactly the same command, copied and pasted from shell_commands and run in the HA container shell, it works. Any ideas?
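For context, the command behind the shell_command is shaped roughly like this (key path and the remote command are placeholders, not my exact config):

# works when pasted into the HA container shell, but exits 255 when run from shell_command
ssh -i /root/.ssh/id_rsa derek@docker.lan 'some-remote-command'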