I want to automate an sftp-based backup in an Alpine-based Docker container.
I've got a /bin/sh script that is supposed to check whether the connection can be established successfully.
Clients connect via SSH keys, so authentication is passwordless.
timeout -k 1 4 sshfs -p $port -o IdentityFile=/home/ssh/ssh_host_rsa_key,StrictHostKeyChecking=accept-new,_netdev,reconnect $user@$address:/ /mnt/sftp/
This line establishes the sftp connection. It works just fine when the key is correct, and even when the server refuses the connection. The problem occurs when the server doesn't accept the provided key: then it asks for a password in the interactive shell, like:
user123@backup.example.xyz's password:
and timeout just does not kill the process; the script doesn't go forward after this point since it waits for user input (which is never going to come).
I use this script at startup to check the connection and stop the container immediately if it fails, so that the user notices configuration errors right when they start the container.
Is there a way to kill this command after a certain time, or, as a workaround, to prohibit interactive input for the sshfs command?
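For example, would something like this work, assuming sshfs passes unrecognized -o options through to the underlying ssh (untested variant of the line above)?
# BatchMode=yes makes ssh fail instead of prompting for a password
timeout -k 1 4 sshfs -p $port -o BatchMode=yes,IdentityFile=/home/ssh/ssh_host_rsa_key,StrictHostKeyChecking=accept-new,_netdev,reconnect $user@$address:/ /mnt/sftp/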
Thanks!
Related
I am trying to test a client/server application running on a remote machine. The expected result is that when the client calls the service, the service node core dumps and a dump file is generated.
When I ssh onto the remote machine and run the command, it works and the core dump is generated.
When I run the same command with sshpass, however, my service doesn't crash and no core dump file is generated, but I get the same stdout output.
I am totally puzzled. Does sshpass differ from ssh in more than just password management?
I have a docker container that internally starts a server. (I don't own this. I am just reusing it)
Once the server starts, I am running some curl commands that hit this server.
I am running the above steps in a script. Here's the issue:
The docker container starts, but internally I think it takes some time to actually start the server inside it.
Before that server is up and running, the curl commands start executing and fail with an error that the server could not be found. If I manually run them a few seconds later, they work fine though.
Please let me know if there is a way to solve this. I don't think using ENTRYPOINT or CMD will work, for similar reasons.
Also, if that matters, the server I am using is kong.
thanks, Om.
The general answer to this is to perform some sort of health check; once you've verified that the server is healthy you can start making live requests to it. As you've noticed, the container existing or the server process running on its own isn't enough to guarantee that the container can handle requests.
A typical approach to this is to make some request to the server that will fail until the server is ready. You don't need to modify the container to do this. In some environments like Kubernetes, you can specify health checks or probes as part of the deployment configuration, but for a simple shell script, you can just run curl in a loop:
docker run -p 8080:8080 -d ...
RUNNING=false
for i in $(seq 30); do
  # Try GET / and see if it succeeds
  if curl -s http://localhost:8080/
  then
    echo Server is running
    RUNNING=true
    break
  else
    echo Server not running, waiting
    sleep 1
  fi
done
if [ "$RUNNING" = false ]; then
  echo Server did not start within 30s
  # docker stop ... && docker rm ...
  exit 1
fi
If you just need to know the port is up, this simple script is very handy:
https://github.com/vishnubob/wait-for-it
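If you'd rather not pull in an extra script, a rough equivalent is a small loop around nc (this assumes a netcat that supports -z is installed; host and port are taken from the example above):
# Wait up to 30 seconds for something to accept connections on port 8080
for i in $(seq 30); do
  if nc -z localhost 8080; then
    echo Port is open
    break
  fi
  sleep 1
done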
Background:
I am running a Google Compute Engine VM, called host.
There is a Docker container running on the machine called container.
I connect to the VM using an account called user@gmail.com.
I need to connect through ssh from the container to the host, without being prompted for the user password.
Problem:
Minutes after successfully connecting from the container to the host, the user/.ssh/authorized_keys file gets "modified" by some process from Google itself. As far as I understood, this process appends some ssh keys needed to connect to the VM. In my case, though, the process seems to overwrite the key that I generated from the container.
Setup:
I connect to host using Google Compute Engine GUI, pressing on the SSH button.
Then I follow the steps described in this answer on AskUbuntu.
I set the password for user on host:
user@host:~$ sudo passwd user
I set PasswordAuthentication to yes in sshd_config, and I restart sshd:
user@host:~$ sudo nano /etc/ssh/sshd_config
user@host:~$ sudo systemctl restart sshd
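(For reference, the relevant directive in /etc/ssh/sshd_config then reads:)
PasswordAuthentication yes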
I enter the Docker container using bash, generate the key, and copy it to the host:
user@host:~$ docker exec -it container /bin/bash
(base) root@container-id:# ssh-keygen
(base) root@container-id:# ssh-copy-id user@host
The key is successfully copied to the host, the host is added to the known_hosts file, and I am able to connect from the container to the host without being prompted for the password (as I gave it during the ssh-copy-id execution).
Now, if I detach from the host, let some time pass, and attach again, I find that the user/.ssh/authorized_keys file contains some keys generated by Google, but there is no trace of my key (the one that allows the container to connect to the host).
What puzzles me more than anything is that we have consistently used this process before and never had such a problem. Some accounts on this same host still have keys from containers that no longer exist!
Does anyone have any idea about this behavior? Do you know of any solution that lets me keep the key for as long as it is needed?
It looks like the accounts daemon is doing this task. You could refer to this discussion thread for more details.
You might find the OS Login API an easier management option. Once enabled, you can use a single gcloud command or API call to add SSH keys.
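As a rough sketch (check the current gcloud reference for exact flags), adding a key via OS Login can look something like this; the key path is just a placeholder:
# Associate a public key with your account instead of with instance metadata
gcloud compute os-login ssh-keys add --key-file=/path/to/key.pub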
In case anyone has trouble with this even AFTER adding SSH keys to the GCE metadata:
Make sure your username is in the SSH key description section!
For example, if your SSH key is
ssh-rsa AAAA...zzzz
and your login is ubuntu, make sure you actually enter
ssh-rsa AAAA...zzzz ubuntu
since it appears Google copies the key to the authorized_keys of the user specified inside the key.
In case anyone is still looking for a solution to this, I solved the issue by storing the SSH keys in the Compute Engine metadata: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
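A hedged sketch of what that can look like with gcloud (instance name, username, and key file here are placeholders; see the linked docs for the exact metadata format):
# keys.txt contains lines of the form: USERNAME:ssh-rsa AAAA... USERNAME
gcloud compute instances add-metadata my-instance --metadata-from-file ssh-keys=keys.txt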
I want a server to transparently forward an incoming ssh connection from a client to a docker container. This should include scp, git transport and so forth. It must work with keys; passwords are deactivated. The user should not see the server. Update: Yes, this really means that the user shall be unaware that there is a server. The configuration must take place entirely on the server!
client -----> server -----> container (actual connection)
client -------------------> container (what the user should see)
So, what is given is this:
user@client$ ssh user@server
user@server$ ssh -p 42 user@localhost
user@container$
But what I want is this:
user@client$ ssh user@server
user@container$
I tried using the command="ssh -p 42 user@localhost" syntax in the authorized_keys file, which kind of works, except that for the second ssh connection the user has to enter their password, as the authentication is not passed on (the server doesn't have the user's private key).
Furthermore, this approach doesn't work with scp, even if one enters a password.
I also heard about the tunnel= option, but I don't know how to set that up (and the manpage is less than helpful).
I am using OpenSSH 7.5p1 on Arch.
Put this in your ~/.ssh/config file:
Host server-container
ProxyCommand ssh server -W localhost:42
Then simply do:
ssh server-container
This works as long as your usernames are consistent. If not, you can specify them like this:
Host server-container
ProxyCommand ssh server-user@server -W localhost:42
Then simply do:
ssh container-user@server-container
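On OpenSSH 7.3 and later (so including 7.5p1), a ProxyJump-based variant of the same client-side idea might look like this; the host alias and port are taken from the question, and this is only a sketch:
Host server-container
HostName localhost
Port 42
ProxyJump server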
Just as a bonus, you can avoid using ssh to enter the container by using docker exec instead, like this:
ssh -t server docker exec -it <container-id> bash
This is the solution I have come up with for now. I'm a bit unhappy with the second key, as its public part will be visible in the container's ~/.ssh/authorized_keys, which very slightly breaks transparency, but other than that everything seems to work.
user@server$ cat .ssh/authorized_keys
command="ssh -q -p 42 user@localhost -- \"$SSH_ORIGINAL_COMMAND\"",no-X11-forwarding ssh-rsa <KEYSTRING_1>
user@server$ cat .ssh/id_rsa.pub
<KEYSTRING_2>
user@container$ cat .ssh/authorized_keys
ssh-rsa <KEYSTRING_2>
The client authenticates against the server with their private key. Then the server jumps to the container with a dedicated key that exists only for that particular authentication. I'm a bit worried that one could break out of the command= restriction by injecting commands, but so far I have found no permutation that allows breaking out.
Due to passing $SSH_ORIGINAL_COMMAND, you can even do scp and ssh-copy-id and so forth.
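For example, a plain scp against the server should end up writing into the container (the file name and path here are only illustrative):
# The forced command forwards the scp request, so this lands in the container's /tmp
scp somefile user@server:/tmp/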
Note: To disallow ssh-copy-id, which I want for other reasons, simply make authorized_keys non-writable for the user inside the container.
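(One way to do that, run inside the container; note that root can of course still edit the file:)
chmod a-w ~/.ssh/authorized_keys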
I'm using the default HAProxy Docker image from https://github.com/dockerfile/haproxy
Unfortunately I can't get it to reload my config properly.
If I run
$ sudo docker exec haprox haproxy -f /etc/haproxy/haproxy.cfg -p '$(</var/run/haproxy.pid)' -st '$(</var/run/haproxy.pid)'
it just dumps out the help file. If I run
$ sudo docker exec haprox 'haproxy -f /etc/haproxy/haproxy.cfg -p $(</var/run/haproxy.pid) -st $(</var/run/haproxy.pid)'
I get
2014/12/30 00:03:23 docker-exec: failed to exec: exec: "haproxy -f /etc/haproxy/haproxy.cfg -p $(</var/run/haproxy.pid) -st $(</var/run/haproxy.pid)": stat haproxy -f /etc/haproxy/haproxy.cfg -p $(</var/run/haproxy.pid) -st $(</var/run/haproxy.pid): no such file or directory
Boo. None of those things are what I want. I can run docker exec haprox service haproxy reload - but this ends up spawning several haproxy processes, so when I connect via the unix socket I get one set of information from show stat, but an entirely different set of information from the HTTP stats page.
I'm trying to set things up so that I can do graceful redeploys of our legacy software, but it does very, very bad things with Tomcat sessions, so my only option is to keep existing sessions alive and hitting the same server.
backend legacy
    cookie SERVERID insert indirect preserve
    server A 123.0.0.123:8080 cookie A check
    server B 123.0.0.123:8080 cookie B check
does the trick. I can call up the socket and run set weight legacy/A 0 and it will drain connections from server A.
But (remember that legacy part?) I have to bop my server A/B containers on the head and bring up new ones. I've got my system set up so that it generates the new config just fine, but when I reload... strange things happen.
As mentioned earlier, it ends up spawning several haproxy processes. I get different information from the stats page and from the unix socket. It also appears that the pid of the process I'm communicating with in the browser vs. via socat is different.
Worst of all, though, is that it will fail some HTTP connections with a 503, and testing with ab reports some dropped connections. This part is not OK.
Old sessions MUST continue to function until the old server goes down/cookies are cleared. It seems like the rest of the Internet is able to do what I'm trying to do... What am I doing wrong here?
You can now reload the config:
docker kill -s HUP haproxy_container_name
More info: https://hub.docker.com/_/haproxy
I know this is an old question and this doesn't help after 6 years! :) But maybe it is useful for someone!
If you run ps inside the container as follows, you will see that the container you linked runs haproxy as PID 1, which cannot be killed without killing the container, and it also runs in the foreground, so without a pid file. If you want to reload, run haproxy in the background in your container and make some other process, such as supervisor, the primary process.
docker exec -it haproxy ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
haproxy 1 0.0 0.2 28988 4576 ? Ss 02:41 0:00 haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
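For reference, the canonical graceful-reload invocation the question was aiming for looks roughly like the line below; wrapping it in sh -c is what lets $(cat ...) expand inside the container. This is only a sketch and assumes the running haproxy actually wrote /var/run/haproxy.pid (container name and paths are from the question):
# -sf hands the sockets over and lets the old process finish its existing connections
docker exec haprox sh -c 'haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)'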