I have a remote docker container that I access over SSH. I start the container normally with the docker start command.
sudo docker start 1db52045d674
sudo docker exec -it 1db52045d674 bash
This starts an interactive tty in the container, which I access over ssh.
I'd like the container to kill itself if I close the SSH connection. Is there any way to do this?
.bash_logout is executed every time you use the exit command to end a terminal session.
So you can use this file to run the docker stop command when you exit the ssh connection on the remote server.
Create the ~/.bash_logout file if it does not already exist.
Add the following command to this file to stop the docker container.
Example :
docker stop container_name
Note: If a user closes the terminal window instead of writing the exit command, this file is not executed.
I was hoping for a more elegant solution, but in the end I launched a bash script over ssh to trap SIGHUP.
Something like:
trap 'docker stop CONTAINER_NAME' SIGHUP;
while sleep 5;
do echo "foo";
done;
so when the operator closes the SSH connection, the trap gets triggered and docker nicely stops the container.
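For reference, a rough sketch of launching that watchdog in one step over ssh (the container name and ssh target are placeholders); with a forced pty, closing the connection delivers SIGHUP to the remote shell, the trap fires, and the container is stopped:
ssh -t user@remote-host \
  "trap 'docker stop CONTAINER_NAME' SIGHUP; while sleep 5; do echo foo; done"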
You can use the --init flag. This way, an init process (docker-init) runs as PID 1 in your container and you can send a kill signal to it: https://docs.docker.com/engine/reference/run/#specify-an-init-process
1. Start the server:
docker run --init \
-p 2222:2222 \
-e USER_NAME=user \
-e USER_PASSWORD=pass \
-e PASSWORD_ACCESS=true \
-e SUDO_ACCESS=true \
linuxserver/openssh-server
Just note the --init and -e SUDO_ACCESS=true parameters here.
2. In another (client) shell, ssh into the container:
$ ssh user@127.0.0.1 -p 2222 -oStrictHostKeyChecking=accept-new
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
user@127.0.0.1's password:
Welcome to OpenSSH Server
3. Send a kill signal to PID 1 (docker-init):
$ sudo kill -s SIGINT 1
[sudo] password for user:
$ Connection to 127.0.0.1 closed by remote host.
Connection to 127.0.0.1 closed.
Container is gone.
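If you prefer to trigger this from the host rather than from inside the SSH session, the same signal can be sent with docker kill (the container name is a placeholder):
docker kill --signal=SIGINT <container-name-or-id>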
I hope this helps.
I want to run this command in docker.
./kafka_2.12-2.2.1/bin/connect-distributed.sh ./kafka_2.12-2.2.1/config/connectdistributed.properties
This command starts Kafka Connect and connects it to my Azure Event Hub.
After Kafka Connect is up, I then want to run this script file in docker:
./script.sh
This script checks whether Kafka is connected successfully; if not, it waits for the connection. Once connected, it checks whether a connector named XYZ exists and creates it if it does not; otherwise the script terminates. I am able to do all of this on my local machine, but not in docker.
I needed to do something similar: wait for the db to initialize before the server starts, otherwise it would fail/exit before the whole compose process was done. I added an entrypoint script to the server with something like the one below.
I imagine there is a way to call your service and send a command that quits immediately. You might have to install the basic tools on that other image.
So, I have a db image, and I need to install the basic db clients (psql, sqlcmd) on the server image to be able to contact my db container and ensure it is running.
host="$1"
shift
cmd="$#"
if [ "${PROVIDER}" == "postgres" ]; then
echo -e "Running under Postgres"
until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "$POSTGRES_USER" "$POSTGRES_DB" -c '\q'; do
>&2 echo -e "Postgres is unavailable - sleeping"
sleep 2
done
# https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-docker?view=sql-server-ver15
# The IP address in the connection string is the IP address of the host machine that is running the container.
else
echo -e "Running under Sql Server"
# sqlcmd -?
# sqlcmd -S db.mssql -U rust -P -d rust_test -q "Select * from rust.content_categories"
# sqlcmd -S 192.168.1.108 -U test -P -d shortpoetdb -q "Select * from vcc.admin_users"
until /opt/mssql-tools/bin/sqlcmd -S db.mssql -U "${MSSQL_USER}" -P "${MSSQL_PASSWORD}" -d "${MSSQL_DB}" -q ":exit"; do
>&2 echo -e "$Mssql is $unavailable - sleeping"
sleep 2
done
fi
>&2 echo -e "${PROVIDER} Database is up - executing command"
exec $cmd
docker docs wait
ms docs sql server linux setup
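For reference, a minimal sketch of how such a wrapper might be invoked as the server container's entrypoint (the script name wait-for-db.sh, the db host name and the node server.js command are placeholders; PROVIDER and the POSTGRES_*/MSSQL_* variables are expected in the environment):
PROVIDER=postgres ./wait-for-db.sh db.postgres node server.js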
> I want to run this command in docker: ./kafka_2.12-2.2.1/bin/connect-distributed.sh
The confluentinc/cp-kafka-connect images already run this.
You should not run Connect and brokers on the same machines, or run multiple processes in a single container.
Run your script as a background process that keeps checking whether Kafka is connected. Once connected, it runs its commands and exits, while your main command keeps running, so docker won't exit.
E.g.
CMD nohup bash -c "script.sh &" && your-command
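If it helps, here is a rough sketch of what script.sh could look like, polling the Kafka Connect REST API until it answers and creating the connector only if it is missing (the port 8083, the connector name XYZ and the config file xyz-connector.json are assumptions):
#!/bin/bash
# wait until the Connect REST API responds
until curl -sf http://localhost:8083/connectors > /dev/null; do
  echo "Kafka Connect not up yet - sleeping"
  sleep 5
done
# create the XYZ connector only if it does not exist yet
if ! curl -s http://localhost:8083/connectors | grep -q '"XYZ"'; then
  curl -s -X POST -H "Content-Type: application/json" \
    --data @xyz-connector.json http://localhost:8083/connectors
fi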
I need to know if another container has been restarted, so I can run some command in my container. Is there a way to be aware of a container restarting from within another container?
You didn't give any specifics, so I assume both containers are running on the same host. In this case a simple solution is to get the restart event from the docker daemon running on the host and then send a signal to the other container. docker events can easily do that.
Run this on the host, using, for example, the docker-compose service name to filter the events notified for the restarting container:
docker events | \
grep --line-buffered 'container restart.*com.docker.compose.service=<compose_service_name>' | \
while read ; do docker kill --signal=SIGUSR1 my_ubuntu ; done
The docker kill --signal=SIGUSR1 my_ubuntu sends the USR1 signal to the other container, where the command needs to be run. To test it, run ubuntu with a trap for USR1:
docker run --rm --name my_ubuntu -it ubuntu /bin/bash \
-c "trap 'echo signal received' USR1; \
while :; do echo loop; sleep 10 & wait \${!}; done;"
Now restart the container and the signal handler will execute echo inside the other container. It can be replaced with the real command.
The docker events are part of the Docker Engine REST API (see Monitor Docker's events), so if the other container can connect to the docker daemon running on the host, it can get the restart notification directly.
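For example, if the host's docker socket is mounted into the container (-v /var/run/docker.sock:/var/run/docker.sock), the container can stream restart events straight from the API; the filter below is just the URL-encoded form of {"event":["restart"]}:
curl -s --unix-socket /var/run/docker.sock \
  "http://localhost/events?filters=%7B%22event%22%3A%5B%22restart%22%5D%7D"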
Hope it helps.
Request
This docker command injects my ssh key and username into a container, connects that container to a remote host, and then runs: echo hello world on the host:
docker run --rm \
-e "host=the.host" \
-e "user=my.user.name" \
-v "/home/me/.ssh/id_rsa:/key" \
ubuntu:18.04 /bin/bash -c \
'apt update && apt install -y ssh \
&& ssh -o StrictHostKeyChecking=no $user#$host -i /key
echo hello world'
I want the command to be able to connect to the remote host, but I don't want it to be able to cat /key and see my ssh key.
What changes can I make to achieve this?
Context
I'm writing a test runner. My code is responsible for determining which tests can be run against which hosts, but the test itself might not be written by me (it gets pulled in from a git repo when my test runner starts up).
I am not worried about my colleagues abusing the server with their test code because that abuse would be visible in source control. They are semi-trusted in this case. I am worried about somebody writing a test which causes my ssh key to appear in log output somewhere.
Ideally, I would set up the ssh connection first, then create the container--somehow granting it access to the connection, but not the key.
The feature I needed was SSH multiplexing, which I learned about here.
This file goes in the docker image at ~/.ssh/config:
Host foo
ControlMaster auto
ControlPath ~/.ssh/cm_socket/%r@%h:%p
And this file goes on the host:
Host foo
HostName foo.bar.com
User my.username
IdentityFile /path/to/key
IdentitiesOnly yes
ControlMaster auto
ControlPath ~/.ssh/cm_socket/%r@%h:%p
I called the image keylesssh, so this command creates a container which doesn't have the key, but does have the folder which will contain a socket if there is an existing connection.
docker run --rm \
-v "/home/matt/.ssh/cm_socket:/root/.ssh/cm_socket" \
keylesssh /bin/bash -c 'ssh dev1 hostname'
It comes together like this:
I open an ssh connection from the machine that is hosting the docker daemon to the remote host
SSH creates a socket and puts it in ~/.ssh/cm_socket with a predictable name
I create a container, sharing that folder with it
The container tries to ssh to the host, notices that the socket exists, and uses the existing connection without authenticating, so no key is required
Once the test has finished running, the container shuts down
When my code notices the container shutting down, I kill the master ssh connection
I know it works because the hostname command resolves the hostname of the remote server, not of the container or of the docker host.
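Putting it together on the docker host, the cycle looks roughly like this (the host alias must match a Host entry with the same ControlPath in both config files above; -f -N and -O exit are standard OpenSSH options):
mkdir -p ~/.ssh/cm_socket
ssh -f -N foo        # open the master connection and keep it in the background
docker run --rm \
  -v "$HOME/.ssh/cm_socket:/root/.ssh/cm_socket" \
  keylesssh /bin/bash -c 'ssh foo hostname'
ssh -O exit foo      # tear the master connection down once the container is done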
According to the information on Docker Hub (https://hub.docker.com/r/voltdb/voltdb-community/) I was able to start the three nodes after adding the node names to my /etc/hosts file. Commands I executed:
docker pull voltdb/voltdb-community:latest
docker network create -d bridge voltLocalCluster
docker run -d -P -e HOST_COUNT=3 -e HOSTS=node1,node2,node3 --name=node1 --network=voltLocalCluster voltdb/voltdb-community:latest
docker run -d -P -e HOST_COUNT=3 -e HOSTS=node1,node2,node3 --name=node2 --network=voltLocalCluster voltdb/voltdb-community:latest
docker run -d -P -e HOST_COUNT=3 -e HOSTS=node1,node2,node3 --name=node3 --network=voltLocalCluster voltdb/voltdb-community:latest
docker exec -it node1 bash
sqlcmd
> Output:
Unable to connect to VoltDB cluster
localhost:21212 - Connection refused
According to the log files, VoltDB has started and is running normally.
Does anyone have an idea why the connection is refused?
You have to follow the given example and fix your HOSTS argument.
It should be HOSTS=node1,node2,node3, so that your service knows about all the nodes in the cluster.
There might be a bug in docker-entrypoint.sh that I don't see yet, because I shouldn't need to connect into the container and run these commands manually, but doing so solved my issue:
docker exec -it node1 bash
voltdb init
voltdb start
I created a fresh Digital Ocean server with Docker on it (using Laradock) and got my Laravel website working well.
Now I want to automate my deployments using Deployer.
I think my only problem is that I can't get Deployer to run docker exec -it $(docker-compose ps -q php-fpm) bash;, which is the command I successfully use manually to enter the appropriate Docker container (after using SSH to connect from my local machine to the Digital Ocean server).
When Deployer tries to run it, I get this error message:
➤ Executing task execphpfpm
[1.5.6.6] > cd /root/laradock && (pwd;)
[1.5.6.6] < /root/laradock
[1.5.6.6] > cd /root/laradock && (docker exec -it $(docker-compose ps -q php-fpm) bash;)
[1.5.6.6] < the input device is not a TTY
➤ Executing task deploy:failed
• done on [1.5.6.6]
✔ Ok [3ms]
➤ Executing task deploy:unlock
[1.5.6.6] > rm -f ~/daily/.dep/deploy.lock
• done on [1.5.6.6]
✔ Ok [188ms]
In Client.php line 99:
[Deployer\Exception\RuntimeException (1)]
The command "cd /root/laradock && (docker exec -it $(docker-compose ps -q php-fpm) bash;)" failed.
Exit Code: 1 (General error)
Host Name: 1.5.6.6
================
the input device is not a TTY
Here are the relevant parts of my deploy.php:
host('1.5.6.6')
->user('root')
->identityFile('~/.ssh/id_rsa2018-07-09')
->forwardAgent(true)
->stage('production')
->set('deploy_path', '~/{{application}}');
before('deploy:prepare', 'execphpfpm');
task('execphpfpm', function () {
cd('/root/laradock');
run('pwd;');
run('docker exec -it $(docker-compose ps -q php-fpm) bash;');
run('pwd');
});
I've already spent a day and a half reading countless articles and trying so many different variations. E.g. replacing the -it flag with -i, or setting export COMPOSE_INTERACTIVE_NO_CLI=1 or replacing the whole docker exec command with docker-compose exec php-fpm bash;.
I expect that I'm missing something fairly simple. Docker is widely used, and Deployer seems popular too.
To use Laravel Deployer you should connect via ssh directly to the workspace container.
You can expose the container's ssh port:
https://laradock.io/documentation/#access-workspace-via-ssh
Let's say you've forwarded the container's ssh port 22 to vm port 2222. In that case you need to configure Deployer to use port 2222.
Also remember to set proper secure SSH keys, not the default ones.
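A quick way to verify that part before involving Deployer is to ssh to the forwarded port directly from your machine (the user name and port here are assumptions, adjust them to your Laradock setup):
ssh -p 2222 laradock@your.server.ip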
You should try
docker-compose exec -T php-fpm bash;
The -T option will disable pseudo-tty allocation (by default docker-compose exec allocates a TTY).
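In other words, since a Deployer run has no TTY anyway, pass the concrete command you need instead of opening an interactive bash shell; something along these lines can then go inside the Deployer task (php -v is just a placeholder command):
docker-compose exec -T php-fpm php -v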
In my particular case I had separate containers for PHP and Composer. That is why I could not connect to the container via SSH while deploying.
So I configured the bin/php and bin/composer parameters like this:
set('bin/php', 'docker exec php php');
set('bin/composer', 'docker run --volume={{release_path}}:/app composer');
Notice that here we use exec for the persistent php container, which is already running at the moment, and run to start a new instance of the composer container, which will stop after installing dependencies.