Docker SSH or Detach/Attach

I have a Docker image with all the necessary tools and the environment properly set up. However, I am having a hard time running it in the background.
There seem to be two approaches:
(1) Run the box as a daemon and attach to it whenever I want to use it. However, the container exits with code zero right after I run it as a daemon.
$:~/docker/docker_scrapy$ sudo docker run -ti -v ~/docker/docker_scrapy/myvolume:/var/myvolume 3fb9894af1d9 /bin/bash
root@3fc39116a586:/# python -c 'from bs4 import BeautifulSoup'
root@3fc39116a586:/# cd /var/myvolume/
root@3fc39116a586:/var/myvolume#
$:~/docker/docker_scrapy$ sudo docker run -d -v ~/docker/docker_scrapy/myvolume:/var/myvolume 3fb9894af1d9
c5fab6e6ac02a579e3371aa641b18ca67feb93a9f4f4934b6d083157182fe4e1
$:~/docker/docker_scrapy$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Clearly, I can start the box in interactive mode, but when I try to run it as a daemon, it exits with code 0 right after it starts. And I cannot attach to it, because it has to be running first. Does that mean you cannot run an image in daemon mode if it is idle?
(2) Or set it up as an SSH server, so I can SSH in and do the work whenever I want, like Vagrant up/ssh.
In summary:
(1) What did I do wrong with the detach/attach approach?
(2) Which is the proper way to run Docker in the background: daemon or SSH?

If you give the container another command to run after starting the service, one that waits for input, then the container will keep running until you attach and exit that command. I usually leave a shell running after the service starts so I can debug things. Here's a simple example:
First, let's create a service that runs in the background:
arthur@a:~$ docker run -ti ubuntu bash
root@5dc7f330b947:/# cat <<'EOF' >start-service.sh
> while true
> do
> echo service is running >> service.log
> sleep 10
> done
> EOF
root@5dc7f330b947:/# chmod +x start-service.sh
root@5dc7f330b947:/# exit
arthur@a:~$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5dc7f330b947 ubuntu:12.04 bash 50 seconds ago Exited (0) 3 seconds ago jolly_nobel
arthur@a:~$ docker commit 5dc7f330b947 service/example
4c37b69b129287d79a6fe3916e4293f935194966b1de49d125f1cf8d6ab14f6f
Then we can start it (I background the script with & here; in your example the & would not be required). Note that it's fine to use both the interactive and detach options.
arthur@a:~$ docker run -ti -d service/example bash -c "./start-service.sh & bash"
b35a5397ea2d29b4085d93ef32270379b09e49118380b0376309bca74fd719d0
arthur@a:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b35a5397ea2d service/example:latest bash -c './start-ser 7 seconds ago Up 7 seconds cranky_wright
Later we can attach and check on the service by looking at its log file:
arthur@a:~$ docker attach b35a5397ea2d
root@b35a5397ea2d:/# cat service.log
service is running
service is running
service is running
root@b35a5397ea2d:/#
I don't recommend running sshd inside the container because it leaves another opening for attackers and isn't strictly useful for me.

A lot of questions in there. I would first suggest you go through the Docker tutorial to grasp some of the underlying concepts. That said...
That Dockerfile will never run in the background; that's not how Docker works. There is no CMD, no ENTRYPOINT, nothing to run.
Docker by default runs one task; when that returns, the container stops. So if all you wanted was sshd, you would run that as your CMD in non-daemon mode (sshd -D).
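As an illustration only, here is a minimal sketch loosely based on the sshd example in the Docker documentation; the ubuntu:14.04 base and the root password are placeholders you would replace:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir -p /var/run/sshd
# placeholder credentials for testing only - replace with your own keys/users
RUN echo 'root:changeme' | chpasswd
EXPOSE 22
# run sshd in the foreground (non-daemon mode) so the container keeps running
CMD ["/usr/sbin/sshd", "-D"]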
There are ways to run daemonized apps though:
Using supervisord, as documented on the Docker site (see the sketch after this list).
Another alternative is phusion/baseimage.
Phusion/baseimage provides ssh access, but honestly to do what I need in containers I find nsenter easier to use. Especially when paired with the phusion docker-bash tool.
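Here is a rough sketch of the supervisord approach mentioned above, assuming openssh-server and supervisor are installed in the image and that a hypothetical myapp binary is the real workload:
# /etc/supervisor/conf.d/supervisord.conf
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:myapp]
command=/usr/local/bin/myapp
The image's CMD would then be something like CMD ["/usr/bin/supervisord"], so supervisord stays in the foreground and keeps both programs running.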

Notice: this answer promotes a tool I've written.
First of all, conceptually, running multiple processes in one container is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/). A more favorable solution is one that involves multiple containers, each running its own process/service. Linking them together results in a coherent application.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container, without that container even knowing about ssh. The only requirement is that the container has bash.
The following example starts an SSH server attached to the container named 'web-server1'.
docker run -ti --name sshd-web-server1 -e CONTAINER=web-server1 -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker \
jeroenpeeters/docker-ssh
You connect to the SSH server with your ssh client of choice, just as you normally would.
Be advised: Docker-SSH is currently still under development, but it does work! Please let me know what you think.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh

Related

How to see the docker state instead of interacting with the shell?

I use the following command to run a web server:
docker run --name webapp -p 8080:4000 mypyweb
When it has stopped and I want to restart it, I always use:
sudo docker start webapp && sudo docker exec -it webapp bash
But I can't see the server output the way I did the first time:
Digest: sha256:e61b45be29f72fb119ec9f10ca660c3c54c6748cb0e02a412119fae3c8364ecd
Status: Downloaded newer image for ericgoebelbecker/stackify-tutorial:1.00
* Running on http://0.0.0.0:4000/ (Press CTRL+C to quit)
How can I see the state instead of interacting with the shell?
When you run a container with docker run -d, or restart one with docker start, it runs detached: in the background, disconnected from your shell's stdin/out.
To run the container in the foreground and connected to stdin/out:
docker run --interactive --tty --publish=8080:4000 mypyweb
To docker start a container, similarly:
docker start --interactive --attach [CONTAINER]
NB: --attach rather than --tty (docker start has no --tty flag).
You may list running containers (add --all to include stopped ones):
docker container ls
E.g. I ran Nginx:
CONTAINER ID IMAGE PORTS NAMES
7cc4b4e1cfd6 nginx 0.0.0.0:8888->80/tcp nostalgic_thompson
NB: You may use the NAME or any uniquely identifiable prefix of the ID to reference the container.
Then:
docker stop nostalgic_thompson
docker start --interactive --attach 7cc4
You may check the container's logs (when running detached or from another shell) by grabbing the container's ID or NAME:
docker logs nostalgic_thompson
docker logs 7cc4
HTH!
Using docker exec causes the shell to attach to the container. If you are comparing the behavior of docker run versus docker start, they behave differently, and it is confusing. Try this:
$ sudo docker start -a webapp
The -a flag tells Docker to attach stdout/stderr and forward signals.
There are some other switches you can use with the start command (and a huge number for the run command). You can run docker [command] --help to get a summary of the options.
One other command that you might want to use is logs which will show the console output logs for a running container:
$ docker ps
[find the container ID]
$ docker logs [container ID]
If you think your container's misbehaving, it's often not wrong to just delete it and create a new one.
docker rm webapp
docker run --name webapp -p 8080:4000 mypyweb
Containers occasionally have more involved startup sequences and these can assume they're generally starting from a clean slate. It should also be extremely routine to delete and recreate a container; it's required for some basic tasks like upgrading the image underneath a container to a newer version or changing published ports or environment variables.
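As a sketch of that recreate workflow, using the names from the question (the docker pull step assumes mypyweb comes from a registry; skip it if you build the image locally):
docker pull mypyweb                                # pick up the newer image, if any
docker stop webapp
docker rm webapp
docker run -d --name webapp -p 8080:4000 mypyweb   # recreate from the current image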
docker exec probably shouldn't be part of your core workflow, any more than you'd open a shell to interact with your Web browser. I generally don't tend to docker stop containers, except to immediately docker rm them.

running a docker container with systemd

I have a service definition that starts a docker container using systemd on CentOS 7:
[Unit]
Description=MappingService
After=portal.service
Requires=docker.service
[Service]
TimeoutStartSec=3000
Type=forking
WorkingDirectory=/home/user/Downloads/MS_0.3.4_artifact
ExecStartPre=-/bin/docker rm -f eb-mapping-service-container
ExecStartPre=/home/user/Downloads/MS_0.3.4_artifact/deploy.sh /home/user/Downloads/MS_0.3.4_artifact/eb-mapping-service.tgz
ExecStartPre=/bin/docker run -v /dev/log:/dev/log -d -ti --log-driver=journald --network=bridge -p 9090:9090 --name eb-mapping-service-container eb-mapping-service /bin/bash -c "cd /build/MappingService; ./start_multiple_clients_mapping_service.sh"
ExecStart=/bin/docker start -a eb-mapping-service-container
ExecStop=/bin/docker stop eb-mapping-service-container
[Install]
WantedBy=multi-user.target
This service works. The Docker container it launches is up.
Whenever I boot the computer, it is running. My problem with this service is that it never reaches the Active (running) status. Instead, it is stuck in the Activating (start) status.
The start_multiple_clients_mapping_service.sh script starts a Node.js server that listens for connections, so it isn't exiting right away.
I've searched everywhere and scoured Docker's documentation about this and couldn't find an answer.
Also, if I remove the '-a' from the docker start command, then the status will be Inactive(dead) even though the container will be up and running.
Update:
After a while (I don't have an exact number), the service fails with a timeout. This isn't after 3000 seconds, but much earlier. Although the service has failed, the container is still up and can be used.
I've verified this with docker container ls.
Question:
How do I change my service definition so that it reflects the Active (running) status for the container?
I figured out the problem. There were a couple of things wrong:
the docker run command should not be used with the -d and -ti flags.
the Type should be set to exec instead of forking.
After making these two changes, I got the much sought-after Active (running) status, with the container successfully launched.
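For reference, a sketch of the [Service] section after those two changes. Without -d the docker run command stays in the foreground, so it moves into ExecStart and the separate docker start line is no longer needed (Type=exec requires a fairly recent systemd; Type=simple is the closest equivalent on older versions):
[Service]
TimeoutStartSec=3000
Type=exec
WorkingDirectory=/home/user/Downloads/MS_0.3.4_artifact
ExecStartPre=-/bin/docker rm -f eb-mapping-service-container
ExecStartPre=/home/user/Downloads/MS_0.3.4_artifact/deploy.sh /home/user/Downloads/MS_0.3.4_artifact/eb-mapping-service.tgz
ExecStart=/bin/docker run -v /dev/log:/dev/log --log-driver=journald --network=bridge -p 9090:9090 --name eb-mapping-service-container eb-mapping-service /bin/bash -c "cd /build/MappingService; ./start_multiple_clients_mapping_service.sh"
ExecStop=/bin/docker stop eb-mapping-service-container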

How to wait until `docker start` is finished?

When I run docker start, it seems the container might not be fully started by the time the docker start command returns. Is that so?
Is there a way to wait for the container to be fully started before the command returns? Thanks.
A common technique to make sure a container is fully started (i.e. services running, ports open, etc.) is to wait until a specific string is logged. See this example, Waiting until Docker containers are initialized, dealing with PostgreSQL and Rails.
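A minimal sketch of that log-watching approach, assuming a container named db whose service prints a line containing "ready to accept connections" once it is up:
docker start db
# poll the container logs until the readiness message shows up
until docker logs db 2>&1 | grep -q "ready to accept connections"; do
    sleep 1
done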
Edited:
There could be another solution using the HEALTHCHECK feature of Docker containers. The idea is to configure the container with a health check command that is used to determine whether or not the main service is fully started and running normally.
The specified command runs inside the container and sets the health status to starting, healthy or unhealthy depending on its exit code (0 - container healthy, 1 - container not healthy). The status of the container can then be retrieved on the host by inspecting the running instance (docker inspect).
Health check options can be configured in the Dockerfile or when the container is run. Here is a simple example for PostgreSQL:
docker run --name postgres --detach \
--health-cmd='pg_isready -U postgres' \
--health-interval='5s' \
--health-timeout='5s' \
--health-start-period='20s' \
postgres:latest && \
until docker inspect --format "{{json .State.Health.Status }}" postgres| \
grep -m 1 "healthy"; do sleep 1 ; done
In this case the health command is pg_isready. A web service will typically use curl; other containers have their own specific commands.
The Docker community provides this kind of configuration for several of the official images.
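The same check can also be baked into the image instead of being passed to docker run; a sketch of the Dockerfile form for the PostgreSQL example above:
FROM postgres:latest
# equivalent of the --health-* flags used above, built into the image
HEALTHCHECK --interval=5s --timeout=5s --start-period=20s \
  CMD pg_isready -U postgres || exit 1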
Now, when we restart the container (docker start), it is already configured and we only need the second part:
docker start postgres && \
until docker inspect --format "{{json .State.Health.Status }}" postgres|\
grep -m 1 "healthy"; do sleep 1 ; done
The command will return when the container is marked as healthy.
Hope that helps.
Disclaimer: I'm not an expert in Docker, and I would be glad to learn whether a better solution exists.
Docker itself doesn't really know that a container "may not be fully started".
So, unfortunately, there is nothing in Docker to handle this for you.
Usually, the commands used by the creator of the Docker image (in the Dockerfile) are supposed to be organized in such a way that the container is usable once the docker start command returns, and that is the best way. However, it's not always the case.
Here is an example:
LocalStack, which is a set of services for local development with AWS, has a Docker image, but once it's started the S3 port, for example, is not yet ready to accept connections.
From what I understand, an exposed-but-not-yet-ready port is the typical situation you are referring to.
So, from my experience, the application that talks to the dockerized process should wrap its attempts to connect to the server port in retries until the port becomes available.
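For example, a shell-level retry around the port could look like this (4572 is just a placeholder for whatever port your service exposes; the same idea applies to retry loops inside your application code):
# keep retrying until the port accepts TCP connections
until nc -z localhost 4572; do
    echo "waiting for the service port..."
    sleep 1
done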

Build docker ubuntu image by Dockerfile

If I run "docker run ubuntu bash", the container won't last, but if I run "docker run -it ubuntu bash", Docker allocates a pseudo-TTY and keeps the container alive.
My question is: is there any way to write a Dockerfile for building an image based on Ubuntu/CentOS such that I only need to run "docker run my-image" and the container will last?
Apologies for my poor English; I don't know if my question is clear enough. Thanks for any response.
There are three ways to run containers:
task containers - do one thing and then exit, like docker run ubuntu ls /
interactive containers - open a connection to the container with -it, like docker run -it ubuntu bash
background containers - keep a container running detached in the background with -d, like docker run -d ubuntu:14.04 ping localhost
Docker keeps the container running as long as there is an active process in the container. The first container exits when the ls command completes. The second container will exit when you exit the bash session. The third container will stay running as long as the ping process keeps running (note that ping has been removed from the recent Ubuntu images, which is why the example specifies 14.04).
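To answer the Dockerfile part of the question: give the image a long-running foreground process as its default command, and a plain docker run -d my-image will stay up. A minimal sketch (the ubuntu:14.04 base and ping are only illustrations; any process that never exits will do):
FROM ubuntu:14.04
# the default command must stay in the foreground, otherwise the container exits
CMD ["ping", "localhost"]
Build and run it with docker build -t my-image . followed by docker run -d my-image.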

Killing a Process in a Running Docker Container

I have a server-side application running in a Docker container. One of the processes in it has hung and needs to be killed (the application will then spawn another process to replace it).
Is there any way to kill that process without stopping the Docker container?
It is not possible with Docker for now, but it seems to be scheduled for 0.8; see issue #1228.
It is, however, possible to use lxc-attach to run a shell in an existing container (as seen in the comments of the above issue), and you can then kill your hung process from there:
$ lxc-attach -n FULLCONTAINERID /bin/bash
You can get the FULLCONTAINERID with docker ps --no-trunc=true:
root@turmes /home/zoobab [35]# docker ps --no-trunc=true
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2741d88a51148e66d7b2b44d8c1cc6ed7d1515f370be5d00bd003d40cf8d575b zoobab/centos57:latest kamailio -P /var/run/kamailio.pid -m 64 -M 4 -u kamailio -g kamailio -D 1 Up 19 minutes angry_fermat
root@turmes /home/zoobab [36]#
If running Docker 1.3 or newer is not an option, you can still obtain access to a root shell inside the Docker Container using nsenter.
This blog post has all the instructions you need.
Once you have root shell access, you can of course perform any operation you like.
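The usual recipe is roughly the following (my_container is a placeholder): look up the container's PID on the host, then enter its namespaces:
PID=$(docker inspect --format '{{.State.Pid}}' my_container)
sudo nsenter --target $PID --mount --uts --ipc --net --pid /bin/bash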
Maybe times have changed, but as of a recent Docker version:
docker kill 593690fe0087 killed the container with that CONTAINER ID from docker ps. I had a container sitting there for two weeks and only noticed it now when the environment wasn't up.
You can do this now, in Docker 1.3 and later, using the exec command:
docker exec container_name kill process_id
(or pkill process_name to kill it by name)
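For example, to find the PID inside the container first and then kill the process (webapp and the PID are placeholders):
docker exec webapp ps aux        # locate the PID of the hung process
docker exec webapp kill -9 1234  # kill it without stopping the container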
