docker run with --interactive and --tty flag - docker

Edit:
Someone marked this question as a duplicate, but the linked question does not explain the underlying mechanism at all.
By contrast, this Stack Overflow answer resolves my confusion in Case I, but not in Case II.
I am a newbie in Docker and I am confused about the usage of the --interactive and --attach flags and the concepts involved.
I will illustrate my confusion using the busybox image from Docker Hub.
Case I :
When I run a container using the command below: docker run --interactive --tty busybox sh
A container is running and accepting input.
According to the document, --interactive flag used to
Keep STDIN open even if not attached
I don't understand what "even if not attached" means here. Attached to what?
Case II :
Then I exit the container and try to start it using
docker start --attach abdd796820b1
The terminal seems to accept input as well, but when I type ls or echo, it does not give any response.
What did the --attach flag do?
Please help.

There are two ways in which you can interact with a running container
attach
exec
--interactive flag
As you mentioned, the documentation already says
Keep STDIN open even if not attached
From my understanding, this means Docker will keep reading input from your terminal/console and pass it to the container, which can then react and present output. Compare docker run --tty alpine /bin/sh with docker run --tty --interactive alpine /bin/sh: only the one with --interactive will respond to what you type.
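A quick way to see this (a minimal sketch; any small image such as alpine works):
$ echo hello | docker run --rm -i alpine cat    # with -i the piped input reaches the container, so cat prints "hello"
$ echo hello | docker run --rm alpine cat       # without -i STDIN is closed immediately, so cat prints nothing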
attach
Attach to a running process
If the container was started with a shell such as /bin/bash as its command, you can access it using attach; if not, you need to create a shell process inside the container yourself using exec.
More in depth: if the container is started with /bin/bash, that shell becomes the container's PID 1, and the attach command connects you to PID 1.
exec
Creates new process
If you want to create a new process inside the container, exec is what you use. For example, exec can run an apt-get command inside the container without attaching to its main process, or run a Node or Python script.
Eg: docker exec -it django-prod python migrate
Here -i is for --interactive and -t is for --tty (a pseudo-TTY). Interactive is needed so that you can type a response if the command prompts for input.
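To see attach and exec side by side (a sketch using the busybox image from the question; the container name demo is arbitrary):
$ docker run -dit --name demo busybox sh   # sh becomes PID 1 inside the container
$ docker attach demo                       # connects your terminal to PID 1 (that sh); detach with Ctrl-p Ctrl-q
$ docker exec -it demo sh                  # starts a new sh process in the same container; exiting it does not stop the container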

You will need to provide the -i/--interactive option to forward your terminal STDIN to the container's sh.
Try this:
docker start -ai CONTAINER
https://docs.docker.com/engine/reference/commandline/start/
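To map this back to Case II (a sketch; the ID is the one from the question): docker start --attach only attaches the container's STDOUT/STDERR, while -i additionally forwards your STDIN, which is why ls and echo only respond with both flags:
$ docker start --attach abdd796820b1    # output is attached, but your keystrokes never reach sh
$ docker start -ai abdd796820b1         # -i forwards STDIN, so ls and echo now work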

Related

Command docker start -a <Container ID> do nothing, "Ubuntu 20.04.02 LTS"

I'm trying to run the start command in Docker after creating a container.
For example:
$ docker create busybox echo hi there
It gives me the ID of that container, like 4e59d0fe8584bb4dcaf44dbce100253b6767bf51546edc27f29f39f52ed57957.
When I try to start that container without any flags (such as -a), it works, but it only gives me that ID back again.
But when I try to show the output using the attach flag -a, nothing actually happens; it doesn't even give me an error, the command just keeps running without anything happening.
I also couldn't kill the command and stop the execution with Ctrl+C, so the only option I had was to close the terminal.
I tried to make the problem as clear as I can.
You can run the image via this command:
docker run -it busybox
It drops you into a shell environment, and with -i (interactive) and -t (tty) you get a terminal you can see and type into.
The default CMD(PID1) for busybox image is sh, see this.
For docker create busybox echo hi there, the COMMAND becomes echo hi there. This means that after the container starts, it first executes echo hi there, and then, as PID 1 exits, the container exits too. If you use docker ps, you won't find your container; you can only find the exited container with docker ps -a.
So,
If you intend to run a one-time task, then it's normal that you cannot enter the container anymore once it has finished its task.
If you intend to run a daemon-like task and leave the container service there, you should choose a command that does not finish after it runs; then your container will stay up.
For your case, to get a quick feel for this, you can use tail -f /dev/null to keep the container from exiting:
# docker create busybox sh -c "echo hi there; tail -f /dev/null"
840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e
root@shlava:~# docker start 840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e
840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e
root@shlava:~# docker logs 840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e
hi there
root@shlava:~# docker exec -it 840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e /bin/sh
/ #

running docker container without /bin/bash command

I created a Docker container with
sudo docker run -it ubuntu /bin/bash
In the book The Docker Book I read:
The container only runs for as long as the command we specified, /bin/bash, is running.
Didn't I already create a terminal with the -it option, so that /bin/bash isn't required? Will anything change if I don't pass any command to docker run?
You will get the same behavior if you run
sudo docker run -it ubuntu
because the ubuntu docker image specifies /bin/bash as the default command. You can see that in the ubuntu Dockerfile. As @tadman wrote in their answer, providing a command (like /bin/bash) overrides the default CMD.
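If you want to verify an image's default command yourself, docker inspect can print it (a quick check; the exact output depends on the image version):
$ docker inspect --format '{{.Config.Cmd}}' ubuntu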
In addition, -it does not imply a bash terminal. -t allocates a pseudo-tty, and -i keeps STDIN open even if not attached. See the documentation for further details.
That's an override to the default CMD specification. You can run a container with defaults, that's perfectly normal, but /bin/bash is a trick to pop open a shell so you can walk around and check out the built container to see if it's been assembled and configured correctly.

What is the meaning of running container?

docker exec can only be used on a running container, but what does "running container" mean? Does it mean the container should be computing something? Or is the issue about the [command] I define for the container? Why is my TensorFlow container always in stopped status?
After I used docker run to create a TensorFlow container, the container stopped automatically. I need to restart it and then execute commands in it. Why can't the container stay running after I create it?
docker run -it --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3
It then pops up a bash shell which I can use to control the container. But after I exit, the container stops itself, which means I can only see my container with docker ps -a but not with docker ps. I have to restart the container if I want to use it again.
UPDATE 1: If I want to create a container that behaves like a VM, I cannot use docker run with a short-lived [command] like python ... The container becomes uncontrollable after the command finishes; docker restart cannot start the container again, and hence docker exec cannot be applied to it. Instead, using bash (or nothing) as the [command] creates a container which can be restarted and therefore used with docker exec.
UPDATE 2: docker run -d -it can create a running container (but the bash shell won't pop up, not even with bash as the command). Using docker exec -it container_name bash directly takes control of the running container again, without docker restart. This time, exiting the bash shell does not stop the container.
A container is running when there is an active process running inside it.
When you run this TensorFlow container, it will exit because there is no long-running process keeping it alive.
If you were to run
docker run -it --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3 bash
or
docker run -it --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3 python <python script name>
then the container would run the bash/python script as a process and therefore remain up whilst that process is running
View running containers with:
docker ps
See all containers (including stopped/exited tasks) with:
docker ps -a
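To see that a container lives exactly as long as its main process (a small sketch; the name oneshot is arbitrary):
$ docker run --name oneshot alpine echo done   # echo finishes, so the container exits right away
$ docker ps                                    # oneshot is not listed here
$ docker ps -a                                 # oneshot shows up with an Exited status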
The difference between docker ps -a and docker ps is exactly what you are looking for:
From the documentation:
--all , -a Show all containers (default shows just running)
So
docker ps gives you only running containers
docker ps -a also shows you the stopped ones
So probably, if you expect your container to be long running (like it would be for a web server), then your container command could indeed have an issue and is not keeping your container alive.
Also mind that, if you run your container with the options -ti, like you did, you get an interactive tty attached to it.
--tty , -t Allocate a pseudo-TTY
--interactive , -i Keep STDIN open even if not attached
That basically means that, as soon as you exit that interactive context, your container will shut down.
Running it in detached mode, with the option -d, may be what you are looking for:
docker run -d --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3
Related documentation:
--detach , -d Run container in background and print container ID
https://docs.docker.com/engine/reference/commandline/run/
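Putting the pieces together (a sketch; the container name tf is arbitrary, and tail -f /dev/null is just a command that never exits, so the container stays up):
$ docker run -d --name tf --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3 tail -f /dev/null
$ docker ps                  # tf is listed as running
$ docker exec -it tf bash    # open a shell; exiting it leaves the container running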

How do I run a command on an already existing Docker container?

I created a container with -d so it's not interactive.
docker run -d shykes/pybuilder bin/bash
I see that the container has exited:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6c45e8cc5f0 shykes/pybuilder:latest "bin/bash" 41 minutes ago Exited (0) 2 seconds ago clever_bardeen
Now I would like to run occasional commands on the machine and exit. Just to get the response.
I tried to start the container. I tried attaching. I thought I could call run with a container, but that does not seem to be allowed. Using start just seems to run and then exit quickly.
I'd like to get back into interactive mode after exiting.
I tried:
docker attach d6c45e8cc5f0
But I get:
2014/10/01 22:33:34 You cannot attach to a stopped container, start it first
But if I start it, it exits anyway. Catch 22. I can't win.
In October 2014 the Docker team introduced docker exec command: https://docs.docker.com/engine/reference/commandline/exec/
So now you can run any command in a running container just knowing its ID (or name):
docker exec -it <container_id_or_name> echo "Hello from container!"
Note that the exec command works only on an already running container. If the container is currently stopped, you need to first run it with the following command:
docker run -it -d shykes/pybuilder /bin/bash
The most important thing here is the -d option, which stands for detached. It means that the command you initially provided to the container (/bin/bash) will be run in the background and the container will not stop immediately.
Your container will exit when the command you gave it ends. Use the following options to keep it alive:
-i Keep STDIN open even if not attached.
-t Allocate a pseudo-TTY.
So your new run command is:
docker run -it -d shykes/pybuilder bin/bash
If you would like to attach to an already running container:
docker exec -it CONTAINER_ID /bin/bash
In these examples /bin/bash is used as the command.
So I think the answer is simpler than many misleading answers above.
To start an existing container which is stopped
docker start <container-name/ID>
To stop a running container
docker stop <container-name/ID>
Then to login to the interactive shell of a container
docker exec -it <container-name/ID> bash
To start an existing container and attach to it in one command
docker start -ai <container-name/ID>
Beware, this will stop the container on exit. But in general, you need to start the container, attach and stop it after you are done.
To expand on katrmr's answer, if the container is stopped and can't be started due to an error, you'll need to commit it to an image. Then you can launch bash in the new image:
docker commit [CONTAINER_ID] temporary_image
docker run --entrypoint=bash -it temporary_image
Some of the answers here are misleading because they concern containers that are running, not stopped.
Sven Dowideit explained on the Docker forum that containers are bound to their process (and Docker can't change the process of a stopped container, seemingly due at least to its internal structure: https://github.com/docker/docker/issues/1437). So, basically the only option is to commit the container to an image and run it with a different command.
See https://forums.docker.com/t/run-command-in-stopped-container/343
(I believe the "ENTRYPOINT with arguments" approach wouldn't work either, since you still wouldn't be able to change the arguments to a stopped container.)
I had to use bash -c to run my command:
docker exec -it CONTAINER_ID bash -c "mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql mysql"
Creating a container and sending commands to it, one by one:
docker create --name=my_new_container -it ubuntu
docker start my_new_container
// ps -a says 'Up X seconds'
docker exec my_new_container /path/to/my/command
// ps -a still says 'Up X+Y seconds'
docker exec my_new_container /path/to/another/command
If you are trying to run a shell script, you need to run it with bash.
docker exec -it containerid bash -c /path/to/your/script.sh
This is a combined answer I made up using the CDR LDN answer above and the answer I found here.
The following example starts an Arch Linux container from an image, and then installs git on that container using the pacman tool:
sudo docker run -it -d archlinux /bin/bash
sudo docker ps -l
sudo docker exec -it [container_ID] script /dev/null -c "pacman -S git --noconfirm"
That is all.
Pipe a command to docker exec bash stdin
Must remove the -t for it to work:
echo 'touch myfile' | docker exec -i CONTAINER_NAME bash
This can sometimes be more convenient than using CLI options.
Tested with:
docker run --name ub16 -it ubuntu:16.04 bash
then on another shell:
echo 'touch myfile' | docker exec -i ub16 bash
Then on first shell:
ls -l myfile
Tested on Docker 1.13.1, Ubuntu 16.04 host.
I would like to note that the top answer is a little misleading.
The issue with executing docker run is that a new container is created every time. However, there are cases where we would like to revisit old containers or not take up space with new containers.
(Given clever_bardeen is the name of the container created...)
In the OP's case, first make sure the container is running by executing the following command:
docker start clever_bardeen
Then, execute the docker container using the following command:
docker exec -it clever_bardeen /bin/bash
I usually use this:
docker exec -it my-container-name bash
to continuously interact with a running container.
Assuming the image is using the default entrypoint /bin/sh -c, running /bin/bash will exit immediately in daemon mode (-d). If you want this container to run an interactive shell, use -it instead of -d. If you want to execute arbitrary commands in a container usually executing another process, you might want to try nsenter or nsinit. Have a look at https://blog.codecentric.de/en/2014/07/enter-docker-container/ for the details.
Unfortunately it is impossible to override ENTRYPOINT with arguments with docker run --entrypoint to achieve this goal.
Note: you can override the ENTRYPOINT setting using --entrypoint, but
this can only set the binary to exec (no sh -c will be used).
For Mac:
$ docker exec -it <container-name> sh
if you want to connect as root user:
$ docker exec -u 0 -it <container-name> sh
Simple answer: start and attach at the same time. In this case you are doing exactly what you asked for.
docker start <CONTAINER_ID/CONTAINER_NAME> && docker attach <CONTAINER_ID/CONTAINER_NAME>
make sure to change <CONTAINER_ID/CONTAINER_NAME>
I am running a Windows container and I need to look inside the Docker container for the files and folders that were created and copied.
In order to do that, I used the following ENTRYPOINT to get a command prompt running inside the container, or to attach to the container.
ENTRYPOINT ["C:\\Windows\\System32\\cmd.exe", "-D", "FOREGROUND"]
That helped me both to attach to the container's command prompt and to keep the container alive. :)
# docker exec -d container_id command
Ex:
# docker exec -d xcdefrdtt service jira stop
A quick way to resume and access the most recently exited container:
docker start -a -i `docker ps -q -l`
An easy solution that solved a similar problem for me:
docker run --interactive --tty <name_of_image>

Is it possible to start a shell session in a running container (without ssh)

I was naively expecting this command to run a bash shell in a running container :
docker run "id of running container" /bin/bash
it looks like it's not possible, I get the error :
2013/07/27 20:00:24 Internal server error: 404 trying to fetch remote history for 27d757283842
So, if I want to run a bash shell in a running container (e.g. for diagnostic purposes),
do I have to run an SSH server in it and log in via ssh?
With docker 1.3, there is a new command docker exec. This allows you to enter a running container:
docker exec -it "id of running container" bash
EDIT: Now you can use docker exec -it "id of running container" bash (doc)
Previously, the answer to this question was:
If you really must and you are in a debug environment, you can do this: sudo lxc-attach -n <ID>
Note that the id needs to be the full one (docker ps -notrunc).
However, I strongly recommend against this.
notice: -notrunc is deprecated, it will be replaced by --no-trunc soon.
Just do
docker attach container_name
As mentioned in the comments, to detach from the container without stopping it, press Ctrl+p then Ctrl+q.
Since things are a-changing, at the moment the recommended way of accessing a running container is using nsenter.
You can find more information on this github repository. But in general you can use nsenter like this:
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
or you can use the wrapper docker-enter:
docker-enter <container_name_or_ID>
A nice explanation on the topic can be found on Jérôme Petazzoni's blog entry:
Why you don't need to run sshd in your docker containers
First things first: you cannot run
docker run "existing container" command
because this command expects an image, not a container, and it would in any case result in a new container being spawned (so not the one you wanted to look at).
I agree with the fact that with docker we should push ourselves to think in a different way (so you should find ways so that you don't need to log onto the container), but I still find it useful and this is how I work around it.
I run my commands through supervisor in DAEMON mode.
Then I execute what I call docker_loop.sh
The content is pretty much this:
#!/bin/bash
/usr/bin/supervisord
/usr/bin/supervisorctl
while ( true )
do
    echo "Detach with Ctrl-p Ctrl-q. Dropping to shell"
    sleep 1
    /bin/bash
done
What it does is that it allows you to "attach" to the container and be presented with the supervisorctl interface to stop/start/restart and check logs.
If that should not suffice, you can Ctrl+D and you will drop into a shell that will allow you to have a peek around as if it was a normal system.
PLEASE DO ALSO TAKE INTO ACCOUNT that this system is not as secure as having the container without a shell, so take all the necessary steps to secure your container.
Keep an eye on this pull request: https://github.com/docker/docker/pull/7409
Which implements the forthcoming docker exec <container_id> <command> utility. When this is available it should be possible to e.g. start and stop the ssh service inside a running container.
There is also nsinit to do this: "nsinit provides a handy way to access a shell inside a running container's namespace", but it looks difficult to get running.
https://gist.github.com/ubergarm/ed42ebbea293350c30a6
You can use
docker exec -it <container_name> bash
Here's my solution
In the Dockerfile:
# ...
RUN mkdir -p /opt
ADD initd.sh /opt/
RUN chmod +x /opt/initd.sh
ENTRYPOINT ["/opt/initd.sh"]
In the initd.sh file
#!/bin/bash
...
/etc/init.d/gearman-job-server start
/etc/init.d/supervisor start
#very important!!!
/bin/bash
After the image is built, you have two options, exec or attach:
Use exec (preferred) and run:
docker run --name $CONTAINER_NAME -dt $IMAGE_NAME
then
docker exec -it $CONTAINER_NAME /bin/bash
and use CTRL + D to detach
Use attach and run:
docker run --name $CONTAINER_NAME -dit $IMAGE_NAME
then
docker attach $CONTAINER_NAME
and use CTRL + P and CTRL + Q to detach
Note: the difference between the two run commands above is the -i parameter (-dt vs. -dit).
There is actually a way to have a shell in the container.
Assume your /root/run.sh launches the process, process manager (supervisor), or whatever.
Create /root/runme.sh with some gnu-screen tricks:
# Spawn a screen with two tabs
screen -AdmS 'main' /root/run.sh
screen -S 'main' -X screen bash -l
screen -r 'main'
Now, you have your daemons in tab 0, and an interactive shell in tab 1. docker attach at any time to see what's happening inside the container.
Another advice is to create a "development bundle" image on top of the production image with all the necessary tools, including this screen trick.
There are two ways.
With attach
$ sudo docker attach 665b4a1e17b6 #by ID
With exec
$ sudo docker exec -i -t 665b4a1e17b6 sh #by ID
If the goal is to check on the application's logs, this post shows starting up tomcat and tailing the log as part of CMD. The tomcat log is available on the host using 'docker logs containerid'.
http://blog.trifork.com/2013/08/15/using-docker-to-efficiently-create-multiple-tomcat-instances/
It's useful to assign a name when running a container; then you don't need to refer to the container ID.
docker run --name container_name yourimage
docker exec -it container_name bash
First, get the container ID of the desired container with
docker ps
you will get something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ac548b6b315 frontend_react-web "npm run start" 48 seconds ago Up 47 seconds 0.0.0.0:3000->3000/tcp frontend_react-web_1
now copy this container id and run the following command:
docker exec -it container_id sh
docker exec -it 3ac548b6b315 sh
Maybe you were misled, like myself, into thinking in terms of VMs when developing containers. My advice: try not to.
Containers are just like any other process. Indeed you might want to "attach" to them for debugging purposes (think of /proc/<pid>/environ or strace -p <pid>), but that's a very special case.
Normally you just "run" the process, so if you want to modify the configuration or read the logs, just create a new container and make sure you write the logs outside of it by sharing directories, writing to stdout (so docker logs works), or something like that.
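For example (a sketch; myimage and the paths are placeholders you would replace with your own):
$ docker run -d --name app -v /var/log/myapp:/app/logs myimage   # files written to /app/logs survive on the host
$ docker logs -f app                                             # anything the process writes to stdout shows up here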
For debugging purposes you might want to start a shell, then your code, then press CTRL-p + CTRL-q to leave the shell intact. This way you can reattach using:
docker attach <container_id>
If you want to debug the container because it's doing something you didn't expect it to do, try to debug it: https://serverfault.com/questions/596994/how-can-i-debug-a-docker-container-initialization
No. This is not possible. Use something like supervisord to get an ssh server if that's needed. Although, I definitely question the need.
