getting a pseudo-TTY for docker start - docker

Docker's run and exec commands both accept the -t flag to get a pseudo-terminal. This works nicely for something like: docker run -it --name deb debian bash. Once the user exits the interactive bash shell in this container, the container stops. And while exec can take the same flag and works well with it, nothing can be executed in the stopped container unless it is started first.
The start command does not take the -t flag. Using just -ai with, say, docker start -ai deb would run bash again but fail to show an interactive prompt, making it kind of... cumbersome to use. Without -ai, the bash process would just exit, making the container stop right after starting.
While this sub-optimal attached, interactive bash is open, ^Z will not background the current docker command. If, however, you open a new terminal, set up your docker environment variables correctly, and, finally, issue docker exec -it deb bash, you can get back to the nicer interactive bash prompt you had when you first started the container.
This seems quite involved. Am I missing something about start or exec that might make it easier to use bash or another interactive command? Perhaps there is some preferred command that never exits, keeps the container running (until told to stop), and uses very little processor time, so that exec becomes the preferred way of attaching to an interactive process in the container.
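For illustration, a minimal sketch of exactly such a pattern (sleep infinity is supported by GNU coreutils, so it works in debian-based images; container name reused from the question):
docker run -d --name deb debian sleep infinity   # uses near-zero CPU and never exits
docker exec -it deb bash                         # the nice interactive prompt, any time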

Related

How to create a Dockerfile so that the container can run without an immediate exit

Official Docker images like MySQL can be run like this:
docker run -d --name mysql_test mysql/mysql-server:8.0.13
And it can run indefinitely in the background.
I want to try to create an image which does the same, specifically a Flask development server (just for testing). But my container exits immediately. My Dockerfile is like this:
FROM debian:buster
ENV TERM xterm
RUN XXXX # some apt-get and Python installation stuffs
ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ]
EXPOSE 80
EXPOSE 5000
USER myuser
WORKDIR /home/myuser
However it exited immediately as soon as it was run. I also tried "bash" as an entrypoint, just to make sure it isn't a Flask configuration issue, and it also exited.
How do I make it so that it runs as THE process in the container?
EDIT
OK, someone posted below (but later deleted) that the command to test with is tail -f /dev/null, and with it the container does run indefinitely. I still don't understand why bash doesn't work as a process that doesn't exit (does it?). But my Flask configuration is probably off.
EDIT 2
I see that running without the -d flag prints out the stdout (or stderr) so I can diagnose the problem.
Let's clear things up.
In general, a container exits as soon as its entrypoint process finishes.
In your case, without being a Python expert, I would expect this ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ] to be enough to keep the container alive. But I guess you have some configuration error, and due to that error the container exited before the flask command could keep it running. You can validate this by running docker ps -a and inspecting the exit code (possibly 1).
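For what it's worth, flask run takes the host and port as separate options, so the entrypoint above would likely fail at startup; a hedged guess at a fix (unverified, since the rest of the Dockerfile is redacted):
ENTRYPOINT [ "flask", "run", "--host=0.0.0.0", "--port=5000" ]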
Let's now discuss the questions in your edits.
The key part of your misunderstanding derives from the -d flag.
You are right to think that setting bash as the entrypoint would be enough to keep the container alive, but you need to attach to that shell.
When running in detached mode (-d), the container will execute the bash command, but as soon as no one is attached to that shell, it will exit. In addition, using this flag prevents you from viewing the container's logs live (though you may use docker logs container_id to debug), which is very useful when you are in the early phase of setting things up. So I recommend using this flag only when you are sure that everything works as intended.
To attach to the bash shell and keep the container alive, you should use the -it flags, so that the bash shell is attached to the shell invoking the docker run command.
-t : Allocate a pseudo-tty
-i : Keep STDIN open even if not attached
Please also consult the official documentation about foreground vs. background mode.
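To make the flag combinations concrete, a quick sketch:
docker run -d ubuntu bash     # bash sees no TTY and nothing on stdin, exits at once
docker run -it ubuntu bash    # interactive shell in the foreground
docker run -dit ubuntu bash   # shell kept alive in the background; exec or attach later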
The answer to your edit is: when you do docker run <image> bash, it will literally call bash and exit 0, because the command (bash) completed successfully; with no TTY and nothing on stdin, bash just returns. Here bash isn't your interactive shell, it's simply the container's command.
If you ran docker run -it <image> tail -f /dev/null and then docker exec -it <container> /bin/bash, you'd drop into the shell, because that's the command you ran.
Your Dockerfile doesn't have a persistent command to run in the foreground; in MySQL's case, it runs mysqld, which starts a server as PID 1.
When PID 1 exits, the container stops.
Your entrypoint is most likely failing to start, or starting and exiting because of how your command is running.
I would try changing your entrypoint to a command that is known to stay in the foreground (like the tail -f /dev/null test above) until the flask invocation is sorted out.

unable to automatically stop container running jupyter notebook

If I start a container with --rm like so:
docker run --rm -it image-name /bin/bash
Then from the REPL I can exit, and it automatically removes the container, so I don't have to.
It's very nice.
But now I'm starting a container and telling it to automatically run jupyter notebook like so:
docker run --rm -it image-name /bin/bash -c "jupyter notebook ..."
In this case, when I press Ctrl+C I get out of the container, but it doesn't shut the container down, because jupyter notebook is still running. Then I have to go in and docker stop container-id and then docker rm container-id, which is annoying.
Is there any way to tell the container to automatically close when I exit it?
So the reason the container removes itself in the first scenario is that the bash process is what your execution was tied to. When you exit, bash exits, so the container is removed.
In the second scenario, you're not actually doing that. You're executing a command via bash's -c option, and your exiting the terminal isn't tied to the termination of that command.
The way the Docker --rm flag works is: once the command requested by the execution statement ends, remove the container. If you execute bash to get an interactive shell, and then run the command from inside it, you'll get a different experience.
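A sketch of that different experience (the port mapping and the jupyter flags here are illustrative, not taken from the question):
docker run --rm -it -p 8888:8888 image-name /bin/bash
# then, inside the container's shell:
jupyter notebook --ip=0.0.0.0 --no-browser    # Ctrl+C stops the notebook and returns to bash
exit                                          # bash ends, and --rm removes the container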

I typed docker run ubuntu, now what?

I know, I know, I should have typed
docker run -it ubuntu bash
But the fact remains, a container has been created, it is there, and it is stopped. It stops as soon as it is started, so there's no way to attach or exec in it.
Is it really the case that there is absolutely no way to change its state so that bash is started instead? This seems to be kind of a showstopper to me. Or maybe there's something I didn't get about the marvelous possibilities of docker that would make such a thing complicated to do? I doubt that.
Why is it that way ?
Keep in mind two things:
1st: a container is up and running as long as its main process is up and running.
2nd: ubuntu has a default command: CMD ["/bin/bash"]. When you use docker run ubuntu bash, you overwrite it to CMD ["bash"]. No big difference.
Why docker run ubuntu fails:
Because bash simply exits: with no TTY allocated and no stdin attached, it has nothing to do. Remember, bash is the default command.
Why docker run -it ubuntu succeeds:
Because -t makes bash keep running. From docker run --help:
-t, --tty Allocate a pseudo-TTY
Also, you mention:
But the fact remains, a container has been created, it is there, and it is stopped. It stops as soon as it is started, so there's no way to attach or exec in it.
Containers are better thought of as processes, and this is why you should see them as something ephemeral. If you happen to run a container with the wrong configuration (exiting right after start), remove it and spin up a new one, this time with the correct parameters.
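In practice that is just (container name illustrative):
docker rm mybroken                        # throw away the misconfigured container
docker run -it --name mygood ubuntu bash  # spin up a new one with the right flags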
When you run an image like ubuntu, you have to give it a command or process that keeps it running. In my case, when I use the ubuntu image (mainly for tests), I write docker run --name myubuntu -d ubuntu:16.04 sleep 3000
You can verify that it is running with docker ps.
After this you can go inside with docker exec -it myubuntu /bin/bash

Why docker container exits immediately

I run a container in the background using
docker run -d --name hadoop h_Service
it exits quickly. But if I run in the foreground, it works fine. I checked logs using
docker logs hadoop
there was no error. Any ideas?
DOCKERFILE
FROM java_ubuntu_new
RUN wget http://archive.cloudera.com/cdh4/one-click-install/precise/amd64/cdh4-repository_1.0_all.deb
RUN dpkg -i cdh4-repository_1.0_all.deb
RUN curl -s http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/archive.key | apt-key add -
RUN apt-get update
RUN apt-get install -y hadoop-0.20-conf-pseudo
RUN dpkg -L hadoop-0.20-conf-pseudo
USER hdfs
RUN hdfs namenode -format
USER root
RUN apt-get install -y sudo
ADD . /usr/local/
RUN chmod 777 /usr/local/start-all.sh
CMD ["/usr/local/start-all.sh"]
start-all.sh
#!/usr/bin/env bash
/etc/init.d/hadoop-hdfs-namenode start
/etc/init.d/hadoop-hdfs-datanode start
/etc/init.d/hadoop-hdfs-secondarynamenode start
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start
sudo -u hdfs hadoop fs -chmod 777 /
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start
/bin/bash
This did the trick for me:
docker run -dit ubuntu
After that, I checked that it was running using:
docker ps -a
To attach to the container again:
docker attach CONTAINER_NAME
TIP: To exit without stopping the container, type ^P^Q.
A docker container exits when its main process finishes.
In this case it will exit when your start-all.sh script ends. I don't know enough about Hadoop to tell you how to do it in this case, but you need to either leave something running in the foreground or use a process manager such as runit or supervisord to run the processes (see the supervisord sketch after this answer).
I think you must be mistaken about it working if you don't specify -d; it should have exactly the same effect. I suspect you launched it with a slightly different command or using -it which will change things.
A simple solution may be to add something like:
while true; do sleep 1000; done
to the end of the script. I don't like this however, as the script should really be monitoring the processes it kicked off.
(I should say I stole that code from https://github.com/sequenceiq/hadoop-docker/blob/master/bootstrap.sh)
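For reference, a minimal sketch of the supervisord route mentioned above; the program name and paths are illustrative, and note that supervised commands must stay in the foreground (init.d scripts that daemonize will not work as-is):
# /etc/supervisord.conf
[supervisord]
; run supervisord itself in the foreground so it stays PID 1
nodaemon=true

[program:myservice]
; the supervised command must not daemonize itself
command=/usr/local/bin/myservice --foreground
autorestart=true
Then make it the container's command in the Dockerfile: CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]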
I would like to extend, or dare I say, improve the answer mentioned by camposer.
When you run
docker run -dit ubuntu
you are basically running the container in the background in interactive mode.
When you attach to the container and exit it with Ctrl+D (the most common way to do it), you stop the container, because you just killed the main process that you started the container with in the command above.
To take advantage of the already running container, I would just fork another bash process and get a pseudo-TTY by running:
docker exec -it <container ID> /bin/bash
Why docker container exits immediately?
If you want to force the image to hang around (in order to debug something or examine state of the file system) you can override the entry point to change it to a shell:
docker run -it --entrypoint=/bin/bash myimagename
Whenever I want a container to stay up after the script execution finishes, I add
&& tail -f /dev/null
at the end of command. So it should be:
/usr/local/start-all.sh && tail -f /dev/null
If you need to just have a container running without exiting, just run
docker run -dit --name MY_CONTAINER MY_IMAGE:latest
and then
docker exec -it MY_CONTAINER /bin/bash
and you will be in the bash shell of the container, and it should not exit.
Or if the exit happens during docker-compose, use
command: bash -c "MY_COMMAND --wait"
as already stated by two other answers here (though not referring that clearly to docker-compose, which is why I still mention the "wait" trick again).
I tried this --wait again later; it did not work. It must have been an argument to some self-written Python or shell code. If I ever find the time, I will look it up. It should be a good default since it was written by professionals. Perhaps it also just shadowed the workaround of another answer in this Q&A.
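For completeness, the compose-file equivalent of docker run -dit is roughly this (service name illustrative):
services:
  app:
    image: MY_IMAGE:latest
    stdin_open: true   # the -i flag
    tty: true          # the -t flag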
Add this to the end of Dockerfile:
CMD tail -f /dev/null
Sample Docker file:
FROM ubuntu:16.04
# other commands
CMD tail -f /dev/null
A nice approach would be to start up your processes and services, running them in the background, and use the wait [n ...] command at the end of your script. In bash, the wait command forces the current process to:
Wait for each specified process and return its termination status. If n is not given, all currently active child processes are waited for, and the return status is zero.
I got this idea from Sébastien Pujadas' start script for his elk build.
Taking from the original question, your start-all.sh would look something like this...
#!/usr/bin/env bash
/etc/init.d/hadoop-hdfs-namenode start &
/etc/init.d/hadoop-hdfs-datanode start &
/etc/init.d/hadoop-hdfs-secondarynamenode start &
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start &
sudo -u hdfs hadoop fs -chmod 777 /
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start &
wait
You need to run it with the -d flag to leave it running as a daemon in the background.
docker run -d -it ubuntu bash
My practice is to start, in the Dockerfile, a shell which will not exit immediately, CMD [ "sh", "-c", "service ssh start; bash" ], and then run docker run -dit image_name. This way the (ssh) service and the container stay up and running.
I added a read shell statement at the end. This keeps the main process of the container, the startup shell script, running.
Adding
exec "$#"
at the end of my shell script was my fix!
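For context, that line is the tail end of the usual entrypoint-script pattern; a sketch, where the setup step is illustrative:
#!/bin/sh
# entrypoint.sh: do one-time setup, then replace this shell with the CMD,
# so the real workload becomes PID 1 and receives signals directly
./generate-config.sh    # illustrative setup step
exec "$@"
With ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile, docker passes the CMD words as the script's arguments, which is what "$@" expands to.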
Coming from duplicates, I don't see any answer here which addresses the very common antipattern of running your main workload as a background job, and then wondering why Docker exits.
In simple terms, if you have
my-main-thing &
then either take out the & to run the job in the foreground, or add
wait
at the end of the script to make it wait for all background jobs.
It will then still exit if the main workload exits, so maybe run this in a while true loop to force it to restart forever:
while true; do
  my-main-thing &
  # other things which need to happen while the main workload
  # runs in the background, if you have such things
  wait
done
(Notice also how to write while true. It's common to see silly things like while [ true ] or while [ 1 ] which coincidentally happen to work, but don't mean what the author probably imagined they ought to mean.)
There are many possible ways to cause a docker container to exit immediately. For me, it was a problem with my Dockerfile. There was a bug in that file: I had ENTRYPOINT ["dotnet", "M4Movie_Api.dll] instead of ENTRYPOINT ["dotnet", "M4Movie_Api.dll"]. As you can see, I had missed one quotation mark (") at the end.
To analyze the problem, I started my container and quickly attached to it so that I could see what the exact problem was.
C:\SVenu\M4Movie\Api\Api>docker start 4ea373efa21b
C:\SVenu\M4Movie\Api\Api>docker attach 4ea373efa21b
Where 4ea373efa21b is my container ID. This led me to the actual issue.
After finding the issue, I had to build, restore, and publish my container again.
If you check the Dockerfile of other containers, for example
fballiano/magento2-apache-php
you'll see that at the end of the file it adds the following command:
while true; do sleep 1; done
Now, what I recommend is that you run
docker container ls --all | grep 127
Then you will see whether your docker image had an error: exit code 127 means the command was not found, while an exit code of 0 means the command simply finished, in which case the image probably needs one of these commands that sleep forever.
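You can also read the exit code directly:
docker inspect --format '{{.State.ExitCode}}' <container_name_or_ID>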
@camposer
Your solution is the one that works for me. I am running Docker on my MacBook. The container was not starting, but thanks to your method I was able to start it correctly:
docker run -dit ubuntu
Since the image is Linux-based, one thing to check is that any shell scripts used in the container have Unix line endings. If the lines have a ^M at the end, then they are Windows line endings. One way to fix them is to run dos2unix on /usr/local/start-all.sh to convert them from Windows to Unix. Running the container in interactive mode can help figure out other problems; you could have a file name typo or something. See https://en.wikipedia.org/wiki/Newline
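For what it's worth, a quick check: cat -A prints CRLF line endings as ^M$, and dos2unix rewrites them in place:
cat -A /usr/local/start-all.sh | head    # lines ending in ^M$ have Windows endings
dos2unix /usr/local/start-all.sh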

Is it possible to start a shell session in a running container (without ssh)

I was naively expecting this command to run a bash shell in a running container:
docker run "id of running container" /bin/bash
it looks like it's not possible, I get the error :
2013/07/27 20:00:24 Internal server error: 404 trying to fetch remote history for 27d757283842
So, if I want to run a bash shell in a running container (e.g. for diagnostic purposes),
do I have to run an SSH server in it and log in via ssh?
With docker 1.3, there is a new command, docker exec. This allows you to enter a running container:
docker exec -it "id of running container" bash
EDIT: Now you can use docker exec -it "id of running container" bash (doc)
Previously, the answer to this question was:
If you really must and you are in a debug environment, you can do this: sudo lxc-attach -n <ID>
Note that the id needs to be the full one (docker ps -notrunc).
However, I strongly recommend against this.
notice: -notrunc is deprecated, it will be replaced by --no-trunc soon.
Just do
docker attach container_name
As mentioned in the comments, to detach from the container without stopping it, type Ctrl+p then Ctrl+q.
Since things are a-changing, at the moment the recommended way of accessing a running container is using nsenter.
You can find more information on this github repository. But in general you can use nsenter like this:
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
or you can use the wrapper docker-enter:
docker-enter <container_name_or_ID>
A nice explanation on the topic can be found on Jérôme Petazzoni's blog entry:
Why you don't need to run sshd in your docker containers
First thing: you cannot run
docker run "existing container" command
because this command expects an image (not a container), and it would in any case result in a new container being spawned (so not the one you wanted to look at).
I agree that with docker we should push ourselves to think in a different way (you should find ways so that you don't need to log onto the container), but I still find it useful, and this is how I work around it.
I run my commands through supervisor in daemon mode.
Then I execute what I call docker_loop.sh
The content is pretty much this:
#!/bin/bash
/usr/bin/supervisord
/usr/bin/supervisorctl
while ( true )
do
  echo "Detach with Ctrl-p Ctrl-q. Dropping to shell"
  sleep 1
  /bin/bash
done
What this does is allow you to "attach" to the container and be presented with the supervisorctl interface to stop/start/restart and check logs.
If that should not suffice, you can Ctrl+D and you will drop into a shell that will allow you to have a peek around as if it was a normal system.
PLEASE DO ALSO TAKE INTO ACCOUNT that this system is not as secure as having the container without a shell, so take all the necessary steps to secure your container.
Keep an eye on this pull request: https://github.com/docker/docker/pull/7409
Which implements the forthcoming docker exec <container_id> <command> utility. When this is available it should be possible to e.g. start and stop the ssh service inside a running container.
There is also nsinit to do this: "nsinit provides a handy way to access a shell inside a running container's namespace", but it looks difficult to get running.
https://gist.github.com/ubergarm/ed42ebbea293350c30a6
You can use
docker exec -it <container_name> bash
Here's my solution
In the Dockerfile:
# ...
RUN mkdir -p /opt
ADD initd.sh /opt/
RUN chmod +x /opt/initd.sh
ENTRYPOINT ["/opt/initd.sh"]
In the initd.sh file
#!/bin/bash
...
/etc/init.d/gearman-job-server start
/etc/init.d/supervisor start
#very important!!!
/bin/bash
After image is built you have two options using exec or attach:
Use exec (preferred) and run:
docker run --name $CONTAINER_NAME -dt $IMAGE_NAME
then
docker exec -it $CONTAINER_NAME /bin/bash
and use CTRL + D to detach
Use attach and run:
docker run --name $CONTAINER_NAME -dit $IMAGE_NAME
then
docker attach $CONTAINER_NAME
and use CTRL + P and CTRL + Q to detach
Note: The difference between options is in parameter -i
There is actually a way to have a shell in the container.
Assume your /root/run.sh launches the process, process manager (supervisor), or whatever.
Create /root/runme.sh with some gnu-screen tricks:
# Spawn a screen with two tabs
screen -AdmS 'main' /root/run.sh
screen -S 'main' -X screen bash -l
screen -r 'main'
Now, you have your daemons in tab 0, and an interactive shell in tab 1. docker attach at any time to see what's happening inside the container.
Another advice is to create a "development bundle" image on top of the production image with all the necessary tools, including this screen trick.
There are two ways.
With attach
$ sudo docker attach 665b4a1e17b6 #by ID
With exec
$ sudo docker exec -i -t 665b4a1e17b6 bash #by ID
If the goal is to check on the application's logs, this post shows starting up tomcat and tailing the log as part of CMD. The tomcat log is available on the host using 'docker logs containerid'.
http://blog.trifork.com/2013/08/15/using-docker-to-efficiently-create-multiple-tomcat-instances/
It's useful to assign a name when running a container; then you don't need to refer to the container ID.
docker run --name container_name yourimage
docker exec -it container_name bash
first, get the container id of the desired container by
docker ps
you will get something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ac548b6b315 frontend_react-web "npm run start" 48 seconds ago Up 47 seconds 0.0.0.0:3000->3000/tcp frontend_react-web_1
now copy this container id and run the following command:
docker exec -it container_id sh
docker exec -it 3ac548b6b315 sh
Maybe you were misled, like me, into thinking in terms of VMs when developing containers. My advice: try not to.
Containers are just like any other process. Indeed you might want to "attach" to them for debugging purposes (think of /proc/<pid>/environ or strace -p <pid>), but that's a very special case.
Normally you just "run" the process, so if you want to modify the configuration or read the logs, just create a new container and make sure you write the logs outside of it, by sharing directories, writing to stdout (so that docker logs works), or something like that.
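A sketch of that pattern (paths illustrative):
# bind-mount a host directory so the app's log files land outside the container
docker run -d -v /var/log/myapp:/app/log myimage
# or, if the app writes to stdout, just follow docker logs
docker logs -f <container_id>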
For debugging purposes you might want to start a shell, then your code, then press CTRL-p + CTRL-q to leave the shell intact. This way you can reattach using:
docker attach <container_id>
If you want to debug the container because it's doing something you didn't expect it to do, try to debug it: https://serverfault.com/questions/596994/how-can-i-debug-a-docker-container-initialization
No. This is not possible. Use something like supervisord to get an ssh server if that's needed. Although, I definitely question the need.
