Running a script in a docker container and not killing the script when leaving the terminal - docker

I have a Docker container, for instance my_container.
I want to run a long-lived script in my container without killing it when I leave the shell.
I would like to do something like this:
docker exec -ti my_container /bin/bash
And then
screen -S myScreen
Then:
Execute my script in the screen session and exit the terminal.
Unfortunately, I cannot execute screen in the Docker terminal.

This may help you:
docker exec -i -t c2ab7ae71ab8 sh -c "exec >/dev/tty 2>/dev/tty </dev/tty && /usr/bin/screen -r nmsrv -s /bin/bash"
and this is the reference link
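For screen -r nmsrv in that command to find something to reattach to, a detached screen session with that name has to exist first. A minimal sketch, assuming screen is installed in the image; the session name nmsrv and the script path are placeholders:
docker exec my_container /usr/bin/screen -dmS nmsrv /path/to/my_script.sh
The script then keeps running inside the detached screen session even after you close your terminal, and the exec command above reattaches to it.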

The only way I can think of is to run your container with your script at the start:
docker run -d --name my_container nginx /etc/init.d/myscript

If you have to run the script directly in an already running container, you can do that with exec:
docker exec my_container /path/to/some_script.sh
or if you want to run it through PHP:
docker exec my_container php /path/to/some_script.php
That said, you typically don't want to run scripts in already running containers, but to just use the same image as some already running container. You can do that with a standard docker run:
docker run -a stdout --rm some_repo/some_image:some_tag php /path/to/some_script.php
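For the original question (a long-lived script that should survive leaving the terminal), detaching the exec itself is another option. A minimal sketch, reusing the script path from the examples above; -d/--detach runs the command in the background of the already running container:
docker exec -d my_container /path/to/some_script.sh
The script keeps running after you close your shell; you can check on it later with another docker exec (for example by running ps inside the container, if it is available).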

Related

Any commands hang inside docker container

Any command hangs the terminal inside the docker container.
I log in to the container with docker exec -t php-zts /bin/bash
and then type any elementary command (date, ls, cd /, etc.).
The command hangs.
When I press Ctrl+C I go back to the host machine.
But if I run a command directly with docker exec, without opening a shell in the container, it works normally:
docker exec -t php-zts date
Wed Jan 26 00:04:38 UTC 2022
tty is enabled in docker-compose.yml
docker system prune and other cleanups do not help.
I can't identify the problem and I'm racking my brain. Please help :(
The solution is to use the -i/--interactive flag (it exists for both docker exec and docker run). Here is the relevant section of the documentation:
--interactive , -i Keep STDIN open even if not attached
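Applied to the command from the question, that means adding -i alongside -t:
docker exec -it php-zts /bin/bash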
You can try to run your container using -i for interactive and -t for tty which will allow you to navigate and execute commands inside the container
docker run -it --rm alpine
On the other hand, you can run the container with docker run and then execute commands inside that container like so:
tail -f /dev/null will keep your container running.
-d will run the command in the background.
docker run --rm -d --name container1 alpine tail -f /dev/null
or
docker run --rm -itd --name container1 alpine sh # You can use -id or -td or -itd
This will allow you to run commands from inside the container.
You can choose sh, bash, or any other shell you prefer.
docker exec -it container1 sh

Life-cycle difference between docker run and docker start

I have a fundamental question about container life cycle.
For example, I run the following commands:
Create new ubuntu container and run the bash command
docker run -it ubuntu bash
In the container's bash
exit
The new container will be in state EXITED
docker ps -a
Then I use docker start to restart the container
docker start xxxx(container name)
docker exec -it xxxx(container name) /bin/bash
In the restarted container's bash
exit
The restarted container is still running
docker ps -a
May I know the reason behind this behavior? Thank you!
With the docker run command:
docker run -it ubuntu bash
the container is started with the execution of the bash command, so when you exit from bash, the container also exits, because bash is the main process running inside the container.
However with the docker exec command:
docker exec -it xxxx(container name) /bin/bash
the container is already running the command defined by its CMD/ENTRYPOINT, and bash is executed as a separate, additional process. So exiting from the bash started after docker start ends only that bash process, while the main process continues and the container stays running.
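A small sketch to make the difference visible; the container name lifecycle-demo is just an example:
docker run -itd --name lifecycle-demo ubuntu bash   # bash becomes the main process (PID 1)
docker top lifecycle-demo                           # shows that single bash process
docker exec -it lifecycle-demo bash                 # starts a second, separate bash
exit                                                # ends only the exec'd bash
docker ps --filter name=lifecycle-demo              # the container is still Up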

Docker run command without -it option

Why, when I run the command
docker run ubuntu
without the '-it' option, is it not possible to interact with the created container, even when running the start command with the -a -i options:
docker start -a -i CONTAINER_ID
or when I run
docker start CONTAINER_ID
the container simply has the status "Exit (0) 4 seconds ago".
But when I run
docker run -it ubuntu
I can use the bash shell of ubuntu using 'docker start -a -i'.
When you run docker run without -it, it still runs the container, but since you haven't attached any input or terminal to it, the default command (bash for the ubuntu image) finishes right away and the container exits.
If you try:
docker run ubuntu /bin/bash -c "echo 'hello'";
It'll run ubuntu, then the command, and then finish, because there is no reason for it to be kept alive afterwards.
-i says keep stdin open and lets you work within the terminal (allow it to be interactive), but if you type exit, you're done and the container stops.
-t allocates a pseudo-terminal within the docker container (see: What are pseudo terminals (pty/tty)?).
-it allows you to see the terminal in the docker instance and interact with it.
Additionally you can use -d to run it in the background and then get to it afterwards.
Ex:
docker run -it -d --name mydocker ubuntu;
docker exec -it mydocker /bin/bash;
TLDR is -it allows you connect a terminal to interactively connect to the container.
If you run docker run --help, you can find the details about docker run options.
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
Options:
...
-i, --interactive Keep STDIN open even if not attached
...
-t, --tty Allocate a pseudo-TTY
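A short sketch of the difference; the container names are only illustrative:
docker run --name no-flags ubuntu         # the default bash exits right away (no stdin, no tty)
docker ps -a --filter name=no-flags       # STATUS shows Exited (0)
docker run -it --name with-flags ubuntu   # drops you into an interactive bash prompt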

Running dev container exec bash not responding

I have the following Dockerfile:
FROM elixir:1.4.5
COPY . /
RUN mix compile
CMD echo "Application started" && elixir --name $MY_POD_NAMESPACE#$MY_POD_IP --no-halt --cookie $ERLANG_COOKIE -S mix run
It starts and runs well, but when I try either attach or exec XXX bash it does not respond at all.
The two commands are different:
docker attach containerid attaches you to the main process that is running in the container; if that process doesn't output anything further, you will not see anything. You should rather use docker logs containerid to see the output of your code.
docker exec containerId bash means you want to get a bash process inside the container. This command would execute and end immediately, as you have not specified the interactive and tty flags. Update it to use it as below:
docker exec -it containerId bash
And you should be able to get a bash. If it still doesn't work, use docker stats containerId to see what kind of CPU and memory usage your container has.
If docker exec -it container-id bash doesn't work, then try docker exec -it container-id sh
Sometimes the bash command doesn't work, for example when bash is not installed in the image.

How do I run a command on an already existing Docker container?

I created a container with -d so it's not interactive.
docker run -d shykes/pybuilder bin/bash
I see that the container has exited:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6c45e8cc5f0 shykes/pybuilder:latest "bin/bash" 41 minutes ago Exited (0) 2 seconds ago clever_bardeen
Now I would like to run occasional commands on the machine and exit. Just to get the response.
I tried to start the machine. I tried attaching. I thought I could call run with a container, but that does not seem to be allowed. Using start just seems to run and then exit quickly.
I'd like to get back into interactive mode after exiting.
I tried:
docker attach d6c45e8cc5f0
But I get:
2014/10/01 22:33:34 You cannot attach to a stopped container, start it first
But if I start it, it exits anyway. Catch 22. I can't win.
In October 2014 the Docker team introduced the docker exec command: https://docs.docker.com/engine/reference/commandline/exec/
So now you can run any command in a running container just knowing its ID (or name):
docker exec -it <container_id_or_name> echo "Hello from container!"
Note that the exec command works only on an already running container. If the container is currently stopped, you need to first run it with the following command:
docker run -it -d shykes/pybuilder /bin/bash
The most important thing here is the -d option, which stands for detached. It means that the command you initially provided to the container (/bin/bash) will be run in the background and the container will not stop immediately.
Your container will exit as soon as the command you gave it ends. Use the following options to keep it alive:
-i Keep STDIN open even if not attached.
-t Allocate a pseudo-TTY.
So your new run command is:
docker run -it -d shykes/pybuilder bin/bash
If you would like to attach to an already running container:
docker exec -it CONTAINER_ID /bin/bash
In these examples /bin/bash is used as the command.
So I think the answer is simpler than many misleading answers above.
To start an existing container which is stopped
docker start <container-name/ID>
To stop a running container
docker stop <container-name/ID>
Then to login to the interactive shell of a container
docker exec -it <container-name/ID> bash
To start an existing container and attach to it in one command
docker start -ai <container-name/ID>
Beware, this will stop the container on exit. But in general, you need to start the container, attach and stop it after you are done.
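Putting that workflow together, with the container name as a placeholder:
docker start my_container
docker exec -it my_container bash
# ... work inside the shell, then exit ...
docker stop my_container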
To expand on katrmr's answer, if the container is stopped and can't be started due to an error, you'll need to commit it to an image. Then you can launch bash in the new image:
docker commit [CONTAINER_ID] temporary_image
docker run --entrypoint=bash -it temporary_image
Some of the answers here are misleading because they concern containers that are running, not stopped.
Sven Dowideit explained on the Docker forum that containers are bound to their process (and Docker can't change the process of a stopped container, seemingly due at least to its internal structure: https://github.com/docker/docker/issues/1437). So, basically the only option is to commit the container to an image and run it with a different command.
See https://forums.docker.com/t/run-command-in-stopped-container/343
(I believe the "ENTRYPOINT with arguments" approach wouldn't work either, since you still wouldn't be able to change the arguments to a stopped container.)
I had to use bash -c to run my command:
docker exec -it CONTAINER_ID bash -c "mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql mysql"
Creating a container and sending commands to it, one by one:
docker create --name=my_new_container -it ubuntu
docker start my_new_container
// ps -a says 'Up X seconds'
docker exec my_new_container /path/to/my/command
// ps -a still says 'Up X+Y seconds'
docker exec my_new_container /path/to/another/command
If you are trying to run a shell script, you need to run it with bash:
docker exec -it containerid bash -c /path/to/your/script.sh
This is a combined answer I made up using the CDR LDN answer above and the answer I found here.
The following example starts an Arch Linux container from an image, and then installs git on that container using the pacman tool:
sudo docker run -it -d archlinux /bin/bash
sudo docker ps -l
sudo docker exec -it [container_ID] script /dev/null -c "pacman -S git --noconfirm"
That is all.
Pipe a command to docker exec bash stdin
Must remove the -t for it to work:
echo 'touch myfile' | docker exec -i CONTAINER_NAME bash
This can be more convenient than using CLI options sometimes.
Tested with:
docker run --name ub16 -it ubuntu:16.04 bash
then on another shell:
echo 'touch myfile' | docker exec -i ub16 bash
Then on first shell:
ls -l myfile
Tested on Docker 1.13.1, Ubuntu 16.04 host.
I would like to note that the top answer is a little misleading.
The issue with executing docker run is that a new container is created every time. However, there are cases where we would like to revisit old containers or not take up space with new containers.
(Given clever_bardeen is the name of the container created...)
In OP's case, make sure the docker container is running first by executing the following command:
docker start clever_bardeen
Then, execute the docker container using the following command:
docker exec -it clever_bardeen /bin/bash
I usually use this:
docker exec -it my-container-name bash
to continuously interact with a running container.
Assuming the image is using the default entrypoint /bin/sh -c, running /bin/bash will exit immediately in daemon mode (-d). If you want this container to run an interactive shell, use -it instead of -d. If you want to execute arbitrary commands in a container that is usually executing another process, you might want to try nsenter or nsinit. Have a look at https://blog.codecentric.de/en/2014/07/enter-docker-container/ for the details.
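A minimal nsenter sketch, run on the host as root; my_container is a placeholder for the container name:
PID=$(docker inspect --format '{{.State.Pid}}' my_container)
sudo nsenter --target "$PID" --mount --uts --ipc --net --pid sh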
Unfortunately it is impossible to override ENTRYPOINT with arguments with docker run --entrypoint to achieve this goal.
Note: you can override the ENTRYPOINT setting using --entrypoint, but
this can only set the binary to exec (no sh -c will be used).
For Mac:
$ docker exec -it <container-name> sh
if you want to connect as root user:
$ docker exec -u 0 -it <container-name> sh
Simple answer: start and attach at the same time. In this case you are doing exactly what you asked for.
docker start <CONTAINER_ID/CONTAINER_NAME> && docker attach <CONTAINER_ID/CONTAINER_NAME>
make sure to change <CONTAINER_ID/CONTAINER_NAME>
I am running a Windows container and I need to look inside the docker container for the files and folders that were created and copied.
In order to do that, I used the following Docker ENTRYPOINT command to get the command prompt running inside the container, or to attach to the container.
ENTRYPOINT ["C:\\Windows\\System32\\cmd.exe", "-D", "FOREGROUND"]
That helped me both to attach the command prompt to the container and to keep the container alive. :)
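With the container kept alive this way, a shell can then be opened inside it; a sketch, with the container name as a placeholder:
docker exec -it my_windows_container cmd.exe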
# docker exec -d container_id command
Ex:
# docker exec -d xcdefrdtt service jira stop
A quick way to resume and access the most recently exited container:
docker start -a -i `docker ps -q -l`
An easy solution that solved a similar problem for me:
docker run --interactive --tty <name_of_image>
