I know I can use the Dockerfile's CMD, RUN, and ENTRYPOINT instructions to run a script when the container starts, but how can I make the container run a script every time it restarts on failure?
The ENTRYPOINT runs every time a container starts or restarts. It's common practice to put startup configuration in a shell script that then execs the application's "true" entrypoint at the end. (See What purpose does using exec in docker entrypoint scripts serve? for why exec is important.)
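As a minimal sketch of such a wrapper (the script name and the java command are placeholders, not from the question):

#!/bin/sh
# docker-entrypoint.sh: runs on every container start and every restart
echo "container (re)started at $(date)"

# do any per-start setup here, then hand PID 1 over to the real application
# so it receives signals directly
exec java -jar /app/app.jar "$@"

In the Dockerfile you would then point the entrypoint at this script, e.g. ENTRYPOINT ["/docker-entrypoint.sh"].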
Remember, Docker is really just a wrapper around filesystem, process, and network namespacing. It can't restart your container in any way other than rerunning the same process it started in the first place.
You can try it yourself with an invocation something like this:
docker run -d --restart=always --entrypoint=sh alpine -c "sleep 5; echo Exiting; exit"
If you docker logs -f that container, you'll see Exiting come out every 5 seconds. Note that the container stopping will also stop the log following, so you'll have to run it again to see the next restart.
I'm using Windows Server 2016 to spin up windowsservercore Docker containers and am noticing what I think is incorrect behavior: the container exits very quickly even though it should be sleeping for over 15 minutes. I have the following Dockerfile:
FROM microsoft/windowsservercore
RUN powershell Start-Sleep -s 1000
I build the image with docker build -t mybuild . while in the same directory as the Dockerfile. I then run the container with docker run mybuild and it exits very quickly.
Looking at this answer, it seems that a sleep should keep the container alive. That answer used Linux, so I'm not sure if that matters, but I feel like the sleep process is running in either case, and that's what determines whether the container exits by default.
If I use interactive mode and/or a tty (I tried all 3 combinations, e.g. docker run -it mybuild), it stays up until I exit the container's shell.
Looking at the Docker docs, run executes the container in the foreground (as with -it), although I don't understand why that would matter, since the process should still be running regardless of whether the container is detached or not. I also tried running it in detached mode with docker run -d, and it exits very quickly in that case as well.
I also tried running another command after the sleep, but that still didn't work. The Dockerfile then looked like:
FROM microsoft/windowsservercore
RUN powershell Start-Sleep -s 1000
RUN echo "hello" > C:\hello.txt
I looked at the Dockerfile reference for RUN and it says that RUN in shell form executes the command using cmd /S /C on Windows. So I tried running this from my normal shell on my host Windows machine exactly as the Dockerfile specifies (cmd /S /C powershell Start-Sleep -s 1000) and verified that it works as expected.
What am I not understanding here? I'm new to Docker and trying to learn, but I can't figure out what's going on from searching the internet and reading the docs.
I think there is some confusion about the RUN command in the Dockerfile: it doesn't say what is going to run when the container starts; it's just a command to run when building the image (for example, an installation command).
I think you are looking for one of the two options:
The CMD line in the Dockerfile (doc):
FROM microsoft/windowsservercore
CMD ["powershell", "Start-Sleep", "-s", "1000"]
or you can specify the command on the command line:
docker run -d mybuild <your command>
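For example, keeping the image name mybuild from the question, either of these should keep the container alive for roughly 1000 seconds (the first relies on the CMD above, the second overrides the command at run time):

docker run -d mybuild
docker run -d mybuild powershell Start-Sleep -s 1000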
I use docker build -t iot . to build an image.
My Dockerfile is:
FROM centos
USER root
ADD jdk1.8.0_101.tar.gz /root
COPY run.sh /etc
RUN chmod 755 /etc/run.sh
CMD "/etc/run.sh"
My run.sh is:
#!/bin/bash
echo "aaaa"
I use docker run -itd iot to run a container, but I find my container is not running.
What should I do?
Your image builds and runs correctly. You just need to remove the -d (detached) flag from docker run, otherwise the command returns immediately and runs your container in the background. You can see that it actually exited with code zero according to the status column in docker ps -a.
You can corroborate this by running docker logs d63a (your container ID). You should see aaaa.
Your description is inaccurate. When you docker run the container, it started normally, printed aaaa, and then exited.
So I guess what you're really asking is "why can't my container keep running, like a daemon process?" That's because you're executing a shell script, which is a one-shot task. Change the CMD line in your Dockerfile to CMD "bash" and your container will then not exit.
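A minimal sketch of that change, keeping the rest of the original Dockerfile and using the exec form of CMD (the container still needs to be started with -itd, as in the question, so that bash has a terminal to wait on):

FROM centos
USER root
ADD jdk1.8.0_101.tar.gz /root
COPY run.sh /etc
RUN chmod 755 /etc/run.sh
# bash keeps waiting for input instead of exiting, so the container stays up
CMD ["bash"]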
A jar needs to be deployed in Docker. I know how to write a Dockerfile for a running jar.
This jar is a command-line application. It has several arguments and will need to be run several times with different arguments.
For example, it has arg1 and arg2.
A user can run it with arg1=A, arg2=B, then run it again with arg1=A2 and no arg2.
Docker cannot run this the way I have it: I have to specify these arguments when the container runs, and the container stops once the jar's main task finishes. I need to start another container to run the jar again.
I don't think this is friendly. My question is: in this case, is Docker not a suitable way to deploy?
You can configure the container to run a script that will never end just to keep the container running.
As an example you can include the following in the Dockerfile:
RUN printf '#!/bin/sh\nsleep infinity\n' > /bootstrap.sh && chmod +x /bootstrap.sh
You can start the container in the following way:
docker run -d --name <container-name> <image> ./bootstrap.sh
To run the jar you can use:
docker exec <container-name> java [arguments]
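For example, assuming the jar was copied into the image at /opt/app/app.jar and the container is named jar-runner (both names are hypothetical):

docker run -d --name jar-runner <image> ./bootstrap.sh
docker exec jar-runner java -jar /opt/app/app.jar --arg1=A --arg2=B
docker exec jar-runner java -jar /opt/app/app.jar --arg1=A2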
Bearing in mind that it is a Java program and therefore OS agnostic, you don't gain a huge benefit from running it inside a container, but it is possible.
You can use a simple "hack" for this purpose... But I do not think this is the best solution.
Start a container with a process that is not supposed to end soon, e.g. bash. Also, let's say you want to use the latest ubuntu image. Then you can start the container with:
$ docker run -d -it ubuntu bash
This starts an ubuntu container and keeps it running detached (-d) in the background.
Let's look up the container's name:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
59104211e795 ubuntu "bash" 2 seconds ago Up 1 seconds jolly_hawking
It is jolly_hawking. Your commands (here: ls /) can then be sent to the container with this command:
$ docker exec jolly_hawking ls /
But that is definitely not the best solution. Maybe just keep this as an example of how this might work for you and of how Docker containers work.
I tried to start an exited container as follows.
I listed all available containers using docker ps -a. It listed the following:
I entered the following commands to start the container, which is in the exited state, and to enter the terminal of that container.
docker start 79b3fa70b51d
docker exec -it 79b3fa70b51d /bin/sh
It is throwing the following error.
FATA[0000] Error response from daemon: Container 79b3fa70b51d is not running
But when I start the container using docker start 79b3fa70b51d, it prints the container ID as output, which is what normally happens when everything works.
What is the cause of this error?
By default, a Docker container will exit immediately if it does not have any task running inside it.
To keep the container running in the background, try running it with the --detach (or -d) argument.
For examples:
docker pull debian
docker run -t -d --name my_debian debian
e7672d54b0c2
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e7672d54b0c2 debian "bash" 3 minutes ago Up 3 minutes my_debian
#now you can execute command on the container
docker exec -it my_debian bash
root@e7672d54b0c2:/#
Container 79b3fa70b51d seems to only do an echo.
That means it starts, echoes, and then exits immediately.
The next docker exec command wouldn't find it running in order to attach itself to that container and execute any command: it is too late. The container has already exited.
The docker exec command runs a new command in a running container.
The command started using docker exec will only run while the container's primary process (PID 1) is running
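A quick way to see this, using a throwaway alpine container (the name and the timings are only illustrative):

docker run -d --name exec-demo alpine sleep 300
docker exec -it exec-demo sh      # works while sleep (PID 1) is still running
docker stop exec-demo
docker exec -it exec-demo sh      # now fails: "Container ... is not running"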
If it's not possible to start the main process again (for long enough), there is also the possibility to commit the container to a new image and run a new container from this image. While this is not the usual best practice workflow (the new image is not repeatable), I find it really useful to debug a failing script once in a while.
docker exec -it 6198ef53d943 bash
Error response from daemon: Container 6198ef53d9431a3f38e8b38d7869940f7fb803afac4a2d599812b8e42419c574 is not running
docker commit 6198ef53d943
sha256:ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker run -it ace7ca65e6e bash
root@72d38a8c787d:/#
This happens with images whose script does not launch a service awaiting requests, so the container exits at the end of the script.
This is typically the case with most base OS images (centos, debian, etc.), or also with the node images.
Your best bet is to run the image in interactive mode. Example below with the node image:
docker run -it node /bin/bash
Output is
root@cacc7897a20c:/# echo $SHELL
/bin/bash
First of all, we have to start the Docker container:
ankit@ankit-HP-Notebook:~$ sudo docker start 3a19b39ea021
3a19b39ea021
After that, check the docker container:
ankit@ankit-HP-Notebook:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a19b39ea021 coreapps/ubuntu16.04:latest "bash" 13 hours ago
Up 9 seconds ubuntu1
455b66057060 hello-world "/hello" 4 weeks ago
Exited (0) 4 weeks ago vigorous_bardeen
Then execute by using the command below:
ankit@ankit-HP-Notebook:~$ sudo docker exec -it 3a19b39ea021 bash
root@3a19b39ea021:/#
Here is what worked for me.
Get the container ID and restart.
docker ps -a --no-trunc
ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker restart ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker run -it --entrypoint /bin/bash <imageid>
This was posted by L0j1k in the post below and worked for me.
How do I get into a Docker container's shell?
Use these commands:
> docker container ls
> docker image ls
Check your container ID and note it down. Here my container ID is "6c929ca002da"; you have to use your own container ID instead of mine.
> docker start 6c929ca002da
Here our container is stopped, so we have to start it first using the container ID.
6c929ca002da is my container ID.
> docker exec -it 6c929ca002da bash
After running this command, you can see your container in running mode, like this:
root@6c929ca002da:/#
Here I am in root mode; you can go to root mode by using the command:
sudo su
The reason is just what the accepted answer said. I'll add some extra information, which may provide a further understanding of this issue.
As far as I know, the status of a container can be Created, Running, Stopped, Exited, Dead, and others.
When we execute docker create, the Docker daemon creates a container with a status of Created.
When we execute docker start, the Docker daemon starts an existing container, whose status may be Created or Stopped.
When we execute docker run, the Docker daemon does it in two steps: docker create and docker start.
When we execute docker stop, the Docker daemon obviously stops the container, so the container ends up in the Stopped status.
Now the most important point: a container is really just a wrapper around a long-running process. When that process exits, the container holding the process exits too, and the status of the container becomes Exited.
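You can watch these states yourself with a throwaway container (the name lifecycle-demo is arbitrary):

docker create --name lifecycle-demo alpine sleep 60   # docker ps -a shows Created
docker start lifecycle-demo                           # docker ps shows Up
docker stop lifecycle-demo                            # docker ps -a shows Exited
docker ps -a --filter name=lifecycle-demo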
When does the process exit? In other words, what is the process and how did we start it?
The answer is the CMD in a Dockerfile, or the command in the following expression, which is bash by default in some images, e.g. ubuntu:18.04.
docker run ubuntu:18.04 [command]
docker run -it <image_id> /bin/bash
Run in interactive mode, executing the bash shell.
For anyone attempting something similar using a Dockerfile...
Running in detached mode won't help. The container will always exit (stop running) if the command is non-blocking; this is the case with bash.
In this case, a workaround would be:
1. Commit the resulting image:
(container_name = the name of the container you want to base the image off of,
image_name = the name of the image to be created)
docker commit container_name image_name
2. Use docker run to create a new container using the new image, specifying the command you want to run. Here, I will run "bash":
docker run -it image_name bash
This would get you the interactive login you're looking for.
Here's a solution for when the Docker container exits normally and you can edit the Dockerfile.
Generally, when a Docker container is run, an application is served by running a command. From the Dockerfile reference:
Both CMD and ENTRYPOINT instructions define what command gets executed when
running a container. ...
Dockerfile should specify at least one of CMD or ENTRYPOINT commands.
When you build an image and do not specify any command with CMD or ENTRYPOINT, the base image's CMD or ENTRYPOINT command is executed.
For example, the official Ubuntu Dockerfile has CMD ["/bin/bash"] (https://hub.docker.com/_/ubuntu). Now, the /bin/bash command can accept input, and the docker run -it IMAGE_ID command attaches STDIN to the container. The result is that you get an interactive terminal and the container keeps running.
When a command is specified with CMD or ENTRYPOINT in the Dockerfile, this command gets executed when running the container. Now, if this command can finish without requiring any input, it will finish and the container will exit; docker run -it IMAGE_ID will NOT provide an interactive terminal in this case. An example would be the Docker image built from the Dockerfile below:
FROM ubuntu
ENTRYPOINT echo hello
If you need to get to the terminal of this image, you will need to keep the container running by modifying the ENTRYPOINT command:
FROM ubuntu
ENTRYPOINT echo hello && sleep infinity
After running the container normally with docker run IMAGE_ID, you can just go to another terminal and use docker exec -it CONTAINER_ID bash to get the container's terminal.
Perhaps this is too late for this active community, but there are many reasons why a container may not execute correctly and may exit, with or without writing a console message. For all the newbies making Node.js containers, I recommend changing the Dockerfile, removing any CMD and ENTRYPOINT you may have, and adding only ENTRYPOINT ["/bin/sh"] (see my attached test Dockerfile example). Then rebuild the Docker image and run it with the command:
docker run -it --rm your_named_image:tag
Voilà, you will get a shell inside the container. Then you can test your app by typing the command yourself, e.g. node app.js, and see what is happening. Once you see everything is OK, you can change your Dockerfile again, replacing the ENTRYPOINT ["/bin/sh"] with your own, e.g. ["node", "app.js"] or whatever. Always consider the previous answers to this post: when the app inside the container finishes, it will stop the running container.
Here is an example for my "test" Dockerfile:
FROM node:16.4.0-alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json","package-lock.json*", "./"]
RUN npm install --production
COPY ./dist .
ENTRYPOINT ["/bin/sh"]
NOTE: My source files for the app (.js) on the local computer are in the directory ./dist, so I have to copy them into the container, as you can see.
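With that test Dockerfile, a typical session might look like this (the image tag is just an example):

docker build -t my-node-app:test .
docker run -it --rm my-node-app:test
# inside the container's shell, start the app by hand:
node app.js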
In my case, I changed certain file names and directory names in the parent directory of the Dockerfile, due to which the container could not find the required parameters to start again.
After renaming them back to the original names, the container started like butter.
I have a different take on this. I could do a docker ps and see that there was a Docker container running; I even tried to restart it, but as soon as I tried to get a session for it with New-PSSession -ContainerId $containerId -RunAsAdministrator, it would error out, saying:
##[error]New-PSSession : The input ContainerId xxx does not exist,
##[error]or the corresponding container is not running.
My problem was that I was running as Network Service, and it did not have enough permissions to see the container, even though I had given it permissions to run Docker commands (with the docker security group configuration).
I didn't know how to enable working with containers for that account, so I had to revert to running it as an admin user instead.
In my case, I had previously killed the running container with:
sudo docker kill testdeb
So when I tried to exec into the container, I got the error:
Error response from daemon: Container fcc29295fe78a425155c533506f58fc5b30a50ee9eb85c21031e8699b3f6ff01 is not running
The solution was to start the container with:
sudo docker start testdeb
Now I have a running container:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fcc29295fe78 debian "bash" 9 hours ago Up 11 seconds testdeb
which wasn't running previously.
The approach below, which I tried, works in a Windows VS Code environment.
docker run --name yourcontainer -p 3306:3306 -e MYSQL_ROOT_PASSWORD=your_password -d mysql
I see a lot of similar answers, but adding the port mapping -p 3306:3306 is what made the status show as up and running for me. You can verify this by using the command docker ps -a.