Docker container not running after creating with mounted volume - docker

I am trying to use an image that I pulled from Docker Hub. However, I need data from the host to use some programs loaded into the image. I created a container with this:
sudo docker run --name="mdrap" -v "/home/ubuntu/profile/reads/SE:/usr/local/src/volume" sigenae/drap
It appears that everything works, and then I start the container:
sudo docker start mdrap
but when I check the running containers it is not listed there, and if I try to load the container into /bin/bash it tells me the container is not running. I am a beginner with Docker and am only trying to use an image to run programs with all the required dependencies. What am I doing wrong?

docker start is only used to start a stopped container. It's not necessary after a docker run (but it is after a docker create, as in the documentation).
A container stays running as long as its main process is running.
As soon as the main process stops, the container stops.
The main process of a container can be either:
the ENTRYPOINT if defined
the CMD if no ENTRYPOINT and no command line argument
the command line argument
In your case, as you don't have any command line argument (after the image name on the docker run command) and the image only defines a CMD (=/bin/bash), your container tries to start /bin/bash.
But, as you don't launch the container with --interactive/-i or --tty/-t (again, see the documentation), the process has nothing to interact with and stops (and the same happens on each subsequent start of this container).
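If you want to confirm that this is what happened (a quick check, not part of the original workflow), the exited container and its exit code are still visible:
docker ps -a --filter "name=mdrap"
docker logs mdrap
docker inspect --format '{{.State.Status}} {{.State.ExitCode}}' mdrap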
So your solution is simply to follow the documentation:
docker create --name drap --privileged -v /home/ubuntu/profile/reads/SE:/usr/local/src/volume -i -t sigenae/drap /bin/bash
docker start drap
docker exec -i -t drap /bin/bash
Or even simpler:
docker run --name drap --privileged -v /home/ubuntu/profile/reads/SE:/usr/local/src/volume -i -t sigenae/drap /bin/bash

Related

What is the difference between "docker run -it" versus docker run without --detach?

I heard that when there is no --detach in the docker run options, my terminal is attached to the container. Is this the same as attaching a terminal with the docker run -it options? What is the difference?
You can start a docker container in detached mode with the -d option, so the container starts up and runs in the background. That means you start the container and can then use the console for other commands.
This example runs a container named test using the debian:latest image. The -it instructs Docker to allocate a pseudo-TTY connected to the container’s stdin; creating an interactive bash shell in the container.
docker run --name test -it debian
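To see the contrast in practice, here is a small sketch (the container names and the sleep command are just for illustration): a detached container keeps running in the background and you attach a shell to it later, while -it gives you the shell immediately.
# detached: the terminal stays free, the container runs in the background
docker run -d --name test-detached debian sleep 300
docker exec -it test-detached bash
# interactive: your terminal is attached to the container's bash from the start
docker run -it --name test-interactive debian bash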

re-running a script in a docker container

I have created a docker image that includes some python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command should create a new container and run it:
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, another container is created and exits at completion. But what is the correct usage of containers? Should I use restart, start or run (and then clean up exited containers afterwards)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
And the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are containers. So maybe I am misunderstanding the lifecycle of containers, or their usage.
edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your docker image and keep the scripts in it.
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -d --name my_container my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own docker image that starts from opencv:latest and give the command you run as the default command (CMD). The Dockerfile could look like:
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one command, I would take the approach of building an image that runs an unrelated command forever (like tail -f /dev/null; a sketch follows below). Then you can use
sudo docker exec -d <container-name> /bin/bash -c "<cmd-to-run>"
for each command.
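As a rough sketch of that keep-alive approach (the tag and container name opencv-keepalive are just example names; the script path comes from the question):
FROM opencv:latest
# keep a harmless foreground process as the main process so the container stays up
CMD ["tail", "-f", "/dev/null"]
Build and use it like this:
docker build -t opencv-keepalive .
docker run -d --name opencv-keepalive -v /host/folder:/container/folder opencv-keepalive
docker exec opencv-keepalive bash /extract-embeddings.sh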

What is the meaning of running container?

'docker exec' can only be used on a running container, but what is the meaning of a running container? Does that mean the container should be computing something, or is the issue about the [command] which I define for the container? Why is my TensorFlow container always in stopped status?
After I used 'docker run' to create a tensorflow container, the container stopped automatically. I need to restart it and then execute commands on it. Why can't the container always be running once I have created it?
docker run -it --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3
It then pops up a bash shell which I can use to control the container. But after I exit, the container stops itself. That means I can only see my container with docker ps -a, but not with docker ps. I have to restart the container if I want to use it again.
UPDATE1: If I want to create a container like a VM, I cannot use docker run with a one-off [command] like python ...; the container cannot be controlled any more after the command finishes. docker restart cannot start the container again, hence docker exec cannot be applied to it. Instead, using bash (or nothing) as the [command] creates a container that can be restarted and can therefore be used with docker exec.
UPDATE2: docker run -d -it can create a running container (but a bash shell won't pop up, not even with bash given as the command). Using docker exec -it container_name bash directly takes control of the running container again, without docker restart. This time, exiting the bash shell does not stop the container.
A container is running when there is an active process running inside it.
When you run this tensorflow container, it will exit as soon as there is no running process left inside it.
If you were to run
docker run -it --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3 bash
or
docker run -it --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3 python <python script name>
then the container would run the bash/python script as a process and therefore remain up whilst that process is running
View running containers with:
docker ps
See all containers (including stopped/exited ones) with:
docker ps -a
The difference between docker ps -a and docker ps is exactly what you are looking for:
From the documentation:
--all , -a Show all containers (default shows just running)
So
docker ps shows you only the running containers
docker ps -a also shows you the stopped ones
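You can also narrow the listing down with a status filter, for example (just an illustration of the same flags):
docker ps -a --filter "status=exited"    # only the stopped/exited containers
docker ps --filter "status=running"      # only the running ones (same as a plain docker ps)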
So probably, if you expect your container to be long-running (like it would be for a web server), then indeed your container's command could have an issue and is not keeping the container alive.
Also note that if you run your container with the -ti options, like you did, you get an interactive TTY attached to it.
--tty , -t Allocate a pseudo-TTY
--interactive , -i Keep STDIN open even if not attached
That basically means that, as soon as you exit that interactive context, your container will shut down.
Running it in detached mode, with the -d option, may be what you are looking for:
docker run -d --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3
Related documentation:
--detach , -d Run container in background and print container ID
https://docs.docker.com/engine/reference/commandline/run/
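Note that this particular image's default command is an interactive shell, so a plain -d run may still exit right away; as the question's UPDATE2 observes, keeping a TTY allocated is enough to hold that shell open (a small sketch building on that observation; the container name tf is arbitrary):
docker run -d -t --name tf --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3
docker exec -it tf bash    # attach a shell whenever needed; exiting it does not stop the container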

How can I keep docker container running?

I want to run multiple containers automatically and create something,
but some images, such as swarm, automatically stop after docker run or docker start.
I have already tried the following:
docker run -d swarm
docker run -d swarm /bin/bash tail -f /dev/null
docker run -itd swarm bash -c "while true; do sleep 1; done"
but 'docker ps' shows nothing, and I tried to build a Dockerfile by typing:
FROM swarm
ENTRYPOINT ["echo"]
and the image does not run, with this error message:
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"echo\\\": executable file not found in $PATH\"\n".
I can't understand this error... How can I keep the swarm container running?
(Sorry, my English is not good.)
Using -d is recommended because you can run your container with just one command and you don't need to detach the container's terminal by hitting Ctrl + P + Q.
However, there is a problem with the -d option: your container stops immediately unless the command keeps running in the foreground.
Docker requires your command to keep running in the foreground. Otherwise, it thinks that your application has stopped and shuts down the container.
The problem is that some applications do not run in the foreground.
In this situation, you can add tail -f /dev/null to your command.
By doing this, even if your main command runs in the background, your container doesn't stop because tail keeps running in the foreground.
docker run -d swarm tail -f /dev/null
docker ps now shows the container.
Now you can attach to your container by using docker exec container_name command
or
docker run -d swarm command tail -f /dev/null
First of all, you don't want to mix the -i and -d switches: you run the container either in interactive or in detached mode. In your case, detached mode:
docker run -d swarm /bin/bash tail -f /dev/null
There is also no need to allocate a TTY with the -t flag, since that is only needed in interactive mode.
You should have a look at the Docker run reference
A Docker container does two types of task: one performs its work and exits, the other keeps running in the background.
To run a docker container in the background, there are a few options.
Run it with a shell: docker run -it <image> /bin/bash
For a continuously running container: docker run -d -p 8080:8080 <image>, assuming the image exposes port 8080 and has a process listening on it (see the example below).
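For instance, with a web-server image that listens on a port (nginx here is only an illustration, not part of the original answer):
docker run -d -p 8080:80 --name web nginx
curl http://localhost:8080    # works because nginx keeps running in the foreground inside the container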
It's fine to do a tail on /dev/null, but why not make it do something useful?
The following command will reap orphan processes, so no zombie (defunct) processes are left floating around. Also, some init.d / restart scripts don't allow this.
exec sh -c 'while true; do wait; done'
You are right: docker run -itd swarm (without giving the container an argument such as bash -c "while true; do sleep 1; done") works fine. If you pass an argument to docker run, it runs that command and then terminates the container. If you want the container to run permanently, first start it with docker run -itd swarm and check that it is running with docker ps. Once the container is running, you can execute any command in it with docker exec -itd container_name command. Remember: only use commands that do not stop the container. Here, bash -c "while true; do sleep 1; done" stops the container (it is treated as a complete command that executes and terminates, and that kind of command terminates the container as well).
I hope this helps.
Basically this is the method, but your Docker image is swarm, which is different; I don't know much about the swarm image and I am not using it. After researching it a bit and running the swarm image, I found that it only accepts a handful of subcommands: create, list, manage, join, help. If we run the swarm image without a command, like docker run -itd swarm, it takes --help as the command. Sorry, but I don't know what the purpose of the swarm image is; for more usage details check https://hub.docker.com/_/swarm/ .
The docker run -itd image tail -f /dev/null approach I gave is not for the swarm image; it is for docker images like ubuntu, fedora and centos.
Just read up on the swarm image and why it is used.
If you still have an issue afterwards, post it at https://github.com/docker/swarm-library-image/issues
Thank you.
Have a container running:
docker run --rm -d --name=tmp ubuntu sleep infinity
Example of requesting a command from the dormant container:
docker exec tmp echo hello from container
notes:
--rm removes the container when it is stopped
-d runs the container in the background
--name=tmp name the container so you control how to denote it
ubuntu uses a light image in which to exec your commands
sleep infinity keeps the container dormant

Container is not running

I tried to start an exited container as follows.
I listed all available containers using docker ps -a.
I entered the following commands to start the container, which is in the exited state, and to enter the terminal of that container.
docker start 79b3fa70b51d
docker exec -it 79b3fa70b51d /bin/sh
It is throwing the following error.
FATA[0000] Error response from daemon: Container 79b3fa70b51d is not running
But when I start the container using docker start 79b3fa70b51d, it prints the container ID as output, which is the normal behaviour when everything works.
What is the cause of this error?
By default, a docker container will exit immediately if it does not have any task running inside it.
To keep the container running in the background, try to run it with the --detach (or -d) argument.
For examples:
docker pull debian
docker run -t -d --name my_debian debian
e7672d54b0c2
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e7672d54b0c2 debian "bash" 3 minutes ago Up 3 minutes my_debian
# now you can execute a command in the container
docker exec -it my_debian bash
root@e7672d54b0c2:/#
Container 79b3fa70b51d seems to only do an echo.
That means it starts, echoes, and then exits immediately.
The next docker exec command wouldn't find it running in order to attach itself to that container and execute any command: it is too late. The container has already exited.
The docker exec command runs a new command in a running container.
The command started using docker exec will only run while the container's primary process (PID 1) is running
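If you are not sure whether that primary process is still alive, you can check the container state directly (a quick diagnostic, not part of the original answer):
docker inspect --format '{{.State.Status}} (PID {{.State.Pid}})' 79b3fa70b51d
docker logs 79b3fa70b51d    # shows the echo output the container produced before it exited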
If it's not possible to start the main process again (for long enough), there is also the possibility to commit the container to a new image and run a new container from this image. While this is not the usual best practice workflow (the new image is not repeatable), I find it really useful to debug a failing script once in a while.
docker exec -it 6198ef53d943 bash
Error response from daemon: Container 6198ef53d9431a3f38e8b38d7869940f7fb803afac4a2d599812b8e42419c574 is not running
docker commit 6198ef53d943
sha256:ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker run -it ace7ca65e6e bash
root@72d38a8c787d:/#
This happens with images whose script does not launch a service awaiting requests; the container therefore exits at the end of the script.
This is typically the case with most base OS images (centos, debian, etc.), or also with the node images.
Your best bet is to run the image in interactive mode. Example below with the node image:
docker run -it node /bin/bash
Output is
root@cacc7897a20c:/# echo $SHELL
/bin/bash
First of all, we have to start the docker container
ankit@ankit-HP-Notebook:~$ sudo docker start 3a19b39ea021
3a19b39ea021
After that, check the docker container:
ankit@ankit-HP-Notebook:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a19b39ea021 coreapps/ubuntu16.04:latest "bash" 13 hours ago
Up 9 seconds ubuntu1
455b66057060 hello-world "/hello" 4 weeks ago
Exited (0) 4 weeks ago vigorous_bardeen
Then execute by using the command below:
ankit@ankit-HP-Notebook:~$ sudo docker exec -it 3a19b39ea021 bash
root@3a19b39ea021:/#
Here is what worked for me.
Get the container ID and restart it.
docker ps -a --no-trunc
ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker restart ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker run -it --entrypoint /bin/bash <imageid>
This was posted by L0j1k in the post below and worked for me:
How do I get into a Docker container's shell?
Use the commands:
> docker container ls
> docker image ls
Check your container ID and note it down. Here my container ID is "6c929ca002da"; you have to use your own ID instead of mine.
> docker start 6c929ca002da
Here our container is stopped; we have to start it first using its ID.
6c929ca002da is my container ID.
> docker exec -it 6c929ca002da bash
After running this command you can see that you are inside the running container, with a prompt like this:
root@6c929ca002da
Here I am using root mode; switch to root mode by using the command:
sudo su
The reason is just what the accepted answer said. I'll add some extra information, which may provide further understanding of this issue.
The statuses of a container include Created, Running, Stopped, Exited, Dead and others, as far as I know.
When we execute docker create, the docker daemon creates a container with the status Created.
When we execute docker start, the docker daemon starts an existing container whose status may be Created or Stopped.
When we execute docker run, the docker daemon does it in two steps: docker create and docker start.
When we execute docker stop, the docker daemon obviously stops the container, so it ends up in the Stopped status.
Most importantly, a container essentially exists to hold a long-running process. When that process exits, the container holding it exits too, and the status of the container becomes Exited.
When does the process exit? In other words, what is the process and how did we start it?
The answer is the CMD in a Dockerfile, or the command in the following expression, which is bash by default in some images, e.g. ubuntu:18.04.
docker run ubuntu:18.04 [command]
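A small walk-through of those transitions (the container name lifecycle-demo and the sleep command are arbitrary choices for illustration):
docker create --name lifecycle-demo ubuntu:18.04 sleep 30
docker inspect -f '{{.State.Status}}' lifecycle-demo    # created
docker start lifecycle-demo
docker inspect -f '{{.State.Status}}' lifecycle-demo    # running
# about 30 seconds later, sleep finishes and the container stops on its own
docker inspect -f '{{.State.Status}}' lifecycle-demo    # exited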
docker run -it <image_id> /bin/bash
Run it in interactive mode, executing the bash shell.
For anyone attempting something similar using a Dockerfile...
Running in detached mode won't help. The container will always exit (stop running) if the command is non-blocking; this is the case with bash.
In this case, a workaround would be:
1. Commit the resulting image:
(container_name = the name of the container you want to base the image off of, image_name = the name of the image to be created)
docker commit container_name image_name
2. Use docker run to create a new container using the new image, specifying the command you want to run. Here, I will run "bash":
docker run -it image_name bash
This would get you the interactive login you're looking for.
Here's a solution when the docker container exits normally and you can edit the Dockerfile.
Generally, when a docker container is run, an application is served by running a command. From the Dockerfile reference,
Both CMD and ENTRYPOINT instructions define what command gets executed when
running a container. ...
Dockerfile should specify at least one of CMD or ENTRYPOINT commands.
When you build an image and do not specify any command with CMD or ENTRYPOINT, the base image's CMD or ENTRYPOINT command is executed.
For example, the official Ubuntu Dockerfile has CMD ["/bin/bash"] (https://hub.docker.com/_/ubuntu). Now, the /bin/bash command can accept input, and the docker run -it IMAGE_ID command attaches STDIN to the container. The result is that you get an interactive terminal and the container keeps running.
When a command with CMD or ENTRYPOINT is specified in the Dockerfile, this command gets executed when running the container. Now, if this command can finish without requiring any input, it will finish and the container will exit. docker run -it IMAGE_ID will NOT provide the interactive terminal in this case. An example would be the docker image built from the Dockerfile below:
FROM ubuntu
ENTRYPOINT echo hello
If you need to go to the terminal of this image, you will need to keep the container running by modifying the entrypoint command.
FROM ubuntu
ENTRYPOINT echo hello && sleep infinity
After running the container normally with docker run IMAGE_ID, you can just go to another terminal and use docker exec -it CONTAINER_ID bash to get the container's terminal.
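To round that out, a possible build-and-use sequence (the image tag hello-keepalive and container name hello are just examples):
docker build -t hello-keepalive .
docker run -d --name hello hello-keepalive
docker exec -it hello bash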
Perhaps too late for this active community, but there are many reasons why a container may not execute correctly and may exit, with or without writing a console message. For anyone new to building Node.js containers, I recommend changing the Dockerfile to remove any CMD and ENTRYPOINT you may have and adding only an ENTRYPOINT of ["/bin/sh"] (see my test Dockerfile example attached below). Then rebuild the Docker image and run it with the command:
docker run -it --rm your_named_image:tag
Voilà, you will be inside the container with a shell. Then you can test your app by typing the command yourself, e.g. node app.js, and see what is happening. Once you see that everything is OK, you can change your Dockerfile again, remove the ENTRYPOINT ["/bin/sh"], and use your own command instead, e.g. ["node","app.js"] or whatever. Always keep in mind the previous answers to this post: when the app inside the container finishes, the running container stops.
Here is an example for my "test" Dockerfile:
FROM node:16.4.0-alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json","package-lock.json*", "./"]
RUN npm install --production
COPY ./dist .
ENTRYPOINT ["/bin/sh"]
NOTE: My source files for the app (.js) are in the local ./dist directory, so I have to copy them into the container, as you can see.
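For completeness, the test cycle the answer describes would look roughly like this (the image tag my-node-test is just an example):
docker build -t my-node-test .
docker run -it --rm my-node-test
# inside the container's shell, run the app manually:
node app.js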
In my case, I had changed certain file and directory names in the parent directory of the Dockerfile, because of which the container could not find the required parameters to start again.
After renaming them back to the original names, the container started like butter.
I have a different take on this. I could do a docker ps and see that there was a docker container running; I even tried to restart it. But as soon as I tried to get a session for it with New-PSSession -ContainerId $containerId -RunAsAdministrator, it would error out, saying:
##[error]New-PSSession : The input ContainerId xxx does not exist,
##[error]or the corresponding container is not running.
My problem was that I was running as the network service account, which did not have enough permissions to see the container, even though I had given it permission to run docker commands (via the docker security group configuration).
I didn't know how to enable working with containers for that account, so I had to revert to running it as an admin user instead.
In my case, I had previously killed the running container with,
sudo docker kill testdeb
So when I ran docker exec on the container I got the error:
Error response from daemon: Container fcc29295fe78a425155c533506f58fc5b30a50ee9eb85c21031e8699b3f6ff01 is not running
The solution was to start the container with:
sudo docker start testdeb
Now I have a container running:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fcc29295fe78 debian "bash" 9 hours ago Up 11 seconds testdeb
This container wasn't running previously.
The approach below worked for me in a Windows VS Code environment.
docker run --name yourcontainer -p 3306:3306 -e MYSQL_ROOT_PASSWORD=your_password -d mysql
I see a lot of similar answers, but adding the port mapping -p 3306:3306 is what made the status go to Up and running. You can verify with the command docker ps -a.
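If you want to confirm the server actually came up (using the same container name; the mysql client call is only a convenience check):
docker logs yourcontainer
docker exec -it yourcontainer mysql -uroot -p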
