I run a web app using sudo docker compose up, but it has a container that recreates infinitely. Therefore, I want to remove that container and build it again. However, when I run sudo docker container rm f8df3e233d00 (f8df3e233d00 is the ID of the container, which I listed with sudo docker ps), it does nothing: it just prints a new line and hangs.
How can I remove that container?
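A common way around this (a sketch, not an answer from the thread): while Compose is up it will keep recreating the container, so stop the whole stack before removing anything, then rebuild:
sudo docker compose down           # stops and removes the stack's containers
sudo docker compose up --build     # rebuilds the images and recreates the containers
Alternatively, sudo docker rm -f f8df3e233d00 force-removes a container that refuses to die, but Compose will recreate it as long as the stack is running.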
Related
I start my docker container with a name, like this:
docker run --name regsvc my-registrationservice
If I call docker stop regsvc, the next time I run the above command it fails. I have to run these commands first.
docker kill regsvc
docker system prune
That seems excessive. Is there a better way to stop the container and restart it?
Thanks,
Matt
When you stop a container, you can still see it with:
docker ps -a
Now the container is no longer running, but it still exists. So you only need to restart it if you want it to work again:
docker restart regsvc
The command docker run creates a new container from your image. So if you want to use docker run again, you first need to remove your container (after stopping it):
docker rm regsvc
docker run --name regsvc my-registrationservice
I believe you want to run a new container every time you issue docker run, so it would be better for you to use the --rm flag:
docker run --rm --name regsvc my-registrationservice
This will remove the container automatically when it exits. This is better if you don't need to keep the container's data.
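To see the effect (using the names from the question; -d is added here so the container runs in the background):
docker run --rm -d --name regsvc my-registrationservice
docker stop regsvc
docker ps -a
After the stop, regsvc no longer appears in the list, so the name is free for the next docker run.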
As suggested by @trong-lam-phan, you could restart your existing container using
docker restart regsvc
I am trying to use an image that I pulled from Docker Hub. However, I need data from the host to use some programs loaded into the image. I created a container with this:
sudo docker run --name="mdrap" -v "/home/ubuntu/profile/reads/SE:/usr/local/src/volume" sigenae/drap
It appears that everything works, and then I start the container:
sudo docker start mdrap
but when I check the running containers it is not listed there, and if I try to open /bin/bash in the container it tells me the container is not running. I am a beginner with docker and am only trying to use an image to run programs with all the required dependencies. What am I doing wrong?
docker start only starts a stopped container. It's not necessary after a docker run (but it is after a docker create, as in the documentation).
A container stays up as long as its main process is running.
As soon as the main process stops, the container stops.
The main process of a container can be one of the following (see the sketch after this list):
the ENTRYPOINT, if defined
the CMD, if there is no ENTRYPOINT and no command-line argument
the command-line argument
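As a minimal illustration (a hypothetical Dockerfile, with myimage as a placeholder name):
FROM ubuntu:22.04
# the main process when no command-line argument is given:
CMD ["/bin/bash"]
Built as myimage, these two runs get different main processes:
docker run myimage            # main process is /bin/bash, from CMD
docker run myimage sleep 60   # the command-line argument overrides CMD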
In your case, as you don't pass any command-line argument (after the image name on the docker run command) and the image only defines a CMD (= /bin/bash), your container tries to start a /bin/bash.
But, as you don't launch the container with --interactive/-i nor --tty/-t (again, as in the documentation), your process has nothing to interact with and stops (and the same happens on each start of this container).
So your solution is simply to follow the documentation:
docker create --name drap --privileged -v /home/ubuntu/profile/reads/SE:/usr/local/src/volume -i -t sigenae/drap /bin/bash
docker start drap
docker exec -i -t drap /bin/bash
Or even simpler:
docker run --name drap --privileged -v /home/ubuntu/profile/reads/SE:/usr/local/src/volume -i -t sigenae/drap /bin/bash
I need to write a Dockerfile that installs docker in container-a, because container-a needs to execute a docker command against container-b, which runs alongside container-a.
My understanding is you're not supposed to use "sudo" when writing the Dockerfile.
But I'm getting stuck: what user do I assign to the docker group? When you run docker exec -it, you are automatically root.
sudo usermod -a -G docker whatuser?
Also (and I'm trying this out manually inside container-a to see if it even works), you have to do a newgrp docker to activate the changes to groups. Any time I do that, I end up sudo'ing when I haven't sudo'ed. Does that make sense? The symptom is: I go to exit the container, and I have to exit twice (as if I had changed users).
What am I doing wrong?
If you are trying to run the containers alongside one another (not container inside container), you should mount the docker socket from the host system and execute commands to other containers that way:
docker run --name containera \
-v /var/run/docker.sock:/var/run/docker.sock \
yourimage
With the docker socket mounted, you can control docker on the host system.
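For that, container-a's image needs the docker CLI installed; a minimal sketch (the base image and package name are assumptions and vary by distribution):
FROM ubuntu:22.04
# install the docker client; the daemon itself stays on the host
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
Inside containera, commands like docker ps or docker exec containerb some-command then go to the host daemon through the mounted socket, and since docker exec drops you in as root by default, no docker group or newgrp juggling is needed.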
I'm experimenting with Docker, and I set up a Node App.
The app is in a Git repo in my Gogs container.
I want to keep all the code inside my container, so at the app root I have my Dockerfile.
I want to create a shell script to automatically rebuild my container and rerun it.
This script will be called later, through a "webhook container", during a Git push.
The Docker CLI has only a build and a run command, but both fail if an image or a container with that name already exists.
What is the best practice to handle this?
Remark: I don't want to keep my app sources on the host, updating only the source and restarting the container!
I like the idea that my entire app is a container.
You can remove docker containers and images before running build or run commands.
to remove all containers:
docker rm $(docker ps -a -q)
to remove all images:
docker rmi $(docker images -q)
to remove a specific container:
docker rm -f containerName
Then, after executing the relevant commands above, run your script. Your script will typically build, run, or pull as required.
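Putting it together, a rebuild-and-rerun script could look like this (myapp is a hypothetical image/container name):
#!/bin/sh
docker rm -f myapp 2>/dev/null    # remove the old container, if any
docker build -t myapp .           # rebuild the image from the Dockerfile in the repo
docker run -d --name myapp myapp  # start a fresh container from the new image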
I built a docker image from a Dockerfile. The build said it succeeded. But when I try to list docker containers with docker ps (I also tried docker ps -a), it shows an empty list. What is weird is that I'm still able to somehow push my docker image to Docker Hub by calling docker push "container name".
I wonder what's going on? I'm on Windows 7, and just installed the newest version of Docker Toolbox.
docker ps shows (running) containers. docker images shows images.
A successfully built docker image will appear in the list that docker images generates. But only a running container (which is an instance of an image) will appear in the list from docker ps (use docker ps -a to also see stopped containers). To start a container from your image, use docker run.
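To see the distinction (myimage is a placeholder, assuming its main process keeps running):
docker images                       # the built image shows up here
docker run -d --name test myimage   # create and start a container from the image
docker ps                           # now the running container is listed too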
For me, docker ps -a and docker images both returned an empty list even though I had many docker containers running. I tried rebooting the system with no luck. A quick sudo systemctl restart docker fixed this "bug".
try restarting
sudo systemctl restart docker.socket
sudo systemctl restart docker
You can run the command without the -d option, so the output is displayed.
It may be that the application failed to start.
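For example (myapp and myimage are placeholders):
docker run --name myapp myimage   # without -d, the application's output goes to your terminal
docker logs myapp                 # or inspect the output of an already-created container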
For me, the only thing that resolved the issue was to reinstall docker. Also, make sure that the disk is not full.
This is the command that I use, but it may vary depending on the version of docker already installed:
apt-get install --reinstall docker.io
If prompted, choose "yes" to automatically restart docker daemon
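To rule out a full disk before reinstalling, check the filesystem holding docker's data directory:
df -h /var/lib/docker   # /var/lib/docker is the default data root on Linux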
for Linux:
first, list all the running containers:
sudo docker ps
try restarting
sudo systemctl restart docker
remove the previous container with the same name, if there is any:
sudo docker rm docker_container_id
once again, run:
sudo docker run -d --name container_name image_name
This should work. Otherwise, uninstall docker and install it again.
In the Dockerfile, make sure the strings in the CMD instruction are wrapped in double quotes, not single quotes.
for example:
CMD [ "node" , 'index.js']   <- here is the mistake!
The correct one is:
CMD [ "node" , "index.js"]
This mistake will make the container run and exit immediately: with a single-quoted string the array is not valid JSON, so Docker falls back to the shell form and runs the line as a broken shell command instead of node.
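To check what a built image will actually run (myimage is a placeholder):
docker inspect --format '{{.Config.Cmd}}' myimage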