We are trying to get rid of an Artifactory container.
Nothing helps; things tried so far:
docker rm -f artifactory
docker update --restart=no artifactory
reboot
The container keeps starting up:
docker.bintray.io/jfrog/artifactory-oss:latest "/entrypoint-artifac…" 17 minutes ago Up 17 minutes 0.0.0.0:8081->8081/tcp artifactory
What options do we have?
We do not have a docker-compose yaml file
Thanks
Are you using Docker Desktop? I am wondering if you have a k8s Artifactory deployment; that could explain this behavior.
Try these commands
kubectl get deployments -A
If you find one, you can delete it using
# kubectl -n <namespace> delete deployment <deployment-name>
kubectl -n artifactory delete deployment artifactory
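If nothing shows up under deployments, it may be worth checking the other workload types too, since any controller that manages pods will recreate them. A minimal hedged check (the artifactory name is taken from the question):
# list every workload across all namespaces and filter for artifactory
kubectl get all -A | grep -i artifactory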
I was able to remove the container by hand:
docker ps --no-trunc   # to get the full container ID, while the daemon is still running
systemctl stop docker
cd /var/lib/docker/containers
rm -r <full container ID>
systemctl start docker
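For reference, the same procedure as one small sketch (assumes systemd and the default /var/lib/docker data root; the container name comes from the question above):
# capture the full container ID while the daemon is still running
FULL_ID=$(docker ps --no-trunc -qf name=artifactory)
# stop the daemon so it cannot respawn the container mid-removal
systemctl stop docker
# delete the container's on-disk state by hand
rm -r "/var/lib/docker/containers/${FULL_ID}"
systemctl start docker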
Thanks to you all for your help,
Bodo
Related:
We have a Linux machine with the following container:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6de660db9fdb kafka-exporter:v1.9.0 "/bin/kafka_export" 23 hours ago Up 17 hours kafka-export
We want to kill the container, so we did this:
docker kill 6de660db9fdb
but it hangs for a long time (more than an hour, and the container is still not killed).
Any advice on how to stop/kill the container?
You could try restarting the Docker service first:
sudo systemctl restart docker
And then removing the container with the force (-f) flag:
sudo docker rm -f 6de660db9fdb
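If even that hangs, a hedged fallback is to signal the container's main process directly on the host, bypassing the Docker daemon (the ID is the one from the question):
# ask docker for the host PID of the container's main process
PID=$(docker inspect --format '{{.State.Pid}}' 6de660db9fdb)
# send SIGKILL to that process directly
sudo kill -9 "$PID"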
I am trying to remove a container, but while docker-compose rm runs fine, docker ps afterwards still shows the container:
root@datafinance:/tmp# docker-compose rm
Going to remove tmp_zookeeper_1_31dd890a1cbf
Are you sure? [yN] y
Removing tmp_zookeeper_1_31dd890a1cbf ... done
root@datafinance:/tmp# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03b08e4ef0b3 confluentinc/cp-zookeeper:latest "/etc/confluent/dock…" 14 hours ago Up 14 hours docker_c_zookeeper_1_7c953dce7d69
Use docker-compose ps; it will only show containers launched by docker-compose up. If it shows no container, that means this container was not launched by this docker-compose.yaml.
And 'Error starting userland proxy: listen tcp 0.0.0.0:32181: bind: address already in use' means port 32181 is occupied, either by another docker container or by some other process. You can use docker rm -f $(docker ps -qa) to delete all containers, or use netstat -oanltp | grep 32181 to find which process is really occupying 32181.
Finally, if for any reason you are still not able to delete the container, you can use service docker restart or systemctl restart docker to bring all containers down, then repeat the docker rm above.
With the above steps done, you can use docker-compose up -d to bring your service up again.
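If netstat is not installed, a hedged alternative for the same port check is ss (the port number is taken from the error above):
# show which process is listening on 32181
sudo ss -ltnp | grep 32181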
Try this:
docker rm -f 03b08e4ef0b3
DANGER
You may also try this, but be aware that it will delete everything (containers, images, networks, ...):
docker system prune -a -f
When none of that helps, your last resort is to restart the Docker daemon:
service docker restart
and then repeat the steps...
I think what you are looking for is:
docker-compose down
which removes the containers after stopping them, according to this.
According to this, docker-compose rm removes the "stopped" containers. If your container(s) are running, I think it won't remove them, to prevent accidents.
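To make the difference concrete, a minimal sketch (run from the directory that holds the docker-compose.yaml):
# docker-compose rm only removes containers that are already stopped:
docker-compose stop && docker-compose rm -f
# docker-compose down stops and removes containers (and the default network) in one step:
docker-compose down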
When I launch Docker, it launches by default a few containers that I built in the past. (I used docker-compose at the time, but have since deleted the repo.)
I kill them, but each time I restart Docker, they are back.
What can I do?
I know there is something like "docker system prune",
but I would like to delete as little as possible.
You can try running docker ps -a to get a list of all containers, including the ones that are stopped rather than running.
You can then docker rm each container you do not wish to start on every Docker restart.
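For example, to remove only the containers created from one leftover image rather than everything (the image name here is hypothetical):
# force-remove every container created from old-project:latest
docker ps -aq --filter ancestor=old-project:latest | xargs docker rm -f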
Use docker ps to see what containers are running.
Use this command to force-remove all containers, running or stopped:
docker rm -f $(docker ps -a -q)
Use docker images to get list of all images.
Use docker rmi <image_id> to delete desired image.
I use docker-compose to create a bunch of containers and link them together. For some of the container definitions, I might have restart: always as the restart policy.
Now I have a postgres container that respawns back to life if stopped.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a8bb2b781630 postgres:latest "docker-entrypoint.s…" About an hour ago Up About an hour 5432/tcp dcat_postgres.1.z3pyl24kiq2n4clt0ua77nfx5
$ docker stop a8bb2b781630
a8bb2b781630
$ docker rm -f a8bb2b781630
a8bb2b781630
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93fa7b72c2ca postgres:latest "docker-entrypoint.s…" 12 seconds ago Up 4 seconds 5432/tcp dcat_postgres.1.oucuo5zg3y9ws3p7jvlfztflb
Using docker-compose down in the dir that started the service doesn't work either.
$ docker-compose down
Removing dcat_postgres_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ee7fb0e98cd postgres:latest "docker-entrypoint.s…" 13 seconds ago Up 5 seconds 5432/tcp dcat_postgres.1.jhv1q6vns1avakqjxdusmbb78
How can I kill a container and keep it from coming back to life?
EDIT: The container respawns even after restarting the Docker service.
Docker - 18.06.1-ce-mac73 (26764)
macOS High-Sierra, (10.13.6)
I figured it out. It turns out it was related to Docker swarm. I had experimented with it at some point without fully understanding what it is and does, and apparently it just stayed there.
All I had to do was:
docker swarm leave --force
and it worked like a head-shot to an actual zombie.
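Before leaving the swarm, it may help to confirm that swarm really is what keeps respawning the container; a hedged check:
# is this node part of a swarm? prints "active" or "inactive"
docker info --format '{{.Swarm.LocalNodeState}}'
# if active, list the services swarm will keep restarting
docker service ls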
Can you try an option like moby/moby issue 10032:
docker stop $(docker ps -a -q) &
docker update --restart=no $(docker ps -a -q) &
systemctl restart docker
(this assumes you have only one running container here: the one you cannot prevent from starting)
A docker rm -f should be enough though, unless you are using docker with a provisioning tool like Puppet.
As it turned out, another process (other than docker itself) was responsible for the container to restart (here docker swarm)
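Either way, inspecting the restart policy shows whether Docker itself would respawn the container (the container name here is hypothetical):
# prints e.g. "always", "unless-stopped" or "no"
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' my_container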
Update 2020/2021: for multiple containers, possibly without having to restart the docker daemon:
docker ps -a --format="{{.ID}}" | \
xargs docker update --restart=no | \
xargs docker stop
Check whether, as in the issue, you also need to remove the images (docker rmi --force $(docker images -qa)).
I am trying to install Stratos with Kubernetes in a testing environment to build Stratos. I downloaded the Kubernetes binaries and provisioned a Docker registry in the VAGRANT_KUBERNETES_SETUP folder (in 2.c.i on the page). But it reports 3 failed units (docker.service, setup-network-environment.service and docker.socket) when I log into the master node, so I can't view Docker images with the 'docker images' command; it gives this error: "FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?" How can I fix this problem? Do I need to install it in a different way to work with Vagrant?
Did you do a sudo -s on the node? You have to be an admin to connect to the Docker daemon and run queries using the docker command-line client.
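Alternatively, a common fix (a hedged suggestion, not from the original answer) is to add your user to the docker group so the client can reach the daemon socket without root; it takes effect after logging out and back in:
# add the current user to the docker group, then re-login
sudo usermod -aG docker $USER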