If multiple containers are running, stopping and deleting them one by one wastes time.
docker container stop $(docker container ls -aq) && docker system prune -af --volumes
The command substitution in the parentheses, docker container ls -aq, lists the IDs of all containers, and docker container stop stops each of them.
&& is a shell operator, not a Docker feature: it runs the second command only if the first one succeeds.
docker system prune then removes all stopped containers, unused networks, and dangling images; -a extends this to all unused images, -f skips the confirmation prompt, and --volumes also removes unused volumes, which prune skips by default.
Docker CLI command:
docker rm -f $(docker ps -qa)
or
docker system prune
Create an alias to do this every time:
vi ~/.bash_profile
alias dockererase='docker rm -f $(docker ps -qa)'
source ~/.bash_profile
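As a gentler alternative (the alias name here is just an illustration), docker container prune removes only containers that have already stopped, so a slip never kills something still running:

```shell
# Removes only stopped/exited containers; running ones are untouched.
# -f skips the confirmation prompt.
alias dockerclean='docker container prune -f'
```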
I'm a bit confused as to how to go ahead with docker.
I can build an image with the following Dockerfile:
FROM condaforge/mambaforge:4.10.1-0
# Use bash as shell
SHELL ["/bin/bash", "-c"]
# Set working directory
WORKDIR /work_dir
# Install vim
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "vim"]
# Start Bash shell by default
CMD /bin/bash
I build it with docker build --rm . -t some_docker, but then I'd like to enter the container and install things interactively, so that later I can export the whole image with all the additional installations. So I start it interactively with docker run -it some_docker, do my work, and would then like to export it.
So here are my specific questions:
Is there an easier way to build (and keep) the image available so that I can come back to it later? When I run docker ps -a I see so many entries that I don't know what they do, since many of them don't have any tag.
After building I get the warning Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them. Is this a problem and if so, how to solve it?
How can I specify in my Dockerfile (or docker build?) that ports for rstudio should be open? I saw that docker-compose allows you to specify ports: 8787:8787, how do I do it in here?
With docker ps -a, what you're seeing is containers rather than images. To list images, use docker image ls instead. Whether you should delete images depends on what containers you're going to run in the future. Docker uses a layered architecture with a copy-on-write strategy. So, for example, if you build FROM condaforge/mambaforge:4.10.1-0 again in the future, Docker won't have to download and install it again. Your example is fairly simple, but with more complicated apps it may take a lot of time to build images and run containers from scratch (the longest I have experienced is about 30 minutes). However, if storage is your concern, go ahead and delete the images you don't use very often.
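A quick way to see the distinction, and to clean up untagged build layers, is the following sketch (run it against your own daemon):

```shell
# Containers (running and exited) -- this is what `docker ps -a` lists
docker ps -a

# Images -- untagged <none>:<none> entries are usually dangling build layers
docker image ls

# Remove only the dangling images; tagged images stay cached for future builds
docker image prune -f
```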
Whether it's a problem depends on what docker scan reports. To see more details, you can run docker scan --file PATH_TO_DOCKERFILE DOCKER_IMAGE.
A Dockerfile is for building images, and a docker-compose file is for orchestrating containers. That's why you cannot publish ports in a Dockerfile: publishing at build time could also create security problems and port conflicts. All you can do in the Dockerfile is expose container ports with EXPOSE; then run docker run -d -P --name app_name app_image_name to publish all the ports the image exposes.
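For example (assuming rstudio listens on 8787 inside the container, and with illustrative names): declare the port in the Dockerfile with EXPOSE, then publish it at run time:

```shell
# In the Dockerfile, EXPOSE only documents the port:
#   EXPOSE 8787

# Publish that specific port explicitly (host:container)...
docker run -d -p 8787:8787 --name app_name app_image_name

# ...or publish every EXPOSEd port on random host ports
docker run -d -P --name app_name app_image_name
```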
I recently started migrating my self-hosted services to docker. To simplify maintenance, I'm using docker-compose. Some of the containers depend on each other, others are independent.
With more than 20 containers and almost 500 lines of YAML, maintainability has now decreased.
Are there good alternatives to keeping one huge docker-compose file?
That's a big docker-compose.yml! Break it up into more than one docker-compose file.
You can pass multiple docker-compose files into one docker-compose command. If the many different containers break up logically, one way of making them easier to work with is breaking apart the docker-compose.yml by container grouping, logical use case, or if you really wanted to you could do one docker-compose file per service (but you'd have 20 files).
You can then use a bash alias or a helper script to run docker-compose.
# as an alias (arguments you pass are appended automatically)
alias foo='docker-compose -f docker-compose.yml -f docker-compose-serviceA.yml -f docker-compose-serviceB.yml'
Then:
# simple docker-compose up with all docker-compose files
$ foo up -d
Using a bash helper file would be very similar, but you'd be able to keep the helper script updated as part of your codebase in a more straightforward way:
#!/bin/bash
docker-compose -f docker-compose.yml \
-f docker-compose-serviceA.yml \
-f docker-compose-serviceB.yml \
-f docker-compose-serviceC.yml \
"$@"
Note: The order of the -f <file> flags does matter, but if you aren't overriding any services it may not bite you. Something to keep in mind anyway.
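To check what the merged result looks like before starting anything, docker-compose config prints the effective configuration (file names here are illustrative):

```shell
# Later files override matching keys from earlier ones; `config` shows
# the merged result without creating any containers.
docker-compose -f docker-compose.yml \
               -f docker-compose-serviceA.yml \
               config
```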
You could look at Kubernetes.
If you didn't want to go all in, you could use Minikube,
or maybe the Kubernetes support baked into Docker on the edge channel for Windows or Mac, but that is beta, so perhaps not for a production system.
I was restoring a MongoDB environment, but it failed because the disk ran out of space.
After that I cannot execute any docker-compose command; every attempt displays this error message:
Failed to write all bytes for _bisect.so
I found some references about freeing space in /tmp, but I want to be sure that is the best solution.
Remove the dangling Docker images:
docker rmi $(docker images -f dangling=true -q)
UPDATE:
you can now use prune
docker system prune -af
https://docs.docker.com/engine/reference/commandline/system_prune/
Check df:
normally you will find /var/lib/docker (and /) at 100%.
Try to free some space; maybe stop the syslog service.
Then remove and restart your containers.
Recheck df:
/var/lib/docker should now be at around 15%.
During a docker-compose command, I got a similar error ("Failed to write all bytes for _ctypes.pyd") because my drive had no space left on it.
I have deleted all the images/containers
ubuntu@ubuntu:/var/lib/docker$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu@ubuntu:/var/lib/docker$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
but I notice that there are still about 15GB inside /var/lib/docker
ubuntu@ubuntu:/var/lib/docker$ sudo du --max-depth=1 -h .
12G ./volumes
104K ./aufs
4,0K ./containers
1,3M ./image
4,0K ./trust
4,0K ./swarm
2,6G ./tmp
108K ./network
15G .
Questions:
How can I free up this space?
Is it safe to remove things inside /var/lib/docker?
Try (from docker 1.13):
docker system df
it shows you size of:
Images
Containers
Local Volumes
and remove local volumes using:
docker volume prune
For older Dockers try:
docker volume rm $(docker volume ls -q)
For my current Docker version (1.12.1 for both client and server), a way to delete all volumes is:
docker volume rm $(docker volume ls -q)
but the following is safer (thanks Matt for your comment):
docker volume rm $(docker volume ls -qf dangling=true)
Also from version: 1.13.0 (2017-01-18) some commands were added:
$ docker system prune
$ docker container prune
$ docker image prune
$ docker volume prune
$ docker network prune
Changelog: Add new docker system command with df and prune subcommands for system resource management, as well as docker {container,image,volume,network} prune subcommands (#26108, #27525).
Most of the space is occupied by docker volume as you can see from your output:
12G ./volumes
Docker volumes are used to persist data for Docker containers and to share data between containers, and they are independent of the container's lifecycle. So removing an image or container will not free the disk space its volumes occupy. Please refer to the official docs for more details.
If you're using a recent version of Docker, you can find the volume-related command docs for more details (listing, removing, and creating volumes, for example); for older versions of Docker, you can refer to this script on GitHub for how to clean up volumes.
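A small sketch of that independence (the volume name is illustrative): the volume outlives the container that used it, so only removing the volume itself reclaims the space:

```shell
# Create a named volume and a container that writes into it
docker volume create demo_data
docker run --rm -v demo_data:/data alpine sh -c 'echo hello > /data/file'

# The container is gone (--rm), but the volume and its data remain
docker volume ls                                          # demo_data still listed
docker run --rm -v demo_data:/data alpine cat /data/file  # the file is still there

# Disk space is only reclaimed once the volume itself is removed
docker volume rm demo_data
```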
Hope this could be helpful:-)
The command docker run is used to create a container from an image and start it running. When calling docker run I can pass a CMD to tell Docker to run some service at startup.
But when I call docker stop to stop the running container and then call docker start, does it behave the same as docker run above? For example, does it start all the services the same way docker run does?
The docker client is a convenience wrapper for many calls to the Docker API.
docker run will:
Attempt to create the container.
If the image isn't found locally, attempt to pull it.
If the image is pulled successfully, create the container.
Once the container is created, call docker start on the new container.
The short answer to your question is: docker stop is the opposite of the docker start command. docker run calls docker start at the end, but it also does a bunch of other things.
docker run will always try to create a new container, and will throw an error if the container name already exists. docker start can be used to manually start an existing stopped container, reusing the command it was created with. (You could also look into the docker restart command, which I believe calls docker stop and then docker start.)
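A small experiment illustrates this (the container name is illustrative): stop/start reuses the same container and the same command that docker run created it with:

```shell
# Create and start a container running a long-lived command
docker run -d --name demo alpine sleep 1000

# Stop it; the container still exists, just not running
docker stop demo
docker ps -a --filter name=demo    # STATUS shows "Exited"

# Start it again: same container, same `sleep 1000` command as before;
# no new container is created and the original CMD is reused
docker start demo

# By contrast, running `docker run --name demo ...` again would fail,
# because a container named "demo" already exists
```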
Hope this helps!