Docker image layer always already exists

I've got this very weird issue where a layer of a Docker image always shows "Already exists", even if I remove /var/lib/docker and pull the image again.
The image in question is a simple nginx server containing an Angular webapp. The "already existing" layer is the one containing the webapp, which means I can't update it at the moment.
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/ .
I've built the image on my local machine and pushed it to the GitLab container registry. I've checked that the image is fine by pulling it back from the registry, so the issue is clearly with my server.
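For reference, the build-and-push sequence I use looks roughly like this (the registry path is a placeholder for the actual GitLab project path):
docker build -t registry.gitlab.com/<group>/<project>/webapp:latest .
docker push registry.gitlab.com/<group>/<project>/webapp:latest
docker pull registry.gitlab.com/<group>/<project>/webapp:latest   # pulled back on another machine to verify the pushed image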
The server is an ubuntu machine running Docker version 19.03.8, build afacb8b.
How can the layer still exist if I delete /var/lib/docker?
I'm stuck at this point ... Any help is highly appreciated.
Update
I think I should extend my question a bit: I've tried all sorts of combinations of removing docker images:
docker rmi with and without force option etc.
docker image prune
docker system prune with and without --all, --volumes options
Like I said above, even removing the whole /var/lib/docker directory (while the docker service was stopped) didn't solve the issue. In my understanding this is the hard way of removing everything, the brutal equivalent of all the steps above. But maybe I'm missing something here!?
Before pulling the image again after all the measures above, docker image ls -a didn't list any image.
So why is there a single layer left?
cbdbe7a5bc2a: Already exists
10c113fb0c77: Pull complete
9ba64393807b: Pull complete
262f9908119d: Pull complete

To clean up dangling images:
docker rmi $(docker images -a --filter=dangling=true -q)
To clean up stopped (exited or created) containers:
docker rm $(docker ps --filter=status=exited --filter=status=created -q)
After running these two commands, the dangling images and stopped containers will be deleted.

Related

Is it possible to recover a deleted Docker image?

I executed the Docker (Docker version 19.03.1, build 74b1e89) clean command on the server:
docker system prune
To my surprise, this command deleted the containers that were stopped. That's the problem: some containers were stopped for some reason, but I still wanted to use them in the future. Now they are deleted. Is it possible to recover a mistakenly deleted stopped container?
No, it is not possible. You have to re-pull the image with docker pull <image>, or rebuild the image.
Docker images and containers are not the same. See this answer.
docker system prune tells you:
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N]
So it should be no surprise that it removed your container and possibly also the image it was based on (if no other container was running based on that same image).
I believe it is not possible to recover your image or container, however you can rebuild them. Depending on how the image was obtained you have to run:
docker pull <image> # for an image on dockerhub or other registry
docker build <arguments> # for an image based on a dockerfile
After that you will have your image and you can run a container again with:
docker run <image>

Updating a docker image without the original Dockerfile

I am working on a Flask app running on an EC2 server inside a Docker container.
The old dev seems to have removed the original Dockerfile, and I can't seem to find any instructions on a way to push my changes into the Docker image without the original.
I can copy my changes manually using:
docker cp newChanges.py doc:/root/doc/server_python/
but I can't seem to find a way to restart Flask. I know this is not the ideal solution, but it's the only idea I have.
There is a way to add newChanges.py to the existing image and commit that image with a new tag, so you will have a fallback option if you face any issues.
Suppose you run the official alpine image and you don't have its Dockerfile.
Every time you restart the container you will not have your newChanges.py:
docker run --rm -it --name alpine alpine
Use ls inside the container to see the list of existing files that were created by the original Dockerfile.
docker cp newChanges.py alpine:/
Run ls and verify your file was copied over
Next Step
To commit these changes to your running container do the following:
docker ps
Get the container ID and run:
docker commit 4efdd58eea8a updated_alpine_image
Now run your alpine image and you will see the updated changes as expected:
docker run -it updated_alpine_image
This is what you will get in your updated_alpine_image without having a Dockerfile.
This is how you can rebuild an image from an existing image. You can also try @uncletall's answer.
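Put together, the whole flow from this answer looks roughly like this (container and image names are just the ones used above):
docker run -itd --name alpine alpine                # start a container from the existing image, kept running in the background
docker cp newChanges.py alpine:/                    # copy the new file into the running container
docker commit alpine updated_alpine_image           # snapshot the container as a new image (the container name works in place of the ID)
docker run -it updated_alpine_image ls /            # newChanges.py is now baked into the new image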
If you just want to restart after docker cp, you can just docker stop $your_container, then docker start $your_container.
If you want to update newChanges.py in the Docker image without the original Dockerfile, you can use docker export -o $your_tar_name.tar $your_container, then docker import $your_tar_name.tar $your_new_image:tag. Later, keep the tar on a backup server for future use.
If you want to continue development later, use a Dockerfile for further changes:
You can use docker commit to generate a new image, and docker push to push it to Docker Hub with a name like my_docker_id/my_image_name:v1.0.
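A minimal sketch of that export/import route (all names are placeholders):
docker export -o my_backup.tar my_container         # flatten the container's filesystem into a tar archive
docker import my_backup.tar my_new_image:v1         # create a new image from that archive
docker run -it my_new_image:v1 /bin/sh              # note: import drops CMD/ENTRYPOINT metadata, so pass a command explicitly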
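For example (my_docker_id/my_image_name is a placeholder for your own repository):
docker commit <container_id> my_docker_id/my_image_name:v1.0    # snapshot the patched container as a tagged image
docker push my_docker_id/my_image_name:v1.0                     # publish it so the Dockerfile below can build on it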
Your new Dockerfile:
FROM my_docker_id/my_image_name:v1.0
# your new thing here
ADD another_new_change.py /root/
# others
You can try to examine the history of the image; from there you can probably re-create the Dockerfile. Try using docker history --no-trunc image-name
See this answer for more details
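For example, to list the recorded build steps oldest-first (image-name is a placeholder; tac simply reverses the newest-first default order):
docker history --no-trunc --format '{{.CreatedBy}}' image-name | tac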

docker pull always show "Already exists" for layers during pull even after deleting all images

here is my input and output:
shshenhx@shshenhx:~/Desktop/Docker$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
python latest 336d482502ab 4 days ago 692 MB
shshenhx@shshenhx:~/Desktop/Docker$ docker rmi 336
Untagged: python:latest
Untagged: python@sha256:bf0718e2882cabc144c9c07d758b09c65efc104a6ddc72a9a976f8b26f67c2ee
Deleted: sha256:336d482502ab564b0b2964b2ed037529ba40c7b4ade2117ca11d74edbf11f99e
Deleted: sha256:1e2f72b0bf844de7bfe57a32115b660770355843ef8078fb55629e720147e6a9
Deleted: sha256:b5818ba96f33835318f3e9d7b4094e1007be247f04ab931ea9ef83b45b451f90
Deleted: sha256:0c2d7cafdab1084ebbd0112b3bedf76458ae6809686fb0ad9bae8022f11c3a84
shshenhx@shshenhx:~/Desktop/Docker$ docker pull python
Using default tag: latest
latest: Pulling from library/python
4176fe04cefe: Already exists
851356ecf618: Already exists
6115379c7b49: Already exists
aaf7d781d601: Already exists
40cf661a3cc4: Already exists
975fe2fd635f: Pull complete
bf4db784e7fd: Pull complete
0491f7e9426b: Pull complete
Digest: sha256:bf0718e2882cabc144c9c07d758b09c65efc104a6ddc72a9a976f8b26f67c2ee
Status: Downloaded newer image for python:latest
My question is: I have already removed the python image, so why does it still show "Already exists" for some of the layers? How can I totally delete all the python layers?
Thanks.
What helped for me was to run docker system prune after removing all containers and images. So the whole process that got it to work was:
Remove all containers docker rm -vf $(docker ps -a -q)
Remove all images docker rmi -f $(docker images -a -q)
Prune system with docker system prune
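Put together as one sequence (a sketch; this wipes every container and image on the host, so only run it if that is really what you want):
docker rm -vf $(docker ps -a -q)        # remove all containers (and their anonymous volumes)
docker rmi -f $(docker images -a -q)    # remove all images
docker system prune -f                  # clear unused networks, dangling images and build cache without prompting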
From the reference for the docker images command:
Docker images have intermediate layers that increase reusability, decrease
disk usage, and speed up docker build by allowing each step to be cached.
These intermediate layers are not shown by default.
Maybe those "Already exists" layers are intermediate layers. By default, they are hidden when you run docker images; please try docker images --all.
Docker has a solution for this on the Docker Desktop dashboard: there is a clean-up button in the Troubleshoot section (the bug icon), and presto, it works!
There is a docker command that deletes images that do not have an associated running container:
docker image prune
allows you to clean up dangling images (a dangling image is not tagged and not referenced by any container)
docker image prune -a
will remove all images that are not used by any existing containers
docker image prune -a --filter "until=48h"
you can add a filter (--filter) to limit the images to be pruned, as in this example, only images that are older than 48 hours
You can add the -f or --force flag to prune the images and not prompt for confirmation
You could use this in a scheduled bash script to keep your drive tidy
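For example, a sketch of such a scheduled cleanup (the 48-hour window and the cron schedule are arbitrary choices):
#!/bin/sh
# intended for a cron entry such as: 0 3 * * * /usr/local/bin/docker-tidy.sh
docker image prune -a -f --filter "until=48h"   # remove unused images older than 48 hours, without prompting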
docker system prune -a helped me. It will remove both unused and dangling images.
You can use docker system prune but it will clear everything in docker.
It's not an image problem, it's actually a cache problem, so deleting your images won't work because the issue lies in layer caching.
There is a --no-cache flag in docker build but I don't know if it works with docker pull too.
For someone interested in: (1) using the command line; (2) removing the cache only for images that have been removed with docker rmi (i.e., not visible with docker images --all), but for which caches are still present and "Already exists" shows for layers when pulling the image, you can try this command:
docker builder prune
(It may take up to 15 minutes to run this command, so be patient.)
This command seems to have solved the problem for me. I did not want to use the general, "system-wide" prune/purge/clean Docker commands, because I had some images and containers running (not related to the image I had already removed) that I wished to leave as they were.
Source of the solution: https://forums.docker.com/t/how-to-delete-cache/5753/7
I think that the situation of "invisible" cached layers for images removed with docker rmi may happen due to using docker compose (as also suggested by John Doe in a comment on one of the answers to the current question: docker pull always show "Already exists" for layers during pull even after deleting all images).
So after a lot of searching, I have a solution although I don't think it's necessarily a great one.
The "sledge hammer solution" would be to do:
rm -rf /var/lib/docker
This basically factory-resets Docker, so you will lose everything. That was fine in my case, but be very careful and think it through before using this one.
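If you do go this route, the safer sequence is to stop the daemon first (a sketch, assuming systemd manages the docker service):
sudo systemctl stop docker      # stop the daemon so nothing writes to the data directory
sudo rm -rf /var/lib/docker     # wipe all images, containers, volumes and caches
sudo systemctl start docker     # docker recreates a fresh /var/lib/docker on startup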
Please try this command docker rmi 336d482502ab and then try to pull it again.

Quick docker container refresh workflow

This is probably a duplicate, but all answers that I saw didn't work for me.
I'm using docker (17.06.2-ce), docker-compose (1.16.1).
I have an image of solr which I use for development and testing purposes (and on CI too).
When making changes to the image I need to rebuild the image and recreate containers, so that the containers use the latest possible image, which, in turn, takes the latest possible code from the local repo.
I've created my own image which is based on official solr-docker image. The repo is a folder with additional steps that I'm applying to the image, such as copying files and making changes to existing configs using sed.
I'm working in the repo and have the containers running in the background.
When I need to refresh the containers, I usually do these commands
sudo docker-compose stop
sudo docker rm $(sudo docker ps -a -q)
sudo docker rmi $(sudo docker images -q)
sudo docker-compose up
The above 4 commands are the only way it works for me. All other approaches that I've tried didn't rebuild the images and didn't create the containers based on the new, rebuilt images. In other words, the code in the image would be stale.
Questions:
Is it possible to refresh the image + rebuild the container using fewer commands?
Every time I run the above 4 commands, docker downloads ~500 MB of dependencies. Is it possible not to download them and just rebuild the image using the updated local code and the existing cached dependencies?
I usually do docker-compose rm && docker-compose build && docker-compose up to recreate docker containers: it won't download 500 MB.
You can use docker-compose down which does the following:
down Stop and remove containers, networks, images, and volumes
Therefore the command to use will be: docker-compose down --rmi local && docker-compose up
The --rmi local option will remove the built image, thus forcing a rebuild on up.
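Put together, the refresh cycle becomes (sudo mirrors the question; --build covers the case where the image still exists and only needs rebuilding):
sudo docker-compose down --rmi local    # stop and remove containers, networks and the locally built image
sudo docker-compose up --build          # rebuild from the local Dockerfile, reusing cached layers, then start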

Cached Docker image?

I created my own image and pushed it to my repo on docker hub. I deleted all the images on my local box with docker rmi -f ...... Now docker images shows an empty list.
But when I do docker run xxxx/yyyy:zzzz it doesn't pull from my remote repo and starts a container right away.
Is there any cache or something else? If so, what is the way to clean it all?
Thank you
I know this is old now but thought I'd share still.
Docker will keep all those old images in a cache unless you specifically build them with --no-cache. To clear the cache down, you can simply run docker system prune -a -f and it should clear everything down, including the cache.
Note: this will clear everything down including containers.
You forced removal of the image with -f. Since you used -f, I'm assuming that the normal rmi failed because containers based on that image already existed. What this does is just untag the image; the data still exists as a diff for the container.
If you do a docker ps -a you should see containers based on that image. If you start more containers based on that same previous ID, the image diff still exists so you don't need to download anything. But once you remove all those containers, the diff will disappear and the image will be gone.
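A minimal sketch of how to see this in action (the container ID is a placeholder):
docker ps -a                    # the containers created from the force-removed image are still listed
docker rm <container_id>        # remove each container that was based on that image
docker run xxxx/yyyy:zzzz       # with no containers holding the layers, this now pulls from the remote repo again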
