Is it possible to recover a deleted docker image - docker

I executed the docker (Docker version 19.03.1, build 74b1e89) clean command on the server:
docker system prune
To my surprise, this command deleted the containers that were stopped. That's the problem: some containers had stopped for some reason, but I still want to use them in the future. Now they are deleted. Is it possible to recover a mistakenly deleted stopped container?

No, it is not possible. You have to re-pull the image with docker pull <image>, or rebuild the image.

Docker images and containers are not the same. See this answer.
docker system prune tells you:
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N]
So it should be no surprise that it removed your container and possibly also the image it was based on (if no other container was running based on that same image).
I believe it is not possible to recover your image or container; however, you can rebuild them. Depending on how the image was obtained, you have to run:
docker pull <image> # for an image on dockerhub or other registry
docker build <arguments> # for an image based on a dockerfile
After that you will have your image and you can run a container again with:
docker run <image>
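For instance, a minimal sketch assuming the lost container was based on an image from a public registry (the image and container names below are only illustrative):
docker pull nginx:1.21
docker run -d --name web nginx:1.21
This gives you a fresh container, but any data written inside the old container's filesystem is gone unless it was stored in a volume.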

Related

Docker image doesn't retain changed information

OK, so my requirement is to modify the docker image. These are my steps:
Pull the docker image.
Run the container.
Step into the container and modify the file.
Create an image from the container.
Stop the container.
Rerun the container with the new image.
I was expecting the file I had modified to be updated, but it's not. Is there anything I am missing?
You need to delete your old image with docker rmi <imagename>. To list all images, run docker images.
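For reference, here is a minimal sketch of the modify-and-commit cycle described in the question, using docker commit and placeholder image/container names; the key point is that the final run must reference the newly created tag, not the original one:
docker pull myimage:latest
docker run -d --name mycontainer myimage:latest
docker exec -it mycontainer sh        # edit the file inside the container, then exit
docker commit mycontainer myimage:modified
docker stop mycontainer
docker rm mycontainer
docker run -d --name mycontainer myimage:modified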

Check if local docker image is the latest

In my use case I always fetch the image tagged with the "latest" tag. This "latest" tag gets updated regularly. So even if the "latest" image is updated on the registry, the "docker run" command does not update it on the local host. This is expected behavior, as a "latest" image already exists on the local host.
But I want to make sure that if the "latest" images on the local host and the registry differ, the latest image is pulled from the registry.
Is there any way to achieve this?
You can manually docker pull the image before you run it. This is fairly inexpensive, especially if the image hasn't changed. You can do it while the old container is still running to minimize downtime.
docker pull the-image
docker stop the-container
docker rm the-container
docker run -d ... --name the-container the-image
In an automated environment you might consider avoiding the latest tag and other similar fixed strings due to exactly this ambiguity. In Kubernetes, for example, the default behavior is to reuse a local image that has some name, which can result in different nodes running different latest images. If you label your images with a date stamp or source-control ID or something else such that every image has a unique tag, you can just use that tag.
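As a rough sketch of that tagging scheme (the image name and tag format are placeholders; a CI system would normally do this step):
TAG=$(date +%Y%m%d)                  # or a source-control ID such as $(git rev-parse --short HEAD)
docker build -t the-image:$TAG .
docker push the-image:$TAG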
Finding the tag value can be problematic outside the context of a continuous-deployment system; Docker doesn't have any built-in way to find the most recent tag for an image.
# docker pull the-image:20220704 # optional
docker stop the-container
docker rm the-container
docker run -d ... --name the-container the-image:20220704
docker rmi the-image:20220630
One notable advantage of this last approach is that it's very easy to go back to an earlier build if today's build happens to be broken; just switch the image tag back a build or two.
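For example, rolling back to the previous build (the tag values are illustrative) is just the same sequence with an older tag:
docker stop the-container
docker rm the-container
docker run -d ... --name the-container the-image:20220630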

Docker image layer always already exists

I've got this very weird issue where a layer of a docker image always shows "Already exists", even if I remove /var/lib/docker and pull the image again.
The image in question is a simple nginx server containing an Angular webapp. The "already existing" layer is the one containing the webapp. Which means I can't update it at the moment.
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/ .
I've built the image on my local machine and pushed it to the GitLab container registry. I've checked that the image is fine by pulling it back from the registry. So the issue is clearly with my server.
The server is an ubuntu machine running Docker version 19.03.8, build afacb8b.
How can the layer still exist if I delete /var/lib/docker?
I'm stuck at this point. Any help is highly appreciated.
Update
I think I should extend my question a bit: I've tried all sorts of combinations of removing docker images:
docker rmi with and without force option etc.
docker image prune
docker system prune with and without --all, --volumes options
Like I said above, even removing the whole /var/lib/docker directory (while the docker service was stopped) didn't solve the issue. In my understanding this is the hard way of removing everything, essentially the brute-force version of the steps above. But maybe I'm missing something here!?
After all the measures above, and before pulling the image again, docker image ls -a didn't list any image.
So why is there a single layer left?
cbdbe7a5bc2a: Already exists
10c113fb0c77: Pull complete
9ba64393807b: Pull complete
262f9908119d: Pull complete
To clean all images:
docker rmi $(docker images -a --filter=dangling=true -q)
To clean exited and created containers:
docker rm $(docker ps --filter=status=exited --filter=status=created -q)
After running these 2 commands, everything will be deleted.
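As an extra diagnostic, not part of the original answer, docker history lists the layers of an image together with the Dockerfile step that created each one, so you can check whether the pulled image actually contains a fresh layer for the COPY dist/ . step (the image name here is a placeholder):
docker history --no-trunc registry.gitlab.com/group/project/app:latest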

SonarQube Docker Container won't start due to backelite jar. How to remove it?

I installed backelite-sonar-swift-plugin.jar and because of it my sonarqube docker container won't start anymore. I cannot docker exec into it because the container won't start, so is there a way to delete the file without starting it?
I am using docker-compose.yml to run my containers, and I've tried removing the sonarqube container and images, but when I run docker-compose up it downloads the sonarqube image again, yet when the container starts the backelite-sonar-swift-plugin.jar is still in there. Why is it there? Isn't it supposed to be gone since a new image has already been downloaded?
Check whether the container is reusing cached data or existing volumes from the previous container. Remove the previous volumes along with the container. Individual volumes can be listed and removed with docker volume ls and docker volume rm. I prefer removing all docker volumes with:
docker volume rm $(docker volume ls -q)
And if this does not help, please provide the errors that are shown before the container exits.
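Since the question uses docker-compose, a hedged sketch (assuming the plugin lives in a volume declared in docker-compose.yml, and that the service is named sonarqube) would be:
docker-compose down -v
docker-compose pull sonarqube
docker-compose up -d
docker-compose down -v removes the containers along with the named and anonymous volumes declared for them, so the next up starts from the freshly pulled image without the old plugin directory.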

Cached Docker image?

I created my own image and pushed it to my repo on docker hub. I deleted all the images on my local box with docker rmi -f ...... Now docker images shows an empty list.
But when I do docker run xxxx/yyyy:zzzz it doesn't pull from my remote repo and starts a container right away.
Is there any cache or something else? If so, what is the way to clean it all?
Thank you
I know this is old now but thought I'd share still.
Docker will keep all those old images in a cache unless you specifically build them with --no-cache. To clear the cache down you can simply run docker system prune -a -f, and it should clear everything down, including the cache.
Note: this will clear everything down including containers.
You forced removal of the image with -f. Since you used -f I'm assuming that the normal rmi failed because containers based on that image already existed. What this does is just untag the image. The data still exists as a diff for the container.
If you do a docker ps -a you should see containers based on that image. If you start more containers based on that same previous ID, the image diff still exists so you don't need to download anything. But once you remove all those containers, the diff will disappear and the image will be gone.
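A short sketch of that cleanup, using the image name from the question and placeholder container IDs:
docker ps -a                     # find containers still based on the old image
docker rm <container-id>         # remove each of them
docker run xxxx/yyyy:zzzz
Once no containers reference the old image data, the run command has nothing local to reuse and pulls the image from your repo again.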
