I've tried pushing the newly built image to ECR, but it always seemed to push the old code from cache. I tried clearing the cache with the following commands and rebuilding the image, but the issue remained:
Clear out docker
docker rmi $(docker images | grep none | awk '{print $3}') -f
docker system prune -a -f
Rebuild the image
docker build -t $(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION) -f docker/Dockerfile . --no-cache
Push the image to ECR
docker push $REPO_ID.dkr.ecr.$REGION.amazonaws.com/$(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION)
I've tested the built image locally, and it works with the new code. However, when I ran the image remotely, it ran the old code and failed.
I have no idea what is going on. Can someone help me with this, please?
I have sorted this out. I needed to update the function's image with the following:
aws lambda update-function-code --function-name my-lambda-func --image-uri $REPO_ID.dkr.ecr.$REGION.amazonaws.com/$(DOMAIN)/$(REPO_NAME):latest
You don't explicitly say where/how you are running that image "remotely". Could it be that you are re-using tags, and those images are cached on the "remote" nodes you are deploying to (e.g. an EKS cluster or an ECS/EC2 cluster)? In that case, depending on the host configuration/state, it may not even reach out to ECR to pull the new image if the node has (or thinks it has) the same image cached.
[Update] Following up from the comments, the problem occurs in a Lambda function and this blog has hints about how to update the code in the Lambda.
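For completeness, a minimal sketch of the full push-then-update flow, reusing the question's variable style and the asker's function name (the final wait step is my addition):
# Push the rebuilt image, then repoint the Lambda at it. Lambda resolves the
# tag to an image digest when the function is updated, so pushing a new image
# under the same tag is not enough on its own.
docker push $REPO_ID.dkr.ecr.$REGION.amazonaws.com/$(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION)
aws lambda update-function-code --function-name my-lambda-func --image-uri $REPO_ID.dkr.ecr.$REGION.amazonaws.com/$(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION)
aws lambda wait function-updated --function-name my-lambda-func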
At first sight, the issue is that there is a mismatch between the names (repository plus tag) of the two images:
docker build -t $(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION) -f docker/Dockerfile . --no-cache
docker push $REPO_ID.dkr.ecr.$REGION.amazonaws.com/$(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION)
so you should probably write:
docker build -t $REPO_ID.dkr.ecr.$REGION.amazonaws.com/$(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION) -f docker/Dockerfile . --no-cache
instead.
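Alternatively, a sketch that keeps the question's short local name from the original build and adds the fully-qualified ECR name afterwards with docker tag:
# Keep the short build tag, then add the ECR name before pushing
docker tag $(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION) $REPO_ID.dkr.ecr.$REGION.amazonaws.com/$(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION)
docker push $REPO_ID.dkr.ecr.$REGION.amazonaws.com/$(DOMAIN)/$(REPO_NAME):$(IMAGE_VERSION)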
I'm on a low-cost project where we push only the latest image to a container registry (DigitalOcean). But every time, after running:
docker build .
it generates the same digest every time.
This is my script for build:
docker build .
docker tag {image}:latest registry.digitalocean.com/{company}/{image}:latest;
docker push registry.digitalocean.com/{company}/{image}
I tried:
BUILD_VERSION=`date '+%s'`;
docker build -t {image}:"$BUILD_VERSION" -t {image}:latest .
docker tag {image}:latest registry.digitalocean.com/{company}/{image}:latest;
docker push registry.digitalocean.com/{company}/{image}
but it didn't work.
Editing my answer: what David said is correct - a push without a tag should pick up the latest tag.
If you share what you have in your local repository and the output of the above commands, it would shed more light on your problem.
Edit 2:
I think I have figured out why:
Is generating the same digest, every time.
This means that although you are running docker build, there has been no change to the underlying artifacts being packaged into the image, and hence it results in the same digest.
Sometimes layers are cached even though there are changes that weren't detected; in that case you can delete the image or use docker system prune to force-clear the cache.
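A sketch of a build that rules out stale caching, keeping the question's {image}/{company} placeholders (the unique timestamp tag is an assumption):
# Build without the layer cache, tagging with both a unique timestamp and
# latest, then push both tags explicitly
BUILD_VERSION=`date '+%s'`
docker build --no-cache -t {image}:"$BUILD_VERSION" -t {image}:latest .
docker tag {image}:"$BUILD_VERSION" registry.digitalocean.com/{company}/{image}:"$BUILD_VERSION"
docker tag {image}:latest registry.digitalocean.com/{company}/{image}:latest
docker push registry.digitalocean.com/{company}/{image}:"$BUILD_VERSION"
docker push registry.digitalocean.com/{company}/{image}:latest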
I'm running several containers on my host (Ubuntu Server).
I started a container with a command like the one below:
sudo docker run -d -p 5050:80 gitname/reponame
After I call sudo docker ps, it shows:
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                            NAMES
e404ffa2bc6b   gitname/reponame   "dotnet run --server…"   14 seconds ago   Up 12 seconds   5050/tcp, 0.0.0.0:5050->80/tcp   reverent_mcnulty
A week later I ran sudo docker ps again, and the IMAGE column had changed to something like ba2486f19dc0.
I don't understand why.
It's a problem for me, because to stop containers I use the command:
sudo docker stop $(sudo docker ps | awk '{ print $1,$2 }' | grep gitname/reponame | awk '{print $1 }')
and it no longer works because the image name has changed.
Every Docker image has a unique hex ID. These IDs differ between systems (even for a docker pull of the same image), but each image has exactly one ID.
Each image has some number of tags associated with it. This could be none, or multiple. docker tag will add a tag to an existing image; docker rmi will remove a tag, and also (if there are no other tags on the image, no other images using the image as a base, and no extant containers using the image) remove the image.
It's possible to "steal" the tag from an existing image. The most obvious way to do this is with docker build:
cat >Dockerfile <<EOF
FROM busybox
COPY file.txt /
EOF
echo foo > file.txt
docker build -t foo .
docker images
# note the ID for foo:latest
echo bar > file.txt
docker build -t foo .
docker images
# note the old ID will show as foo:<none>
# note a different ID for foo:latest
An explicit docker tag can do this too. Images on Docker Hub and other repositories can change too (ubuntu:16.04 is routinely re-published with security updates), and so if you docker pull an image you already have it can cause the old image to "lose its name" in favor of a newer version of that image.
How does this interact with docker run and docker ps? docker run remembers the image tag a container was started with, but if the image no longer has that tag, that information is lost. That's why your docker ps output reverts to showing the image's hex ID.
There are a couple of ways around this for your application:
If the image is one you're building yourself, always use an explicit version tag (could just be the current datestamp) both when building and running the image. Then you'll never overwrite an existing image's tag. ("Don't use :latest tags.")
Use docker run --name to keep track of which container is which, and filter based on that, not the image tag (@quentino's suggestion from the comments; see the sketch after this list).
Don't explicitly docker pull in your workflow. If you don't have an image, docker run will automatically pull it for you. This will avoid existing images getting minor updates and also will avoid existing images losing their names.
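A minimal sketch of the --name option, reusing the question's image and port (the container name reponame is an assumption):
# Start the container under a fixed name, then stop it by that name -
# no grepping of docker ps output required
sudo docker run -d --name reponame -p 5050:80 gitname/reponame
sudo docker stop reponame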
I am working on a Flask app running in a Docker container on an EC2 server.
The previous dev seems to have removed the original Dockerfile, and I can't find any instructions on a way to push my changes into the Docker image without it.
I can copy my changes manually using:
docker cp newChanges.py doc:/root/doc/server_python/
but I can't seem to find a way to restart Flask. I know this is not the ideal solution, but it's the only idea I have.
There is one way: add newChanges.py to the existing image and commit that image with a new tag, so you have a fallback option if you face any issues.
Suppose you are running the official alpine image and you don't have a Dockerfile.
Every time you restart the container, you will not have your newChanges.py:
docker run --rm -it --name alpine alpine
Use ls inside the container to see the list of files that were created by the original Dockerfile.
docker cp newChanges.py alpine:/
Run ls and verify your file was copied over
Next Step
To commit these changes to your running container do the following:
docker ps
Get the container ID and run:
docker commit 4efdd58eea8a updated_alpine_image
Now run your new image and you will see the updated changes as expected:
docker run -it updated_alpine_image
This is what you get in your updated_alpine_image without having a Dockerfile.
This is how you can rebuild an image from an existing image. You can also try @uncletall's answer.
If you just want to restart after docker cp, you can just docker stop $your_container, then docker start $your_container.
If you want to add newChanges.py to the docker image without the original Dockerfile, you can use docker export -o $your_tar_name.tar $your_container, then docker import $your_tar_name.tar $your_new_image:tag. Keep the tar on a backup server for future use.
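A concrete sketch of that export/import path, with hypothetical names (doc is the container from the question; the CMD is a guess at how the app is started):
# Snapshot the running container's filesystem and re-import it as an image.
# docker export drops image metadata such as CMD/ENTRYPOINT, so restore it
# here via --change.
docker export -o flask_backup.tar doc
docker import --change 'CMD ["python", "/root/doc/server_python/app.py"]' flask_backup.tar flask_app:v1.0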
If you want to continue development later, use a Dockerfile for further changes:
you can use docker commit to generate a new image, and docker push to push it to Docker Hub with a name like my_docker_id/my_image_name:v1.0
Your new Dockerfile:
FROM my_docker_id/my_image_name:v1.0
# your new thing here
ADD another_new_change.py /root/
# others
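A brief usage sketch for that Dockerfile (the v1.1 tag is an assumption):
# Build the follow-up image on top of the committed base and run it
docker build -t my_docker_id/my_image_name:v1.1 .
docker run -d my_docker_id/my_image_name:v1.1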
You can try examining the history of the image; from there you can probably re-create the Dockerfile. Try using docker history --no-trunc image-name
See this answer for more details
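For example, a sketch that prints just the instruction that built each layer, using docker history's standard --format templating:
# Print only the command that created each layer
docker history --no-trunc --format '{{.CreatedBy}}' image-name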
When I build image in CI I push it with a unique SHA tag. Then when I deploy it to production I want to change :latest alias to point to the same image like so:
docker pull org/foo:34f8a342
docker tag org/foo:34f8a342 org/foo:latest
docker push org/foo:latest
Now I want to avoid pulling this image. The problem is that container for deploy script is different from container that was used to build it so I don't have this image locally. Is there any way to add a tag alias on docker hub without the need to have this image locally?
Using the experimental docker manifest command:
docker manifest create $REPOSITORY:$TAG_NEW $REPOSITORY:$TAG_OLD
docker manifest push $REPOSITORY:$TAG_NEW
For a private registry, you may need to prepend $REGISTRY/ to the repository.
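Applied to the question's image, a sketch (older Docker CLIs may need the experimental flag enabled):
# Point a new "latest" manifest at the already-pushed SHA tag and push it -
# no image layers are downloaded or uploaded
export DOCKER_CLI_EXPERIMENTAL=enabled
docker manifest create org/foo:latest org/foo:34f8a342
docker manifest push org/foo:latest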
As can be seen here, it's not allowed, but if the problem is that pulling the entire image is slow (as someone mentions in the comments), the same effect can be achieved faster by just "pulling" the manifest through the Docker Registry API:
MANIFEST=$(curl -H "Accept: application/vnd.docker.distribution.manifest.v2+json" "${REGISTRY_NAME}/v2/${REPOSITORY}/manifests/${TAG_OLD}")
curl -X PUT -H "Content-Type: application/vnd.docker.distribution.manifest.v2+json" -d "${MANIFEST}" "${REGISTRY_NAME}/v2/${REPOSITORY}/manifests/${TAG_NEW}"
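Note that most registries reject unauthenticated requests; for Docker Hub, a sketch of obtaining a bearer token (the credentials and jq are assumptions):
# Obtain a token scoped for pull and push on the repository, then pass it
# as -H "Authorization: Bearer ${TOKEN}" in both curl calls above
TOKEN=$(curl -s -u "${USER}:${PASSWORD}" "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${REPOSITORY}:pull,push" | jq -r .token)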
I'm not aware of a way to tag a docker image directly on Docker Hub. There's a workaround for your problem: tag the image with both tags when building it. docker build allows creating multiple tags in one build:
docker build -t org/foo:34f8a342 -t org/foo:latest .
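Both tags then need to be pushed (a sketch with the question's names):
# The two tags point at the same image; push each explicitly
docker push org/foo:34f8a342
docker push org/foo:latest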
I am playing with docker and plan to use it in a GitLab CI environment to package the current project state into containers and provide running instances for reviews.
I use a very simple Dockerfile as follows:
FROM php:7.0-apache
RUN sed -i 's!/var/www/html!/var/www/html/public!g' /etc/apache2/sites-available/000-default.conf
COPY . /var/www/html/
Now, as soon as I add a new (empty) file (touch foobar) to the current directory and call
docker build -t test2 --rm .
again, a full new layer is created, containing all of the code.
If I do not create a new file, the old image seems to be nicely reused.
I have a half-way solution using the following Dockerfile:
FROM test2:latest
RUN sed -i 's!/var/www/html!/var/www/html/public!g' /etc/apache2/sites-available/000-default.conf
COPY . /var/www/html/
After digging into that issue and switching the storage driver to overlay, this seems to be what I want - only a few bytes are added as a new layer.
But now I am wondering how I could integrate this into my CI setup - basically I would need two different Dockerfiles, depending on whether the image already exists or not.
Is there a better solution for this?
Build your images with the same tag, or with no tags:
docker build -t myapp:ci-build ....
or
docker build ....
If you use the same tag, the old images will be untagged and will have "<none>" as their name. If you don't tag them at all, they will also have "<none>" as their name.
Now you can schedule the command below:
docker system prune -f
This will remove all dangling images, stopped containers, unused networks, etc.
One suggestion is to use the command docker image prune to clean up dangling images. This can save you a lot of space. You can run this command regularly in your CI.
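A sketch of a gentler variant than docker system prune (the until filter is optional):
# Remove only dangling (untagged) images...
docker image prune -f
# ...or only images older than 24 hours, so recent layers stay available
# for cache reuse
docker image prune -f --filter "until=24h"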