How to catch changes when building a docker image?

I am new to Docker containers. I ran into a problem with picking up changes to my code: some lines in my local style.css file were changed, and then I built the Docker image again, but nothing had actually changed when I browsed my app.
Here are some methods I found online and tried, but they didn't work:
remove image and build again
--no-cache=true
add a comment in Dockerfile to make it different
docker system prune
--pull
(I also used git pull to get the code onto my cloud instance; I checked that these files were the latest.)
I know little about how Docker works internally; could anyone tell me what the problem is?
Extra info I found:
After stopping the container and removing the image, I restart my instance, then build the image and run the container again. Only in this way can I pick up those changes. Does anyone know what the problem is?
Many thanks!

There appears to be a disconnect on the difference between a container and an image. The container is the running instance of your application. It is based on an image that you have already built. That image has a tag for easier referencing, but the real reference to the image is a sha256 hash and building a new image will change the hash that the tag points to without impacting any of your running containers.
Therefore, the workflow to update your running application in docker is to:
Build a new image
Stop the running container
Start a new container pointing to that image
Cleanup any old images or stopped containers
If you are using docker-compose, it automates the middle two steps with a docker-compose up command, and that even deletes the old container. Most users keep a few copies of older images to allow easy rollback.
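As a minimal sketch of that workflow, assuming the image is tagged myapp and the container is named myapp_container (both names are made up for this example), the commands might look like:
docker build -t myapp:v2 .                       # build a new image with a new tag
docker stop myapp_container                      # stop the running container
docker rm myapp_container                        # remove the old container
docker run -d --name myapp_container myapp:v2    # start a new container from the new image
docker image prune -f                            # optionally clean up dangling images
With docker-compose, docker-compose up -d --build performs the rebuild and the container replacement in a single step.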

Related

How can I save any changes of containers?

If I have an ubuntu container, ssh into it, and create a file, then after the container is destroyed or rebooted the new file is gone, because Kubernetes loads the ubuntu image, which does not contain my changes.
My question is: what should I do to save these changes?
I know it can be done, because some cloud providers do it.
For example:
ssh ubuntu@POD_IP
mkdir new_file
ls
new_file
reboot
after the reboot I have:
ssh ubuntu@POD_IP
ls
ls shows nothing
But I want it to save my current state, and I want that to happen automatically.
If I use docker commit I cannot keep track of my images, because it produces hundreds of them; I would have to create an image for every change.
If I want to use storage I would have to mount /, but Kubernetes does not allow me to mount /, and it gives me this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/': invalid mount config for type "bind": invalid specification: destination can't be '/'
You can try to use docker commit, but you will need to ensure that your Kubernetes cluster picks up the latest image that you committed:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
This is going to create a new image out of your container which you can feed to Kubernetes.
Ref - https://docs.docker.com/engine/reference/commandline/commit/
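As a rough sketch, assuming a registry your cluster can pull from at registry.example.com and a container named my_pod_container (both hypothetical):
docker commit my_pod_container registry.example.com/ubuntu-custom:v2   # snapshot the container's filesystem as a new image
docker push registry.example.com/ubuntu-custom:v2                      # make it available to the cluster
After that you would point the pod or deployment spec at the new tag so Kubernetes pulls the committed image.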
Update 1 -
In case you want to do it automatically, you might need to store the changed state or files on a centralized file system such as NFS and then mount it into all running containers whenever required, with the relevant permissions.
K8s ref - https://kubernetes.io/docs/concepts/storage/persistent-volumes/
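For illustration only, assuming the NFS export is mounted on the host at /mnt/nfs/appdata (a made-up path), the plain-Docker equivalent is to mount it at a subdirectory rather than at /:
docker run -d --name ubuntu-app -v /mnt/nfs/appdata:/data ubuntu sleep infinity
# anything written under /data now lives on the NFS share and survives container restarts
In Kubernetes the same idea is expressed with a PersistentVolumeClaim mounted at a path such as /data, as described in the link above.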
Docker and Kubernetes don't work this way. Never run docker commit. Usually you have very little need for an ssh daemon in a container/pod, and you need to do special work to make the sshd and the main process both run (and extra work to make the sshd actually secure); your containers will be simpler and safer if you just remove these.
The usual process involves a technique known as immutable infrastructure. You never change code in an existing container; instead, you change a recipe to build a container, and tell the cluster manager that you want an update, and it will tear down and rebuild everything from scratch. To make changes in an application running in a Kubernetes pod, you typically:
Make and test your code change, locally, with no Docker or Kubernetes involved at all.
docker build a new image incorporating your code change. It should have a unique tag, often a date stamp or a source control commit ID.
(optional but recommended) docker run that image locally and run integration tests.
docker push the image to a registry.
Change the image tag in your Kubernetes deployment spec and kubectl apply (or helm upgrade) it.
Often you'll have an automated continuous integration system do steps 2-4, and a continuous deployment system do the last step; you just need to commit and push your tested change.
Note that when you docker run the image locally in step 3, you are running the exact same image your production Kubernetes system will run. Resist the temptation to mount your local source tree into it and try to do development there! If a test fails at this point, reduce it to the simplest failing case, write a unit test for it, and fix it in your local tree. Rebuilding an image shouldn't be especially expensive.
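A condensed sketch of steps 2-5, assuming a registry at registry.example.com, a deployment named myapp whose container is also named myapp, and a test entrypoint of npm test (all of these names are assumptions):
docker build -t registry.example.com/myapp:2019-06-03 .                          # unique tag, e.g. a date stamp
docker run --rm registry.example.com/myapp:2019-06-03 npm test                   # optional local integration tests
docker push registry.example.com/myapp:2019-06-03
kubectl set image deployment/myapp myapp=registry.example.com/myapp:2019-06-03   # or edit the spec and kubectl apply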
Your question hints at the unmodified ubuntu image. Beyond some very early "hello world" type experimentation, there's pretty much no reason to use this anywhere other than the FROM line of a Dockerfile. If you haven't yet, you should work through the official Docker tutorial on building and running custom images, which will be applicable to any clustering system. (Skip all of the later tutorials that cover Docker Swarm, if you've already settled on Kubernetes as an orchestrator.)

How to delete cached/intermediate docker images after the cache gets invalidated?

I have a CI pipeline that builds a docker image for my app on every run of the pipeline (the pipeline is triggered by a code push to the git repository).
The docker image consists of several intermediate layers which progressively become very large in size. Most of the intermediate images are identical for each run, hence the caching mechanism of docker is significantly utilized.
However, the problem is that the final couple of layers are different for each run, as they result from a COPY statement in the Dockerfile, where the built application artifacts are copied into the image. Since the artifacts are modified for every run, the already cached bottommost images will ALWAYS be invalidated. These images have a size of 800 MB each.
What docker command can I use to identify (and delete) these images that get replaced by newer ones, i.e. when they get invalidated?
I would like my CI pipeline to remove them at the end of the run so they don't end up dangling on the CI server and wasting a lot of disk space.
If I understand correctly: with every code push, the CI pipeline creates a new image containing the new version of the application. As a result, the previously created image becomes outdated, so you want to remove it. To do so, you have to:
Get rid of all outdated containers that were created from the outdated image:
display all containers with docker ps -a
if they are still running, stop the outdated containers with docker stop [containerID]
remove them with docker rm [containerID]
Remove the outdated images with docker rmi [imageID]
To sum up why this process is needed: you cannot remove an image while any existing container still uses it (even stopped containers keep a reference to their image). For this reason, you should first stop and remove the old containers, and then remove the old images.
The detection and automation of the deletion process should be based on the image tags and container names that the CI pipeline generates when creating new images. The next snippet shows how that might look.
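For instance, if the pipeline names things predictably (say the container is myapp_ci and the outdated tag is myapp:old, both hypothetical), the cleanup step could run:
docker stop myapp_ci || true    # ignore the error if the container is already stopped
docker rm myapp_ci
docker rmi myapp:old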
Edit 1
To list all images that have no relationship to any tagged image, you can use the command docker images -f dangling=true. You can delete them with the command docker images purge.
Just one thing to remember here: if you build an image without tagging it, it will appear in the list of "dangling" images. You can avoid this by providing a tag when you build it.
Edit 2
The command for image purging has changed. Right now the proper command is:
docker image prune
Here is a link to the documentation.
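As a sketch of what the cleanup step in a CI run could look like (the 24h retention window is just an arbitrary choice for this example):
docker image prune -f                             # remove dangling images left by the invalidated COPY layer
docker image prune -a -f --filter "until=24h"     # optionally also remove unused tagged images older than 24 hours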

updating docker image given changes to local filesystem

I am trying to work out how I can update an existing image when I make changes to the local filesystem that was used to create the Docker image. I thought that I could use docker commit to do that, but it seems that that only allows you to change the image when there are changes to the filesystem on a running image?
/app.py
build from file system
sudo docker build -t app .
Now there are local changes to /app.py. How do I change the image app to reflect the changes to /app.py? Right now I'm having to delete the old image and then create a new one:
sudo docker rmi app
sudo docker build -t app .
any help is appreciated!
First of all, there's no running image, only a running container. An image is the deliverable in the Docker world: you build your image and then start a container from it.
As for your problem, I think you have already mentioned your options:
Rebuild your image
Go inside a running container, make changes, and docker commit it back. Personally I only use this approach to fix a tiny problem or make a hotfix when docker build takes a really long time.
Docker uses a union filesystem with copy-on-write to build images, which means that if you want to make a change to an image, you can't change it in place; Docker creates extra layer(s) to reflect your change(s), and in some cases it just keeps using the same image name. From the perspective of delivery, I think it's totally OK to build a new image (with a different tag) for each release; arguably it should be done this way. That's why you have a Dockerfile, and images are not only something you start your containers from: they're actually versioned delivery artifacts, and you can roll back to any version if you want or need to. So I think your current solution is OK.
A few more words here: for local development and testing, you can just mount your /app.py as a volume into your container when you start it, something like docker run -v /path/to/host/app.py:/path/to/container/app.py your_base_image_to_run_app; then anything you change in app.py on your local filesystem will be reflected in the container. When you finish your work, build a new image.
As per your current design, the solution is to create a new image and assign it the same tag.
A better solution is to expose environment variables from the docker image and use those variables to configure app.py, so that you don't need to change the image every time. Only one image is sufficient.
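A minimal sketch of that idea, where APP_GREETING is a made-up variable name that app.py would read at runtime:
docker run -d --name app_container -e APP_GREETING="hello" app
# changing the value only requires restarting the container with a new -e value, not rebuilding the image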

How can I copy a Docker container's configuration when I commit an image?

Ideally everything will be sorted out with a Dockerfile and volumes, but sometimes that isn't practical or convenient.
For example, I found an image with Ghost already set up, and it seemed to work. So I added a few blog entries. Then I realized that I actually needed to modify the config.js to set up the mail.
So I stopped the container, committed it, made some changes in bash, committed again, and then went to start the container again running Ghost. But I had trouble getting it to work because the new image didn't have the configuration with the working directory and environment.
How can I copy the Docker container's configuration when I commit an image? Maybe I need to write a script that runs docker inspect on the container, pulls the config out, and then includes that in the docker commit command line?
This is a known issue: https://github.com/dotcloud/docker/issues/1141
The way you describe is still the best way to achieve that, I think, but I'd try using docker insert and see if that yields better results.
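A rough sketch of the inspect-then-commit approach from the question, assuming the container is named ghost_blog (hypothetical); note that newer Docker releases also accept --change flags on docker commit, which can carry Dockerfile-style instructions into the committed image:
WORKDIR=$(docker inspect --format '{{.Config.WorkingDir}}' ghost_blog)    # pull the working directory out of the container config
docker commit --change "WORKDIR $WORKDIR" --change 'ENV NODE_ENV=production' ghost_blog ghost_blog:configured    # NODE_ENV here is just an example; use whatever docker inspect shows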

Docker: updating an image when the Dockerfile changes

I'm playing with Docker by creating a Dockerfile with some nodejs instructions. Right now, every time I make changes to the Dockerfile I recreate the image by running sudo docker build -t nodejstest . in my project folder. However, this creates a new image each time and will swallow my SSD pretty soon.
Is there a way I can update an existing image when I change the Dockerfile, or am I forced to create a new one each time I make changes to the file?
Sorry if it's a dumb question
Docker builds support caching as long as there is no ADD instruction. If you are actively developing and changing files, only what comes after the ADD will be rebuilt.
Since 0.6.2 (scheduled today), you can do docker build --rm . and it will remove the temporary containers. It will keep the images though.
In order to remove the orphan images, you can check them out with docker images, and perform a docker rmi <id> on one of them. As of now, there is an auto-prune and all untagged images (orphans, previous builds) will be removed.
According to this best practices guide, if you keep the first lines of your Dockerfile the same, Docker will also cache them and reuse the same intermediate images for future builds.
During development, it makes less sense to re-build a whole container for every commit. Later, you can automate building a Docker container with your latest code as part of your QA/deployment process.
Basically, you can choose to make a minimal container that pulls in code (using git when starting the container, or using -v /home/myuser/mynode:/home/myuser/mynode with ENTRYPOINT to run node).
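A rough sketch of the second option, assuming the code lives at /home/myuser/mynode on the host, the official node image is acceptable as a base (the 18 tag is chosen arbitrarily), and app.js is the entry file (all assumptions):
docker run -it --rm -v /home/myuser/mynode:/home/myuser/mynode -w /home/myuser/mynode node:18 node app.js
# edits on the host are picked up immediately; build a proper image once the code stabilizes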
See my answer to this question:
Docker rails app and git
