Updating a Docker image when the Dockerfile changes

I'm playing with Docker by creating a Dockerfile with some Node.js instructions. Right now, every time I make changes to the Dockerfile I recreate the image by running sudo docker build -t nodejstest . in my project folder. However, this creates a new image each time and swallows my SSD pretty quickly.
Is there a way I can update an existing image when I change the Dockerfile, or am I forced to create a new one each time I make changes to the file?
Sorry if it's a dumb question

docker build supports caching as long as there is no ADD instruction. If you are actively developing and changing files, only the steps after the ADD will be rebuilt.
Since 0.6.2 (scheduled today), you can do docker build --rm . and it will remove the temporary containers. It will keep the images though.
In order to remove the orphan images, you can check them out with docker images, and perform a docker rmi <id> on one of them. As of now, there is no auto-prune, so untagged images (orphans from previous builds) have to be removed manually.
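On newer Docker versions the same cleanup can be done with the built-in prune command; a minimal example:
# List dangling (untagged) images left over from earlier builds
docker images --filter "dangling=true"
# Remove all of them in one step
docker image prune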

According to this best-practices guide, if you keep the first lines of your Dockerfile the same, Docker will also cache them and reuse the same layers for future builds.
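A minimal sketch of that ordering for a Node.js image, keeping the rarely-changing steps first so their layers stay cached (the base image and file names here are assumptions, not from the question):
# These lines change rarely, so Docker can reuse their cached layers
FROM node:18
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
# The application code changes often; only the steps from here down are rebuilt
COPY . .
CMD ["node", "server.js"]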

During development, it makes less sense to re-build a whole container for every commit. Later, you can automate building a Docker container with your latest code as part of your QA/deployment process.
Basically, you can choose to make a minimal container that pulls in code (using git when starting the container, or using -v /home/myuser/mynode:/home/myuser/mynode with ENTRYPOINT to run node).
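For example, a rough sketch of the bind-mount variant during development (paths and image tag are assumptions):
docker run -it --rm \
  -v /home/myuser/mynode:/home/myuser/mynode \
  -w /home/myuser/mynode \
  node:18 node app.js
Edits to the files under /home/myuser/mynode on the host are visible inside the container immediately, so no rebuild is needed while iterating.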
See my answer to this question:
Docker rails app and git

Related

How to instruct docker or docker-compose to automatically build image specified in FROM

When processing a Dockerfile, how do I instruct docker build to build the image specified in FROM locally using another Dockerfile if it is not already available?
Here's the context. I have a large Dockerfile that starts from base Ubuntu image, installs Apache, then PHP, then some custom configuration on top of that. Whether this is a good idea is another point, let's assume the build steps cannot be changed. The problem is, every time I change anything in the config, everything has to be rebuilt from scratch, and this takes a while.
I would like to have a hierarchy of Dockerfiles instead:
my-apache : based on stock Ubuntu
my-apache-php: based on my-apache
final: based on my-apache-php
The first two images would be relatively static and can be uploaded to dockerhub, but I would like to retain an option to build them locally as part of the same build process. Only one container will exist, based on the final image. Thus, putting all three as "services" in docker-compose.yml is not a good idea.
The only solution I can think of is to have a manual build script that for each image checks whether it is available on Dockerhub or locally, and if not, invokes docker build.
Are there better solutions?
I have found this article on automatically detecting dependencies between Dockerfiles and building them in the proper order:
https://philpep.org/blog/a-makefile-for-your-dockerfiles
The actual Makefile from Philippe's Git repo provides even more functionality:
https://github.com/philpep/dockerfiles/blob/master/Makefile
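For comparison, the "manual build script" fallback mentioned in the question can be as small as this shell sketch, which builds the three images of the hierarchy in dependency order (directory layout and image names are assumptions):
#!/bin/sh
set -e
# Each later Dockerfile is expected to start with FROM <previous image>,
# so building in this order always gives it an up-to-date parent.
docker build -t my-apache ./my-apache
docker build -t my-apache-php ./my-apache-php
docker build -t final ./final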

Docker - Upgrading a base image

I have a base image which is used by 100 applications. All 100 applications have the common base image in their Dockerfile. Now, I am upgrading the base image for an OS upgrade or some other change, bumping up the version, and also tagging it as latest.
Here, the problem is: whenever I change the base image, all 100 applications need to change the base image in their Dockerfiles and rebuild the app to use the latest base image.
Is there any better way to handle this?
Note: I am running my containers in Kubernetes, and each application's Dockerfile is in Git.
You can use a Dockerfile ARG directive to modify the FROM line (see Understand how ARG and FROM interact in the Dockerfile documentation). One possible approach here would be to have your CI system inject the base image tag.
ARG base=latest
FROM me/base-image:${base}
...
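Your CI system would then inject the tag at build time, for example (the tag value and app image name here are only illustrations):
docker build --build-arg base=20191031 -t me/my-app:20191031 .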
This has the risk that individual developers would build test images based on an older base image; if the differences between images are just OS patches then you might consider this a small and acceptable risk, so long as only official images get pushed to production.
Beyond that, there aren't many alternatives to modifying the individual Dockerfiles. You could script it:
# Individually check out everything first
BASE=$(pwd)
TAG=20191031
for d in *; do
  cd "$BASE/$d"
  # Rewrite the FROM line to pin the new base-image tag
  sed -i.bak "s#FROM me/base-image.*#FROM me/base-image:$TAG#" Dockerfile
  git checkout -b "base-image-$TAG"
  git commit -am "Update Dockerfile to base-image:$TAG"
  git push
  hub pull-request --no-edit
done
There are automated dependency-update tools out there too and these may be able to manage the scripting aspects of it for you.
You don't need to change the Dockerfile for each app if it uses base-image:latest. You will still have to rebuild the app images after the base image is updated, and then redeploy the apps so they use the new images.
For example, using advice from this answer.
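One way to do that rebuild is with --pull, which forces Docker to fetch the newest base-image:latest before building, so the rebuilt app image picks up the upgraded base (the app image name here is just an illustration):
docker build --pull -t me/app1:latest .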
whenever I change the base image, all 100 application needs to change the base image in their dockerfile and rebuild the app for using the latest base image.
That's a feature, not a bug; all 100 applications need to run their tests (and potentially fix any regressions) before going ahead with the new image...
There are tools out there to scan all the repos and automatically submit pull requests to the 100 applications (or you can write a custom one, if you don't have just plain "FROM" lines in Dockerfiles).
If you need to deploy the latest version of the base image, then yes, you need to build, tag, push, pull, and deploy each container again. If your base image is not properly tagged, you'll also need to change the Dockerfile in all 100 repositories.
But you have some options, like using sed to replace all occurrences in your Dockerfiles, and executing all the build commands from a shell script that points at every app directory.
With docker-compose you can update all 100 running apps with one command:
docker stack deploy --compose-file docker-compose.yml <stack-name>
but you still need to rebuild the images first.
Edit:
With docker-compose you can also build all 100 images with one command; you need to define all of them in a compose file. Check the docs for the compose file format.
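A rough sketch of what such a compose file could look like for two of the apps (service names and paths are assumptions):
version: "3.7"
services:
  app1:
    build: ./app1
    image: me/app1:latest
  app2:
    build: ./app2
    image: me/app2:latest
Running docker-compose build then rebuilds every image defined in the file.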

How to update my app when I deploy it using docker

I deployed a Node.js app using Docker, and I don't know how to update the deployment after my Node.js app is updated.
Currently, I have to remove the old Docker container and image each time I update the Node.js app.
I was expecting not to have to remove the old image and container every time the Node.js app is updated.
You tagged this "production". The standard way I've done this is like so:
Develop locally without Docker. Make all of your unit tests pass. Build and run the container locally and run integration tests.
Build an "official" version of the image. Tag it with a time stamp, version number, or source control tag; but do not tag it with :latest or a branch name or anything else that would change over time.
docker push the built image to a registry.
On the production system, change your deployment configuration to reference the version tag you just built. In some order, docker run a container (or more) with the new image, and docker stop the container(s) with the old image.
When it all goes wrong, change your deployment configuration back to the previous version and redeploy. (...oops.) If the old versions of the images aren't still on the local system, they can be fetched from the registry.
As needed docker rm old containers and docker rmi old images.
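Concretely, the build/tag/push/deploy cycle above might look like this (the registry, image name, tags, and port are all assumptions):
# Build and push an immutable, versioned image
docker build -t registry.example.com/myapp:20240115 .
docker push registry.example.com/myapp:20240115

# On the production host: start the new version, then retire the old one
docker pull registry.example.com/myapp:20240115
docker run -d --name myapp-20240115 -p 3000:3000 registry.example.com/myapp:20240115
docker stop myapp-20240101 && docker rm myapp-20240101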
Typically much of this can be automated. A continuous integration system can build software, run tests, and push built artifacts to a registry; cluster managers like Kubernetes or Docker Swarm are good at keeping some number of copies of some version of a container running somewhere and managing the version upgrade process for you. (Kubernetes Deployments in particular will start a copy of the new image before starting to shut down old ones; Kubernetes Services provide load balancers to make this work.)
None of this is at all specific to Node. As far as the deployment system is concerned there aren't any .js files anywhere, only Docker images. (You don't copy your source files around separately from the images, or bind-mount a source tree over the image contents, and you definitely don't try to live-patch a running container.) After your unfortunate revert in step 5, you can run exactly the failing configuration in a non-production environment to see what went wrong.
But yes, fundamentally, you need to delete the old container with the old image and start a new container with the new image.
Copy the new version to your container with docker cp, then restart it with docker restart <name>
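For example (container name and paths are assumptions; note this changes only the running container, not the image it was created from):
docker cp ./build/. mynode:/usr/src/app
docker restart mynode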

How to catch changes in building docker image?

I am new to Docker containers. I ran into a problem with picking up changes to my code. Some lines in my local style.css file were changed, and then I built the Docker image again, but nothing had changed when I browsed my app.
Here are some methods I found online and tried, but they didn't work:
remove image and build again
--no-cache=true
add a comment in Dockerfile to make it different
docker system prune
--pull
(I also used git pull to get the code onto my cloud instance; I checked that these files were the latest.)
I know little about Docker's mechanisms; could anyone tell me what the problem is?
Extra info I found:
After stopping the container and removing the image, I restart my instance, then build the image and run the container again. Only in this way can I pick up those changes. Does anyone know what the problem is?
Many thanks!
There appears to be a disconnect on the difference between a container and an image. The container is the running instance of your application. It is based on an image that you have already built. That image has a tag for easier referencing, but the real reference to the image is a sha256 hash and building a new image will change the hash that the tag points to without impacting any of your running containers.
Therefore, the workflow to update your running application in docker is to:
Build a new image
Stop the running container
Start a new container pointing to that image
Cleanup any old images or stopped containers
If you are using docker-compose, it automates the middle two steps with a docker-compose up command, and that even deletes the old container. Most users keep a few copies of older images to allow easy rollback.
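A minimal command sketch of that workflow (container and image names are assumptions):
docker build -t myapp:latest .            # 1. build a new image
docker stop myapp && docker rm myapp      # 2. stop and remove the running container
docker run -d --name myapp myapp:latest   # 3. start a new container from the new image
docker image prune -f                     # 4. clean up dangling images from old builds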

updating docker image given changes to local filesystem

I am trying to work out how I can update an existing image when I make changes to the local filesystem that was used to create the Docker image. I thought I could use docker commit to do that, but it seems that only lets you change the image when there are changes to the filesystem of a running container?
/app.py
Build from the file system:
sudo docker build -t app .
Now there are local changes to /app.py. How do I change the image app to reflect the changes to /app.py? Right now I'm having to delete the old image and then create a new one:
sudo docker rmi app
sudo docker build -t app .
any help is appreciated!
First of all, there's no such thing as a running image, only a running container. An image is the deliverable in the Docker world: you build your image and then start a container from it.
As for your problem, I think you have already mentioned your options:
Rebuild your image
Go inside a running container, make your changes, and docker commit it back. Personally, I only use this approach to fix a tiny problem or make a hotfix to my image when docker build takes a really long time.
Docker uses a union filesystem with copy-on-write to build images, which means that if you want to make a change to an image, you can't change it in place; Docker creates extra layer(s) to reflect your change(s), and in some cases it just keeps using the same image name. From a delivery perspective, I think it's totally fine to build a new image (with a different tag) for each release, and arguably it should be done this way: that's why you have a Dockerfile, and images are not just something you start your containers from, they're versioned delivery artifacts that you can roll back to if you need. So I think your current solution is OK.
A few more words here: for local development and testing, you can just mount your /app.py as a volume into the container when you start it, something like docker run -v /path/to/host/app.py:/path/to/container/app.py your_base_image_to_run_app. Then anything you change in app.py on your local filesystem is reflected inside the container. When you finish your work, build a new image.
As per your current design, the solution is to create a new image and assign it the same tag.
A better solution is to expose environment variables from the Docker image and use those variables to configure app.py, so that you don't need to change the image every time. A single image is then sufficient.
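A sketch of that idea (the variable name is an assumption): in the Dockerfile, define a default that app.py reads from the environment at startup,
ENV APP_MESSAGE=hello
and at run time override it without rebuilding the image:
docker run -e APP_MESSAGE=bonjour app
This keeps a single image while still letting behaviour vary per deployment.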
