Rebuild same docker image with only the additional changes in the Dockerfile - docker

I built a docker image using a Dockerfile. After building the image, I made some basic changes to the Dockerfile. Is it possible to rebuild the same image with just the additional changes? Since it takes a very long time to create the image, I don't want to rebuild it completely. Thanks in advance.

All docker builds work the way you describe.
The only thing that needs to be taken into account is layer dependencies.
Consider this Dockerfile:
FROM something
RUN cmd1
RUN cmd2
RUN cmd3
RUN cmd4
If you change cmd1, then that layer and all layers after it will be rebuilt, because their results could differ with respect to the new cmd1.
If you change cmd4, then only that command will be rebuilt, because it does not affect any other layer.
Think about which commands need to run in which order; maybe you can improve cache reuse by reordering the statements.
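For example, if a rarely-changing, expensive step comes before the frequently-changing ones, most rebuilds are served from cache. A minimal sketch (the base image, package, and paths are only illustrative):
FROM ubuntu:22.04
# Expensive and rarely changed: stays cached across most rebuilds
RUN apt-get update && apt-get install -y build-essential
# Frequently changed: keep it last so only this layer is rebuilt
COPY ./src /opt/app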

Yes. If you tag your docker image myimage, just start your other Dockerfile with
FROM myimage
and put your additional changes after it.
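A minimal sketch of such a derived Dockerfile, assuming the original image was tagged myimage (the added step is only illustrative):
FROM myimage
# Everything baked into myimage is reused as-is; only the steps below are built
RUN echo "extra change" >> /etc/motd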

You can't rebuild it with only the changes; you would need to store persistent data on a volume for that.
To save your changes, however, you can use commit:
https://docs.docker.com/engine/reference/commandline/commit/
Create a new image from a container's changes
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
It can be useful to commit a container's file changes or settings into a new image. This allows you to debug a container by running an interactive shell, or to export a working dataset to another server.
Generally, it is better to use Dockerfiles to manage your images in a
documented and maintainable way. Read more about valid image names and
tags.
The commit operation will not include any data contained in volumes
mounted inside the container.
By default, the container being committed and its processes will be
paused while the image is committed. This reduces the likelihood of
encountering data corruption during the process of creating the
commit.
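A sketch of that commit workflow, with illustrative names:
# Start a container and make changes interactively
docker run -it --name mycontainer myimage /bin/bash
# ...edit files inside the container, then exit...
# Save the container's current filesystem as a new image
docker commit mycontainer myimage:patched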

Related

In docker, why can I not see any intermediate images when building an image?

The docker docs said:
The Docker daemon runs the instructions in the Dockerfile one-by-one, committing the result of each instruction to a new image if necessary, before finally outputting the ID of your new image. The Docker daemon will automatically clean up the context you sent.
Regarding the quote above, my questions are:
When is it necessary to commit the result of an instruction to a new image?
If new images are generated, why can I not see any image except the final one when the build process finishes?
Thanks.
As a Dockerfile author, you never need to manually commit anything. Almost every Dockerfile instruction results in a new layer; the only exceptions I can think of are FROM (which starts a new stage in a multi-stage build) and maybe ARG (documented to "not persist into the built image").
docker images doesn't display images that don't have tags but do have additional images built on top of them. docker images -a will display everything. It's usually an implementation detail, though, and the long list of <none> images tends to be more a source of confusion than anything else.
Also note that intermediate image IDs are printed out by docker build as it executes each step, and you should also be able to find them via docker history. Running docker run on an intermediate image is a pretty useful debugging technique if you can tell the input to some step isn't right but you're not sure why.
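For example, with the classic (pre-BuildKit) builder, something along these lines works; the image name and layer ID are placeholders:
# List the layers (and their IDs) that make up the image
docker history myimage
# Open a shell in the layer just before the failing step to inspect it
docker run --rm -it <layer-id> /bin/sh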

Override a volume when building a docker image from another docker image

Sorry if the question is basic, but would it be possible to build a docker image from another one, with a different volume in the new image? My use case is the following:
Start from the image library/odoo (cfr. https://hub.docker.com/_/odoo/)
Upload folders into the volume "/mnt/extra-addons"
Build a new image, tag it, then put it in our internal image repo
How can we achieve that? I would like to avoid putting extra folders into the host filesystem.
Thanks a lot
This approach seems to work best until the Docker development team adds the capability you are looking for.
Dockerfile
FROM percona:5.7.24 as dbdata
MAINTAINER monkey@blackmirror.org
FROM centos:7
USER root
COPY --from=dbdata / /
Do whatever you want. This eliminates the VOLUME issue. Heck, maybe I'll write a tool to automatically do this :)
You have a few options, without involving the host OS that runs the container.
Make your own Dockerfile, inherit from the library/odoo Docker image using a FROM instruction, and COPY files into the /mnt/extra-addons directory. This still involves your host OS somewhat, but may be acceptable since you wouldn't necessarily be building the Docker image on the same host you run it on.
Make your own Dockerfile, as in (1), but use an entrypoint script to download the contents of /mnt/extra-addons at runtime. This would increase your container startup time, since the download would need to take place before running your service, but no host directories would need to be involved.
Personally I would opt for (1) if your build pipeline supports it. That would bake the addons right into the image, so the image itself would be a complete, ready-to-go build artifact.
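A minimal sketch of option (1); the tag and the local extra-addons folder are assumptions about your setup:
FROM odoo:16
# Bake the addons into the image instead of relying on a host volume
COPY ./extra-addons /mnt/extra-addons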

updating docker image given changes to local filesystem

I am trying to work out how I can update an existing image when I make changes to the local filesystem that was used to create the docker image. I thought that I could use docker commit to do that, but it seems that it only allows you to change the image when there are changes to the filesystem of a running image?
/app.py
Build from the file system:
sudo docker build -t app .
Now there are local changes to /app.py. How do I change the image app to reflect the changes to /app.py? Right now I'm having to delete the old image and then create a new one:
sudo docker rmi app
sudo docker build -t app .
Any help is appreciated!
First of all, there's no such thing as a running image, only a running container. An image is the deliverable in the Docker way: you build your image and then start a container from it.
As for your problem, I think you have already mentioned your options:
Rebuild your image
Go inside a running container, make changes, and docker commit it back. Personally I only use this approach to fix a tiny problem or make a hotfix to my image if docker build takes a really long time.
Docker uses a union FS with copy-on-write to build images, which means that if you want to make a change to an image, you can't change it in place: extra layer(s) will be created to reflect your change(s), even if the same image name is reused in some cases. And from the perspective of delivery, it's totally OK to build a new image (with a different tag) for each release; arguably it should be done this way. That's why you have a Dockerfile, and images are not only something you start your containers from: they're actually versioned delivery artifacts, and you can roll back to any version if you want or need to. So I think your current solution is OK.
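A sketch of that versioned-tag idea (the version numbers are illustrative):
docker build -t app:1.0.0 .
# A later release gets a new tag; rolling back is just running the old one
docker build -t app:1.1.0 .
docker run app:1.0.0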
A few more words here: for local development and testing, you can just mount your /app.py as a volume in your container when you start it, something like docker run -v /path/to/host/app.py:/path/to/container/app.py your_base_image_to_run_app. Then anything you change in app.py on your local FS will be reflected in the container. When you finish your work, build a new image.
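A sketch of that development loop (the paths and image name are illustrative):
# Mount the local file over the one in the image; edits show up live
docker run -v "$PWD/app.py":/app/app.py app
# Once the code is stable, bake it into a fresh image
docker build -t app .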
As per your current design, the solution is to create a new image and assign it the same tag.
The best solution is to expose environment variables from the docker image and use those variables to configure app.py, so that you don't need to change the image every time. Only one image is sufficient.
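A sketch of the environment-variable approach; the variable name is only illustrative:
# In the Dockerfile, declare a default
ENV APP_MODE=development
# At run time, override it without rebuilding the image
docker run -e APP_MODE=production app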

Can you share Docker containers?

I have been trying to figure out why one might choose to add every "step" of their setup to a Dockerfile, which will create your container in a certain state.
The alternative in my mind is to just create a container from a simple base image like ubuntu and then (via shell input) configure your container the way you'd like.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
The reason I ask is because I imagine there is some amount of headache involved in porting shell commands, config file changes, etc. to correct Dockerfile syntax and getting them to work correctly. But as a novice with Docker, I could be overestimating the difficulty of that task.
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is for documentation as to the initial state of the container. As opposed to being given a container in a certain state, but not necessarily having a way to know what all was done from the container's image base state.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
Strictly speaking, no. However, you can create a new image from an existing container using the docker commit command:
$ docker commit <container-name> <image-name>
This command will create a new image from the existing container that you can push and pull from/to registries, export and import, and create new containers from.
The reason I ask is because I imagine there is some amount of headache involved with porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and have them work correctly? But as a novice with Docker I could be overestimating the difficulty of that task.
If you're already using some other mechanism for automated configuration, you can simply integrate your existing automation into the Docker build. For instance, if you are already configuring your images using shell scripts, simply add a build step to your Dockerfile that copies your install scripts into the container and executes them. In theory, this can also work with configuration management utilities like Puppet, Salt, and others.
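For instance, a build step along these lines; the script name is an assumption:
FROM ubuntu:22.04
# Reuse the existing shell-based setup during the image build
COPY setup.sh /tmp/setup.sh
RUN /bin/bash /tmp/setup.sh && rm /tmp/setup.sh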
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is for documentation as to the initial state of the container. As opposed to being given a container in a certain state, but not necessarily having a way to know what all was done from the container's image base state.
True. As mentioned in comments, there are clear advantages to have an automated and reproducible build of your image. If you build your containers manually and then create an image with docker commit, you don't necessarily know how to re-build this image at a later point in time (which may become necessary when you want to release a new version of your application or re-build the image on top of an updated base image).

Updating a docker image when the Dockerfile changes

I'm playing with docker by creating a Dockerfile with some nodejs instructions. Right now, every time I make changes to the Dockerfile I recreate the image by running sudo docker build -t nodejstest . in my project folder. However, this creates a new image each time and swallows my SSD pretty soon.
Is there a way I can update an existing image when I change the Dockerfile, or am I forced to create a new one each time I make changes to the file?
Sorry if it's a dumb question.
Docker build supports caching as long as there is no ADD instruction whose files have changed. If you are actively developing and changing files, only what comes after the ADD will be rebuilt.
Since 0.6.2 (scheduled today), you can do docker build --rm . and it will remove the temporary containers. It will keep the images, though.
In order to remove the orphan images, you can list them with docker images and perform a docker rmi <id> on each of them. As of now, there is no auto-prune; all untagged images (orphans, previous builds) have to be removed manually.
According to this best practices guide, if you keep the first lines of your Dockerfile the same, Docker will also cache them and reuse the same layers for future builds.
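For a Node.js project, that usually means copying the dependency manifest and installing packages before copying the rest of the source, so the install layer stays cached; a sketch under the usual conventions (adjust names to your project):
FROM node:18
WORKDIR /app
# These lines rarely change, so they are usually served from cache
COPY package.json package-lock.json ./
RUN npm install
# Source edits only invalidate the layers from here on
COPY . .
CMD ["node", "index.js"]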
During development, it makes less sense to re-build a whole container for every commit. Later, you can automate building a Docker container with your latest code as part of your QA/deployment process.
Basically, you can choose to make a minimal container that pulls in code (using git when starting the container, or using -v /home/myuser/mynode:/home/myuser/mynode with ENTRYPOINT to run node).
See my answer to this question:
Docker rails app and git
