Disable cache for Docker Compose

I've met the case where Dockerfile looks like this:
FROM image with fully-configured server (without application)
COPY war created locally by IntelliJ inside server in Docker image
Basically, every time I start a container from this Dockerfile, Docker creates a new image. Because this .war file changes often (that is the whole point of using Docker here: easy deployment of .war files during development), I end up with a lot of unused images containing older versions of the application. This causes disk space problems and I have to manually prune all the outdated images.
Is there any way to disable Docker caching? I'm using a set of servers wired together with a docker-compose file, so maybe it can somehow manage those images and automatically remove them when they are no longer needed?
docker build has a --no-cache parameter, but it only invalidates the cache for every layer (every command is always executed, but the result is still saved in the image/layer store). --force-rm is not working for me either.

As the Docker documentation says, if the file changes on every docker build, the cache is invalidated anyway because the file's checksum changes. One option is to remove the COPY instruction from the Dockerfile and use a volume instead, mounting the .war on every docker run, so a fresh .war is picked up on every start without building a new image.
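A minimal sketch of that approach, assuming a Tomcat-based server image and a locally built app.war (image name and paths are illustrative):
docker run -d \
  -v "$PWD/target/app.war:/usr/local/tomcat/webapps/app.war" \
  -p 8080:8080 \
  my-configured-server
Since the image itself never changes, no new images pile up; only the mounted .war is swapped between runs.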

Related

How to include files outside of build context without specifying a different Dockerfile path?

This is basically a follow-up question to How to include files outside of Docker's build context?: I'm using large files in all of my projects (several GBs) which I keep on an external drive, only used for development.
I want to COPY or ADD these files to my Docker container when building it. The answer linked above allows one to specify a different path to a Dockerfile, effectively extending the build context. I find this impractical, since it would require setting the build context to the system root (?) just to include a single file.
Long story short: Is there any way or workaround to include a file that is far removed from the docker build context?
Three suggestions on things you could try:
include a file that is far removed from the docker build context?
You could construct your own build context by copying (cp or tar) files on the host into a dedicated directory tree. You don't have to use the actual source tree or your build tree.
rm -rf docker-build
mkdir docker-build
cp -a Dockerfile build/the-binary docker-build
cp -a /mnt/external/support docker-build
docker build ./docker-build
# reads docker-build/Dockerfile, and the files in the
# docker-build directory, but nothing else; only sends
# the docker-build directory to Docker as the build context
large files [...] (several GBs)
Docker doesn't deal well with build contexts this large. In the past I've seen docker build take a long time just on the step of sending the build context to the daemon, and docker push and docker pull run into network issues when trying to move gigabyte-plus layers around.
It's a little hacky and breaks the "self-contained image" model a little bit, but you can provide these files as a Docker bind-mount instead of including them in the image. Your application needs to know what to do if the data isn't there. When you go to deploy the application, you also need to separately distribute the files alongside the Docker image and other deployment artifacts.
docker run \
  -v /mnt/external/support:/app/support \
  ... \
  the-image-without-the-support-files
only used for development
Potentially you can get away with not using Docker at all during this phase of development. Use a local source tree and local development tools; run your unit tests against these large test fixtures as needed. Build a Docker image only when you're about to run pre-commit integration tests; that may be late enough in the development cycle that you don't need these files.
I think the main thing you are worried about is that you do not want to send all the files in a directory to the Docker daemon while it builds the image.
When the directory is that big (several GBs), it takes a long time to build an image.
If the requirement is just to use those files while you build something inside Docker, you can mount them into a container instead.
A tricky way
Run a container from the base image and mount the directories inside it: docker run -d -v local-path:container-path BASE_IMAGE
Get inside the container docker exec -it CONTAINER_ID bash
Run build step ./build-something.sh
Create image from the running container docker commit CONTAINER_ID
Tag the image: docker tag IMAGE_ID tag:v1. You can get the image ID from the previous command.
From a long-term perspective this method may seem very tedious, but if you only need to build the image once or twice, it is worth a try.
I tried this for one of my Docker images, as I wanted to avoid sending a large number of files to the Docker daemon during the image build. Put together, the sequence might look like the sketch below.
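A rough sketch of the whole sequence; the base image, mount path, build script, container ID, and tag are all illustrative placeholders:
# start a container from the base image, with the big directory mounted
docker run -d -v /mnt/external/support:/data base-image sleep infinity
# run the build step inside it
docker exec -it CONTAINER_ID ./build-something.sh
# snapshot the container as a new image, then tag it
docker commit CONTAINER_ID
docker tag IMAGE_ID my-app:v1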
The COPY command takes source and destination values;
just specify the full absolute path to your hard drive mount point as the source directory:
COPY /absolute_path/to/harddrive /container/path

How can I update a Docker image after changing a few lines of code in my app?

I just built a Docker image using the command below in my app's working directory:
docker build -t imagename:latest .
The Docker image builds successfully after a few minutes, and the application runs as well once I use the command below:
docker run -p portnumber:portnumber imagename:latest
But now I want to update 2 lines of code in my application codebase. Suppose I've added the code and want to see whether my application still works; how can I do that? Do I need to follow the steps below?
1. Delete the Docker image
2. Rebuild the image using the above command
3. See if the app is working or not using the "docker run" command?
I want to know how I can update my Docker image. My Dockerfile is the same and there won't be any changes to it. I don't want to rebuild the whole Docker image again, because the installed packages alone are around 2 GB. Can anyone tell me what I should do next? Thanks in advance.
OS: Ubuntu
Application framework: Streamlit
Although you asked specifically how to update (rebuild) your docker image, it is my guess that you are in fact in need of a different solution.
If you are developing against a dockerized version of your application (which is good), it is impractical to rebuild the image for every change you make to your code.
A better, and more common approach, is to mount your local folder into the container, so the running container and your local machine actually share a folder.
This way you can just edit your code, and it is reflected in the container immediately.
So, your docker run command might look something like this:
$ docker run -v $PWD:/path/to/app/in/container -p PORT:PORT IMAGE_NAME
Read more about docker volumes.
Read more about docker for development environments.
Read about using docker-compose for development.
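Since the last link mentions docker-compose, a minimal docker-compose.yml for that kind of development setup might look like this (service name, mount path, and port are illustrative; 8501 is Streamlit's default port):
version: "3"
services:
  app:
    image: imagename:latest
    ports:
      - "8501:8501"
    volumes:
      - .:/app        # mount the local source into the container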
Rebuilding your Docker image might not be as much of a hassle as you think!
When you build an image, each RUN, COPY, or ADD instruction in your Dockerfile becomes a layer of the image. When you rebuild, only the layers whose instructions (or input files) changed are rebuilt, together with the layers that come after them, provided you do not delete the old image so its layers are still in the cache.
If you try it, you should see only one or two layers of your image being rebuilt (along with those after them).
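A sketch of a Dockerfile ordered to take advantage of this, assuming a Streamlit app with a requirements.txt (file names and base image are illustrative):
FROM python:3.11-slim
WORKDIR /app
# dependencies change rarely, so this expensive layer stays cached
COPY requirements.txt .
RUN pip install -r requirements.txt
# only the layers from here down are rebuilt when the code changes
COPY . .
CMD ["streamlit", "run", "app.py"]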
An alternative would be not to put your code into the build at all and instead insert it into the container with a volume at runtime. Depending on your use case, it could be an option, but it is quite a different workflow and might not apply.

How does Docker detect which changes should be saved and which not?

I know that when we stop a Docker container our changes are lost. There are many answers on how to prevent this: commit each time. The idea is that when Docker runs, it spins up a fresh container based on the image. On the other hand, a container does persist some data after it exits, unless you start it with --rm.
Just to simplify:
If you run apt-get install vim, you must commit to save the change;
BUT if you change nginx.conf or upload a new file to HDFS, you do not lose the data.
So, just curious:
How does Docker know what to save and what not? For example, at the end of apt-get install we have new files on the filesystem. The same happens when I upload a new file; for the container/image there is no difference, right? It's just an I/O modification. So how does Docker know which modifications should be saved when we stop the container?
The basic rules here:
Anything you explicitly store outside the container — a database, S3 — will outlive the container.
If you attach a volume to the container when you create the container using a docker run -v option or a Docker Compose volumes: option, any data written to that directory outlives the container. (If it’s a named volume, it lasts until you docker volume rm it.)
Anything else in the container filesystem is lost as soon as you docker rm the container.
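A quick sketch of the volume rule in action (volume name, paths, and image are illustrative):
docker run -d --name web -v app-data:/var/lib/app/data some-image
docker rm -f web                                   # the container's own filesystem is gone
docker run -d --name web2 -v app-data:/var/lib/app/data some-image
# /var/lib/app/data still contains whatever the first container wrote there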
If you need things like your application source code or a helper tool installed in an image, write a Dockerfile to describe how to build the image and run docker build. Check the Dockerfile into source control alongside your application.
The general theory of working with Docker is that you always start from a clean slate. When you docker build an image, you start from a base image and install your application into it; you never try to upgrade an installed application. Similarly, when you docker run a container, you start from a fresh copy of its image.
So the clearest answer to the question you ask is really, if you consistently docker rm a container when you stop it, when you docker run a new container, it will have the base image plus the content from the mounted volumes. Docker will never automatically persist anything outside of this.
You should never run docker commit: this leads to magic images that can’t be recreated later (in six months when you discover a critical security issue that risks taking your site down). Similarly, you should never install software in a running container, because it will be lost as soon as the container exits; add it to your Dockerfile and rebuild.
For any container on the Docker platform, all generated data is temporary by default: nothing persists unless you have mounted part of the filesystem or attached volumes to the container.
If you are finding that nginx.conf is being reused even after changes, I would suggest checking which directories you are mounting or mapping to Docker volumes.
The nginx configuration resides at /etc/nginx/conf.d/*, and you might be mapping a volume onto this directory. If you make changes in a running container and then remove the container, the data still persists because it was written to the volume rather than only to the container's writable layer. If you later deploy a new container with the same volume mapping, you will find that all the changes you made earlier are reflected in the newer container as well.
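For instance, a mapping like this (the host path is illustrative) would make configuration changes outlive any single container:
docker run -d -v /srv/nginx/conf.d:/etc/nginx/conf.d nginx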

How to run docker-compose with docker image?

I've moved my docker-compose container from the development machine to a server using docker save image-name > image-name.tar and cat image-name.tar | docker load. I can see that my image is loaded by running docker images. But when I want to start my server with docker-compose up, it says that there isn't any docker-compose.yml. And indeed there is no .yml file there. So what should I do about this?
UPDATE
When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
What you achieve with docker save image-name > image-name.tar and cat image-name.tar | docker load is that you put a Docker image into an archive and extract the image on another machine after that. You could check whether this worked correctly with docker run --rm image-name.
An image is just like a blueprint you can use for running containers. This has nothing to do with your docker-compose.yml, which is just a configuration file that has to live somewhere on your machine. You would have to copy this file manually to the remote machine you wish to run your image on, e.g. using scp docker-compose.yml remote_machine:/home/your_user/docker-compose.yml. You could then run docker-compose up from /home/your_user.
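For context, a minimal docker-compose.yml that simply references the already-loaded image might look like this (service name and port are illustrative):
version: "3"
services:
  web:
    image: image-name
    ports:
      - "8080:8080"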
EDIT: Additional info concerning the updated question:
UPDATE: When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
Personally, I have never used this approach of transferring a Docker image (but it's cool, I didn't know about it). What you would typically do is push your image to a Docker registry (either the official Docker Hub or a self-hosted registry) and then pull it from there.
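Sketched out, with an illustrative registry host and tag:
# on the development machine
docker tag image-name registry.example.com/myteam/image-name:1.0
docker push registry.example.com/myteam/image-name:1.0
# on the server (or let docker-compose pull it for you)
docker pull registry.example.com/myteam/image-name:1.0
Your docker-compose.yml on the server then just references that tag in its image: field.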

How to delete files sent to Docker daemon build context

I ran this command in my home directory:
docker build .
and it sent 20 GB of files to the Docker daemon before I knew what was happening. I have no space left on my laptop. How do I delete the files that were replicated? I can't locate them.
What happens when you run docker build . command:
The Docker client looks for a file named Dockerfile in the directory where your command runs. If that file doesn't exist, an error is thrown.
The Docker client looks for a file named .dockerignore. If that file exists, the Docker client uses it in the next step; if it doesn't, nothing happens.
The Docker client makes a tar package called the build context. By default, it includes everything in the same directory as the Dockerfile. If there are ignore rules in the .dockerignore file, the Docker client excludes the files those rules specify.
The Docker client sends the build context to the Docker engine, also known as the Docker daemon or Docker server.
The Docker engine receives the build context and starts building the image, step by step as defined in the Dockerfile.
After the image build is done, the build context is released.
So, your build context is not replicated anywhere except in the image you just created, and only if the image actually needs all of it. You can check image sizes by running docker images. If you see unused or unnecessary images, remove them with docker rmi unusedImageName.
If your image doesn't need everything in the build context, I suggest using .dockerignore rules to reduce the build context size. Exclude everything that is not necessary for the image. This way the build will be shorter, and you will also notice any misconfigured COPY or ADD steps in the Dockerfile.
For example, I use something like this:
# .dockerignore
# exclude everything
*
# include just what I need in the image
!build/libs/*.jar
https://docs.docker.com/engine/reference/builder/#dockerignore-file
https://docs.docker.com/engine/docker-overview/
Likely the space is being used by the resulting image. Locate and delete it:
docker images
Look for it in the SIZE column there.
Then delete it:
docker rmi <image-id>
You can also delete everything Docker-related:
docker system prune -a
If the build was interrupted for some reason, you can also go to /var/lib/docker/tmp/ (with root access) and delete the Docker builder's temporary files. In that situation the build context wasn't processed completely, so the partial context that was transferred is left behind as a temporary file in /var/lib/docker/tmp/.
