How to speed up image pulling for a k8s cluster - docker

I have an application that can start a pod on any node of the cluster, but if that node doesn't have the image yet, it is downloaded on first use and that takes a lot of time (the image is around 1 GB and takes more than 3 minutes to download). What is the best practice to solve this kind of issue? Pre-pull the image, or share the docker image via NFS?

Deploy a private Docker registry of your own.
https://docs.docker.com/registry/deploying/
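A rough sketch of what the linked guide does, in case it helps (my-app:1.0 is just a placeholder; in a cluster the nodes would use the registry host's address instead of localhost):

# Run a local registry, then tag and push your image to it
docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag my-app:1.0 localhost:5000/my-app:1.0
docker push localhost:5000/my-app:1.0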

Try to reduce the size of the image.
This can be done a few ways depending on your project. For example, if you are using Node you could use FROM node:11-alpine instead of FROM node:11 for a significantly smaller image. Also, make sure you are not putting build files inside the image. Languages like C# and Java have separate build and runtime images. For example, use java-8-jdk to build your project but use java-8-jre for your final image, as you only need the runtime.
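For the Java case, a multi-stage build keeps the JDK and build tools out of the final image. This is only a sketch under assumed image tags and paths (maven:3-jdk-8, openjdk:8-jre-alpine, target/app.jar); adjust it to your project:

# Build stage: full JDK plus build tooling
FROM maven:3-jdk-8 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

# Runtime stage: only the JRE and the built artifact
FROM openjdk:8-jre-alpine
COPY --from=build /src/target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]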
Good luck!

Continuing anskurtis-streutker's answer, this page explains in detail how to create smaller images.
Build smaller images - Kubernetes best practices

You can give pod disruption budgets a try. With one in place you keep a minimum number of replicas of your app available during disruptions, so the image download time on a fresh node shouldn't take your app down.
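For example, a minimal budget created with kubectl (names and labels are placeholders):

# Keep at least one replica of "my-app" available during voluntary disruptions
kubectl create poddisruptionbudget my-app-pdb --selector=app=my-app --min-available=1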
Regards

Related

How to Dockerize multiple scripts that share requirements.txt and need frequent update

I'm new to Docker so I want to find best practices for my specific problem.
PROBLEM:
I have 6 python web-scraping scripts that run on the same libraries (same requirements.txt).
My scripts need frequent updating (a few times per week).
Also, my scripts read from and write to excel files, and I need to be able to update those excel files from time to time.
SOLUTIONS?
Do I really need 6 images and 6 containers even though my containers will have the same libraries? I find it time consuming to delete the container and image every time I update my code.
For accessing my excel files, I read about VOLUMES and I intend to implement them. Is that a good solution?
Do I really need 6 images and 6 containers even though my containers will have the same libraries?
It depends on technical possibility and personal preference. If you find a good, maintainable way to run all scripts in one Docker container, there's no reason you cannot do it. You could easily use a cron-like solution such as this image.
There are advantages to keeping Docker images single-purpose, though. One of them is clear isolation. If one of your scripts fails to run, you'll have one failing container only and five others that still run successfully. Plus you have full transparency over what exactly fails where.
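Either way, you don't need 6 different images: all scripts can share one image that installs requirements.txt once, and each container simply runs a different script. A rough sketch (image, tag and script names are made up):

# One image for all scrapers; the shared dependency layer is cached across rebuilds
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENTRYPOINT ["python"]
CMD ["scraper_one.py"]

Build it once with docker build -t my-scrapers . and start each scraper as docker run my-scrapers scraper_two.py and so on.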
I find it time consuming to delete the container and image every time I update my code.
I would propose using a CI pipeline for things like this. The pipeline would automatically build the images on a push, publish them to a registry and recreate the containers/services on your server.
For accessing my excel files, I read about VOLUMES and I intend to implement them. Is that a good solution?
Yes, that's what volumes were made for: Accessing and storing data that isn't part of your image.
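Continuing the sketch above (paths and names are still made up):

# Bind-mount a host folder holding the excel files into the container
docker run -v "$(pwd)/excel:/app/excel" my-scrapers scraper_one.py

# Or use a named volume that Docker manages for you
docker volume create scraper-data
docker run -v scraper-data:/app/excel my-scrapers scraper_one.py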

Tool to view tree of docker image layers

Is there any way to look at the layers of a set of docker images in a tree fashion? It would help to examine whether any siblings serve the same purpose and one can be replaced with the other.
Earlier one used to be able to do this using docker images --tree. However that is no longer available in the latest versions of docker.
Here is an external tool that can help achieve the same visualization
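As an aside, the built-in docker history command still lists the layers of a single image (not a tree across images); the image name below is a placeholder:

# Show each layer of an image with the instruction that created it and its size
docker history your-image:tag
# Show the full, untruncated commands
docker history --no-trunc your-image:tag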

Outsource source code in docker-compose to use minimal disk space

I am using docker successfully in the dev environment and want to use it for staging and prod too.
I am developing a web application with Symfony where the code is mounted from the local machine into the docker container. For staging and prod I want to "bake" the source code into the image, because there's no need to change it anymore at that point.
At the moment my services "php" and "nginx" need access to the src files. For staging/prod I would create an extra volume called "src" and mount it to both services. In one of the services (nginx/php) I would add a COPY command to copy the source code into the mounted "src" volume at build time.
The problem now is the following:
Whenever a new version of my code exists, the whole image has to be rebuilt ... the smallest image (nginx) has a size of 200MB. So every time I want to update only my code (just 10MB), the whole 200MB image has to be rebuilt ...
In addition, I want to check all builds into a repository.
That is quite expensive time-wise ...
My thought is the following:
Is it possible to rebuild only the data volume "src" on each code update (triggered through a Jenkins build job) and check that in?
I think there is no need to rebuild rarely changing pieces like php/nginx/mysql on every build ...
Or is there another approach?
Initially having 1.5GB for all needed services is quite ok, but adding another 200MB to the repository for each version is too heavy.
Thanks
First of all, the approach you are following is definitely a bad practice. A docker container should be portable and self-contained. Relying on data volumes that are bound to the host machine will make your container not portable.
By design containers should package all of the dependencies needed to run the application. You should thus add the source to each image if the source code is a dependency that must be provided.
You should investigate other options to make the image size smaller. Depending on the programming language you are using, it is possible to compile/compress the source code and have a smaller binary for instance that can be copied into the image.
One final note is that using very different approaches to deploy between environments (dev/staging/prod) is usually a bad idea. It is much preferable to have similar deployment strategies to avoid unexpected errors.
If you set up your Dockerfile properly (see docs) so you are adding the code last, it should be a pretty quick operation to update as all the other unchanged layers will be cached. This is pretty common practice as part of a Docker workflow.
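For a Symfony app that could look roughly like this (base image, extensions and paths are only assumptions, adjust to your setup):

# Rarely changing layers first: PHP extensions and composer dependencies
FROM php:7.4-fpm
RUN docker-php-ext-install pdo_mysql opcache
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
WORKDIR /var/www/app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader

# Frequently changing layer last: a code update only rebuilds from here down
COPY . .
RUN composer dump-autoload --optimize

With that ordering, a new version only produces (and pushes) the small code layers; the registry and the servers already have everything above them.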
You can use this same image for your local development and mount your working code over the code in the container for active development. As long as that exact same code is used to rebuild your images, you should maintain consistency. You could optimize further by choosing which parts of your code are likely to change and order your build accordingly.
You may also want to look into multi-stage build process where you can further optimize your base image and reduce final image size.

What are the advantages of having layers in a docker image?

Let's say I have two different Dockerfiles.
Image one called nudoc/my-base-image:1.1
FROM ubuntu:16.10
COPY test.war /test.war
Image two called nudoc/my-testrun-image:1.1
FROM nudoc/my-base-image:1.1
CMD /test/start.sh
Both images have the base layers in common.
What are the advantages of having layers in a docker image? Do they provide any benefit when pulling from the registry?
As Henry already stated
Common layers are downloaded only once and are stored only once. So this has benefits for download as well as storage.
Additionally, building an image will reuse layers if the creating command allows. This reduces the build time. For example, if you copy a file into your image and the file is the same as in the last build, the old layer will be reused. See the best practices for writing Dockerfiles for more details.
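A small illustration of that caching behaviour (the installed package is just an example):

FROM ubuntu:16.10
# Reused from the build cache on every rebuild as long as this line is unchanged
RUN apt-get update && apt-get install -y default-jre
# Only this layer is rebuilt (and re-pushed) when test.war changes
COPY test.war /test.war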
Common layers are downloaded only once and are stored only once. So this has benefits for download as well as storage.
A layer will be downloaded once, and possibly reused for other images. You can see them as intermediate images, and those intermediate images are combined together to create a bigger one.
In continuous integration, this can save quite some time!
I suggest you read the official documentation page: https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/
Docker's layered storage drivers (aufs historically, overlay2 by default on current versions) treat each instruction in your Dockerfile as an individual layer. If you add or update an instruction, only the affected layer and the ones after it are rebuilt, which lets you build, reuse and update your Docker images quickly. To learn more about layers and images, read here

Dockerfile vs Docker image

I'm working on creating some docker images to be used for testing on dev machines. I plan to build one for our main app as well as one for each of our external dependencies (postgres, elasticsearch, etc). For the main app, I'm struggling with the decision of writing a Dockerfile or compiling an image to be hosted.
On one hand, a Dockerfile is easy to share and modify over time. On the other hand, I expect that advanced configuration (customizing application property files) will be much easier to do in vim before simply committing a new image.
I understand that I can get to the same result either way, but I'm looking for PROS, CONS, and gotchas with either direction.
As a side note, I plan on wrapping this all together using Fig. My initial impression of this tool has been very positive.
Thanks!
Using a Dockerfile:
You have an 'audit log' that describes how the image is built. For me this is fundamental if it is going to be used in a production pipeline where more people are working and maintainability should be a priority.
You can automate the building process of your image, being an easy way of updating the container with system updates, or if it has to take part in a continuous delivery pipeline.
It is a cleaner way of creating the layers of your image (each Dockerfile command is a different layer).
Changing a container and committing the changes is great for testing purposes and for fast development for a conceptual test. But if you plan to use the result image for some time, I would definitely use Dockerfiles.
Apart from this, if you have to modify a file and doing it with bash tools (awk, sed...) gets very tedious, you can add any file you wish from outside during the build process.
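For comparison, a rough sketch of the two workflows (image and container names are made up):

# Interactive workflow: tweak a running container with vim, then snapshot it
docker run -it --name app-config my-app:base bash
#   (edit the property files inside the container, then exit)
docker commit app-config my-app:custom

# Dockerfile workflow: the same change expressed as a repeatable build step, e.g.
#   COPY application.properties /opt/app/config/
docker build -t my-app:1.0 .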
I totally agree with Javier, but you need to understand that an image created with a Dockerfile can be different from an image built with the same version of the Dockerfile one day later.
Maybe in your build process you automatically retrieve the latest updates of an app or the OS, etc.
And in that case, if you need to reproduce a crash or similar, you can't rely on the Dockerfile alone.
