Local Docker repos keep getting bigger and bigger

I'm taking my first steps with Docker repos in Artifactory (5.1.3) and there's something that scares me a little.
I pushed different tags of the same Docker image (about 500 MB) to a repo.
I'd expected the storage use and size of the repo to stay at about 500 MB.
But with 5 image versions in it, for example, the repo is about 2.5 GB in size.
Also, the "Max Unique Tags" setting in the local Docker repo settings has no effect - I set it to 3 but nothing is deleted; there are still 5 versions.
With this behaviour we will easily fill our storage system by the end of the month - did I miss something, or is this Docker support in Artifactory still beta?

Artifactory is physically storing the layers for those tags only once, so the actual storage being used should be ~500MB (deduplication).
The reported size you are seeing in the UI (artifacts count / size) is the amount of physical storage that would be occupied if each artifact was a physical binary (not just a link). Since deduplication can occur between different repositories, there is no good way of reporting the physical storage size per repository (one image/tag/layer can be shared between multiple repositories).
In the Storage Summary page you can see both the physical storage size used by Artifactory and how much you gained by deduplication.
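If you want to check the real numbers outside the UI, the Storage Summary information is also exposed over the REST API. A minimal sketch, assuming an admin user and a placeholder host name (the exact base path depends on how your Artifactory instance is mounted):

    # Returns the storage summary: deduplicated binaries size, logical artifacts size and the optimization ratio
    curl -u admin:password "https://artifactory.example.com/artifactory/api/storageinfo"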

Related

What exactly are Binaries in Artifactory

In Artifactory, under the Storage in Monitoring tab, the Binaries size is 1.57 GB, whereas the Artifacts size is just 6.27 MB. What exactly are Binaries in Artifactory, as they are taking a lot of Storage.
Is it possible to delete these Binaries without affecting the Artifacts?
Binaries size = the sum of the sizes of all the binaries you uploaded.
Artifacts size = the sum of the sizes of all the artifacts stored.
Optimization is the gain provided by checksum-based storage, which in your case is 25617%.
It appears you are uploading the same binary multiple times to different folders, hence the situation above.

Docker image size doesn’t make sense

I've created an ubuntu:bionic-based image on my computer. It was originally very large, but I deleted about 80% of the content by running a container and then committing it. If I go to the root directory and run "du -sh", it reports 4.5 GB of disk usage. Curiously enough, "docker images" shows the image as 11 GB. After pushing to Docker Hub, I see that it's 3.34 GB, so I thought perhaps something was cleaned up before compressing. I ran the new image, deleted some more content, committed, and pushed again. This time "du -sh" said 3.0 GB, "docker images" still said 11 GB, and Docker Hub still showed 3.34 GB. Clearly it is compressing the 11 GB image and not the 3.0 GB of content I'm expecting. Is there an easy way to "clean up" the image?
Docker images are built from layers. When you add a new layer, it doesn't remove the previous layers; it just adds a new one on top, rather like a new Git commit: the history is still there.
That means that when you deleted the content, you only made it invisible; it is still there in the earlier layers.
You can see the layers and their sizes with docker history yourimagename.
Your options:
Make sure files you don't need don't make it in the first place, e.g. with .dockerignore.
Use a multi-stage build to create a new image from the old one containing only the files you need (a minimal sketch follows below). https://docs.docker.com/develop/develop-images/multistage-build/
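As a rough illustration of that second option, a multi-stage build can copy just the files you still need out of the bloated image into a fresh base. A minimal sketch, where the image and path names are only placeholders:

    # Sketch: throwaway stage based on the old, oversized image
    FROM my-big-image:latest AS old

    # Fresh base without the deleted-but-still-present layers
    FROM ubuntu:bionic
    # Copy only the directories you actually need from the old image
    COPY --from=old /opt/app /opt/app

Because the new image contains only the copied files, the layers holding the deleted content are left behind.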

How to speed up pulling image speed for k8s cluster

I have an application that will start a pod on any node of the cluster, but if that node doesn't have the image yet, it downloads it the first time, which takes a long time (the image is around 1 GB and takes more than 3 minutes to download). What is the best practice to solve this kind of issue? Pre-pulling the image, or sharing the Docker image via NFS?
Deploy a private Docker registry close to your cluster.
https://docs.docker.com/registry/deploying/
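A minimal sketch following that guide (the host name, port and image names are placeholders):

    # Run a registry container on a machine on your local network
    docker run -d -p 5000:5000 --restart=always --name registry registry:2
    # Tag and push your image so cluster nodes can pull it over the local network
    docker tag my-app:1.0 registry.example.local:5000/my-app:1.0
    docker push registry.example.local:5000/my-app:1.0

Keep in mind the nodes must trust that registry (proper TLS certificates, or an insecure-registry exception in the daemon configuration) before they can pull from it.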
Try to reduce the size of the image.
This can be done in a few ways depending on your project. For example, if you are using Node you could use FROM node:11-alpine instead of FROM node:11 for a significantly smaller image. Also, make sure you are not putting build files inside the image. Languages like C# and Java have separate build and runtime images. For example, use java-8-jdk to build your project but use java-8-jre for your final image, as you only need the runtime.
Good luck!
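To make that build/runtime split concrete, here is a hedged multi-stage sketch for a Java project; the Maven layout, image tags and jar name are assumptions, not part of the original answer:

    # Build stage: full JDK plus build tooling
    FROM maven:3-jdk-8 AS build
    WORKDIR /src
    COPY . .
    RUN mvn -q package

    # Runtime stage: JRE only, so the final image stays small
    FROM openjdk:8-jre-alpine
    COPY --from=build /src/target/app.jar /app/app.jar
    CMD ["java", "-jar", "/app/app.jar"]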
In continuation to kurtis-streutker's answer above, this page explains in detail how to create smaller images:
Build smaller images - Kubernetes best practices
You can give pod disruption budgets a try. With them you can achieve high availability in your app, and the download time shouldn't be a problem.
Regards

Outsource source code in docker-compose to use minimal disk space

I am using Docker successfully in my dev environment and now want to use it for staging and prod too.
I am developing a web application with Symfony where the code is mounted locally into the Docker container. For staging and prod I want to "bake" the source code into the image, because there's no need to change it anymore at that point.
At the moment my services "php" and "nginx" need access to the src files. For staging/prod I would create an extra volume called "src" and mount it into both services. In one of the services (nginx/php) I would add a COPY command to copy the source code into the mounted "src" volume at build time.
The problem now is the following:
Whenever a new version of my code exists, the whole image has to be rebuilt ... the smallest image (nginx) has a size of 200 MB. So every time I want to update only my code (just 10 MB), the whole image (200 MB) has to be rebuilt ...
In addition I want to check all builds into a repository.
That gets quite expensive over time ...
My thought is the following:
Is it possible to rebuild only the data volume "src" on each code update (triggered through a Jenkins build job) and check that in?
I think there is no need to rebuild rarely changing environments like php/nginx/mysql on every build ...
Or is there another approach?
Initially having 1.5 GB for all needed services is quite OK, but another 200 MB in the repository for each version is too heavy.
Thanks
First of all, the approach you are following is definitely bad practice. A Docker container should be portable and self-contained; relying on data volumes that are bound to the host machine will make your container non-portable.
By design containers should package all of the dependencies needed to run the application. You should thus add the source to each image if the source code is a dependency that must be provided.
You should investigate other options to make the image size smaller. Depending on the programming language you are using, it is possible to compile/compress the source code and have a smaller binary for instance that can be copied into the image.
One final note: using very different approaches to deployment between environments (dev/staging/prod) is usually a bad idea. It is much preferable to have similar deployment strategies to avoid unexpected errors.
If you set up your Dockerfile properly (see docs) so you are adding the code last, it should be a pretty quick operation to update as all the other unchanged layers will be cached. This is pretty common practice as part of a Docker workflow.
You can use this same image for your local development and mount your working code over the code in the container for active development. As long as that exact same code is used to rebuild your images, you should maintain consistency. You could optimize further by choosing which parts of your code are likely to change and order your build accordingly.
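A hedged sketch of what "adding the code last" could look like for the php service of a Symfony app; the base image, extensions, paths and composer usage are assumptions for illustration only:

    # Rarely changing layers first, so they stay cached between code updates
    FROM php:7.2-fpm
    WORKDIR /var/www/app
    RUN apt-get update && apt-get install -y --no-install-recommends git unzip \
     && rm -rf /var/lib/apt/lists/* \
     && docker-php-ext-install pdo_mysql opcache

    # Dependency manifests before the rest of the code, so vendors are cached too
    COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
    COPY composer.json composer.lock ./
    RUN composer install --no-dev --no-scripts --no-autoloader

    # Application code last: only the layers below are rebuilt on a code-only change
    COPY . .
    RUN composer dump-autoload --optimize

With this ordering, pushing a new version to a registry only uploads the changed code layers; the large base layers are already stored and are not transferred or duplicated again.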
You may also want to look into multi-stage build process where you can further optimize your base image and reduce final image size.

Number of commands in Dockerfile

I noticed that each line in the Dockerfile creates a separate image. Is there any limit on the number of images that are created?
Should we try to do a one-liner like RUN cmd1 && cmd2 && cmd3 instead?
How would this differ if we use a service like Quay?
Thanks!
As Alister said, there is an upper limit on the number of layers in a Docker image if you are using the AUFS file system. At Docker version 0.7.2 the limit was raised to 127 layers (changelog).
Since this a limitation of the underlying union file system (in the case of AUFS), using Quay or other private registries won't change the outcome. But you could use a different file system.
The current alternative filesystem is to use devicemapper (see CLI docs). These other filesystems may have different limitations on the number of layers -- I don't think devicemapper has an upper limit.
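For reference, the storage driver is chosen when the daemon starts; a sketch using the standard dockerd option (devicemapper is just the example discussed above):

    # Start the Docker daemon with an alternative storage driver
    dockerd --storage-driver=devicemapper
    # Check which driver is currently in use
    docker info | grep -i "storage driver"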
You're right: by RUNning multiple commands in a single RUN statement, you can reduce the number of layers.
Alternatively, if you really need a lot of layers to build your image, you could build the image until it reaches the maximum, then use docker export on a container created from it to get an un-layered copy of its file system, and then use docker import to turn that back into an image, this time with just one layer, and continue building. You lose the history that way, though.
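A minimal sketch of that flattening step (the container and image names are placeholders); note that docker export works on a container, so one is created from the image first:

    # Create a stopped container from the image you want to flatten
    docker create --name flatten-tmp my-image:latest
    # Export its file system and re-import it as a single-layer image
    docker export flatten-tmp | docker import - my-image:flat
    docker rm flatten-tmp

Besides the history, metadata such as CMD, ENV and EXPOSE is also lost and has to be re-declared (docker import accepts --change to apply Dockerfile instructions to the imported image).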
There is a limit of 42, apparently a hard limit imposed by AUFS.
It can be somewhat avoided by putting what would otherwise be individual RUN commands into a script and then running that script. You then end up with a single, larger image layer rather than a number of smaller layers to merge. Smaller steps (with multiple RUN lines) make initial testing easier (since a new addition at the end of the RUN list can reuse the previous image), so it's typical to wait until your Dockerfile has stabilised before merging the lines.
You can also reduce the potential number of images when ADDing a number of files by adding a whole directory, rather than a number of individual files.
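A small sketch combining both suggestions (the script name and paths are illustrative):

    # One COPY for a whole directory instead of many individual ADD/COPY lines
    COPY config/ /etc/myapp/
    # One RUN for a script that bundles what would otherwise be many RUN lines
    COPY setup.sh /tmp/setup.sh
    RUN sh /tmp/setup.sh && rm /tmp/setup.sh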
