So the Jenkins controller's built-in node that I have is running out of disk space.
The error that I am getting is the following:
Built-In Node (the Jenkins controller's built-in node)
Disk space is too low. Only 1.022GB left on /var/lib/jenkins.
Projects tied to Built-In Node
None
But I am not sure how to free up disk space on it. I attached a picture of the error as well. Is there any way to free up space on this node?
I’ve created an ubuntu:bionic-based image on my computer. It was originally very large, but I deleted about 80% of the content by running a container and then committing it. If I go to the root directory and run "du -sh", it reports 4.5 GB of disk usage. Curiously enough, the image size shown by "docker images" is 11 GB. After pushing to Docker Hub, I see that it’s 3.34 GB, so I thought perhaps it cleaned something up before compressing. I ran the new image, deleted some more content, committed, and pushed again. This time "du -sh" said 3.0 GB, "docker images" still said 11 GB, and Docker Hub still showed 3.34 GB. Clearly it is compressing the 11 GB image and not the 3.0 GB of content I’m expecting. Is there an easy way to "clean up" the image?
Docker images are built from layers. When you add a new layer, it doesn't remove the previous layers, it just adds a new one, rather like a new Git commit—the history is still there.
That means when you deleted the content, you made it invisible but it's still there in earlier layers.
You can see the layers and their sizes with docker history yourimagename.
Your options:
Make sure files you don't need never make it into the image in the first place, e.g. with .dockerignore.
Use a multi-stage build to create a new image from the old one with only the files you need: https://docs.docker.com/develop/develop-images/multistage-build/
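A rough sketch of both approaches; the image name yourimagename and the path /opt/myapp are placeholders for illustration, not anything from your setup:

# Inspect which layers are taking up the space in the existing image
docker history yourimagename

# Keep unwanted files out of the build context in the first place
cat > .dockerignore <<'EOF'
*.log
tmp/
.git/
EOF

# Multi-stage build: only what is copied into the final stage ends up in the new image
cat > Dockerfile <<'EOF'
FROM yourimagename AS source

FROM ubuntu:bionic
# Copy just the content you actually need from the old image
COPY --from=source /opt/myapp /opt/myapp
EOF

docker build -t yourimagename:slim .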
I'm taking my first steps with Docker repos in Artifactory (5.1.3) and there's something that scares me a little bit.
I pushed different tags of the same Docker image (about 500 MB) to a repo.
I'd expected that the storage use and size of the repo would stay at about 500 MB.
But with 5 image versions in it, for example, the repo is about 2.5 GB in size.
Also, the "Max Unique Tags" setting in the local Docker repo settings has no effect - I set 3, but nothing is deleted; there are still 5 versions.
With this behaviour we will easily fill our storage system by the end of the month - did I miss something, or is this Docker stuff in Artifactory still beta?
Artifactory is physically storing the layers for those tags only once, so the actual storage being used should be ~500MB (deduplication).
The reported size you are seeing in the UI (artifacts count / size) is the amount of physical storage that would be occupied if each artifact was a physical binary (not just a link). Since deduplication can occur between different repositories, there is no good way of reporting the physical storage size per repository (one image/tag/layer can be shared between multiple repositories).
In the Storage Summary page you can see both the physical storage size used by Artifactory and how much you gained by deduplication.
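If you want to check this outside the UI as well, the same storage summary data is exposed through Artifactory's REST API; the hostname and credentials below are placeholders, and availability can depend on your version and permissions:

# Get the storage summary: binariesSummary reflects the deduplicated physical
# storage, while per-repository figures count every artifact as a full copy
curl -u admin:password "https://artifactory.example.com/artifactory/api/storageinfo"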
I am trying to create a thin pool for my Docker containers. I am following the Docker guide here:
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-direct-lvm-mode-for-production
It says "It assumes that you have a spare block device at /dev/xvdf with enough free space to complete the task."
I don't have a device at /dev/xvdf. How can I create one?
Basically, a block device can be one of the following:
Hard drive
Flash drive
DVD drive
Blu-ray drive
etc.
In this case, you need to attach a second hard drive to your server.
Or, if you are using Vagrant/VirtualBox for development, you can add a new hard disk in Oracle VM VirtualBox Manager:
Open the Settings page of the box you are working with
Select Storage from the left menu
Click the icon to add a new hard disk
Click Add Hard Disk
Click Create new disk
Select VMDK (Virtual Machine Disk)
Select Dynamically allocated
Give the disk a name and specify its size
Finally, click Create
Restart the box
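If you prefer to script this instead of clicking through the GUI, roughly the same thing can be done with VBoxManage; the VM name, controller name, file name and size below are only examples:

# Create a dynamically allocated 10 GB VMDK disk
VBoxManage createmedium disk --filename extra-disk.vmdk --format VMDK --size 10240

# Attach it to the powered-off VM on an existing SATA controller
VBoxManage storageattach "my-vm" --storagectl "SATA" --port 1 --device 0 \
  --type hdd --medium extra-disk.vmdk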
You now have a block device to work with. To list the available block devices, run lsblk.
In my case, I added two hard disks and they show up as /dev/sdb and /dev/sdc.
You can use the hard disk to create a physical volume.
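From there the remaining steps are the direct-lvm ones from the guide linked in the question; a condensed sketch, using /dev/sdb as the spare device (substitute your own):

lsblk                        # confirm the new device, e.g. /dev/sdb

pvcreate /dev/sdb            # turn it into an LVM physical volume
vgcreate docker /dev/sdb     # create a volume group for Docker to use

# Create the thin pool data and metadata logical volumes
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG

# Convert them into a thin pool
lvconvert -y --zero n -c 512K \
  --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta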
My understanding
As far as I have understood artifacts up to now, they are used to know whether my project refers to an unfinished build task's artifacts.
Problem
I tried reading more (on the Jenkins site too), but I'm still not sure I understand what they do. I know that when I promote a build, I can fingerprint artifacts. What does that mean, then?
Artifacts - anything produced during the build process.
Fingerprinting artifacts - recording the MD5 checksum of selected artifacts.
You can then use the MD5 checksum to track a particular artifact back to the build it came from.
Adding to Slav's answer: fingerprinting helps Jenkins keep track of which version of a file is used by which version of a dependency.
Quoting an example and how it works from the Jenkins page:
For example:
Suppose you have the TOP project that depends on the MIDDLE project, which in turn depends on the BOTTOM project.
You are working on the BOTTOM project. The TOP team reported that the bottom.jar they are using causes a NullPointerException, which you (a member of the BOTTOM team) thought you had fixed in BOTTOM #32.
Jenkins can tell you which MIDDLE builds and TOP builds are using (or not using) your bottom.jar #32.
How does it work?
The fingerprint of a file is simply an MD5 checksum. Jenkins maintains a database of md5sums, and for each md5sum, Jenkins records which builds of which projects used it. This database is updated every time a build runs and files are fingerprinted.
To avoid excessive disk usage, Jenkins does not store the actual files. Instead, it just stores the md5sums and their usages. These records can be seen in
$JENKINS_HOME/fingerprints
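As a rough illustration of how you can relate an artifact back to that database (the exact layout under fingerprints/ is an implementation detail and may differ between Jenkins versions):

# The fingerprint of an artifact is just its MD5 checksum
md5sum bottom.jar

# Jenkins keeps one small XML record per checksum under $JENKINS_HOME/fingerprints,
# sharded by the leading characters of the checksum; each record lists the builds that used it
ls "$JENKINS_HOME/fingerprints"
grep -rl "bottom.jar" "$JENKINS_HOME/fingerprints" | head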
I noticed that each line in the Dockerfile creates a separate image. Is there any limit on the number of images that are created?
Should we try to do a one-liner of RUN cmd1 && cmd2 && cmd3 instead?
How would this differ if we use a service like Quay?
Thanks!
As Alister said, there is an upper limit on the number of layers in a Docker image if you are using the AUFS file system. At Docker version 0.7.2 the limit was raised to 127 layers (changelog).
Since this is a limitation of the underlying union file system (in the case of AUFS), using Quay or other private registries won't change the outcome. But you could use a different file system.
The current alternative is the devicemapper storage driver (see the CLI docs). Other storage drivers may have different limitations on the number of layers -- I don't think devicemapper has an upper limit.
You're right, by RUNning multiple commands in a single RUN statement, you can reduce the number of layers.
Alternatively, if you really need a lot of layers to build your image, you could build an image until it reaches the maximum, then use docker export (on a container created from the image) to create an un-layered copy of its file system, and docker import to turn it back into an image again, this time with just one layer, and continue building. You lose the history that way, though.
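A sketch of that export/import round trip (the container and image names are placeholders):

# docker export works on a container, so create one from the image first
docker create --name flatten-me yourimagename

# Export its filesystem and re-import it as a single-layer image
docker export flatten-me | docker import - yourimagename:flat

docker rm flatten-me
docker history yourimagename:flat   # one imported layer; the build history is gone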
There is a limit of 42 layers - apparently a hard limit imposed by AUFS.
This can be somewhat avoided by putting what would be done in individual RUN commands into a script, and then running that script. You then end up with a single, larger image layer, rather than a number of smaller layers to merge. Smaller layers (with multiple RUN lines) make initial testing easier (since a new addition at the end of the RUN list can re-use the previous image), so it's typical to wait until your Dockerfile has stabilised before merging the lines.
You can also reduce the potential number of layers when ADDing a number of files, by adding a whole directory, rather than a number of individual files.
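For example, instead of many RUN and ADD lines, a Dockerfile along these lines keeps the layer count down (the file and directory names are made up for illustration):

cat > Dockerfile <<'EOF'
FROM ubuntu:bionic

# One COPY layer for the script, one RUN layer for everything the script does
COPY setup.sh /tmp/setup.sh
RUN /bin/bash /tmp/setup.sh && rm /tmp/setup.sh

# Add a whole directory in a single layer instead of one instruction per file
COPY config/ /etc/myapp/
EOF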