Hyperledger Composer docker container occupying space - docker

I started Hyperledger Composer from fabric-dev-server, so all images were running as usual.
Now, after two weeks, I have noticed that my HDD space is being taken up by a Docker container.
Here are some screenshots of my HDD space:
Day-1
Day-2
In two days, the available HDD space went from 9.8G to 9.3G.
How can I resolve this issue?

I think the problem is that the peer0 Docker container is generating too many logs: if you run that container continuously, it will keep producing logs every time you access the Fabric network.
You can check the size of the log file for a particular Docker container:
Find the container ID of peer0.
Go to the directory /var/lib/docker/containers/container_id/.
There should be a file named container_id-json.log.
So in my case:
My Fabric network had been running for two weeks, and the log file was at (example):
/var/lib/docker/containers/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830-json.log
I checked the size of that file; it was nearly 6.5 GB.
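A minimal shell sketch of those steps (the name=peer0 filter is an assumption; adjust it to match your peer container's actual name):
# find the full container ID of peer0 (name filter is an assumption; adjust as needed)
CONTAINER_ID=$(sudo docker ps --no-trunc -q -f "name=peer0")
# check the size of its JSON log file
sudo ls -lh /var/lib/docker/containers/$CONTAINER_ID/$CONTAINER_ID-json.log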
Solution (Temporary):
Run the command below, which will empty that file (example):
> /var/lib/docker/containers/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830-json.log
Solution (Permanent):
What you can do is write a script that runs every day and empties that log file.
You can use crontab, which lets you run a script at a specific time, on specific days, and so on.
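For example, a crontab entry along these lines (the path is just the placeholder pattern from above, and truncate comes from GNU coreutils):
# run 'sudo crontab -e' and add a line like this to empty the log every day at midnight
0 0 * * * truncate -s 0 /var/lib/docker/containers/<container_id>/<container_id>-json.log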

Related

Docker: What is the main reason behind the error flags: 0x5001: no space left on device

Background:
I run a Java process in my Docker container and take histo dumps using jmap to a file at /home/heapdump.txt inside the container. I then copy this file out of the container for further processing.
I do this at an interval of 5 minutes. However, after 20 minutes, i.e. 4 heap dumps, when I try to get this file I get the error below:
{"message":"mount/:/var/lib/docker/overlay2/<container_id>/merged/hostroot, flags: 0x5001: no space left on device"}
I don't understand what "no space left on device" means in this case.
Your storage is mapped to the default /var, which will usually hold much less space unless you have manually allotted more.
Do a df -kh on your device and check the status of the device mapped to /var; you have probably run out of space.
To fix this, find a disk with enough free space (remember, Docker will use it to store all of its image and volume data) and make Docker use it.
You need to configure this in the daemon.json file with the data-root option, like below.
{
  "data-root": "/new/data/root/path"
}
Remember to reload the daemon and restart the Docker service.
Once done, you will see Docker copy its image and volume data to the new directory.
Once you have tested it, you can clean up /var/lib/docker.
Hope this helps.
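On a systemd-based host, the reload/restart step looks roughly like this (assuming the default daemon.json location at /etc/docker/daemon.json):
# after editing /etc/docker/daemon.json, reload systemd and restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker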

Docker Logs filling up on running container

I've got a Docker container currently running in production on a CentOS 7 VM. We have encountered a problem where the container's logs (the files under /var/lib/docker/containers/{container_id}) are filling up the host drive over time, causing the container to become unresponsive and forcing us to clear the logs on the host so it can continue processing.
We can't take the container down, which means I can't just bring it back up with the --log-opt flag to set up log rotation.
We've tried using logrotate, but the container writes to its logs constantly; we often find that the logs are rotated, yet the original file does not decrease in size because it is still being written to while the rotation is underway.
I'm trying to find a solution to this problem where we can set up some kind of task that will clear the logs down to a specific file size. Any help is greatly appreciated.
I would suggest mounting the container's log directory to a host directory; there you can schedule whatever task you need to zip/move/delete the log files.
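If remounting isn't an option, one approach that does not require restarting the container is to truncate the live JSON log in place; a sketch (my_container is a placeholder name):
# locate the running container's JSON log file
LOG=$(sudo docker inspect --format='{{.LogPath}}' my_container)
# empty it in place; the container keeps writing to the same file
sudo truncate -s 0 "$LOG"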

Are temporary files deleted while creating a docker image from a Dockerfile?

So, I am trying to create a Docker image from a Dockerfile. It involves copying a 10 GB binary file into the image. Due to some connectivity issues, my download stopped at around 90% a total of 3 times. After each failure, I simply ran the docker build command again.
I am not sure if Docker deletes the old files on its own, or whether I have now used about 30 GB of space.
I am using Windows 10 with Hyper-V.
Temp files are kept in the TEMP directory, and Docker, like any other well-written application, removes its temp files after use.
Of course, when a build is not terminated normally, things can go wrong. If unsure, check your TEMP directory; you should be able to spot leftover ~10 GB files by their time and date.
As specified, the docker daemon cleans up all traces of the build process when it loses contact with the docker client for any reason.
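If you want to verify how much space leftover build data is actually using, a quick check (assuming a Docker version that includes these subcommands) is:
# show disk usage broken down by images, containers, local volumes and build cache
docker system df
# remove dangling (untagged) image layers left behind by interrupted builds
docker image prune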

How to fix the running out of disk space error in Docker?

When I try to build the Docker image I get an out-of-disk-space error, and after investigating I find the following:
df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/vda1        4G   3.8G      0  100%  /
How do I fix this out of space error?
docker system prune
https://docs.docker.com/engine/reference/commandline/system_prune/
This will clean up stopped containers, dangling images, unused networks, and the build cache (add --volumes to include unused volumes). We generally try to clean up old images when creating a new one, but you could also run this as a scheduled task on your Docker server every day, as sketched below.
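One possible cron entry for that scheduled cleanup (the schedule, docker binary path, and the non-interactive -f flag are just an example):
# prune unused Docker data every night at 02:30 without prompting
30 2 * * * /usr/bin/docker system prune -f > /dev/null 2>&1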
Use the command docker system prune -a.
This will clean up the reclaimable space for images and networks, removing every image that is not associated with a running container (add --volumes if you also want to reclaim unused volumes).
Run the docker system df command to view the reclaimable space.
If there is still some reclaimable space and the command above does not work on the first go, run the same command twice; it should then be cleaned up.
I have been experiencing this behaviour almost daily. I am planning to report this bug to the Docker community, but before that I want to reproduce it with the latest release to see whether it has already been fixed.
Open the Docker settings -> Resources -> Advanced and increase the amount of hard drive space it can use under "Disk image size".
If you are using Linux, then most probably Docker is filling up the directory /var/lib/docker/containers, because it writes container logs to the <CONTAINER_ID>-json.log file under this directory. You can use the command cat /dev/null > <CONTAINER_ID>-json.log to clear this file, or you can set a maximum log file size by editing /etc/sysconfig/docker; more information can be found in the RedHat documentation. In my case, I created a crontab entry to clear the contents of the file every day at midnight. Hope this helps!
NB:
You can find the Docker containers together with their full IDs using the following command:
sudo docker ps --no-trunc
You can check the size of the log file using the command:
du -sh $(docker inspect --format='{{.LogPath}}' CONTAINER_ID_FOUND_IN_LAST_STEP)
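For the permanent log-size limit mentioned above, the daemon.json equivalent looks roughly like this (json-file driver options; the values are examples, and the limits only apply to containers created after the change):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}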
Nothing worked for me. I changed the disk image max size in the Docker settings, and right after that it freed a huge amount of space.
Going to leave this here since I couldn't find the answer.
Go to the Docker GUI -> Preferences -> Reset -> Uninstall.
Completely uninstall Docker.
Then install it fresh using this link
My Docker was using 20 GB of space when building an image; after a fresh install, it uses 3-4 GB max. Definitely helps!
Also, if you are using a MacBook, have a look at ~/Library/Containers/docker*.
This folder was 60 GB for me and was eating up all the space on my Mac! Even though this may not be relevant to the question, I believe it is vital for me to leave this here.

Docker: readonly filesystem error

I am required to create a container (docker run), run an application like GAMS, and then destroy the container. For a load test I repeat this 1000 times. By the end of the test, RHEL 7 complains about a "read-only filesystem" or a "segmentation fault" on ls. The only solution so far is a disk reset. I tried increasing the ulimit and resetting $LD_LIBRARY_PATH; neither worked.
What could be a good diagnosis?
Solutions so far:
Upgraded from Docker 1.12.6 to 17.x.
Set the interval between runs to 5 minutes.
No disk reset has been needed since implementing the two changes above; waiting for the next test to complete to confirm.
Update:
The issue cropped up again when copying files from the master node to a compute node through Java code: "bash: cannot create temp file for here-document: Read-only file system"
