I have a Play application running in Docker 1.10.3. We are hitting this application with 1000 requests per second in a load test. The application works fine, but we see significant disk space being consumed by Docker: in 3 days it went from 2.2 GB to 39 GB. This worries us a lot.
(docker info output, with the consumed space highlighted)
Is there any way to configure Docker not to consume this disk space?
Any help will be appreciated.
Docker captures the standard output (STDOUT) of your application and stores it (by default) in an internal log file. You can find this file at /var/lib/docker/containers/$CONTAINER_ID/$CONTAINER_ID-json.log. This file is not rotated by default and may grow large if your application prints to STDOUT verbosely.
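To check how much space this file is using for a given container, you can look up its path with docker inspect (LogPath is the field the json-file driver exposes; the container ID below is a placeholder):
$ ls -lh $(docker inspect --format='{{.LogPath}}' $CONTAINER_ID)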
Two possible solutions:
Configure log rotation for the Docker log files. I've found a good article here that describes how to enable log rotation for Docker by creating the file /etc/logrotate.d/docker-container with the following contents:
/var/lib/docker/containers/*/*.log {
  rotate 7
  daily
  compress
  size=1M
  missingok
  delaycompress
  copytruncate
}
You can play around with the options. They are all documented in logrotate's man page.
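To verify the configuration without actually rotating anything, logrotate's debug flag does a dry run and prints what it would do:
$ logrotate -d /etc/logrotate.d/docker-container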
Use alternate logging for your containers by specifying the --log-driver option when creating a container:
$ docker run --log-driver=syslog your_image
Available drivers are documented in the official documentation. You can for example use --log-driver=syslog to use the system's syslog daemon, target various cloud services or disable logging entirely by using --log-driver=none.
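If you would rather keep the default json-file driver but cap its size, the driver's documented max-size and max-file options give you built-in rotation without logrotate, e.g.:
$ docker run --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 your_image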
Related
Background:
I run a Java process in my Docker container and take histogram dumps using jmap to a file at /home/heapdump.txt inside the container. I copy this file out of the container for further processing.
Now, I do this at an interval of 5 minutes. However, after 20 minutes (i.e., 4 heap dumps), when I try to get this file, I get the error below:
{"message":"mount/:/var/lib/docker/overlay2/<container_id>/merged/hostroot, flags: 0x5001: no space left on device"}
I don't understand what "no space left on device" means in this case.
Your Docker storage is mapped to the default /var, which will usually have much less space unless you have manually allotted more.
Run df -kh on your host and check the status of the device mapped to /var; you have probably run out of space.
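For example (assuming the default Docker data location):
$ df -kh /var/lib/docker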
To fix this, find a disk with enough free space (remember, Docker will use it to store all its image and volume data) and make Docker use it.
You configure this in the daemon.json file (typically /etc/docker/daemon.json) with the data-root option, like below:
{
  "data-root": "/new/data/root/path"
}
Remember to reload the daemon and restart the Docker service.
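On a systemd-based host that is typically:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker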
Once done, Docker will use the new directory for its image and volume data. Note that it does not migrate the existing data automatically; it starts with an empty data-root, so copy the old contents over yourself if you still need the existing images and volumes.
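A minimal migration sketch, assuming the default /var/lib/docker location and that the daemon is stopped during the copy (the target path matches the daemon.json example above):
$ sudo systemctl stop docker
$ sudo rsync -aP /var/lib/docker/ /new/data/root/path/
$ sudo systemctl start docker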
Once you have tested everything, you can clean up the old /var/lib/docker.
Hope this helps
I am trying to debug a production failure involving (multiple) nginx and tomcat logs. I have copied the logs to my dev machine. What is the easiest way for me to import these logs into an elastic/ELK stack to sift through quickly? (Currently, I'm making do with less commands across multiple windows)
So far I've found only generic docker containers (like https://elk-docker.readthedocs.io/) that require me to install filebeat and configure it. However, since my data is static, I would prefer a simpler installation.
What I did earlier is create the ELK stack with docker-compose and ingest the data via 'nc' (netcat). An example can be found at: https://github.com/deviantony/docker-elk
You might want to adjust the logstash config, so that it reads and parses your data correctly. If the amount of files is not too big, you can nc them one-by-one and otherwise you can write a small script around it, in bash for example, to loop through the files.
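A minimal loop along those lines, assuming Logstash exposes a TCP input on localhost:5000 (the port used in that repository's sample config; on BSD netcat, use -N instead of -q1):
for f in /path/to/logs/*.log; do
  nc -q1 localhost 5000 < "$f"
done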
I've got a Docker container currently running in production on a CentOS 7 VM. We have encountered a problem where the container's logs (the files under /var/lib/docker/containers/{container_id}) are filling up the host drive over time, causing the container to become unresponsive and forcing us to clear the logs on the host so it can continue processing.
We can't take the container down, meaning I can't just bring it back up using the --log-opt flag to set up some log rotation options.
We've tried using logrotate, but the container writes to its logs constantly, so while the logs do get rotated, the original file often doesn't shrink because it is still being written to while the rotation is underway.
I'm trying to find a solution to this problem where we can set up some kind of task that will clear the logs down to a specific file size. Any help is greatly appreciated.
I would suggest mounting the container's log directory to a host directory; there you can schedule whatever task you need to zip/move/delete the log files.
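A sketch of that approach (all paths and the image name are illustrative): run the container with the application's log directory bind-mounted, then truncate oversized files from cron on the host:
$ docker run -d -v /host/applogs:/app/logs your_image
# /etc/cron.d/clear-logs: hourly, truncate any log over 100 MB in place
0 * * * * root find /host/applogs -name '*.log' -size +100M -exec truncate -s 0 {} \;
Truncating in place (rather than deleting) matters here: the process keeps its file handle open, so deleting the file would not free the space until the container restarts.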
How can one define log retention for Kubernetes pods?
For now it seems like the log file size is not limited, and it uses up the host machine's disk.
According to Logging Architecture on kubernetes.io, there are some options:
First option
Kubernetes currently is not responsible for rotating logs, but rather a deployment tool should set up a solution to address that. For example, in Kubernetes clusters, deployed by the kube-up.sh script, there is a logrotate tool configured to run each hour. You can also set up a container runtime to rotate application's logs automatically, e.g. by using Docker's log-opt. In the kube-up.sh script, the latter approach is used for COS image on GCP, and the former approach is used in any other environment. In both cases, by default rotation is configured to take place when log file exceeds 10MB.
Also
Second option
Sidecar containers can also be used to rotate log files that cannot be rotated by the application itself. An example of this approach is a small container running logrotate periodically. However, it's recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
You can always set the logging retention policy on your docker nodes
See: https://docs.docker.com/config/containers/logging/json-file/#examples
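For example, the json-file options from that page can be applied node-wide in /etc/docker/daemon.json (restart dockerd afterwards):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}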
I've just got this working by changing the ExecStart line in /etc/default/docker and adding the line --log-opt max-size=10m
Please note that this will affect all containers running on a node, which makes it ideal for a Kubernetes setup (because my real-time logs are uploaded to an external ELK stack anyway).
I'm using Docker with my web service.
When I deploy using Docker, I lose some log files (nginx access log, service log, system log, etc.).
This is because the Docker deployment works by taking containers down and bringing them back up.
So I thought about this problem.
The logging server and the service server (for the API) must be separated!
These are the methods I'm considering:
First, using Logstash (in ELK), attached to all my log files.
Second, using a batch system that moves the log files to another server every midnight.
Is this approach okay?
I'd welcome a better answer.
Thanks.
There are many approaches that admins commonly use for container logging:
1) Mount the log directory to the host, so even if the container goes down and up, the logs persist on the host (see the sketch after this list).
2) An ELK server, using Logstash/Filebeat to tail the log files and push new log contents to the Elasticsearch server.
3) For application logs (e.g., in Maven-based projects), there are many plugins that push logs directly to a server.
4) A batch system, which is not recommended, because if the container dies before midnight the logs since the last run are lost.
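A sketch of option 1 (image name and paths are illustrative):
$ docker run -d --name web -v /var/log/myservice:/var/log/nginx my_image
Even if the container is recreated during a deployment, everything written to /var/log/nginx inside it survives under /var/log/myservice on the host.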