I am trying to configure the Docker daemon to follow the CIS benchmarks. One of the recommendations is to configure default ulimit parameters when starting the daemon. They give the example of
dockerd --default-ulimit nproc=1024:2048 --default-ulimit nofile=100:200
How do I calculate the best settings for nproc and nofile for my particular environment?
To set those, you would need to know what your applications' upper limits might be. You would first need to run the applications under the kind of load you expect them to handle, and then measure their process counts and open files.
See https://unix.stackexchange.com/questions/230346/how-to-check-ulimit-usage for information on how to check the limits. You would need to gather the limits for all the PIDs that you are running as containers, and then pad them a bit for headroom.
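For example, a rough way to check the current limits and usage for a container's main process might look like this (the container name is a placeholder):

PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)   # PID of the container's main process
cat /proc/$PID/limits                                         # soft/hard limits currently applied
sudo ls /proc/$PID/fd | wc -l                                 # open file descriptors right now (nofile)
ps --no-headers -L -p $PID | wc -l                            # threads of this process, a rough nproc reading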
Even then, you are likely to go down a path of chasing limits constantly, as it will probably be difficult to get all of them right before the application goes to production.
For the project I'm working on, I need to be able to measure the total amount of network traffic for a specific container over a period of time. These periods are generally about 20 seconds, and the precision needed is realistically only in kilobytes. Ideally the solution would not involve additional software either in the containers or on the host machine, and would be suitable for Linux/Windows hosts.
Originally I had planned to use the 'NET I/O' attribute of the 'docker stats' command, but the field is automatically formatted to a more human-readable form (e.g. '200 MB'), which means that for containers that have been running for some time I can't get the precision I need.
Is there any way to get the raw value of 'NET I/O', or to reset the running count? Beyond this I've explored using something like tshark or iptables, but as I said above, ideally the solution would not require additional programs. If there aren't any good solutions that fit those criteria, any other suggestions would be welcome. Thank you!
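One direction that may help (a sketch, not verified here): the Docker Engine API exposes the same statistics as raw byte counters, so sampling them at the start and end of each 20-second window and subtracting gives the traffic for that window without extra software. The container name and the eth0 interface below are assumptions:

curl -s --unix-socket /var/run/docker.sock \
  "http://localhost/containers/mycontainer/stats?stream=false" \
  | jq '.networks.eth0.rx_bytes, .networks.eth0.tx_bytes'   # raw cumulative byte counters

On Windows hosts the daemon listens on a named pipe rather than a Unix socket, so the transport would differ there.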
I was wondering whether there is a way to automatically limit the number of CPUs all Docker containers can use, since by default each container uses all the available resources.
When running the docker run command, I know I can specify the number of CPUs (--cpus=), but in my current case the containers are started by another application (ShinyProxy), which does not allow me to specify this option.
I have already spent a lot of time on this issue (e.g. using cgroups) but I haven't been able to get anything working.
For example, I tried to implement the solution proposed below, but was not able to achieve any result.
https://stackoverflow.com/a/46557336/8939750
Many thanks for your help,
Sylvain
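One possible direction (an untested sketch): since the containers cannot be given --cpus individually, you could make them all inherit a CPU quota from a common cgroup parent. On a systemd host that might look like the following, where the slice name and the 400% quota (four CPUs' worth, shared across all containers) are only examples:

# /etc/systemd/system/docker-limited.slice
# [Slice]
# CPUQuota=400%

# /etc/docker/daemon.json
# {
#   "cgroup-parent": "docker-limited.slice"
# }

sudo systemctl daemon-reload    # pick up the new slice unit
sudo systemctl restart docker   # restart the daemon with the new default cgroup parent

Note that this caps the containers collectively rather than per container.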
I created my Docker container (Python Flask).
How can I calculate what limits to set for memory and CPU?
Are there tools that run performance tests on Docker with different limits and then advise what the best limit values are?
With an application already running inside of a container, you can use docker stats to see its current utilization of CPU and memory. While there is little harm in setting CPU limits too low (it will just slow down the app, but it will still run), be careful to keep memory limits above the worst-case scenario. When apps attempt to exceed their memory limit, they are killed and usually restarted by a restart policy/orchestration tool. If the limit is set too low, you may find your app in a restart loop.
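For example, you could snapshot current usage and then start the container with limits padded above the observed peak (the image name and values are placeholders):

docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"   # one-shot usage snapshot
docker run --cpus=1.5 --memory=512m myflaskapp                                     # limits padded above the peak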
This is more about the consumption of your specific Flask application; you can probably use the resource module in Python to measure it.
More information here and here.
Using JConsole, one can access the metrics gathered by default for the OS, like memory, CPU load, and so on, in addition to process-specific metrics. My question is: can we add some custom OS metrics, like the usage of some directory (using the Java Files API), or a check that a specific port is responsive?
I currently gather these so-called metrics over remote SSH with commands like du -sh /directory, which is very slow, and I want to get them via JMX so that they can be collected faster.
This question talked about adding Spring metrics.
As the linked question shows, it is easy to expose a Java class as an MBean, so you could certainly write a class that collects the metrics you need. Implementing du in Java is not difficult. However, I'm not sure that it will solve your problem. The example of du -sh /directory is probably slow because it needs to recursively measure the size of a directory hierarchy. That will be just as slow (probably slower!) in Java.
As a side note, I would normally use collectd or Telegraf for that kind of thing, but again the I/O cost of finding disk usage would be the same.
I would suggest adding some timing logs to your current script so that you can see where it spends time. If it takes less than a second to connect with SSH and 15 seconds to determine the directory size, for example, moving from SSH to JMX won't help.
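For example, timing the connection and the measurement separately makes it obvious which part dominates (the host and path are placeholders):

time ssh user@host true                     # SSH round trip alone
time ssh user@host 'du -sh /directory'      # round trip plus the directory measurement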
Say I want to build a hosting environment using Docker (so that I can scale up if needed).
I want:
Each user to be able to execute arbitrary code and
Each user to not see or affect other users
Is this something more concerning Docker, or some other tool like AppArmor?
I want users to be able to run, say, PHP code. If one user gets a lot of hits and is using a lot of CPU, I want that to not affect another user whom I've promised a certain amount of CPU. Perhaps I'm missing what concept governs this type of thing altogether?
You can limit the memory and CPU usage of containers using the --memory and --cpus flags when you run docker run, so each user has a maximum amount of resources they are limited to. For all such constraints, see the following documentation:
https://docs.docker.com/engine/admin/resource_constraints/
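For example (the values and image are only placeholders to show the syntax):

docker run --memory=256m --cpus=0.5 php:apache   # hard memory cap and a half-CPU share for this container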