How does kubectl logs -f affect CPU and RAM usage? - docker

After some research, I found out that kubectl logs -f <pod> reads logs from files, i.e., the .log files to which the Docker containers running inside the pods have written their logs. In my case, the Docker container runs an application that I have written. Now, say I have disabled logging in my application, expecting that RAM usage on the system will go down.
With logging enabled in my application, I kept track of CPU and memory usage.
Commands used:
a. top | grep dockerd
b. top | grep containerd-shim
With logging disabled, I again kept track of CPU and memory usage.
But I didn't find any difference. Could anyone explain what is happening here internally?

The simple explanation:
Logging doesn't use a lot of RAM. As you said, the logs are written to disk as soon as they are processed, so they are not kept in memory beyond the short-lived buffers used for each log entry.
Logging doesn't use many CPU cycles either. Disk I/O is typically around five orders of magnitude slower than the CPU (less for SSDs), so the CPU work needed to produce a log entry is negligible next to the time spent writing it to disk. In other words, you'll barely notice the difference in CPU usage when you disable logging.
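If you want to check where those logs actually end up and take a more precise snapshot of the daemons than top | grep gives you, something like the following should work with Docker's default json-file logging driver (the container name is a placeholder):
docker inspect --format '{{.LogPath}}' <container>    # the JSON log file the container's output is written to
ps -o pid,rss,pcpu,comm -C dockerd,containerd-shim    # one-shot RSS/CPU snapshot of the daemons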

Related

Why Docker shows wrong RAM usage

So I have a Docker container and its RAM usage grows over time as if it's leaking memory.
At first I thought my app was leaking memory, but in local tests I saw nothing (the app is written in Rust with axum). Then I checked the process itself and its RAM usage is different from what Docker reports.
docker stats says the container is using 50 MB, but htop says something else.
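A quick way to narrow down such a discrepancy is to look at the container's own cgroup accounting, which reports resident memory (rss) and page cache separately; a sketch for cgroup v1 (file names and paths differ under cgroup v2, and the container name is a placeholder):
docker stats --no-stream <container>
docker exec <container> grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/memory.stat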

Kubernetes OOM pod killed because kernel memory grows too much

I am working on a Java service that basically creates files in a network file system to store data. It runs in a k8s cluster on Ubuntu 18.04 LTS.
When we began to limit the memory in kubernetes (limits: memory: 3Gi), the pods began to be OOMKilled by kubernetes.
At the beginning we thought it was a memory leak in the Java process, but analyzing more deeply we noticed that the problem is the kernel memory.
We validated that by looking at the file /sys/fs/cgroup/memory/memory.kmem.usage_in_bytes.
We isolated the case to just creating files (without Java) with the dd command, like this:
for i in {1..50000}; do dd if=/dev/urandom bs=4096 count=1 of=file$i; done
And with the dd command we saw the same thing happen (the kernel memory grew until OOM).
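To watch the growth while that loop runs, you can poll the same cgroup file mentioned above, for example:
watch -n 1 cat /sys/fs/cgroup/memory/memory.kmem.usage_in_bytes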
After k8s restarted the pod, kubectl describe pod showed:
Last State: Terminated
Reason: OOMKilled
Exit Code: 143
Creating files causes the kernel memory to grow, and deleting those files causes it to decrease. But our service stores data, so it creates files continuously until the pod is killed and restarted because of the OOMKill.
We tested limiting the kernel memory using standalone Docker with the --kernel-memory parameter, and it worked as expected: the kernel memory grew to the limit and did not rise any further. But we did not find any way to do that in a Kubernetes cluster.
Is there a way to limit the kernel memory in a K8s environment?
Why does creating files cause the kernel memory to grow, and why is it not released?
Thanks for all this info, it was very useful!
On my app, I solved this by creating a side container that runs a cron job every 5 minutes with the following command:
echo 3 > /proc/sys/vm/drop_caches
(Note that the side container needs to run in privileged mode.)
It works nicely and has the advantage of being predictable: every 5 minutes, your memory cache will be cleared.
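For reference, instead of a cron entry the sidecar's command could simply be a loop like this (a minimal sketch; the 300-second sleep mirrors the 5-minute schedule, and privileged mode is still required):
while true; do
  sync
  echo 3 > /proc/sys/vm/drop_caches
  sleep 300
done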

NiFi 1.6.0 memory leak

We're running Docker containers of NiFi 1.6.0 in production and have come across a memory leak.
Once started, the app runs just fine; however, after 4-5 days the memory consumption on the host keeps increasing. The NiFi cluster UI shows JVM heap usage at barely 30%, but memory at the OS level goes to 80-90%.
On running the docker stats command, we found that the NiFi docker container is consuming the memory.
After collecting the JMX metrics, we found that the RSS memory keeps growing. What could be the potential cause of this? In the JVM tab of the cluster dialog, young GC also seems to be happening in a timely manner, with old GC counts shown as 0.
How do we go about identifying what's causing the RSS memory to grow?
You need to replicate that in a non-Docker environment, because with Docker, memory is known to rise.
As I explained in "Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container", docker has some bugs (like issue 10824 and issue 15020) which prevent an accurate report of the memory consumed by a Java process within a Docker container.
That is why a plugin like signalfx/docker-collectd-plugin mentions in its PR (pull request) 35 that it will "deduct the cache figure from the memory usage percentage metric":
Currently the calculation for memory usage of a container/cgroup being returned to SignalFX includes the Linux page cache.
This is generally considered to be incorrect, and may lead people to chase phantom memory leaks in their application.
For a demonstration on why the current calculation is incorrect, you can run the following to see how I/O usage influences the overall memory usage in a cgroup:
docker run --rm -ti alpine
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
dd if=/dev/zero of=/tmp/myfile bs=1M count=100
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
You should see that the usage_in_bytes value rises by 100MB just from creating a 100MB file. That file hasn't been loaded into anonymous memory by an application, but because it's now in the page cache, the container memory usage appears higher.
Deducting the cache figure in memory.stat from the usage_in_bytes shows that the genuine use of anonymous memory hasn't risen.
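In the same cgroup v1 layout used in the demo above, that deduction can be scripted directly, for example:
usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
cache=$(awk '/^cache / {print $2}' /sys/fs/cgroup/memory/memory.stat)
echo $((usage - cache))    # approximate anonymous (RSS-like) usage in bytes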
The signalFX metric now differs from what is seen when you run docker stats which uses the calculation I have here.
It seems like knowing the page cache use for a container could be useful (though I am struggling to think of when), but knowing it as part of an overall percentage usage of the cgroup isn't useful, since it then disguises your actual RSS memory use.
In a garbage collected application with a max heap size as large as, or larger than, the cgroup memory limit (e.g. the -Xmx parameter for Java, or .NET Core in server mode), the tendency will be for the percentage to get close to 100% and then just hover there, assuming the runtime can see the cgroup memory limit properly.
If you are using the Smart Agent, I would recommend using the docker-container-stats monitor (to which I will make the same modification to exclude cache memory).
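Separately from the cgroup accounting issue, if you want to see what inside the JVM itself is contributing to RSS, Native Memory Tracking (the NMT mentioned above) is one option; a sketch, assuming you can restart NiFi with one extra JVM argument:
Add -XX:NativeMemoryTracking=summary to the JVM arguments, then run:
jcmd <pid> VM.native_memory summary    # breaks committed memory into heap, metaspace, threads, GC, etc.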
Yes, the NiFi Docker image has memory issues: it shoots up after a while and restarts on its own. On the other hand, the non-Docker install works absolutely fine.
Details:
Docker:
Run it with a 3 GB heap size and immediately after startup it consumes around 2 GB. Run some processors, and the machine's fan runs heavily and it restarts after a while.
Non-Docker:
Run it with a 3 GB heap size and it takes 900 MB and runs smoothly (observed via jconsole).

Spring boot is consuming too much RAM

I have created some services in Spring Boot. I have 11 fat jars and I deploy them in Docker containers. My concern is that every jar consumes between 1 and 1.5 GB of RAM without any load. I check the RAM by running:
docker stats containername
At first I thought it was the Java base image and I tried switching to one based on Alpine, but nothing changed, so I think the problem is my jar. Is there a way to change the amount of RAM the jar is using? Or is this behavior normal because every jar has an embedded Tomcat? Or maybe it is better to put some jars together and deploy them as a war, using only one Tomcat for a group of "jars"? Can someone share their experience?
Thanks in advance.
This is how Java behaves in general. The JVM takes as much memory as you give it, and it will perform a process called Garbage collection (What is the garbage collector in Java) to free up space once it decides it should do so.
However, if you don't tell your JVM how much memory it can use, it will use the system defaults, which depend on your system's memory and the number of cores you have. You can verify this using the following command (How is the default Java heap size determined):
java -XX:+PrintFlagsFinal -version | grep HeapSize
On my machine, that's an initial heap memory of 256MiB and a maximum heap size of 4GiB. However, that doesn't mean that your application needs it.
A good way of measuring your memory is by using a monitoring tool like jvisualvm. Additionally, you could use actuator's /health endpoint to see the heap memory usage as well.
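If your services expose Spring Boot 2's actuator metrics endpoint (an assumption about your setup), you can also sample heap usage from the command line, for example:
curl -s 'localhost:8080/actuator/metrics/jvm.memory.used?tag=area:heap'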
Your heap memory usage will normally have a sawtooth pattern (Why a sawtooth shaped graph), where the memory is gradually being used, and eventually freed by the garbage collector.
The memory that is left over after a garbage collection is usually objects that cannot be destroyed because they're still in use. You could see this as your working memory. Now, to configure your -Xmx, you'll have to see how your application behaves after trying it out:
Configure it below your normal memory usage and your application will go out of memory, throwing an OutOfMemoryError.
Configure it too low but above your minimal memory usage, and you will see a huge performance hit, due to the garbage collector continuously having to free memory.
Configure it too high and you'll reserve memory you won't need in most cases, wasting resources.
In my application's heap graph, I can see that it reserves about 1GiB of memory for heap usage, while it only uses about 30MiB after a garbage collection. That means the -Xmx value is way too high, so we could change it to different values and see how the application behaves.
People often prefer to work in powers of 2 (even though there is no limitation, as seen in jvm heap setting pattern). In my case, I need to go with at least 30MiB, since that's the amount of memory my application uses at all times. So that means I could try -Xmx32m, see how it performs, and adjust if it goes out of memory or performs worse.
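A way to run that experiment from the command line (the jar name is a placeholder; jstat ships with the JDK):
java -Xmx32m -jar application.jar
jstat -gc <pid> 5s    # in another terminal: heap and GC statistics every 5 seconds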
You can limit the memory the JVM uses inside a Docker container by passing the heap settings through an environment variable, e.g. -e JAVA_OPTS="-Xmx64M -Xms64M".
Dockerfile:
FROM openjdk:8-jre-alpine
VOLUME /var/lib/mysql
ADD /build/libs/application.jar app.jar
ENTRYPOINT exec java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar
image run:
docker run -d --name container-name -p 9100:9100 -e JAVA_OPTS="-Xmx512M -Xms512M" imagename:tag
Here I set a 512 MB heap. You can set 1 GB or whatever your requirement is. After running with this, check your memory usage: the heap will use at most 512 MB.
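Keep in mind that -Xmx only caps the heap; the JVM still needs memory for metaspace, thread stacks and native buffers. So if you also set a container limit with -m/--memory, leave some headroom above the heap, for example:
docker run -d --name container-name -m 768m -p 9100:9100 -e JAVA_OPTS="-Xmx512M -Xms512M" imagename:tag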
After taking a look at the openjdk Docker Hub image documentation, it seems that you can set the default heap size by setting -XX:MaxRAM=...:
RAM limit is supported by Windows Server containers, but currently the JVM cannot detect it. To prevent excessive memory allocations, the -XX:MaxRAM=... option must be specified with a value that is not bigger than the container's RAM limit.
From the oracle docs:
Default Heap Size: Unless the initial and maximum heap sizes are specified on the command line, they are calculated based on the amount of memory on the machine.
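You can see what heap the JVM would pick for a given MaxRAM value without starting your application, for example:
java -XX:MaxRAM=512m -XshowSettings:vm -version    # prints the resulting Max. Heap Size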

Docker: reserve a certain amount of memory for a container

I'm running npm inside a Docker container and every so often it aborts because it cannot allocate enough memory. I see flags like --memory (How do I set resources allocated to a container using docker?) for the docker run command that limit the maximum amount of memory a container can consume, but I haven't seen anything yet that would let me reserve an amount of memory for the container and abort immediately if it cannot be allocated.
This is not how memory management works under Linux.
If you run full virtualization, like QEMU, then all memory can be allocated and passed down into the VM. That VM then boots the kernel and the memory is managed by the kernel in the VM.
In Docker, or any other container/namespace system, the memory is managed by the kernel that runs Docker and the "containers". A process run in a container still runs like a normal process, just in a different cgroup. Each cgroup has limits, like how much memory the kernel will hand out to userland or which network interfaces it sees, but it still runs on the same kernel.
An analogy is that Docker is a "glorified ulimit". Processes under this limit still behave as normal Linux processes:
they allocate memory as needed
they will cause OOM issues if they exceed some limit, or if the host runs out of memory
And just like you can't pre-allocate memory for Firefox, you can't pre-allocate memory for a Docker container.
You can't reserve memory in docker, only limit it with --memory.
See: https://docs.docker.com/engine/reference/run/ for more detail.
Specifically look at the user memory constraints section.
memory=inf, memory-swap=inf (default): There is no memory limit for the container. The container can use as much memory as needed.
Note that this is the default. So, like other processes on the system, npm will use all the memory it can get or needs.
So either free up some memory or add more.
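For example, a hard limit with swap disabled makes npm fail fast inside the container instead of pushing the whole host into memory pressure (image and command are placeholders):
docker run --rm -m 512m --memory-swap 512m node:alpine npm install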
As others have said, you cannot reserve memory for processes, and therefore not for containers either. However, you could have the node app called from a wrapper script that checks the available memory and exits if it is below a certain threshold.
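A minimal sketch of such a wrapper, assuming a Linux host (which exposes MemAvailable in /proc/meminfo) and a 1 GiB threshold chosen purely for illustration:
#!/bin/sh
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
if [ "$avail_kb" -lt 1048576 ]; then    # 1 GiB in kB
  echo "Not enough free memory, aborting" >&2
  exit 1
fi
exec npm "$@"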
