After executing show disk details, I can see "Disk rebuild speed low" on a Solace appliance 3560.
What does "Disk rebuild speed low" mean in Solace?
The CLI command
disk rebuild speed <low|high>
changes the RAID rebuild speed of a physical appliance. The higher the rebuild speed, the faster the RAID 1 mirroring completes, but more system resources are consumed by the rebuild task.
You can check disk rebuild status in the support shell with:
[support#solace ~]$ cat /proc/mdstat
and the corresponding speed limits that are set after changing the speed with the above CLI command:
[support#solace ~]$ cat /proc/sys/dev/raid/speed_limit_max
[support#solace ~]$ cat /proc/sys/dev/raid/speed_limit_min
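If you want to keep an eye on the rebuild while it runs, here is a minimal sketch, assuming watch is available in the support shell (the 5-second interval is just an example):
[support#solace ~]$ watch -n 5 'cat /proc/mdstat; echo; cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max'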
I am using Docker over HTTPS: https://x.x.198.38:2376/v1.40/images/load
I started getting this error when running Docker on CentOS; it was not an issue on Ubuntu.
The image in question is 1.1 GB in size.
Error Message:
Error processing tar file(exit status 1): open /root/.cache/node-gyp/12.21.0/include/node/v8-testing.h: no space left on device
I ran into a similar issue some time back.
The image might have a lot of small files, and you might be falling short on disk space or inodes.
I was only able to catch it by running "watch df -hi": it showed inode usage spiking up to 100% before Docker cleaned up and it dropped back to 3%.
Further analysis showed that the attached volume was very small, just 5 GB, of which 2.9 GB was already used by unused images and stopped or exited containers.
Hence, as a quick fix:
sudo docker system prune -a
This increased the free inodes from 96k to 2.5M.
As a long-term fix, I increased the AWS EBS volume to 50 GB, since we had plans to use Windows images too in the future.
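Putting the quick fix together, an illustrative before/after check (actual numbers and devices will differ per host):
df -hi                        # note inode usage (IUse%) before cleanup
sudo docker system prune -a   # remove stopped containers and unused images
df -hi                        # confirm free inodes and disk space have recovered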
HTH
#bjethwan, you found a very good command; it solved my problem, thank you. I am using Red Hat, and I want to add something.
The watch command runs at a 2-second interval by default. At that default it couldn't catch the problematic inodes.
I ran watch with a 0.5-second interval, and that caught the guilty volume :)
watch -n 0.5 df -hi
After identifying the affected volume, you can then increase its size.
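If the culprit turns out to be, say, an EBS-backed ext4 volume, the resize might look like the sketch below; the device and partition names are assumptions, so adjust them for your host (and use xfs_growfs instead of resize2fs for XFS):
sudo growpart /dev/xvda 1   # extend the partition after enlarging the volume in AWS
sudo resize2fs /dev/xvda1   # grow the ext4 filesystem, which also adds inodes
df -hi                      # verify the new space and inode counts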
After some research, I found out that kubectl logs -f <pod> reads logs from files, i.e., the .log files to which the Docker containers running inside the pods have written their logs. In my case, the Docker container runs an application that I have written. Now, say I have disabled logging in my application, expecting that RAM usage on the system will go down.
With logging enabled in my application, I kept track of CPU and MEM usage
Commands used:
a. top | grep dockerd
b. top | grep containerd-shim
With logging disabled, I again kept track of CPU and MEM usage.
But I didn't find any difference. Could anyone explain what is happening here internally?
The simple explanation:
Logging doesn't use a lot of RAM. As you said, the logs are written to disk as soon as they are processed, so they are not held in memory beyond the short-lived variables used for each log entry.
Logging doesn't use a lot of CPU cycles either. The CPU is typically around five orders of magnitude faster than a spinning disk (less for SSDs), so formatting a log entry costs the CPU very little compared with writing it out. In other words, you'll barely notice any difference in CPU usage when you disable logging.
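One way to see where the log data actually goes (disk rather than RAM) is to look at the container's log file on the host, assuming the default json-file logging driver; the container name is a placeholder:
LOG=$(docker inspect --format '{{.LogPath}}' <container>)   # path of the JSON log file on the host
sudo du -h "$LOG"                                           # this file grows with logging; RAM does not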
We're running Docker containers of NiFi 1.6.0 in production and have come across a memory leak.
Once started, the app runs just fine; however, after a period of 4-5 days, the memory consumption on the host keeps increasing. When checked in the NiFi cluster UI, the JVM heap usage is hardly around 30%, but memory at the OS level goes to 80-90%.
On running the docker stats command, we found that the NiFi Docker container is consuming the memory.
After collecting the JMX metrics, we found that the RSS memory keeps growing. What could be the potential cause of this? In the JVM tab of the cluster dialog, young GC also seems to be happening in a timely manner, with old GC counts shown as 0.
How do we go about identifying what's causing the RSS memory to grow?
You need to replicate that in a non-Docker environment, because with Docker, the reported memory is known to rise.
As I explained in "Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container", docker has some bugs (like issue 10824 and issue 15020) which prevent an accurate report of the memory consumed by a Java process within a Docker container.
That is why a plugin like signalfx/docker-collectd-plugin mentions in its PR (pull request) 35 that it will "deduct the cache figure from the memory usage percentage metric":
Currently the calculation for memory usage of a container/cgroup being returned to SignalFX includes the Linux page cache.
This is generally considered to be incorrect, and may lead people to chase phantom memory leaks in their application.
For a demonstration on why the current calculation is incorrect, you can run the following to see how I/O usage influences the overall memory usage in a cgroup:
docker run --rm -ti alpine
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
dd if=/dev/zero of=/tmp/myfile bs=1M count=100
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
You should see that the usage_in_bytes value rises by 100MB just from creating a 100MB file. That file hasn't been loaded into anonymous memory by an application, but because it's now in the page cache, the container memory usage appears to be higher.
Deducting the cache figure in memory.stat from the usage_in_bytes shows that the genuine use of anonymous memory hasn't risen.
The signalFX metric now differs from what is seen when you run docker stats which uses the calculation I have here.
It seems like knowing the page cache use for a container could be useful (though I am struggling to think of when), but knowing it as part of an overall percentage usage of the cgroup isn't useful, since it then disguises your actual RSS memory use.
In a garbage-collected application with a max heap size as large as, or larger than, the cgroup memory limit (e.g. the -Xmx parameter for Java, or .NET Core in server mode), the tendency will be for the percentage to get close to 100% and then just hover there, assuming the runtime can see the cgroup memory limit properly.
If you are using the Smart Agent, I would recommend using the docker-container-stats monitor (to which I will make the same modification to exclude cache memory).
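To make the quoted arithmetic concrete, here is a minimal "usage minus cache" sketch using the same cgroup v1 files shown above (cgroup v2 exposes different files, so this only applies to the v1 layout):
USAGE=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
CACHE=$(awk '/^cache /{print $2}' /sys/fs/cgroup/memory/memory.stat)
echo "non-cache memory: $(( (USAGE - CACHE) / 1024 / 1024 )) MiB"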
Yes, the NiFi Docker container has memory issues; it shoots up after a while and restarts on its own. The non-Docker deployment, on the other hand, works absolutely fine.
Details:
Docker:
Run it with a 3 GB heap size and, immediately after startup, it consumes around 2 GB. Run some processors and the machine's fan spins heavily, and it restarts after a while.
Non-Docker:
Run it with a 3 GB heap size and it takes about 900 MB and runs smoothly (observed with jconsole).
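A rough way to reproduce that comparison; the container name nifi and the presence of ps inside the image are both assumptions:
docker stats --no-stream nifi                # container-level memory as Docker reports it
docker exec nifi ps -C java -o pid,rss,cmd   # RSS of the JVM process inside the container
For the non-Docker install, attach jconsole to the NiFi JVM as above.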
When I start Docker for Windows, memory usage increases by almost 25% of 6 GB (that's 1.5 GB) without even running a container. I can't see the Docker process in Task Manager, but I figured out the memory usage by looking at the memory usage percentage before and after running the Docker for Windows program.
I'm running Windows 10. How can I prevent Docker from eating up all this RAM?
You can change it in the settings. Just decrease the memory allocation with the slider: go to Settings and choose the Advanced tab.
Other settings:
https://docs.docker.com/docker-for-windows/#docker-settings-dialog
The solution is to create a .wslconfig file in your Windows home directory (C:\Users\<Your Account Name>).
Put the following contents in the file:
[wsl2]
memory=1GB
processors=1
The memory and processors entries set the resources allocated to the WSL2 VM. You can change them according to your preference; this is my config on a 16 GB i5 machine.
After that, restart the WSL2 process:
Start PowerShell in admin mode and type: Restart-Service LxssManager
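On some setups, shutting WSL2 down entirely also works; run this from the same elevated PowerShell prompt (all WSL2 distros and the utility VM stop, and the new .wslconfig limits apply on the next start):
wsl --shutdown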
After that, you are good to go!
P.S.: Start Docker only when it is required.
I have a Spark job with spark.executor.memory=4G set and a Docker memory limit of 5G.
I am monitoring memory usage with docker stats:
$ docker stats --format="{{.MemUsage}}"
1.973GiB / 5GiB
Usage of memory = RSS+Cache = 930MB + 1.xGB = 1.97GB
The cache size keeps increasing until it triggers the OOM killer, and then my job fails.
Currently, I release the cache manually by running:
$ sync && echo 3 > /proc/sys/vm/drop_caches
It works for me, but is there a better way to limit the Docker memory cache size or drop the memory cache automatically?
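For what it's worth, a crude stopgap is to automate the same manual command on the host; drop_caches is host-wide and needs root, so treat this as a sketch rather than a recommendation:
# host-side loop (run as root), dropping the page cache every 10 minutes
while true; do sync && echo 3 > /proc/sys/vm/drop_caches; sleep 600; done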