Storage Spaces not increasing the allocated size in a thin provisioning storage pool - error "Not enough space"

I have a pool of two SSD RAID arrays (a 3.25 TB RAID1 and a 24 TB RAID5) with multiple thin virtual disks and volumes. Everything worked fine until last week, when we started getting the error "not enough space" even though there is enough free space on the pool, the virtual disks, and the volumes.
All physical disks are healthy and I have tried pool optimization, but nothing worked.

Related

High memory utilisation in a Golang application deployed on a Kubernetes cluster

We have an Image Service written in Golang.
It supports image operations like resize, crop, and blur.
The RPS is around 400.
Pod config: 16 GB RAM and 8 cores.
We deployed the application and observed it for a day; it showed high CPU (core) utilization.
We introduced a 4 GB ballast (https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i-learnt-to-stop-worrying-and-love-the-heap-26c2462549a2/) and sync.Pool (https://medium.com/a-journey-with-go/go-understand-the-design-of-sync-pool-2dde3024e277) to contain the CPU issue.
Next we started observing high memory utilization, so we reduced the ballast to 1 GB, but memory utilization is still high.
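For context, this is roughly what the two techniques look like; a minimal sketch, where the ballast size, buffer size, port, and handler names are illustrative rather than our actual service code:

package main

import (
    "net/http"
    "runtime"
    "sync"
)

// bufPool hands out reusable scratch buffers so each request does not
// allocate a fresh slice for resize/crop/blur work.
var bufPool = sync.Pool{
    New: func() interface{} {
        return make([]byte, 4<<20) // 4 MiB scratch buffer (illustrative size)
    },
}

func handleImage(w http.ResponseWriter, r *http.Request) {
    buf := bufPool.Get().([]byte)
    defer bufPool.Put(buf)
    // ... decode / resize / crop / blur using buf as scratch space ...
}

func main() {
    // Ballast: a large, never-written allocation that raises the GC's heap
    // target so collections run less often under steady allocation load.
    ballast := make([]byte, 1<<30) // 1 GiB here; we started with 4 GB

    http.HandleFunc("/image", handleImage)
    _ = http.ListenAndServe(":8080", nil)

    runtime.KeepAlive(ballast) // keep the ballast reachable for the process lifetime
}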
According to this article, https://www.bwplotka.dev/2019/golang-memory-monitoring/, Go 1.12+ reports higher RSS; as the article puts it, "This does not mean that they require more memory, it’s just optimization for cases where there is no other memory pressure."
To verify this, we ran a small POC on a local machine, and it behaved as described.
Local setup: container memory limit of 500 MB.
The memory usage would increase continuously and stay at around 450 MB until memory pressure increased; as soon as pressure increased, it dropped to about 4 MB.
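A simplified sketch of what such a POC can look like (the allocation sizes here are illustrative): allocate and drop a few hundred MB, then read runtime.MemStats to compare the heap actually in use with what the runtime has obtained from, and released back to, the OS.

package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

func main() {
    // Allocate roughly 400 MiB, then drop the reference so it becomes garbage.
    garbage := make([][]byte, 0, 100)
    for i := 0; i < 100; i++ {
        garbage = append(garbage, make([]byte, 4<<20))
    }
    garbage = nil

    runtime.GC()
    debug.FreeOSMemory() // ask the runtime to hand freed pages back to the OS

    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("HeapInuse=%d MiB HeapReleased=%d MiB Sys=%d MiB\n",
        m.HeapInuse>>20, m.HeapReleased>>20, m.Sys>>20)
}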
But this POC failed on the Kubernetes cluster: the pods started crashing and restarting once memory reached ~16 GB at high RPS (around 400).
Can someone suggest how we can contain this memory issue, and why the POC failed on the cluster?
Let me know if more detail is required.

Docker for Mac memory usage in com.docker.hyperkit

I'm running Docker Desktop Community 2.1.0.3 on macOS Mojave. I've got 8 GB of memory allocated to Docker, which already seems like a lot (that's half my RAM). Somehow, even after exiting and then starting Docker for Mac again, which means no containers are running, Docker is already exceeding the memory allocation by 1 GB.
What is the expected memory usage for Docker with no containers running? Is there a memory leak in Docker for Mac or Docker's HyperKit?
As @GabLeRoux has shared in a comment, the "Real Memory" usage is much lower than what you see in the "Memory" column in Activity Monitor.
This document thoroughly explains memory usage on macOS with Docker Desktop; the information here is excerpted from it.
To see the "Real Memory" used by Docker, right-click the column names in Activity Monitor and select "Real Memory". The value in this column is what's currently physically allocated to com.docker.hyperkit.
Alternate answer: I reduced the number of CPUs and the amount of memory Docker is allowed to use in Docker's Resources preferences. My computer is running faster and quieter now.
I just put this in place, so time will tell whether this solution works for me. Before, it was making my computer max out on memory; now that's significantly reduced.
Thank you for the note on Real Memory. I added that column to my Activity Monitor.
UPDATE: It's been a few days now; my computer stays well below its memory limit, and my fan barely runs, if at all.
I think you shouldn't be using swap while RAM is not full, for SSD health and speed.

Does it make sense to run multinode Elasticsearch cluster on a single host?

What do I get by running multiple nodes on a single host? I am not getting availability, because if the host is down, the whole cluster goes with it. Does it make sense regarding performance? Doesn't one instance of ES take as many resources from the host as it needs?
Generally no, but if you have machines with very large amounts of CPU and memory, you might want this to properly utilize the available resources. Avoiding big heaps with Elasticsearch is generally a good thing, since garbage collection on bigger heaps can become a problem, and in any case above 32 GB you lose the benefit of compressed pointers. Mostly you should not need big heaps with ES. Most of the memory ES uses is through memory-mapped files, which rely on the OS page cache. So just because you aren't assigning memory to the heap doesn't mean it isn't being used: more memory available for caching means you'll be able to handle bigger shards or more shards.
So if you run more nodes, that advantage goes away: you waste memory on redundant heaps and the nodes end up competing for resources. Mostly, you should base these decisions on actual memory, cache, and CPU usage, of course.
It depends on your host and how you configure your nodes.
For example, Elastic recommends allocating up to 32 GB of RAM to Elasticsearch (because of how Java compresses pointers) and leaving another 32 GB for the operating system (mostly for disk caching).
Assuming you have more than 64 GB of RAM on your host, say 128 GB, it makes sense to run two nodes on the same machine, each configured with 32 GB of RAM, leaving the other 64 GB for the operating system.

What does "Thin Pool" in docker mean?

I guess this should be pretty elementary, but I've tried to Google it and I've read the Docker documentation. However, I still can't grasp what exactly "Thin Pool" means and the role it plays in the Docker world.
Short story:
A thin pool is a storage source that provides on-demand allocation of storage space. It is more or less similar to virtual memory, which provides a full address space to every process.
Long story:
Fat Provisioning
The traditional storage allocation method is called "fat" or "thick" provisioning.
For example, a user requests 10 GB of storage space. Fat provisioning then reserves 10 GB of physical storage for that user, even if they only use 1% of it. No one else can use the reserved space.
Thin Provisioning
Thin provisioning provides a mechanism of on-demand storage allocation, which allows a user to claim more storage space than has been physically reserved for that user.
In other words, it enables over-allocation of storage space. Think of RAM's overcommit feature.
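As a rough analogy (not Docker-specific), a sparse file behaves the same way: you can claim a file size far larger than the blocks actually allocated, and real blocks are consumed only as data is written. A minimal Go sketch, assuming a filesystem that supports sparse files; the file name and sizes are arbitrary:

package main

import (
    "fmt"
    "os"
)

func main() {
    f, err := os.Create("thin.img") // example file name
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // "Claim" 10 GiB of logical size without allocating 10 GiB of blocks.
    if err := f.Truncate(10 << 30); err != nil {
        panic(err)
    }

    // Writing 1 MiB in the middle allocates only the blocks it actually touches.
    if _, err := f.WriteAt(make([]byte, 1<<20), 5<<30); err != nil {
        panic(err)
    }

    info, _ := f.Stat()
    fmt.Printf("logical size: %d bytes; physical blocks grow only as data is written\n", info.Size())
}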
Thin Pool
A thin pool is a conceptual term that stands for the backing storage used by thin provisioning. Thin provisioning allocates virtual chunks of storage from the thin pool, while fat provisioning allocates physical blocks of storage from a traditional storage pool.
Thin Pool in Docker
The Docker Engine can be configured to use Device Mapper as its storage driver. This is where you deal with thin provisioning. According to Docker's documentation:
Production hosts using the devicemapper storage driver must use direct-lvm mode. This mode uses block devices to create the thin pool.
Two different spaces of the thin pool need to be taken care of: the metadata space (which stores pointers) and the data space (which stores the real data). At the very beginning, the pointers in the metadata space refer to no real chunks in the pool; no chunk in the data space is actually allocated until a write request arrives. This is nothing new if you are familiar with the virtual memory mechanism.
Let's take a look at the output of docker info:
Data Space Used: 11.8 MB
Data Space Total: 107.4 GB
Data Space Available: 7.44 GB
Metadata Space Used: 581.6 kB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.147 GB
Thin Pool Minimum Free Space: 10.74 GB
Here, the only confusing one is the Thin Pool Minimum Free Space. What does it stand for?
It specifies the minimum free space, in GB, that a thin pool must have for a new device creation to succeed. The check applies to both free data space and free metadata space.
Container creation (during docker pull or docker run) fails if the free space in the thin pool drops below this value. To recover, either add more storage to the thin pool or clean up unused images. In the output above, the 10.74 GB minimum is 10% of the 107.4 GB total data space, which matches the devicemapper driver's default dm.min_free_space setting of 10%.
Links:
Thin provisioning Wikipedia page
lvmthin Linux man page
The Device Mapper

JVM Crash java.lang.OutOfMemoryError

My Java application runs on Windows Server 2008 R2 and JDK 1.6
When I monitor it with JConsole, the committed virtual memory increases continually over time, while the heap memory usage stays below 50MB.
The max heap size is 1024MB.
The application creates many small, short-lived event objects over time. The behavior is as if each heap allocation is counted against the committed virtual memory.
When the committed virtual memory size approaches 1000MB, the application crashes with a native memory allocation failure.
java.lang.OutOfMemoryError: requested 1024000 bytes for GrET in C:\BUILD_AREA\jdk6\hotspot\src\share\vm\utilities\growableArray.cpp. Out of swap space?
My conclusion is that the process's 2 GB virtual address space has been exhausted.
Why does JConsole show the committed virtual memory growing over time even though the heap is not?
