dotnet core limit memory usage - docker

I'm trying to run a dotnet core project in Kubernetes, and I am wondering if there is a way to limit memory usage from the dotnet core project itself, the same way we can set a limit for JVM projects.
I see there is an optional argument in Docker to set a memory limit:
docker run --memory="198m" xxx
Also, in the Kubernetes YAML file, we can set a memory limit:
resources:
  requests:
    # ~402 MB
    memory: "384Mi"
    cpu: "250m"
  limits:
    # ~1500 MB
    memory: "1430Mi"
    cpu: "500m"

If you're using dotnet core 2.0.2 or higher, I believe it respects Docker's cgroup limits by default.
So all good on 2.0.2+. Just set the resource limits in K8s 👍
https://github.com/dotnet/coreclr/pull/13895
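If you want to cap the runtime explicitly on top of the cgroup limit, .NET Core 3.0+ also exposes a GC heap hard limit that can be set from the pod spec. A minimal sketch, assuming the DOTNET_ prefix (older runtimes use COMPlus_), and noting that the runtime parses these GC environment variables as hexadecimal:
containers:
  - name: myapp                # hypothetical container name
    image: myapp:latest        # hypothetical image
    env:
      # cap the GC heap at 75% of visible memory (hex 4B = decimal 75)
      - name: DOTNET_GCHeapHardLimitPercent
        value: "4B"
    resources:
      limits:
        memory: "512Mi"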

Related

Application running on multiple PODS

Currently my test application is running on only 1 pod (1 replica). If I increase the replicas, do I also need to increase the CPU & memory values, or will each pod get its own allocation based on the config?
My config now:
requests:
  cpu: 100m
  memory: 128Mi
My doubt: if I increase the pods, will each pod get 100m & 128Mi, or will this configuration be distributed across the pods?
The unit suffix Mi stands for mebibytes, and so this resource object specifies that the container needs 50 Mi and can use at most 100 Mi.
resources:
  requests:
    memory: 50Mi
  limits:
    memory: 100Mi
Resource limits are set on a per-container basis using the resources property of a containerSpec, which is a v1 API object of type ResourceRequirements. Because they are per-container, each replica gets its own 100m & 128Mi; the values are not divided across pods.
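To make that concrete: in the hypothetical deployment below, each of the three replicas is scheduled with its own 100m/128Mi request, so the cluster reserves 300m/384Mi in total.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app                 # hypothetical name
spec:
  replicas: 3                    # each pod gets the full request below
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: app
          image: test-app:latest # hypothetical image
          resources:
            requests:
              cpu: 100m          # per pod, not shared
              memory: 128Mi      # per pod, not shared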

How memory management happens when we set JVM arguments and memory requests and limits on a container

I have set the pod definition config as below. I set both the heap memory (JVM arguments) and memory limits on the container:
spec:
  containers:
    - command:
        - sh
        - '-c'
        - >-
          exec java -XX:+UseG1GC -Xms512m -Xmx512m -XX:MaxRAM=640m
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 8443
          name: https
          protocol: TCP
        - containerPort: 8081
          name: management
          protocol: TCP
      resources:
        limits:
          cpu: 200m
          memory: 950Mi
        requests:
          cpu: 100m
          memory: 128Mi
But the pod frequently gets killed with OOM. In that case, which values should I change: the resources part or the heap memory?
I would also like to know how memory set via JVM arguments and memory set via container resources work together.
First of all, your configuration seems fine. I don't think you need -XX:MaxRAM=640m. If you are using Java 10+ you don't need these flags at all; with Java 8 there is a flag that lets you drop them as well (see the sketch below).
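For reference, a hedged sketch of the container-aware alternatives (verify against your exact JDK version: container support is on by default in Java 10+ and was backported to 8u191+; the experimental cgroup flag covers roughly 8u131 through 8u190; app.jar is a placeholder):
# Java 10+ (and 8u191+): size the heap as a fraction of the
# cgroup memory limit instead of hard-coding -Xmx
exec java -XX:+UseG1GC -XX:MaxRAMPercentage=75.0 -jar app.jar

# Older Java 8 (8u131 - 8u190): the experimental cgroup-awareness flags
exec java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar app.jar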
I think your problem is that the actual resources on the nodes are not sufficient: the pod isn't in a Pending state, which means the 128Mi request could be reserved, but that says nothing about how much memory is really free. The problem may have two causes:
1: Your burst headroom isn't enough (200m CPU, 950Mi memory) and your app crashes while starting. This is a common problem with Java-based apps, especially with Spring Boot. To check this, remove the memory limit from the configuration and see if you still get OOM kills. If this fixes your problem, then find the sweet spot for the memory limit your app needs.
2: Your nodes are working at near full capacity, and your app has only 128Mi guaranteed but not much beyond that, since other bursting apps may be using more than they requested. You can simply monitor this with free -h on the nodes. This is why some consider it best practice to set requests and limits to the same values, to provide stability.
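A minimal sketch of that last point: setting requests equal to limits puts the pod in the Guaranteed QoS class in Kubernetes, making it the last candidate for eviction under node memory pressure (values here are illustrative):
resources:
  requests:
    cpu: 200m
    memory: 950Mi
  limits:
    cpu: 200m          # equal to the request -> Guaranteed QoS
    memory: 950Mi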

Resource utilization in Docker Swarm and Mesos

I am running my application as a Docker container both in Docker Swarm UCP (using a compose.yml file) and in Mesos (using a marathon.json file).
I have added resource constraints in both files.
compose.yml:
resources:
  limits:
    cpus: '0.50'
    memory: 50M
  reservations:
    cpus: '0.25'
    memory: 20M
marathon.json:
"cpus": 0.50,
"mem": 128.0,
"disk": 5.0
What I found out is that memory is a hard limit and CPU is a soft limit, i.e. the CPU limit is only used for weight and priority. If a Mesos node has 1 CPU and two applications are running, one with 0.4 CPU and the other with 0.6 CPU, then app one will get 40% of the CPU cycles and app two will get 60%.
Then what is the use of limits and reservations in the compose.yml file here?
Now I am trying to understand the following:
How do these resource constraints work exactly?
What happens when the container exceeds these values?
reservations means that the container won't start on a node if that node doesn't have enough free resources to respect this constraint.
limits means that when a process in the container reaches that limit and tries to allocate more, it will be forcefully killed.
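To verify what was actually applied to a running container, you can compare live usage against the recorded limits (a sketch; <container> is a placeholder for your container name or ID):
# live usage vs. the configured limit
docker stats --no-stream <container>

# limits recorded in HostConfig: Memory is in bytes, NanoCpus in
# billionths of a CPU; 0 means no limit was set
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' <container>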

How to enlarge the memory limit in docker-compose

I used docker-compose to run a service, but it crashed. I entered the container and got the resource info below with top.
top - 13:43:25 up 1 day, 6:46, 0 users, load average: 1.82, 0.74, 0.52
Tasks: 3 total, 1 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 32.2 us, 22.4 sy, 0.0 ni, 40.1 id, 3.0 wa, 0.0 hi, 2.3 si, 0.0 st
KiB Mem: 2047040 total, 1976928 used, 70112 free, 172 buffers
KiB Swap: 1048572 total, 1048572 used, 0 free. 14588 cached Mem
So I think my Docker container is out of memory.
I've tried adding
mem_limit: 5g
memswap_limit: 5g
mem_reservation: 5g
into docker-compose.yml
But it doesn't seem to work. My question is: how do I enlarge Docker's memory limit via docker-compose?
The Docker engine has a compatibility mode which aims to make the transition from compose v2 files to v3 easier. As a result, it is possible to partially use the swarm config (deploy) to specify resource limits for standard docker-compose usage.
To run in compatibility mode, just add the --compatibility flag like this:
docker-compose --compatibility up myservice
and you can use a compose file like this:
version: '3.5'
services:
  myservice:
    image: postgres:12-alpine
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
If this is on a machine running Docker Desktop, then you have to open the Docker Desktop preferences and go to the Resources section to tweak how much of your host's resources the Docker Engine can use.
As stated in the documentation, the following fields can be used in docker-compose to control memory and CPU resources:
deploy:
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
    reservations:
      cpus: '0.0001'
      memory: 20M
Note, however, that by default there are no limits on a container's memory usage, so if nothing sets a limit, the container can already use as much memory as the host allows, and setting the memory flags higher is unlikely to help.
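A quick way to confirm whether any memory limit is actually in effect (a sketch assuming a cgroup v1 host; on cgroup v2 read /sys/fs/cgroup/memory.max instead):
# run inside the container: an enormous value (or "max" on cgroup v2)
# means no limit is set, and the container may use all host memory
cat /sys/fs/cgroup/memory/memory.limit_in_bytes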

Docker - cpu limit configuration

I would like to set a limit on the CPU resource, and found the docker-compose settings file below.
version: '3'
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '0.001'
          memory: 50M
        reservations:
          cpus: '0.0001'
          memory: 20M
But I don't know what 0.001 means here. Will it use 0.1% of the total CPU capacity? Some documents say it can be set to more than 1. What happens if I set it to more than 1?
And what are the default values of resources.limits.cpus and resources.limits.memory?
By default, a container has unlimited access to the host's CPU. Limits allow you to configure the resources that are available to a container.
A cpus value of 0.001 means that for each second, 1 millisecond of CPU time will be available to the container.
A value greater than 1 makes sense on hosts with multiple CPUs. A value of 2, for instance, means the container will be given access to at most 2 CPUs.
The limits are the maximum Docker will let the container use. The reservations are how much it will set aside for the container, i.e. prevent other containers from using.
CPU is measured as a share of a single CPU's time. So the example you have will limit you to 0.1% of a CPU (1ms/s), which might be useful for some I/O-bound tasks. '1.0' would allow one CPU to be used entirely, and higher values would allow multiple CPUs to be allocated (or reserved) to the container.
This answer has some graphs showing the effects.
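Under the hood, cpus maps onto the kernel's CFS quota. A hedged sketch of the equivalence using the plain docker CLI (the flags are standard; the arithmetic is the point):
# cpus: '0.5' is shorthand for 50ms of CPU time per 100ms period
docker run --cpus='0.5' redis:alpine

# equivalent long form: quota and period are in microseconds
docker run --cpu-quota=50000 --cpu-period=100000 redis:alpine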
