MemSQL crashing when running out of memory - Docker

We are using MemSQL for real-time analytics. We have created a Kafka connector which continuously consumes from Kafka and pushes to MemSQL. After some time, MemSQL crashes due to memory issues.
Is there any way to monitor how much memory MemSQL consumes and how it autoscales?

(By your 'docker' tag, I am guessing that you run memsql in a container.)
Assuming that you have access to the host where your memsql container is running and you know which container you want to monitor, the memory usage information is available from the kernel's cgroup stats on the host.
You could run the following on the Docker host system:
container="your_container_name_or_ID"
# Resolve the container's full ID and its cgroup parent (empty unless overridden)
set -- $(docker inspect "$container" --format "{{.Id}} {{.HostConfig.CgroupParent}}")
id="$1"
cg="$2"
# Docker places containers under the /docker cgroup by default
if [ -z "$cg" ]; then
    cg=/docker
fi
# Peak memory usage (in bytes) recorded for the container's cgroup
cat "/sys/fs/cgroup/memory${cg}/${id}/memory.max_usage_in_bytes"
This will print the peak memory usage of all processes running in the container's cgroup (that is, the initial process started by the container's ENTRYPOINT and any child processes it might have run).
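If you want the current rather than the peak figure, the same cgroup directory exposes it too; a minimal sketch, reusing the id and cg variables set above:
# Current memory usage (in bytes) of the container's cgroup (cgroup v1 layout)
cat "/sys/fs/cgroup/memory${cg}/${id}/memory.usage_in_bytes"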

If you want to check how much memory your memsql nodes are using, run:
select * from information_schema.mv_nodes
There are more details about how memsql uses memory here:
https://help.memsql.com/hc/en-us/articles/115001091386-What-Is-Using-Memory-on-My-Leaves-
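Since MemSQL speaks the MySQL wire protocol, you can also poll that view from a shell to watch usage over time; a minimal sketch (host, port and credentials are placeholders for your own deployment):
# Print node memory usage every 30 seconds via the MySQL-compatible client
while true; do
    mysql -h 127.0.0.1 -P 3306 -u root -e "select * from information_schema.mv_nodes"
    sleep 30
done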

Related

Kubernetes pod memory usage breakdown

I'm trying to get a breakdown of the memory usage of my pods running on Kubernetes. I can see a pod's memory usage through kubectl top pod, but what I need is a complete breakdown of where the memory is used.
My container might download or write new files to disk, so I'd like to see, at a given moment, how much of the used memory is taken by each file and how much by the running software.
Currently there's no real disk, just tmpfs, which means every file consumes the allocated memory resources; that is okay as long as I can inspect and know how much memory is where.
I couldn't find anything like that. It seems that cAdvisor helps to get memory statistics, but it just uses docker/cgroups, which doesn't give a breakdown as I described.
A better solution would be to install a metrics server along with Prometheus and Grafana in your cluster. Prometheus will scrape the metrics, which Grafana can then display as graphs. This might be useful.
If you want to see per-process consumption inside the container, you can exec into the container and monitor the processes:
$ docker exec -it <container-name> watch ps aux
Moreover, you can check docker stats.
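For example, a one-shot snapshot showing just the memory columns (using the documented docker stats format fields):
# Non-streaming memory snapshot for all running containers
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"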
The following Linux command will summarize the sizes of the directories:
$ du -h

Docker Scheduling Strategy Clarification

I am new to Docker and was playing around with running multiple containers on a swarm with the spread strategy. The swarm had 3 nodes and each node already had 2 containers, and I ran 3 more docker run commands so that each node got 1 more container.
I was running the containers without specifying memory using -m, nor CPU, in the docker run command. Afterwards I ran "docker info" and noticed that Reserved Memory was 0.
I re-ran a container specifying -m, and this time noticed that Reserved Memory was not 0.
The question is: when running containers without -m, why is the Reserved Memory shown as 0, even though the running containers (spawned without -m) are still using memory? For example, if there are 10 containers already running on a single node (i.e. the host memory would be full) and I try to start a container on the same host machine with -m 1G, there may be no space left to reserve, so will it fail?
You are correct. When you don't specify the reserved memory with -m, memory is still used.
You see 0 because this is the default when reserved memory is not specified. That number is not the real memory consumption on the node, so 0 makes sense as a setting rather than a real-time report.
This was discussed in issue #1819 in the docker swarm project.
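To see the difference yourself, compare the reservation before and after starting a container with -m; a small sketch (the image is just an example):
# Start a container with a 1G memory reservation on the swarm
docker run -d -m 1g nginx
# The swarm manager's summary now accounts for the reservation
docker info | grep -i "reserved memory"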

How to set RAM memory of a Docker container by terminal or DockerFile

I need to have a Docker container with 6 GB of RAM.
I tried this command:
docker run -p 5311:5311 --memory=6g my-linux
But it doesn't work: I logged in to the Docker container and checked the amount of memory available. This is the output, which shows there are only 2 GB available:
>> cat /proc/meminfo
MemTotal: 2046768 kB
MemFree: 1747120 kB
MemAvailable: 1694424 kB
I tried setting Preferences -> Advanced in the Docker application.
If I set 6 GB, it works... I mean, I have a container with 6 GB MemTotal.
But this way all my containers will have 6 GB...
I was wondering how to allocate 6 GB of memory for only one container, using some command or by setting something in the Dockerfile. Any help?
Don't rely on /proc/meminfo for tracking memory usage from inside a Docker container. /proc/meminfo is not containerized, which means that the file displays the meminfo of your host system.
Your /proc/meminfo indicates that your host system has 2G of memory available. The only way you'll be able to make 6G available in your container without adding more physical memory is to create a swap partition.
Once you have a swap partition of ~4G or larger, your container will be able to use that memory (by default, Docker imposes no limitation on running containers).
If you want to limit the amount of memory available to your container explicitly to 6G, you could do docker run -p 5311:5311 --memory=2g --memory-swap=6g my-linux, which means that out of a total memory limit of 6G (--memory-swap), up to 2G may be physical memory (--memory). More information about this here.
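To verify which limits actually apply, read the container's cgroup files instead of /proc/meminfo; a minimal sketch, assuming a cgroup v1 host (the paths differ under cgroup v2):
# Physical memory limit applied to this container, in bytes (2G here)
docker run --rm --memory=2g --memory-swap=6g my-linux cat /sys/fs/cgroup/memory/memory.limit_in_bytes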
There is no way to set memory limits in the Dockerfile that I know of (and I think there shouldn't be: Dockerfiles are for building images, not running containers), but docker-compose supports the above options through the mem_limit and memswap_limit keys.
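For example, a minimal docker-compose sketch (version 2 file format, where these are service-level keys; the service and image names are taken from the question):
# docker-compose.yml
version: "2"
services:
  my-linux:
    image: my-linux
    ports:
      - "5311:5311"
    mem_limit: 2g
    memswap_limit: 6g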

Docker CPU and memory too low

I am a newbie to the Docker world. I could successfully build and run a container with Tomcat, but the performance is very poor. I logged into the running system and found that only 2 CPU cores and 4 GB of RAM were allocated. Is that one of the reasons for the bad performance, and if so, how can I allocate more resources?
I tried the following command, but no luck:
docker run --rm -c 3 -p 32772:8080 --memory=8Gb -d helloworld
Any pointer will be helpful.
Thanks in advance.
Do you use Docker for Windows/Mac? Then you can change the allocation in the settings (Docker icon in the taskbar).
On Windows, Docker runs in Hyper-V without dynamic memory, so the memory will not be available to your system even if it isn't used.
With docker info you can find out how many resources are available.
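For example:
# Resources available to the Docker engine/VM
docker info | grep -E "CPUs|Total Memory"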
The bad performance may also be caused by very slow file access on Docker for Mac.
On Linux, Docker has no upper limit by default.
The cpu and memory arguments of docker run limit the resources for a single container; if they are not set, there is no upper limit.
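Once the VM itself has enough resources, you can cap a single container explicitly; a sketch reusing the image from the question (--cpus requires Docker 1.13+). Note that -c sets a relative cpu-shares weight rather than a core count, which is why -c 3 had no visible effect:
# Cap the container at 3 CPUs and 8 GB of RAM
docker run --rm -p 32772:8080 --cpus=3 --memory=8g -d helloworld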

Limit disk size and bandwidth of a Docker container

I have a physical host machine with Ubuntu 14.04 running on it. It has a 100G disk and 100M of network bandwidth. I installed Docker and launched 10 containers. I would like to limit each container to a maximum of 10G of disk and 10M of network bandwidth.
After going through the official documents and searching on the Internet, I still can't find a way to allocate a specified disk size and network bandwidth to a container.
I think this may not be possible in Docker directly; maybe we need to bypass Docker. Does this mean we should use something "underlying", such as LXC or cgroups? Can anyone give some suggestions?
Edit:
@Mbarthelemy, your suggestion seems to work, but I still have some questions about disk:
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
2) I use the command below to start the Docker daemon and container:
docker -d -s devicemapper
docker run -i -t training/webapp /bin/bash
then I use df -h to view the disk usage, which gives the following output:
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/docker-longid   9.8G  276M  9.0G   3% /
/dev/mapper/Chris--vg-root   27G  5.5G   20G  22% /etc/hosts
From the above, I think the maximum disk a container can use is still larger than 10G. What do you think?
I don't think this is possible right now using Docker default settings. Here's what I would try.
About disk usage: you could tell Docker to use the devicemapper storage backend instead of AUFS. This way each container would run on a block device (a devicemapper dm-thin target) limited to 10GB (this is a Docker default; luckily enough, it matches your requirement!).
According to this link, it looks like the latest versions of Docker now accept advanced storage backend options. Using the devicemapper backend, you can now change the default container rootfs size with --storage-opt dm.basesize=20G (which would be applied to any newly created container).
To change the storage backend, use the --storage-driver=devicemapper Docker option. Note that your previous containers won't be seen by Docker anymore after the change.
About network bandwidth: you could tell Docker to use LXC under the hood, via the -e lxc option.
Then create your containers with a custom LXC directive to put them into a traffic class:
docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" your/image /bin/stuff
Check the official documentation about how to apply bandwidth limits to this class.
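For illustration, here is a host-side sketch of what that might look like: classid 0x00100001 corresponds to tc class 10:1, and the device name and 10mbit rate are assumptions based on your 10M requirement:
# On the host: create an HTB hierarchy and cap class 10:1 at 10 Mbit/s
tc qdisc add dev eth0 root handle 10: htb
tc class add dev eth0 parent 10: classid 10:1 htb rate 10mbit
# Classify packets by the net_cls classid tagged in the container's cgroup
tc filter add dev eth0 parent 10: handle 1: cgroup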
I've never tried this myself (my setup uses a custom Open vSwitch bridge and VLANs for networking, so bandwidth limitation is different and somewhat easier), but I think you'll have to create and configure a different class.
Note: the --storage-driver=devicemapper and -e lxc options are for the Docker daemon, not for the Docker client you're using when running docker run ...
Newer releases have --device-read-bps and --device-write-bps.
You can use:
docker run --device-read-bps=/dev/sda:10mb
More info here:
https://blog.docker.com/2016/02/docker-1-10/
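A fuller sketch combining both flags (the device path is an assumption and must be the block device backing the container's storage; the image is the one from the question):
# Cap read and write throughput against /dev/sda at 10 MB/s each
docker run -it --device-read-bps /dev/sda:10mb --device-write-bps /dev/sda:10mb training/webapp /bin/bash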
If you have access to the containers, you can use tc for bandwidth control within them.
E.g., in your entrypoint script you can add:
tc qdisc add dev eth0 root tbf rate 240kbit burst 300kbit latency 50ms
to get a bandwidth of 240 kbit/s, with a 300 kbit burst and 50 ms of latency.
You also need to pass --cap-add=NET_ADMIN to the docker run command if you are not running the containers as root.
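For example (image placeholder as elsewhere in this thread):
# NET_ADMIN allows the entrypoint's tc command to modify qdiscs
docker run --cap-add=NET_ADMIN your/image /bin/stuff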
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
To answer this question, please refer to Resizing Docker containers with the Device Mapper plugin.
