Failed to limit memory with docker compose

My server has 2GB of memory.
I launched 2 containers on the server with docker-compose.
Although I set the memory limit, it does not seem to work.
docker-compose.yml:

hub:
  mem_limit: 256m
  image: selenium/hub
  ports:
    - "4444:4444"
test:
  mem_limit: 256m
  build: ./
  links:
    - hub
  ports:
    - "5900"

I'm not sure about this, but try setting mem_limit to 256000000 (bytes) instead of using the 'm' suffix.
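For example, a sketch of the hub service from the question with the limit spelled out in bytes:

hub:
  mem_limit: 256000000
  image: selenium/hub
  ports:
    - "4444:4444"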

This is not documented anywhere in docker-compose, but you can pass any resource accepted by the setrlimit system call as an entry under ulimits.
So you can specify in docker-compose.yaml:

ulimits:
  as:
    hard: 130000000
    soft: 100000000

The size is in bytes. After going over this limit, your process will get memory allocation failures, which it may or may not be able to trap.
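As a quick way to confirm the limit takes effect, the equivalent docker run flag can be tested directly (a sketch using the hypothetical values from the snippet above):

# soft:hard RLIMIT_AS in bytes; check the "Max address space" row of the output
docker run --rm --ulimit as=100000000:130000000 alpine cat /proc/self/limits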

Related

ElasticSearch - cannot run two es docker containers at the same time
I'm trying to run 2 services of Elasticsearch using docker-compose.yaml.
Every time I run docker-compose up -d, only one service is working at a time. When I try to start the stopped service, it runs, but the first one, which was working before, stops immediately.
This is what my docker-compose.yaml looks like:
version: '3.1'

services:
  es-write:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-write
    environment:
      - discovery.type=single-node
      - TAKE_FILE_OWNERSHIP=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
  es-read:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-read
    environment:
      - discovery.type=single-node
      - TAKE_FILE_OWNERSHIP=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9201:9200
  sqs:
    image: "roribio16/alpine-sqs:latest"
    container_name: "sqs"
    ports:
      - "9324:9324"
      - "9325:9325"
    volumes:
      - "./.docker-configuration:/opt/custom"
    stdin_open: true
    tty: true
Tl;dr
I believe you are getting the well-known <container name> exited with code 137 error, which is Docker's way of telling you the container was killed because it ran out of memory (OOM).
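You can confirm the OOM kill from the container state; a quick check, using the container names from the compose file above:

docker inspect -f 'exit={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' es-write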
To solve it
Define a maximum amount of RAM each container is allowed to use.
I allowed 4GB, but choose what suits you.
version: '3.1'

services:
  es-write:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-write
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    deploy:
      resources:
        limits:
          memory: 4GB # use at most 4 GB of RAM
  es-read:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: es-read
    environment:
      - discovery.type=single-node
    ports:
      - 9201:9200
    deploy:
      resources:
        limits:
          memory: 4GB # use at most 4 GB of RAM
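One caveat worth noting: as far as I know, the legacy docker-compose v1 CLI ignores the deploy section outside swarm mode unless you pass --compatibility, which translates the resource limits into their non-swarm equivalents (Compose V2 applies them directly):

docker-compose --compatibility up -d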

How to write a command to be executed in the docker compose?

I am kind of new to Docker and Docker Compose. I am using version 20.10.12 of Docker and 2.9.0 of Portainer. My aim is to write a Docker Compose file for Elasticsearch and deploy it in Portainer, but I get a problem that the memory given is not enough. After looking through other questions, I found that I could execute the following bash command to raise the kernel's limit on memory map areas:
sysctl -w vm.max_map_count=262144
So my .yml is like this:
version: "3.8"
services:
command: >
bash -c ' sysctl -w vm.max_map_count=262144'
master1:
image: docker.elastic.co/elasticsearch/elasticsearch:8.0.0
environment:
node.name: "master1"
ulimits:
memlock:
soft: -1
hard: -1
deploy:
endpoint_mode: dnsrr
mode: "replicated"
replicas: 1
resources:
limits:
memory: 4G
The problem is that when I try to deploy this compose file, it says "services.command must be a mapping".
I think that error is raised when the indentation is not correct, but I think in my case it is indented correctly.
The "services.command must be a mapping" error appears because command sits directly under services:, so Compose tries to parse it as a service definition; command belongs inside a service. But even with correct placement this would not work: vm.max_map_count must be set on the host, not in the Docker container.
Set it as described in this official doc.
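On a Linux host that typically looks like the following (standard sysctl usage; the file name under /etc/sysctl.d is arbitrary):

# one-off, lost on reboot
sudo sysctl -w vm.max_map_count=262144

# persist across reboots
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system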

Unable to get OpenSearch dashboard by running OpenSearch docker compose

I am a Windows user. I installed the Windows Subsystem for Linux (WSL 2) and then installed Docker using it. Then I tried to get started with OpenSearch, so I followed the documentation at the given link
https://opensearch.org/downloads.html and ran docker-compose up. In the shell, I am getting an error message like
opensearch-dashboards | {"type":"log","@timestamp":"2022-01-18T16:31:18Z","tags":["error","opensearch","data"],"pid":1,"message":"[ConnectionError]: getaddrinfo EAI_AGAIN opensearch-node1 opensearch-node1:9200"}
At http://localhost:5601/ I am getting messages like
OpenSearch Dashboards server is not ready yet
I also raised the memory resource preference to 5GB in Docker Desktop, but it still doesn't work. Can somebody help me with this?
After 5 days of having issues with OpenSearch, I've found something that works fine for me:
Set Docker memory to 4GB
and Docker vm.max_map_count = 262144.
Then I use previous versions of OpenSearch, because the latest does not seem stable:
opensearchproject/opensearch:1.2.3
opensearchproject/opensearch-dashboards:1.1.0
opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.2
Here is my docker-compose.yml file:
version: '3'

services:
  opensearch-node1A:
    image: opensearchproject/opensearch:1.2.3
    container_name: opensearch-node1A
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1A
      - discovery.seed_hosts=opensearch-node1A,opensearch-node2A
      - cluster.initial_master_nodes=opensearch-node1A,opensearch-node2A
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2A:
    image: opensearchproject/opensearch:1.2.3
    container_name: opensearch-node2A
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2A
      - discovery.seed_hosts=opensearch-node1A,opensearch-node2A
      - cluster.initial_master_nodes=opensearch-node1A,opensearch-node2A
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboardsA:
    image: opensearchproject/opensearch-dashboards:1.1.0
    container_name: opensearch-dashboardsA
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1A:9200","https://opensearch-node2A:9200"]'
    networks:
      - opensearch-net
  logstash-with-plugin:
    image: opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.2
    container_name: logstash-with-plugin
    networks:
      - opensearch-net

volumes:
  opensearch-data1:
  opensearch-data2:

networks:
  opensearch-net:
I had the same error message when opening http://localhost:5601/ while testing OpenSearch and OpenSearch Dashboards locally using Docker on Windows 10:
OpenSearch Dashboards server is not ready yet
opensearch-dashboards | {"type":"log","@timestamp":"2022-02-10T12:29:35Z","tags":["error","opensearch","data"],"pid":1,"message":"[ConnectionError]: getaddrinfo EAI_AGAIN opensearch-node1 opensearch-node1:9200"}
But when looking into the log I also found this other error:
opensearch-node1 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
The 3-part solution that worked for me was:
Part 1
On each OpenSearch node, update the file
/usr/share/opensearch/config/opensearch.yml
and add the line
plugins.security.disabled: true
before the security plugin settings:

# …checks. "Single-node" mode disables them again.
#discovery.type: single-node
plugins.security.disabled: true

######## Start OpenSearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
plugins.security.ssl.transport.pemcert_filepath: esnode.pem

I found the information in the official OpenSearch documentation.
Part 2
Set the memory allocated to Docker Desktop to 4GB in .wslconfig; more information here:
opendistrocommunity discussion
stackoverflow allocate memory
Make sure your allocated memory is set up correctly (you have to restart Docker Desktop): run docker info and check the line "Total Memory"; it should be roughly 4GB (in my case it shows 3.84GiB).
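For reference, a minimal %UserProfile%\.wslconfig that caps the WSL 2 VM (and with it Docker Desktop's backend) at 4GB looks like this:

[wsl2]
memory=4GB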
Part 3
Also increase vm.max_map_count:
Open PowerShell and run:
wsl -d docker-desktop
echo "vm.max_map_count = 262144" > /etc/sysctl.d/99-docker-desktop.conf
I found the info here in a GitHub discussion.
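Note that this file is only read when the WSL VM boots, so the new value should take effect after restarting it from PowerShell:

wsl --shutdown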
I had the same issue with my OpenSearch Dashboards instance installed on a VM without using Docker. The problem was caused by a wrong setting for the connection to the search engine in the opensearch-dashboards.yml file. I had mixed up the https and http protocols (there was a mismatch between the settings of opensearch and opensearch-dashboards):
opensearch.hosts: [https://localhost:9200]

Docker swarm across multiple hosts using the same docker-compose file

I am building a Docker swarm across 3 hosts for the following services: Grakn, Redis, Elasticsearch, MinIO and RabbitMQ.
My questions are:
Can I use one docker-compose.yml so that everything builds across the 3 hosts, or do we need 3 docker-compose.yml files?
In order to have HA, I also want to add 3 more hosts so that, say, if one (physical) host fails, the services running on it are transferred to another one and service won't be interrupted.
Can I use docker stack here? If so, how?
services:
  grakn:
    image: graknlabs/grakn:1.7.2
    ports:
      - 48555:48555
    volumes:
      - grakndata:/grakn-core-all-linux/server/db
    restart: always
  redis:
    image: redis:6.0.5
    restart: always
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  minio:
    image: minio/minio:RELEASE.2020-05-16T01-33-21Z
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  rabbitmq:
    image: rabbitmq:3.8-management
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    restart: always
Can I use one docker-compose.yml so that everything builds across 3 hosts, or do we need to have 3 docker-compose.yml files?
Yes, you should use one docker-compose.yml file. There you declare services and their desired state, including the number of replicas.
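For example (a sketch of a fragment of the services section, with arbitrary numbers), the desired state goes under each service's deploy key:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure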
In order to have HA, I also want to add 3 more hosts so that, say, if one (physical) host fails, the services running on it are transferred to another one and service won't be interrupted.
If you initialized a cluster of Docker Engines in swarm mode and these engines run on different hosts, service replicas can run on any host (unless you restrict service placement using Docker labels).
Can I use docker stack here? If so, how?
Yes, run docker stack deploy --compose-file [Path to a Compose file]
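A minimal end-to-end flow (the stack name mystack and the join token are placeholders):

docker swarm init                                    # on the first host
docker swarm join --token <token> <manager-ip>:2377  # on each remaining host
docker stack deploy --compose-file docker-compose.yml mystack
docker stack services mystack                        # check where replicas landed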

docker-compose stop not working after docker-compose -p <name> up

I am using docker-compose file version 2. I am starting containers with docker-compose -p some_name up -d and trying to kill them with docker-compose stop. The command exits with code 0, but the containers are still up and running.
Is this the expected behaviour for this version? If yes, any idea how I can work around it?
My docker-compose.yml file looks like this:
version: '2'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    ports:
      - "9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.graph.enabled: "false"
      xpack.watcher.enabled: "false"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 262144
        hard: 262144
  kafka-server:
    image: spotify/kafka
    environment:
      - TOPICS=my-topic
    ports:
      - "9092:9092"
  test:
    build: .
    depends_on:
      - elasticsearch
      - kafka-server
Update
I found that the problem is caused by using the -p parameter to give the containers an explicit project-name prefix. Still looking for the best way to solve it.
docker-compose -p [project_name] stop worked in my case. I had the same problem.
Try forcing the running containers to stop by sending a SIGKILL:
docker-compose -p some_name kill
I just read up on and experimented with how the Compose CLI resolves the project name when -p is passed.
You have to pass the same -p some_name to stop or kill the containers; otherwise Compose assumes the directory name as the project name.
Kindly let me know if this helped.
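An alternative that avoids repeating -p is to set the project name once via the COMPOSE_PROJECT_NAME environment variable, which Compose reads for every command (some_name is the placeholder from the question):

export COMPOSE_PROJECT_NAME=some_name
docker-compose up -d
docker-compose stop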
