Setting max_map_count for a particular Docker container

I am trying to run an Elasticsearch Docker container on a remote server. While trying to run ES, I get the standard max_map_count is too low error and the container stops.
I don't want to change the configuration of the entire host, because many other things run on it and they might be affected.
So, is there a way to change the vm configuration specifically for one Docker container when running it?

Since you're in dev mode, simply starting the image with -e "discovery.type=single-node" should do the trick; in single-node mode Elasticsearch treats the bootstrap checks, including the max_map_count check, as warnings rather than fatal errors:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4
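Note that vm.max_map_count is a kernel-global parameter rather than a namespaced one, so Docker cannot set it for a single container (docker run --sysctl only accepts namespaced sysctls such as net.*). If you later outgrow single-node mode, the only option is to raise the limit on the host itself, which is generally harmless for other workloads since it only raises a ceiling. A minimal sketch, using the value the Elasticsearch docs recommend:
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
The first command takes effect immediately; the second persists the setting across reboots.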

Related

SQL Server container loses data when stopped although it is mounted to a volume

Please bear in mind this is my first time posting here, so I might not know all the rules.
I have just started working with Docker and images. I created a small .NET app that uses the MS SQL Server Docker image. The app connects to the server and I am able to create the database using the Code First approach (Add-Migration and Update-Database work fine).
I am running my SQL container with --rm mode. Although it has a volume mounted to it, I have to recreate the database every time I rerun the container.
I read that if we want our data not to get lost when we delete the container, we have to mount a volume to the container. I had done the same thing with the MongoDB Docker image and it worked fine; my data is still present even when I stop and rerun my Mongo container. But my MS SQL container is unable to retain data.
Here is the command I am using to run my Docker image (I know about docker-compose, but I am avoiding it for now):
docker run -d --rm --name mssqlserver -p 1433:1433 -e "ACCEPT_EULA=Y" -e MSSQL_SA_PASSWORD=********** -v sqlData:/data/db mcr.microsoft.com/mssql/server
Note: the docs say --rm will not delete named volumes:
https://docs.docker.com/engine/reference/run/#clean-up---rm
The container path you mount the volume on is the problem: /data/db is MongoDB's data directory, but SQL Server keeps its data under /var/opt/mssql. Mount the volume there instead, like this:
docker run -d --rm --name mssqlserver -p 1433:1433 -e "ACCEPT_EULA=Y" -e MSSQL_SA_PASSWORD=********** -v sqlData:/var/opt/mssql mcr.microsoft.com/mssql/server
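To verify the fix, you can check that the named volume outlives the container (container and volume names follow the question; this is just a sanity check, not part of the original answer):
docker volume inspect sqlData
docker stop mssqlserver
docker run -d --rm --name mssqlserver -p 1433:1433 -e "ACCEPT_EULA=Y" -e MSSQL_SA_PASSWORD=********** -v sqlData:/var/opt/mssql mcr.microsoft.com/mssql/server
The databases created by your migrations should still be there after the rerun, because the SQL Server system and user database files now live on the volume.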

How to configure feature flags in RabbitMQ when using the Docker image?

I am trying to create a Docker container from a RabbitMQ image and then join that instance to an existing cluster.
However, I get the error incompatible_feature_flags.
It looks like the created image automatically enables some feature flags that are not enabled, and cannot be enabled, in the existing cluster.
I am running the container with the following command:
docker run -d --hostname xxx.yyy.com.co --name rabbit -p 15672:15672 -p 5672:5672 -p 4369:4369 --add-host='rabbit1:xxx.xxx.xx.xxx' --network=host -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin -e RABBITMQ_ERLANG_COOKIE='xxxxxxxx' -e ERL_EPMD_PORT=4369 rabbitmq:latest
I think it should be possible to enable/disable feature flags via parameters when starting the container, but I have not been able to find anything in the documentation.
I would appreciate any help.
It may be caused by a version difference between the two RabbitMQ nodes, e.g. one is 3.7.x but the other is 3.8.x.
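Two things that may help, depending on your versions. First, pin the image tag to the same minor version as the existing cluster instead of using rabbitmq:latest. Second, on 3.8 and later the server honours a RABBITMQ_FEATURE_FLAGS environment variable listing the feature flags to enable on the node's first boot. A sketch, assuming the cluster runs 3.7.x (the tag is a placeholder to adapt):
docker run -d --hostname xxx.yyy.com.co --name rabbit --network=host -e RABBITMQ_ERLANG_COOKIE='xxxxxxxx' -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin rabbitmq:3.7
If both sides are on 3.8+, something like -e RABBITMQ_FEATURE_FLAGS=quorum_queue restricts the new node to flags the existing cluster already supports.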

Conflict between two Docker containers of different Elasticsearch images

I am using two different Docker images of Elasticsearch on two different projects:
Project 1: docker.elastic.co/elasticsearch/elasticsearch:6.8.6
Project 2: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
It works, but I have noticed a weird behaviour: when I start the one with the 6.8.6 version, the other crashes:
f35d8b319ec0 docker.elastic.co/elasticsearch/elasticsearch:5.6.8 "/bin/bash bin/es-do…" 3 hours ago Exited (137) Less than a second ago
If I do a docker-compose up, Docker tries to restart it, but without success (same message).
Now if I do a docker-compose down on the other project, the container with the 5.6.8 version works again:
f35d8b319ec0 docker.elastic.co/elasticsearch/elasticsearch:5.6.8 "/bin/bash bin/es-do…" 3 hours ago Up 12 seconds (healthy) 9300/tcp, 0.0.0.0:9203->9200/tcp
Of course, these two containers map Elasticsearch to two different host ports, 9203 and 9209.
I found something suspicious while writing this question; both containers use the same transport port:
9300/tcp, 0.0.0.0:9209->9200/tcp
9300/tcp, 0.0.0.0:9203->9200/tcp
Could the problem come from this setting? And how to fix this?
I had this same question, and @Sai Kumar's accepted answer was not working for me initially. I finally stumbled across ElasticSearch on docker - 2nd instance kills the first instance, which prompted me to override the memory defaults, preventing the respective Elasticsearch instances from each gobbling up 2GB of RAM by default.
Full sequence that worked for me:
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.3.2
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.4.1
docker run -d --restart=always --name elasticsearch1 -p 9201:9200 -p 9301:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:6.3.2
docker run -d --restart=always --name elasticsearch2 -p 9202:9200 -p 9302:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:6.4.1
I'm now able to verify both containers running at http://localhost:9201/ and http://localhost:9202/ respectively.
EDIT: I was able to achieve the same result by simply going into Docker Desktop's Settings > Resources area and upping the memory (in my case from 2G to 4G).
So evidently the same symptom originally posted by @Coil above (the endless crash-restart ping-pong between two Elasticsearch container instances) can also be the result of memory settings, rather than port mappings or other settings in your docker run command.
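This also matches the exit code in the question: Exited (137) is 128 + 9, i.e. the process was killed with SIGKILL, which in Docker is most often the out-of-memory killer. A quick check (container ID taken from the question's docker ps output):
docker inspect -f '{{.State.OOMKilled}}' f35d8b319ec0
If it prints true, the container died from memory pressure, and heap or host-memory limits are the right knob rather than the port mappings.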
Change the binding of the TCP ports in the docker run commands:
docker run -d --name elasticsearch1 --net somenetwork -p 9201:9200 -p 9301:9300 -e "discovery.type=single-node" elasticsearch:tag
docker run -d --name elasticsearch2 --net somenetwork -p 9202:9200 -p 9302:9300 -e "discovery.type=single-node" elasticsearch:tag
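To confirm the two containers no longer collide on a host port, you can list the published ports of everything running (a quick check, not part of the original answer):
docker ps --format '{{.Names}}\t{{.Ports}}'
Each container needs distinct host-side ports for both 9200 (HTTP) and 9300 (transport); the container-side ports can stay the same.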

Docker containers won't start again after being stopped

I'm trying to launch a GitLab or Gitea Docker container on my QNAP NAS (Container Station) and, for some reason, when I restart the container, it won't start back up because files seem to be lost.
For example, for GitLab it gives me errors saying runsvdir-start and gitlab-ctl don't exist. For Gitea it's the s6-supervise file.
Now I am launching the container like this, just to keep it simple:
docker run -d --privileged --restart always gitea/gitea:latest
A simple docker stop .... and docker start .... breaks it. How do I troubleshoot something like this?
QNAP has sent this issue to R&D and they were able to replicate it. It's a bug and will probably be fixed in a new Container Station update.
It's now fixed in QTS 4.3.6.20190906 and later.
It is normal to lose your data if you launch just:
docker run -d --privileged --restart always gitea/gitea:latest
You should use a volume to share a folder between your host and the container. Note that docker run -v requires an absolute host path (or a named volume), so for example:
docker run -d --privileged -v "$(pwd)/gitea":/data -p 3000:3000 -p 222:22 --restart always gitea/gitea:latest
Or use docker-compose.yml (see the official docs).
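For reference, a minimal docker-compose.yml along the lines of the one in the Gitea docs (host paths and ports are assumptions; adjust them to your NAS layout):
version: "3"
services:
  gitea:
    image: gitea/gitea:latest
    restart: always
    volumes:
      - ./gitea:/data
    ports:
      - "3000:3000"
      - "222:22"
Unlike docker run -v, Compose resolves the relative ./gitea path against the directory containing the compose file, so the bind mount works as written.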

Docker zookeeper image unable to connect to sheepkiller/kafka-manager image

I am using two images: sheepkiller/kafka-manager (a tool from Yahoo Inc.; the image was made by someone with a weird sense of humor, but it has good reviews) and zookeeper.
I start ZooKeeper
docker run -it --restart always -d zookeeper
then try to start Kafka Manager:
docker run -it --rm -p 9000:9000 -e ZK_HOSTS="your-zk.domain:2181" -e APPLICATION_SECRET=letmein sheepkiller/kafka-manager
Document says:
(if you don't define ZK_HOSTS, default value has been set to "localhost:2181")
Error:
Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@7bf272d3
[info] o.a.z.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
[info] k.m.a.KafkaManagerActor - zk=localhost:2181
[info] k.m.a.KafkaManagerActor - baseZkPath=/kafka-manager
[warn] o.a.z.ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
I am using Docker version 17.12.0-ce, build c97c6d6, on Windows 10. I have tried several different things but was unsuccessful. I am assuming there is an issue with the ports, the zookeeper config file, or the sheepkiller/kafka-manager Dockerfile, but I am not sure how to change these images after I have already pulled them, if that really is the case.
The following should work fine; putting both containers on the same user-defined network lets Kafka Manager resolve the zookeeper container by name, which is why ZK_HOSTS can point at zookeeper:2181:
docker network create zookeeper-net
docker run -it --restart always -p 2181:2181 --network zookeeper-net --name zookeeper -d zookeeper
docker run -it --rm -p 9000:9000 -e ZK_HOSTS="zookeeper:2181" -e APPLICATION_SECRET=letmein sheepkiller/kafka-manager
Update:
There is also a compose file to set up everything. I suggest you use that.
docker-compose up -d
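If you would rather write the compose file yourself than use the one shipped with the image, a minimal sketch equivalent to the two docker run commands above (service names are assumptions):
version: "2"
services:
  zookeeper:
    image: zookeeper
    restart: always
    ports:
      - "2181:2181"
  kafka-manager:
    image: sheepkiller/kafka-manager
    ports:
      - "9000:9000"
    environment:
      ZK_HOSTS: zookeeper:2181
      APPLICATION_SECRET: letmein
    depends_on:
      - zookeeper
Compose puts both services on a shared default network, giving the same name-based resolution as the manual docker network create above.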
