ElasticSearch on Docker - 2nd instance kills the first instance

I'm trying to run multiple versions of ElasticSearch at the same time, which should be easy. Here are my commands:
docker run -d --rm -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.3.3
docker run -d --rm -p 9251:9200 -p 9351:9300 --name es_5_4_3_integration -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.4.3
The first container starts up fine. The second container starts, but at the cost of killing the first one. If I run it without -d, I don't get any output explaining why the container stopped.

By default, ES on Docker tries to claim 2 GB of memory, so two containers were trying to take 4 GB, which my machine didn't have.
The solution: limit the amount of memory each ES instance takes to 200 MB using the following switch: -e ES_JAVA_OPTS="-Xms200m -Xmx200m"
Full, working commands for four concurrent containers:
docker run -d --rm -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.3.3
docker run -d --rm -p 9251:9200 -p 9351:9300 --name es_5_4_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.4.3
docker run -d --rm -p 9252:9200 -p 9352:9300 --name es_5_5_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.5.3
docker run -d --rm -p 9253:9200 -p 9353:9300 --name es_5_6_4_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.6.4
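To sanity-check that all four instances stay up, you can query each mapped HTTP port; a quick sketch (the grep just pulls the version number out of each JSON response):
for port in 9250 9251 9252 9253; do curl -s "localhost:$port" | grep '"number"'; done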
Thank you to @Val, who really answered this question in the comments.

If this is a lack-of-memory problem, you can check whether your container was OOM-killed.
First, check whether the exit code of the container is 137 (128 + 9, i.e. the container received a SIGKILL).
You can test it with docker ps -a or
docker inspect --format='{{.State.ExitCode}}' $INSTANCE_ID
Then you can check the state of the container with:
docker inspect --format='{{.State.OOMKilled}}' $INSTANCE_ID
If it returns true, it was an OOM problem.
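Both checks can also be combined into a single inspect call, for example:
docker inspect --format='ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' $INSTANCE_ID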
Further details at https://docs.docker.com/engine/reference/run/#user-memory-constraints.
Extract:
By default, kernel kills processes in a container if an out-of-memory (OOM) error occurs. To change this behaviour, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also set the -m/--memory option. If the -m flag is not set, this can result in the host running out of memory and require killing the host's system processes to free memory.
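Following that advice, if you also want a hard cap at the container level, you can pair -m with the JVM heap settings from above (a sketch; 512m is an arbitrary amount of headroom over the 200m heap):
docker run -d --rm -m 512m -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.3.3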

Related

How to pass erlang.cookie in "docker run" after RABBITMQ_ERLANG_COOKIE got deprecated

I want to start three RabbitMQ containers that will be joined together in a cluster. I want to keep it simple and not define complex Dockerfiles with specific volumes.
This is what I am doing right now:
docker network create rabbits
docker run -d --rm --net rabbits --hostname rabbit-1 --name rabbit-1 -p 8081:15672 -e RABBITMQ_ERLANG_COOKIE=ASDF rabbitmq:3.8-management
docker run -d --rm --net rabbits --hostname rabbit-2 --name rabbit-2 -p 8082:15672 -e RABBITMQ_ERLANG_COOKIE=ASDF rabbitmq:3.8-management
docker run -d --rm --net rabbits --hostname rabbit-3 --name rabbit-3 -p 8083:15672 -e RABBITMQ_ERLANG_COOKIE=ASDF rabbitmq:3.8-management
When I then try to tell the nodes to join each other with the following commands, I get an error message:
docker exec -it rabbit-2 rabbitmqctl stop_app
docker exec -it rabbit-2 rabbitmqctl reset
docker exec -it rabbit-2 rabbitmqctl join_cluster rabbit#rabbit-1
docker exec -it rabbit-2 rabbitmqctl start_app
docker exec -it rabbit-2 rabbitmqctl cluster_status
This results in:
RABBITMQ_ERLANG_COOKIE env variable support is deprecated and will be REMOVED in a future version. Use the $HOME/.erlang.cookie file or the --erlang-cookie switch instead.
However, I do not know how to pass this switch. When I add it to the docker run command, it does not work. So I thought maybe to add it after the join_cluster command, but by then the cookie is already set.
How do I need to change the docker run command?
In response to your and other questions about RABBITMQ_ERLANG_COOKIE, I opened this issue:
https://github.com/rabbitmq/rabbitmq-server/issues/7262
Currently you should use the environment variable and disregard the warning.
The best practice is to use docker compose and your own image based on the official RabbitMQ images:
https://github.com/lukebakken/docker-rabbitmq-cluster/blob/main/docker-compose.yml
https://github.com/lukebakken/docker-rabbitmq-cluster/blob/main/rmq/Dockerfile
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
The RABBITMQ_ERLANG_COOKIE environment variable is deprecated in recent RabbitMQ versions. For now, you can still set the Erlang cookie value by using the -e option in the docker run command to set RABBITMQ_ERLANG_COOKIE to your desired value. Here's an example:
docker run -d --name rabbitmq -e RABBITMQ_ERLANG_COOKIE='your_cookie_value' rabbitmq:3
Alternatively, you can store the Erlang cookie in a file and mount it as a volume in your container. For example:
# Create a file named erlang.cookie with your desired cookie value
echo 'your_cookie_value' > erlang.cookie
# Start the RabbitMQ container, mounting the erlang.cookie file
docker run -d --name rabbitmq -v $(pwd)/erlang.cookie:/var/lib/rabbitmq/.erlang.cookie rabbitmq:3
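Applied to the original three-node setup, a minimal sketch of the file-based approach is to share one cookie file across all nodes. Note that RabbitMQ requires the cookie file to be readable only by its owner, so the chmod matters, and depending on your host you may also need to fix its ownership inside the container:
echo 'ASDF' > .erlang.cookie
chmod 600 .erlang.cookie
docker run -d --rm --net rabbits --hostname rabbit-1 --name rabbit-1 -p 8081:15672 -v $(pwd)/.erlang.cookie:/var/lib/rabbitmq/.erlang.cookie rabbitmq:3.8-management
(and the same -v mount for rabbit-2 and rabbit-3)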

How to have 2 containers connect to another container using TCP in a Docker network

I have this right now:
docker network rm cprev || echo;
docker network create cprev || echo;
docker run --rm -d -p '3046:3046' \
--net=cprev --name 'cprev-server' cprev-server
docker run --rm -d -p '3046:3046' \
-e cprev_user_uuid=111 --net=cprev --name 'cprev-agent-1' cprev-agent
docker run --rm -d -p '3046:3046' \
-e cprev_user_uuid=222 --net=cprev --name 'cprev-agent-2' cprev-agent
Basically, the two cprev agents are supposed to connect to the cprev-server using TCP. The problem is I am getting this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint cprev-agent-1 (6e65bccf74852f1208b32f627dd0c05b3b6f9e5e7f5611adfb04504ca85a2c11): Bind for 0.0.0.0:3046 failed: port is already allocated.
I am sure it's a simple fix, but frankly I don't know how to allow two-way traffic from the two agent containers without using the same port.
So this worked (using --network=host), but I am wondering how I can create a custom network that doesn't interfere with the host network:
docker network create cprev; # unused now
docker run --rm -d -e cprev_host='0.0.0.0' \
--network=host --name 'cprev-server' "cprev-server:$curr_uuid"
docker run --rm -d -e cprev_host='0.0.0.0' \
-e cprev_user_uuid=111 --network=host --name 'cprev-agent-1' "cprev-agent:$curr_uuid"
docker run --rm -d -e cprev_host='0.0.0.0' \
-e cprev_user_uuid=222 --network=host --name 'cprev-agent-2' "cprev-agent:$curr_uuid"
So is there any way to get this to work using my custom Docker network "cprev"?
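One possible sketch that keeps the custom network: on a user-defined bridge network, containers can reach each other by container name, so only the server actually needs a published port; the agents can dial cprev-server:3046 directly. This assumes cprev_host is the variable your agent image uses to locate the server; adjust to match your images:
docker network create cprev
docker run --rm -d -p 3046:3046 --net=cprev --name cprev-server cprev-server
docker run --rm -d -e cprev_host=cprev-server -e cprev_user_uuid=111 --net=cprev --name cprev-agent-1 cprev-agent
docker run --rm -d -e cprev_host=cprev-server -e cprev_user_uuid=222 --net=cprev --name cprev-agent-2 cprev-agent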

Can't start dockerized event store with all projections running

I'm trying to run Event Store using Docker on Windows, but for some reason my projections start in the stopped state.
Here is what I'm executing
docker run --name eventstore-node -p 2113:2113 -p 1113:1113 --run-projections=ALL --start-standard-projections=TRUE eventstore/eventstore
I also tried running it with environment variables:
docker run --name eventstore-node -p 2113:2113 -p 1113:1113 -e EVENTSTORE_RUN_PROJECTIONS=ALL -e EVENTSTORE_START_STANDARD_PROJECTIONS=TRUE eventstore/eventstore
This is how my projections are shown after running the container:
[screenshot of the EventStore admin UI showing the projections stopped]
Does anyone have a clue what is going on?
The commands:
docker run --name eventstore-node -p 2113:2113 -p 1113:1113 eventstore/eventstore --run-projections=ALL --start-standard-projections=TRUE
docker run --name eventstore-node -p 2113:2113 -p 1113:1113 eventstore/eventstore -e EVENTSTORE_RUN_PROJECTIONS=ALL -e EVENTSTORE_START_STANDARD_PROJECTIONS=TRUE
are both not the right shape.
See the documentation for the docker image.
https://hub.docker.com/r/eventstore/eventstore/
Example:
docker run -it -p 2113:2113 -p 1113:1113 -e EVENTSTORE_RUN_PROJECTIONS=ALL eventstore/eventstore
Well, for some reason, the word ALL should be written as "All", not with capital L's. Now it's working for me.
Thanks
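For reference, a full command reflecting that casing fix might look like this (a sketch; accepted option values can vary between image versions):
docker run -it -p 2113:2113 -p 1113:1113 -e EVENTSTORE_RUN_PROJECTIONS=All -e EVENTSTORE_START_STANDARD_PROJECTIONS=True eventstore/eventstore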

Docker mounted volume data getting wiped after restarting the server

I am experimenting with Docker. I ran the Kong Docker image and linked it with a Cassandra database, which is mounted to the folder /data/api, but whenever I restart the server, the mounted volume is gone and all the data in the DB is lost.
Here is the command I am using:
docker run -d --name kong-database -p 9042:9042 -v /data/api:/var/lib/cassandra cassandra:2.2
After I run the DB image, I run Kong:
docker run --privileged -d --name kong --link kong-database:kong-database -e "KONG_DATABASE=cassandra" -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" -e "KONG_PG_HOST=kong-database" -p 80:8000 -p 443:8443 -p 7001:7001 -p 7946:7946 -p 7946:7946/udp kong
My Kong entries are there, mounted to the folder /data/api, but when I restarted my server, I couldn't see the folder /data/api.
Because of this I am stuck in my work. Can anyone help me with this?
Thanks in advance,
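One possibility to rule out, as a hedged sketch rather than a confirmed fix: if /data/api sits on storage that does not survive a reboot (a temporary disk on some cloud hosts, for example), a named Docker volume managed by the daemon will persist across container and host restarts:
docker volume create kong-db-data
docker run -d --name kong-database -p 9042:9042 -v kong-db-data:/var/lib/cassandra cassandra:2.2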

Docker: How to run a service and a terminal in one command?

I'm running an Apache server like this:
docker run -d -p 80:80 php:apache /usr/sbin/apache2ctl -D FOREGROUND
Then I determine the name of the container with
docker ps
and execute an interactive shell on the container with
docker exec -ti hungry_fermi bash
It works well, but I would like to do the same in one command. I've tried:
docker run -ti -d -p 80:80 php:apache /bin/bash -c 'bash; apache2ctl -D FOREGROUND'
The problem is that I don't get a terminal and the command returns immediately.
You're trying this:
docker run -ti -d -p 80:80 php:apache \
/bin/bash -c 'bash; apache2ctl -D FOREGROUND'
There are several problems here. First, you're using the -d command line option, which asks the docker client to detach and leave the container running. You will never get an interactive shell when using -d.
Secondly, your command -- bash; apache2ctl -D FOREGROUND -- would run bash, wait for bash to exit, then run httpd. You can instead do something like this:
docker run -ti -p 80:80 php:apache \
/bin/bash -c 'apachectl start; bash'
This would start Apache in the background (because there is no -D FOREGROUND), and then start bash...but I'm not really clear why you would want to do this, because now if you were to exit your shell the container would exit as well (taking Apache with it).
I think you are much better off simply starting Apache the way you are now, and using docker exec to get a shell inside the container.
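If "one command" just means one line at the prompt, you can also chain the two steps; a sketch (the container name web is arbitrary, and the image's default command already runs Apache in the foreground):
docker run -d -p 80:80 --name web php:apache && docker exec -it web bash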
