Docker mounted volume data getting wiped after restarting the server

I am experimenting with Docker. I ran the Kong Docker image and linked it to a Cassandra database whose data directory is bind-mounted to the host folder /data/api, but whenever I restart the server the mounted volume is gone and all the data in the database is lost.
Here is the command I am using:
docker run -d --name kong-database -p 9042:9042 -v /data/api:/var/lib/cassandra cassandra:2.2
After running the database image, I start Kong:
docker run --privileged -d --name kong --link kong-database:kong-database -e "KONG_DATABASE=cassandra" -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" -e "KONG_PG_HOST=kong-database" -p 80:8000 -p 443:8443 -p 7001:7001 -p 7946:7946 -p 7946:7946/udp kong
My Kong entries are stored under the mounted folder /data/api, but when I restarted my server I couldn't see the folder /data/api.
Because of this I am stuck in my work. Can anyone help me with this?
Thanks in advance.
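A quick way to confirm whether the bind mount is still attached to the restarted container, assuming the container name above, is to inspect its mounts:
docker inspect --format '{{ json .Mounts }}' kong-database
If the mount is listed but /data/api is empty on the host, then the host directory itself, rather than the container, is likely being wiped on reboot.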

Related

I am trying to mount an existing database from the host, named mydb, into a MySQL container. Here is what I've tried:
root@ip-172-31-16-134:/var/lib/mysql# sudo docker run -v mysql-data:/var/lib/mysql/mydb/ --name mysql_web -e MYSQL_ROOT_PASSWORD=12345 -p 3306:3306 -d mysql:5.7
3c0a8eb588c0dd0d0b7b72a727912744f44e744e700de7efc64a0c4c1f651685
root@ip-172-31-16-134:/var/lib/mysql# docker exec -it mysql_web bash
Error response from daemon: Container 3c0a8eb588c0dd0d0b7b72a727912744f44e744e700de7efc64a0c4c1f651685 is not running
But it works when I change where to mount:
root@ip-172-31-16-134:/var/lib/mysql# sudo docker run -v mysql-data:/var/lib/mydb/ --name mysql_web -e MYSQL_ROOT_PASSWORD=12345 -p 3306:3306 -d mysql:5.7
06233eeb864c32ff16d6542e632a4da3ff6dfbd4a30fcc9aac8086dfc1245948
root@ip-172-31-16-134:/var/lib/mysql# docker exec -it mysql_web bash
root@06233eeb864c:/# cd /var/lib/mydb
root@06233eeb864c:/var/lib/mydb# exit
exit
It seems it can't mount onto that specific folder, and I don't know why. I just want the two databases, the one on the host and the one in the container, to stay in sync by themselves instead of loading dump.sql every time I start a new container.
Any suggestion would help, thanks.
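The error message the answer below refers to can be recovered from the stopped container's logs:
docker logs mysql_web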
This is a known issue for the error you got: [ERROR] --initialize specified but the data directory has files in it. Aborting.
What this error means is explained here.
Try a few things:
Adding --ignore-db-dir=lost+found, as mentioned here.
Providing the exact path of the data directory on the host and mounting it onto the container path /var/lib/mysql, like this: docker run -v /path/to/mysql-data:/var/lib/mysql --name mysql_web -e MYSQL_ROOT_PASSWORD=12345 -p 3306:3306 -d mysql:5.7, as mentioned here.
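A sketch combining both suggestions (the host path is illustrative; --ignore-db-dir is passed through to mysqld, and applies to MySQL 5.7, not 8.0):
docker run -v /path/to/mysql-data:/var/lib/mysql --name mysql_web -e MYSQL_ROOT_PASSWORD=12345 -p 3306:3306 -d mysql:5.7 --ignore-db-dir=lost+found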
Hope this helps.

Can we run Docker inside a Docker container which is running in a VirtualBox VM of Ubuntu 18.04?

I want to run Docker inside another Docker container. My main container runs in a VirtualBox VM with Ubuntu 18.04, which in turn runs on my Windows 10 machine. When I try to run Docker inside the container, I get:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
How can I resolve this issue?
Yes, you can do this. Check the dind (Docker-in-Docker) page on Docker Hub for how to achieve it: https://hub.docker.com/_/docker
Your error indicates that either dockerd in the top-level container is not running, or you didn't mount docker.sock into the dependent container so it can communicate with the dockerd running in your top-level container.
I am running ElectricFlow in a Docker container in my Ubuntu VirtualBox VM using this docker command: docker run --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -i -t ecdocker/eflow-ce. Inside this container, I want to install and run Docker so that my CI/CD pipeline in ElectricFlow can access and use docker commands.
From your description, ecdocker/eflow-ce is your CI/CD solution container and you just want to use the docker command inside it, so you don't need a dind solution. You can simply access the host's Docker daemon from within the container.
Something like the following:
docker run --privileged --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -i -t ecdocker/eflow-ce
Compared to your old command:
Add --privileged.
Add -v $(which docker):/usr/bin/docker, so you can use the docker client inside the container.
Add -v /var/run/docker.sock:/var/run/docker.sock, so the client inside the container can talk to the host's Docker daemon.
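Assuming the command above, you can sanity-check the socket pass-through by listing the host's containers from inside efserver:
docker exec -it efserver docker ps
If the mount worked, this prints the containers managed by the host daemon, including efserver itself.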

Can't start dockerized Event Store with all projections running

I'm trying to run Event Store using Docker on Windows, but for some reason my projections start in the stopped state.
Here is what I'm executing:
docker run --name eventstore-node -p 2113:2113 -p 1113:1113 --run-projections=ALL --start-standard-projections=TRUE eventstore/eventstore
I also tried passing them as environment variables:
docker run --name eventstore-node -p 2113:2113 -p 1113:1113 -e EVENTSTORE_RUN_PROJECTIONS=ALL -e EVENTSTORE_START_STANDARD_PROJECTIONS=TRUE eventstore/eventstore
This is how my projections are shown after running the Docker container:
[screenshot: the Event Store admin UI showing the projections as Stopped]
Does anyone have a clue what is going on?
The commands:
docker run --name eventstore-node -p 2113:2113 -p 1113:1113 eventstore/eventstore --run-projections=ALL --start-standard-projections=TRUE
docker run --name eventstore-node -p 2113:2113 -p 1113:1113 eventstore/eventstore -e EVENTSTORE_RUN_PROJECTIONS=ALL -e EVENTSTORE_START_STANDARD_PROJECTIONS=TRUE
are both not the right shape.
See the documentation for the Docker image: https://hub.docker.com/r/eventstore/eventstore/
Example:
docker run -it -p 2113:2113 -p 1113:1113 -e EVENTSTORE_RUN_PROJECTIONS=ALL eventstore/eventstore
Well, for some reason, the value should be written as "All", not in capital letters ("ALL"). Now it's working for me.
Thanks!
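Putting the two answers together, a working invocation would presumably look like this (note the mixed-case values):
docker run -it -p 2113:2113 -p 1113:1113 -e EVENTSTORE_RUN_PROJECTIONS=All -e EVENTSTORE_START_STANDARD_PROJECTIONS=True eventstore/eventstore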

How can I save a Zeppelin notebook from a Docker container?

I am using a Docker container for Spark/Zeppelin. The Docker image was found here:
https://github.com/Gmousse/docker-zeppelin-python3
I can start an image and work with it using this command:
docker run -it -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
To be able to communicate with the host, I have mounted some paths from the host with the volume flag, like this:
docker run -it -v /cephfs:/cephfs -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
It works fine. Now, to mount the Zeppelin working directory, I added this:
docker run -it -v /cephfs:/cephfs -v my_path_on_host:/zeppelin -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
And this one does not run: with this command the container looks for a zeppelin.sh file in /zeppelin and fails.
Any idea how I can mount a local volume and still be able to save Zeppelin notebooks on the host?
Thank you for your time in advance.
It is very handy to store notebooks on the local file system, especially under version control.
You only need to mount the notebook folder, but you tried to mount the whole Zeppelin folder, so on start the container could not find the Zeppelin files.
Correct mount examples:
docker run \
-p 8080:8080 \
-v /home/user/zeppelin_notebooks:/zeppelin/notebook \
apache/zeppelin:0.8.0
docker run \
-p 8080:8080 \
--mount type=bind,source="$(pwd)"/zeppelin_notebooks,target=/zeppelin/notebook \
--rm --name zeppelin apache/zeppelin:0.8.0
For my Apache Zeppelin container hosted on Windows 10, the working directory is /opt/zeppelin and the default path for notebooks is /opt/zeppelin/notebook, so I mount my Windows paths as below. All notebooks are then saved in C:/Zeppelin/notebook:
docker run -p 8080:8080 -v C:/Zeppelin/Data/:/opt/zeppelin/Data/ -v C:/Zeppelin/notebook:/opt/zeppelin/notebook --name zeppelin apache/zeppelin:0.10.0
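If you don't need the notebooks at a specific host path, a named volume (name illustrative) is another way to persist them across container restarts:
docker run -p 8080:8080 -v zeppelin-notebooks:/zeppelin/notebook --name zeppelin apache/zeppelin:0.8.0
Docker manages the volume's location itself, which sidesteps Windows path-mapping issues, at the cost of the files not being directly visible in a host folder.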

Elasticsearch on Docker - 2nd instance kills the first instance

I'm trying to run multiple versions of Elasticsearch at the same time, which should be easy. Here are my commands:
docker run -d --rm -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.3.3
docker run -d --rm -p 9251:9200 -p 9351:9300 --name es_5_4_3_integration -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.4.3
The first container starts up great. The second container starts, but at the cost of killing the first. And if I run it without -d, I don't get any information in the UI about why the container stopped.
By default, Elasticsearch on Docker tries to take 2 GB of memory, so two containers were trying to take 4 GB, which my machine didn't have.
The solution: limit the amount of memory each Elasticsearch instance tries to take to 200 MB, using the switch -e ES_JAVA_OPTS="-Xms200m -Xmx200m".
Full, working commands for four concurrent containers:
docker run -d --rm -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.3.3
docker run -d --rm -p 9251:9200 -p 9351:9300 --name es_5_4_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.4.3
docker run -d --rm -p 9252:9200 -p 9352:9300 --name es_5_5_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.5.3
docker run -d --rm -p 9253:9200 -p 9353:9300 --name es_5_6_4_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.6.4
Thank you to @Val, who really answered this question in the comments.
If this is a lack-of-memory problem, you can check whether your container was OOM-killed.
First check whether the exit code of the container is 137 (= 128 + 9, meaning the container received a SIGKILL).
You can check it with docker ps -a or with:
docker inspect --format='{{.State.ExitCode}}' $INSTANCE_ID
Then you can check the state of the container with:
docker inspect --format='{{.State.OOMKilled}}' $INSTANCE_ID
If it returns true, it was an OOM problem.
Further details at https://docs.docker.com/engine/reference/run/#user-memory-constraints.
Extract:
By default, kernel kills processes in a container if an out-of-memory (OOM) error occurs. To change this behaviour, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also set the -m/--memory option. If the -m flag is not set, this can result in the host running out of memory and require killing the host's system processes to free memory.
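Following that extract, capping each container's memory alongside the JVM heap would look something like this (the 512m limit is illustrative):
docker run -d --rm -m 512m -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.3.3
With -m set, a runaway instance is killed by the OOM killer instead of taking down its neighbours.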
