I created three ZooKeeper nodes in Docker with the following commands:
docker run -d -p 2181:2181 --name zookeeper_node1 --privileged --restart always --network zoonet --ip 172.18.0.2 -v C:/zookeeper/zk_node1/volumes/data:/data -v C:/zookeeper/zk_node1/volumes/datalog:/datalog -v C:/zookeeper/zk_node1/volumes/logs:/logs -e ZOO_MY_ID=1 -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888;2181 server.2=172.18.0.3:2888:3888;2181 server.3=172.18.0.4:2888:3888;2181" 36c607e7b14d
docker run -d -p 2182:2181 --name zookeeper_node2 --privileged --restart always --network zoonet --ip 172.18.0.3 -v C:/zookeeper/zk_node2/volumes/data:/data -v C:/zookeeper/zk_node2/volumes/datalog:/datalog -v C:/zookeeper/zk_node2/volumes/logs:/logs -e ZOO_MY_ID=2 -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888;2181 server.2=172.18.0.3:2888:3888;2181 server.3=172.18.0.4:2888:3888;2181" 36c607e7b14d
docker run -d -p 2183:2181 --name zookeeper_node3 --privileged --restart always --network zoonet --ip 172.18.0.4 -v C:/zookeeper/zk_node3/volumes/data:/data -v C:/zookeeper/zk_node3/volumes/datalog:/datalog -v C:/zookeeper/zk_node3/volumes/logs:/logs -e ZOO_MY_ID=3 -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888;2181 server.2=172.18.0.3:2888:3888;2181 server.3=172.18.0.4:2888:3888;2181" 36c607e7b14d
The three ZooKeeper nodes above are on a Docker network called zoonet.
I changed the config files and started a ClickHouse node on zoonet (an existing Docker network). I used the command below to start the ClickHouse node.
docker run -d -p 8125:8123 -p 9001:9000 -p 9019:9009 --name=ck_node-1 --privileged --network zoonet --ip 172.18.0.5 --ulimit nofile=262144:262144 -v C:/some-clickhouse-server/ck-node-1/data:/var/lib/clickhouse:rw -v C:/some-clickhouse-server/ck-node-1/conf:/etc/clickhouse-server:rw -v C:/some-clickhouse-server/ck-node-1/log:/var/log/clickhouse-server:rw d846490c0466
The container started and then immediately exited.
Can someone please help me bring the ClickHouse node into zoonet?
Thanks in advance!
Don't bind-mount the ClickHouse data folder (-v C:/some-clickhouse-server/ck-node-1/data:/var/lib/clickhouse:rw).
Mount only the logs:
-v C:/some-clickhouse-server/ck-node-1/logs:/var/log/clickhouse-server/:rw
On Windows 10 + WSL2 (I hope you are using the latest Docker Desktop), the bind mount is created with 0777 permissions and the wrong file and folder owner; clickhouse-server checks this on startup and fails.
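Following that advice, the corrected run command would look roughly like this (a sketch built from the paths and image ID in the question, not a verified fix; data is left inside the container, or you could use a named volume instead, which sidesteps the NTFS permission problem):

```shell
# Same as the original command, but without the bind mount on /var/lib/clickhouse.
# A named volume (e.g. -v ck_node1_data:/var/lib/clickhouse) would also work and
# keeps the data out of the Windows filesystem entirely.
docker run -d -p 8125:8123 -p 9001:9000 -p 9019:9009 \
  --name=ck_node-1 --privileged --network zoonet --ip 172.18.0.5 \
  --ulimit nofile=262144:262144 \
  -v C:/some-clickhouse-server/ck-node-1/conf:/etc/clickhouse-server:rw \
  -v C:/some-clickhouse-server/ck-node-1/logs:/var/log/clickhouse-server:rw \
  d846490c0466

# If the container still exits, the reason is usually visible in its output:
docker logs ck_node-1
```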
Working with Docker Desktop on Windows.
docker command from the PowerShell:
docker run -p 80:8080 -d --name demo1 -e SWAGGER_JSON=/custom/swagger.json -v a-data-volume:/custom swaggerapi/swagger-ui
docker command from the Git Bash:
docker run -p 80:8080 -d --name demo2 -e SWAGGER_JSON=/custom/swagger.json -v a-data-volume:/custom swaggerapi/swagger-ui
Issue: the environment variable SWAGGER_JSON is not the same in both containers even though it is set the same way in each command. demo1 has the correct value; demo2 does not.
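This is most likely MSYS path conversion: Git Bash (MSYS2-based) rewrites arguments that look like POSIX paths, so /custom/swagger.json reaches Docker as something like C:/Program Files/Git/custom/swagger.json. Two common workarounds (sketches, not the only options):

```shell
# 1) Disable path conversion for this one invocation (Git for Windows
#    honors the MSYS_NO_PATHCONV environment variable):
MSYS_NO_PATHCONV=1 docker run -p 80:8080 -d --name demo2 \
  -e SWAGGER_JSON=/custom/swagger.json \
  -v a-data-volume:/custom swaggerapi/swagger-ui

# 2) Or prefix the path with a double slash, which MSYS leaves
#    untouched and the container treats as a normal absolute path:
docker run -p 80:8080 -d --name demo2 \
  -e SWAGGER_JSON=//custom/swagger.json \
  -v a-data-volume:/custom swaggerapi/swagger-ui
```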
Still pretty new to Docker and I could use some help. I am trying to set up some Docker containers, specifically the Redash application. Here is my working code:
sudo docker network create redash_default
sudo docker container run --name redis --network redash_default -d redis:4.0-alpine
sudo docker container run --name postgres --network redash_default --env-file /opt/redash/env -v /opt/redash/postgres-data -d postgres:9.5.21-alpine
sudo docker container run --rm -p 5000:5000 -e REDASH_WEB_WORKERS=4 --name server --network redash_default --env-file env redash/redash:7.0.0.b18042 create_db
sudo docker container run -d --restart always -p 5000:5000 -e REDASH_WEB_WORKERS=4 --name server --network redash_default --env-file env redash/redash:7.0.0.b18042
sudo docker container run -d --restart always -e QUEUES=celery -e WORKERS_COUNT=1 --name scheduler --network redash_default --env-file env redash/redash:7.0.0.b18042
sudo docker container run -d --restart always -e QUEUES=scheduled_queries,schemas -e WORKERS_COUNT=1 --name scheduled_worker --network redash_default --env-file env redash/redash:7.0.0.b18042
sudo docker container run -d --restart always -e QUEUES=queries -e WORKERS_COUNT=2 --name adhoc_worker --network redash_default --env-file env redash/redash:7.0.0.b18042
sudo docker container run -d --restart always -p 80:80 --link server:redash --name nginx --network redash_default redash/nginx:latest
However, I am only allowed to run the containers in read-only mode, which breaks the application. Only Redis seemed to tolerate read-only mode, so I thought about mounting an external persistent disk on the instance at /dockervol/, where the apps can be given read-write access, but I can't seem to get that to work either. Here is something I have tried, for example with nginx:
docker container run -p 80:80 --mount type=bind,source=/dockervol/nginx,target=/etc/nginx --read-only --link server:redash --name nginx --network redash_default redash/nginx:latest
I get an error:
nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
My question is: how do I get all the containers to work in read-only mode with a persistent volume, without breaking the application?
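For nginx specifically, a common pattern is to combine --read-only with tmpfs mounts for the few paths nginx must write to, plus a bind mount that actually contains a full config (the [emerg] error above suggests /dockervol/nginx had no nginx.conf in it). A sketch, assuming your Docker version supports --tmpfs:

```shell
# First make sure the bind source holds a complete config, e.g. copy it
# out of a throwaway container once:
#   docker container create --name tmp redash/nginx:latest
#   docker cp tmp:/etc/nginx/. /dockervol/nginx/
#   docker rm tmp

docker container run -d -p 80:80 \
  --read-only \
  --tmpfs /var/run \
  --tmpfs /var/cache/nginx \
  --mount type=bind,source=/dockervol/nginx,target=/etc/nginx,readonly \
  --link server:redash --name nginx --network redash_default \
  redash/nginx:latest
```

The tmpfs mounts give nginx writable scratch space (PID file, proxy cache) without making the container filesystem writable.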
I am running a docker run command to spawn a new container. The command I gave:
docker run -h 'activemq1' --net bridge -m 20g --env-file /opt/dockerenv/activemq-1/env.txt -p 8161:8161 -p 61613:61613 -p 61614:61614 -p 61616:61616 -p 1616:1616 -p 5672:5672 -p 1883:1883 -v /opt/dckexchange:/exchange -v /etc/yum.repos.d:/etc/yum.repos.d -v /mnt/data/volumes/activemq1/data:/usr/share/activemq/data --log-opt max-size=1g --log-opt max-file=2 --name activemq-dev mydocker:5000/activemq/activemq:latest
It should run without error, but it throws unknown flag: --log-opts. It runs fine if I remove all of the --log-opt flags.
Docker version: 1.13.1
Any ideas?
Maybe you're missing the log driver, e.g.:
--log-driver json-file --log-opt max-size=1g --log-opt max-file=2
I think you need this unless you've specified a default in /etc/docker/daemon.json
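For reference, a default log driver can also be set once in /etc/docker/daemon.json (a sketch; the Docker daemon has to be restarted after changing it):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "1g",
    "max-file": "2"
  }
}
```

With that default in place, individual docker run commands no longer need the --log-driver/--log-opt flags.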
I want to know if I can link Docker containers to a running container. I am running this command on a server:
docker run -d -u jenkins --name appdev-jenkins --network=host --memory="8g" -p 80:8080 -p 443:443 -p 50000:50000 -v "/opt/jenkins":/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
I want to be able to link another Jenkins instance as an agent to the original Jenkins instance. Is this possible?
Have you tried the following?
docker run -d --link appdev-jenkins --name second-jenkins [other params]
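Note that --link is a legacy feature, and it won't help here because the original container runs with --network=host. An alternative sketch is to put the controller on a user-defined bridge network and run an inbound (JNLP) agent alongside it; the network name, agent name, and secret below are assumptions you would replace with values from your Jenkins UI:

```shell
# A user-defined network both containers can share (instead of --network=host)
docker network create jenkins-net

# Run an inbound agent on the same network; the agent name and secret
# come from the agent node you create in the Jenkins UI
docker run -d --name jenkins-agent --network jenkins-net \
  -e JENKINS_URL=http://appdev-jenkins:8080 \
  -e JENKINS_AGENT_NAME=agent1 \
  -e JENKINS_SECRET=<secret-from-jenkins-ui> \
  jenkins/inbound-agent
```

On a user-defined network, containers resolve each other by name, so the agent can reach the controller as appdev-jenkins without any --link.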
I have upgraded Windows 10 from Home to Pro and installed Docker Toolbox on my machine.
I am trying to run Apache Kafka in Docker using the following command:
docker run --rm -it \
-p 2181:2181 -p 3030:3030 -p 8081:8081 \
-p 8082:8082 -p 8083:8083 -p 9092:9092 \
-e ADV_HOST=192.168.99.100 \
landoop/fast_data_dev
But when I run the above command, it gives me the following error:
Unable to find image 'landoop/fast_data_dev:latest' locally
F:...\Docker Toolbox\docker.exe: Error response from daemon: repository landoop/fast_data_dev not found: does not exist or no pull access.
See: 'F:...\Docker Toolbox\docker.exe run --help'.
Please let me know what the problem is here.
The image name appears to be incorrect. I think you are looking for https://hub.docker.com/r/landoop/fast-data-dev/. The correct command should be (notice the - instead of _):
docker run --rm -it -p 2181:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9092:9092 -e ADV_HOST=192.168.99.100 landoop/fast-data-dev