Docker environment variable discrepancy

Working with Docker Desktop on Windows.
Docker command from PowerShell:
docker run -p 80:8080 -d --name demo1 -e SWAGGER_JSON=/custom/swagger.json -v a-data-volume:/custom swaggerapi/swagger-ui
Docker command from Git Bash:
docker run -p 80:8080 -d --name demo2 -e SWAGGER_JSON=/custom/swagger.json -v a-data-volume:/custom swaggerapi/swagger-ui
Issue: the environment variable SWAGGER_JSON is not the same in both containers, even though it is set the same way in the command. demo1 has the correct value, but demo2 does not.
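A quick way to see what each container actually received (just a diagnostic, not a fix) is to print the variable from both containers:
docker exec demo1 printenv SWAGGER_JSON
docker exec demo2 printenv SWAGGER_JSON
If demo2 shows something like C:/Program Files/Git/custom/swagger.json, the likely culprit is Git Bash's MSYS path conversion rewriting the leading /custom; in that case prefixing the command with MSYS_NO_PATHCONV=1, or writing the value as //custom/swagger.json, are commonly used workarounds (this is an assumption about the cause, not something confirmed in the question).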

Related

ClickHouse cluster setup with ZooKeeper in Docker on a Windows machine

I created three zookeeper nodes in docker with the following commands.
docker run -d -p 2181:2181 --name zookeeper_node1 --privileged --restart always --network zoonet --ip 172.18.0.2 -v C:/zookeeper/zk_node1/volumes/data:/data -v C:/zookeeper/zk_node1/volumes/datalog:/datalog -v C:/zookeeper/zk_node1/volumes/logs:/logs -e ZOO_MY_ID=1 -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888;2181 server.2=172.18.0.3:2888:3888;2181 server.3=172.18.0.4:2888:3888;2181" 36c607e7b14d
docker run -d -p 2182:2181 --name zookeeper_node2 --privileged --restart always --network zoonet --ip 172.18.0.3 -v C:/zookeeper/zk_node2/volumes/data:/data -v C:/zookeeper/zk_node2/volumes/datalog:/datalog -v C:/zookeeper/zk_node2/volumes/logs:/logs -e ZOO_MY_ID=2 -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888;2181 server.2=172.18.0.3:2888:3888;2181 server.3=172.18.0.4:2888:3888;2181" 36c607e7b14d
docker run -d -p 2183:2181 --name zookeeper_node3 --privileged --restart always --network zoonet --ip 172.18.0.4 -v C:/zookeeper/zk_node3/volumes/data:/data -v C:/zookeeper/zk_node3/volumes/datalog:/datalog -v C:/zookeeper/zk_node3/volumes/logs:/logs -e ZOO_MY_ID=3 -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888;2181 server.2=172.18.0.3:2888:3888;2181 server.3=172.18.0.4:2888:3888;2181" 36c607e7b14d
The above three ZooKeeper nodes are in a network called zoonet.
I changed the config files and started a ClickHouse node in zoonet (the existing Docker network). I used the command below to start the ClickHouse node.
docker run -d -p 8125:8123 -p 9001:9000 -p 9019:9009 --name=ck_node-1 --privileged --network zoonet --ip 172.18.0.5 --ulimit nofile=262144:262144 -v C:/some-clickhouse-server/ck-node-1/data:/var/lib/clickhouse:rw -v C:/some-clickhouse-server/ck-node-1/conf:/etc/clickhouse-server:rw -v C:/some-clickhouse-server/ck-node-1/log:/var/log/clickhouse-server:rw d846490c0466
The node started and then exited.
Can someone please help me bring the ClickHouse node into zoonet?
Thanks in advance!
Don't bind-mount the ClickHouse data folder (-v C:/some-clickhouse-server/ck-node-1/data:/var/lib/clickhouse:rw).
Mount only the logs:
-v C:/some-clickhouse-server/ck-node-1/logs:/var/log/clickhouse-server/:rw
because on Windows 10 + WSL2 (I hope you use the latest Docker Desktop) the host folder is mounted with 0777 permissions and the wrong file and folder owner; clickhouse-server checks this and fails during restart.
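Following that advice, an adjusted run command might look like the sketch below (the image ID, ports, and paths are taken from the question; the named volume ck_node1_data is an assumption, letting Docker manage the data directory instead of bind-mounting a Windows folder):
docker run -d -p 8125:8123 -p 9001:9000 -p 9019:9009 --name=ck_node-1 --privileged --network zoonet --ip 172.18.0.5 --ulimit nofile=262144:262144 -v ck_node1_data:/var/lib/clickhouse:rw -v C:/some-clickhouse-server/ck-node-1/conf:/etc/clickhouse-server:rw -v C:/some-clickhouse-server/ck-node-1/logs:/var/log/clickhouse-server/:rw d846490c0466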

docker: open /.env: no such file or directory

Goal
Action: run a command from my local machine
Result: Docker image deployed on a cloud instance
Approach
For remote deployment, I am using gcloud commands.
The command below works, but the problem is that it is not picking up the environment variables file, i.e. .env. I have this .env file in the working directory.
Command:
gcloud beta compute ssh --quiet --zone "us-west1-b" "devop-beta-persistent-2" --project "my-project" --command 'sudo docker run -p 8080:8080 -p 8443:8443 -p 50000:50000 -v ~/jenkins_data:/var/jenkins_home -v $FILE_PATH/jenkins.yaml:/var/configurations/jenkins_casc.yml --name jenkins-devkit --env-file $PWD/.env $JENKINS_IMAGE:latest'
Error: docker: open /.env: no such file or directory.
What I already tried
I have tried setting the path to:
.env
/full/path/to/.env
$PWD/.env
but still getting the same error.
If I run this command on my local machine, it works fine, i.e. it picks up the .env file:
sudo docker run -p 8080:8080 -p 8443:8443 -p 50000:50000 -v ~/jenkins_data:/var/jenkins_home -v $FILES_PATH/jenkins.yaml:/var/configurations/jenkins_casc.yml --name jenkins-devkit --env-file $PWD/.env $JENKINS_IMAGE:latest
Can anyone suggest a possible solution?
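One possibility worth ruling out (an assumption, since the question doesn't show the remote filesystem): --env-file is resolved by the Docker CLI on the remote VM, so the .env file must exist on the instance itself, not only on the local machine. A minimal sketch of copying it over first and then referencing it explicitly:
gcloud compute scp .env devop-beta-persistent-2:~/.env --zone "us-west1-b" --project "my-project"
gcloud beta compute ssh --quiet --zone "us-west1-b" "devop-beta-persistent-2" --project "my-project" --command 'sudo docker run -p 8080:8080 -p 8443:8443 -p 50000:50000 -v ~/jenkins_data:/var/jenkins_home -v $FILE_PATH/jenkins.yaml:/var/configurations/jenkins_casc.yml --name jenkins-devkit --env-file $HOME/.env $JENKINS_IMAGE:latest'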

Docker containers in read-only mode

I'm still pretty new to Docker and could use some help. I am trying to set up some Docker containers, specifically the Redash application. Here is my working code:
sudo docker network create redash_default
sudo docker container run --name redis --network redash_default -d redis:4.0-alpine
sudo docker container run --name postgres --network redash_default --env-file /opt/redash/env -v /opt/redash/postgres-data -d postgres:9.5.21-alpine
sudo docker container run --rm -p 5000:5000 -e REDASH_WEB_WORKERS=4 --name server --network redash_default --env-file env redash/redash:7.0.0.b18042 create_db
sudo docker container run -d --restart always -p 5000:5000 -e REDASH_WEB_WORKERS=4 --name server --network redash_default --env-file env redash/redash:7.0.0.b18042
sudo docker container run -d --restart always -e QUEUES=celery -e WORKERS_COUNT=1 --name scheduler --network redash_default --env-file env redash/redash:7.0.0.b18042
sudo docker container run -d --restart always -e QUEUES=scheduled_queries,schemas -e WORKERS_COUNT=1 --name scheduled_worker --network redash_default --env-file env redash/redash:7.0.0.b18042
sudo docker container run -d --restart always -e QUEUES=queries -e WORKERS_COUNT=2 --name adhoc_worker --network redash_default --env-file env redash/redash:7.0.0.b18042
sudo docker container run -d --restart always -p 80:80 --link server:redash --name nginx --network redash_default redash/nginx:latest
However, I am only allowed to run the containers in read-only mode, which breaks the application. Only Redis seemed to tolerate read-only mode, so I thought about mounting an external persistent disk on the instance at /dockervol/ where the apps can be given read-write access, but I can't seem to get that to work either. Here is something I have tried, for example with nginx:
docker container run -p 80:80 --mount type=bind,source=/dockervol/nginx,target=/etc/nginx --read-only --link server:redash --name nginx --network redash_default redash/nginx:latest
I get an error:
nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
My question is: how do I get all the containers to work in read-only mode with a persistent volume, without breaking the application?
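On the nginx error specifically: bind-mounting an empty host directory over /etc/nginx hides the image's own nginx.conf, which is what the "No such file or directory" message points to. A rough sketch of one approach, assuming /dockervol/nginx starts out empty and that the image only needs writable cache and pid paths at runtime (neither assumption is verified against the redash/nginx image):
# copy the image's config out to the host once, then mount it back read-only
docker create --name nginx-seed redash/nginx:latest
docker cp nginx-seed:/etc/nginx/. /dockervol/nginx/
docker rm nginx-seed
# run read-only, giving nginx tmpfs mounts for the paths it writes to
docker container run -d -p 80:80 --read-only --tmpfs /var/cache/nginx --tmpfs /var/run --mount type=bind,source=/dockervol/nginx,target=/etc/nginx,readonly --link server:redash --name nginx --network redash_default redash/nginx:latest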

Link containers to a running container

I want to know if I can link Docker containers to a running container. I am running this command on a server:
docker run -d -u jenkins --name appdev-jenkins --network=host --memory="8g" -p 80:8080 -p 443:443 -p 50000:50000 -v "/opt/jenkins":/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
I want to be able to link another Jenkins instance as an agent to the original Jenkins instance. Is this possible?
Have you tried the following?
docker run -d --link appdev-jenkins --name second-jenkins [other params]
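One caveat worth noting (an observation, not from the thread): --link only works on Docker's default bridge network, and the original container was started with --network=host, so linking may not apply here. If the goal is a build agent, a commonly used alternative is an inbound agent container that simply points at the controller's URL; a sketch, where the agent name and secret are placeholders you would take from the node's page in the Jenkins UI:
docker run -d --name second-jenkins -e JENKINS_URL=http://<host-ip>:8080 -e JENKINS_AGENT_NAME=agent1 -e JENKINS_SECRET=<secret-from-jenkins> jenkins/inbound-agent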

Restcomm RVD is not running from docker container

I am running the Restcomm Docker container in an AMI. I created the container with the default settings.
Using the default values:
docker run --name=restcomm -d -p 8080:8080 -p 5080:5080 -p 5082:5082 -p 5080:5080/udp -p 65000-65535:65000-65535/udp gvagenas/restcomm:7.3.0
I have not been able to access the RVD at
http://x.x.x.x:8080
Can you try the command mentioned in http://www.telestax.com/rapid-webrtc-application-development-with-restcomm-and-docker/, i.e.
docker run --name=restcomm -d -e STATIC_ADDRESS="YOUR_HOST_IP_ADDRESS_HERE" -p 8080:8080 -p 5080:5080 -p 5082:5082 -p 5080:5080/udp -p 65000-65535:65000-65535/udp gvagenas/restcomm
