I am running a ZooKeeper image on Docker via docker-compose.
My files are:
--repo
----script.zk (contains ZooKeeper commands such as `create`)
----docker-compose.yaml
----Dockerfile.zookeeper
----zoo.cfg
docker-compose.yaml contains names and properties:
services:
  zoo1:
    restart: always
    hostname: zoo1
    container_name: zoo1
    build:
      context: .
      dockerfile: Dockerfile.zookeeper
    volumes:
      - $PWD/kyc_zookeeper.cfg:/conf/zoo.cfg
    ports:
      - 2181:2181
    environment:
      .... and two more nodes
Dockerfile.zookeeper currently contains only the base image:
FROM zookeeper:3.4
Locally I can run zkCli.sh and communicate with ZooKeeper, but I wish to do it automatically when Dockerfile.zookeeper runs.
Do I need to create a container with a VM, install ZooKeeper, and copy zkCli.sh into the container in order to run commands?
Or is it possible to run ZooKeeper commands via the Dockerfile?
I've tried attaching to the container and using CMD in the Dockerfile, but it's not working.
Any idea how I can do it?
Thank you
To resolve this I wrote a bash script that takes a ZooKeeper host and a ZooKeeper script (a file with ZooKeeper commands, one per line)
and runs all the commands on the remote Docker container that runs the ZooKeeper image:
#!/bin/bash
zookeeper_host_url="$1"
config_script_file="$2"

# Read all commands from the script file into one newline-separated string
TMPVAR=""
while read -r line; do
  if [ -z "$TMPVAR" ]; then
    TMPVAR="$line"
  else
    TMPVAR="$TMPVAR\n$line"
  fi
done < "$config_script_file"

# Feed the commands to zkCli.sh inside the ZooKeeper container
docker exec -i "$zookeeper_host_url" bash << EOF
./bin/zkCli.sh -server localhost:2181
$(echo -e "${TMPVAR}")
quit
exit
EOF
Example of a ZooKeeper script:
create /x 1
create /y 2
Usage:
./zkCliHelper.sh <zookeeper_url> <script.zk file>
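As an aside, since zkCli.sh reads commands from stdin, the read loop above could arguably be replaced by piping the script file straight into the container. This is an untested sketch, shown as a dry run that only prints the command; the container name "zoo1" and both paths are assumptions:

```shell
# Hypothetical one-liner alternative to the helper script above.
container="zoo1"
script="script.zk"

# Build the command first so it can be inspected before running it:
cmd="docker exec -i $container bin/zkCli.sh -server localhost:2181 < $script"
echo "$cmd"
# To actually run it:
# eval "$cmd"
```

The host-side `< $script` redirection works because `docker exec -i` forwards stdin into the container.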
I want to spin up a localstack docker container and run a file, create_bucket.sh, with the command
aws --endpoint-url=http://localhost:4566 s3 mb s3://my-bucket
after the container starts. I tried creating this Dockerfile
FROM localstack/localstack:latest
COPY create_bucket.sh /usr/local/bin/
ENTRYPOINT []
and a docker-compose.yml file that has
version: '3.8'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      ...
    ports:
      - '4566-4583:4566-4583'
    command: sh -c "/usr/local/bin/create_bucket.sh"
but when I run
docker-compose up
the container comes up, but the command isn't run. How do I execute my command against the localstack container after container startup?
You can mount your script as a volume instead of using "command"; localstack will then execute it at container startup.
volumes:
  - ./create_bucket.sh:/docker-entrypoint-initaws.d/create_bucket.sh
Also, as specified in their documentation, localstack must be configured precisely to work with docker-compose.
Please note that there’s a few pitfalls when configuring your stack manually via docker-compose (e.g., required container name, Docker network, volume mounts, environment variables, etc.)
In your case I guess you are missing some volumes, container name and variables.
Here is an example of a docker-compose.yml found here, which I have more or less adapted to your case
version: '3.8'
services:
  localstack:
    image: localstack/localstack
    container_name: localstack-example
    hostname: localstack
    ports:
      - "4566-4583:4566-4583"
    environment:
      # Declare which aws services will be used in localstack
      - SERVICES=s3
      - DEBUG=1
      # These variables are needed for localstack
      - AWS_DEFAULT_REGION=<region>
      - AWS_ACCESS_KEY_ID=<id>
      - AWS_SECRET_ACCESS_KEY=<access_key>
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - /var/run/docker.sock:/var/run/docker.sock
      - ./create_bucket.sh:/docker-entrypoint-initaws.d/create_bucket.sh
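With the init-hook volume mount in place, a minimal create_bucket.sh could look like the sketch below. The awslocal CLI ships inside the localstack image; the guard exists only so the sketch no-ops outside the container, and "my-bucket" is the example name from the question:

```shell
#!/bin/sh
# create_bucket.sh: init hook dropped into /docker-entrypoint-initaws.d/
if command -v awslocal >/dev/null 2>&1; then
  # Inside the localstack container: create the example bucket.
  awslocal s3 mb s3://my-bucket
  msg="bucket created"
else
  # Outside the container awslocal is unavailable; do nothing.
  msg="awslocal not found; run inside the localstack container"
fi
echo "$msg"
```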
Other sources:
Running shell script against Localstack in docker container
https://docs.localstack.cloud/localstack/configuration/
If you exec into the container, you'll see that create_bucket.sh was not copied. I'm not sure why, and I couldn't get that approach to work either.
However, I have a working solution if you're okay with a startup script, since your goal is to bring up the container and create the bucket in a single command.
Assign a name to your container in docker-compose.yml
version: '3.8'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    ports:
      - '4566-4583:4566-4583'
Update your create_bucket.sh to use awslocal instead; it is already available in the container. Using the aws CLI with an endpoint-url requires aws configure as a prerequisite.
awslocal s3 mb s3://my-bucket
Finally, create a startup script that runs the list of commands to complete the initial setup.
docker-compose up -d
docker cp create_bucket.sh localstack:/usr/local/bin/
docker exec -it localstack sh -c "chmod +x /usr/local/bin/create_bucket.sh"
docker exec -it localstack sh -c "/usr/local/bin/create_bucket.sh"
Execute the startup script
sh startup.sh
To verify, if you now exec into the running container, the bucket will have been created.
docker exec -it localstack /bin/sh
awslocal s3 ls
Try executing the command below:
docker exec Container_ID Your_Command
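For example, assuming the container from the question is named localstack, the concrete form might be the following. It is shown as a dry run that only prints the command, since executing it needs a running Docker daemon:

```shell
# Hypothetical concrete form of "docker exec Container_ID Your_Command".
container="localstack"
exec_cmd="docker exec $container awslocal s3 mb s3://my-bucket"
echo "$exec_cmd"
# To execute for real: eval "$exec_cmd"
```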
How can I implement the below docker-compose code, but using the docker run command? I am specifically interested in the depends_on part.
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
depends_on: doesn't map to a docker run option. When you have your two docker run commands you need to make sure you put them in the right order.
docker build -t web_image .
docker network create some_network
docker run --name db --net some_network postgres
# because this depends_on: [db] it must be second
docker run --name web --net some_network ... web_image ...
depends_on means:
Compose implementations MUST guarantee dependency services have been started before starting a dependent service. Compose implementations MAY wait for dependency services to be “ready” before starting a dependent service.
Hence depends_on is not only about start order.
You can also use docker-compose instead of docker run; every option of docker run has an equivalent in the docker-compose file.
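Since docker run only gives you start order, not readiness, a small wait loop between the two docker run commands can approximate the "ready" behaviour. A minimal sketch, assuming bash (for /dev/tcp) and that the dependency listens on a known host and port:

```shell
# Return 0 once host:port accepts a TCP connection, 1 after N failed tries.
wait_for_port() {
  host="$1"; port="$2"; tries="${3:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # Opening the fd succeeds only when something is listening.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage between the two docker run commands, e.g.:
#   docker run -d --name db --net some_network -p 5432:5432 postgres
#   wait_for_port localhost 5432 && docker run --name web --net some_network web_image
```

Images like postgres also ship their own readiness probes (pg_isready) that can replace the raw port check when available.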
I have two docker run commands as given below, but I would like to merge them into one and execute it.
1st command - Start Orthanc with just the web viewer enabled:
docker run -p 8042:8042 -e WVB_ENABLED=true osimis/orthanc
2nd command - Start Orthanc with mounted directories:
docker run -p 4242:4242 -p 8042:8042 --rm --name orthanc \
  -v $(pwd)/orthanc/orthanc.json:/etc/orthanc/orthanc.json \
  -v $(pwd)/orthanc/orthanc-db:/var/lib/orthanc/db \
  jodogne/orthanc-plugins /etc/orthanc --verbose
As you can see, in both cases Orthanc is started, but I would like to merge these into one command. When Orthanc starts, the web viewer should be enabled and the directories should be mounted.
Can you let me know how this can be done?
Use docker-compose; it is specifically designed for running multiple containers.
docker-compose.yml
version: '3'
services:
  osimis:
    image: osimis/orthanc
    environment:
      WVB_ENABLED: 'true'
    ports:
      - 8043:8042  # host port remapped from 8042 to avoid clashing with the orthanc service below
  orthanc:
    image: jodogne/orthanc-plugins
    environment:
      WVB_ENABLED: 'true'
    ports:
      - 4242:4242
      - 8042:8042
    volumes:
      - ./orthanc/orthanc.json:/etc/orthanc/orthanc.json
      - ./orthanc/orthanc-db:/var/lib/orthanc/db
    command: /etc/orthanc --verbose
Then run docker-compose up to finish the job.
I am quite new to Docker but am trying to use docker-compose to run automation tests against my application.
I have managed to get docker-compose to run my application and my automation tests; however, at the moment my application runs on localhost when I need it to run against a specific domain, example.com.
From research into Docker it seems you should be able to reach the application by hostname by setting it within links, but I still can't.
Below is the code for my docker compose files...
docker-compose.yml
abc:
  build: ./
  command: run container-dev
  ports:
    - "443:443"
  expose:
    - "443"
docker-compose.automation.yml
tests:
  build: test/integration/
  dockerfile: DockerfileUIAuto
  command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && DISPLAY=:1.0 && ENVIRONMENT=qa BASE_URL=https://example.com npm run automation"
  links:
    - abc:example.com
  volumes:
    - /tmp:/tmp/
and am using the following command to run...
docker-compose -p tests -f docker-compose.yml -f docker-compose.automation.yml up --build
Is there something I'm missing to map example.com to localhost?
If the two containers are on the same Docker internal network, Docker will provide a DNS service where one can talk to the other by just its container name. As you show this with two separate docker-compose.yml files it's a little tricky, because Docker Compose wants to isolate each file into its own separate mini-Docker world.
The first step is to explicitly declare a network in the "first" docker-compose.yml file. By default Docker Compose will automatically create a network for you, but you need to control its name so that you can refer to it from elsewhere. This means you need a top-level networks: block, and also to attach the container to the network.
version: '3.5'  # 3.5+ is required for the top-level network "name" key
networks:
  abc:
    name: abc
services:
  abc:
    build: ./
    command: run container-dev
    ports:
      - "443:443"
    networks:
      abc:
        aliases:
          - example.com
Then in your test file, you can import that as an external network.
version: '3.5'
networks:
  abc:
    external: true
    name: abc
services:
  tests:
    build:
      context: test/integration/
      dockerfile: DockerfileUIAuto
    command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && npm run automation"
    environment:
      DISPLAY: ":1.0"
      ENVIRONMENT: qa
      BASE_URL: "https://example.com"
    networks:
      - abc
Given the complexity of what you're showing for the "test" container, I would strongly consider running it not in Docker, or else writing a shell script that launches the X server, checks that it actually started, and then runs the test. The docker-compose.yml file isn't the only tool you have here.
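A sketch of such a wrapper script follows. The display number, screen geometry, and npm target come from the question; everything else, including the short poll that replaces the blind sleep 20, is an assumption, and the guard exists only so the sketch is inert where Xvfb isn't installed:

```shell
#!/bin/sh
# run_tests.sh: start Xvfb, confirm it stayed up, then run the suite.
run_with_xvfb() {
  Xvfb :1 -screen 0 1024x768x16 >xvfb.log 2>&1 &
  xvfb_pid=$!

  # Poll briefly instead of sleeping a fixed 20 seconds.
  sleep 2
  if ! kill -0 "$xvfb_pid" 2>/dev/null; then
    echo "Xvfb failed to start; see xvfb.log" >&2
    return 1
  fi

  DISPLAY=:1.0 npm run automation
  status=$?
  kill "$xvfb_pid" 2>/dev/null
  return $status
}

# Only attempt a real run where Xvfb exists:
if command -v Xvfb >/dev/null 2>&1; then
  run_with_xvfb
fi
```

This script would replace the long sh -c command string in the compose file, which also sidesteps the &> redirection (a bashism) running under sh.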
I have a working docker command:
docker run -p 3001:8080 -p 50000:50000 -v /Users/thomas/Desktop/digital-ocean-jenkins/jenkins:/var/jenkins_home jenkins/jenkins:lts
I'd like to put these config variables in a Dockerfile:
FROM jenkins/jenkins:lts
EXPOSE 3001 8080
EXPOSE 50000 50000
VOLUME jenkins:var/jenkins_home
However, it's not taking any of these configuration variables. How can I pass the parameters I am passing to docker run as part of the build?
I built and ran using this:
docker build -t treggi-jenkins .
docker run treggi-jenkins
I think you'd need to use docker-compose for something like that.
See docker-compose docs
The docker-compose file could look something like this
version: '3'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "3001:8080"
      - "50000:50000"
    volumes:
      - jenkins:/var/jenkins_home
volumes:
  jenkins: