Run Multiple Docker Images from One Bash Script

I have a bash file that runs my app's Docker image using docker run -it --network test_network -p 8000:8000 testApp, but I also need to run my MySQL image using docker run -it --network test_network -p 3308:3308 mysql/mysql-server
Normally I open a separate terminal window manually to run each one, but I'm trying to edit my bash script so that it can do both for me. I'm not sure how, though.

You can run both in detached mode. That way neither command blocks the script, and both containers run together. For that, use the -d or --detach flag.
docker run --detach -it --network test_network -p 8000:8000 testApp
docker run --detach -it --network test_network -p 3308:3308 mysql/mysql-server
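Since both containers now run in the background, the output you previously watched in each terminal is still available through docker logs. For example (the container ID or name comes from docker ps; <container-id-or-name> is just a placeholder):
docker ps
docker logs -f <container-id-or-name>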
Edit:
While the approach mentioned above works, it is better to use docker compose to run multiple containers.
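For reference, a minimal docker-compose.yml for the two containers above might look like the sketch below. The service names app and db are illustrative; Compose puts the services on a default network of their own, so the explicit test_network is not needed, and the MySQL image may also require extra environment variables for credentials.
services:
  app:
    image: testApp
    ports:
      - "8000:8000"
  db:
    image: mysql/mysql-server
    ports:
      - "3308:3308"
With that file in place, docker compose up -d starts both containers and docker compose down stops them.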

How to pass erlang.cookie in "docker run" after RABBITMQ_ERLANG_COOKIE got deprecated

I want to start three RabbitMQ containers that will be joined together in a cluster. I want to keep it simple and not define complex Dockerfiles with specific volumes.
This is what I am doing right now:
docker network create rabbits
docker run -d --rm --net rabbits --hostname rabbit-1 --name rabbit-1 -p 8081:15672 -e RABBITMQ_ERLANG_COOKIE=ASDF rabbitmq:3.8-management
docker run -d --rm --net rabbits --hostname rabbit-2 --name rabbit-2 -p 8082:15672 -e RABBITMQ_ERLANG_COOKIE=ASDF rabbitmq:3.8-management
docker run -d --rm --net rabbits --hostname rabbit-3 --name rabbit-3 -p 8083:15672 -e RABBITMQ_ERLANG_COOKIE=ASDF rabbitmq:3.8-management
When I then try to tell the nodes to join each other with the following commands, I get an error message:
docker exec -it rabbit-2 rabbitmqctl stop_app
docker exec -it rabbit-2 rabbitmqctl reset
docker exec -it rabbit-2 rabbitmqctl join_cluster rabbit#rabbit-1
docker exec -it rabbit-2 rabbitmqctl start_app
docker exec -it rabbit-2 rabbitmqctl cluster_status
This results in:
RABBITMQ_ERLANG_COOKIE env variable support is deprecated and will be REMOVED in a future version. Use the $HOME/.erlang.cookie file or the --erlang-cookie switch instead.
However, I do not know how to pass this switch. When I add it to the docker run command, it does not work. So I thought maybe to add it after the join_cluster command, but by then the cookie is already set.
How do I need to change the docker run command?
In response to your and other questions about RABBITMQ_ERLANG_COOKIE, I opened this issue:
https://github.com/rabbitmq/rabbitmq-server/issues/7262
Currently you should use the environment variable and disregard the warning.
The best practice is to use docker compose and your own image based off of the official RabbitMQ images:
https://github.com/lukebakken/docker-rabbitmq-cluster/blob/main/docker-compose.yml
https://github.com/lukebakken/docker-rabbitmq-cluster/blob/main/rmq/Dockerfile
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
The RABBITMQ_ERLANG_COOKIE environment variable is deprecated in recent RabbitMQ versions, but you can still set the Erlang cookie value by passing it with the -e option in the docker run command; this is what triggers the warning above. Here's an example:
docker run -d --name rabbitmq -e RABBITMQ_ERLANG_COOKIE='your_cookie_value' rabbitmq:3
Alternatively, you can store the Erlang cookie in a file and mount it as a volume in your container. For example:
Create a file named erlang.cookie with your desired cookie value
echo 'your_cookie_value' > erlang.cookie
Start the RabbitMQ container, mounting the erlang.cookie file
docker run -d --name rabbitmq -v $(pwd)/erlang.cookie:/var/lib/rabbitmq/.erlang.cookie rabbitmq:3
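Applied to the three-node cluster from the question, the same mounted cookie file can be shared by all nodes. A rough sketch (the Erlang runtime generally refuses a cookie file that is readable by group or others, so its permissions may need to be restricted, and the file must end up readable by the user the image runs as):
echo 'ASDF' > erlang.cookie
chmod 600 erlang.cookie
docker run -d --rm --net rabbits --hostname rabbit-1 --name rabbit-1 -p 8081:15672 -v "$(pwd)/erlang.cookie:/var/lib/rabbitmq/.erlang.cookie" rabbitmq:3.8-management
# repeat for rabbit-2 (-p 8082:15672) and rabbit-3 (-p 8083:15672)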

Why would we want to use the --detach switch together with --interactive and --tty in Docker?

I'm reading the Docker documentation, and I've seen this command:
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app,readonly \
nginx:latest
As far as I know, using the -d or --detach switch runs the container in the background and returns control of the terminal to the user, while --tty (-t) and --interactive (-i) do the complete opposite. Why would anyone want to use them together in one command?
For that specific command, it doesn't make sense, since nginx does not have an interactive component. But in general, it allows you to later attach to the container with docker attach. E.g.
$ docker run --name test-no-input -d busybox /bin/sh
92c0447e0c19de090847b7a36657d3713e3795b72e413576e25ab2ce4074d64b
$ docker attach test-no-input
You cannot attach to a stopped container, start it first
$ docker run --name test-input -dit busybox /bin/sh
57e4adcc14878261f64d10eb7839b35d5fa65c841bbcb3cd81b6bf5b8fe9d184
$ docker attach test-input
/ # echo hello from the container
hello from the container
/ # exit
The first container stopped since it was running a shell, and there was no input on stdin (no -i). A shell exits when it finishes reading input (e.g. the end of a shell script).
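One detail worth knowing: exiting the attached shell (as in the example above) stops the container, because the shell is its main process. To leave the container running, detach with the Ctrl-P Ctrl-Q key sequence instead of typing exit.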

Why do I keep seeing the nginx index.html on localhost when I run my Docker image?

I installed and ran nginx on my Linux machine to understand the configuration, etc. After a while I decided to remove it safely, by following this thread, in order to use it in Docker instead.
Following this documentation, I ran this command:
sudo docker run --name ngix -d -p 8080:80 pillalexakis/myrestapi:01
And I saw nginx's homepage at localhost.
Then I deleted all nginx images, stopped all containers, and also ran this command:
sudo docker system prune -a
But now, after restarting my service with this command:
sudo docker run -p 192.168.2.9:7777:8085 phillalexakis/myfirstapi:01
I keep seeing the nginx index.html at localhost.
How can I completely remove it?
Note: I'm new to Docker and might have missed a lot of things. Let me know what extra Docker commands I should run to provide better information.
Assuming your host has been prepared as below:
your files (index.html, js, etc.) under the folder /myhost/nginx/html
your nginx configuration at /myhost/nginx/nginx.conf
Solution
Map your files (as a volume) on the fly from outside the Docker image via the Docker CLI.
This is the command:
docker run -it --rm -d -p 8080:80 --name web \
-v /myhost/nginx/html:/usr/share/nginx/html \
-v /myhost/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx
Copy your files into the Docker image by building your own image via a Dockerfile.
This is your Dockerfile under /myhost/nginx:
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
This is the command to build your Docker image:
cd /myhost/nginx
docker build -t pillalexakis/nginx .
This is the command to run your Docker image:
docker run -it --rm -d -p 8080:80 --name web \
pillalexakis/nginx

Can we run Docker inside a Docker container which is running in a VirtualBox VM of Ubuntu 18.04?

I want to run Docker inside another Docker container. My main container is running in a VirtualBox VM of Ubuntu 18.04 on my Windows 10 machine. On trying to run it, I get:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
How can I resolve this issue?
Yes, you can do this. Check the dind (Docker-in-Docker) image page for how to achieve it: https://hub.docker.com/_/docker
Your error indicates that either dockerd is not running in the top-level container, or you didn't mount docker.sock into the dependent container so it can communicate with the dockerd running in your top-level container.
I am running electric-flow in a docker container in my Ubuntu virtual-box using this docker command: docker run --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -i -t ecdocker/eflow-ce. Inside this docker container, I want to install and run docker so that my CI/CD pipeline in electric-flow can access and use docker commands.
From your description, ecdocker/eflow-ce is your CI/CD solution container, and you just want to use the docker command inside it, so you do not need a dind solution. You can simply give the container access to the host's Docker server.
Something like the following:
docker run --privileged --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -i -t ecdocker/eflow-ce
Compared to your old command:
Add --privileged
Add -v $(which docker):/usr/bin/docker, so you can use the docker client inside the container.
Add -v /var/run/docker.sock:/var/run/docker.sock, so the client inside the container can talk to the host's docker daemon.
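Assuming the container is started with the name efserver as in your original command, a quick way to check that the socket mount works is to run the docker client from inside the container; it should list the host's containers, since both clients talk to the same daemon:
docker exec -it efserver docker ps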

Docker: How to run a service and a terminal in one command?

I'm running an Apache server like this:
docker run -d -p 80:80 php:apache /usr/sbin/apache2ctl -D FOREGROUND
Then I determine the name of the container with
docker ps
and execute an interactive shell on the container with
docker exec -ti hungry_fermi bash
It works well, but I would like to do the same in one command. I've tried
docker run -ti -d -p 80:80 php:apache /bin/bash -c 'bash; apache2ctl -D FOREGROUND'
The problem is that I don't obtain a terminal, and the command just returns.
You're trying this:
docker run -ti -d -p 80:80 php:apache \
/bin/bash -c 'bash; apache2ctl -D FOREGROUND'
There are several problems here. First, you're using the -d command line option, which asks the docker client to detach and leave the container running. You will never get an interactive shell when using -d.
Secondly, your command -- bash; apache2ctl -D FOREGROUND -- would run bash, wait for bash to exit, then run httpd. You can instead do something like this:
docker run -ti -p 80:80 php:apache \
/bin/bash -c 'apachectl start; bash'
This would start Apache in the background (because there is no -D FOREGROUND), and then start bash...but I'm not really clear why you would want to do this, because now if you were to exit your shell the container would exit as well (taking Apache with it).
I think you are much better off simply starting Apache the way you are now, and using docker exec to get a shell inside the container.
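If you really want a single script, a minimal sketch of that approach could be the following (the container name apache-web is only illustrative):
docker run -d --name apache-web -p 80:80 php:apache /usr/sbin/apache2ctl -D FOREGROUND
docker exec -it apache-web bash
When you exit the shell, Apache keeps running in the background, and the container can be stopped later with docker stop apache-web.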
