Run multiple docker images from a single file with different ports - docker

I am trying to run multiple docker images from a single file, each on a different port.
Please advise how to run the equivalent of multiple "docker run" commands from a single file with different ports.

It sounds like you want docker-compose. Here is an example using nginx and redis (it's how I do it, anyway):
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  redis:
    image: redis
    ports:
      - "1000:6379"  # host port 1000 -> redis's default port 6379
So as you can see, if I run docker-compose up, Docker will spin up two containers, nginx and redis, each published on a different host port! If you don't want to use docker-compose, you can do the same with docker run:
docker run -d --name nginx -p 80:80 nginx
docker run -d --name redis -p 1000:6379 redis
I don't 100% understand your question, but I hope this helps!
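For completeness, a quick way to check that both containers came up (assuming the compose file above is saved as docker-compose.yml in the current directory):
docker-compose up -d   # start both services in the background
docker-compose ps      # both should show State "Up" with their port mappings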

Related

How to specify the port to map to on the host machine when you build the image with Dockerfile

When I run Docker from command line I do the following:
docker run -it -d --rm --hostname rabbit1 --name rabbit1 -p 127.0.0.1:8000:5672 -p 127.0.0.1:8001:15672 rabbitmq:3-management
I publish the ports with -p in order to see the connection on the host.
How can I do this automatically with a Dockerfile?
The Dockerfile provides the instructions used to build the docker image.
The docker run command provides instructions used to run a container from a docker image.
How can I do this automatically with a Dockerfile
You don't.
Port publishing is something you configure only when starting a container.
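The closest a Dockerfile gets is EXPOSE, which documents the ports the image listens on but does not publish them. As a sketch (the official rabbitmq image already declares these ports; shown here only for illustration):
FROM rabbitmq:3-management
EXPOSE 5672 15672
With docker run -P (capital P), Docker then publishes every exposed port to a random high port on the host, which you can inspect afterwards:
docker run -d -P --name rabbit1 rabbitmq:3-management
docker port rabbit1   # shows which host ports were assigned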
You can't specify ports in a Dockerfile, but you can use docker-compose to achieve that.
Docker Compose is a tool for running multi-container applications on Docker.
An example docker-compose.yml with ports:
version: "3.8"
services:
  rabbit1:
    image: rabbitmq:3-management
    container_name: rabbit1
    ports:
      - "127.0.0.1:8000:5672"
      - "127.0.0.1:8001:15672"
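With that file in place, docker-compose up -d recreates the container with the same port bindings every time, with no need to retype the -p flags.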

docker-compose: how to define a container-scoped network like in docker run?

I am running two containers where mycontainer2 must use the same network stack as mycontainer1, as if the two were running on the same machine. Here is how I do it with docker run and --network container:xxx:
$ docker run -it --rm --name mycontainer1 -p 6666:7777 myregistry/my-container1:latest
$ docker run -it --rm --network container:mycontainer1 --name mycontainer2 myregistry/my-container2:latest
I tried to replicate this behavior using docker-compose instead, but the networks: definition in docker-compose.yaml doesn't offer anything equivalent to the --network container:xxx option of docker run. Is it possible in docker-compose to configure two containers to use the same network stack?
This is a network_mode: setting.
version: '3.8'
services:
  mycontainer1:
    image: myregistry/my-container1:latest
    ports: ['6666:7777']
  mycontainer2:
    image: myregistry/my-container2:latest
    network_mode: service:mycontainer1 # <---
Since Compose will generally pick its own container names, this service:name form uses the container matching the named Compose service. (If you override container_name: then you can also use container:mycontainer1 the same way you did with docker run.)
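One way to convince yourself the two containers really share one network stack (assuming my-container1 listens on 7777 and the second image ships curl) is that localhost crosses the container boundary:
docker-compose up -d
docker-compose exec mycontainer2 curl -s localhost:7777   # reaches mycontainer1's process over the shared loopback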
Creating an external network and using it inside the docker-compose YAML manifest might also help (note this puts both containers on one shared network, which is not quite the same as sharing a single network stack). Here is how you do it.
version: '3.7'
networks:
  default:
    external:
      name: an-external-network
services:
  my-container1:
    ...
  my-container2:
    ...
Note: use docker network create command to create an-external-network before running docker-compose up command.
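As a sketch, the order of operations is:
docker network create an-external-network   # create the network once, up front
docker-compose up -d                        # compose attaches both services to it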

How to make a docker-compose file from the command line

I run my container by five Docker commands as follows:
docker run --privileged -d -v /root/docker/data:/var/lib/mysql -p 8888:80 testimg:2 init
docker ps ---> to get container ID
docker exec -it container_id bash
docker exec container_id systemctl start mariadb
docker exec container_id systemctl start httpd
I was trying to do these steps with docker-compose but failed.
Can somebody make a docker-compose.yml or Dockerfile that gets the same result for me?
You're not going to be able to do this with just a docker-compose.yml, because a compose file doesn't have any mechanism similar to docker exec. Additionally, running systemd (or really any process manager) inside a container is an anti-pattern. It can complicate the management and scaling of your containers, and in most cases doesn't provide any benefits.
Why don't you just have two images:
One that starts mariadb
One that starts Apache httpd
That might look something like:
version: "3"
services:
web:
image: httpd
ports:
- "8888:80"
db:
image: mariadb
volumes:
- "/root/docker/data:/var/lib/mysql"
You would probably need a custom image for the web server containing whatever application you're running, but you can definitely use the official mariadb image for your database.
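A minimal sketch of such a custom image, following the pattern documented for the official httpd image (./public-html is a placeholder for your application files):
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
Point the web service at that image (or at a build: directive) instead of the stock httpd.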

Start Docker Containers on logon under Windows

I've just set up a new Windows 10 development machine and so as to minimise the hassle of installs I've got various dev dependencies (Oracle, MongoDB, RabbitMQ, HAProxy, etc.) running under Docker using a docker-compose script.
I'd like to automatically start these containers on Windows logon, but as yet I haven't figured out a way to do this; a simple script that executes docker-compose up -d in the correct directory should do it, but if it executes immediately on logon, Docker hasn't yet started up, so the script fails. Does anyone know how to programmatically wait until Docker is running?
To further elaborate on my comment, I have done a little test with a webserver service, but it should work for any service, as long as you configure it the way you want it to behave.
It's quite easy to set this up using the following commands:
docker swarm init
Then for example a webserver
docker service create --name webserver --publish 80:80 httpd
Or even a database
docker service create --replicas 1 --name database --publish 1433:1433 -e "ACCEPT_EULA=y" -e "SA_PASSWORD=test" microsoft/mssql-server-linux
These will restart automatically after a reboot and after fatal crashes, because Docker swarm keeps the requested number of replicas (1 by default) alive for you.
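To check that the services came back after a reboot:
docker service ls             # lists services with their replica counts
docker service ps webserver   # shows where the webserver task is running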
Hopefully this can be of some help!
It turns out this is really easy to achieve via docker-compose using restart! I have changed our compose file as follows:
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3.6-management
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - /var/lib/rabbitmq
    restart: unless-stopped
This extra restart directive means that unless the container has been explicitly stopped it will start up with docker on logon/reboot. Tested and working!
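The same policy also works without Compose; a sketch with plain docker run:
docker run -d --restart unless-stopped -p 5672:5672 -p 15672:15672 rabbitmq:3.6-management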

Strange way to launch a background apache/mysql docker container

I downloaded a Debian image for Docker and created a container from it.
I have successfully installed apache and mysql in this container (from /bin/bash).
I want to make this docker container run in the background.
I have tried a lot of tutorials (I have created images with a Dockerfile) but nothing really works. Apache and mysql were run as root...
So I have launched this command:
docker run -d -p 80:80 myimagefile /bin/bash -c "while true; do sleep 10; done"
Then I attached a /bin/bash with the exec command and started mysql and apache2 manually (the /etc/init.d/ scripts). When I type CTRL-D, the bash is killed but the container stays in the background, with mysql and apache alive!
I am wondering if this method is correct or if it is something ugly. Is there a better way to do this?
I do not want to write a Dockerfile that describes how to install apache and mysql. I have made my own image, with my application and all prerequisites.
I just want to start a container from my image and automatically start apache and mysql.
I have a second question: with my method, the container is not reloaded if I reboot the physical computer. How can I start it automatically, with persistence of data?
Thanks
I would suggest running mysql and apache in separate containers. Additionally, the Docker hub already has container images that you could re-use:
https://hub.docker.com/_/mysql/
The following is an example of a docker-compose file that describes how to launch Drupal:
version: '2'
services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=letmein
      - MYSQL_DATABASE=drupal
      - MYSQL_USER=drupal
      - MYSQL_PASSWORD=drupal
    volumes:
      - /var/lib/mysql
  web:
    image: drupal
    depends_on:
      - db
    ports:
      - "8080:80"
    volumes:
      - /var/www/html/sites
      - /var/www/private
Run as follows
$ docker-compose up -d
Creating dockercompose_db_1
Creating dockercompose_web_1
Which exposes Drupal on port 8080
$ docker-compose ps
        Name                   Command              State          Ports
--------------------------------------------------------------------------------
dockercompose_db_1    docker-entrypoint.sh mysqld   Up      3306/tcp
dockercompose_web_1   apache2-foreground            Up      0.0.0.0:8080->80/tcp
Note:
When running the drupal installer, configure it to connect to a host called "db", which is the mysql container.
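To cover the reboot part of the question, the same restart: policy shown in the Windows logon answer above applies here as well; a sketch of the addition to each service:
services:
  db:
    restart: unless-stopped
  web:
    restart: unless-stopped
With that in place, both containers come back automatically when Docker starts after a reboot.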
