To build a web server, I'm trying to understand how containers are attached to each other, and I really need some quick answers.
So, if we take this docker-compose.yml file as an example:
version: '2'
services:
  # APP
  nginx:
    build: docker/nginx
    volumes_from:
      - php
    links:
      - php
    depends_on:
      - php
  php:
    build: docker/php
    volumes:
      - ${SYMFONY_APP_PATH}:/symfony
    links:
      - mysql
      - faye
      - rabbitmq
      - elasticsearch
  client:
    image: node:8.9.4
    volumes_from:
      - php
    working_dir: /symfony
    user: "1000:1000"
    command: "npm run dev"
    ports:
      - "${LIVERELOAD_PORT}:35729"
    environment:
      LIVERELOAD_PORT: ${LIVERELOAD_PORT}
  mysql:
    build: docker/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - ${SYMFONY_APP_PATH}:/symfony
      - "mysql:/var/lib/mysql"
  rabbitmq:
    image: rabbitmq:3.4-management
    volumes:
      - "rabbitmq:/var/lib/rabbitmq"
  elasticsearch:
    # (the image/build line for this service is missing from the original post)
    volumes:
      - "elasticsearch5:/usr/share/elasticsearch/data"
      - ${SYMFONY_APP_PATH}:/symfony
volumes:
  mysql: ~
  elasticsearch5: ~
  rabbitmq: ~
What is the difference between volumes_from, links, and depends_on?
If the idea is to attach containers to each other, why don't we just use links everywhere?
Why, in my example, is nginx linked to / dependent on the php container, and not the other way around?
At the file footer there's a volumes section (mysql: ~, elasticsearch5: ~, rabbitmq: ~), but I think we already defined the volumes on each container, so what's the main reason for that config?
And why don't we use a single web container that runs PHP, nginx, and mysqld? Why separate them?
What is the difference between volumes_from, links, and depends_on?
links and depends_on both express a relationship between services, but they do different things: links sets up network aliases so one container can reach another by name, while depends_on only controls the order in which services start.
links is a legacy feature and will be deprecated in the future, so avoid it whenever possible; on a modern Compose network, services can already reach each other by service name.
volumes_from serves a different purpose entirely: it mounts all of the volumes from another container, and has nothing to do with networking or startup order.
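As a rough sketch of the modern equivalent (assuming a v3-format file, where volumes_from no longer exists and a shared named volume stands in for it, and where the default Compose network replaces links):

version: '3.8'
services:
  nginx:
    build: docker/nginx
    depends_on:
      - php               # only controls startup order
    volumes:
      - app_code:/symfony # shared named volume replaces volumes_from
  php:
    build: docker/php
    volumes:
      - app_code:/symfony
volumes:
  app_code:

On the default network Compose creates, nginx can reach the PHP container simply at the hostname php, with no links needed.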
Why, in my example, is nginx linked to / dependent on the php container, and not the opposite?
depends_on defines the order in which services start. In your example, you're using Nginx as a proxy server in front of the PHP service, so you might want the PHP service to start before Nginx.
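To make the direction of the dependency concrete: it's the Nginx site configuration that references PHP, not the other way around. A hypothetical location block (the php service name comes from the compose file above; port 9000 is the usual php-fpm default and an assumption here):

location ~ \.php$ {
    # "php" resolves to the php service on the Compose network
    fastcgi_pass php:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /symfony$fastcgi_script_name;
}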
And why don't we use a single web container that runs PHP, nginx, and mysqld? Why separate them?
One of Docker's best practices is to keep each container simple enough to do only one job, much like Unix's "do one thing and do it well" philosophy.
The single responsibility principle is a good thing; embrace it.
I have a docker-compose file which contains details of my container as well as rabbitmq.
Here is a cut-down version of my docker-compose.yml file, where I am using the container_name and links keywords to access the IP address of rabbitmq from inside my container.
version: "3.2"
environment:
&my-env
My_TEST_VAR1: 'test_1'
My_TEST_VAR2: 'test_2'
rabbitmq:
container_name: rabbitmq
image: 'rabbitmq:3.6-management-alpine'
ports:
- '5672:5672'
- '15672:15672'
environment:
AMQP_URL: 'amqp://rabbitmq?connection_attempts=5&retry_delay=5'
RABBITMQ_DEFAULT_USER: "guest"
RABBITMQ_DEFAULT_PASS: "guest"
my-service:
tty: true
image: my_image_name:latest
working_dir: /opt/services/my_service/
command: python3.8 my_script.py
ports:
- 9000:9000
links:
- rabbitmq:rabbitmq.server
environment:
<<: *my-env
From inside my container I can ping the rabbitmq server successfully via:
ping rabbitmq.server
Is there a way I can access the rabbitmq default username and password using this link? Or do I have to pass them as separate environment variables? (I would like to avoid this duplication if possible.)
You should pass them using environment variables. Docker links at this point are an obsolete feature, and I'd recommend just outright deleting any links: you have left in your docker-compose.yml file. Compose sets up networking for you so that the Compose service names rabbitmq and my-service can be used as host names between the containers without any special setup; the environment variables that links provided were confusing and could unintentionally leak data.
If you want to avoid repeating things, you can use YAML anchor syntax as you already have, or write the environment variables into a separate env_file:. Unless you have a lot of settings or a lot of containers, just writing them in the docker-compose.yml file is easiest.
version: '3.8'
services:
  rabbitmq:
    image: 'rabbitmq:3.6-management-alpine'
    ports:
      - '5672:5672'
      - '15672:15672'
    environment:
      RABBITMQ_DEFAULT_USER: "guest"
      RABBITMQ_DEFAULT_PASS: "guest"
    # You may want volumes: to persist the queue.
    # As a special case for RabbitMQ only, you would need a hostname:.
  my-service:
    image: my_image_name:latest
    ports:
      - 9000:9000
    environment:
      # I'd just write these out.
      My_TEST_VAR1: 'test_1'
      My_TEST_VAR2: 'test_2'
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_USER: guest
      RABBITMQ_PASSWORD: guest
    # working_dir: and command: should be in your Dockerfile as
    # WORKDIR and CMD respectively. links: is obsolete.
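If you do go the env_file: route mentioned above, it could look like this sketch (the file name common.env is hypothetical):

# common.env (hypothetical), one KEY=value per line
My_TEST_VAR1=test_1
My_TEST_VAR2=test_2

# docker-compose.yml
services:
  my-service:
    env_file:
      - ./common.env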
In principle you can attach an anchor to any YAML node, though I'd find the syntax a little confusing if I were reading it. I'd tend to avoid syntax like this, but it is technically an option:
services:
  rabbitmq:
    environment:
      RABBITMQ_DEFAULT_USER: &rabbitmq_user guest
  my-app:
    environment:
      RABBITMQ_USER: *rabbitmq_user
Finally, I hinted initially that the obsolete Docker links feature does republish environment variables. I wouldn't take advantage of this (it's an information leak in many ways, there are potential conflicts with the application's own environment variables, and links in general are considered obsolete), but it is theoretically possible to use it:
services:
  rabbitmq:
    environment:
      RABBITMQ_DEFAULT_USER: guest
  my-app:
    links: [rabbitmq]
docker-compose run my-app sh -c 'echo $RABBITMQ_DEFAULT_USER'
It'd be up to your application setup to understand the RabbitMQ image's setup variables.
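For completeness, a minimal sketch of the application side, assuming the pika client and the RABBITMQ_* variable names from the compose file above:

import os
import pika

# Read the connection settings injected via docker-compose.yml,
# falling back to the values used in the example above.
params = pika.ConnectionParameters(
    host=os.environ.get("RABBITMQ_HOST", "rabbitmq"),
    credentials=pika.PlainCredentials(
        os.environ.get("RABBITMQ_USER", "guest"),
        os.environ.get("RABBITMQ_PASSWORD", "guest"),
    ),
)
connection = pika.BlockingConnection(params)
channel = connection.channel()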
I'm fairly new to Docker, and I'm trying to create a local development LAMP stack (more exactly Apache, MariaDB, PHP) using docker-compose, existing Docker images from Docker Hub, and no Dockerfile if possible, to be used with several local web projects.
I'd like to map my local web project directory /Users/myusername/projects/myprojectname to the default document root of the Apache container (which seems to be /app for the Apache image I'm using).
Here is my docker-compose.yml file:
version: "3"
services:
mariadb:
image: mariadb:10.5
container_name: mariadb
restart: always
ports:
- 8889:3306
volumes:
- ./mysql:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_USER=localmysqluser
- MYSQL_PASSWORD=localmysqlpwd
php:
image: bitnami/php-fpm:7.4
container_name: php
ports:
- 9000:9000
volumes:
- /Users/myusername/projects/myprojectname:/app
apache:
image: bitnami/apache:latest
container_name: apache
restart: always
ports:
- 8080:80
volumes:
- ./apache-vhosts/myapp.conf:/vhosts/myapp.conf:ro
- /Users/myusername/projects/myprojectname:/app
depends_on:
- mariadb
- php
But when I do docker-compose up -d and then browse to http://localhost:8080/, I get no data at all. Where am I going wrong? Is my docker-compose.yml configuration wrong, or is it a file permissions problem?
I've been looking at this similar question, but I'd prefer not to use any Dockerfile if possible.
Further question: is it possible to make a local directory /Users/myusername/projects/ browsable by Apache in my local browser?
As answered by J. Song, the port exposed by this Apache Docker image is 8080, not 80.
So we just need to change the port mapping of the Apache service to 8080:8080 instead of 8080:80.
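In other words, only the ports: line of the apache service changes; everything else stays as in the question:

apache:
  image: bitnami/apache:latest
  ports:
    - 8080:8080   # host port 8080 -> container port 8080, where bitnami/apache listens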
I have a docker-compose LAMP stack composed of three services: a web server, PHP, and MySQL.
The apache2 webroot inside the container is shared with my local machine using a volume, like so:
volumes:
  - ./public_html:/usr/local/apache2/htdocs
When the stack is running, though, I can't edit files inside the shared volume, since my local user differs from the user inside the apache2 container. Additionally, the installer of my CMS (ProcessWire) is unable to acquire permissions for the required install directories.
The Apache container uses the 2.4.35-alpine image.
I've built my docker-compose file according to this tutorial:
https://medium.com/@thivi/creating-a-lamp-stack-using-docker-compose-13ca4e3950e1
Below I have attached my docker-compose.yml.
version: '3.7'
services:
  apache:
    build: './apache'
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./cert/:/usr/local/apache2/cert/
    depends_on:
      - php
      - mysql
  php:
    build: './php'
    restart: always
    networks:
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./tmp:/usr/local/tmp
  mysql:
    build: './mysql'
    restart: always
    ports:
      - 3306:3306
    expose:
      - 3306
    networks:
      - backend
    volumes:
      - ./database:/var/lib/mysql
networks:
  backend:
  frontend:
Is there any way to fix this issue? I'd be grateful for answers; I've been dealing with this for the past two days without getting anywhere, and I'm also kind of surprised that such an essential feature as directory sharing is this complicated.
/edit:
I've also noticed something interesting: when I open a bash shell inside the Apache container, the ownership of Apache's document root is set to nobody:nobody, which probably also isn't right.
Let's take for example a mobile application that depends on two or more APIs.
Each of these projects lives in its own Git repository, so we have three repositories and can develop each in parallel.
Each project has its own dependencies:
The first API, for example, requires a SQL database
The second API requires a NoSQL database
The mobile app requires these two APIs
Now I want to "dockerize" all of these projects to simplify the development environment and unify it between developers and/or the production environment.
Currently, each project has a custom docker-compose.yml file matching that project's requirements.
For example, in the first API:
version: "3.7"
services:
first_api:
image: golang:1.13
working_dir:
- /src
depends_on:
- mysql
volumes:
- ".:/src"
command: go run main.go
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_USER_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE_NAME}
adminer:
image: adminer
restart: always
The second API will have a similar docker-compose.yml file, but with a NoSQL DB instead.
Then, in the mobile app repository, we will have a docker-compose.yml file with a lot of duplicated configuration (exactly the same containers) because of its interdependence with the two other APIs, plus some other identical files (e.g. .env files, entrypoint scripts if needed...).
The database setup/seeding is also done in two repositories, which can be a little annoying.
The docker-compose.yml file looks something like this:
version: "3.7"
services:
app:
build:
context: .
args:
- IP=${IP}
ports:
- 19000:19000
- 19001:19001
- 19002:19002
volumes:
- ".:/app"
depends_on:
- first-api
- second-api
first-api:
image: my-registry:5000/first-api
ports:
- 9009:3000
depends_on:
- mysql
volumes:
- ".env:/dist/.env"
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_USER_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE_NAME}
adminer:
image: adminer
restart: always
ports:
- 9099:8080
second-api:
image: my-registry:5000/second-api
ports:
- 9010:3000
depends_on:
- mongo
mongo:
image: mongo
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD}
mongo-express:
image: mongo-express
restart: always
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ROOT_USERNAME}
ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ROOT_PASSWORD}
In fact, in this final docker-compose file, we have four container definitions that are completely identical to the ones inside the APIs' configurations, and some environment variables duplicated and versioned in at least two repositories.
Sometimes Dockerfiles end up duplicated too, depending on the specific case, DB setup, and so on.
Did I miss something somewhere in this Docker development environment setup that would let me avoid this duplication?
Is there a best practice or a recommendation to avoid it?
How do companies with large, interdependent microservice architectures manage these interdependencies?
You can use YAML anchors & aliases together with docker-compose extension fields.
Here are two articles with useful details about that:
https://nickjanetakis.com/blog/docker-tip-82-using-yaml-anchors-and-x-properties-in-docker-compose
https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd
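For instance, a minimal sketch (the x-mysql name is arbitrary, and extension fields require Compose file format 3.4+) pulling the shared MySQL definition out of the service list:

version: "3.7"

x-mysql: &mysql-service
  image: mysql:5.7
  environment:
    MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    MYSQL_USER: ${MYSQL_USER}
    MYSQL_PASSWORD: ${MYSQL_USER_PASSWORD}
    MYSQL_DATABASE: ${MYSQL_DATABASE_NAME}

services:
  mysql:
    <<: *mysql-service

Note that anchors and extension fields only work within a single YAML file, so they remove duplication inside one compose file, not across repositories.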
I'd probably set this up by independently running your two individual services' Docker Compose files, and then running a proxy in front that ties them together. You can "borrow" the networks from other docker-compose.yml files. You more or less have this now; the proxy would be your "app" container, and you can use an external: reference to the other applications' default networks.
version: '3'
services:
  app:
    build: .
    environment:
      - IP=${IP}
    ports:
      - 19000:19000
      - 19001:19001
      - 19002:19002
    networks:
      - firstapi_default
      - secondapi_default
networks:
  firstapi_default:
    external: true
  secondapi_default:
    external: true
This approach also works if you have multiple services that each independently need a MySQL database backend; running a separate docker-compose up in each project directory will instantiate a new separate database for each. (In a microservice architecture you typically don't share data stores between services; they communicate only via their APIs.)
You'll pretty quickly hit scaling issues doing this if one of your backend services needs to call another, and Docker Compose might not be the right tool for that. Kubernetes is a significant commitment, but it would let you deploy each of these individual services into a separate namespace and then use DNS names like first_api.first_namespace.svc.cluster.local to communicate between them.
I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here are the two docker-compose.yml files.
app1:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app1_db:/var/lib/mssql/data
volumes:
app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that each app spins up its own, separate db instance, when in reality I just want one instance that is used by all my apps?
The ports section in a docker-compose file binds a container port to a host port, which is what causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. That way, docker-compose can bring both up, and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges.
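For example, a sketch of the second file's database service with the host binding removed (other containers in the same compose project can still reach it at mssql:1433):

mssql:
  image: 'microsoft/mssql-server-linux'
  # no ports: entry, so nothing is bound on the host and there is no conflict
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=P455w0rd!
  volumes:
    - app2_db:/var/lib/mssql/data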
How docker-compose up works:
Suppose your app is in a directory called myapp, with a docker-compose.yml defining services web and db. When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web's configuration. It joins the network myapp_default under the name web.
A container is created using db's configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder myapp2, then the network will be myapp2_default.
Your current configuration creates two volumes, two database containers, and two apps. If you make the apps run on the same network and run the database as a single container, it will work.
I don't think you are expecting two database containers and two volumes.
Approach 1:
docker-compose.yml as a single compose file:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
Approach 2:
To isolate things further, if you still want to run them as separate compose files, create three compose files with a shared network.
docker-compose.yml for the database, with the network:
version: "3"
services:
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
networks:
- test_network
volumes:
app_docker_db
networks:
test_network:
docker-compose.yml for app1
Remove the database container and add the lines below to your compose file:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app, adjusting its docker-compose file in the same way.
There are many other ways to structure docker-compose files; see the Compose documentation on configuring the default network and using a pre-existing network.
You're exposing the same port (1433) twice to the host machine (this is what ports: does). That is not possible, as it would bind the same port on your host twice; that's what the error message says.
I think the most common approach in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this, your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker-compose files is that both applications are exposed on the same host port. That is also not possible, for the same reason. I would suggest changing one of them to "3001:3000", so you can access that application on port 3001.