How can docker-compose.yml include/omit environment-specific services? - docker

Is there a way in docker-compose.yml to include a db service only in specific environments ("test" in my case)?
For a Ruby project, development and production both use a remote Postgres database, but the test environment needs its own local Postgres database.
What I have now is shown below. It "works" in the sense that when we run in development the db container is simply ignored by our code (our development ENV supplies a remote Postgres URL instead of using the db host). But it would be nicer not to spin up an unused Docker container for db when running in development.
version: '3'
services:
  web:
    build: .
    ports:
      - "3010:3010"
    volumes:
      - .:/my_app
    links:
      - db.local
    depends_on:
      - db
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"

Related

Docker-compose starts a single container

I'm using docker-compose and I'm trying to run an Express app and a Postgres db in Docker containers.
My problem is that it only starts the postgres image; the Express app is not running.
What am I doing wrong?
I've published it on my github: https://github.com/ayakymyshyn/docker-playground
Looking at your docker-compose file and Dockerfile, I assume that your intention is for the web service in the compose file to actually run the image produced by the Dockerfile.
If that is the case, you need to modify the compose file and tell it to build an image based on the Dockerfile.
It should look something like:
version: "3.7"
services:
  web:
    image: node
    build: . # <--- this is the missing line
    depends_on:
      - db
    ports:
      - '3001:3001'
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: 123123123
      POSTGRES_USER: yakym
      POSTGRES_DB: jwt
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - '5433:5433'

Multiple Rails Application docker up not working

I have two Rails 6 applications and I am trying to deploy them on an AWS EC2 instance on different ports, 8080 and 8081. When I run docker-compose up -d, it starts the first application successfully, but when I run docker-compose up -d for the second application, it takes the first application down and brings the second one up on its port.
Below is my Docker configuration for the two applications.
Application 1
version: "3.4"
services:
  app:
    image: "dockerhub_repo/a_api:${TAG}"
    # build:
    #   context: .
    #   dockerfile: Dockerfile
    container_name: a_api_container
    depends_on:
      - database
      - redis
      - sidekiq
    ports:
      - "8080:8080"
    volumes:
      - .:/app
    env_file: .env
    environment:
      RAILS_ENV: staging
  database:
    image: postgres:12.1
    container_name: a_database_container
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
  sidekiq:
    image: "dockerhub_repo/a_api:${STAG}"
    container_name: a_sidekiq_container
    environment:
      RAILS_ENV: staging
    env_file: .env
    depends_on:
      - redis
    volumes:
      - ".:/app"
  redis:
    image: redis:4.0-alpine
    container_name: a_redis_container
    volumes:
      - "redis:/data"
volumes:
  redis:
  db_data:
Application 2
version: "3.4"
services:
  app:
    image: "dockerhub_repo/b_api:${PPTAG}"
    build:
      context: .
      dockerfile: Dockerfile
    container_name: b_api
    depends_on:
      - database
      - redis
    ports:
      - "8081:8081"
    volumes:
      - .:/app
    env_file: .env
    environment:
      RAILS_ENV: development
  database:
    image: postgres:12.1
    container_name: pp_database
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
  redis:
    image: redis:4.0-alpine
    container_name: pp_redis
volumes:
  db_data:
This configuration works very well on my local machine: it starts both applications locally on different ports. But it has issues on AWS EC2. Is there anything wrong with the configuration?
Compose has the notion of a project name. If you add or delete containers from a docker-compose.yml file, it looks for existing containers that are labeled with the project name to figure out what needs to change. The project name is also included in the Docker names of containers, networks, and volumes.
You can configure the project name with the COMPOSE_PROJECT_NAME environment variable or the docker-compose -p option. If you don't configure it, it defaults to the base name of the current directory.
You clarify in a comment that the two docker-compose.yml files are in directories app1/backend and app2/backend. Since the base name of both directories is backend, they get the same project name; so if you run docker-compose up in the app2/backend directory, it finds the existing containers for the backend project, sees that they don't match what's in the docker-compose.yml file, and deletes them (even though you as the operator think they belong to the other project).
There are a couple of ways to get around this:
Rename one or the other directory; maybe move the docker-compose.yml files up to the top-level app1 and app2 directories.
In one or both directories, create a .env file that sets COMPOSE_PROJECT_NAME=app1. (Note that file is checked in the current directory, not necessarily the directory that contains the docker-compose.yml file.)
Set and change an environment variable export COMPOSE_PROJECT_NAME=app1.
Consistently use an option docker-compose -p app1 ... with all Compose commands (see the example below).
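For example, options 2 and 4 might look like this (the project names app1/app2 and the backend directories are taken from the comment above; everything else is illustrative):
# app1/backend/.env -- docker-compose reads this file from the directory you run it in
COMPOSE_PROJECT_NAME=app1
Equivalently, running docker-compose -p app1 up -d in one directory and docker-compose -p app2 up -d in the other keeps the two stacks' containers, networks, and volumes namespaced apart, so neither run deletes the other's containers.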

Sidekiq in dockerised rails application on AWS

I have a docker compose file with this content.
version: '3'
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: user
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    volumes:
      - 'redis:/var/lib/redis/data'
  sidekiq:
    build: .
    links:
      - db
      - redis
    command: bundle exec sidekiq
    volumes:
      - '.:/app'
  web:
    image: production_image
    ports:
      - "80:80"
    links:
      - db
      - redis
      - sidekiq
    restart: always
volumes:
  postgres_data:
  redis:
To run Sidekiq in this setup, we run bundle exec sidekiq in the current directory. This works on my local machine in the development environment. But on the AWS EC2 instance I only send my docker-compose.yml file and run docker-compose up, and since the project code is not there, Sidekiq fails. How should I run Sidekiq on the EC2 instance without sending my code there, using only the Docker image of my code in the compose file?
The two important things you need to do are to remove the volumes: declaration that gets the actual application code from your local filesystem, and upload your built Docker image to some registry. Since you're otherwise on AWS, ECR is a ready option; public Docker Hub will work fine too.
Depending on how your Rails app is structured, it might make sense to use the same image with different commands for the main application and the Sidekiq worker(s), and it might work to just make it say
sidekiq:
  image: production_image
  command: bundle exec sidekiq
Since you're looking at AWS anyway, you should also consider the possibility of using hosted services for data storage (RDS for the database, ElastiCache for Redis). The important thing is to pass the locations of those data stores as environment variables so that you can change them later (maybe they would default to localhost for developer use, but always be something different when deployed).
You'll also notice that my examples don't have links:. Docker provides an internal DNS service for containers to find each other, and Docker Compose arranges for containers to be found via their service key in the YAML file.
Finally, you should be able to test this setup locally before deploying it to EC2. Run docker build and docker-compose up as needed; debug; and if it works then docker push the image(s) and launch it on Amazon.
version: '3'
volumes: *volumes_from_the_question
services:
  db: *db_from_the_question
  redis: *redis_from_the_question
  sidekiq:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/sidekiq:1.0
    environment:
      PGHOST: db
      REDIS_HOST: redis
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
    ports:
      - "80:80"
    environment:
      PGHOST: db
      REDIS_HOST: redis

docker - multiple databases on local

I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here are the docker-compose.yml files.
app1:
version: "3"
services:
  web:
    build:
      context: .
      args:
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app1_db:/var/lib/mssql/data
volumes:
  app1_db:
and here is app2:
version: "3"
services:
  web:
    build:
      context: .
      args:
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data
volumes:
  app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that each app spins up its own db instance, when in reality I just want one instance that is used by all my apps?
The ports section in a docker-compose file binds a container port to a host port, which is what causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. That way docker-compose can bring both up, and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges.
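As a hedged sketch of that change (service and volume names taken from the question), app2's database could simply stop publishing a host port; the web service in the same compose project still reaches it at mssql:1433 over the project network:
# fragment of app2's docker-compose.yml (under services:), with the host port mapping removed
  mssql:
    image: 'microsoft/mssql-server-linux'
    # no ports: entry -- host port 1433 stays free for app1
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data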
How docker-compose up works:
Suppose your app is in a directory called myapp, which contains your docker-compose.yml.
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
Your current configuration creates two volumes, two database containers and two apps. If you make them run on the same network and run the database as a single container, it will work.
I don't think you actually want two database containers and two volumes.
Approach 1:
A single docker-compose.yml for both apps and the database:
version: "3"
services:
  app1:
    build:
      context: .
      args:
    volumes:
      - .:/app # give the path depending on the Dockerfile of app1.
    ports:
      - "3030:3000"
    depends_on:
      - mssql
  app2:
    build:
      context: .
      args:
    volumes:
      - .:/app # give the path depending on the Dockerfile of app2.
    ports:
      - "3032:3000"
    depends_on:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - app_docker_db:/var/lib/mssql/data
volumes:
  app_docker_db:
Approach 2:
To isolate things further while still running them as separate compose files, create three compose files that share a network.
docker-compose.yml for the database, with the network:
version: "3"
services:
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - app_docker_db:/var/lib/mssql/data
    networks:
      - test_network
volumes:
  app_docker_db:
networks:
  test_network:
docker-compose.yml for app1:
Remove the database container and add the lines below to your compose file.
version: "3"
services:
  app1:
    build:
      context: .
      args:
    volumes:
      - .:/app # give the path depending on the Dockerfile of app1.
    ports:
      - "3030:3000"
networks:
  default:
    external:
      name: my-pre-existing-network
Do the same for the other app's docker-compose file.
There are many other options for structuring docker-compose files; see "Configure the default network" and "Use a pre-existing network" in the Compose networking documentation.
You're exposing the same port (1433) twice to the host machine (this is what ports: does). That is not possible, because it would claim the same port on your host twice (which is what the error message says).
I think the most common approach in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers that are linked to them).
Another error I see in your docker-compose files is that both applications are exposed on the same host port. This is also not possible, for the same reason. I would suggest that you change one of them to "3001:3000", so you can access that application on host port 3001.
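A minimal sketch of those two suggestions applied to app2's web service (links is a legacy option but still accepted in compose file version 3):
# fragment of app2's docker-compose.yml (under services:)
  web:
    build:
      context: .
    volumes:
      - .:/app
    ports:
      - "3001:3000" # host port 3001 now, so it no longer collides with app1's 3000
    links:
      - mssql       # reachable inside the container as mssql:1433 without publishing 1433 to the host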

Difference between production and development docker using

I want to try Docker for my website. I use PHP, nginx, and MySQL. I've configured Docker and I've run my website locally. Now I want to publish my website to production.
There are a few differences between the development and production versions:
I need to be able to connect to MySQL inside the container in development mode (for debugging), but in production MySQL must be isolated from the outside for security.
I want to open my website at the address app.dev and use the nginx-proxy image on my development machine, but in production I will not use nginx-proxy, to improve performance.
Can I run Docker with one docker-compose.yml file?
Or should I create two versions of the docker-compose file, one for development and one for production? But in that case I lose an advantage of Docker: the same environment everywhere. If I change docker-compose-dev.yml, I need to remember to change docker-compose-prod.yml.
My docker-compose.yml:
version: '2'
services:
  app:
    build: .
    volumes:
      - ./app:/app
    container_name: app
  app_nginx:
    image: nginx
    ports:
      - "8080:80"
    container_name: app_nginx
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./app:/app
    environment:
      - VIRTUAL_HOST=app.dev
  app_db:
    image: mysql:5.7
    volumes:
      - "./data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: "app_db"
    container_name: app_db
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
You can achieve this with environment-variable-based configuration.
Usually, different environments (e.g. staging and production) differ only in configuration: the database to connect to, the external services to call, and their endpoints and credentials.
Instead of hard-coding all of that, read it from environment variables. That way you can use the same docker-compose file with different environment variables for your staging and production environments.
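A minimal sketch of that idea against the compose file above (the variable name MYSQL_ROOT_PASSWORD is reused from the question; MYSQL_HOST is a placeholder); docker-compose substitutes the values from the shell or from a .env file next to the compose file:
# fragment of the same docker-compose.yml (under services:), with per-environment values pulled from the environment
  app:
    build: .
    volumes:
      - ./app:/app
    environment:
      - MYSQL_HOST=app_db
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} # empty in development, a real secret in production
The same docker-compose.yml then runs in both environments; only the exported variables differ.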
You can also explore Rancher by Rancher Labs at http://rancher.com/ to manage your environments.
