I want to try Docker for my website. I use PHP, Nginx, and MySQL. I've configured Docker and run my website locally. Now I want to publish the website to production.
There are a few differences between the development and production versions:
In development I need to be able to connect to MySQL inside the container (for debugging), but in production MySQL must be isolated from the outside for security.
On my development machine I want to open the website at the address app.dev and use the nginx-proxy image, but in production I will not use nginx-proxy, to improve performance.
Can I do this with a single docker-compose.yml file?
Or should I create two versions of the docker-compose file, one for development and one for production? In that case I lose one of Docker's advantages: the same environment everywhere. If I change docker-compose-dev.yml, I have to remember to change docker-compose-prod.yml as well.
My docker-compose.yml:
version: '2'
services:
  app:
    build: .
    volumes:
      - ./app:/app
    container_name: app
  app_nginx:
    image: nginx
    ports:
      - "8080:80"
    container_name: app_nginx
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./app:/app
    environment:
      - VIRTUAL_HOST=app.dev
  app_db:
    image: mysql:5.7
    volumes:
      - "./data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: "app_db"
    container_name: app_db
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
You can achieve this with environment-variable-based configuration.
Usually, different environments (e.g. staging and production) differ only in configuration: the database to connect to, the external services to call, their endpoints, and credentials.
Instead of hard-coding such configuration, read it from environment variables. That way you can use the same docker-compose file with different environment variables for your staging and production environments.
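As a minimal sketch of that idea (the variable names are only illustrative, not from your file), Compose substitutes ${...} references from the shell environment or from a .env file placed next to docker-compose.yml, so you can keep one .env per environment:

# .env (one version per environment)
MYSQL_ROOT_PASSWORD=secret
VIRTUAL_HOST=app.dev

# docker-compose.yml fragment using those variables
services:
  app_db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
  app_nginx:
    image: nginx
    environment:
      - VIRTUAL_HOST=${VIRTUAL_HOST}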
You can also explore Rancher by Rancher Labs at http://rancher.com/ to manage your environments.
I have a docker-compose.yml on my local machine, shown below:
version: "3.3"
services:
api:
build: ./api
volumes:
- ./api:/api
ports:
- 3000:3000
links:
- mysql
depends_on:
- mysql
app:
build: ./app
volumes:
- ./app:/app
ports:
- 80:80
mysql:
image: mysql:8.0.27
volumes:
- ./mysql:/var/lib/mysql
tty: true
restart: always
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: qwerty
MYSQL_USER: db
MYSQL_PASSWORD: qwerty
ports:
- '3306:3306'
api is a NestJS app, while app and mysql are Angular and MySQL respectively. I need to work with this setup locally.
How can I make it so that my changes are applied without rebuilding the containers every time?
You don't have to build an image with your sources baked in for a development environment. For NestJS, and since you're using Docker (I point this out deliberately because other container runtimes exist), you can simply run a Node.js image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example. I assume the file that starts your app is index.js; replace it with yours.
Keep in mind that you must install the Node dependencies yourself, and they must be present in the ./app directory.
For docker-compose, it could look like this:
version: "3.3"
services:
app:
image: node:12-alpine
command: /app/index.js
volumes:
- ./app:/app
ports:
- "80:80"
Do the same for your API project.
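For development, the api service could look like this (a sketch only; it assumes the default NestJS start:dev script and that the dependencies are already installed in ./api):

  api:
    image: node:12-alpine
    working_dir: /api
    command: npm run start:dev
    volumes:
      - ./api:/api
    ports:
      - "3000:3000"
    depends_on:
      - mysql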
For a production image, it is still suggested to build the image with the sources in it.
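As a rough sketch of such a production image (assuming a standard NestJS layout where npm run build emits dist/main.js; adjust the paths to your project):

FROM node:12-alpine
WORKDIR /api
# install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci
# copy the sources into the image and compile them
COPY . .
RUN npm run build
CMD ["node", "dist/main.js"]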
Say you're working on your front-end application (app). It needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the ports the container publishes.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
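For example (API_URL is just an illustrative name; how your dev server actually picks it up depends on your framework's configuration):

# outside Docker, pointing the local dev server at the published API port
API_URL=http://localhost:3000 yarn run dev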
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
I have a docker-compose LAMP stack made up of three services: a web server, PHP, and MySQL.
The apache2 webroot inside the container is shared to my local machine using a volume like so:
volumes:
  - ./public_html:/usr/local/apache2/htdocs
When the stack is running, though, I can't edit files inside the shared volume, since my local user is different from the user inside the apache2 container. Additionally, the installer of my CMS (ProcessWire) is unable to acquire permissions for the directories it needs to install into.
The Apache container uses the 2.4.35 Alpine image.
I've built my docker-compose file following this tutorial:
https://medium.com/@thivi/creating-a-lamp-stack-using-docker-compose-13ca4e3950e1
Below I have attached my docker-compose.yml.
version: '3.7'
services:
  apache:
    build: './apache'
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./cert/:/usr/local/apache2/cert/
    depends_on:
      - php
      - mysql
  php:
    build: './php'
    restart: always
    networks:
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./tmp:/usr/local/tmp
  mysql:
    build: './mysql'
    restart: always
    ports:
      - 3306:3306
    expose:
      - 3306
    networks:
      - backend
    volumes:
      - ./database:/var/lib/mysql
networks:
  backend:
  frontend:
Is there any way to fix this issue? I'd be grateful for answers; I've been dealing with this for the past two days without getting anywhere, and I'm also somewhat surprised that such an essential feature as directory sharing is this complicated.
/edit:
I've also noticed something interesting: when I open a bash shell inside the apache container, the ownership of Apache's document root is set to nobody:nobody, which probably isn't right either.
Let's take, for example, a mobile application that depends on two or more APIs.
Each of these projects lives in its own Git repository, so we have three repositories and can develop each one in parallel.
Each project has its own dependencies:
The first API, for example, requires a SQL database
The second API requires a NoSQL database
The mobile app requires these two APIs
Now I want to "dockerize" all of these projects to simplify the development environment and unify it across developers and/or with the production environment.
Currently, each project has a custom docker-compose.yml file that matches that project's requirements.
For example, in the first API:
version: "3.7"
services:
first_api:
image: golang:1.13
working_dir:
- /src
depends_on:
- mysql
volumes:
- ".:/src"
command: go run main.go
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_USER_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE_NAME}
adminer:
image: adminer
restart: always
The second API will have a similar docker-compose.yml file, but with a NoSQL database instead.
Then, in the mobile app repository, we have a docker-compose.yml file with a lot of duplicated code (exactly the same containers) because of its dependence on the two other APIs, plus some other identical files (e.g. .env files, entrypoint scripts if needed, and so on).
The database setup/seeding also has to be done in two repositories, which can be a little annoying.
The docker-compose.yml file will look something like this:
version: "3.7"
services:
app:
build:
context: .
args:
- IP=${IP}
ports:
- 19000:19000
- 19001:19001
- 19002:19002
volumes:
- ".:/app"
depends_on:
- first-api
- second-api
first-api:
image: my-registry:5000/first-api
ports:
- 9009:3000
depends_on:
- mysql
volumes:
- ".env:/dist/.env"
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_USER_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE_NAME}
adminer:
image: adminer
restart: always
ports:
- 9099:8080
second-api:
image: my-registry:5000/second-api
ports:
- 9010:3000
depends_on:
- mongo
mongo:
image: mongo
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD}
mongo-express:
image: mongo-express
restart: always
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ROOT_USERNAME}
ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ROOT_PASSWORD}
In fact, in this final docker-compose file we have four container definitions that are completely identical to the ones inside the APIs' own configurations, and we also have environment variables duplicated and versioned in at least two repositories.
Sometimes we also end up with duplicated Dockerfiles, depending on specific cases, database setup, and so on.
Did I miss something somewhere in this Docker development environment setup that would allow me to avoid some duplication?
Is there a best practice or a recommendation to avoid this?
How do companies with large, interdependent microservice architectures manage these interdependencies?
You can use YAML anchors & aliases with docker-compose extension fields.
Here are two other articles with useful details about that:
https://nickjanetakis.com/blog/docker-tip-82-using-yaml-anchors-and-x-properties-in-docker-compose
https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd
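As a minimal sketch of that technique (reusing names from the files above), an x- extension field can hold the shared MySQL definition once, and any service in the same file can pull it in with a YAML merge key:

version: "3.7"
x-mysql: &default-mysql
  image: mysql:5.7
  environment:
    MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    MYSQL_USER: ${MYSQL_USER}
    MYSQL_PASSWORD: ${MYSQL_USER_PASSWORD}
    MYSQL_DATABASE: ${MYSQL_DATABASE_NAME}
services:
  mysql:
    <<: *default-mysql

Note that anchors and aliases only work within a single YAML file, so this removes duplication inside one compose file but not across repositories.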
I'd probably set this up by independently running your two individual services' Docker Compose files, and then running a proxy in front that ties them together. You can "borrow" the networks from other docker-compose.yml files. You more or less have this now; the proxy would be your "app" container, and you can use an external: reference to the other applications' default networks.
version: '3'
services:
  app:
    build: .
    environment:
      - IP=${IP}
    ports:
      - 19000:19000
      - 19001:19001
      - 19002:19002
    networks:
      - firstapi_default
      - secondapi_default
networks:
  firstapi_default:
    external: true
  secondapi_default:
    external: true
This approach also works if you have multiple services that each independently need a MySQL database backend; running a separate docker-compose up in each project directory will instantiate a new separate database for each. (In a microservice architecture you typically don't share data stores between services; they communicate only via their APIs.)
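Concretely, bringing the stack up for mobile-app development could look like this (the directory names are placeholders; check the generated network names with docker network ls and adjust the external: references to match):

# start each backend as its own Compose project; each one creates its own default network
(cd first-api && docker-compose up -d)
(cd second-api && docker-compose up -d)
# then start the mobile app, which joins those networks as external
(cd mobile-app && docker-compose up)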
You'll pretty quickly hit scaling issues doing this if one of your backend services needs to call another, and Docker Compose might not be the right tool at that point. Kubernetes is a significant commitment, but it would let you deploy each of these individual services into a separate namespace, and then use DNS names like first_api.first_namespace.svc.cluster.local to communicate between them.
Is there a way in docker-compose.yml to include a db service only in specific environments ("test" in my case)?
For a Ruby project, development and production both use a remote Postgres database, but test needs its own local Postgres database.
What I have now is shown below. It "works" in the sense that when we run in development, the db container is simply ignored by our code (our development environment supplies a remote Postgres URL instead of using the db host). But it would be nicer not to spin up an unused Docker container for db when running in development.
version: '3'
services:
  web:
    build: .
    ports:
      - "3010:3010"
    volumes:
      - .:/my_app
    links:
      - db.local
    depends_on:
      - db
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"
I have a docker-compose file with this content:
version: '3'
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: user
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    volumes:
      - 'redis:/var/lib/redis/data'
  sidekiq:
    build: .
    links:
      - db
      - redis
    command: bundle exec sidekiq
    volumes:
      - '.:/app'
  web:
    image: production_image
    ports:
      - "80:80"
    links:
      - db
      - redis
      - sidekiq
    restart: always
volumes:
  postgres_data:
  redis:
In this setup, to run Sidekiq we run bundle exec sidekiq in the current directory. This works on my local machine in the development environment. But on the AWS EC2 instance, I send my docker-compose.yml file and run docker-compose up; since the project code is not there, sidekiq fails. How can I run Sidekiq on the EC2 instance without sending my code there, using only a Docker image of my code in the compose file?
The two important things you need to do are to remove the volumes: declaration that gets the actual application code from your local filesystem, and upload your built Docker image to some registry. Since you're otherwise on AWS, ECR is a ready option; public Docker Hub will work fine too.
Depending on how your Rails app is structured, it might make sense to use the same image with different commands for the main application and the Sidekiq worker(s), and it might work to just write:
sidekiq:
  image: production_image
  command: bundle exec sidekiq
Since you're looking at AWS anyway, you should also consider the possibility of using hosted services for data storage (RDS for the database, ElastiCache for Redis). The important thing is to pass the locations of those data stores as environment variables so that you can change them later (maybe they would default to localhost for developer use, but always be something different when deployed).
You'll also notice that my examples don't have links:. Docker provides an internal DNS service for containers to find each other, and Docker Compose arranges for containers to be found via their service key in the YAML file.
Finally, you should be able to test this setup locally before deploying it to EC2. Run docker build and docker-compose up as needed; debug; and if it works then docker push the image(s) and launch it on Amazon.
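Concretely, that last step could look something like this (the registry address and tag are the placeholders used in the compose file below, and the login command assumes a current AWS CLI):

# build, authenticate against ECR, and push the application image
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0 .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0

The docker-compose.yml you actually deploy then references the pushed images instead of local builds or bind-mounted code: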
version: '3'
volumes: *volumes_from_the_question
services:
  db: *db_from_the_question
  redis: *redis_from_the_question
  sidekiq:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/sidekiq:1.0
    environment:
      PGHOST: db
      REDIS_HOST: redis
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
    ports:
      - "80:80"
    environment:
      PGHOST: db
      REDIS_HOST: redis