I have a simple Rails/React app that runs with Docker Compose using three services:
'database' for Postgres
'web' for Rails
'webpack_dev_server' for React
In AWS I've:
* built a custom image for nginx,
* set up S3 to hold the ECS configs,
* created a production cluster,
* created private repositories for the 'web' and nginx images, tagged both images, and pushed them to the repositories,
* created 4 EC2 instances, 2 for the web and 2 for React.
Now I'm ready to create task definitions, but I'm not sure how to handle webpack_dev_server (React).
Can we build the image with the same Dockerfile as the web?
For the task definition, should it look like the web's as well?
Here's the docker-compose.yml file that works.
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
      - gem_cache:/gems
    env_file:
      - .env/development/database
      - .env/development/web
    environment:
      - WEBPACKER_DEV_SERVER_HOST=webpack_dev_server
      - DOCKERIZED=true
  webpack_dev_server:
    build: .
    command: ./bin/webpack-dev-server
    ports:
      - 3035:3035
    volumes:
      - .:/usr/src/app
      - gem_cache:/gems
    env_file:
      - .env/development/web
      - .env/development/database
    environment:
      - WEBPACK_DEV_SERVER=0.0.0.0
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
  gem_cache:
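From what I understand, with Rails/Webpacker production assets are normally precompiled when the image is built (bundle exec rails assets:precompile), so I'm assuming the dev server may not need its own task at all. Purely as a rough sketch of what the production 'web' service might then look like (the image URL and environment values below are placeholders, not from my setup):

web:
  image: <account-id>.dkr.ecr.<region>.amazonaws.com/web:latest  # placeholder ECR image reference
  ports:
    - "3000:3000"
  environment:
    - RAILS_ENV=production
    - RAILS_SERVE_STATIC_FILES=true  # let Rails serve the precompiled packs, or put nginx in front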
I'm trying to create a docker-compose file to run LocalStack (for SQS), one MySQL database, and two services together.
The problem I'm trying to deal with is that the services start to build and run before the queues are created (which I don't want).
Is there a way to make the services sleep? I've tried to use a health check but it didn't make a difference.
Here's how the file looks:
version: "3.8"
services:
localstack:
container_name: "DGT-localstack_main"
image: localstack/localstack
ports:
- "4566:4566" # LocalStack Gateway
- "4510-4559:4510-4559" # external services port range
- "53:53" # DNS config (only required for Pro)
- "53:53/udp" # DNS config (only required for Pro)
- "443:443" # LocalStack HTTPS Gateway (only required for Pro)
environment:
- DEBUG=${DEBUG-}
- PERSISTENCE=${PERSISTENCE-}
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-}
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-} # only required for Pro
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
networks:
- localstack_network
awslocal_cli:
image: amazon/aws-cli
depends_on:
- localstack
entrypoint: /bin/sh -c
networks:
- localstack_network
command: >
'
echo "########### Creating profile ###########"
aws configure set aws_access_key_id ignore
aws configure set aws_secret_access_key ignore
aws configure set region eu-north-1
echo "########### Creating SQS ###########"
aws sqs create-queue --endpoint-url=http://localstack:4566 --queue-name=FIRST_QUEUE
aws sqs create-queue --endpoint-url=http://localstack:4566 --queue-name=SECOND_QUEUE
echo "########### Listing SQS ###########"
aws sqs list-queues --endpoint-url=http://localstack:4566
'
db:
container_name: db
image: mysql:8.0.28
command: --lower_case_table_names=1
ports:
- "3308:3306"
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=maindb
volumes:
- ./db_config/core/data:/var/lib/mysql
networks:
- localhost_network
api:
container_name: Api
image: api:1.0
build:
context: blabla
dockerfile: blabla
ports:
- blabla
env_file: ./Server/common.env
environment:
- blabla
restart: on-failure
depends_on:
- core
networks:
- localhost_network
core:
container_name: Core
image: core:1.0
build:
context: blabla
dockerfile: blabla
ports:
- "5115:80"
env_file: .blabla
environment:
- blabla
restart: on-failure
depends_on:
- localstack
- awslocal_cli
- db
networks:
- localstack_network
- localhost_network
networks:
localstack_network:
localhost_network:
Use the depends_on option.
If you want container B to start after container A, you would need to write your docker-compose file like this:
services:
  A:
    image: ...
    other_settings: ...
  B:
    image: ...
    other_settings: ...
    depends_on:
      - A
This ensures that container B is only started after container A has been started (note that plain depends_on only waits for A to start, not for it to be fully ready).
If there is a URL you can poll to check whether the queue has already been created, you could also try the approach described here (see the sketch below):
https://docs.docker.com/compose/startup-order/
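As a sketch of that idea, using the service names from the compose file above (everything else is illustrative, and it assumes a recent Docker Compose that supports the long depends_on syntax, plus curl and this health endpoint being available in your LocalStack version), core can be held back until LocalStack reports healthy and the queue-creation container has exited successfully:

localstack:
  # ... rest of the service as above ...
  healthcheck:
    test: ["CMD", "curl", "-sf", "http://localhost:4566/_localstack/health"]  # endpoint name may differ per LocalStack version
    interval: 5s
    timeout: 5s
    retries: 10
core:
  # ... rest of the service as above ...
  depends_on:
    localstack:
      condition: service_healthy
    awslocal_cli:
      condition: service_completed_successfully  # wait until the queue-creation commands have finished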
You can try wrapping the start command in the wait-for-it.sh script. A complete example is here: https://docs.docker.com/compose/startup-order/
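A rough illustration of that approach, assuming wait-for-it.sh has been copied into the core image and that ./start-core stands in for whatever actually launches the service:

core:
  # ... rest of the service as above ...
  command: ["./wait-for-it.sh", "localstack:4566", "--", "./start-core"]

Note that this only waits for the LocalStack port to accept connections, not for the queues themselves, so the health-check variant above is closer to what the question asks for.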
I have run into a problem where my backend (NestJS) image-upload function uploads images to a folder inside the backend container (/resources/images/avatar/image1.jpg).
Now I want to access this image from the frontend (ReactJS) container to display the user avatar. However, an UNKNOWN_URL_SCHEME error is displayed.
Here is my docker-compose config:
backend:
  container_name: backend
  build:
    context: ./backend
    target: development
  volumes:
    - ./backend:/usr/src/app
    - /usr/src/app/node_modules
  ports:
    #- ${BACKEND_PORT}:${BACKEND_PORT}
    - 5000:4000
  command: npm run start:dev
  env_file:
    - .env
  networks:
    - webnet
  depends_on:
    - mongodb
    - rabbitmq
and
frontend:
  container_name: Frontend
  build:
    context: ./frontend
    target: development
  volumes:
    - ./frontend:/usr/src/app
    - ./usr/src/app/node_modules
  ports:
    - 3000:3000
  stdin_open: true
  environment:
    - CHOKIDAR_USEPOLLING=true
    - CI=true
  command: npm run start
  networks:
    - webnet
  depends_on:
    - backend
    - ms-consent
    - mongodb
    - rabbitmq
My images are in the root folder of the backend code:
/resources/images/avatar/image1.jpg
Now, in the frontend, I refer to the file location as below:
const image_default_path = 'backend://resources/images';
where I assume backend:// is the container name and I can access the files with it; however, I can't.
I would like to seek your help if you have solved a similar issue.
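One thing I suspect is that backend:// is not a URL scheme the browser understands, and that the container name only resolves inside the Docker network, not in the browser. A rough sketch of what I might need instead, where the frontend is given the backend's host-published address through an environment variable (the variable name is hypothetical; the port matches the 5000:4000 mapping above):

frontend:
  # ... rest of the service as above ...
  environment:
    - CHOKIDAR_USEPOLLING=true
    - CI=true
    - REACT_APP_API_URL=http://localhost:5000  # hypothetical variable read by the React code instead of 'backend://...'

The backend would then also need to serve the /resources/images folder over HTTP so that a URL like http://localhost:5000/resources/images/avatar/image1.jpg actually resolves.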
There is a Ruby on Rails application which uses MongoDB and PostgreSQL databases. When I run it locally everything works fine; however, when I try to run it in a remote container, it throws the error message:
2021-03-14T20:22:27.985+0000 Failed: error connecting to db server: no reachable servers
The docker-compose.yml file defines the following services:
redis, mongodb, db, rails
I start the remote containers with the following commands:
docker-compose build - build successful
docker-compose up -d - containers are up and running
When I connect to the rails container and try to run
bundle exec rake aws:restore_db
the error mentioned above is thrown. I don't know what is wrong here; the mongodb container is up and running.
The docker-compose.yml is shown below:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
volumes:
  db-data:
  mongo-data:
This is how I start all four remote containers:
$ docker-compose up -d
Starting proj_db_1 ... done
Starting proj_redis_1 ... done
Starting proj_mongodb_1 ... done
Starting proj_rails_1 ... done
Please help me understand how the remote containers should interact with each other.
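A check I can run (assuming getent is available in the rails image) to confirm that the other services resolve by their service names from inside the rails container:

docker-compose exec rails sh -c 'getent hosts mongodb'  # should print an IP if the service name resolves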
Your configuration should point to the services by name and not to a port on localhost. For example, if you were connecting to Redis as localhost:6380 or 127.0.0.1:6380, you now need to use redis:6380.
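For example, if the Rails app uses Mongoid for the MongoDB connection, the relevant part of config/mongoid.yml would point at the mongodb service name rather than localhost (the database name below is a placeholder):

development:
  clients:
    default:
      database: proj_development
      hosts:
        - mongodb:27017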
If this still does not help, you can try adding links between the containers so that the names given to them as services can be resolved. The file would then look something like this:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
    links:
      - "mongodb:mongodb"
      - "db:db"
      - "rails:rails"
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
    links:
      - "redis:redis"
      - "db:db"
      - "rails:rails"
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "rails:rails"
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "db:db"
volumes:
  db-data:
  mongo-data:
networks:
  front-end:
The links allow hostnames to be defined in the containers.
The links flag is legacy, and in newer versions of the Docker engine it is not required on user-defined networks; links are also ignored in a Docker Swarm deployment. However, since there are still old installations of Docker and docker-compose around, this is one thing to try while troubleshooting (a links-free variant is sketched below).
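In other words, with a user-defined network alone the same name resolution works without any links, roughly like this (only the relevant keys shown):

services:
  mongodb:
    image: mongo:3.6.13
    networks:
      - front-end
  rails:
    build: .
    networks:
      - front-end
networks:
  front-end: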
I have followed the api-platform tutorial and successfully built and started the application using Docker on my localhost machine.
I have a production server running Ubuntu 16.04.5 LTS, and a newly installed Docker version 18.06.1-ce.
How would I build this code on my local machine and run it on the Docker server?
I have also looked at the Deploying API Platform Applications documentation but I am not sure how to use this.
I am struggling to understand how to get the api-platform build from my local machine onto the server.
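From the docs, my rough understanding of one common workflow (nothing here is specific to api-platform) is to build and push the images from the local machine to a registry the server can reach, then pull and start them on the server:

docker-compose build
docker-compose push   # requires each service's image: to point at a registry you can push to
# then, on the server:
docker-compose pull
docker-compose up -d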
This is the docker-compose.yml file. Please try it and then run docker-compose up -d:
version: '3.4'
services:
  php:
    image: ${CONTAINER_REGISTRY_BASE}/php
    build:
      context: ./api
      target: api_platform_php
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/php
        - ${CONTAINER_REGISTRY_BASE}/nginx
        - ${CONTAINER_REGISTRY_BASE}/varnish
    depends_on:
      - db
    # Comment out these volumes in production
    volumes:
      - ./api:/srv/api:rw,cached
      # If you develop on Linux, uncomment the following line to use a bind-mounted host directory instead
      # - ./api/var:/srv/api/var:rw
  api:
    image: ${CONTAINER_REGISTRY_BASE}/nginx
    build:
      context: ./api
      target: api_platform_nginx
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/php
        - ${CONTAINER_REGISTRY_BASE}/nginx
        - ${CONTAINER_REGISTRY_BASE}/varnish
    depends_on:
      - php
    # Comment out this volume in production
    volumes:
      - ./api/public:/srv/api/public:ro
    ports:
      - "8080:80"
  cache-proxy:
    image: ${CONTAINER_REGISTRY_BASE}/varnish
    build:
      context: ./api
      target: api_platform_varnish
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/php
        - ${CONTAINER_REGISTRY_BASE}/nginx
        - ${CONTAINER_REGISTRY_BASE}/varnish
    depends_on:
      - api
    volumes:
      - ./api/docker/varnish/conf:/usr/local/etc/varnish:ro
    tmpfs:
      - /usr/local/var/varnish:exec
    ports:
      - "8081:80"
  db:
    # In production, you may want to use a managed database service
    image: postgres:10-alpine
    environment:
      - POSTGRES_DB=api
      - POSTGRES_USER=api-platform
      # You should definitely change the password in production
      - POSTGRES_PASSWORD=!ChangeMe!
    volumes:
      - db-data:/var/lib/postgresql/data:rw
      # You may use a bind-mounted host directory instead, so that it is harder to accidentally remove the volume and lose all your data!
      # - ./docker/db/data:/var/lib/postgresql/data:rw
    ports:
      - "5432:5432"
  client:
    # Use a static website hosting service in production
    # See https://github.com/facebookincubator/create-react-app/blob/master/packages/react-scripts/template/README.md#deployment
    image: ${CONTAINER_REGISTRY_BASE}/client
    build:
      context: ./client
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/client
    env_file:
      - ./client/.env
    volumes:
      - ./client:/usr/src/client:rw,cached
      - /usr/src/client/node_modules
    ports:
      - "80:3000"
  admin:
    # Use a static website hosting service in production
    # See https://github.com/facebookincubator/create-react-app/blob/master/packages/react-scripts/template/README.md#deployment
    image: ${CONTAINER_REGISTRY_BASE}/admin
    build:
      context: ./admin
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/admin
    volumes:
      - ./admin:/usr/src/admin:rw,cached
      - /usr/src/admin/node_modules
    ports:
      - "81:3000"
  h2-proxy:
    # Don't use this proxy in prod
    build:
      context: ./h2-proxy
    depends_on:
      - client
      - admin
      - api
      - cache-proxy
    ports:
      - "443:443"
      - "444:444"
      - "8443:8443"
      - "8444:8444"
volumes:
  db-data: {}
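As the comments in that file suggest, in production you would typically run from pre-built registry images and drop the source bind mounts. A hedged, standalone sketch of what a trimmed production file might contain (this is not an official api-platform file; it reuses the ${CONTAINER_REGISTRY_BASE} variable from above):

version: '3.4'
services:
  php:
    image: ${CONTAINER_REGISTRY_BASE}/php
    depends_on:
      - db
  api:
    image: ${CONTAINER_REGISTRY_BASE}/nginx
    depends_on:
      - php
    ports:
      - "8080:80"
  db:
    image: postgres:10-alpine
    environment:
      - POSTGRES_DB=api
      - POSTGRES_USER=api-platform
      - POSTGRES_PASSWORD=!ChangeMe!  # change this before deploying
    volumes:
      - db-data:/var/lib/postgresql/data:rw
volumes:
  db-data: {}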
I'm fairly new to Docker, but I recently discovered something that I just can't wrap my head around. I started a Docker machine:
docker-machine create -d virtualbox machine_name
Created a docker-compose file for my application:
version: '3.3'
services:
  client:
    container_name: client
    build:
      context: ./services/client
      dockerfile: Dockerfile
    volumes:
      - './services/client:/usr/src/app'
    ports:
      - '3007:3000'
    environment:
      - NODE_ENV=development
    depends_on:
      - project
    links:
      - project
  db:
    container_name: db
    build:
      context: ./services/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  project:
    container_name: project
    build: ./services/project
    volumes:
      - './services/project:/usr/src/app'
      - './services/project/package.json:/usr/src/app/package.json'
    ports:
      - 3000:3000
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db:5432/esports_manager_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/esports_manager_test
      - NODE_ENV=${NODE_ENV}
      - TOKEN_SECRET=tempsectre
    depends_on:
      - db
    links:
      - db
I then SSH'd into the Docker machine and found my entire filesystem there. Is this intended behaviour? I can't seem to find anything in the docs that talks about it.
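(A possible explanation, assuming a macOS or Windows host: the virtualbox driver shares the host's user directory into the VM by default (/Users on macOS, C:\Users on Windows), which is also what makes relative bind mounts such as ./services/client:/usr/src/app work. A quick way to inspect the shared mounts, using the machine name created above:)

docker-machine ssh machine_name "mount | grep -i users"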