docker-compose services not able to communicate with each other

Below is my docker-compose.yml file, which has three services.
version: '3.0'
services:
  mongodb:
    build: ./mongodb
    image: mongo:1.0
    container_name: mongodb
    ports:
      - "27017:27017"
    networks:
      - appNetwork
  node-service:
    build: ./node-service
    image: node-service-live:1.0
    container_name: node-service
    command: sh -c "dockerize -wait tcp://mongodb:27017 -timeout 1m && npm start"
    expose:
      - "3031"
    ports:
      - "3031:3031"
    networks:
      - appNetwork
  angular-app:
    build: ./angular-app
    image: angular-app-live:1.0
    container_name: angular-app
    command: ng serve --host 0.0.0.0 --port 4201 --disable-host-check
    ports:
      - "4201:4201"
    networks:
      - appNetwork
networks:
  appNetwork:
    external: true
When I execute docker-compose up, node-service is able to connect to the mongodb service using this connection string: mongodb://mongodb:27017/DBName?authMechanism=DEFAULT.
But the angular-app service can't communicate with node-service despite being on the same network. In angular-app I am using the following URL to reach node-service: http://node-service:3031.
What am I missing?
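Since no answer is shown, a quick diagnostic (a sketch; the container and service names are taken from the compose file above) is to test container-to-container DNS directly:

```shell
# Run from the host: ask the angular-app container to fetch from node-service
# over the shared appNetwork. If this succeeds, Docker networking is fine.
docker exec angular-app sh -c 'wget -qO- http://node-service:3031 || echo unreachable'
```

If that request works, the usual culprit (an assumption here) is that the Angular app's HTTP calls execute in the host browser, which is not attached to appNetwork; in that case the client code must use the host-published port instead, e.g. http://localhost:3031.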

Related

Redis, sidekiq and docker-compose using AWS elasticache(clusters)

The main problem I ran into when I started this project is wiring Sidekiq up to the Redis connection and using an ElastiCache Redis cluster instead of the local redis service.
This is my docker-compose file:
version: "3.3"
services:
  db:
    image: postgres:12.9-alpine
    ports:
      - "5432:5432"
    volumes:
      - db_data_postgres:/var/lib/postgresql/data
    networks:
      appnet:
        ipv4_address: 172.20.0.3
  redis:
    image: redis:latest
    restart: always
    ports:
      - '6379:6379'
    command: bash -c "redis-server" # what should go here?
    networks:
      appnet:
        ipv4_address: 172.20.0.4
  app:
    build:
      context: .
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
      - node_modules:/app/node_modules
    depends_on:
      - db
      - worker_sidekiq
    links:
      - db
    ports:
      - "8000:8000"
    env_file:
      - .env
    environment:
      - REDIS_URL=redis://redis:6379
    command: bash -c "rm -f tmp/pids/server.pid && RAILS_ENV=development bin/rails s -b '0.0.0.0' -p 8000"
    networks:
      appnet:
        ipv4_address: 172.20.0.5
  worker_sidekiq:
    build: .
    image: app
    command: bash -c "bundle exec sidekiq"
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
      - node_modules:/app/node_modules
    depends_on:
      - redis
    environment:
      - REDIS_URL=redis://redis:6379
    networks:
      appnet:
        ipv4_address: 172.20.0.6
networks:
  appnet:
    ipam:
      config:
        - subnet: 172.20.0.0/16
volumes:
  gem_cache:
  db_data_postgres:
  node_modules:
I've tried command: redis-cli -h *.*.use1.cache.amazonaws.com -p 6379 but I got errors on start.
I have been trying to figure this out for a while and I don't know how to link the AWS Redis cluster in my docker-compose file. Any ideas?
ElastiCache cannot be reached from outside AWS by default. You can connect to these clusters through a bastion host or AWS Client VPN.
AWS official doc: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html#access-from-outside-aws
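As a sketch of the bastion-host approach (the hostnames and key path below are placeholders, not values from the question), an SSH tunnel can forward the cluster port to the local machine:

```shell
# Forward local port 6379 to the ElastiCache endpoint through a bastion host
# that lives inside the same VPC as the cluster.
# <cluster-endpoint> and <bastion-host> are placeholders for your own values.
ssh -i ~/.ssh/bastion_key.pem -N -L 6379:<cluster-endpoint>:6379 ec2-user@<bastion-host>
```

With the tunnel up, containers on Docker Desktop could then point REDIS_URL at redis://host.docker.internal:6379. Note this is only workable for development; for production the app should run inside the VPC and use the cluster endpoint directly.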

Docker compose container fails to read the application running in different container and in same network

I have a docker-compose.yml file that contains frontend, backend, testing, postgres, and pgadmin containers. All of the containers except testing are able to communicate with each other, but the testing container fails to communicate with the backend and frontend containers in docker-compose.
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.local
    ports:
      - '3000:3000'
    networks:
      - test-network
    environment:
      # For the frontend these can only be applied during the build!
      # (they are applied when TS is compiled)
      # You have to rebuild manually without cache if one of these changes, at least for prod mode.
      - REACT_APP_BACKEND_API=http://localhost:8000/api/v1
      - REACT_APP_GOOGLE_CLIENT_ID=1234567dfghjjnfd
      - CI=true
      - CHOKIDAR_USEPOLLING=true
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - test-network
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: "dev@dev.com"
      PGADMIN_DEFAULT_PASSWORD: dev
    volumes:
      - pgadmin:/root/.pgadmin
      - ./pgadmin-config/servers.json:/pgadmin4/servers.json
    ports:
      - "5050:80"
    networks:
      - test-network
    restart: unless-stopped
  backend:
    container_name: test-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.local
    ports:
      - '8000:80'
    volumes:
      - ./backend:/app
    command: >
      bash -c "alembic upgrade head
      && exec /start-reload.sh"
    networks:
      - test-network
    depends_on:
      - postgres
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/app/.secret/secret.json
      - APP_DB_CONNECTION_STRING=postgresql+psycopg2://dev:dev@postgres:5432/postgres
      - LOG_LEVEL=debug
      - SQLALCHEMY_ECHO=True
      - AUTH_ENABLED=True
      - CORS=*
      - GCP_ALLOWED_DOMAINS=*
  testing:
    container_name: test-testing
    build:
      context: ./testing
      dockerfile: Dockerfile
    volumes:
      - ./testing:/isp-app
    command: >
      bash -c "/wait
      && robot ."
    networks:
      - test-network
    depends_on:
      - backend
      - frontend
    environment:
      - WAIT_HOSTS= frontend:3000, backend:8000
      - WAIT_TIMEOUT= 3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300
volumes:
  postgres:
  pgadmin:
networks:
  test-network:
    driver: bridge
All the containers are attached to test-network. When the testing container tries to connect to frontend:3000 or backend:8000, it throws "Host [ backend:8000] not yet available".
How can I fix it?
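One likely cause, judging from the config above (an assumption, since no accepted answer is shown): the backend publishes host port 8000 but listens on container port 80 (ports: '8000:80'), and container-to-container traffic on test-network must use the container-side port. The stray spaces after = in the WAIT_ variables may also confuse the /wait script's parsing. A corrected environment block for the testing service might look like:

```yaml
    environment:
      # Use container-side ports: 8000 is only the host-published port for backend.
      - WAIT_HOSTS=frontend:3000,backend:80
      - WAIT_TIMEOUT=3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300
```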

Preventing development containers from running in production

I have a docker-compose.yml file that includes a container for API mocks as well as phpmyadmin and mongo-express containers, none of which should be started in my production environment.
I already have separate .env files for production and development. Is it possible to use variables from the active .env file to disable a container?
Here is my docker-compose.yml:
services:
  mysql:
    build: ./docker/mysql
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysql
    entrypoint: sh -c "/usr/local/bin/docker-entrypoint.sh --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci"
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USERNAME}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_ONETIME_PASSWORD=yes
    ports:
      - 3306:3306
    restart: unless-stopped
    volumes:
      - ./data/mysql:/var/lib/mysql
  phpmyadmin:
    container_name: phpmyadmin
    environment:
      - PMA_HOST=mysql
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
  mongo:
    build: ./docker/mongo
    container_name: mongo
    environment:
      - MONGO_INITDB_DATABASE=${MONGO_DATABASE}
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASSWORD}
    ports:
      - 27017:27017
    restart: unless-stopped
    volumes:
      - ./data/mongo:/data/db
  mongo-express:
    build: ./docker/mongo-express
    container_name: mongo-express
    depends_on:
      - mongo
    environment:
      - ME_CONFIG_BASICAUTH_PASSWORD=redacted
      - ME_CONFIG_BASICAUTH_USERNAME=username
      - ME_CONFIG_MONGODB_ADMINUSERNAME=${MONGO_USERNAME}
      - ME_CONFIG_MONGODB_ADMINPASSWORD=${MONGO_PASSWORD}
    ports:
      - 8081:8081
  redis:
    build: ./docker/redis
    container_name: redis
    ports:
      - 6379:6379
    restart: unless-stopped
    volumes:
      - ./data/redis:/data
  mock-apis:
    build: ./docker/mock-apis
    container_name: mock-apis
    command: >
      /initscript.bash
    ports:
      - 81:80
    volumes:
      - ./mock-apis:/home/nodejs
  php-fpm:
    build:
      context: ./docker/php-fpm
      args:
        HOST_UID: ${HOST_UID}
    command: >
      /initscript.bash
    container_name: php-fpm
    restart: unless-stopped
    depends_on:
      - mongo
      - mysql
      - redis
    volumes:
      - ./laravel:/var/www/
  nginx:
    build: ./docker/nginx
    container_name: nginx
    depends_on:
      - php-fpm
    ports:
      - 80:80
    restart: unless-stopped
    volumes:
      - ./laravel:/var/www/
version: "3"
I'm using profiles to scope my services. If I want to use phpMyAdmin only in dev, I add this profile to the service:
phpmyadmin:
  container_name: phpmyadmin
  environment:
    - PMA_HOST=mysql
  image: phpmyadmin/phpmyadmin
  ports:
    - 8080:80
  profiles: ["dev"]
Now I have to tell Docker Compose that I want to use the dev profile, otherwise the service will not start.
You can use one of these commands (this way you have to type --profile your_profile for each profile):
$ docker-compose --profile dev up -d
$ docker-compose --profile dev --profile profile2 up -d  # for multiple profiles
Or, the cleaner way, you can separate your profiles with a comma:
$ COMPOSE_PROFILES=dev,profile2 docker-compose up -d
Services without a profiles attribute will always be enabled.
Be aware that when you stop your services, you have to specify the profile too:
$ COMPOSE_PROFILES=dev docker-compose down
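Since the question asks about driving this from the active .env file: Compose also reads COMPOSE_PROFILES from an env file, so a per-environment file can select the profiles without extra flags. A sketch (the file names are assumptions, not from the original post; --env-file requires docker-compose 1.28+):

```shell
# .env.development contains: COMPOSE_PROFILES=dev
# .env.production has no COMPOSE_PROFILES line, so only unprofiled services start.
docker-compose --env-file .env.development up -d
docker-compose --env-file .env.production up -d
```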

docker container port map with remote IP (54.xxx.xxx.23) on local development

I'm working as a DevOps engineer on some projects where I am facing an issue.
I have one docker-compose.yml which works fine with a local IP like 192.168.0.38, but I want to map it to my AWS IP (54.xxx.xxx.23) instead of the local host IP.
version: '3'
services:
  api:
    build: ./api
    image: api
    environment:
      - PYTHONUNBUFFERED=1
    expose:
      - ${scikiqapiport}
    ports:
      - ${scikiqapiport}:${scikiqapiport}
    command: "python3 manage.py makemigrations"
    command: "chmod -R 777 ./scikiq/scikiq/static:rw"
    command: "python3 manage.py migrate"
    command: "gunicorn --workers=3 --bind=0.0.0.0:${scikiqapiport} wsgi"
    restart: on-failure
    depends_on:
      - base
    volumes:
      - "../compressfile:/home/data/arun/compressfile"
      - "static:/home/data/arun/scikiq/scikiq/static:rw"
  scikiqweb:
    build: ./web
    image: web
    ports:
      - ${scikiqwebport}
    command: "gunicorn --workers=3 --bind=0.0.0.0:${scikiqwebport} wsgi"
    restart: on-failure
    depends_on:
      - base
  nginx:
    image: nginx
    ports:
      - ${scikiqwebport}:80
    volumes:
      - ./nginx:/etc/nginx/conf.d
    depends_on:
      - scikiqweb1
  base:
    build: ./base-image
    image: scikiq_base
volumes:
  compressfile:
  static:
Your help will be appreciated.
Thank you.
Putting the public IP where the local IP was used works.
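For reference, Compose also accepts a host address in the port mapping if the goal is to bind published ports to one interface instead of all of them. A sketch using the api service from above (note that on EC2 a public IP is usually NAT-ed to the instance, so the bindable address is typically the instance's private IP, not 54.xxx.xxx.23 itself):

```yaml
    ports:
      # host_ip:host_port:container_port - binds only on that interface
      - "54.xxx.xxx.23:${scikiqapiport}:${scikiqapiport}"
```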

Multiple services with different ports and the same domain using jwilder/nginx-proxy

I have some services in docker-compose:
version: "3"
services:
  site:
    volumes:
      - .:/app
    build:
      dockerfile: Dockerfile.dev
      context: docker
    ports:
      - "80:80"
  webpack:
    image: node:6.12.0
    ports:
      - "8080:8080"
    volumes:
      - .:/app
    working_dir: /app
    command: bash -c "yarn install; yarn run gulp server"
  db:
    image: mysql:5.7.20
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: ${DB_NAME}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
And I can connect to the exposed ports of the services:
Site -- localhost:80
Webpack -- localhost:8080
MySQL: -- localhost:3306
How can I use nginx-proxy to expose multiple ports of different services on the same domain:
Site -- example.dev:80
Webpack -- example.dev:8080
MySQL: -- example.dev:3306
This works:
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  site:
    volumes:
      - .:/app
    build:
      dockerfile: Dockerfile.dev
      context: docker
    expose:
      - 80
    environment:
      VIRTUAL_HOST: ${VIRTUAL_HOST}
But this does not:
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  site:
    volumes:
      - .:/app
    build:
      dockerfile: Dockerfile.dev
      context: docker
    expose:
      - 80
    environment:
      VIRTUAL_HOST: ${VIRTUAL_HOST}
  webpack:
    image: node:6.12.0
    expose:
      - 8080
    environment:
      VIRTUAL_HOST: ${VIRTUAL_HOST}
      VIRTUAL_PORT: 8080
    volumes:
      - .:/app
    working_dir: /app
    command: bash -c "yarn install; yarn run gulp server"
What am I doing wrong? How can I solve this problem?
// Sorry for my bad English. I hope you'll understand me.
Update:
This is just an example. In the future I'll make the proxy an external network and connect services to it. And I want to run two docker-compose projects on the same host (VPS). Purpose: production and test versions on the same host, using the same ports but different domains. For example:
example.com -- Web Site
example.com:81 -- PhpMyAdmin
test.example.com -- Web Site for testing
test.example.com:81 -- PhpMyAdmin for testing
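The external-network plan from the update can be sketched like this (the network and host names below are assumptions, not from the post). nginx-proxy routes by hostname (VIRTUAL_HOST) on the port(s) the proxy itself publishes, so the usual pattern is one subdomain per service rather than one port per service; it also only proxies HTTP, so MySQL cannot be fronted this way:

```yaml
# Proxy project: creates the shared network and owns port 80.
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy
networks:
  proxy:
    name: proxy
---
# Each app project joins the existing network with its own hostnames.
services:
  site:
    build:
      dockerfile: Dockerfile.dev
      context: docker
    expose:
      - 80
    environment:
      VIRTUAL_HOST: example.com
    networks:
      - proxy
  webpack:
    image: node:6.12.0
    expose:
      - 8080
    environment:
      VIRTUAL_HOST: webpack.example.com
      VIRTUAL_PORT: 8080
    networks:
      - proxy
networks:
  proxy:
    external: true
```

A test deployment would use the same file with test.example.com hostnames; both projects can then coexist behind the single proxy.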