Redis, Sidekiq and docker-compose using AWS ElastiCache (clusters)

The main problem I ran into when I started this project is how to give Sidekiq its Redis connection and how to use an ElastiCache Redis cluster instead of the local Redis service.
This is my docker-compose file:
version: "3.3"
services:
  db:
    image: postgres:12.9-alpine
    ports:
      - "5432:5432"
    volumes:
      - db_data_postgres:/var/lib/postgresql/data
    networks:
      appnet:
        ipv4_address: 172.20.0.3
  redis:
    image: redis:latest
    restart: always
    ports:
      - '6379:6379'
    command: bash -c "redis-server" # what should go here?
    networks:
      appnet:
        ipv4_address: 172.20.0.4
  app:
    build:
      context: .
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
      - node_modules:/app/node_modules
    depends_on:
      - db
      - worker_sidekiq
    links:
      - db
    ports:
      - "8000:8000"
    env_file:
      - .env
    environment:
      - REDIS_URL=redis://redis:6379
    command: bash -c "rm -f tmp/pids/server.pid && RAILS_ENV=development bin/rails s -b '0.0.0.0' -p 8000"
    networks:
      appnet:
        ipv4_address: 172.20.0.5
  worker_sidekiq:
    build: .
    image: app
    command: bash -c "bundle exec sidekiq"
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
      - node_modules:/app/node_modules
    depends_on:
      - redis
    environment:
      - REDIS_URL=redis://redis:6379
    networks:
      appnet:
        ipv4_address: 172.20.0.6
networks:
  appnet:
    ipam:
      config:
        - subnet: 172.20.0.0/16
volumes:
  gem_cache:
  db_data_postgres:
  node_modules:
I've tried command: redis-cli -h *.*.use1.cache.amazonaws.com -p 6379 but I got errors on start.
I have tried to figure this out for a while and I don't know how to link the AWS Redis cluster in my docker-compose file. Any ideas?

ElastiCache cannot be connected to from outside AWS by default. You can reach these clusters through a bastion host or AWS Client VPN.
AWS Official Doc: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html#access-from-outside-aws
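Once connectivity exists (for example, when the compose stack runs on an EC2 host in the same VPC), the usual pattern is not to change the redis-server command at all, but to drop the local redis service and point REDIS_URL at the cluster endpoint instead. A minimal sketch; the endpoint below is a placeholder, not the real one:

```yaml
# docker-compose.override.yml (sketch) -- replace the placeholder endpoint
# with the cluster's configuration endpoint from the ElastiCache console
services:
  app:
    environment:
      # Rails and Sidekiq both pick up REDIS_URL by default
      - REDIS_URL=redis://my-cluster.xxxxxx.use1.cache.amazonaws.com:6379
  worker_sidekiq:
    environment:
      - REDIS_URL=redis://my-cluster.xxxxxx.use1.cache.amazonaws.com:6379
```

With this in place the local redis service (and the ipv4_address entry pointing at it) can be removed entirely in environments that use ElastiCache.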

Related

Docker compose container fails to reach an application running in a different container on the same network

I have a docker-compose.yml file which contains frontend, backend, testing, postgres and pgadmin containers. All the containers except testing are able to communicate with each other, but the testing container fails to communicate with the backend and frontend containers in docker-compose.
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.local
    ports:
      - '3000:3000'
    networks:
      - test-network
    environment:
      # For the frontend these can be applied only during the build!
      # (they are applied when TS is compiled)
      # You have to rebuild manually without cache if one of those changes, at least for prod mode.
      - REACT_APP_BACKEND_API=http://localhost:8000/api/v1
      - REACT_APP_GOOGLE_CLIENT_ID=1234567dfghjjnfd
      - CI=true
      - CHOKIDAR_USEPOLLING=true
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - test-network
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: "dev@dev.com"
      PGADMIN_DEFAULT_PASSWORD: dev
    volumes:
      - pgadmin:/root/.pgadmin
      - ./pgadmin-config/servers.json:/pgadmin4/servers.json
    ports:
      - "5050:80"
    networks:
      - test-network
    restart: unless-stopped
  backend:
    container_name: test-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.local
    ports:
      - '8000:80'
    volumes:
      - ./backend:/app
    command: >
      bash -c "alembic upgrade head
      && exec /start-reload.sh"
    networks:
      - test-network
    depends_on:
      - postgres
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/app/.secret/secret.json
      - APP_DB_CONNECTION_STRING=postgresql+psycopg2://dev:dev@postgres:5432/postgres
      - LOG_LEVEL=debug
      - SQLALCHEMY_ECHO=True
      - AUTH_ENABLED=True
      - CORS=*
      - GCP_ALLOWED_DOMAINS=*
  testing:
    container_name: test-testing
    build:
      context: ./testing
      dockerfile: Dockerfile
    volumes:
      - ./testing:/isp-app
    command: >
      bash -c "/wait
      && robot ."
    networks:
      - test-network
    depends_on:
      - backend
      - frontend
    environment:
      - WAIT_HOSTS= frontend:3000, backend:8000
      - WAIT_TIMEOUT= 3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300
volumes:
  postgres:
  pgadmin:
networks:
  test-network:
    driver: bridge
All the containers are attached to test-network, but when the testing container tries to connect to frontend:3000 or backend:8000, it throws "Host [ backend:8000] not yet available".
How to fix it?
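Two details of the compose file above are likely culprits (stated here as assumptions, since only the error message is given): backend publishes "8000:80", so inside test-network it listens on its container port 80 and port 8000 exists only on the host; and the spaces after WAIT_HOSTS= become part of the hostname, which matches the "[ backend:8000]" in the error. A sketch of the corrected testing environment:

```yaml
# testing service (sketch): no spaces in the list, and the backend's
# container port (80), not the host-published port (8000)
environment:
  - WAIT_HOSTS=frontend:3000,backend:80
  - WAIT_TIMEOUT=3000
  - WAIT_SLEEP_INTERVAL=300
  - WAIT_HOST_CONNECT_TIMEOUT=300
```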

Attribute Error in Celery Beat Due to DatabaseScheduler

I am trying to use Celery for asynchronous jobs, using Celery, Docker and DigitalOcean.
My docker-compose file is shown below.
As you can see, there is a celery beat section.
The celery beat service passes "django_celery_beat.schedulers:DatabaseScheduler" as the scheduler, and as far as I understand it cannot find django_celery_beat.schedulers:DatabaseScheduler. I could not figure out how to solve the problem.
version: '3.3'
services:
  web:
    build: .
    image: proje
    command: gunicorn -b 0.0.0.0:8000 proje.wsgi -w 4 --timeout 300 -t 80
    restart: unless-stopped
    tty: true
    env_file:
      - ./.env.production
    networks:
      - app-network
    depends_on:
      - migration
      - database
      - redis
    healthcheck:
      test: ["CMD", "wget", "http://localhost/healthcheck"]
      interval: 3s
      timeout: 3s
      retries: 10
  celery:
    image: proje
    command: celery -A proje worker -l info -n worker1@%%h
    restart: unless-stopped
    networks:
      - app-network
    environment:
      - DJANGO_SETTINGS_MODULE=proje.settings
    env_file:
      - ./.env.production
    depends_on:
      - redis
  celerybeat:
    image: proje
    command: celery -A proje beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
    restart: unless-stopped
    networks:
      - app-network
    environment:
      - DJANGO_SETTINGS_MODULE=proje.settings
    env_file:
      - ./.env.production
    depends_on:
      - redis
  migration:
    image: proje
    command: python manage.py migrate
    volumes:
      - .:/usr/src/app/
    env_file:
      - ./.env.production
    depends_on:
      - database
    networks:
      - app-network
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./static/:/var/www/static/
      - ./conf/nginx/:/etc/nginx/conf.d/
      - webserver-logs:/var/log/nginx/
    networks:
      - app-network
  database:
    image: "postgres:12" # use latest official postgres version
    restart: unless-stopped
    env_file:
      - .databaseenv # configure postgres
    ports:
      - "5432:5432"
    volumes:
      - database-data:/var/lib/postgresql/data/
    networks:
      - app-network
  redis:
    image: "redis:5.0.8"
    restart: unless-stopped
    command: [ "redis-server", "/redis.conf" ]
    working_dir: /var/lib/redis
    ports:
      - "6379:6379"
    volumes:
      - ./conf/redis/redis.conf:/redis.conf
      - redis-data:/var/lib/redis/
    networks:
      - app-network

# Docker networks
networks:
  app-network:
    driver: bridge
volumes:
  database-data:
  webserver-logs:
  redis-data:
And it gives me the result depicted below. I have been stuck on this in my project for months.
Any help will be appreciated.
I have uploaded all of this to an Ubuntu server and there it worked, so I think my computer (Windows 10) has some incompatibility.
Thanks.
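This error usually means the django-celery-beat package is missing from the image or not registered with Django, since DatabaseScheduler is provided by that package rather than by Celery itself. A minimal sketch of the pieces that must be in place (the project name proje is taken from the compose file above):

```python
# settings.py (sketch) -- DatabaseScheduler comes from django-celery-beat:
#   pip install django-celery-beat    (must be in the image's requirements)
#   python manage.py migrate          (creates its schedule tables)
INSTALLED_APPS = [
    # ... the project's existing apps ...
    "django_celery_beat",
]
```

If the package is installed locally but not listed in the image's requirements, the containers built on the server and on Windows can easily end up with different contents, which would explain it working in one place and not the other.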

Preventing development containers from running in production

I have a docker-compose.yml file that includes a container for API mocks as well as phpmyadmin and mongo-express containers, none of which should be started in my production environment.
I already have separate .env files for production and development. Is it possible to use variables from the active .env file to disable a container?
Here is my docker-compose.yml:
services:
  mysql:
    build: ./docker/mysql
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysql
    entrypoint: sh -c "/usr/local/bin/docker-entrypoint.sh --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci"
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USERNAME}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_ONETIME_PASSWORD=yes
    ports:
      - 3306:3306
    restart: unless-stopped
    volumes:
      - ./data/mysql:/var/lib/mysql
  phpmyadmin:
    container_name: phpmyadmin
    environment:
      - PMA_HOST=mysql
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
  mongo:
    build: ./docker/mongo
    container_name: mongo
    environment:
      - MONGO_INITDB_DATABASE=${MONGO_DATABASE}
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASSWORD}
    ports:
      - 27017:27017
    restart: unless-stopped
    volumes:
      - ./data/mongo:/data/db
  mongo-express:
    build: ./docker/mongo-express
    container_name: mongo-express
    depends_on:
      - mongo
    environment:
      - ME_CONFIG_BASICAUTH_PASSWORD=redacted
      - ME_CONFIG_BASICAUTH_USERNAME=username
      - ME_CONFIG_MONGODB_ADMINUSERNAME=${MONGO_USERNAME}
      - ME_CONFIG_MONGODB_ADMINPASSWORD=${MONGO_PASSWORD}
    ports:
      - 8081:8081
  redis:
    build: ./docker/redis
    container_name: redis
    ports:
      - 6379:6379
    restart: unless-stopped
    volumes:
      - ./data/redis:/data
  mock-apis:
    build: ./docker/mock-apis
    container_name: mock-apis
    command: >
      /initscript.bash
    ports:
      - 81:80
    volumes:
      - ./mock-apis:/home/nodejs
  php-fpm:
    build:
      context: ./docker/php-fpm
      args:
        HOST_UID: ${HOST_UID}
    command: >
      /initscript.bash
    container_name: php-fpm
    restart: unless-stopped
    depends_on:
      - mongo
      - mysql
      - redis
    volumes:
      - ./laravel:/var/www/
  nginx:
    build: ./docker/nginx
    container_name: nginx
    depends_on:
      - php-fpm
    ports:
      - 80:80
    restart: unless-stopped
    volumes:
      - ./laravel:/var/www/
version: "3"
I'm using profiles to scope my services. If I want to use phpMyAdmin only in dev, I add this profile to the service:
  phpmyadmin:
    container_name: phpmyadmin
    environment:
      - PMA_HOST=mysql
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    profiles: ["dev"]
So now you have to tell docker-compose which profiles you want to enable, otherwise those services will not start.
You can use one of these commands (this way you have to type --profile your_profile for each profile):
$ docker-compose --profile dev up -d
$ docker-compose --profile dev --profile profil2 up -d <- for multiple profiles
Or, the cleaner way, you can separate your profiles with a comma:
$ COMPOSE_PROFILES=dev,profil2 docker-compose up -d
Services without a profiles attribute will always be enabled.
Note that when you stop your services you have to specify the profile too:
$ COMPOSE_PROFILES=dev docker-compose down
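With recent versions of Compose (1.29+), COMPOSE_PROFILES can also live in the .env file next to docker-compose.yml, so plain docker-compose up -d and docker-compose down pick up the profiles without any extra typing; a sketch:

```
# .env (read automatically by docker-compose from the project directory)
COMPOSE_PROFILES=dev
```

This fits the question's setup of separate .env files per environment: only the development .env needs the line, so the dev-only services never start in production.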

docker-compose services not able to communicate with each other

Below is my docker-compose.yml file with 3 services.
version: '3.0'
services:
  mongodb:
    build: ./mongodb
    image: mongo:1.0
    container_name: mongodb
    ports:
      - "27017:27017"
    networks:
      - appNetwork
  node-service:
    build: ./node-service
    image: node-service-live:1.0
    container_name: node-service
    command:
      sh -c "dockerize -wait tcp://mongodb:27017 -timeout 1m && npm start"
    expose:
      - "3031"
    ports:
      - "3031:3031"
    networks:
      - appNetwork
  angular-app:
    build: ./angular-app
    image: angular-app-live:1.0
    container_name: angular-app
    command: ng serve --host 0.0.0.0 --port 4201 --disable-host-check
    ports:
      - "4201:4201"
    networks:
      - appNetwork
networks:
  appNetwork:
    external: true
When I execute docker-compose up, node-service is able to connect to the mongodb service using this connection string: mongodb://mongodb:27017/DBName?authMechanism=DEFAULT.
But the angular-app service can't communicate with node-service despite being on the same network. In angular-app I am using the following URL (http://node-service:3031) to connect to node-service.
What am I missing?
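One likely explanation (an assumption, since only the compose file is shown): the Angular code runs in the host's browser, not inside the angular-app container, so Docker's DNS never sees its requests and the service name node-service cannot resolve. Since node-service publishes "3031:3031", the browser-side code should use the host-mapped port instead. A sketch; the file and field names are assumptions:

```typescript
// src/environments/environment.ts (sketch)
// The browser cannot resolve the Docker DNS name "node-service";
// it must go through the port published to the host ("3031:3031").
export const environment = {
  production: false,
  apiUrl: 'http://localhost:3031',
};
```

Server-to-server calls (like node-service to mongodb) stay on the Docker network and keep using service names.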

Can't connect to selenium-hub from Rails sidekiq with docker-compose.yml

I'm trying to run Selenium from a Sidekiq worker with docker-compose.
It works well if I run the job from a Rails task, but it doesn't work when I run it from Sidekiq.
I get this error when I run the job from Sidekiq:
Errno::EADDRNOTAVAIL: Failed to open TCP connection to localhost:4444 (Cannot assign requested address - connect(2) for "localhost" port 4444)
docker-compose.yml
version: '3'
services:
  db:
    image: mysql
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
  redis:
    image: redis:latest
    ports:
      - 6379:6379
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/myapp
    depends_on:
      - db
      - redis
  selenium-hub:
    image: selenium/hub:3.12.0-boron
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:3.12.0-boron
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  firefox:
    image: selenium/node-firefox:3.12.0-boron
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
Please suggest how to fix this problem.
I have it working with a docker-compose.yml like this:
version: '3.3'
services:
  selenium-hub:
    container_name: selenium_hub
    image: selenium/hub:3.12.0-cobalt
    ports:
      - 4444:4444
    networks:
      - selenium_grid
  selenium-chrome:
    container_name: selenium_chrome
    image: selenium/node-chrome:3.12.0-cobalt
    environment:
      - HUB_HOST=selenium_hub
      - HUB_PORT=4444
    volumes:
      - /dev/shm:/dev/shm
    networks:
      - selenium_grid
    depends_on:
      - selenium-hub
  selenium-firefox:
    container_name: selenium_firefox
    image: selenium/node-firefox:3.12.0-cobalt
    environment:
      - HUB_HOST=selenium_hub
      - HUB_PORT=4444
    volumes:
      - /dev/shm:/dev/shm
    networks:
      - selenium_grid
    depends_on:
      - selenium-hub
networks:
  selenium_grid:
    driver: bridge
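The compose file above fixes the grid itself, but the original error also shows the worker dialing localhost:4444, which inside the sidekiq container points at the container itself, not at the hub. The worker has to target the hub by its compose DNS name. A sketch, not the poster's actual code; SELENIUM_URL is an assumed variable, and the gem version is chosen to match the 3.12.0 hub images:

```ruby
# selenium_client.rb (sketch) -- gem "selenium-webdriver", "~> 3.12"
require "selenium-webdriver"

# Inside the compose network the hub resolves as "selenium-hub" (or whatever
# the service is named), never as "localhost"
SELENIUM_URL = ENV.fetch("SELENIUM_URL", "http://selenium-hub:4444/wd/hub")

def remote_driver
  # :remote sends WebDriver commands to the hub instead of a local browser
  Selenium::WebDriver.for(:remote, url: SELENIUM_URL, desired_capabilities: :chrome)
end
```

The Rails task presumably worked because it ran on the host, where the published port made localhost:4444 valid; the Sidekiq job runs inside a container, where it is not.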
