I need to configure Sidekiq in Docker. Below is the docker-compose configuration I am using for the build. I am getting a "cannot locate specified Dockerfile" error while building the sidekiq service; without the sidekiq configuration, docker-compose build succeeds. How do I configure a Sidekiq server in Docker?
version: '3'
volumes:
  postgres_data: {}
services:
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  app:
    build:
      context: .
      dockerfile: /Users/admin/git/generic/myapp/docker/app/Dockerfile
    depends_on:
      - db
    ports:
      - 3000:3000
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
  web:
    build:
      context: .
      dockerfile: /Users/admin/git/generic/myapp/docker/web/Dockerfile
    depends_on:
      - app
    ports:
      - 80:80
  sidekiq:
    build: .
    command: bundle exec sidekiq
    depends_on:
      - redis
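A hedged guess at the build error (the paths below only mirror the ones already in this file): with build: . Compose looks for a file literally named Dockerfile in the project root, and when dockerfile: is given, the path is normally resolved relative to the build context, so an absolute path under /Users/... can fail to resolve. A sketch of the sidekiq service reusing the same Dockerfile as the app service, with context-relative paths:

sidekiq:
  build:
    context: .
    dockerfile: docker/app/Dockerfile  # relative to the context above
  command: bundle exec sidekiq
  depends_on:
    - redis
    - app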
I migrated from Redis to RabbitMQ.
I start my project with docker-compose, and I have a problem with Celery tasks since migrating from Redis to the RabbitMQ broker.
Celery doesn't pick up the old tasks when I reload all the containers.
Celery just prints plain startup logs without fetching the old tasks from RabbitMQ.
[2022-10-27 21:44:39,263: INFO/MainProcess] Connected to amqp://admin:**@rabbitmq:5672//
[2022-10-27 21:44:39,293: INFO/MainProcess] mingle: searching for neighbors
[2022-10-27 21:44:40,349: INFO/MainProcess] mingle: all alone
[2022-10-27 21:44:40,414: INFO/MainProcess] celery@29441ac7ffed ready.
docker-compose.yaml
version: "2.2"
services:
nginx:
build:
context: ./nginx
dockerfile: Dockerfile
container_name: nginx
restart: always
ports:
- ${PUB_PORT}:80
volumes:
- static_volume:/var/www/static
- ./backend/mediafiles:/var/www/media
depends_on:
- django
django:
build:
context: ./backend
dockerfile: Dockerfile.prod
container_name: backend
restart: always
env_file:
- ./.env.prod
environment:
- IS_DOCKER=True
- DJANGO_SETTINGS_MODULE=core.settings.production
volumes:
- static_volume:/django/staticfiles
- ./backend/mediafiles:/django/mediafiles
- ./backend:/django # only for local development
depends_on:
postgres:
condition: service_healthy
aiogram:
build:
context: ./telegram_bot
dockerfile: Dockerfile
container_name: telegram_bot
restart: always # crash: not found token
command: ["python", "main.py"]
volumes:
- ./backend/mediafiles:/bot/mediafiles
env_file:
- ./.env.prod
environment:
- IS_DOCKER=True
depends_on:
- django
postgres:
image: postgres:13.0-alpine
container_name: project_db
restart: always
volumes:
- postgres_volume:/var/lib/postgresql/data
depends_on:
- redis
ports:
- 54321:5432
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
healthcheck:
test: ["CMD","pg_isready", "--username=${POSTGRES_USER}","-d", "{POSTGRES_DB}"]
redis:
build: ./redis
ports:
- ${REDIS_PORT}:6379
environment:
- REDIS_PASSWORD=${REDIS_PASSWORD}
volumes:
- ./redis/redis.conf/:/usr/local/etc/redis.conf
- ./redis/data:/usr/local/redis/data
- ./redis/redis.log:/usr/local/redis/redis.log
restart: always
container_name: redis
# celery worker
celery:
container_name: celery
restart: always
build:
context: ./backend
dockerfile: Dockerfile.celery.prod
command: celery -A core worker -l info
environment:
- DJANGO_SETTINGS_MODULE=core.settings.production
env_file:
- ./.env.prod
depends_on:
- django
- redis
- postgres
- rabbitmq
# message broker for celery
rabbitmq:
container_name: rabbitmq
restart: always
image: rabbitmq:3.9-alpine
volumes:
- "./rabbitmq-data:/var/lib/rabbitmq"
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=mypass
ports:
- "5672:5672"
- "15672:15672"
volumes:
postgres_volume:
static_volume:
redis_data:
Dockerfile.celery.prod
FROM python:3.8.5
WORKDIR /django
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
# copy only requirements first so they are cached in their own Docker layer
COPY ./requirements.txt /django/
RUN pip install -r requirements.txt
COPY . .
I tried to run the delayed tasks with a Celery worker after reloading all the containers in docker-compose, including RabbitMQ.
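One hedged thing to check (an assumption on my part, not something shown in the logs): RabbitMQ stores its data in a directory named after the node, and the node name is derived from the container hostname, which Compose regenerates every time the container is recreated unless it is pinned. So the mounted ./rabbitmq-data directory may still hold the old queues while each new container starts an empty node. A sketch that pins the hostname:

rabbitmq:
  image: rabbitmq:3.9-alpine
  hostname: rabbitmq  # fixed node name (rabbit@rabbitmq), so the mounted data dir is reused
  volumes:
    - "./rabbitmq-data:/var/lib/rabbitmq"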
Using the docker-compose files below, I am unable to bring up my app correctly. Docker says my LAPIS_ENV environment variable is not set, but I am setting it in my second compose file, which I expect to be merged into the first one. I have tried including them in reverse order, to no avail.
version: '2.4'
services:
  backend:
    mem_limit: 50mb
    memswap_limit: 50mb
    build:
      context: ./backend
      dockerfile: Dockerfile
    depends_on:
      - postgres
    volumes:
      - ./backend:/var/www
      - ./data:/var/data
    restart: unless-stopped
    command: bash -c "/usr/local/bin/docker-entrypoint.sh ${LAPIS_ENV}"
  postgres:
    build:
      context: ./postgres
      dockerfile: Dockerfile
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - postgres:/var/lib/postgresql/data
      - ./postgres/pg_hba.conf:/var/lib/postgres/data/pg_hba.conf
      - ./data/backup:/pgbackup
    restart: unless-stopped
volumes:
  postgres:
version: '2.4'
services:
  backend:
    environment:
      LAPIS_ENV: development
    ports:
      - 8080:80
#!/usr/bin/env bash
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
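A likely explanation (my reading of Compose's variable-substitution rules, so treat it as an assumption): ${LAPIS_ENV} inside command: is substituted by Compose from the host shell or a .env file when the files are parsed, while the environment: key in the override file only sets the variable inside the running container. Two sketches that work around this:

# Option 1: a .env file next to docker-compose.yml, read at parse time
LAPIS_ENV=development

# Option 2: escape the dollar sign so the container's shell expands it at run time
command: bash -c "/usr/local/bin/docker-entrypoint.sh $$LAPIS_ENV"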
I can't get the connection from my Rails app, which is running in a Docker container, to my PostgreSQL database working.
The application itself works fine; I just can't connect to the database.
docker-compose.yml:
services:
  app:
    build:
      context: .
      dockerfile: app.Dockerfile
    container_name: application_instance
    command: bash -c "bundle exec puma -C config/puma.rb"
    volumes:
      - .:/app
      - node-modules:/app/node_modules
      - public:/app/public
    depends_on:
      - database
      - redis
    env_file:
      - .env
  database:
    image: postgres
    container_name: database_instance
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    env_file:
      - .env
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_PRODUCTION_DB}
  nginx:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    depends_on:
      - app
    volumes:
      - public:/app/public
    ports:
      - "80:80"
  redis:
    image: redis
    container_name: redis_instance
    ports:
      - "6379:6379"
  sidekiq:
    container_name: sidekiq_instance
    build:
      context: .
      dockerfile: app.Dockerfile
    depends_on:
      - redis
      - database
    command: bundle exec sidekiq
    volumes:
      - .:/app
    env_file:
      - .env
volumes:
  db_data:
  node-modules:
  public:
If I try to connect via DBeaver I get the following message:
Any idea what's going wrong here? The port should be exposed on my local machine. I also tried with the IP of the container, but then I get a timeout exception.
This is most likely because you have Postgres running both locally on your machine (port 5432) and in Docker (also published on port 5432), so DBeaver connects to the database on your local machine rather than the one in Docker.
The only solution I have found is to temporarily stop your local Postgres service (on Windows: Task Manager -> Services -> (postgres service) -> stop).
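If stopping the local service is not an option, another sketch (port 5433 is my arbitrary choice): publish the container's Postgres on a different host port and point DBeaver at that port instead:

database:
  image: postgres
  ports:
    - "5433:5432"  # host port 5433 maps to Postgres inside the container
# in DBeaver: host localhost, port 5433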
I was also struggling with this issue.
I have several services in my docker-compose file; it looks like this:
version: '3.7'
services:
  web:
    build: ./
    command: gunicorn --bind 0.0.0.0:5000 --workers 2 --worker-connections 5000 --timeout 6000 manage:app
    volumes:
      - ./:/usr/src/app/
      - static_volume:/usr/src/app/static_files
    expose:
      - 5000
    env_file:
      - ./.env.prod
    depends_on:
      - mongodb
  mongodb:
    image: mongo:4.4.1
    restart: unless-stopped
    command: mongod
    ports:
      - '27017:27017'
    environment:
      MONGODB_DATA_DIR: /data/db
      MONDODB_LOG_DIR: /dev/null
      MONGO_INITDB_ROOT_USERNAME:
      MONGO_INITDB_ROOT_PASSWORD:
    volumes:
      - mongodbdata:/data/db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/app/static_files
    ports:
      - 5001:8000
    depends_on:
      - web
volumes:
  mongodbdata:
  static_volume:
and I have a public repository on my Docker Hub account. I want to push all the images in my app to that repo. Can anyone help?
You should add image names to your services, including your Docker Hub ID, e.g.:
services:
  web:
    build: ./
    image: docker-hub-id/web:latest
    ...
Now, you can just call docker-compose push.
See the docker-compose push documentation.
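For example (docker-hub-id is a placeholder for your own account name):

docker-compose build  # builds and tags the images, e.g. docker-hub-id/web:latest
docker-compose push   # pushes every service that has an image name set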
I cannot connect a Redis client to Redis running in a Docker container with a custom redis.conf file. Also, even if I remove the configuration that loads the custom redis.conf file, Docker will still attempt to use it.
Docker.compose.yml
version: '2'
services:
  data:
    environment:
      - RHOST=redis
    command: echo true
    networks:
      - redis-net
    depends_on:
      - redis
  redis:
    image: redis:latest
    build:
      context: .
      dockerfile: Dockerfile_redis
    ports:
      - "6379:6379"
    command: redis-server /etc/redis/redis.conf
    volumes:
      - ./redis.conf:/etc/redis/redis.conf
networks:
  redis-net:
volumes:
  redis-data:
Dockerfile_redis
FROM redis:latest
COPY redis.conf /etc/redis/redis.conf
CMD [ "redis-server", "/etc/redis/redis.conf" ]
This is where I connect to Redis. I use requirepass in the redis.conf file.
redis_client = redis.Redis(host='redis',password='password1')
Is there a way to find out which original redis.conf file the Docker image uses, so I could just change the password there to make Redis secure? On a server I just use the original redis.conf file that comes with installing Redis via "apt install redis" and then change requirepass.
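As a hedged alternative (the password value is just the one from the snippet above): the official redis image ships without any redis.conf at all, so instead of hunting for one you can pass the password directly on the command line and drop the custom file and Dockerfile entirely:

redis:
  image: redis:latest
  command: redis-server --requirepass password1
  ports:
    - "6379:6379"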
I finally fixed this issue with the help of https://github.com/sameersbn/docker-redis.
There is no need to use a Dockerfile for Redis in this case.
Docker.compose.yml:
version: '2'
services:
  data:
    command: echo true
    environment:
      - RHOST=Redis
    depends_on:
      - Redis
  Redis:
    image: sameersbn/redis:latest
    ports:
      - "6379:6379"
    environment:
      - REDIS_PASSWORD=changeit
    volumes:
      - /srv/docker/redis:/var/lib/redis
    restart: always
redis_connect.py
redis_client = redis.Redis(host='Redis',port=6379,password='changeit')
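A quick sanity check using the client created above (my addition, not part of the original answer):

print(redis_client.ping())  # True if host, port, and password are all correct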