Attribute Error in Celery Beat Due to DatabaseScheduler - docker

I am trying to use Celery for asynchronous jobs, using Celery, Docker, and DigitalOcean.
My docker-compose file is shown below.
As you can see, it has a celery beat part.
The celery beat part passes "django_celery_beat.schedulers:DatabaseScheduler", and as far as I understand the error means it cannot find django_celery_beat.schedulers:DatabaseScheduler. I could not figure out how to solve this problem.
version: '3.3'
services:
  web:
    build: .
    image: proje
    command: gunicorn -b 0.0.0.0:8000 proje.wsgi -w 4 --timeout 300 -t 80
    restart: unless-stopped
    tty: true
    env_file:
      - ./.env.production
    networks:
      - app-network
    depends_on:
      - migration
      - database
      - redis
    healthcheck:
      test: ["CMD", "wget", "http://localhost/healthcheck"]
      interval: 3s
      timeout: 3s
      retries: 10
  celery:
    image: proje
    command: celery -A proje worker -l info -n worker1@%%h
    restart: unless-stopped
    networks:
      - app-network
    environment:
      - DJANGO_SETTINGS_MODULE=proje.settings
    env_file:
      - ./.env.production
    depends_on:
      - redis
  celerybeat:
    image: proje
    command: celery -A proje beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
    restart: unless-stopped
    networks:
      - app-network
    environment:
      - DJANGO_SETTINGS_MODULE=proje.settings
    env_file:
      - ./.env.production
    depends_on:
      - redis
  migration:
    image: proje
    command: python manage.py migrate
    volumes:
      - .:/usr/src/app/
    env_file:
      - ./.env.production
    depends_on:
      - database
    networks:
      - app-network
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./static/:/var/www/static/
      - ./conf/nginx/:/etc/nginx/conf.d/
      - webserver-logs:/var/log/nginx/
    networks:
      - app-network
  database:
    image: "postgres:12" # use latest official postgres version
    restart: unless-stopped
    env_file:
      - .databaseenv # configure postgres
    ports:
      - "5432:5432"
    volumes:
      - database-data:/var/lib/postgresql/data/
    networks:
      - app-network
  redis:
    image: "redis:5.0.8"
    restart: unless-stopped
    command: [ "redis-server", "/redis.conf" ]
    working_dir: /var/lib/redis
    ports:
      - "6379:6379"
    volumes:
      - ./conf/redis/redis.conf:/redis.conf
      - redis-data:/var/lib/redis/
    networks:
      - app-network

# Docker networks
networks:
  app-network:
    driver: bridge

volumes:
  database-data:
  webserver-logs:
  redis-data:
Running it gives me the AttributeError from the title. I have been stuck on this in my project for months.
Any help will be appreciated.

Update: I have uploaded all of this to an Ubuntu server and it worked there. I think my computer (Windows 10) has some incompatibility with it.
Thanks.
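(Editorial note: the --scheduler flag can only resolve that dotted path if django-celery-beat is installed inside the proje image and registered with Django. A minimal sketch of the usual prerequisites; the file locations and the extra depends_on entry are assumptions, not taken from the question:)

# requirements.txt (baked into the proje image):  django-celery-beat
#
# proje/settings.py:
#   INSTALLED_APPS += ['django_celery_beat']
#
# the scheduler's tables must exist before beat starts, e.g. as part of
# the migration service:  python manage.py migrate django_celery_beat
celerybeat:
  image: proje
  command: celery -A proje beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
  depends_on:
    - migration   # hypothetical: wait for the schedule tables to be created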

Related

Make call from one docker container to another one

So I have two different applications: one is WordPress and the other is an API. Both run in Docker containers and have their own configurations. These are their docker-compose settings:
version: "3.8"
services:
app:
container_name: ${APP_NAME}_app
build:
context: .
dockerfile: ./.docker/php/Dockerfile
expose:
- 9000
volumes:
- .:/usr/src/app
- ./public:/usr/src/app/public
depends_on:
- db
networks:
- app_network
nginx:
container_name: ${APP_NAME}_nginx
build:
context: .
dockerfile: ./.docker/nginx/Dockerfile
volumes:
- ./public:/usr/src/app/public
ports:
- "8081:8081"
expose:
- 8081
environment:
NGINX_FPM_HOST: app
NGINX_ROOT: /usr/src/app/public
depends_on:
- app
networks:
- app_network
db:
container_name: ${APP_NAME}_db
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
ports:
- "3307:3306"
environment:
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
networks:
- app_network
networks:
app_network:
driver: bridge
volumes:
db_data:
driver: local
And this is my WordPress configuration:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    expose:
      - 3306
      - 33060
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      interval: 1s
      timeout: 3s
      retries: 30
    networks:
      - app_network
  wordpress:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      mysql:
        condition: service_healthy
    volumes:
      - .:/var/www/html/wp-content/plugins/name
    ports:
      - "80:80"
    restart: always
    environment:
      - WORDPRESS_URL=http://localhost
      - WORDPRESS_DB_HOST=mysql
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
    networks:
      - app_network
networks:
  app_network:
    driver: bridge
And when I try to make a request to the API at http://localhost:8081, nothing happens. Everything works fine locally, but in Docker it doesn't.
I would appreciate some help to make this work :)
If you request http://localhost:8081 from inside a Docker container, you won't reach your host PC but the container itself.
In docker-compose you need to replace localhost with the service name.
For example, to reach port 8081 of the nginx service, connect to http://nginx:8081
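A minimal sketch of that rule (the API_BASE_URL variable is illustrative, not part of either app). One caveat: the WordPress stack and the API stack above are separate compose projects, so service-name DNS only works across them if both join a common external network:

# in the WordPress project's docker-compose.yml
services:
  wordpress:
    networks:
      - shared_net
    environment:
      # service name + container port instead of localhost:
      - API_BASE_URL=http://nginx:8081   # hypothetical variable name
networks:
  shared_net:
    external: true   # created once with: docker network create shared_net
                     # the nginx service must join this network too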

Docker compose container fails to reach the application running in a different container on the same network

I have a docker-compose.yml file which contains frontend, backend, testing, postgres, and pgadmin containers. All of the containers except testing are able to communicate with each other, but the testing container fails to communicate with the backend and frontend containers in docker-compose.
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.local
    ports:
      - '3000:3000'
    networks:
      - test-network
    environment:
      # For the frontend can be applied only during the build!
      # (while it's applied when TS is compiled)
      # You have to build manually without cache if one of those are changed at least for the prod mode.
      - REACT_APP_BACKEND_API=http://localhost:8000/api/v1
      - REACT_APP_GOOGLE_CLIENT_ID=1234567dfghjjnfd
      - CI=true
      - CHOKIDAR_USEPOLLING=true
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - test-network
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: "dev@dev.com"
      PGADMIN_DEFAULT_PASSWORD: dev
    volumes:
      - pgadmin:/root/.pgadmin
      - ./pgadmin-config/servers.json:/pgadmin4/servers.json
    ports:
      - "5050:80"
    networks:
      - test-network
    restart: unless-stopped
  backend:
    container_name: test-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.local
    ports:
      - '8000:80'
    volumes:
      - ./backend:/app
    command: >
      bash -c "alembic upgrade head
      && exec /start-reload.sh"
    networks:
      - test-network
    depends_on:
      - postgres
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/app/.secret/secret.json
      - APP_DB_CONNECTION_STRING=postgresql+psycopg2://dev:dev@postgres:5432/postgres
      - LOG_LEVEL=debug
      - SQLALCHEMY_ECHO=True
      - AUTH_ENABLED=True
      - CORS=*
      - GCP_ALLOWED_DOMAINS=*
  testing:
    container_name: test-testing
    build:
      context: ./testing
      dockerfile: Dockerfile
    volumes:
      - ./testing:/isp-app
    command: >
      bash -c "/wait
      && robot ."
    networks:
      - test-network
    depends_on:
      - backend
      - frontend
    environment:
      - WAIT_HOSTS= frontend:3000, backend:8000
      - WAIT_TIMEOUT= 3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300
volumes:
  postgres:
  pgadmin:
networks:
  test-network:
    driver: bridge
All the containers are attached to test-network. When the testing container tries to connect to frontend:3000 or backend:8000, it throws "Host [ backend:8000] not yet available".
How can I fix this?
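(Editorial note: one detail worth checking in the file above: ports '8000:80' publishes host port 8000, but on the shared network the backend listens on its container port 80, and the wait tool connects over that network rather than through the host. A hedged sketch of the adjusted variable:)

testing:
  environment:
    # container-to-container traffic uses container ports: frontend really
    # listens on 3000, but backend listens on 80 (8000 is only the host port)
    - WAIT_HOSTS=frontend:3000, backend:80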

Docker image names are changed to sha256 after subsequent runs

I am using Docker 20.10.7 on macOS and docker-compose to run multiple Docker containers.
When I start them for the first time, all the Docker images are properly labeled and appear with their names.
However, after subsequent runs (docker-compose up, docker-compose down), all the image names suddenly change to bare sha256 digests.
Please advise how to avoid this behavior. Thank you.
UPDATE #1
This is the docker-compose file I use to start the containers.
Initially they displayed with properly labeled image names.
However, even after running a docker system prune command, they are still labeled as sha256:...
version: '3.8'
services:
  influxdb:
    image: influxdb:1.8
    container_name: influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
      - "2003:2003"
    env_file:
      - 'env.influxdb.properties'
    volumes:
      - /Users/user1/Docker/influxdb/data:/var/lib/influxdb
    restart: unless-stopped
  telegraf:
    image: telegraf:latest
    container_name: telegraf
    links:
      - db
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    env_file:
      - 'env.grafana.properties'
    links:
      - influxdb
    volumes:
      - /Users/user1/Docker/grafana/data:/var/lib/grafana
    restart: unless-stopped
  db:
    image: mysql
    container_name: db-container
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: P@ssw0rd
      MYSQL_USER: root
      MYSQL_PASSWORD: P@ssw0rd
      MYSQL_DATABASE: db1
    volumes:
      - /Users/user1/Docker/mysql/data:/var/lib/mysql
      - "../sql/schema.sql:/docker-entrypoint-initdb.d/1.sql"
    healthcheck:
      test: "/usr/bin/mysql --user=root --password=P@ssw0rd --execute \"SHOW DATABASES;\""
      interval: 2s
      timeout: 20s
      retries: 10
    restart: always
  adminer:
    image: adminer
    container_name: adminer
    restart: always
    ports:
      - 8081:8080
  redis:
    image: bitnami/redis
    container_name: redis
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      #- REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    volumes:
      - '/Users/user1/Docker/redis/data:/bitnami/redis/data'
      - ./redis.conf:/opt/bitnami/redis/mounted-etc/redis.conf
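(Editorial note: not an authoritative diagnosis, but a common cause of this symptom: when a moving tag such as telegraf:latest or mysql is re-pulled or an image is rebuilt, the tag transfers to the new image while the still-running container keeps the old, now-untagged one, which then shows up only by its sha256 ID. A sketch of how to check and recover, with the commands as comments:)

# docker images --filter dangling=true   # lists the untagged sha256-only images
# docker-compose up -d --force-recreate  # recreate containers from current tags
# docker image prune                     # remove the leftover dangling images
services:
  telegraf:
    image: telegraf:latest   # moving tags like this are retagged on every pull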

Saleor backend not available under droplet IP from DigitalOcean

I'd like to deploy my saleor-shop application completely via docker.
So I've built the respective images for saleor backend, storefront & dashboard.
Running the app locally works fine.
Backend is available on localhost:8000/graphql
Storefront runs at localhost:3000
Dashboard runs at localhost:9000
When I run the app on the droplet IP, however, I get issues with the Saleor backend.
As of now, trying to access XXX.XX.XXX.XXX:8000 results in "This site can't be reached".
The storefront and dashboard are accessible on XXX.XX.XXX.XXX:3000 and XXX.XX.XXX.XXX:9000, but without any interaction with the backend, since it is not available. That is why the GraphQL calls on the storefront do not work, and logging into the dashboard does not work either, because the backend cannot be reached. I think I'm missing something here and would appreciate any help.
Within my droplet I'm using the following docker-compose.yml file to get my docker containers up:
services:
  api:
    ports:
      - 8000:8000
    image: XXX/murukku-shop
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    depends_on:
      - db
      - redis
      - jaeger
    env_file: common.env
    environment:
      - JAEGER_AGENT_HOST=jaeger
      - STOREFRONT_URL=http://XXX.XX.XXX.XXX:3000/
      - DASHBOARD_URL=http://XXX.XX.XXX.XXX:9000/
  storefront:
    image: XXX/murukku-storefront
    ports:
      - 3000:80
    restart: unless-stopped
  dashboard:
    image: XXX/murukku-dashboard
    ports:
      - 9000:80
    restart: unless-stopped
  db:
    image: library/postgres:11.1-alpine
    ports:
      - 5432:5432
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=saleor
      - POSTGRES_PASSWORD=saleor
  redis:
    image: library/redis:5.0-alpine
    ports:
      - 6379:6379
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-redis:/data
  worker:
    image: XXX/murukku-shop
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
      - mailhog
    environment:
      - EMAIL_URL=smtp://mailhog:1025
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
    restart: unless-stopped
    networks:
      - saleor-backend-tier
  mailhog:
    image: mailhog/mailhog
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
    restart: unless-stopped
    networks:
      - saleor-backend-tier
volumes:
  saleor-db:
    driver: local
  saleor-redis:
    driver: local
  saleor-media:
networks:
  saleor-backend-tier:
    driver: bridge
I was testing Saleor in a Docker setup like you, and I found a solution! You have to set more env variables; they are all explained on the GitHub pages of the storefront and the dashboard.
Here is my config if you want:
version: '2'
services:
  api:
    ports:
      - 8000:8000
    build:
      context: ./saleor
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    depends_on:
      - db
      - redis
      - jaeger
    volumes:
      - ./saleor/saleor/:/app/saleor:Z
      - ./saleor/templates/:/app/templates:Z
      - ./saleor/tests/:/app/tests
      # shared volume between worker and api for media
      - saleor-media:/app/media
    command: python manage.py runserver 0.0.0.0:8000
    env_file: common.env
    environment:
      # - DEFAULT_CURRENCY=EUR
      # - DEFAULT_COUNTRY=
      - ALLOWED_CLIENT_HOSTS=localhost,127.0.0.1,192.168.0.50
      - ALLOWED_HOSTS=localhost,192.168.0.50
      - JAEGER_AGENT_HOST=jaeger
      - STOREFRONT_URL=http://192.168.0.50:3000/
      - DASHBOARD_URL=http://192.168.0.50:9000/
  storefront:
    build:
      context: ./saleor-storefront
      dockerfile: ./Dockerfile.dev
    ports:
      - 3000:3000
    restart: unless-stopped
    volumes:
      - ./saleor-storefront/:/app:cached
      - /app/node_modules/
    command: npm start -- --host 0.0.0.0
    environment:
      - NEXT_PUBLIC_API_URI=http://192.168.0.50:8000/graphql/
      - API_URI=http://192.168.0.50:8000/graphql/
  dashboard:
    build:
      context: ./saleor-dashboard
      dockerfile: ./Dockerfile.dev
    ports:
      - 9000:9000
    restart: unless-stopped
    volumes:
      - ./saleor-dashboard/:/app:cached
      - /app/node_modules/
    command: npm start -- --host 0.0.0.0
    environment:
      - API_URI=http://192.168.0.50:8000/graphql/
      - APP_MOUNT_URI=/dashboard/
      - STATIC_URL=http://192.168.0.50:9000/
  db:
    image: library/postgres:11.1-alpine
    ports:
      - 5432:5432
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=saleor
      - POSTGRES_PASSWORD=saleor
  redis:
    image: library/redis:5.0-alpine
    ports:
      - 6379:6379
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-redis:/data
  worker:
    build:
      context: ./saleor
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    command: celery -A saleor --app=saleor.celeryconf:app worker --loglevel=info
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
      - mailhog
    volumes:
      - ./saleor/saleor/:/app/saleor:Z,cached
      - ./saleor/templates/:/app/templates:Z,cached
      # shared volume between worker and api for media
      - saleor-media:/app/media
    environment:
      - EMAIL_URL=smtp://mailhog:1025
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
    restart: unless-stopped
    networks:
      - saleor-backend-tier
  mailhog:
    image: mailhog/mailhog
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
    restart: unless-stopped
    networks:
      - saleor-backend-tier
volumes:
  saleor-db:
    driver: local
  saleor-redis:
    driver: local
  saleor-media:
networks:
  saleor-backend-tier:
    driver: bridge
PS: It's my first answer on Stack Overflow :D Don't forget to tick this as the answer if it solved your problem ;)
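(Editorial note, building on that answer: two details seem to matter most for the droplet case, where XXX.XX.XXX.XXX stands for the droplet IP as in the question. The backend must bind on 0.0.0.0 and the droplet IP must appear in the allowed-hosts variables, otherwise the port never answers or Django rejects the Host header. A hedged fragment:)

api:
  command: python manage.py runserver 0.0.0.0:8000    # bind all interfaces, not 127.0.0.1
  environment:
    - ALLOWED_HOSTS=localhost,XXX.XX.XXX.XXX          # droplet IP must be allowed
    - ALLOWED_CLIENT_HOSTS=localhost,XXX.XX.XXX.XXX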

How to deal with multiple services inside a docker-compose.yml?

I have been developing my software with a microservices architecture, and I run my services using Docker Compose. My problem is that every new service has to be added to the docker-compose.yml, which has grown to 200+ lines; I have around 17 services for now, and the services are all related to each other.
So my question is: "How do I manage the docker-compose.yml so that it stays easy to maintain and clean?"
My docker-compose.yml:
version: '2'
services:
  mongo:
    container_name: mongodb
    image: mongo:3.4.7
    volumes:
      - ./mongo/data:/data/db
    ports:
      - 54321:27017
    networks:
      - zensorium_backend
    restart: always
    command: mongod --smallfiles
  golang_oauth:
    container_name: golang_oauth
    build: .
    volumes:
      - ./oauth:/go/src/oauth
    working_dir: /go/src/oauth
    ports:
      - 8080:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_account:
    container_name: golang_account
    build: .
    volumes:
      - ./account:/go/src/account
    working_dir: /go/src/account
    ports:
      - 8081:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_client:
    container_name: golang_client
    build: .
    volumes:
      - ./client:/go/src/client
    working_dir: /go/src/client
    ports:
      - 8082:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_mail:
    container_name: golang_mail
    build: .
    volumes:
      - ./mail:/go/src/mail
    working_dir: /go/src/mail
    expose:
      - 8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_user:
    container_name: golang_user
    build: .
    volumes:
      - ./user:/go/src/user
    working_dir: /go/src/user
    ports:
      - 8083:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_gateway2:
    container_name: golang_gateway2
    build: .
    volumes:
      - ./gateway2:/go/src/gateway
    working_dir: /go/src/gateway
    ports:
      - 8084:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_measurement:
    container_name: golang_measurement
    build: .
    volumes:
      - ./measurement:/go/src/measurement
    working_dir: /go/src/measurement
    ports:
      - 8085:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_app:
    container_name: golang_app
    build: .
    volumes:
      - ./app:/go/src/app
    working_dir: /go/src/app
    ports:
      - 8086:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_logging:
    container_name: golang_logging
    build: .
    volumes:
      - ./logging:/go/src/logging
    working_dir: /go/src/logging
    ports:
      - 8087:8082
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_notify:
    container_name: golang_notify
    build: .
    volumes:
      - ./notify:/go/src/notify
    working_dir: /go/src/notify
    ports:
      - 8088:8082
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_routine:
    container_name: golang_routine
    build: .
    volumes:
      - ./routine:/go/src/routine
    working_dir: /go/src/routine
    ports:
      - 8089:8082
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  angular_cli:
    container_name: angular_cli
    build: ./angular-cli
    ports:
      - "4200:4200"
    networks:
      - zensorium_frontend
    working_dir: /home/node/webPortal
    volumes:
      - ./angular-cli/webPortal:/home/node/webPortal
      - /home/node/webPortal/node_modules
    restart: always
    command: npm start
  golang_dev:
    container_name: golang_dev
    build: .
    volumes:
      - ./dev:/go/src/dev
    working_dir: /go/src/dev
    ports:
      - 8090:8082
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
networks:
  zensorium_backend:
    driver: bridge
  zensorium_frontend:
    driver: bridge
You can also use YAML's own features to write repeated strings in a shorter way. For example:
---
version: '3.4'
x-command: &command bash -c "ls && sleep infinity"
services:
  service1:
    command: *command
  service2:
    command: *command
And one can use templates:
https://matthiasnoback.nl/2018/03/defining-multiple-similar-services-with-docker-compose/
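Extension fields also pair well with YAML merge keys, so a whole block of shared settings can be reused and selectively overridden per service (a sketch; the shared keys are chosen for illustration):

version: '3.4'
x-defaults: &defaults
  build: .
  restart: always
  env_file:
    - ./.api.env
  networks:
    - zensorium_backend
  command: realize start --run
services:
  golang_oauth:
    <<: *defaults                # merge the shared keys...
    working_dir: /go/src/oauth   # ...then add service-specific ones
  golang_account:
    <<: *defaults
    working_dir: /go/src/account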
My recommendation is to keep the docker-compose.yml as small as possible if you have a lot of services defined. For example, you could write the definitions inline, use a .env file so you don't always have to specify the variables, and remove the container_name entries. working_dir can move into the Dockerfile. If you are using version 2, you can use service inheritance, but this feature is not supported in docker-compose file format 3 and newer... You should also use named volumes.
An example of an inline docker-compose.yml:
version: '2'
services:
  mongo:
    image: mongo:3.4.7
    command: mongod --smallfiles
    restart: always
    ports: ["54321:27017"]
    networks: [zensorium_backend]
    volumes: ["my_mongo_data:/data/db"]
  golang_oauth:
    build: .
    command: realize start --run
    restart: always
    working_dir: /go/src/oauth # could be moved to the Dockerfile
    ports: ["8080:8082"]
    depends_on: [mongo]
    environment: ["VAR_1=${VAR_1}","VAR2=${VAR2}"] # using the .env file
    networks: [zensorium_backend]
    volumes: ["my_oauth_data:/go/src/oauth"]
volumes:
  my_mongo_data:
  my_oauth_data:
networks:
  zensorium_backend:
    driver: bridge
  zensorium_frontend:
    driver: bridge
And a .env file in the same folder as the .yml:
VAR_1=mazel
VAR2=tov
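Another option that keeps any single file small is splitting the stack into several compose files and merging them on the command line; later files override earlier ones. A sketch with hypothetical file names, commands shown as comments:

# docker-compose.yml          -> shared infrastructure (networks, mongo)
# docker-compose.backend.yml  -> the golang_* services
# docker-compose.frontend.yml -> angular_cli
#
# merged in order at run time:
#   docker-compose -f docker-compose.yml -f docker-compose.backend.yml up -d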
