I want to use Celery in my Django app for a task that should run in the background. I've followed these tutorials: "Asynchronous Tasks With Django and Celery" and "First Steps with Django: Using Celery with Django".
my project structure:
project
├── project
│   ├── settings
│   │   ├── __init__.py (empty)
│   │   ├── base.py
│   │   ├── development.py
│   │   └── production.py
│   ├── __init__.py
│   └── celery.py
├── app_number_1
│   └── tasks.py
project/project/__init__.py:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
__all__ = ['celery_app']
project/project/celery.py :
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings.production')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
project/project/settings/production.py :
INSTALLED_APPS = [
'background_task',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.gis',
]
.
.
.
CELERY_BROKER_URL = 'mongodb://mongo:27017'
CELERY_RESULT_BACKEND = 'mongodb://mongo:27017'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
docker-compose.yml:
version: '3'
services:
web:
env_file:
- env_vars.env
build: .
restart: always
command:
- /code/runserver.sh
ports:
- "80:8000"
depends_on:
- db
- mongo
db:
...
mongo:
image: mongo:latest
restart: always
runserver.sh:
#!/bin/bash
sleep 1
python3 /code/project/manage.py migrate --settings=project.settings.production
python3 /code/project/manage.py runserver 0.0.0.0:8000 --settings=project.settings.production & celery -A project worker -Q celery
After docker-compose up --build I get the following error:
web_1 | Running migrations:
web_1 | No migrations to apply.
mongo_1 | 2019-08-28T10:24:26.478+0000 I NETWORK [conn2] end connection 172.18.0.4:42252 (1 connection now open)
mongo_1 | 2019-08-28T10:24:26.479+0000 I NETWORK [conn1] end connection 172.18.0.4:42250 (0 connections now open)
web_1 | Error:
web_1 | Unable to load celery application.
web_1 | Module 'project' has no attribute 'celery'
Any hint would be great!
Thanks
The celery module isn't contained in the first project folder, which is the one that celery -A project imports.
You can either move celery.py there, or make the outer project folder a package by adding an __init__.py module and point the app instance in the celery command at the inner package:
celery -A project.project worker -Q celery
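Separately, rather than chaining the worker onto runserver.sh with &, the worker can run as its own Compose service so the two processes restart independently. A minimal sketch under the services: key, reusing the build context and env file from the question (the working_dir and the -A target are assumptions that depend on where celery.py ends up):

  worker:
    build: .
    env_file:
      - env_vars.env
    restart: always
    working_dir: /code/project   # assumption: the directory that contains the "project" package
    command: celery -A project worker -Q celery --loglevel=info
    depends_on:
      - mongo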
I have tried to run LocalStack as described on its GitHub page. I've used the command 'pip install localstack' as well as 'docker-compose up' with the docker-compose file from the documentation:
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
network_mode: bridge
ports:
- "127.0.0.1:53:53"
- "127.0.0.1:53:53/udp"
- "127.0.0.1:443:443"
- "127.0.0.1:4566:4566"
- "127.0.0.1:4571:4571"
environment:
- SERVICES=${SERVICES- }
- DEBUG=${DEBUG- }
- DATA_DIR=${DATA_DIR- }
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY- }
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER="${TMPDIR:-/tmp}/localstack"
volumes:
- "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
But both ways I get the same output:
localstack_main | 2021-09-21 15:32:26,633 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
localstack_main | 2021-09-21 15:32:26,645 INFO supervisord started with pid 14
localstack_main | 2021-09-21 15:32:27,650 INFO spawned: 'dashboard' with pid 20
localstack_main | 2021-09-21 15:32:27,653 INFO spawned: 'infra' with pid 21
localstack_main | 2021-09-21 15:32:27,659 INFO success: dashboard entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
localstack_main | 2021-09-21 15:32:27,660 INFO exited: dashboard (exit status 0; expected)
localstack_main | (. .venv/bin/activate; exec bin/localstack start --host)
localstack_main | 2021-09-21 15:32:28,663 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
localstack_main | LocalStack version: 0.12.1
localstack_main | Starting local dev environment. CTRL-C to quit.
localstack_main | Waiting for all LocalStack services to be ready
localstack_main | Waiting for all LocalStack services to be ready
localstack_main | Waiting for all LocalStack services to be ready
localstack_main | Waiting for all LocalStack services to be ready
And then nothing appears except these recurring messages.
Does anybody know how to fix this problem?
This might not be the solution for everyone, but it's worth trying to update your Docker version.
I had the same issue for a few days, and then I updated my Docker version.
I use Docker version 20.10.11 on Apple silicon and can confirm it works fine. So far, after this update, I have not run into any new issues with LocalStack.
This GitHub issue also suggests deleting LocalStack's volume before each run. That works, but it obviously can't be a long-term solution; still, it is a reasonable mitigation when you need one.
Update your docker-compose.yml as below and then run docker-compose up. It should work as expected.
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack}"
image: localstack/localstack
hostname: localstack
networks:
- test-net
ports:
- "4566:4566"
environment:
- SERVICES=s3,sqs,cloudformation,iam,cloudwatch
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
- LAMBDA_EXECUTOR=docker-reuse
- LAMBDA_REMOTE_DOCKER=false
- LAMBDA_REMOVE_CONTAINERS=true
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR}
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
networks:
test-net:
external: false
driver: bridge
name: test-net
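If you also want Compose to report when LocalStack is actually ready, a healthcheck can be added under the localstack service. This is only a sketch: it assumes curl is available inside the image and that this LocalStack version serves a /health endpoint on port 4566.

    healthcheck:
      test: ["CMD", "curl", "-sf", "http://localhost:4566/health"]   # assumed health endpoint
      interval: 10s
      timeout: 5s
      retries: 10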
I'm getting this error:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
I have tried changing the IP address in my .env to localhost but I then got a not found error.
I also tried changing my .env db host to match my docker compose file:
DB_HOST=mysql
docker compose file:
version: "3.7"
services:
app:
image: kooldev/php:7.4-nginx
ports:
- ${KOOL_APP_PORT:-80}:80
environment:
ASUSER: ${KOOL_ASUSER:-0}
UID: ${UID:-0}
volumes:
- .:/app:delegated
networks:
- kool_local
- kool_global
database:
image: mysql:8.0
command: --default-authentication-plugin=mysql_native_password
ports:
- ${KOOL_DATABASE_PORT:-3306}:3306
I used kool.dev to do the Symfony install; that looks OK and the DB seems to be working as expected:
user#DESKTOP-QSCSABV:/mnt/c/dev/symfony-project$ kool status
+----------+---------+------------------------------------------------------+-------------------------+
| SERVICE  | RUNNING | PORTS                                                | STATE                   |
+----------+---------+------------------------------------------------------+-------------------------+
| app      | Running | 0.0.0.0:80->80/tcp, :::80->80/tcp, 9000/tcp          | Up 15 minutes           |
| database | Running | 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp | Up 15 minutes (healthy) |
+----------+---------+------------------------------------------------------+-------------------------+
[done] Fetching services status
In my .env file:
DB_USERNAME=myusername
DB_PASSWORD=mypassword
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=mydatabase
DB_VERSION=8.0
DATABASE_URL="mysql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}?serverVersion=${DB_VERSION}"
Any suggestions on how to resolve this?
DB_HOST=127.0.0.1
in your environment file should be
DB_HOST=database
127.0.0.1 is the address of the container itself, so in your case the app container tries to connect to itself. Docker Compose creates a virtual network in which each container can be reached by its service name, so you want to connect to the database service instead.
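As a minimal illustration of that service-name resolution (a sketch only; the images and names are taken from the question, and the file kool generates may differ):

version: "3.7"
services:
  app:
    image: kooldev/php:7.4-nginx
    environment:
      DB_HOST: database   # Compose's internal DNS resolves the service name to the database container
  database:
    image: mysql:8.0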
I am working on a web scraper using the Serverless Framework that I want users to be able to run locally without having to install any dependencies on their machine. I am using serverless-offline-sqs with a local ElasticMQ server hosted in a Docker container.
Currently, I have a docker-compose file that I run, then run serverless offline in another terminal which works well. That docker-compose.yml file looks like this:
# docker-compose.yml
version: '3'
services:
database:
image: 'mongo'
container_name: 'database'
environment:
- MONGO_INITDB_DATABASE=scraper_database
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=admin
volumes:
- ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
- ./mongo-volume:/data/db
ports:
- '27017-27019:27017-27019'
command: mongod --quiet --logpath /dev/null
sqs:
image: softwaremill/elasticmq:latest
container_name: 'sqs'
ports:
- '9324:9324'
sqs-create:
image: infrastructureascode/aws-cli:latest
container_name: 'sqs-create'
links:
- sqs
entrypoint: sh
command: ./create-queues.sh
volumes:
- ./scripts/create-queues.sh:/project/create-queues.sh:ro
environment:
- AWS_ACCESS_KEY_ID=local
- AWS_SECRET_ACCESS_KEY=local
- AWS_DEFAULT_REGION=eu-east-1
- AWS_ENDPOINT_URL=http://sqs:9324
This works well with no issues, and after ensuring that all of my containers are up, I can run serverless offline and my app works. I am now trying to run Serverless in its own Docker container as well. I have created the following Dockerfile:
# Dockerfile
FROM node:12
RUN npm --loglevel=error install -g serverless && npm --loglevel=error install -g serverless-offline
WORKDIR /usr/src/app
COPY package*.json ./
COPY ./scripts/wait-for-it.sh ./
RUN ["chmod", "+x", "/usr/src/app/wait-for-it.sh"]
RUN npm install
COPY . .
EXPOSE 3000
I am trying to follow the Docker documentation on controlling start-up order, found here, to ensure that my queue service is up before Serverless runs. This has led me to this docker-compose.yml:
version: '3'
services:
serverless:
container_name: 'serverless'
build:
context: .
dockerfile: Dockerfile
env_file:
- .env.development
ports:
- '3000:3000'
depends_on:
- sqs
command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]
database:
image: 'mongo'
container_name: 'database'
environment:
- MONGO_INITDB_DATABASE=scraper_database
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=admin
volumes:
- ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
- ./mongo-volume:/data/db
ports:
- '27017-27019:27017-27019'
command: mongod --quiet --logpath /dev/null
sqs:
image: softwaremill/elasticmq:latest
container_name: 'sqs'
ports:
- '9324:9324'
sqs-create:
image: infrastructureascode/aws-cli:latest
container_name: 'sqs-create'
links:
- sqs
entrypoint: sh
command: ./create-queues.sh
volumes:
- ./scripts/create-queues.sh:/project/create-queues.sh:ro
environment:
- AWS_ACCESS_KEY_ID=local
- AWS_SECRET_ACCESS_KEY=local
- AWS_DEFAULT_REGION=eu-east-1
- AWS_ENDPOINT_URL=http://sqs:9324
I am using the wait-for-it.sh script that the Docker documentation suggests, but I am still getting the following error:
Successfully built 38df0769a202
Successfully tagged assessorscraper_serverless:latest
Starting sqs ... done
Starting database ... done
Recreating serverless ... done
Starting sqs-create ... done
Attaching to sqs, database, sqs-create, serverless
serverless | wait-for-it.sh: waiting 15 seconds for sqs:9324
sqs | 07:54:45.046 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (1.0.0) ...
sqs | 07:54:48.133 [elasticmq-akka.actor.default-dispatcher-6] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
sqs | 07:54:51.385 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address 0.0.0.0:9324, visible server address http://localhost:9324
sqs | 07:54:51.643 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.r.s.TheStatisticsRestServerBuilder - Started statistics rest server, bind address 0.0.0.0:9325
sqs | 07:54:51.649 [main] INFO org.elasticmq.server.Main$ - === ElasticMQ server (1.0.0) started in 8819 ms ===
serverless | wait-for-it.sh: sqs:9324 is available after 9 seconds
sqs-create | Creating queue TownQueue
sqs | 07:54:53.808 [elasticmq-akka.actor.default-dispatcher-6] INFO o.elasticmq.actor.QueueManagerActor - Creating queue QueueData(TownQueue,MillisVisibilityTimeout(30000),PT0S,PT0S,2021-01-07T07:54:53.494Z,2021-01-07T07:54:53.494Z,None,false,false,None,None,Map())
sqs-create exited with code 0
serverless | Serverless: Running "serverless" installed locally (in service node_modules)
serverless | Serverless: DOTENV: Loading environment variables from .env.development:
serverless | Serverless: - DATABASE_URL
serverless | Serverless: - ACCOUNT_ID
serverless | Serverless: - QUEUE_URL
serverless | Serverless: Deprecation warning: Starting with next major version, default value of provider.lambdaHashingVersion will be equal to "20201221"
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
serverless | Serverless: Deprecation warning: Starting with next major version, API Gateway naming will be changed from "{stage}-{service}" to "{service}-{stage}".
serverless | Set "provider.apiGateway.shouldStartNameWithService" to "true" to adapt to the new behavior now.
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#AWS_API_GATEWAY_NAME_STARTING_WITH_SERVICE
serverless | offline: Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | Networking Error ---------------------------------------
serverless |
serverless | Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
serverless |
serverless | Get Support --------------------------------------------
serverless | Docs: docs.serverless.com
serverless | Bugs: github.com/serverless/serverless/issues
serverless | Issues: forum.serverless.com
serverless |
serverless | Your Environment Information ---------------------------
serverless | Operating System: linux
serverless | Node Version: 12.20.1
serverless | Framework Version: 2.17.0 (local)
serverless | Plugin Version: 4.4.1
serverless | SDK Version: 2.3.2
serverless | Components Version: 3.4.4
serverless |
Am I still getting some race condition? Any suggestions here would be much appreciated!
The problem is likely in ECONNREFUSED 0.0.0.0:9324. Judging by the port number, it is an attempt to reach the sqs service, but the IP address is wrong. It should connect to sqs:9324 or to that container's IP address. 0.0.0.0 means 'any IP address' and is normally used to bind a port, not to connect to one. Check your Serverless configuration.
Also, you can easily check whether or not you are in a 'race condition'. To do that, simply start your services one by one in separate terminals:
docker-compose up database
docker-compose up sqs
docker-compose up sqs-create
docker-compose up serverless
If everything works when the services are started one by one, then it likely is a race condition. In that case you can add a restart: on-failure property to a service; that way, if a container exits with a code other than 0, Docker restarts it.
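For example, a minimal sketch applied to the serverless service from the compose file above (other settings unchanged):

  serverless:
    build:
      context: .
      dockerfile: Dockerfile
    restart: on-failure   # restart the container if it exits with a non-zero code, e.g. before sqs is reachable
    depends_on:
      - sqs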
It turns out my issue was actually in my serverless.yml configuration. I had a custom configuration in my serverless.yml as follows:
custom:
serverless-offline-sqs:
autoCreate: true # create queue if not exists
apiVersion: '2012-11-05'
endpoint: http://0.0.0.0:9324
region: us-east-1
accessKeyId: root
secretAccessKey: root
skipCacheInvalidation: false
The correct endpoint was actually `http://sqs:9324`. Everything else was correct!
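For reference, the corrected block (identical to the original apart from the endpoint):

custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324   # the Compose service name rather than 0.0.0.0
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false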
Here is my Dockerfile for React.js, with the error I got in the terminal:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./package.json /usr/src/app
RUN npm install
RUN npm build
EXPOSE 3000
CMD ["npm", "run", "start"]
Error:
react_1 |
react_1 | > ecom-panther#0.1.0 start /usr/src/app
react_1 | > react-scripts start
react_1 |
react_1 | ℹ 「wds」: Project is running at http://172.18.0.2/
react_1 | ℹ 「wds」: webpack output is served from
react_1 | ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
react_1 | ℹ 「wds」: 404s will fallback to /
react_1 | Starting the development server...
react_1 |
ecom-panther_react_1 exited with code 0
For Node and Express, I got this:
express_1 | (node:30) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
express_1 | server is running on port: 5000
express_1 | (node:30) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [localhost:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017]
express_1 | at Pool.<anonymous> (/usr/src/app/node_modules/mongodb/lib/core/topologies/server.js:438:11)
express_1 | at emitOne (events.js:116:13)
express_1 | at Pool.emit (events.js:211:7)
express_1 | at createConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:561:14)
express_1 | at connect (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:994:11)
express_1 | at makeConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:31:7)
express_1 | at callback (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:264:5)
express_1 | at Socket.err (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:294:7)
express_1 | at Object.onceWrapper (events.js:315:30)
express_1 | at emitOne (events.js:116:13)
express_1 | at Socket.emit (events.js:211:7)
express_1 | at emitErrorNT (internal/streams/destroy.js:73:8)
express_1 | at _combinedTickCallback (internal/process/next_tick.js:139:11)
express_1 | at process._tickCallback (internal/process/next_tick.js:181:9)
express_1 | (node:30) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
express_1 | (node:30) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Dockerfile for the backend:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
EXPOSE 5000
CMD ["npm","start"]
Here is my docker-compose.yml file
version: '3' # specify docker-compose version
# Define the service/container to be run
services:
react: #name of first service
build: client #specify the directory of docker file
ports:
- "3000:3000" #specify port mapping
express: #name of second service
build: server #specify the directory of docker file
ports:
- "5000:5000" #specify port mapping
links:
- database #link this service to the database service
database: #name of third service
image: mongo #specify the image to build the container from
ports:
- "27017:27017" #specify port mapping
How can I run the frontend in the browser, and is there an easier or better way to do this?
Error 1:
Add stdin_open: true to your react service, like:
...
services:
react: #name of first service
build: client #specify the directory of docker file
stdin_open: true
ports:
- "3000:3000" #specify port mapping
...
You might need to rebuild or clear the cache, so run "docker-compose up --build", or "docker-compose build --no-cache" followed by "docker-compose up".
Error 2:
The database connection string in your index.js file (or whatever you named it) should be:
mongodb://database:27017/
where "database" is the name of your MongoDB service. You can also use the container's IP address, which you can find with docker inspect <container>. Ideally, though, you want an ENV in your Dockerfile or docker-compose.yml:
ENV MONGO_URL mongodb://database:27017/
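The docker-compose.yml equivalent would look roughly like this (a sketch; it assumes the Express app reads the connection string from process.env.MONGO_URL):

  express:
    build: server
    environment:
      - MONGO_URL=mongodb://database:27017/   # "database" is the Compose service name
    depends_on:
      - database
    ports:
      - "5000:5000"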
I am trying to run the Concourse worker using a Docker image on a Gentoo host. When I run the worker's Docker image in privileged mode I get:
iptables: create-instance-chains: iptables: No chain/target/match by that name.
My docker-compose file is
version: '3'
services:
worker:
image: private-concourse-worker-with-keys
command: worker
ports:
- "7777:7777"
- "7788:7788"
- "7799:7799"
#restart: on-failure
privileged: true
environment:
- CONCOURSE_TSA_HOST=concourse-web-1.dev
- CONCOURSE_GARDEN_NETWORK
My Dockerfile
FROM concourse/concourse
COPY keys/tsa_host_key.pub /concourse-keys/tsa_host_key.pub
COPY keys/worker_key /concourse-keys/worker_key
Some more errors:
worker_1 | {"timestamp":"1526507528.298546791","source":"guardian","message":"guardian.create.containerizer-create.finished","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.2"}}
worker_1 | {"timestamp":"1526507528.298666477","source":"guardian","message":"guardian.create.containerizer-create.watch.watching","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.2.4"}}
worker_1 | {"timestamp":"1526507528.303164721","source":"guardian","message":"guardian.create.network.started","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.5","spec":""}}
worker_1 | {"timestamp":"1526507528.303202152","source":"guardian","message":"guardian.create.network.config-create","log_level":1,"data":{"config":{"ContainerHandle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","HostIntf":"wbpuf2nmpege-0","ContainerIntf":"wbpuf2nmpege-1","IPTablePrefix":"w--","IPTableInstance":"bpuf2nmpege","BridgeName":"wbrdg-0afe0000","BridgeIP":"x.x.0.1","ContainerIP":"x.x.0.2","ExternalIP":"x.x.0.2","Subnet":{"IP":"x.x.0.0","Mask":"/////A=="},"Mtu":1500,"PluginNameservers":null,"OperatorNameservers":[],"AdditionalNameservers":["x.x.0.2"]},"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.5","spec":""}}
worker_1 | {"timestamp":"1526507528.324085236","source":"guardian","message":"guardian.iptables-runner.command.failed","log_level":2,"data":{"argv":["/worker-state/3.6.0/assets/iptables/sbin/iptables","--wait","-A","w--instance-bpuf2nmpege-log","-m","conntrack","--ctstate","NEW,UNTRACKED,INVALID","--protocol","all","--jump","LOG","--log-prefix","426762cc-b9a8-47b0-711a-8f5c ","-m","comment","--comment","426762cc-b9a8-47b0-711a-8f5ce18ff46c"],"error":"exit status 1","exit-status":1,"session":"1.26","stderr":"iptables: No chain/target/match by that name.\n","stdout":"","took":"1.281243ms"}}
It turns out it was because the kernel module providing the iptables LOG target wasn't compiled into our distro's kernel.