Docker - No module named 'celery_worker'

I am trying to build a celery container in docker, like so:
celery:
  image: dev3_web
  restart: always
  volumes:
    - ./services/web:/usr/src/app
    - ./services/web/logs:/usr/src/app
  command: celery worker -A celery_worker.celery --loglevel=INFO -Q cache
  environment:
    - CELERY_BROKER=redis://redis:6379/0
    - CELERY_RESULT_BACKEND=redis://redis:6379/0
  depends_on:
    - web
    - redis
  links:
    - redis:redis

redis:
  image: redis:5.0.3-alpine
  restart: always
  expose:
    - '6379'
  ports:
    - '6379:6379'

monitor:
  image: dev3_web
  ports:
    - 5555:5555
  command: flower -A celery_worker.celery --port=5555 --broker=redis://redis:6379/0
  depends_on:
    - web
    - redis
which runs from docker-compose-dev.yml. But I'm getting the error:
celery_1 | Traceback (most recent call last):
celery_1 | File "/usr/bin/celery", line 10, in <module>
celery_1 | sys.exit(main())
celery_1 | File "/usr/lib/python3.6/site-packages/celery/__main__.py", line 16, in main
celery_1 | _main()
celery_1 | File "/usr/lib/python3.6/site-packages/celery/bin/celery.py", line 322, in main
celery_1 | cmd.execute_from_commandline(argv)
celery_1 | File "/usr/lib/python3.6/site-packages/celery/bin/celery.py", line 496, in execute_from_commandline
celery_1 | super(CeleryCommand, self).execute_from_commandline(argv)))
celery_1 | File "/usr/lib/python3.6/site-packages/celery/bin/base.py", line 273, in execute_from_commandline
celery_1 | argv = self.setup_app_from_commandline(argv)
celery_1 | File "/usr/lib/python3.6/site-packages/celery/bin/base.py", line 479, in setup_app_from_commandline
celery_1 | self.app = self.find_app(app)
celery_1 | File "/usr/lib/python3.6/site-packages/celery/bin/base.py", line 501, in find_app
celery_1 | return find_app(app, symbol_by_name=self.symbol_by_name)
celery_1 | File "/usr/lib/python3.6/site-packages/celery/app/utils.py", line 359, in find_app
celery_1 | sym = symbol_by_name(app, imp=imp)
celery_1 | File "/usr/lib/python3.6/site-packages/celery/bin/base.py", line 504, in symbol_by_name
celery_1 | return imports.symbol_by_name(name, imp=imp)
celery_1 | File "/usr/lib/python3.6/site-packages/kombu/utils/imports.py", line 56, in symbol_by_name
celery_1 | module = imp(module_name, package=package, **kwargs)
celery_1 | File "/usr/lib/python3.6/site-packages/celery/utils/imports.py", line 104, in import_from_cwd
celery_1 | return imp(module, package=package)
celery_1 | File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
celery_1 | return _bootstrap._gcd_import(name[level:], package, level)
celery_1 | File "<frozen importlib._bootstrap>", line 994, in _gcd_import
celery_1 | File "<frozen importlib._bootstrap>", line 971, in _find_and_load
celery_1 | File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
celery_1 | ModuleNotFoundError: No module named 'celery_worker'
Folder structure:
web/
    dockerfile
    celery_worker.py
    project/
        __init__.py
web/celery_worker.py
#!/usr/bin/env python
import os
from project import celery, create_app
app = create_app()
app.app_context().push()
web/project/__init__.py
import os
# third party libs
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from celery import Celery
from flask_debugtoolbar import DebugToolbarExtension
from flask_cors import CORS
from flask_migrate import Migrate
from flask_bcrypt import Bcrypt
from flask_mail import Mail
# instantiate the db
db = SQLAlchemy()
# background processes instance
celery = Celery(__name__, broker='redis://redis:6379/0')
# extensions
toolbar = DebugToolbarExtension()
cors = CORS()
migrate = Migrate()
bcrypt = Bcrypt()
mail = Mail()
def create_app(script_info=None):
    from .api import routes

    # instantiate the app
    app = Flask(__name__)

    # set config
    app_settings = os.getenv('APP_SETTINGS')
    app.config.from_object(app_settings)

    # set up extensions
    db.init_app(app)
    toolbar.init_app(app)
    cors.init_app(app)
    migrate.init_app(app, db)
    bcrypt.init_app(app)

    # register blueprints
    routes.init_app(app)
    #models.init_app(app)

    celery.conf.update(app.config)

    # shell context for flask cli
    @app.shell_context_processor
    def ctx():
        return {'app': app, 'db': db}

    return app
In my web/Dockerfile I have set working directory like so:
(...)
# set working directory
WORKDIR /usr/src/app
(...)
This setup used to work with Flask, before I moved to Docker containers, by running the following at the project root:
celery worker -A celery_worker.celery --loglevel=INFO -Q cache
Is there something I'm missing here?

celery:
  image: dev3_web
  restart: always
  volumes:
    - ./services/web:/usr/src/app
    - ./services/web/logs:/usr/src/app # <--- here is your problem
You are mounting the logs subdirectory over the app directory in the compose file. The last mount listed wins, so the running container will only have an empty folder or some log files in /usr/src/app, and none of your code.
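A minimal sketch of one fix, assuming the app expects its logs under a logs/ subdirectory (the exact container path is an assumption): mount the logs volume inside the app mount instead of on top of it, so the code mount survives:

celery:
  image: dev3_web
  restart: always
  volumes:
    - ./services/web:/usr/src/app
    - ./services/web/logs:/usr/src/app/logs # logs no longer shadow the code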

Related

Can't connect to Milvus using Pymilvus inside docker. MilvusException: (code=2, message=Fail connecting to server on localhost:19530. Timeout)

I'm trying to connect to a Milvus server using Pymilvus. The server is up and running but I can't connect to it: MilvusException: (code=2, message=Fail connecting to server on localhost:19530. Timeout)
I'm running both using docker compose:
version: "3.5"
services:
etcd:
container_name: milvus-etcd
image: quay.io/coreos/etcd:v3.5.0
networks:
app_net:
environment:
- ETCD_AUTO_COMPACTION_MODE=revision
- ETCD_AUTO_COMPACTION_RETENTION=1000
- ETCD_QUOTA_BACKEND_BYTES=4294967296
volumes:
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
minio:
container_name: milvus-minio
image: minio/minio:RELEASE.2022-03-17T06-34-49Z
networks:
app_net:
environment:
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
volumes:
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/minio:/minio_data
command: minio server /minio_data
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
standalone:
container_name: milvus-standalone
image: milvusdb/milvus:v2.1.4
networks:
app_net:
ipv4_address: 172.16.238.10
command: ["milvus", "run", "standalone"]
environment:
ETCD_ENDPOINTS: etcd:2379
MINIO_ADDRESS: minio:9000
volumes:
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/milvus:/var/lib/milvus
ports:
- "19530:19530"
depends_on:
- "etcd"
- "minio"
fastapi:
build: ./fastapi
command: uvicorn app.main:app --host 0.0.0.0
restart: always
networks:
app_net:
ipv4_address: 172.16.238.12
environment:
MILVUS_HOST: '172.16.238.10'
depends_on:
- standalone
ports:
- "80:80"
volumes:
- pfindertest:/data/fast
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:80"]
interval: 30s
timeout: 20s
retries: 3
networks:
app_net:
driver: bridge
ipam:
driver: default
config:
- subnet: 172.16.238.0/24
gateway: 172.16.238.1
volumes:
pfindertest:
Dockerfile
FROM python:3.8
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
main.py
from fastapi import FastAPI
import uvicorn
from pymilvus import connections

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

connections.connect(
    alias="default",
    host='localhost',
    port='19530'
)
I'm getting the following error:
milvus-1-fastapi-1 | Traceback (most recent call last):
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/pymilvus/client/grpc_handler.py", line 115, in _wait_for_channel_ready
milvus-1-fastapi-1 | grpc.channel_ready_future(self._channel).result(timeout=3)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/grpc/_utilities.py", line 139, in result
milvus-1-fastapi-1 | self._block(timeout)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/grpc/_utilities.py", line 85, in _block
milvus-1-fastapi-1 | raise grpc.FutureTimeoutError()
milvus-1-fastapi-1 | grpc.FutureTimeoutError
milvus-1-fastapi-1 |
milvus-1-fastapi-1 | During handling of the above exception, another exception occurred:
milvus-1-fastapi-1 |
milvus-1-fastapi-1 | Traceback (most recent call last):
milvus-1-fastapi-1 | File "/usr/local/bin/uvicorn", line 8, in <module>
milvus-1-fastapi-1 | sys.exit(main())
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
milvus-1-fastapi-1 | return self.main(*args, **kwargs)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1055, in main
milvus-1-fastapi-1 | rv = self.invoke(ctx)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
milvus-1-fastapi-1 | return ctx.invoke(self.callback, **ctx.params)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 760, in invoke
milvus-1-fastapi-1 | return __callback(*args, **kwargs)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/uvicorn/main.py", line 404, in main
milvus-1-fastapi-1 | run(
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/uvicorn/main.py", line 569, in run
milvus-1-fastapi-1 | server.run()
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/uvicorn/server.py", line 60, in run
milvus-1-fastapi-1 | return asyncio.run(self.serve(sockets=sockets))
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
milvus-1-fastapi-1 | return loop.run_until_complete(main)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
milvus-1-fastapi-1 | return future.result()
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/uvicorn/server.py", line 67, in serve
milvus-1-fastapi-1 | config.load()
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/uvicorn/config.py", line 474, in load
milvus-1-fastapi-1 | self.loaded_app = import_from_string(self.app)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/uvicorn/importer.py", line 21, in import_from_string
milvus-1-fastapi-1 | module = importlib.import_module(module_str)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
milvus-1-fastapi-1 | return _bootstrap._gcd_import(name[level:], package, level)
milvus-1-fastapi-1 | File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
milvus-1-fastapi-1 | File "<frozen importlib._bootstrap>", line 991, in _find_and_load
milvus-1-fastapi-1 | File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
milvus-1-fastapi-1 | File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
milvus-1-fastapi-1 | File "<frozen importlib._bootstrap_external>", line 843, in exec_module
milvus-1-fastapi-1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
milvus-1-fastapi-1 | File "/code/./app/main.py", line 11, in <module>
milvus-1-fastapi-1 | connections.connect(
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/pymilvus/orm/connections.py", line 262, in connect
milvus-1-fastapi-1 | connect_milvus(**kwargs, password=password)
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/pymilvus/orm/connections.py", line 233, in connect_milvus
milvus-1-fastapi-1 | gh._wait_for_channel_ready()
milvus-1-fastapi-1 | File "/usr/local/lib/python3.8/site-packages/pymilvus/client/grpc_handler.py", line 118, in _wait_for_channel_ready
milvus-1-fastapi-1 | raise MilvusException(Status.CONNECT_FAILED, f'Fail connecting to server on {self._address}. Timeout')
milvus-1-fastapi-1 | pymilvus.exceptions.MilvusException: <MilvusException: (code=2, message=Fail connecting to server on localhost:19530. Timeout)>
milvus-1-fastapi-1 exited with code 1
The Milvus server appears to be working so that's not the problem.
NAME COMMAND SERVICE STATUS PORTS
milvus-1-fastapi-1 "uvicorn app.main:ap…" fastapi restarting 0.0.0.0:80->80/tcp
milvus-etcd "etcd -advertise-cli…" etcd running 2379-2380/tcp
milvus-minio "/usr/bin/docker-ent…" minio running (healthy) 9000/tcp
milvus-standalone "/tini -- milvus run…" standalone running 0.0.0.0:9091->9091/tcp, 0.0.0.0:19530->19530/tcp
I'm running Docker on Mac if that is important. I tried using gitpod.io but the error remains.

Docker containers inside same network not able to communicate with each other

I have three docker containers running in the same network. I used docker-compose to bring up the containers. The docker-compose script is:
version: '3.5'
services:
  ### Jesse's Workspace ################################################
  jesse:
    image: salehmir/jesse:latest
    depends_on:
      - postgres
      - redis
    tty: true
    env_file:
      - ../.env
    ports:
      - "9000:9000"
      # Jupyter Port
      - "8888:8888"
    volumes:
      - ../:/home
    container_name: jesse
    command: bash -c "jesse install-live --no-strict && jesse run"

  ### PostgreSQL ################################################
  postgres:
    image: postgres:14-alpine
    restart: always
    environment:
      - POSTGRES_USER=jesse_user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=jesse_db
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    container_name: postgres

  ### Redis ################################################
  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"
    container_name: redis
    command: redis-server --save "" --appendonly no

volumes:
  postgres-data:
Since I have not specified networks, I have checked that all the containers are running inside the docker_default bridge network. DNS resolution by container name works fine inside the containers, but ping doesn't work, nor does any other type of connectivity.
Since I have exposed port 6379 of the redis container, I am able to connect to Redis from my host system at 127.0.0.1:6379. But from any other container the connection is refused. I have tried spinning up another Ubuntu container inside the same network, and noticed that I don't have internet connectivity inside the containers, i.e., no outgoing traffic. I am guessing this is something OS-specific, as the same setup runs smoothly on my Mac.
I have checked the ufw firewall status, which is inactive.
The jesse container is trying to connect to redis, which is not accepting any connections.
Traceback (most recent call last):
jesse | File "/usr/local/bin/jesse", line 33, in <module>
jesse | sys.exit(load_entry_point('jesse', 'console_scripts', 'jesse')())
jesse | File "/usr/local/bin/jesse", line 25, in importlib_load_entry_point
jesse | return next(matches).load()
jesse | File "/usr/local/lib/python3.9/importlib/metadata.py", line 77, in load
jesse | module = import_module(match.group('module'))
jesse | File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
jesse | return _bootstrap._gcd_import(name[level:], package, level)
jesse | File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
jesse | File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
jesse | File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
jesse | File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
jesse | File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
jesse | File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
jesse | File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
jesse | File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
jesse | File "<frozen importlib._bootstrap_external>", line 850, in exec_module
jesse | File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
jesse | File "/jesse-docker/jesse/__init__.py", line 12, in <module>
jesse | from jesse.services import auth as authenticator
jesse | File "/jesse-docker/jesse/services/auth.py", line 5, in <module>
jesse | from jesse.services.env import ENV_VALUES
jesse | File "/jesse-docker/jesse/services/env.py", line 18, in <module>
jesse | if jh.is_unit_testing():
jesse | File "/jesse-docker/jesse/helpers.py", line 368, in is_unit_testing
jesse | from jesse.config import config
jesse | File "/jesse-docker/jesse/config.py", line 2, in <module>
jesse | from jesse.modes.utils import get_exchange_type
jesse | File "/jesse-docker/jesse/modes/utils.py", line 3, in <module>
jesse | from jesse.services import logger
jesse | File "/jesse-docker/jesse/services/logger.py", line 3, in <module>
jesse | from jesse.services.redis import sync_publish
jesse | File "/jesse-docker/jesse/services/redis.py", line 23, in <module>
jesse | async_redis = asyncio.run(init_redis())
jesse | File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run
jesse | return loop.run_until_complete(main)
jesse | File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
jesse | return future.result()
jesse | File "/jesse-docker/jesse/services/redis.py", line 12, in init_redis
jesse | return await aioredis.create_redis_pool(
jesse | File "/usr/local/lib/python3.9/site-packages/aioredis/commands/__init__.py", line 188, in create_redis_pool
jesse | pool = await create_pool(address, db=db,
jesse | File "/usr/local/lib/python3.9/site-packages/aioredis/pool.py", line 58, in create_pool
jesse | await pool._fill_free(override_min=False)
jesse | File "/usr/local/lib/python3.9/site-packages/aioredis/pool.py", line 383, in _fill_free
jesse | conn = await self._create_new_connection(self._address)
jesse | File "/usr/local/lib/python3.9/site-packages/aioredis/connection.py", line 111, in create_connection
jesse | reader, writer = await asyncio.wait_for(open_connection(
jesse | File "/usr/local/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
jesse | return await fut
jesse | File "/usr/local/lib/python3.9/site-packages/aioredis/stream.py", line 23, in open_connection
jesse | transport, _ = await get_event_loop().create_connection(
jesse | File "/usr/local/lib/python3.9/asyncio/base_events.py", line 1056, in create_connection
jesse | raise exceptions[0]
jesse | File "/usr/local/lib/python3.9/asyncio/base_events.py", line 1041, in create_connection
jesse | sock = await self._connect_sock(
jesse | File "/usr/local/lib/python3.9/asyncio/base_events.py", line 955, in _connect_sock
jesse | await self.sock_connect(sock, address)
jesse | File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 502, in sock_connect
jesse | return await fut
jesse | File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 537, in _sock_connect_cb
jesse | raise OSError(err, f'Connect call failed {address}')
jesse | TimeoutError: [Errno 110] Connect call failed ('172.18.0.2', 6379)
The python code that is used to connect:
async def init_redis():
    return await aioredis.create_redis_pool(
        address=(ENV_VALUES['REDIS_HOST'], ENV_VALUES['REDIS_PORT']),
        password=ENV_VALUES['REDIS_PASSWORD'] or None,
        db=int(ENV_VALUES.get('REDIS_DB') or 0),
    )
The .env values
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=
docker ps:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2a741ee69b20 postgres:14-alpine "docker-entrypoint.s…" About an hour ago Up 58 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp postgres
9012709c0bd1 redis:6-alpine "docker-entrypoint.s…" About an hour ago Up 58 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
I tried to ping the redis container from postgres container like this:
docker exec -it 2a ping redis
PING redis (172.18.0.2): 56 data bytes
^C
--- redis ping statistics ---
26 packets transmitted, 0 packets received, 100% packet loss
So DNS resolution works fine, but communication is not working, even though I can connect to Redis from my host system.
The containers have to run in the same Docker network: https://docs.docker.com/compose/networking/
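For illustration, a minimal compose sketch (the explicit backend network and its name are additions, not from the question) attaching every service to one named bridge network, so each is reachable by its service name:

version: '3.5'
services:
  jesse:
    image: salehmir/jesse:latest
    networks:
      - backend
  postgres:
    image: postgres:14-alpine
    networks:
      - backend
  redis:
    image: redis:6-alpine
    networks:
      - backend
networks:
  backend:
    driver: bridge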

How to run airflow 2.0+ with openlineage and marquez in docker?

I'm trying to run the DAG in this example with Airflow 2.0+. I set up an Airflow project on Docker following this example, and I want to integrate it with OpenLineage. I wonder how I can do that? I set environment variables for OpenLineage in the .env file that looks like below:
I git cloned the marquez repo on GitHub and got Marquez running following the README. I suppose OpenLineage will emit events to port 5000, where Marquez listens, but when I browse localhost:3000, which is the Marquez UI, it shows no jobs found.
Here is my project directory:
Here is my yaml file, which is exactly the same as this link:
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Basic Airflow cluster configuration for CeleryExecutor with Redis and PostgreSQL.
#
# WARNING: This configuration is for local development. Do not use it in a production deployment.
#
# This configuration supports basic configuration using environment variables or an .env file
# The following variables are supported:
#
# AIRFLOW_IMAGE_NAME - Docker image name used to run Airflow.
# Default: apache/airflow:2.3.2
# AIRFLOW_UID - User ID in Airflow containers
# Default: 50000
# Those configurations are useful mostly in case of standalone testing/running Airflow in test/try-out mode
#
# _AIRFLOW_WWW_USER_USERNAME - Username for the administrator account (if requested).
# Default: airflow
# _AIRFLOW_WWW_USER_PASSWORD - Password for the administrator account (if requested).
# Default: airflow
# _PIP_ADDITIONAL_REQUIREMENTS - Additional PIP requirements to add when starting all containers.
# Default: ''
#
# Feel free to modify this file to suit your needs.
---
version: '3'
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.3.2}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    # For backward compatibility, with Airflow <2.3
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 5s
      retries: 5
    restart: always

  redis:
    image: redis:latest
    expose:
      - 6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always

  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - 8080:8080
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-worker:
    <<: *airflow-common
    command: celery worker
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    environment:
      <<: *airflow-common-env
      # Required to handle warm shutdown of the celery workers properly
      # See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation
      DUMB_INIT_SETSID: "0"
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-triggerer:
    <<: *airflow-common
    command: triggerer
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-init:
    <<: *airflow-common
    entrypoint: /bin/bash
    # yamllint disable rule:line-length
    command:
      - -c
      - |
        function ver() {
          printf "%04d%04d%04d%04d" $${1//./ }
        }
        airflow_version=$$(gosu airflow airflow version)
        airflow_version_comparable=$$(ver $${airflow_version})
        min_airflow_version=2.2.0
        min_airflow_version_comparable=$$(ver $${min_airflow_version})
        if (( airflow_version_comparable < min_airflow_version_comparable )); then
          echo
          echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
          echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
          echo
          exit 1
        fi
        if [[ -z "${AIRFLOW_UID}" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
          echo "If you are on Linux, you SHOULD follow the instructions below to set "
          echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
          echo "For other operating systems you can get rid of the warning with manually created .env file:"
          echo "    See: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#setting-the-right-airflow-user"
          echo
        fi
        one_meg=1048576
        mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
        cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
        disk_available=$$(df / | tail -1 | awk '{print $$4}')
        warning_resources="false"
        if (( mem_available < 4000 )) ; then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
          echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
          echo
          warning_resources="true"
        fi
        if (( cpus_available < 2 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
          echo "At least 2 CPUs recommended. You have $${cpus_available}"
          echo
          warning_resources="true"
        fi
        if (( disk_available < one_meg * 10 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
          echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
          echo
          warning_resources="true"
        fi
        if [[ $${warning_resources} == "true" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
          echo "Please follow the instructions to increase amount of resources available:"
          echo "   https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#before-you-begin"
          echo
        fi
        mkdir -p /sources/logs /sources/dags /sources/plugins
        chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
        exec /entrypoint airflow version
    # yamllint enable rule:line-length
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
      _PIP_ADDITIONAL_REQUIREMENTS: ''
    user: "0:0"
    volumes:
      - .:/sources

  airflow-cli:
    <<: *airflow-common
    profiles:
      - debug
    environment:
      <<: *airflow-common-env
      CONNECTION_CHECK_MAX_COUNT: "0"
    # Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
    command:
      - bash
      - -c
      - airflow

  # You can enable flower by adding "--profile flower" option e.g. docker-compose --profile flower up
  # or by explicitly targeted on the command line e.g. docker-compose up flower.
  # See: https://docs.docker.com/compose/profiles/
  flower:
    <<: *airflow-common
    command: celery flower
    profiles:
      - flower
    ports:
      - 5555:5555
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

volumes:
  postgres-db-volume:
and this is my current .env file:
error message:
hdee@openlineageDEV:~/airflow-docker$ sudo docker-compose up airflow-init
WARNING: Found orphan containers (airflow-docker_marquez_1, airflow-docker_marquez_web_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
airflow-docker_redis_1 is up-to-date
airflow-docker_postgres_1 is up-to-date
Starting airflow-docker_airflow-init_1 ... done
Attaching to airflow-docker_airflow-init_1
airflow-init_1 | The container is run as root user. For security, consider using a regular user account.
airflow-init_1 | ....................
airflow-init_1 | ERROR! Maximum number of retries (20) reached.
airflow-init_1 |
airflow-init_1 | Last check result:
airflow-init_1 | $ airflow db check
airflow-init_1 | [2022-06-15 06:30:30,724] {configuration.py:484} ERROR - No module named 'openlineage'
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/configuration.py", line 482, in getimport
airflow-init_1 | return import_string(full_qualified_path)
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/module_loading.py", line 32, in import_string
airflow-init_1 | module = import_module(module_path)
airflow-init_1 | File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
airflow-init_1 | return _bootstrap._gcd_import(name[level:], package, level)
airflow-init_1 | File "<frozen importlib._bootstrap>", line 994, in _gcd_import
airflow-init_1 | File "<frozen importlib._bootstrap>", line 971, in _find_and_load
airflow-init_1 | File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
airflow-init_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
airflow-init_1 | File "<frozen importlib._bootstrap>", line 994, in _gcd_import
airflow-init_1 | File "<frozen importlib._bootstrap>", line 971, in _find_and_load
airflow-init_1 | File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
airflow-init_1 | ModuleNotFoundError: No module named 'openlineage'
airflow-init_1 |
airflow-init_1 | During handling of the above exception, another exception occurred:
airflow-init_1 |
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/bin/airflow", line 8, in <module>
airflow-init_1 | sys.exit(main())
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/__main__.py", line 40, in main
airflow-init_1 | args.func(args)
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/cli_parser.py", line 47, in command
airflow-init_1 | func = import_string(import_path)
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/module_loading.py", line 32, in import_string
airflow-init_1 | module = import_module(module_path)
airflow-init_1 | File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
airflow-init_1 | return _bootstrap._gcd_import(name[level:], package, level)
airflow-init_1 | File "<frozen importlib._bootstrap>", line 994, in _gcd_import
airflow-init_1 | File "<frozen importlib._bootstrap>", line 971, in _find_and_load
airflow-init_1 | File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
airflow-init_1 | File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
airflow-init_1 | File "<frozen importlib._bootstrap_external>", line 678, in exec_module
airflow-init_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/commands/db_command.py", line 24, in <module>
airflow-init_1 | from airflow.utils import cli as cli_utils, db
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/db.py", line 27, in <module>
airflow-init_1 | from airflow.jobs.base_job import BaseJob # noqa: F401
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/jobs/__init__.py", line 19, in <module>
airflow-init_1 | import airflow.jobs.backfill_job
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/jobs/backfill_job.py", line 28, in <module>
airflow-init_1 | from airflow import models
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/__init__.py", line 20, in <module>
airflow-init_1 | from airflow.models.baseoperator import BaseOperator, BaseOperatorLink
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/baseoperator.py", line 206, in <module>
airflow-init_1 | class BaseOperator(Operator, LoggingMixin, TaskMixin, metaclass=BaseOperatorMeta):
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/baseoperator.py", line 999, in BaseOperator
airflow-init_1 | def post_execute(self, context: Any, result: Any = None):
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/lineage/__init__.py", line 103, in apply_lineage
airflow-init_1 | _backend = get_backend()
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/lineage/__init__.py", line 52, in get_backend
airflow-init_1 | clazz = conf.getimport("lineage", "backend", fallback=None)
airflow-init_1 | File "/home/airflow/.local/lib/python3.6/site-packages/airflow/configuration.py", line 486, in getimport
airflow-init_1 | f'The object could not be loaded. Please check "{key}" key in "{section}" section. '
airflow-init_1 | airflow.exceptions.AirflowConfigException: The object could not be loaded. Please check "backend" key in "lineage" section. Current value: "openlineage.lineage_backend.OpenLineageBackend".
airflow-init_1 |
airflow-docker_airflow-init_1 exited with code 1
To set up OpenLineage in Airflow, you can add an environment variable (AIRFLOW__LINEAGE__BACKEND) to the common environment block of the docker-compose file, like so:
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.3.2}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    # For backward compatibility, with Airflow <2.3
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth'
    AIRFLOW__LINEAGE__BACKEND: 'openlineage.lineage_backend.OpenLineageBackend'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
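Note that the traceback above ends in ModuleNotFoundError: No module named 'openlineage', so the backend class must also be importable inside the containers. A sketch of one way to do that without building a custom image, using the _PIP_ADDITIONAL_REQUIREMENTS hook this compose file already supports (openlineage-airflow is the OpenLineage integration package for Airflow):

environment:
  &airflow-common-env
  # ...other variables as above...
  AIRFLOW__LINEAGE__BACKEND: 'openlineage.lineage_backend.OpenLineageBackend'
  _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-openlineage-airflow}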

Redis and flask app: web container does not want to start, IndexError: pop from empty list

I am trying to solve a college assignment: running an application written in Flask + Redis. I have to use Docker for this. With docker-compose up, Redis starts correctly; however, the Flask application throws errors and I don't really know why.
Some code of the flask application
If I understand this correctly, I need to declare environment variables in the Dockerfile, because it is from these that the application will get the IP address and port of Redis:
app = Flask(__name__)
redis = redis.Redis(host=os.environ.get('REDIS_HOST'),
                    password=None,
                    port=os.environ.get('REDIS_PORT'),
                    db=0)
My Dockerfile
ARG PYTHON_VERSION=3.7-alpine
FROM python:${PYTHON_VERSION}
ENV REDIS_HOST 127.0.0.1 \
REDIS_PORT 6379
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8000", "main:app"]
And docker-compose
version: "3"
services:
web:
build: .
container_name: "python_app"
ports:
- "8000:8000"
depends_on:
- redis
redis:
image: "redis:alpine"
container_name: "redis"
ports:
- "6379:6379"
docker-compose build
Creating network "tt_default" with the default driver
Pulling redis (redis:alpine)...
alpine: Pulling from library/redis
Digest: sha256:fa785f9bd167b94a6b30210ae32422469f4b0f805f4df12733c2f177f500d1ba
Status: Downloaded newer image for redis:alpine
Building web
Sending build context to Docker daemon 10.75kB
Step 1/7 : ARG PYTHON_VERSION=3.7-alpine
Step 2/7 : FROM python:${PYTHON_VERSION}
---> a436fb2c575c
Step 3/7 : ENV REDIS_HOST 127.0.0.1 REDIS_PORT 6379
---> Running in ad3a17ce15e9
Removing intermediate container ad3a17ce15e9
---> 937330185f34
Step 4/7 : COPY requirements.txt .
---> d81cbb22f113
Step 5/7 : RUN pip install -r requirements.txt
---> Running in 1c0bac282a92
Collecting Flask==1.1.2
Downloading Flask-1.1.2-py2.py3-none-any.whl (94 kB)
Collecting redis==3.4.1
Downloading redis-3.4.1-py2.py3-none-any.whl (71 kB)
Collecting gunicorn<20,>=19
Downloading gunicorn-19.10.0-py2.py3-none-any.whl (113 kB)
Collecting itsdangerous>=0.24
Downloading itsdangerous-2.0.1-py3-none-any.whl (18 kB)
Collecting click>=5.1
Downloading click-8.0.1-py3-none-any.whl (97 kB)
Collecting Jinja2>=2.10.1
Downloading Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting Werkzeug>=0.15
Downloading Werkzeug-2.0.1-py3-none-any.whl (288 kB)
Collecting importlib-metadata
Downloading importlib_metadata-4.8.1-py3-none-any.whl (17 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.0.1.tar.gz (18 kB)
Collecting typing-extensions>=3.6.4
Downloading typing_extensions-3.10.0.2-py3-none-any.whl (26 kB)
Collecting zipp>=0.5
Downloading zipp-3.5.0-py3-none-any.whl (5.7 kB)
Building wheels for collected packages: MarkupSafe
Building wheel for MarkupSafe (setup.py): started
Building wheel for MarkupSafe (setup.py): finished with status 'done'
Created wheel for MarkupSafe: filename=MarkupSafe-2.0.1-py3-none-any.whl size=9761 sha256=43b5e0d8ef8bcbadc8e8d6845f85b770ad2b918760d7541b8c3f9c403ab04b14
Stored in directory: /root/.cache/pip/wheels/1a/18/04/e3b5bd888f000c2716bccc94a565239f9defc47ef93d9e7bea
Successfully built MarkupSafe
Installing collected packages: zipp, typing-extensions, MarkupSafe, importlib-metadata, Werkzeug, Jinja2, itsdangerous, click, redis, gunicorn, Flask
Successfully installed Flask-1.1.2 Jinja2-3.0.1 MarkupSafe-2.0.1 Werkzeug-2.0.1 click-8.0.1 gunicorn-19.10.0 importlib-metadata-4.8.1 itsdangerous-2.0.1 redis-3.4.1 typing-extensions-3.10.0.2 zipp-3.5.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Removing intermediate container 1c0bac282a92
---> 6dddf2a4ad27
Step 6/7 : COPY . .
---> 37bd8f541844
Step 7/7 : CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8000", "main:app"]
---> Running in f19c3226fff2
Removing intermediate container f19c3226fff2
---> 861a5c53a545
Successfully built 861a5c53a545
Successfully tagged tt_web:latest
WARNING: Image for service web was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating redis ... done
Creating python_app ... done
Attaching to redis, python_app
Part of the logs which I get after using docker-compose up
Attaching to redis, python_app
redis | 1:C 27 Sep 2021 14:36:16.063 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 27 Sep 2021 14:36:16.063 # Redis version=6.2.5, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 27 Sep 2021 14:36:16.063 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | 1:M 27 Sep 2021 14:36:16.064 * monotonic clock: POSIX clock_gettime
redis | 1:M 27 Sep 2021 14:36:16.065 * Running mode=standalone, port=6379.
redis | 1:M 27 Sep 2021 14:36:16.065 # Server initialized
redis | 1:M 27 Sep 2021 14:36:16.065 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis | 1:M 27 Sep 2021 14:36:16.065 * Ready to accept connections
python_app | [2021-09-27 14:36:16 +0000] [1] [INFO] Starting gunicorn 19.10.0
python_app | [2021-09-27 14:36:16 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
python_app | [2021-09-27 14:36:16 +0000] [1] [INFO] Using worker: sync
python_app | [2021-09-27 14:36:16 +0000] [8] [INFO] Booting worker with pid: 8
python_app | [2021-09-27 14:36:16 +0000] [9] [INFO] Booting worker with pid: 9
python_app | [2021-09-27 14:36:16 +0000] [8] [ERROR] Exception in worker process
python_app | Traceback (most recent call last):
python_app | File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 1179, in get_connection
python_app | connection = self._available_connections.pop()
python_app | IndexError: pop from empty list
python_app |
python_app | During handling of the above exception, another exception occurred:
python_app |
python_app | Traceback (most recent call last):
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 586, in spawn_worker
python_app | worker.init_process()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 135, in init_process
python_app | self.load_wsgi()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
python_app | self.wsgi = self.app.wsgi()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
python_app | self.callable = self.load()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
python_app | return self.load_wsgiapp()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
python_app | return util.import_app(self.app_uri)
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app
python_app | __import__(module)
python_app | File "/main.py", line 21, in <module>
python_app | redis.set('sessionvisitors', 0)
python_app | File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 1766, in set
python_app | return self.execute_command('SET', *pieces)
python_app | File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 875, in execute_command
python_app | conn = self.connection or pool.get_connection(command_name, **options)
python_app | File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 1181, in get_connection
python_app | connection = self.make_connection()
python_app | File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 1220, in make_connection
python_app | return self.connection_class(**self.connection_kwargs)
python_app | File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 502, in __init__
python_app | self.port = int(port)
python_app | TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
python_app | [2021-09-27 14:36:16 +0000] [8] [INFO] Worker exiting (pid: 8)
python_app | [2021-09-27 14:36:16 +0000] [10] [INFO] Booting worker with pid: 10
python_app | [2021-09-27 14:36:16 +0000] [9] [ERROR] Exception in worker process
python_app | Traceback (most recent call last):
python_app | File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 1179, in get_connection
python_app | connection = self._available_connections.pop()
python_app | IndexError: pop from empty list
python_app |
python_app | During handling of the above exception, another exception occurred:
python_app |
python_app | Traceback (most recent call last):
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 586, in spawn_worker
python_app | worker.init_process()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 135, in init_process
python_app | self.load_wsgi()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
python_app | self.wsgi = self.app.wsgi()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
python_app | self.callable = self.load()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
python_app | return self.load_wsgiapp()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
python_app | return util.import_app(self.app_uri)
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app
python_app | __import__(module)
python_app | File "/main.py", line 21, in <module>
python_app | redis.set('sessionvisitors', 0)
python_app | File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 1766, in set
python_app | return self.execute_command('SET', *pieces)
python_app | File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 875, in execute_command
python_app | conn = self.connection or pool.get_connection(command_name, **options)
python_app | File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 1181, in get_connection
python_app | connection = self.make_connection()
python_app | File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 1220, in make_connection
python_app | return self.connection_class(**self.connection_kwargs)
python_app | File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 502, in __init__
python_app | self.port = int(port)
python_app | TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
python_app | [2021-09-27 14:36:16 +0000] [9] [INFO] Worker exiting (pid: 9)
python_app | Traceback (most recent call last):
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 203, in run
python_app | self.manage_workers()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 548, in manage_workers
python_app | self.spawn_workers()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 620, in spawn_workers
python_app | time.sleep(0.1 * random.random())
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 245, in handle_chld
python_app | self.reap_workers()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 528, in reap_workers
python_app | raise HaltServer(reason, self.WORKER_BOOT_ERROR)
python_app | gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
python_app |
python_app | During handling of the above exception, another exception occurred:
python_app |
python_app | Traceback (most recent call last):
python_app | File "/usr/local/bin/gunicorn", line 8, in <module>
python_app | sys.exit(run())
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 61, in run
python_app | WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 223, in run
python_app | super(Application, self).run()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 72, in run
python_app | Arbiter(self).run()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 232, in run
python_app | self.halt(reason=inst.reason, exit_status=inst.exit_status)
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 345, in halt
python_app | self.stop()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 396, in stop
python_app | time.sleep(0.1)
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 245, in handle_chld
python_app | self.reap_workers()
python_app | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 528, in reap_workers
python_app | raise HaltServer(reason, self.WORKER_BOOT_ERROR)
python_app | gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
If you look at the stack trace closely, the app error you're seeing is because the Redis connection fails to open. The reason for the failed connection is that REDIS_HOST is set incorrectly to 127.0.0.1 inside your Dockerfile. To fix this, the values for REDIS_HOST and REDIS_PORT should actually be passed into your app container by docker-compose, since that's the layer that actually knows where Redis lives. Your Dockerfile is just for your app container, which depends on Redis but doesn't have a clue where it might be running.
Since compose makes services available at hostnames equal to the service name by default, Redis should be reachable at just tcp://redis:6379, so I would give these values a shot to start with:
web:
  build: .
  container_name: "python_app"
  ports:
    - "8000:8000"
  depends_on:
    - redis
  environment:
    REDIS_HOST: redis
    REDIS_PORT: 6379
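On the application side, a small defensive sketch (the fallback values are assumptions) that falls back to the compose service name and the standard Redis port when the variables are unset, instead of passing None into the client:

import os
import redis

# Fall back to the compose service name and default Redis port if the
# environment variables are missing; cast the port to int explicitly so
# a bad value fails loudly rather than as int(None) deep in the client.
redis_client = redis.Redis(
    host=os.environ.get('REDIS_HOST', 'redis'),
    port=int(os.environ.get('REDIS_PORT', '6379')),
    password=None,
    db=0,
)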

Odoo Bus.bus unavailable

I regularly get the following error message while running Odoo v14 locally in Docker:
odoo-14.0-stage | 2021-04-26 10:51:00,476 10 ERROR update odoo.http: Exception during JSON request handling.
odoo-14.0-stage | Traceback (most recent call last):
odoo-14.0-stage | File "/home/odoo/addons/odoo/odoo/addons/base/models/ir_http.py", line 237, in _dispatch
odoo-14.0-stage | result = request.dispatch()
odoo-14.0-stage | File "/home/odoo/addons/odoo/odoo/http.py", line 683, in dispatch
odoo-14.0-stage | result = self._call_function(**self.params)
odoo-14.0-stage | File "/home/odoo/addons/odoo/odoo/http.py", line 359, in _call_function
odoo-14.0-stage | return checked_call(self.db, *args, **kwargs)
odoo-14.0-stage | File "/home/odoo/addons/odoo/odoo/service/model.py", line 94, in wrapper
odoo-14.0-stage | return f(dbname, *args, **kwargs)
odoo-14.0-stage | File "/home/odoo/addons/odoo/odoo/http.py", line 347, in checked_call
odoo-14.0-stage | result = self.endpoint(*a, **kw)
odoo-14.0-stage | File "/home/odoo/addons/odoo/odoo/http.py", line 912, in __call__
odoo-14.0-stage | return self.method(*args, **kw)
odoo-14.0-stage | File "/home/odoo/addons/odoo/odoo/http.py", line 531, in response_wrap
odoo-14.0-stage | response = f(*args, **kw)
odoo-14.0-stage | File "/home/odoo/addons/odoo/addons/bus/controllers/main.py", line 35, in poll
odoo-14.0-stage | raise Exception("bus.Bus unavailable")
odoo-14.0-stage | Exception
odoo-14.0-stage |
odoo-14.0-stage | The above exception was the direct cause of the following exception:
odoo-14.0-stage |
odoo-14.0-stage | Traceback (most recent call last):
odoo-14.0-stage | File "/home/odoo/addons/odoo/odoo/http.py", line 639, in _handle_exception
odoo-14.0-stage | return super(JsonRequest, self)._handle_exception(exception)
odoo-14.0-stage | File "/home/odoo/addons/odoo/odoo/http.py", line 315, in _handle_exception
odoo-14.0-stage | raise exception.with_traceback(None) from new_cause
odoo-14.0-stage | Exception: bus.Bus unavailable
My odoo.conf file:
[options]
# Service Settings
addons_path = /home/odoo/addons/odoo/addons,/home/odoo/addons/extra,/home/odoo/custom/custom_addons,/home/odoo/custom/edited_addons,/home/odoo/custom/paysy_addons
data_dir = /var/lib/odoo
# Database
db_host = postgres-12.2
db_user = odoo_13_0_stage
db_password = password
# Tuning Options
workers = 2
max_cron_threads = 1
limit_time_cpu = 600
limit_time_real = 1200
osv_memory_age_limit = 1.0
osv_memory_count_limit = False
# Network / Ports
xmlrpc_port = 8069
netrpc_port = 8070
xmlrpcs_port = 8071
longpolling_port = 8072
proxy_mode = True
I think it has something to do with longpolling, but I'm not sure. As you can see, I already set proxy_mode to True, configured the longpolling port, and set two workers. I also tried configuring zero or more than two workers, as suggested elsewhere.
Hopefully someone can help.
PS: Following my docker-compose file:
version: "3.9"
services:
odoo-14.0-stage:
container_name: odoo-14.0-stage
image: odoo-14.0:stage
build: ./volumes/
ports:
- 13001:8069/tcp
- 8070:8070
- 8071:8071
- 8072:8072
depends_on:
- postgres-12.2
volumes:
- ./config:/etc/odoo:ro
- ./volumes:/home/odoo/addons
- ./addons:/home/odoo/custom
- ./data:/var/lib/odoo
restart: always
postgres-12.2:
container_name: postgres-12.2
image: postgres:12.2
build: ./postgres/12.2/
volumes:
- ./postgres/12.2/volumes/data:/var/lib/postgresql/data:delegated
restart: always
Those logs are a cryptic way for Odoo to tell you that you need to configure your proxy correctly.
Odoo's normal operations are done through the main (also called HTTP) port, which defaults to 8069. However, long polling requests are a bit different:
In threaded mode (workers = 0, best for development), they pass through the same 8069 port.
In multiprocess mode (workers = 2 or more, best for production), they use a specific process which listens by default on port 8072.
Your browser knows nothing about how Odoo is configured. It just makes all requests through the same port, whatever it is (tip: HTTP uses 80 by default, and HTTPS uses 443).
That's why, if you are using Odoo in multiprocess mode, there must be a reverse proxy sitting between the web browser and Odoo, directing requests to the right port depending on the path.
Odoo docs give an example nginx configuration and more details. Check them out: https://www.odoo.com/documentation/14.0/administration/install/deploy.html#id7
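For reference, a minimal nginx sketch in the spirit of that documentation (the server name and upstream addresses are illustrative and assume nginx runs on the host where ports 8069 and 8072 are published): route /longpolling to the dedicated longpolling process and everything else to the HTTP workers:

upstream odoo {
    server 127.0.0.1:8069;
}
upstream odoochat {
    server 127.0.0.1:8072;
}
server {
    listen 80;
    server_name odoo.example.com;

    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    # longpolling requests go to the dedicated worker process (port 8072)
    location /longpolling {
        proxy_pass http://odoochat;
    }

    # all other requests go to the main HTTP workers (port 8069)
    location / {
        proxy_redirect off;
        proxy_pass http://odoo;
    }
}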
