I am unable to run a Docker container with a gunicorn entrypoint. I receive the error:
gunicorn_1 | /entrypoint.sh: 46: exec: gunicorn: not found
I'm building the Docker images (gunicorn + nginx), pushing them to a container registry, and then pulling the images onto the server. I receive the error when I run docker-compose up on the server.
Dockerfile (gunicorn):
FROM python:3.9.5-slim-buster
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r /usr/src/app/requirements.txt
COPY . /usr/src/app/
EXPOSE 8000
Dockerfile (nginx):
FROM nginx:1.19-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
Docker-compose:
version: '2'
services:
  gunicorn:
    image: link_to_built_gunicorn_image
    command: gunicorn --bind 0.0.0.0:8000 wsgi:app
    ports:
      - 8000:8000
  nginx:
    image: link_to_built_nginx_image
    ports:
      - 80:80
    depends_on:
      - gunicorn
Requirements.txt:
Flask
oauthlib
pyOpenSSL
gunicorn
I've looked at similar posts (1, 2 and 3) and checked the following:
gunicorn is included in the requirements.txt
Tried downgrading gunicorn version in requirements.txt
Additional information:
Ubuntu 20.04.3 LTS OS hosted with Digital Ocean
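Separately from the missing executable, the compose command `gunicorn wsgi:app` expects a module named `wsgi` exposing a callable `app`. As a sanity check that is independent of Flask and Docker, a minimal PEP 3333 callable that gunicorn could load looks like this (a sketch for illustration only; the real wsgi.py presumably imports the Flask app instead):

```python
# wsgi.py -- minimal WSGI callable that `gunicorn wsgi:app` can load.
# In a real Flask project this file would just do `from myapp import app`;
# the inline callable below is a hypothetical stand-in.

def app(environ, start_response):
    # Every WSGI app receives the request environ and a start_response
    # callback, and returns an iterable of byte strings.
    body = b"gunicorn is serving this app"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

If `gunicorn wsgi:app` fails even with a stub like this, the problem is the gunicorn installation in the image, not the application code.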
I'm trying to create a GET API endpoint in Python Flask and run Flask in a Docker container, but I'm unable to access the API running inside the container. I tried looking at other answers, but nothing worked.
server.py
# Flask import and app creation restored for completeness
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000, debug=True)
Dockerfile
# syntax=docker/dockerfile:1
FROM python:3.10.4
WORKDIR /code
ENV FLASK_APP=server.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_DEBUG="true"
EXPOSE 5000
RUN pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
# CMD ["python","server.py"]
CMD ["flask", "run", "--host", "0.0.0.0"]
docker-compose.yml
version: "3.9"
services:
  flask-server:
    build: ./flask-server/
    ports:
      - "5000:5000"
    restart: on-failure
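When a containerized API is unreachable from the host, the usual culprit is the bind address: a server bound to 127.0.0.1 inside the container is invisible to the published port, while 0.0.0.0 accepts connections on every interface, including Docker's bridge. The difference can be sketched with plain sockets (illustration only; no Flask or Docker involved):

```python
import socket

def bound_address(host):
    """Bind a listening socket to `host` on an ephemeral port and
    report the address it actually listens on."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))       # port 0 -> OS picks a free port
    s.listen(1)
    addr = s.getsockname()  # (ip, port) the server listens on
    s.close()
    return addr[0]

# 127.0.0.1 only accepts loopback traffic from inside the same network
# namespace (i.e. the container itself)...
print(bound_address("127.0.0.1"))  # -> 127.0.0.1
# ...while 0.0.0.0 listens on all interfaces, which is what the Docker
# port mapping needs in order to reach the process.
print(bound_address("0.0.0.0"))    # -> 0.0.0.0
```

This is why both `app.run(host="0.0.0.0", ...)` and `flask run --host 0.0.0.0` are needed for the published port to work.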
I have two Docker files. I can't build my project (FastAPI + PostgreSQL). I need to run my FastAPI app.
docker-compose.yml:
version: '3.9'
services:
  web:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000
    volumes:
      - ./app:/app
    ports:
      - 8000:8000
    depends_on:
      - db
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=db_ok
    expose:
      - 5432
volumes:
  postgres_data:
Dockerfile:
FROM python:3.9
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY . ./app
COPY requirements.txt requirements.txt
# RUN python -m pip install --upgrade pip
RUN pip install -r requirements.txt
# CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
I also have a requirements.txt file.
Error:
ERROR: Could not find a version that satisfies the requirement apt-clone==0.2.1 (from versions: none)
ERROR: No matching distribution found for apt-clone==0.2.1
WARNING: You are using pip version 22.0.4; however, version 22.3.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
ERROR: Service 'web' failed to build : Build failed
How can I fix this problem?
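The failing requirement, apt-clone, is a Debian/Ubuntu system package rather than a PyPI distribution, which is why pip inside the container cannot find it. Requirements files produced by running `pip freeze` against a system Python often pick up such packages. A hedged sketch of filtering them out (the blocklist names are examples, not a definitive list):

```python
# Filter out distro-only packages that pip cannot install from PyPI.
# The names below are examples of packages that commonly leak into a
# `pip freeze` taken on a system Python -- extend as needed.
SYSTEM_ONLY = {"apt-clone", "apt-xapian-index", "python-apt", "ubuntu-drivers-common"}

def clean_requirements(lines):
    """Return requirement lines whose distribution name is not in the
    system-only blocklist."""
    kept = []
    for line in lines:
        name = line.split("==")[0].strip().lower()
        if name and name not in SYSTEM_ONLY:
            kept.append(line.strip())
    return kept

reqs = ["fastapi==0.88.0", "apt-clone==0.2.1", "uvicorn==0.20.0"]
print(clean_requirements(reqs))  # -> ['fastapi==0.88.0', 'uvicorn==0.20.0']
```

The cleaner long-term fix is to generate requirements.txt from inside a virtual environment, so only the project's actual dependencies are recorded in the first place.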
I'm learning how to run Django REST with Docker. I created an image, and it works when I use the command: docker run -p 8000:8000 docker_django_tutorial
But now I want to run this image through a docker-compose.yml file. Here is mine (it's based on a YouTube video, which is why I don't understand why it doesn't work for me):
version: '3'
services:
  monapp:
    image: docker_django_tutorial
    ports:
      - 8000:8000
    networks:
      - monreaseau
networks:
  monreseau:
When I run docker-compose up I've got the following error:
service "monapp" refers to undefined network monreaseau: invalid compose project
Just in case, here is my Dockerfile use for my image docker_django_tutorial:
#Use the Python3.7.2 container image
FROM python:3.7.2-stretch
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
#RUN python3 manage.py runserver
Thank you for your answers.
Someone helped me rewrite my files like this:
Dockerfile:
#Use the Python3.7.2 container image
FROM python:latest
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
#CMD ["python3", "manage.py", "runserver", "0.0.0.0:90"]
#RUN python3 manage.py runserver
docker-compose.yml:
version: '3'
services:
  monapp:
    image: docker_django_tutorial
    ports:
      - 90:90
    build: .
    command: python3 manage.py runserver 0.0.0.0:90
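For what it's worth, the original error message already points at the root cause: the service references monreaseau while the top-level network is declared as monreseau. Keeping the custom network (instead of dropping it, as in the rewrite above) only requires the two spellings to match, e.g.:

```yaml
# Spelling must match between the service's networks list
# and the top-level networks declaration.
services:
  monapp:
    image: docker_django_tutorial
    ports:
      - 8000:8000
    networks:
      - monreseau
networks:
  monreseau:
```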
I have a docker project which uses docker-compose to stand up an API container and a database container. I also have it setup to create volumes for uploads and logs. When I build a new image, and then stand it up using docker-compose up, any data that was previously in the logs or uploads ceases to exist, but the data in our db-data volume persists. What do I need to change in my process and/or config to preserve the data in those other two volumes?
Here's how we stand up the docker:
docker-compose down
docker build -t api:latest .
docker save -o api.tar api:latest postgres:14
docker load -i api.tar
docker-compose up -d
Here's the Dockerfile:
# syntax=docker/dockerfile:1
FROM cupy/cupy AS deps
COPY nodesource_setup.sh .
RUN bash nodesource_setup.sh
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install libssl-dev ca-certificates cmake nodejs libgl-dev libglib2.0-0 -y
RUN pip3 install opencv-python python-dotenv
WORKDIR /app
COPY ./python ./ml_services/
COPY ./api/ .
RUN npm config set unsafe-perm true
RUN npm ci
# Rebuild the source only when needed
FROM deps AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=deps /app .
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build
FROM deps AS runner
WORKDIR /app
ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED=1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/ml_services ./ml_services
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/.env.production.local ./.env
COPY --from=builder /app/prisma ./prisma
COPY --from=builder /app/.next ./.next
RUN mkdir -p /logs
RUN mkdir -p /uploads
RUN chown -R nextjs ./ /logs /uploads
# Cleanup
RUN apt-get clean && rm -rf /var/lib/apt
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["npm", "run", "deploy"]
And the docker-compose.yml
version: "3.7"
services:
  database:
    image: postgres:14
    restart: always
    env_file: .env.production.local
    container_name: postgres
    healthcheck:
      test: "pg_isready --username=autocal && psql --username=autocal --list"
      timeout: 10s
      retries: 20
    volumes:
      - db-data:/var/lib/postgresql/data
  api:
    image: api:latest
    ports:
      - 3000:3000
    depends_on: [database]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    volumes:
      - uploads:/uploads
      - logs:/logs
volumes:
  db-data:
  uploads:
  logs:
Make sure you are writing logs and uploads to /logs and /uploads in your API code.
It might be that they are being written to ~/logs and ~/uploads (in the home directory of the execution user).
Try to use absolute paths rather than relative paths in your code.
Please go through the Node.js docs for the path module: https://nodejs.org/api/path.html
It absolutely ended up being a code issue in the API layer. Sigh. Thanks, all.
I am trying to setup a django project and dockerize it.
I'm having trouble running the container.
As far as I can tell, it successfully builds, but fails to run.
This is the error I get:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./entrpoint.sh\": stat ./entrpoint.sh: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
This is the dockerfile:
FROM python:3.6
RUN mkdir /backend
WORKDIR /backend
ADD . /backend/
RUN pip install -r requirements.txt
RUN apt-get update \
&& apt-get install -yyq netcat
RUN chmod 755 entrypoint.sh
ENTRYPOINT ["./entrpoint.sh"]
This is the compose file:
version: '3.7'
services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=django
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=database
  web:
    restart: on-failure
    build: .
    container_name: backend
    volumes:
      - .:/backend
    env_file:
      - ./api/.env
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    hostname: web
    depends_on:
      - db
volumes:
  postgres_data:
And there is an entrypoint file which runs pending migrations, if any. Here is the script:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi

python manage.py migrate
exec "$@"
Where am I going wrong?
The problem is not that entrypoint.sh is missing, but that the nc command is.
To solve this you have to install the netcat package.
Since python:3.6 is based on debian buster, you can simply add the following command after the FROM directive:
RUN apt-get update \
&& apt-get install -yyq netcat
EDIT for further improvements:
copy only the requirements.txt, install the packages then copy the rest. This will improve the cache usage and every build (after the first) will be faster (unless you touch the requirements.txt)
replace the ADD with COPY unless you're exploding a tarball
The result should look like this:
FROM python:3.6
RUN apt-get update \
&& apt-get install -yyq netcat
RUN mkdir /backend
WORKDIR /backend
COPY requirements.txt /backend/
RUN pip install -r requirements.txt
COPY . /backend/
ENTRYPOINT ["./entrypoint.sh"]
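As a further refinement, the netcat dependency could be dropped entirely: the image already ships Python, so the wait loop in entrypoint.sh can be done with the standard library. A sketch under that assumption (the host/port arguments mirror the script's $SQL_HOST and $SQL_PORT):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.1):
    """Block until a TCP connection to (host, port) succeeds,
    mirroring the `while ! nc -z $SQL_HOST $SQL_PORT` loop.
    Returns True on success, False if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the database is accepting
            # connections, same as `nc -z` reporting the port open.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

The entrypoint could then call something like `python -c "from wait_db import wait_for_port; import os, sys; sys.exit(0 if wait_for_port(os.environ['SQL_HOST'], int(os.environ['SQL_PORT'])) else 1)"` instead of depending on netcat (module name and wiring here are hypothetical).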