I am using docker and pipenv for my virtual environment and I am getting the following error when I run docker-compose up:
ModuleNotFoundError: No module named 'rest_auth'
I tried pip install django-rest-auth and pipenv install django-rest-auth, and also added the following to my INSTALLED_APPS:
    # Django REST Framework Apps
    'rest_framework',
    'rest_framework.authtoken',
    'rest_auth',
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    # Other Package Apps
    "storages",
    # Django REST Framework Apps
    'rest_framework',
    'rest_framework.authtoken',
    'rest_auth',
    # Internal Apps
    "authentication",
]
Expected: the docker container runs and the backend is reachable on localhost:8000.
Actual: docker-compose up fails with ModuleNotFoundError: No module named 'rest_auth'.
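One thing worth checking in a setup like this: dependencies are baked into the image at build time (pipenv install --system in the Dockerfile below), so a package installed on the host only reaches the container once the Pipfile is updated and the image is rebuilt. A minimal sketch, assuming the api service name from the compose file below:

pipenv install django-rest-auth   # record the dependency in Pipfile/Pipfile.lock on the host
docker-compose build api          # rebuild so "pipenv install --system" installs it in the image
docker-compose up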
Dockerfile:
FROM python:3.7
ENV PYTHONUNBUFFERED 1
RUN apt-get update -y && \
    apt-get install -y postgresql postgresql-contrib && \
    apt-get clean
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install pipenv
RUN pipenv install --system
ENTRYPOINT ["./docker-entrypoint.sh"]
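As an aside, pipenv has a stricter install mode that aborts the build when Pipfile.lock is out of date, which surfaces missing dependencies at build time rather than at import time; a hedged variant of the install line above:

RUN pipenv install --system --deploy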
docker-entrypoint.sh:
#!/bin/bash
case "$1" in
web_app)
until psql postgres://postgres:$POSTGRES_PASSWORD#db -c '\q'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
>&2 echo "Postgres is up!"
case "$2" in
migrate)
python manage.py migrate
;;
static)
python manage.py collectstatic --clear --noinput
python manage.py collectstatic --noinput
;;
migrate_and_static)
python manage.py migrate
python manage.py collectstatic --clear --noinput
python manage.py collectstatic --noinput
;;
esac
case "$3" in
prod)
echo "Starting Gunicorn."
exec gunicorn service_health.wsgi:application \
--bind 0.0.0.0:8000 \
--workers 3 \
--access-logfile '-'
;;
local)
pipenv install --system
echo "Starting local server"
python manage.py runserver 0.0.0.0:8000
;;
esac
;;
esac
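A small aside on the wait loop: since the image installs the Postgres packages, pg_isready is also available and avoids embedding the password just to probe the server. A minimal sketch of the same loop under that assumption:

until pg_isready -h db -p 5432; do
    >&2 echo "Postgres is unavailable - sleeping"
    sleep 1
done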
docker-compose.yml:
version: '3'
services:
  postgres:
    image: postgres:11.1
    environment:
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
  api:
    build:
      context: backend
    environment:
      - POSTGRES_PASSWORD=password
    volumes:
      - $PWD/backend:/code
    ports:
      - 8000:8000
    links:
      - postgres:db
    command: web_app migrate local
  frontend:
    build:
      context: frontend
    volumes:
      - $PWD/frontend:/code
    environment:
      - NODE_ENV=development
    ports:
      - 3000:3000
I had the same error and fixed it like this:
First I ran:
pip uninstall django-rest_auth
then:
pip3 install django-rest_auth
Hope this works.
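If reinstalling on the host doesn't help, it is worth verifying where the package actually landed, since the error is raised inside the container rather than on the host. Two hedged checks, assuming the api service name from the compose file above (the entrypoint is bypassed so the commands run directly):

docker-compose run --rm --entrypoint "" api pip show django-rest-auth
docker-compose run --rm --entrypoint "" api python -c "import rest_auth"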
I have a Django API that is completely dockerized, and it works locally as well as in my Heroku deployment for production. However, when I connect the Git repo to Portainer, it pulls it successfully but does not publish all the images' ports. It only reports a port for the pgadmin image, not for the database, the Redis image, the nginx, or the Django web service itself. These are things I need to get the whole thing working. I'm not sure what's wrong or what to do about it.
This is my docker-compose.yml file:
version: "3.9"
services:
nginx:
build: ./nginx
ports:
- 8001:80
volumes:
- static-data:/vol/static
depends_on:
- web
restart: "on-failure"
redis:
image: redis:latest
ports:
- 6379:6379
volumes:
- ./config/redis.conf:/redis.conf
command: ["redis-server", "/redis.conf"]
restart: "on-failure"
db:
image: postgres:13
volumes:
- ./data/db:/var/lib/postgresql/data
env_file:
- database.env
restart: always
web:
build: .
command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8101"
container_name: vidhya_io_api
volumes:
- .:/shuddhi
ports:
- 8101:8101
depends_on:
- db
- redis
restart: "on-failure"
volumes:
database-data: # named volumes can be managed easier using docker-compose
static-data:
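One inconsistency worth flagging: the top-level volumes block declares database-data (the comment even notes that named volumes are easier to manage), but the db service mounts the bind path ./data/db instead, so the named volume is never used. A sketch of the db service actually using it, if that was the intent:

  db:
    image: postgres:13
    volumes:
      - database-data:/var/lib/postgresql/data
    env_file:
      - database.env
    restart: always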
This is the Dockerfile:
FROM python:3.8.3
LABEL maintainer="https://github.com/ryarasi"
# ENV MICRO_SERVICE=/app
# RUN addgroup -S $APP_USER && adduser -S $APP_USER -G $APP_USER
# set work directory
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
# create root directory for our project in the container
RUN mkdir /shuddhi
# COPY ./scripts /scripts
WORKDIR /shuddhi
# Copy the current directory contents into the container at /shuddhi
ADD . /shuddhi/
# Install any needed packages specified in requirements.txt
# This is to create the collectstatic folder for whitenoise
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r /requirements.txt && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media
# ENV PATH="/scripts:$PATH"
# CMD ["run.sh"]
CMD python manage.py wait_for_db && python manage.py collectstatic --noinput && python manage.py migrate && gunicorn shuddhi.wsgi:application --bind 0.0.0.0:8101
After I set up the stack and run it, the Portainer view shows that the published ports are missing for all the images other than the Redis image.
I have no idea why this is happening.
What should I do to get it all published and working?
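Since Portainer only reports what the Docker engine exposes, a quick way to see whether the ports are genuinely unpublished (rather than just not displayed) is to ask the engine directly on the host running the stack:

docker ps --format "table {{.Names}}\t{{.Ports}}"

Note also that only services with a ports: entry publish anything; the db service above defines none, so no published port is expected for it.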
Here is my docker-compose file. It does not even run the wait_for_db command. I tried putting the commands in a bash script, but that didn't work either. I want to run manage.py commands and also run celery and celery beat. Could somebody help me with writing these commands?
version: "3.7"
services:
web:
build: .
command: >
sh -c "
python app/manage.py wait_for_db &&
python app/manage.py makemigrations &&
python app/manage.py makemigrations csvreader &&
python app/manage.py migrate &&
python app/manage.py wait_for_migrate &&
python app/manage.py create_admin --username admin --password admin --noinput --email admin#admin.com &&
python app/manage.py runserver 0.0.0.0:8000 &
celery -A app --workdir app worker --loglevel=info &
celery -A app --workdir app beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler "
volumes:
- .:/djangoapp
ports:
- "8000:8000"
- "23:22"
depends_on:
- db
- broker
environment:
- DB_HOST=db
- DB_PORT=5432
- DB_NAME=mycsv
- DB_USER=postgres
- DB_PASSWORD=password
- CELERY_BROKER=amqp://admin:password#broker:5672//
restart: on-failure
db:
image: postgres:13.3-alpine
environment:
- POSTGRES_DB=mycsv
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
ports:
- "5432:5432"
broker:
image: rabbitmq
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=password
And here is my Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED=1
WORKDIR /djangoapp
RUN apt-get update
RUN apt-get install -y python3-dev build-essential
COPY requirements.txt requirements.txt
RUN pip install -U pip setuptools wheel
RUN pip install -r requirements.txt
EXPOSE 8000
EXPOSE 22
COPY . /djangoapp
It seems that none of the commands are being run
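For what it's worth, a long && chain with backgrounded processes inside a folded YAML scalar is fragile: the > block folds the newlines into spaces, and the & backgrounds the entire && chain, so a failure early in it is easy to miss and the container's lifetime is tied only to the final celery beat process. A minimal sketch of the same sequence as a shell script instead (assuming the project's custom wait_for_db, wait_for_migrate and create_admin commands exist as shown above):

#!/bin/sh
# start.sh - a sketch, not the original project's script
set -e
python app/manage.py wait_for_db
python app/manage.py makemigrations
python app/manage.py makemigrations csvreader
python app/manage.py migrate
python app/manage.py wait_for_migrate
python app/manage.py create_admin --username admin --password admin --noinput --email admin@admin.com
# run the workers in the background and keep the web server in the foreground
celery -A app --workdir app worker --loglevel=info &
celery -A app --workdir app beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler &
exec python app/manage.py runserver 0.0.0.0:8000

With set -e the container stops (and restart: on-failure restarts it) as soon as any step fails, which makes the failing command visible in the logs.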
I want to run docker-compose inside a dockerized Jenkins container. My docker-compose file works on my local machine, but when I try to do CD in Jenkins it fails with this error:
gunicorn-backend | sh: 0: Can't open /code/gunicorn/gunicorn_start.sh
Jenkinsfile:
#!groovy
node {
    environment {
        Django_secret_key = credentials('Django_secret_key')
    }
    stage("Checkout") {
        checkout scm
    }
    stage('Stop previous containers') {
        dir('backend') {
            withEnv(["PATH=$PATH:/usr/local/bin"]) {
                sh """
                docker-compose -p LBS_Platform down
                """
            }
        }
    }
    stage('Run current containers') {
        dir('backend') {
            withEnv(["PATH=$PATH:/usr/local/bin"]) {
                sh """
                docker-compose -p LBS_Platform up --build
                """
            }
        }
    }
}
The Jenkins Dockerfile and docker-compose file:
# dockerfile
FROM jenkins/jenkins:lts
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
RUN curl -L "https://github.com/docker/compose/releases/download/1.28.6/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose ; chmod +x /usr/local/bin/docker-compose
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
# docker-compose file
version: "3"
services:
jenkins:
privileged: true
build:
context: ./
container_name: jenkins
restart: always
user: root
ports:
- 8083:8080
- 50003:50000
expose:
- "8080"
- "50000"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "./jenkins_home:/var/jenkins_home"
environment:
TZ: "Asia/Seoul"
volumes:
jenkins_home:
driver: local
The docker-compose setup I want to run in the Jenkins container:
# dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED 1
RUN apt-get -y update
ARG Django_secret_key
ENV Django_secret_key $Django_secret_key
ENV BOARD_DEBUG 1
# TODO: fix the user and group later
# the user to run as
ENV USER root
# how many worker processes should Gunicorn spawn
ENV NUM_WORKERS 3
# which settings file should Django use
ENV DJANGO_SETTINGS_MODULE backend.settings
# WSGI module name
ENV DJANGO_WSGI_MODULE backend.wsgi
ENV PORT 8000
RUN echo "Starting $NAME as $(whoami)"
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y netcat
RUN chmod 755 /code/gunicorn/gunicorn_start.sh
ENTRYPOINT ["sh", "/code/gunicorn/gunicorn_start.sh"]
# docker-compose file
networks:
  app-tier:
    driver: bridge
services:
  gunicorn-backend:
    restart: always
    container_name: gunicorn-backend
    build:
      context: .
      args:
        Django_secret_key: "${Django_secret_key}"
    command: bash -c "pipenv run python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    networks:
      - app-tier
    ports:
      - "8000:8000"
  nginx-backend:
    restart: always
    container_name: nginx-backend
    image: nginx:latest
    volumes:
      - ./nginx/config:/etc/nginx/conf.d
      - ./nginx/logs:/var/backend-logs
    expose:
      - "80"
      - "443"
    ports:
      - "80:80"
      - "443:443"
    networks:
      - app-tier
    depends_on:
      - gunicorn-backend
    environment:
      - NGINX_HOST=0.0.0.0
      - NGINX_PORT=80
# gunicorn/gunicorn_start.sh
#!/bin/bash
# Name of the application
NAME="backend"
# https://stackoverflow.com/questions/4774054/reliable-way-for-a-bash-script-to-get-the-full-path-to-itself
SCRIPT_PATH=$(dirname `which $0`)
# Django project directory
# the "." path
DJANGODIR=$SCRIPT_PATH
# /Users/Han/programming/DjangoCRUDBoard
PARENT_PATH=$(cd $SCRIPT_PATH ; cd .. ; pwd)
# we will communicate using this unix socket
SOCKFILE=$PARENT_PATH/run/gunicorn.sock
echo $PARENT_PATH
# Activate the virtual environment
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
cd $PARENT_PATH
# # Block until the DB is reachable (migrations need a live DB connection)
# while ! nc -z database 5432; do sleep 1; done;
pip install --upgrade pip
pip install pipenv
pipenv install
pipenv run python manage.py makemigrations
pipenv run python manage.py migrate
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
# use pipenv
exec pipenv run gunicorn ${DJANGO_WSGI_MODULE}:application \
    -b 0.0.0.0:$PORT \
    --name $NAME \
    --workers $NUM_WORKERS \
    --user=$USER \
    --bind=unix:$SOCKFILE \
    --log-level=debug \
    --log-file=-
docker & docker-compose work on jenkins container but I don't understand why
[0m[32mgunicorn-backend |[0m sh: 0: Can't open /code/gunicorn/gunicorn_start.sh
this error showed.....
Is there any solution to solve this problem?!
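One hedged observation about this setup: because Jenkins talks to the host daemon through the mounted /var/run/docker.sock, the containers that docker-compose starts are siblings on the host, so a relative bind mount like .:/code is resolved against a path on the host, not against the Jenkins workspace inside the container. If that host path doesn't exist, the container sees an empty /code, which would explain sh being unable to open /code/gunicorn/gunicorn_start.sh. A quick way to check what actually got mounted:

docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' gunicorn-backend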
I am trying to build a Docker image but I have some problems. Here is my docker-compose.yml:
version: '3.7'
services:
  web:
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 35729:35729
    stdin_open: true
    depends_on:
      - db
  db:
    restart: always
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASS=pass
      - POSTGRES_DB=mydb
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=localhost
      - POSTGRES_HOST_AUTH_METHOD=trust
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  postgres_data:
And here is my Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/web
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
# install nodejs
RUN apk add --update nodejs nodejs-npm
RUN apk add zlib-dev jpeg-dev gcc musl-dev
# copy project
COPY . .
RUN python -m pip install -U --force-reinstall pip
RUN python -m pip install Pillow
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN pip install Pillow
# run entrypoint.sh
ENTRYPOINT ["sh", "./entrypoint.sh"]
And finally my entrypoint.sh:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
exec "$#"
When I run:
docker-compose up -d --build
it works perfectly. Then I type:
docker-compose exec web npm start --prefix ./front/
It looks ok, but when I browse to http://localhost:3000/ I get this kind of message: Error: NetworkError when attempting to fetch resource.
I think the front end is ok, but it is not able to communicate with the back end, and so not with the database either.
Could you help me please? Thank you very much!
As I can see in the docker-compose.yml file, you did not define the environment variables for Postgres in the web container. Please define the following environment variables:
DATABASE
SQL_HOST
SQL_PORT
Then bring docker down and back up; hopefully it will help you.
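A hedged sketch of what that could look like on the web service, assuming the entrypoint's variables should point at the db service (the values are guesses to adapt):

  web:
    environment:
      - DATABASE=postgres
      - SQL_HOST=db
      - SQL_PORT=5432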
I am trying to build a Docker image in multiple stages. My app exits immediately after starting up.
My Dockerfile:
################# Builder #####################
FROM python:3.6 AS dependencies
COPY ./requirements.txt requirements.txt
RUN pip install --upgrade pip
RUN pip install --user -r requirements.txt
################# Release #####################
FROM python:3.6-alpine AS release
WORKDIR /src/
COPY . /src
COPY --from=dependencies /root/.local /root/.local/
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
RUN mv /src/wait-for /bin/wait-for
RUN chmod +x /bin/wait-for
ENV PATH=/root/.local/bin:$PATH
ENTRYPOINT [ "/entrypoint.sh" ]
My docker-compose:
version: '3.4'
services:
  django_app:
    build: ./app
    command: sh -c "wait-for db:5432 && python manage.py collectstatic --no-input && python manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
    env_file:
      - ./.env
    volumes:
      - ./app:/src/
    restart: on-failure
  db:
    image: postgres:9.6
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${POSTGRESQL_DB_USER}
      - POSTGRES_PASSWORD=${POSTGRESQL_DB_PASSWORD}
      - POSTGRES_DB=${POSTGRESQL_DB_NAME}
    ports:
      - 5432:5432
    restart: on-failure
entrypoint.sh
#! /bin/sh
cd /src/ || exit
# Run migrations
echo "RUNNING MIGRATIONS" && python manage.py migrate
# echo "COLLECT STATIC" && python manage.py collectstatic --noinput
exec "$#"
If I use a Django image with a single-stage build, everything works fine. I am not able to understand the problem here.
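One detail that often bites in exactly this Debian-to-Alpine arrangement: packages installed under /root/.local in the python:3.6 (glibc) builder stage may contain compiled extensions that cannot load on the musl-based python:3.6-alpine release stage, so the app can die on its first import. The container's exit reason is visible with:

docker-compose logs django_app

If compiled dependencies turn out to be the culprit, one common fix is to keep both stages on the same base; a sketch, assuming the requirements include packages with C extensions (build-base and postgresql-dev are guesses at the needed build deps):

################# Builder #####################
FROM python:3.6-alpine AS dependencies
RUN apk add --no-cache build-base postgresql-dev
COPY ./requirements.txt requirements.txt
RUN pip install --upgrade pip
RUN pip install --user -r requirements.txt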