This is my Dockerfile for a simple Django project:
FROM python:3.10.5-alpine
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
RUN adduser -D appuser
USER appuser
WORKDIR /home/appuser/
COPY requirements.txt .
RUN python -m pip install --user --no-cache-dir --disable-pip-version-check --requirement requirements.txt
COPY . .
ENTRYPOINT [ "./entrypoint.sh" ]
And Compose file:
version: "3.9"
services:
app:
build: ./app
ports:
- "8000:8000"
volumes:
- ./app:/app
restart: unless-stopped
And entrypoint.sh:
#!/bin/sh
python manage.py makemigrations
python manage.py migrate
python manage.py collectstatic --no-input
python manage.py runserver 0.0.0.0:8000
And the structure of the project:
app
|----project
|----venv
|----static
|----.dockerignore
|----Dockerfile
|----entrypoint.sh
|----manage.py
|----requirements.txt
docker-compose.yaml
Everything works, except that when python manage.py migrate creates the db.sqlite3 file, and python manage.py collectstatic copies static files into the static folder (both inside the image's app folder), I don't see them in the app folder on my host machine. Perhaps my understanding of how volumes work is incorrect?
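One hedged observation, not a confirmed answer: the compose file bind-mounts ./app at /app, but the Dockerfile sets WORKDIR /home/appuser/, so migrate and collectstatic write db.sqlite3 and static/ into the image's home directory, which no volume covers; a bind mount only reflects files written under its target path. One sketch of aligning the two is to work in the mounted directory instead:
WORKDIR /app        # hypothetical change: write where the volume is mounted
COPY requirements.txt .
With that, db.sqlite3 and static/ land under /app and therefore appear in ./app on the host.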
I'm learning how to run Django REST with Docker. I created an image, and it works when I use the command: docker run -p 8000:8000 docker_django_tutorial
But now I want to run this image through a docker-compose.yml file. Here is mine (it's based on a YouTube video, which is why I don't understand why it doesn't work for me):
version: '3'
services:
  monapp:
    image: docker_django_tutorial
    ports:
      - 8000:8000
    networks:
      - monreaseau
networks:
  monreseau:
When I run docker-compose up I get the following error:
service "monapp" refers to undefined network monreaseau: invalid compose project
Just in case, here is my Dockerfile use for my image docker_django_tutorial:
#Use the Python3.7.2 container image
FROM python:3.7.2-stretch
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
#RUN python3 manage.py runserver
Thank you for your answers.
Someone helped me rewrite my files like this:
Dockerfile:
#Use the Python3.7.2 container image
FROM python:latest
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
#CMD ["python3", "manage.py", "runserver", "0.0.0.0:90"]
#RUN python3 manage.py runserver
docker-compose.yml:
version: '3'
services:
  monapp:
    image: docker_django_tutorial
    ports:
      - 90:90
    build: .
    command: python3 manage.py runserver 0.0.0.0:90
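With build: . present, docker-compose can rebuild the image and start the container in one step; a usage sketch, assuming docker-compose.yml sits next to the Dockerfile:
docker-compose up --build
The command: line overrides the now commented-out CMD, which is why the rewritten Dockerfile no longer needs one.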
I have a Django API that is completely dockerized, and it works locally as well as in my Heroku deployment for production. However, when I try to connect the Git repo to Portainer, it pulls the repo successfully but doesn't publish all the images: it only gives the port for the pgadmin image, not for the database, the Redis image, the nginx, or the Django web service itself. These are all things I need to get the whole thing working. I'm not sure what's wrong or what to do about it.
This is my docker-compose.yml file:-
version: "3.9"
services:
nginx:
build: ./nginx
ports:
- 8001:80
volumes:
- static-data:/vol/static
depends_on:
- web
restart: "on-failure"
redis:
image: redis:latest
ports:
- 6379:6379
volumes:
- ./config/redis.conf:/redis.conf
command: ["redis-server", "/redis.conf"]
restart: "on-failure"
db:
image: postgres:13
volumes:
- ./data/db:/var/lib/postgresql/data
env_file:
- database.env
restart: always
web:
build: .
command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8101"
container_name: vidhya_io_api
volumes:
- .:/shuddhi
ports:
- 8101:8101
depends_on:
- db
- redis
restart: "on-failure"
volumes:
database-data: # named volumes can be managed easier using docker-compose
static-data:
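One hedged observation on this file: only services with a ports: entry publish anything to the host, and db above declares none, so no tool can show a published port for it. If the host ever needed to reach Postgres directly, the service would need a mapping such as:
db:
  image: postgres:13
  ports:
    - 5432:5432    # hypothetical; unnecessary if only other containers use the db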
This is the Dockerfile:-
FROM python:3.8.3
LABEL maintainer="https://github.com/ryarasi"
# ENV MICRO_SERVICE=/app
# RUN addgroup -S $APP_USER && adduser -S $APP_USER -G $APP_USER
# set work directory
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
# create root directory for our project in the container
RUN mkdir /shuddhi
# COPY ./scripts /scripts
WORKDIR /shuddhi
# Copy the current directory contents into the container at /shuddhi
ADD . /shuddhi/
# Install any needed packages specified in requirements.txt
# This is to create the collectstatic folder for whitenoise
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r /requirements.txt && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media
# ENV PATH="/scripts:$PATH"
# CMD ["run.sh"]
CMD python manage.py wait_for_db && python manage.py collectstatic --noinput && python manage.py migrate && gunicorn shuddhi.wsgi:application --bind 0.0.0.0:8101
After I set up the stack and run it, Portainer shows published ports only for the Redis image; they are missing for all the other images.
I have no idea why this is happening.
What should I do to get it all published and working?
I have a Dockerfile which has the command RUN python3 manage.py dumpdata --natural-foreign --exclude=auth.permission --exclude=contenttypes --indent=4 > data.json; this creates a JSON file.
When I build the Dockerfile it creates an image with a specific name, and when I run it with the command below and open a bash shell, I am able to see the data.json file that was created:
docker run -it --rm vijeth11/fassionplaza bash
[screenshot: files in the Docker container created via the above command]
When I use the same image and run docker compose run web bash,
I am not able to see the data.json file, while the other files are present in the container.
[screenshot: files in the Docker container created via Docker Compose]
Is there anything wrong with my Docker commands?
Command used to build:
docker build --no-cache -t vijeth11/fassionplaza .
Docker-compose.yml
version: "3"
services:
db:
image: postgres
environment:
- POSTGRES_DB=fashionplaza
ports:
- "5432:5432"
web:
image: vijeth11/fassionplaza
command: >
sh -c "ls -l && python3 manage.py makemigrations && python3 manage.py migrate && python3 manage.py loaddata data.json && gunicorn --bind :8000 --workers 3 FashionPlaza.wsgi"
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
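For context on why the two shells differ, a hedged note: docker run starts the bare image with no mounts, while docker compose run web bash applies the service definition above, including volumes: - .:/code. For example:
docker run -it --rm vijeth11/fassionplaza bash    # /code is exactly what the image built
docker compose run --rm web bash                  # host directory is mounted over /code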
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY ./Backend /code/Backend
COPY ./frontEnd /code/frontEnd
WORKDIR /code/Backend
RUN pip3 install -r requirements.txt
WORKDIR /code/Backend/FashionPlaza
RUN python3 manage.py dumpdata --natural-foreign \
    --exclude=auth.permission --exclude=contenttypes \
    --indent=4 > data.json
RUN chmod 755 data.json
WORKDIR /code/frontEnd/FashionPlaza
RUN apt-get update -y
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt install nodejs -y
RUN npm i
RUN npm run prod
ARG buildtime_variable=PROD
ENV server_type=$buildtime_variable
WORKDIR /code/Backend/FashionPlaza
Thank you in advance.
You map your current directory to /code when you run the service, via these lines in your docker-compose file:
volumes:
  - .:/code
That hides all existing files in /code and replaces it with the mapped directory.
Since your data.json file is located in /code/Backend/FashionPlaza in the image, it becomes hidden and inaccessible.
The best thing to do is to map your volumes to empty directories in the image, so you don't inadvertently hide anything.
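A minimal sketch of that advice, using a hypothetical directory name: narrow the mount so it no longer covers the built files, or drop the volumes: entry entirely and rely on the image contents.
web:
  image: vijeth11/fassionplaza
  volumes:
    - ./media:/code/Backend/FashionPlaza/media    # hypothetical empty target; data.json stays visible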
I have a web app built with Django (and Angular as a compiled app), and I'm trying to run it using Docker containers.
Here is my docker-compose.yml file:
version: '3'
services:
  web:
    build: ./
    command: bash -c "python manage.py migrate && python manage.py collectstatic --noinput && gunicorn decoderservice.wsgi:application --bind 0.0.0.0:8000"
    ports:
      - "8000:8000"
    env_file:
      - ./.env.dev
Dockerfile:
FROM python:3.7
WORKDIR /orionprotocolservice/
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
COPY . /orionprotocolservice/
The program works, but performance is extremely slow: about 17 seconds to load the main page. Routing between pages is also slow. Interestingly, if I click some button twice, the page loads immediately.
What is happening, and who could help?
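One knob worth ruling out, an assumption rather than a diagnosis: gunicorn defaults to a single synchronous worker, so concurrent requests queue behind each other. Adding workers, as other compose files in this thread already do, is a cheap test:
command: bash -c "python manage.py migrate && python manage.py collectstatic --noinput && gunicorn decoderservice.wsgi:application --bind 0.0.0.0:8000 --workers 3"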
I am trying to build a Docker image in multiple stages. My app exits immediately after starting up.
My Dockerfile:
################# Builder #####################
FROM python:3.6 AS dependencies
COPY ./requirements.txt requirements.txt
RUN pip install --upgrade pip
RUN pip install --user -r requirements.txt
################# Release #####################
FROM python:3.6-alpine AS release
WORKDIR /src/
COPY . /src
COPY --from=dependencies /root/.local /root/.local/
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
RUN mv /src/wait-for /bin/wait-for
RUN chmod +x /bin/wait-for
ENV PATH=/root/.local/bin:$PATH
ENTRYPOINT [ "/entrypoint.sh" ]
My docker-compose:
version: '3.4'
services:
  django_app:
    build: ./app
    command: sh -c "wait-for db:5432 && python manage.py collectstatic --no-input && python manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
    env_file:
      - ./.env
    volumes:
      - ./app:/src/
    restart: on-failure
  db:
    image: postgres:9.6
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${POSTGRESQL_DB_USER}
      - POSTGRES_PASSWORD=${POSTGRESQL_DB_PASSWORD}
      - POSTGRES_DB=${POSTGRESQL_DB_NAME}
    ports:
      - 5432:5432
    restart: on-failure
volumes:
  postgres_data:
entrypoint.sh
#! /bin/sh
cd /src/ || exit
# Run migrations
echo "RUNNING MIGRATIONS" && python manage.py migrate
# echo "COLLECT STATIC" && python manage.py collectstatic --noinput
exec "$#"
If I use a Django image with a single-stage build, everything works fine. I am not able to understand the problem here.
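Beside the libc question raised above, one line in entrypoint.sh stands out as a hedged suspect: "$#" expands to the number of arguments, not the arguments themselves, so the compose command: is never executed and the container exits as soon as the migrations finish. The usual pass-through form is:
exec "$@"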