Overriding Wagtail modeladmin templates in Docker on Windows and Ubuntu

I'm following this article to override a modeladmin template, placing it under:
templates/modeladmin/app-name/model-name/
(see "Overriding templates" in the Wagtail modeladmin documentation)
The override works on Windows 11 with Docker, but it does not work on Ubuntu.
I don't understand why I have to use volumes to override the modeladmin template, or why it only works on Windows. It also works on Windows without Docker.
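To make the convention concrete, here is a sketch of the expected layout, assuming an app label library and a model named Book (both names are hypothetical; the directories use their lowercased forms):

project/
  templates/
    modeladmin/
      library/
        book/
          index.html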
Dockerfile
FROM python:3.8.1-slim-buster
RUN useradd wagtail
EXPOSE 8088
ENV PYTHONUNBUFFERED=1 \
    PORT=8088
RUN apt-get update --yes --quiet && apt-get install --yes --quiet --no-install-recommends \
    build-essential \
    libpq-dev \
    libmariadbclient-dev \
    libjpeg62-turbo-dev \
    zlib1g-dev \
    libwebp-dev \
    && rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip
RUN pip install "gunicorn==20.0.4"
COPY requirements.txt /
RUN pip install -r /requirements.txt
WORKDIR /app
RUN chown wagtail:wagtail /app
COPY --chown=wagtail:wagtail . /app
USER wagtail
RUN python manage.py collectstatic --noinput --clear --no-post-process
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    command: gunicorn web.wsgi:application --bind 0.0.0.0:8088
    volumes:
      - ./project/templates:/app/project/templates
    ports:
      - '8088:8088'
Can you give me some pointers on how to override it on Ubuntu? Thanks.
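One pointer, offered as a guess since the question doesn't show the actual directory names: Linux filesystems are case-sensitive while Windows is not, so any difference in casing between the directories on disk and the lowercased app-name/model-name that Wagtail looks up can make the override work on Windows yet silently fail on Ubuntu. It is also worth verifying that the templates are actually visible inside the running container:

# Check what the container sees at the mounted template path
docker-compose exec web ls -R /app/project/templates/modeladmin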

Related

Run 32-bit app on Ubuntu 20.04 Docker container

I built an Ubuntu image using the following Dockerfile:
FROM ubuntu:20.04
# Disable Prompt During Packages Installation
ARG DEBIAN_FRONTEND=noninteractive
# Add 32bit architecture
RUN dpkg --add-architecture i386 \
    && apt-get update \
    && apt-get install -y libc6:i386 libncurses5:i386 libstdc++6:i386 zlib1g:i386
RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
    && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
RUN apt-get update && apt-get install -y \
    iputils-ping \
    python3 python3-pip
# Copy app to container
COPY . /app
WORKDIR /app
# Install pip requirements
COPY requirements.txt /app
RUN python3 -m pip install -r requirements.txt
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["bash"]
I've been trying to run a 32-bit app (hence the first RUN command in the Dockerfile) that I have inside the my_app directory, using:
./app
but I keep getting
bash: ./app: No such file or directory
I built your Dockerfile with no errors. Do you have more detail?
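One common cause worth checking (an assumption, since the question doesn't show the binary itself): when bash prints "No such file or directory" for a file that plainly exists, the file is often a 32-bit ELF executable whose dynamic loader is missing from the system. You can see what the binary expects with file:

# Print the binary's architecture and the interpreter it requires
file ./app
# A 32-bit ELF typically reports "interpreter /lib/ld-linux.so.2";
# that loader comes from the libc6:i386 package installed in the Dockerfile above.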

Docker runs on Windows and only on one of two Linux systems

I have a docker image that I have built that runs on my windows laptop as expected. When I copy and load it on to one of my two Linux systems I get this error when I run docker logs:
Error: 'docker/semantic_search_django/gunicorn.conf' doesn't exist
When I inspect the running container on Windows I can see that "missing" file! Furthermore, if I copy and load the same docker image to my second Linux system, it runs as expected.
This issue just happened today. I've been having success on all 3 systems for the past couple of months until today. Any suggestions would be greatly appreciated. Both Linux systems are running Ubuntu 18.04.5 LTS.
I've tried renaming the images, I've stopped and started the Docker daemon, and I've even restarted both Linux boxes.
Here are the commands I have used:
docker pull my.artifactory.com/ciee_ssrdjango
docker-compose up -d
My docker-compose.yml
version: "3.8"
services:
web:
image: m.artifactory.com/ciee_ssrdjango
env_file:
- proxy.env
- django.env
container_name: ciee_ssrdjango
volumes:
- query-results-volume:/code
expose:
- "${SSRDJANGO_PORT}"
extra_hosts:
dbhost: ${POSTGRES_DOCKER_IP}
depends_on:
- db
networks:
- ssr_network
networks:
ssr_network:
external: true
volumes:
postgresql-volume:
external: true
query-results-volume:
external: true
My Dockerfile:
FROM ubuntu:18.04
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
COPY ./requirements.txt /requirements.txt
#prevents being asked to set TZ
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update -y && \
    apt -y upgrade && \
    apt install -y python3-pip && \
    apt install -y build-essential libssl-dev libffi-dev libpq-dev python3-dev && \
    apt install -y software-properties-common python3.8
RUN python3 -m pip install --upgrade pip setuptools wheel
ENV TZ=US/Eastern
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt update -y & apt install gcc libxml2-dev libxslt-dev postgresql postgresql-contrib postgresql-plpython-10 --no-install-recommends unixodbc-dev unixodbc libpq-dev -y
RUN mkdir /code # && mkdir /code/ciee
RUN pip install nltk
RUN export PATH=~/.local/bin:$PATH
RUN pip install -r /requirements.txt
COPY . /code/
WORKDIR /code
RUN useradd -m user && chmod 777 /home/user && mkdir /code/query_results && chmod 777 /code/query_results
USER user
CMD ["gunicorn", "semantic_search_django.wsgi:application", "--config", "docker/semantic_search_django/gunicorn.conf", "--keep-alive", "600"]
Here's the thing, I've been using these files and commands successfully for many weeks.
I can make one assumption: you are mounting query-results-volume into the /code directory in the container, and your conf file is located inside it. The volume persists between containers – that's the nature of volumes. So, somehow, the file in question (or even its folder) has been removed from the volume on the problem machine, and now the container cannot find it.
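To spell out the mechanism: Docker populates a named volume from the image's content only the first time an empty volume is mounted; after that, the volume's contents shadow whatever the image ships at /code, so a file deleted from the volume stays missing even in a brand-new container. A sketch of how you could verify and reset it (ubuntu:18.04 here is just a throwaway shell):

# Inspect what the named volume actually contains, independent of the app image
docker run --rm -v query-results-volume:/code ubuntu:18.04 ls -l /code/docker/semantic_search_django
# If the conf file is gone, recreate the volume empty so the next container
# start repopulates it from the image (this deletes everything stored in it!)
docker-compose down
docker volume rm query-results-volume
docker volume create query-results-volume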

Docker Debian nc command not found

When I build my Debian image with docker-compose, using the command $ docker-compose -f docker-compose-dev.yml build web and the file below:
docker-compose-dev.yml
services:
  web:
    build:
      context: ./services/web
      dockerfile: Dockerfile-dev
    volumes:
      - './services/web:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
      - SECRET_KEY=my_precious
    depends_on:
      - web-db
      - redis
Although it appears to build all packages successfully, I'm getting:
web_1| /usr/src/app/entrypoint.sh: 5: /usr/src/app/entrypoint.sh: nc: not found
If I change #!/bin/sh to #!/bin/bash, error log changes:
web_1| /usr/src/app/entrypoint.sh: line 5: nc: command not found
Dockerfile:
FROM python:3.7-slim-buster
RUN apt-get update && apt-get -y dist-upgrade
RUN apt-get -y install build-essential libssl-dev libffi-dev libblas3 libc6 liblapack3 gcc python3-dev python3-pip cython3
RUN apt-get -y install python3-numpy python3-scipy
# set working directory
WORKDIR /usr/src/app
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip3 install -r requirements.txt
# add entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# add app
COPY . /usr/src/app
# run server
CMD ["/usr/src/app/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
echo "Waiting for postgres..."
while ! nc -z web-db 5432; do
  sleep 0.1
done
rm -rf celery_logs/*
echo "PostgreSQL started"
python manage.py run -h 0.0.0.0
Note: this entrypoint configuration used to work with Alpine, and now has changed to Debian.
What am I missing?
Update the Dockerfile and append:
RUN apt install -y netcat
It should look like:
FROM python:3.7-slim-buster
RUN apt-get update && apt-get -y dist-upgrade
RUN apt-get -y install build-essential libssl-dev libffi-dev libblas3 libc6 liblapack3 gcc python3-dev python3-pip cython3
RUN apt-get -y install python3-numpy python3-scipy
RUN apt install -y netcat
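For context on why this broke after moving from Alpine to Debian: Alpine's busybox provides a built-in nc, while Debian slim images ship without any netcat package. After editing the Dockerfile, rebuild so the package is baked into the image (commands assume the compose file from the question):

docker-compose -f docker-compose-dev.yml build web
docker-compose -f docker-compose-dev.yml up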

Docker - No such file or directory. Copy does not copy all files

The Dockerfile does not copy all of the files from the local directory.
I don't understand why it copies the backend folder, yet not all of the files end up inside it.
Thanks for the help.
structure:
docker/
  django/
    Dockerfile
    backend/
      requirements.txt
      src/
  angular4/
    Dockerfile
    client/
  docker-compose.yml
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:9.6
    hostname: db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: alinta
    ports:
      - "5432:5432"
  backend:
    build: ./django
    image: alinta
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/var/www/alinta/
    ports:
      - "8000:8000"
    depends_on:
      - db
  migration:
    image: alinta
    command: python3 manage.py migrate --noinput
    volumes:
      - .:/var/www/alinta/
    depends_on:
      - db
  frontend:
    build: ./angular4
    volumes:
      - .:/var/www/alinta
    ports:
      - "4200:4200"
Dockerfile (django):
FROM postgres:9.6
RUN apt-get update && apt-get install -q -y postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6 postgresql-client-common postgresql-common
#RUN echo postgres:postgres | chpasswd
#RUN pg_createcluster 9.6 main --start
#RUN /etc/init.d/postgresql start
FROM python:3.7
MAINTAINER Nikita Alekseev <nik_alekseev@outlook.com>
# Alinta
# Version: 1.0
# Install Python and Package Libraries
RUN apt-get update && apt-get upgrade -y && apt-get autoremove && apt-get autoclean
RUN apt-get install -y \
    libffi-dev \
    libssl-dev \
    libxml2-dev \
    libxslt-dev \
    libjpeg-dev \
    libfreetype6-dev \
    zlib1g-dev \
    net-tools \
    vim
RUN apt-get install -y \
    python3-pip \
    python3-dev \
    python3-virtualenv \
    libpq-dev \
    postgresql \
    postgresql-contrib \
    nginx \
    curl
RUN pip3 install virtualenv
# Project Files and Settings
ARG PROJECT=alinta
ARG PROJECT_DIR=/var/www/${PROJECT}
RUN mkdir -p $PROJECT_DIR/backend/src
RUN mkdir -p $PROJECT_DIR/backend/src/static
RUN mkdir -p $PROJECT_DIR/backend/src/media
#WORKDIR $PROJECT_DIR
COPY ./backend /var/www/alinta/
WORKDIR $PROJECT_DIR/backend
RUN virtualenv -p python3.7 --no-site-packages env
RUN /bin/bash -c "source env/bin/activate"
RUN pip3 install -r requirements.txt
WORKDIR $PROJECT_DIR/backend/src
EXPOSE 8000
docker-compose build
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
In your Dockerfile, requirements.txt lives in the backend directory, not at the top level of the build context, so adding or copying it explicitly will make it visible:
so add:
ADD ./backend/requirements.txt requirements.txt
or
COPY ./backend/requirements.txt requirements.txt
before you run
RUN pip3 install -r requirements.txt
Just copying the backend directory as:
COPY ./backend /var/www/alinta/
does not put the file where the later RUN expects it: COPY copies the contents of backend into /var/www/alinta/, so requirements.txt lands at /var/www/alinta/requirements.txt while the build runs from WORKDIR /var/www/alinta/backend. An explicit ADD or COPY of requirements.txt tells Docker exactly where to place the file.
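A minimal sketch of the corrected section of the Dockerfile, assuming the directory layout from the question:

# Copy the dependency list first so pip can find it in the working directory
COPY ./backend/requirements.txt /var/www/alinta/backend/requirements.txt
# Copy the rest of the backend code into the same directory
COPY ./backend /var/www/alinta/backend/
WORKDIR /var/www/alinta/backend
RUN pip3 install -r requirements.txt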

Docker-compose does not reflect changes in requirements.txt

Changes in my requirements.txt are not being reflected when I run:
docker-compose -f docker-compose-dev.yml up -d
docker-compose-dev.yml
version: '3.6'
services:
  web:
    build:
      context: ./services/web
      dockerfile: Dockerfile-dev
    volumes:
      - './services/web:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    depends_on:
      - web-db
  web-db:
    build:
      context: ./services/web/project/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  nginx:
    build:
      context: ./services/nginx
      dockerfile: Dockerfile-dev
    restart: always
    ports:
      - 80:80
    depends_on:
      - web
      - client
  client:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    volumes:
      - './services/client:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - 3007:3000
    environment:
      - NODE_ENV=development
      - REACT_APP_WEB_SERVICE_URL=${REACT_APP_WEB_SERVICE_URL}
    depends_on:
      - web
Dockerfile-dev
# base image
FROM python:3.6-alpine
# install dependencies
RUN apk update && \
apk add --virtual build-deps gcc python-dev musl-dev && \
apk add libffi-dev && \
apk add postgresql-dev && \
apk add netcat-openbsd && \
apk add bind-tools && \
apk add --update --no-cache g++ libxslt-dev && \
apk add jpeg-dev zlib-dev
ENV PACKAGES="\
dumb-init \
musl \
libc6-compat \
linux-headers \
build-base \
bash \
git \
ca-certificates \
freetype \
libgfortran \
libgcc \
libstdc++ \
openblas \
tcl \
tk \
libssl1.0 \
"
ENV PYTHON_PACKAGES="\
numpy \
matplotlib \
scipy \
scikit-learn \
nltk \
"
RUN apk add --no-cache --virtual build-dependencies python3 \
&& apk add --virtual build-runtime \
build-base python3-dev openblas-dev freetype-dev pkgconfig gfortran \
&& ln -s /usr/include/locale.h /usr/include/xlocale.h \
&& python3 -m ensurepip \
&& rm -r /usr/lib/python*/ensurepip \
&& pip3 install --upgrade pip setuptools \
&& ln -sf /usr/bin/python3 /usr/bin/python \
&& ln -sf pip3 /usr/bin/pip \
&& rm -r /root/.cache \
&& pip install --no-cache-dir $PYTHON_PACKAGES \
&& pip3 install 'pandas<0.21.0' \
&& apk del build-runtime \
&& apk add --no-cache --virtual build-dependencies $PACKAGES \
&& rm -rf /var/cache/apk/*
# set working directory
WORKDIR /usr/src/app
# add and install requirements (see EDIT below)
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# add entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# add app
COPY . /usr/src/app
# run server
CMD ["/usr/src/app/entrypoint.sh"]
What am I missing?
EDIT
As in the accepted answer to "Docker how to run pip requirements.txt only if there was a change?", I'm already copying the requirements.txt file in a separate build step before adding the entire application into the image, but it does not seem to work.
I think the problem is likely that docker-compose up alone will not rebuild your images when you make changes. To get docker-compose to include your changes to requirements.txt, you need to pass the --build flag to docker-compose.
That is, instead run:
docker-compose -f docker-compose-dev.yml up --build -d
This will force docker-compose to rebuild the image. However, it will rebuild all images in the docker-compose file, which may or may not be desired.
If you only want to rebuild the image of a single service, you can first run docker-compose -f docker-compose-dev.yml build web, then afterwards run your original docker-compose command.
More info on the build command here.
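For clarity, the single-service sequence looks like this (same file and service names as above):

docker-compose -f docker-compose-dev.yml build web
docker-compose -f docker-compose-dev.yml up -d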
Try installing the requirements from the copied file.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Here is an example from their Dockerfile best-practices page:
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
This is what you have
RUN pip install -r requirements.txt
Then, after you have changed your Dockerfile, you have to stop your container, remove your image, build a new one, and run a container from it.
Stop the containers and remove the image:
docker-compose down --rmi all
--rmi all removes all images used by the services; to remove just one image instead, use docker rmi IMAGE_NAME.
And to start it again (if you use non-default parameters, adjust these commands with your arguments):
docker-compose up
Update
In case you have a running container and you do not want to stop it and rebuild the image (if you just want to install a package, run some commands, or even start a new application), you can connect to the container from your local machine and run commands inside it.
docker exec -it [CONTAINER_ID] bash
To get [CONTAINER_ID], run
docker ps
Note that docker-compose ps will give you container names, while docker exec accepts either the container ID or the container name.
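Alternatively, docker-compose exec accepts the service name directly (assuming the service is called web, as in the compose files above), so you can skip looking up the container ID:

docker-compose exec web bash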
