Docker image runs on Windows but only on one of two Linux systems

I have a Docker image that I built, and it runs on my Windows laptop as expected. When I copy and load it onto one of my two Linux systems, I get this error when I run docker logs:
Error: 'docker/semantic_search_django/gunicorn.conf' doesn't exist
When I inspect the running container on Windows, I can see that "missing" file! Furthermore, if I copy and load the same Docker image onto my second Linux system, it runs as expected.
This issue just started today; I'd been having success on all three systems for the past couple of months. Any suggestions would be greatly appreciated. Both Linux systems are running Ubuntu 18.04.5 LTS.
I've tried renaming the images, I've stopped and started the Docker daemon, and I've even restarted both Linux boxes.
Here are the commands I have used:
docker pull my.artifactory.com/ciee_ssrdjango
docker-compose up -d
My docker-compose.yml:
version: "3.8"
services:
  web:
    image: my.artifactory.com/ciee_ssrdjango
    env_file:
      - proxy.env
      - django.env
    container_name: ciee_ssrdjango
    volumes:
      - query-results-volume:/code
    expose:
      - "${SSRDJANGO_PORT}"
    extra_hosts:
      dbhost: ${POSTGRES_DOCKER_IP}
    depends_on:
      - db
    networks:
      - ssr_network
networks:
  ssr_network:
    external: true
volumes:
  postgresql-volume:
    external: true
  query-results-volume:
    external: true
My Dockerfile:
FROM ubuntu:18.04
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
COPY ./requirements.txt /requirements.txt
# prevents being asked to set TZ
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update -y && \
    apt -y upgrade && \
    apt install -y python3-pip && \
    apt install -y build-essential libssl-dev libffi-dev libpq-dev python3-dev && \
    apt install -y software-properties-common python3.8
RUN python3 -m pip install --upgrade pip setuptools wheel
ENV TZ=US/Eastern
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt update -y && apt install -y --no-install-recommends gcc libxml2-dev libxslt-dev postgresql postgresql-contrib postgresql-plpython-10 unixodbc-dev unixodbc libpq-dev
RUN mkdir /code # && mkdir /code/ciee
RUN pip install nltk
RUN export PATH=~/.local/bin:$PATH
RUN pip install -r /requirements.txt
COPY . /code/
WORKDIR /code
RUN useradd -m user && chmod 777 /home/user && mkdir /code/query_results && chmod 777 /code/query_results
USER user
CMD ["gunicorn", "semantic_search_django.wsgi:application", "--config", "docker/semantic_search_django/gunicorn.conf", "--keep-alive", "600"]
Here's the thing: I've been using these files and commands successfully for many weeks.

I can make one assumption: you are mounting query-results-volume into the /code directory in the container, and your conf file is located inside it. The volume persists between containers – that's the nature of volumes. So, somehow, the file in question (or even the whole folder) has been removed from the volume on the problem machine, and now the container cannot get to it.
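To test this assumption, you can list what the named volume actually holds on each machine and compare it with what the image ships. A minimal sketch (the throwaway ubuntu container is only there to get a shell on the volume; the paths come from the Dockerfile and CMD above):

# What does the volume contain on this host?
docker run --rm -v query-results-volume:/vol ubuntu ls -la /vol/docker/semantic_search_django/

# What does the image itself ship in /code?
docker run --rm --entrypoint ls my.artifactory.com/ciee_ssrdjango /code/docker/semantic_search_django/

If gunicorn.conf shows up in the image but not in the volume, repopulating (or recreating) the volume on the problem machine should fix it; Docker only copies image content into a named volume when the volume is empty at first mount.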

Related

Adding libvips to a Rails 7 Docker image

I'm new to Docker and trying to create a Dockerfile for this new Rails 7 app. I'm using vips instead of imagemagick for the memory benefits.
My local machine is a Mac, so brew install vips takes care of my non-Docker development flow, but it hasn't gone so well using the ruby-vips gem, or installing from source.
Running $ docker compose up results in:
/usr/local/bundle/gems/ffi-1.15.5/lib/ffi/library.rb:145:in `block in ffi_lib': Could not open library 'vips.so.42': vips.so.42: cannot open shared object file: No such file or directory. (LoadError)
With the following docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
and a Dockerfile:
FROM ruby:3.0.1
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN gem install ruby-vips
RUN bundle install
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Configure the main process to run when running the image
CMD ["rails", "server", "-b", "0.0.0.0"]
I've also tried installing from source (https://www.libvips.org/install.html) instead of using ruby-vips, with no luck.
TL;DR: ruby-vips needs libvips42 installed in your Docker image.
Update your Dockerfile to use the following:
RUN apt-get update -qq && apt-get install -y --no-install-recommends nodejs postgresql-client libvips42
PS: run docker compose down and docker compose up --build to force a rebuild of your docker images.
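To verify that the shared library is actually visible inside the rebuilt container, one quick check (a sketch; web is the service name from the compose file above, and grep runs on the host over the container's output):

docker compose run --rm web ldconfig -p | grep vips

You should see libvips.so.42 listed once libvips42 is installed.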
I don't think you're actually installing libvips in your dockerfile. Try this:
FROM ruby:3.0.1
RUN apt-get update -qq \
    && apt-get install -y nodejs postgresql-client
RUN apt install -y --no-install-recommends libvips42
WORKDIR /myapp
...
However, this will install the libvips that comes with buster, and it's 8.7.x from five years ago (!!). Debian does not move quickly.
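If you want to confirm which version the base image's Debian release would actually give you before committing to a source build, something like this sketch (assuming the buster package repositories are still reachable):

# check the candidate libvips version in the ruby:3.0.1 (buster) base image
docker run --rm ruby:3.0.1 bash -c "apt-get update -qq && apt-cache policy libvips42"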
I would build current libvips from source. Something like this:
# based on buster
FROM ruby:3.0.1
RUN apt-get update && apt-get install -y \
    build-essential \
    unzip \
    wget \
    git \
    pkg-config
# stuff we need to build our own libvips ... this is a pretty random selection
# of dependencies, you'll want to adjust these
RUN apt-get install -y \
    libglib2.0-dev \
    libexpat1-dev \
    librsvg2-dev \
    libpng-dev \
    libgif-dev \
    libjpeg-dev \
    libexif-dev \
    liblcms2-dev \
    liborc-0.4-dev
ARG VIPS_VERSION=8.12.2
ARG VIPS_URL=https://github.com/libvips/libvips/releases/download
RUN cd /usr/local/src \
    && wget ${VIPS_URL}/v${VIPS_VERSION}/vips-${VIPS_VERSION}.tar.gz \
    && tar xzf vips-${VIPS_VERSION}.tar.gz \
    && cd vips-${VIPS_VERSION} \
    && ./configure --disable-deprecated \
    && make -j 4 V=0 \
    && make install
RUN gem install ruby-vips
That won't include support for GIF save, or for formats like HEIC or PDF, so you'll probably want to adjust it a bit. And of course you should not build packages in your deployment Docker image; you'd want to do that in a separate Dockerfile.
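One way to keep the build out of the deployment image is a multi-stage Dockerfile. A rough sketch (the stage name vips-builder is made up for illustration, and the build steps are the ones shown above):

# build stage: compile libvips as in the Dockerfile above
FROM ruby:3.0.1 AS vips-builder
# ... the wget / configure / make install steps from above go here ...

# runtime stage: copy only the installed artifacts, no compilers
FROM ruby:3.0.1
COPY --from=vips-builder /usr/local /usr/local
# refresh the linker cache so the copied shared libraries are found
RUN ldconfig
RUN gem install ruby-vips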
Hopefully ruby-vips deployment will become more automated over the next few months now that it's the Rails 7 default. It's rather manual right now.

Overriding Wagtail modeladmin templates in Docker on Windows and Ubuntu

I'm following this article to override a modeladmin template:
templates/modeladmin/app-name/model-name/
Overriding templates
The template was successfully overridden on Windows 11 with Docker, but it doesn't work on Ubuntu.
I have no idea why I have to use volumes to override the modeladmin template, or why it only works on Windows. It also works on Windows without Docker.
Dockerfile:
FROM python:3.8.1-slim-buster
RUN useradd wagtail
EXPOSE 8088
ENV PYTHONUNBUFFERED=1 \
    PORT=8088
RUN apt-get update --yes --quiet && apt-get install --yes --quiet --no-install-recommends \
    build-essential \
    libpq-dev \
    libmariadbclient-dev \
    libjpeg62-turbo-dev \
    zlib1g-dev \
    libwebp-dev \
    && rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip
RUN pip install "gunicorn==20.0.4"
COPY requirements.txt /
RUN pip install -r /requirements.txt
WORKDIR /app
RUN chown wagtail:wagtail /app
COPY --chown=wagtail:wagtail . /app
USER wagtail
RUN python manage.py collectstatic --noinput --clear --no-post-process
docker-compose.yml:
version: '3.8'
services:
  web:
    build: .
    command: gunicorn web.wsgi:application --bind 0.0.0.0:8088
    volumes:
      - ./project/templates:/app/project/templates
    ports:
      - '8088:8088'
Can you give me some pointers on how to override it on Ubuntu? Thanks.

docker-compose - cannot bind mount a folder created during build

I have a docker-compose file that uses a Dockerfile to build the image. In this image (Dockerfile) I create the folder /workspace, which I'd like to bind mount for persistence in my local filesystem.
After docker-compose up, the folder is empty if I bind mount it, but if I do not mount this folder, everything works fine (and the folder exists with all the files I added).
This is my docker-compose.yml:
version: "3.9"
services:
web:
build: .
command: uwsgi --ini /workspace/confs/uwsgi.ini --logger file:/workspace/logs/uswgi.log --processes 1 --workers 1 --plugins-dir=/usr/lib/uwsgi/plugins/ --plugin=python
environment:
- DB_HOST=db
- DB_NAME=***
- DB_USER=***
- DB_PASS=***
depends_on:
- db
- redis
- memcached
volumes:
- ./workspace:/workspace
networks:
- asyncmail
- traefik
# db, redis and memcached are ommited here
# aditional labels for traefik is also ommited
This is my Dockerfile:
FROM ubuntu:trusty
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
SHELL ["/bin/bash", "-c"]
RUN mkdir /workspace
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y redis-server python3-pip git-core postgresql-client
RUN apt-get install -y libpq-dev python3-dev libffi-dev libtiff5-dev zlib1g-dev libjpeg8-dev libyaml-dev libpython3-dev openssh-client uwsgi-plugin-python3 libpcre3 libpcre3-dev uwsgi-plugin-python
ADD myapp /workspace/
WORKDIR /workspace/src/
RUN /bin/bash -c "pip3 install cffi \
&& pip3 install -r /workspace/src/requirements.txt \
&& ./manage.py collectstatic --noinput"
RUN ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
# CMD ["uwsgi", "--ini", "/workspace/confs/uwsgi.ini", "--logger", "file:/workspace/logs/uswgi.log"]
I know there are some items that could be optimized, but when I do a docker-compose up -d, the folder ./workspace is created with only one folder inside, called src. Inside the container, /workspace contains only this empty folder too.
If I remove the volumes line in the docker-compose file, then inside the container the folder /workspace has all the source code of my app.
What am I doing wrong that I can't bind mount the workspace folder?
PS: I know the image I'm using (ubuntu trusty) is old, but my old app only runs on this version.
Am I correct in assuming that the files you want to appear inside /workspace are actually in a folder called "myapp" on your host machine? It seems so from this line:
ADD myapp /workspace/
I think you meant to map that into your Docker container, so under volumes:
volumes:
  - ./myapp:/workspace
Volume maps work one way: the folder inside the container is replaced (shadowed) by the contents of the mapped folder on the host, not the other way around. That is why /workspace appears nearly empty: the bind mount hides whatever ADD put there at build time.
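You can see this shadowing directly. A hedged sketch (myimage stands in for your built image):

# without a mount, the build-time content of /workspace is visible
docker run --rm myimage ls /workspace

# an empty host folder bind mounted on top hides all of it
mkdir -p empty
docker run --rm -v "$PWD/empty:/workspace" myimage ls /workspace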
I ended up mounting the source code directory into the container to fix this problem. NiRR's answer helped a lot.
The final Dockerfile was changed to not include the source code in the image:
FROM ubuntu:trusty
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ARG DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-c"]
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python3-pip git-core postgresql-client
RUN apt-get install -y libpq-dev python3-dev libffi-dev libtiff5-dev zlib1g-dev libjpeg8-dev libyaml-dev libpython3-dev openssh-client uwsgi-plugin-python3 libpcre3 libpcre3-dev
WORKDIR /workspace/src
COPY myapp/src/requirements.txt .
RUN /bin/bash -c "pip3 install cffi \
&& pip3 install -r requirements.txt"
# To set timezone
RUN ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
And I changed the docker-compose to the following final version:
version: "3.9"
services:
web:
build: .
command: ./start.sh
environment:
- DB_HOST=db
- DB_NAME=***
- DB_USER=***
- DB_PASS=***
volumes:
- ./myapp:/workspace
Now when the container starts, all the source code from myapp is mounted inside the container, and everything is under Git control.
If the code changes, we can push/pull and run docker-compose up -d to restart the container. The new version will already be there.

Errors when installing Anaconda using Docker on M1 MacBooks

I tried to build an existing Dockerfile on my MacBook Pro with an M1 chip, but I got the following error when installing Anaconda3.
Why does it output this error?
Also, how can I fix it?
In docker-compose, the platform is set to linux/arm64 and cpus is 2, but the same error is still output.
PREFIX=/opt/anaconda3
Unpacking payload ...
/lib64/ld-linux-x86-64.so.2: No such file or directory
/lib64/ld-linux-x86-64.so.2: No such file or directory
My Dockerfile is below.
FROM ubuntu:latest
# update
RUN apt-get -y update && apt-get install -y \
    sudo \
    wget \
    gcc \
    g++ \
    vim
# install anaconda3
WORKDIR /opt
# download anaconda package and install anaconda
# archive => https://repo.continuum.io/archive/
RUN wget https://repo.continuum.io/archive/Anaconda3-2020.07-Linux-x86_64.sh && \
    sh /opt/Anaconda3-2020.07-Linux-x86_64.sh -b -p /opt/anaconda3 && \
    rm -f Anaconda3-2020.07-Linux-x86_64.sh
ENV PATH /opt/anaconda3/bin:$PATH
# update pip and conda
RUN pip install -U pip
WORKDIR /code
ADD requirements.txt /code
RUN pip install -r requirements.txt
WORKDIR /
# execute jupyter lab as a default command
CMD ["jupyter","lab","--ip=0.0.0.0","--allow-root","--LabApp.token=''"]
My docker-compose.yml file is below
version: "3"
services:
jupyter:
image: investment-project:1.0.0
container_name: investment-jupyter
build: .
platform: linux/arm64/v8
cpus: "2"
volumes:
- $PWD:/tmp/working
working_dir: /tmp/working
ports:
- 8888:8888
command: jupyter notebook --ip=0.0.0.0 --allow-root --LabApp.token=''

How to map a host OS file into the container at build time

docker-compose.yml:
version: '3'
services:
  ezmove:
    volumes:
      - /host-dir:/home/container-dir
    build:
      context: .
      args:
        BRANCH: develop
Dockerfile:
FROM appcontainers/ubuntu:xenial
MAINTAINER user <user>
RUN apt-get update -y --no-install-recommends \
    && apt-get install -y --no-install-recommends python3.5-minimal python3.5-venv \
    && apt-get install -y --no-install-recommends git \
    && apt-get install -y --no-install-recommends python-pip \
    && pip install --upgrade pip \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /home/container-dir
WORKDIR /home/container-dir
RUN /bin/bash -c "sh ./script.sh"
How do I map a local directory into the container at build time?
When I run $ docker-compose up, it starts to build the container, but after the installation of the package dependencies it tries to execute the script.sh file and fails with the error "FILE NOT FOUND!".
Constraints:
I don't want to do a git clone inside the Docker container
I don't want to store the source code inside the container
So, how do I map the host OS file into the container at build time?
You lack a COPY or ADD instruction in your Dockerfile to copy your script.sh into the image.
Check the docs:
https://docs.docker.com/engine/reference/builder/#add
https://docs.docker.com/engine/reference/builder/#copy
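As a minimal sketch against the Dockerfile above, one added COPY line is enough (assuming script.sh sits next to the Dockerfile in the build context):

WORKDIR /home/container-dir
# copy the script from the build context into the image
COPY script.sh /home/container-dir/
RUN /bin/bash -c "sh ./script.sh"

Keep in mind that bind mounts declared under volumes: in docker-compose are applied only at run time, never during docker build, which is why the script is missing while the image builds.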
By the way, Docker is about isolation, so a running container should be isolated from the host, and certainly not access the host OS.
