docker-compose - cannot use bind mount in folder created when using BUILD - docker

I have a docker-compose file which uses a Dockerfile to build the image. In this image (Dockerfile) I created the folder /workspace, which I'd like to bind mount to my local filesystem for persistence.
After docker-compose up, the folder is empty if I bind mount it, but if I do not mount this folder everything works fine (and the folder exists with all the files I added).
This is my docker-compose.yml:
version: "3.9"
services:
web:
build: .
command: uwsgi --ini /workspace/confs/uwsgi.ini --logger file:/workspace/logs/uswgi.log --processes 1 --workers 1 --plugins-dir=/usr/lib/uwsgi/plugins/ --plugin=python
environment:
- DB_HOST=db
- DB_NAME=***
- DB_USER=***
- DB_PASS=***
depends_on:
- db
- redis
- memcached
volumes:
- ./workspace:/workspace
networks:
- asyncmail
- traefik
# db, redis and memcached are ommited here
# aditional labels for traefik is also ommited
This is my Dockerfile:
FROM ubuntu:trusty
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
SHELL ["/bin/bash", "-c"]
RUN mkdir /workspace
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y redis-server python3-pip git-core postgresql-client
RUN apt-get install -y libpq-dev python3-dev libffi-dev libtiff5-dev zlib1g-dev libjpeg8-dev libyaml-dev libpython3-dev openssh-client uwsgi-plugin-python3 libpcre3 libpcre3-dev uwsgi-plugin-python
ADD myapp /workspace/
WORKDIR /workspace/src/
RUN /bin/bash -c "pip3 install cffi \
&& pip3 install -r /workspace/src/requirements.txt \
&& ./manage.py collectstatic --noinput"
RUN ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
# CMD ["uwsgi", "--ini", "/workspace/confs/uwsgi.ini", "--logger", "file:/workspace/logs/uswgi.log"]
I know there are some things that could be optimized, but when I do a docker-compose up -d the folder ./workspace is created with only one folder inside called src. Inside the container, /workspace only has this empty folder too.
If I remove the volumes line in docker-compose, then inside the container the folder /workspace has all the source code of my app.
What am I doing wrong that I can't bind mount the workspace folder?
PS: I know the image I'm using (ubuntu trusty) is old, but my old app only runs with this version.

Am I correct in assuming that the files you want to appear inside /workspace are actually in a folder called "myapp" on your host machine?
(It seems so from this line:)
ADD myapp /workspace/
I think you meant to map that into your docker container, so under volumes:
volumes:
  - ./myapp:/workspace
Volume maps work one way: the folder inside the container is replaced by the contents of the mapped folder on the host, not the other way around...
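To illustrate that behaviour with a hedged example (the image and folder names here are made up):
# build an image that bakes files into /workspace, then list them
docker build -t demo-image .
docker run --rm demo-image ls /workspace            # shows the baked-in files
# bind mounting an empty host folder over /workspace hides those files
mkdir -p empty-dir
docker run --rm -v "$(pwd)/empty-dir:/workspace" demo-image ls /workspace   # prints nothing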

I ended up bind mounting the source code directory into the container to fix this problem. @NiRR's answer helped a lot.
The final Dockerfile was changed so the source code is no longer included in the image:
FROM ubuntu:trusty
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ARG DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-c"]
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python3-pip git-core postgresql-client
RUN apt-get install -y libpq-dev python3-dev libffi-dev libtiff5-dev zlib1g-dev libjpeg8-dev libyaml-dev libpython3-dev openssh-client uwsgi-plugin-python3 libpcre3 libpcre3-dev
WORKDIR /workspace/src
COPY myapp/src/requirements.txt .
RUN /bin/bash -c "pip3 install cffi \
&& pip3 install -r requirements.txt"
# To set timezone
RUN ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
And I changed the docker-compose to the following final version:
version: "3.9"
services:
web:
build: .
command: ./start.sh
environment:
- DB_HOST=db
- DB_NAME=***
- DB_USER=***
- DB_PASS=***
volumes:
- ./myapp:/workspace
Now when the container starts, all the source code from myapp is available inside the container;
Everything is under Git control;
If the code changes, we can do a push/pull and a docker-compose up -d to restart the container, and the new version will already be there.
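The start.sh referenced by command is not shown above; a minimal sketch of what it could contain, assuming it lives under myapp/src/ on the host (so it ends up in the container's WORKDIR /workspace/src) and reuses the uwsgi options from the original compose command:
#!/bin/bash
# hypothetical start.sh: run any per-start steps here, then exec uwsgi with the
# same options the original docker-compose command used
set -e
exec uwsgi --ini /workspace/confs/uwsgi.ini \
    --logger file:/workspace/logs/uswgi.log \
    --processes 1 --workers 1 \
    --plugins-dir=/usr/lib/uwsgi/plugins/ --plugin=python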

Related

Running rust sqlx migrations locally with docker-compose

I'm working through Zero to Prod in Rust and I've gone off script a bit. I'm working on dockerizing the whole setup locally, including the database. On ENTRYPOINT the container calls a startup script that attempts to call sqlx migrate run, which leads to the error ./scripts/init_db.sh: line 10: sqlx: command not found.
I think I've worked out that, because I'm using bullseye-slim as the runtime, it doesn't keep the installed Rust packages around for the final image, which helps with the build time and image size.
Is there a way to run sqlx migrations without having rust, cargo, etc. installed? Or is there a better way altogether to accomplish this? I'd like to avoid just reinstalling everything in the bullseye-slim image and losing some of the Docker optimization there.
# Dockerfile
# .... chef segment omitted
FROM chef as builder
COPY --from=planner /app/recipe.json recipe.json
# Build our project dependencies, not our application!
RUN cargo chef cook --release --recipe-path recipe.json
# Up to this point, if our dependency tree stays the same,
# all layers should be cached.
COPY . .
ENV SQLX_OFFLINE true
# Build our project
RUN cargo build --release --bin my_app
FROM debian:bullseye-slim AS runtime
WORKDIR /app
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends openssl ca-certificates \
&& apt-get install -y --no-install-recommends postgresql-client \
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/my_app my_app
COPY configuration configuration
COPY scripts scripts
RUN chmod -R +x scripts
ENTRYPOINT ["./scripts/docker_startup.sh"]
docker-compose.yml looks like this:
version: '3'
services:
  db:
    image: postgres:latest
    environment:
      - POSTGRES_DB=my_app
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
  app:
    image: my_app
    environment:
      - DATABASE_URL=postgres://postgres:password@postgres:5432/my_app
    depends_on:
      - db
    ports:
      - "8080:8080"
volumes:
  dbdata:
    driver: local
You can install sqlx-cli with cargo install in your build stage
cargo install sqlx-cli
then copy it over to the deployment stage with
COPY --from=builder $HOME/.cargo/bin/sqlx sqlx
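As a rough sketch of how the two stages could fit together (assuming the builder stage is based on the official rust image, where cargo install places binaries under /usr/local/cargo/bin; the binary that cargo install sqlx-cli produces is named sqlx):
# builder stage (sketch)
FROM chef AS builder
RUN cargo install sqlx-cli --no-default-features --features native-tls,postgres
# ... existing cargo chef cook / cargo build --release steps ...

# runtime stage (sketch)
FROM debian:bullseye-slim AS runtime
COPY --from=builder /usr/local/cargo/bin/sqlx /usr/local/bin/sqlx
# ./scripts/init_db.sh can now call `sqlx migrate run` inside the container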
Or you can run the migrations when your application starts with the migrate! macro
sqlx::migrate!("db/migrations")
.run(&pool)
.await?;
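For the second option, a minimal sketch of how that could be wired into the binary's startup (the db/migrations path and the pool setup are assumptions; sqlx::migrate! embeds the migration files at compile time, so the runtime image needs neither cargo nor sqlx-cli):
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // connect using the DATABASE_URL the compose file already provides
    let pool = PgPoolOptions::new()
        .connect(&std::env::var("DATABASE_URL")?)
        .await?;
    // run the migrations that were embedded from db/migrations at compile time
    sqlx::migrate!("db/migrations").run(&pool).await?;
    // ... start the application with the migrated pool ...
    Ok(())
}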

How to deploy dockerized laravel app with elastic beanstalk?

I'm new to Docker. I'm trying to deploy a dockerized Laravel app using Elastic Beanstalk. Current Docker files:
docker-compose.yml:
version: '3'
services:
  #PHP Service
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    image: admin
    container_name: admin-app
    restart: unless-stopped
    working_dir: /usr/share/nginx/app/
    volumes:
      - ./:/usr/share/nginx/app/
    networks:
      - app-network
  nginx:
    image: nginx:stable-alpine
    container_name: admin-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/usr/share/nginx/app/
      - ./nginx/conf.d/:/etc/nginx/conf.d
    networks:
      - app-network
#Docker Networks
networks:
  app-network:
    driver: bridge
and Dockerfile
FROM php:7.4-fpm
ARG uid=1000
ARG user=sammy
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip \
libcurl4-openssl-dev pkg-config libssl-dev
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
RUN pecl install mongodb && docker-php-ext-enable mongodb && \
pecl install xdebug && docker-php-ext-enable xdebug
RUN pecl config-set php_ini /etc/php.ini
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Add user for laravel application
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Copy existing application directory contents
COPY . /usr/share/nginx/app
WORKDIR /usr/share/nginx/app
RUN chown -R $user:$user .
USER $user
RUN chown -R $user:$user storage bootstrap/cache
RUN chmod -R 775 storage bootstrap/cache
RUN composer install
RUN php artisan cache:clear
RUN php artisan view:clear
RUN php artisan config:clear
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
It works fine on my local machine when I run docker compose up -d, but only if I have already run composer install; otherwise it throws the following error.
It is OK for development purposes that I have to run composer install once, but for production I think it is not the right way to manually run composer install every time a new version is deployed. Doesn't the RUN composer install command in the Dockerfile install the required dependencies? I can see the progress bar of dependencies being installed, but no vendor folder is generated if I ssh into the container. Again, it works fine if I ssh into the instance and manually install the dependencies.
I have also successfully deployed a Node.js app using Elastic Beanstalk. There the dependencies were installed properly using the RUN npm install command in the Dockerfile. I don't see any difference in the process. Do I have to include the vendor folder in the zip file as well? Please suggest the correct way to deploy.

Docker runs on Windows and only on one of two Linux systems

I have a docker image that I have built that runs on my Windows laptop as expected. When I copy and load it onto one of my two Linux systems I get this error when I run docker logs:
Error: 'docker/semantic_search_django/gunicorn.conf' doesn't exist
When I inspect the running container on Windows I can see that "missing" file! Furthermore, if I copy and load the same docker image to my second Linux system, it runs as expected.
This issue just happened today. I've been having success on all 3 systems for the past couple of months until today. Any suggestions would be greatly appreciated. Both Linux systems are running Ubuntu 18.04.5 LTS.
I've tried renaming the images, I've stopped and started the docker daemon, and I've even restarted both Linux boxes.
Here are the commands I have used:
docker pull my.artifactory.com/ciee_ssrdjango
docker-compose up -d
My docker-compose.yml
version: "3.8"
services:
web:
image: m.artifactory.com/ciee_ssrdjango
env_file:
- proxy.env
- django.env
container_name: ciee_ssrdjango
volumes:
- query-results-volume:/code
expose:
- "${SSRDJANGO_PORT}"
extra_hosts:
dbhost: ${POSTGRES_DOCKER_IP}
depends_on:
- db
networks:
- ssr_network
networks:
ssr_network:
external: true
volumes:
postgresql-volume:
external: true
query-results-volume:
external: true
My Dockerfile:
FROM ubuntu:18.04
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
COPY ./requirements.txt /requirements.txt
#prevents being asked to set TZ
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update -y && \
apt -y upgrade && \
apt install -y python3-pip && \
apt install -y build-essential libssl-dev libffi-dev libpq-dev python3-dev && \
apt install -y software-properties-common python3.8
RUN python3 -m pip install --upgrade pip setuptools wheel
ENV TZ=US/Eastern
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt update -y && apt install gcc libxml2-dev libxslt-dev postgresql postgresql-contrib postgresql-plpython-10 --no-install-recommends unixodbc-dev unixodbc libpq-dev -y
RUN mkdir /code # && mkdir /code/ciee
RUN pip install nltk
RUN export PATH=~/.local/bin:$PATH
RUN pip install -r /requirements.txt
COPY . /code/
WORKDIR /code
RUN useradd -m user && chmod 777 /home/user && mkdir /code/query_results && chmod 777 /code/query_results
USER user
CMD ["gunicorn", "semantic_search_django.wsgi:application", "--config", "docker/semantic_search_django/gunicorn.conf", "--keep-alive", "600"]
Here's the thing, I've been using these files and commands successfully for many weeks.
I can make one assumption. You are mounting query-results-volume into the /code directory in the container, and your conf file is located inside it. The volume persists between containers – that's the nature of volumes. So, somehow, the file in question (or even the whole folder) has been removed from the volume on the problem machine, and now the container cannot find it.
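One hedged way to check that theory (the volume is declared external in the compose file, so its name should be exactly query-results-volume):
# list what is actually inside the named volume on the problem machine
docker run --rm -v query-results-volume:/code ubuntu:18.04 \
    ls -la /code/docker/semantic_search_django
# if gunicorn.conf is missing there, recreating the empty volume lets Docker
# repopulate it from the image contents on the next container start
docker-compose down
docker volume rm query-results-volume
docker volume create query-results-volume
docker-compose up -d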

How to define a docker cli service in docker-compose

I have a docker-compose file that runs a few services.
services:
  cli:
    build:
      context: .
      dockerfile: docker/cli/Dockerfile
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
  drupal:
    container_name: drupal
    build:
      context: .
      dockerfile: docker/DockerFile.drupal
      args:
        DOC_ROOT: /var/www/html/drupal8site
    ports:
      - 80:80
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
    restart: always
    environment:
      APACHE_DOCUMENT_ROOT: /var/www/html/drupal8site/web
  mysql:
    image: mysql:5.7
    ports:
      - 3306:3306
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
I would like to add another service which will be a container in which I could run CLI commands (composer, drush for drupal, php, etc).
The following Dockerfile is how I initially defined the cli service, but it stops right after it is run. How do I define it so that it is part of my docker-compose, shares my mounted volume, and I can interactively connect to it and run CLI commands on it?
FROM php:7.2-cli
#various programs
RUN apt-get update \
&& apt-get install vim --assume-yes \
&& apt-get install git --assume-yes \
&& apt-get install mysql-client --assume-yes
CMD ["bash"]
Thanks,
Yaron
If you want to run automated scripts on Docker images, this is obviously a job for a CI pipeline. You can use CloudFoundry or OpenStack to do this.
But there are many other questions in this post:
1.) How can I share my mounted volume?
You can pass a volume to a container with the -v option, e.g.:
docker run -it -d -v $(pwd)/localFolder:/exposedFolderFromDocker mydockerhub/myawesomeimage
2.) Can I interactively connect to it and run CLI commands on it?
docker exec -it docker_cli_1 bash
I recommend implementing the features of a docker image in that image's own Dockerfile, for example by copying and running a prepared shell script:
# your Dockerfile
FROM php:7.2-cli
#various programs
RUN apt-get update \
&& apt-get install vim --assume-yes \
&& apt-get install git --assume-yes \
&& apt-get install mysql-client --assume-yes
# individual changes
COPY your_script.sh /
RUN chown root:root /your_script.sh && \
chmod 0755 /your_script.sh
CMD ["/your_script.sh"]
# a folder to expose
VOLUME /exposedFolderFromDocker
CMD ["bash"]

How to map a host OS file into the container at build time

docker-compose.yml:
version: '3'
services:
  ezmove:
    volumes:
      - /host-dir:/home/container-dir
    build:
      context: .
      args:
        BRANCH: develop
Dockerfile:
FROM appcontainers/ubuntu:xenial
MAINTAINER user <user>
RUN apt-get update -y --no-install-recommends \
&& apt-get install -y --no-install-recommends python3.5-minimal python3.5-venv \
&& apt-get install -y --no-install-recommends git \
&& apt-get install -y --no-install-recommends python-pip \
&& pip install --upgrade pip \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /home/container-dir
WORKDIR /home/container-dir
RUN /bin/bash -c "sh ./script.sh"
How do I map a local directory into the container at the time of building it?
When I run docker-compose up, it starts building the container, but after the package dependencies are installed it tries to execute the script.sh file and fails with the error "FILE NOT FOUND!"
Constraints:
I do not want to do a git clone inside the docker container
I do not want to store the source code inside the container
So, how do I map the host OS files into the container at build time?
You are missing a COPY or ADD in your Dockerfile to copy your script.sh into the image.
Check the docs:
https://docs.docker.com/engine/reference/builder/#add
https://docs.docker.com/engine/reference/builder/#copy
By the way, Docker is about isolation, so a running container should be isolated from the host and certainly should not access the host OS.
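A minimal sketch of the missing step, assuming script.sh sits next to the Dockerfile in the build context:
FROM appcontainers/ubuntu:xenial
WORKDIR /home/container-dir
# copy the script from the build context into the image so the RUN step can find it
COPY script.sh .
RUN sh ./script.sh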
