Making `volumes` in docker-compose.yml work earlier than the Dockerfile - docker

my docker-compose.yml
version: "3.9"
services:
  admin_site:
    build:
      context: ./
      dockerfile: Dockerfile.local
    volumes:
      - .:/usr/src/app
    ports:
      - "8010:8010"
    restart: always
I want to mount the current folder to /usr/src/app.
Dockerfile.local
FROM python:3.9.5
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y netcat
WORKDIR /usr/src/app
RUN pip install pipenv
RUN pipenv install --dev --system
When I run docker compose -f docker-compose.yml build, this error occurs:
ERROR:: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
I guess it means there is no Pipfile. However, there is a Pipfile in my current directory.
So I guess volumes: doesn't take effect before the Dockerfile runs?
How can I solve this problem?

Mounts always apply to the container, never to the image build.
So I would use
# DO
# $ pipenv lock --dev --requirements > requirements.txt
# before build!
COPY requirements*.txt ./
RUN python -m pip --no-cache-dir install -r requirements.txt
This does not use your Pipfile directly; you have to generate the requirements file manually. There are some benefits to doing it this way:
- Packages in the Docker image can't be changed by accident
- You can pin dependency versions separately, which is very helpful when a dependency has a bug but you still need to update the main package
- Pipenv does not run lock on every image build (which takes ages each time)
Furthermore, copying the whole folder would force docker build to invalidate this layer on any file change.
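Applied to the question's Dockerfile.local, a minimal sketch could look like this (assuming you generate requirements.txt on the host first, exactly as the comment above says):
FROM python:3.9.5
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y netcat
WORKDIR /usr/src/app
# Copy only the requirements file(s) so this layer stays cached until they change
COPY requirements*.txt ./
RUN python -m pip --no-cache-dir install -r requirements.txt
# No COPY of the application code: it arrives through the volumes: mount at container start
The build then no longer depends on the Pipfile being visible at build time.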

Related

Running rust sqlx migrations locally with docker-compose

I'm working through Zero to Prod in Rust and I've gone off script a bit. I'm working on dockerizing the whole setup locally including the database. On ENTRYPOINT the container calls a startup script that attempts to call sqlx migrate run, leading to the error ./scripts/init_db.sh: line 10: sqlx: command not found.
I think I've worked out that because I'm using bullseye-slim as the runtime image, the installed Rust tooling isn't kept around in the final image, which helps with build time and image size.
Is there a way to run sqlx migrations without having rust, cargo, etc. installed? Or is there a better way altogether to accomplish this? I'd like to avoid simply reinstalling everything in the bullseye-slim image and losing the Docker optimisations there.
# Dockerfile
# .... chef segment omitted
FROM chef as builder
COPY --from=planner /app/recipe.json recipe.json
# Build our project dependencies, not our application!
RUN cargo chef cook --release --recipe-path recipe.json
# Up to this point, if our dependency tree stays the same,
# all layers should be cached.
COPY . .
ENV SQLX_OFFLINE true
# Build our project
RUN cargo build --release --bin my_app
FROM debian:bullseye-slim AS runtime
WORKDIR /app
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends openssl ca-certificates \
&& apt-get install -y --no-install-recommends postgresql-client \
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/my_app my_app
COPY configuration configuration
COPY scripts scripts
RUN chmod -R +x scripts
ENTRYPOINT ["./scripts/docker_startup.sh"]
My docker-compose.yml looks like this:
version: '3'
services:
  db:
    image: postgres:latest
    environment:
      - POSTGRES_DB=my_app
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
  app:
    image: my_app
    environment:
      - DATABASE_URL=postgres://postgres:password#postgres:5432/my_app
    depends_on:
      - db
    ports:
      - "8080:8080"
volumes:
  dbdata:
    driver: local
You can install sqlx-cli with cargo install in your build stage
cargo install sqlx-cli
then copy the resulting binary over to the deployment stage. Note that cargo install sqlx-cli installs a binary named sqlx, and where it lands depends on the builder's CARGO_HOME (in the official rust images this is /usr/local/cargo/bin):
COPY --from=builder /usr/local/cargo/bin/sqlx /usr/local/bin/sqlx
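With the binary on the runtime image's PATH, the startup script can run the migrations before launching the app. A rough sketch of what docker_startup.sh might contain (the exact commands are assumptions, and it presumes the migrations directory is also copied into the image):
#!/bin/sh
set -e
# Wait until Postgres accepts connections; pg_isready comes with postgresql-client,
# and "db" is the compose service name
until pg_isready -h db -p 5432; do sleep 1; done
# sqlx reads DATABASE_URL from the environment
sqlx migrate run
exec ./my_app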
Or you can run the migrations when your application starts with the migrate! macro
sqlx::migrate!("db/migrations")
    .run(&pool)
    .await?;

After I run docker compose up, my Mac returns an error stating that it can't find mix phx.server. How do I show Docker where my mix.exs file is?

When I run docker compose up, I receive an error:
** (Mix) The task "phx.server" could not be found
Note no mix.exs was found in the current directory
I believe this is the very last step I need to run the project. This is a Phoenix/Elixir Docker project. mix.exs is a top-level file in my project, at the same level as my Dockerfile and docker-compose file.
Dockerfile
FROM elixir:1.13.1
# Build Args
ARG PHOENIX_VERSION=1.6.6
ARG NODEJS_VERSION=16.x
# Apt
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y apt-utils
RUN apt-get install -y build-essential
RUN apt-get install -y inotify-tools
# Nodejs
RUN curl -sL https://deb.nodesource.com/setup_${NODEJS_VERSION} | bash
RUN apt-get install -y nodejs
# Phoenix
RUN mix local.hex --force
RUN mix archive.install --force hex phx_new ${PHOENIX_VERSION}
RUN mix local.rebar --force
# App Directory
ENV APP_HOME /app
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
COPY . .
# App Port
EXPOSE 4000
# Default Command
CMD ["mix", "phx.server"]
Docker-compose.yml
version: "3"
services:
  book-search:
    build: .
    volumes:
      - ./src:/app
    ports:
      - "4000:4000"
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_DB: "db"
      POSTGRES_HOST_AUTH_METHOD: "trust"
      POSTGRES_USER: tmclean
      POSTGRES_PASSWORD: tmclean
      PGDATA: /var/lib/postgresql/data/pgdata
    restart: always
    volumes:
      - ./pgdata:/var/lib/postgresql/data
Let me know what other questions I can answer
The problem is your docker-compose.yml file.
volumes:
  - ./src:/app
You are overwriting /app with a probably non-existent src directory. Change it to:
volumes:
  - .:/app
and it should work. However, if you do that, there is no point in copying the files in your Dockerfile, so you can also remove the
COPY . .
Alternatively, leave the COPY if you want the source files to be in the image, and remove the volumes section from the book-search service in docker-compose.yml.
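For reference, the book-search service with the corrected mount would look like:
book-search:
  build: .
  volumes:
    - .:/app
  ports:
    - "4000:4000"
  depends_on:
    - db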

Multiple Docker containers using identical Dockerfiles

In the project I've been working on, there's a few microservices, each one built from its own Dockerfile. The Dockerfiles for four of them are exactly identical:
#Dockerfile
FROM python:3.7
RUN pip install pip --upgrade
RUN pip install pipenv
COPY Pipfile.lock /code/Pipfile.lock
COPY Pipfile /code/Pipfile
WORKDIR /code
RUN pipenv install --system --deploy
The containers are built with docker-compose.
I have been given a suggestion to "do something" about these identical Dockerfiles; however, I'm not sure there's any point to it.
On the one hand, this is obviously repeated code, and I suppose I could just use one copy of the Dockerfile for all four services (i.e. build those four containers from the same recipe), but on the other hand, I imagine that if anything needs adjusting in one of the images in the future, the whole setup will have to be split apart again.
I haven't found any similar cases described on the internet. Is there any "good practice" for such a situation? What would be the advantages (are there any?) of using a single Dockerfile?
You can build a single shared image. Instead of using
#Dockerfile
FROM python:3.7
RUN pip install pip --upgrade
RUN pip install pipenv
COPY Pipfile.lock /code/Pipfile.lock
COPY Pipfile /code/Pipfile
WORKDIR /code
RUN pipenv install --system --deploy
use something like:
#Dockerfile (replaced)
FROM python:3.7
RUN pip install pip --upgrade
RUN pip install pipenv
RUN mkdir /code
WORKDIR /code
RUN pipenv install --system --deploy
docker-compose
service1:
  container_name: name1
  image: yourimage
  volumes:
    - ./service1/files/:/code
service2:
  container_name: name2
  image: yourimage
  volumes:
    - ./service2/files/:/code
service3:
  container_name: name3
  image: yourimage
  volumes:
    - ./service3/files/:/code
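If you would rather keep each service's Pipfile baked into its own image but still maintain only one Dockerfile, another option is to parameterise the shared Dockerfile and point every compose service at it. A sketch (SERVICE is a hypothetical build argument and the per-service directory names are assumptions):
#Dockerfile (shared, at the repository root)
FROM python:3.7
RUN pip install pip --upgrade
RUN pip install pipenv
ARG SERVICE
COPY ${SERVICE}/Pipfile ${SERVICE}/Pipfile.lock /code/
WORKDIR /code
RUN pipenv install --system --deploy

#docker-compose fragment
service1:
  build:
    context: .
    args:
      SERVICE: service1
service2:
  build:
    context: .
    args:
      SERVICE: service2
Each service then builds its own image from the same recipe; the trade-off is that you lose the single shared image of the volume-mount approach above.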

Dockerising pelican project

I'm trying to dockerise my pelican site project. I've created a docker-compose.yml file and a Dockerfile.
However, every time I try to build my project (docker-compose up) I get the following errors for both pip install and npm install:
npm WARN saveError ENOENT: no such file or directory, open '/src/package.json'
...
Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
The directory structure of the project is as follows:
- **Dockerfile**
- **docker-compose.yml**
- content/
- pelican-plugins/
- src/
- Themes/
- Pelican config files
- requirements.txt
- gulpfile.js
- package.json
All the pelican makefiles etc. are in the src directory.
I'm trying to load the content, src, and pelican-plugins directories as volumes so I can modify them on my local machine for the docker container to use.
Here is my Dockerfile:
FROM python:3
WORKDIR /src
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
# Install Node.js 8 and npm 5
RUN apt-get update
RUN apt-get -qq update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get install -y nodejs
# Set the locale
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN npm install
RUN python -m pip install --upgrade pip
RUN pip install -r requirements.txt
ENV SRV_DIR=/src
RUN chmod +x $SRV_DIR
RUN make clean
VOLUME /src/output
RUN make devserver
RUN gulp
And here is my docker-compose.yml file:
version: '3'
services:
web:
build: .
ports:
- "80:80"
volumes:
- ./content:/content
- ./src:/src
- ./pelican-plugins:/pelican-plugins
volumes:
logvolume01: {}
It definitely looks like I have set up my volume directories properly in my Docker files...
Thanks in advance!
Your Dockerfile doesn't COPY (or ADD) any files at all, so the /src directory is empty.
You can verify this yourself. When you run docker build it will print out output like:
Step 13/22 : ENV LC_ALL en_US.UTF-8
---> Running in 3ab80c3741f8
Removing intermediate container 3ab80c3741f8
---> d240226b6600
Step 14/22 : RUN npm install
---> Running in 1d31955d5b28
npm WARN saveError ENOENT: no such file or directory, open '/src/package.json'
The last line in each step with just a hex number is actually a valid image ID that's the final result of running each step, and you can then:
% docker run --rm -it d240226b6600 sh
# pwd
/src
# ls
To fix this you need a line in the Dockerfile like
COPY . .
You probably also need to change into the src subdirectory to run npm install and the like, given the directory layout you've shown. This can look like:
WORKDIR /src
COPY . .
# Either put "cd" into the command itself
# (Each RUN command starts a fresh container at the current WORKDIR)
RUN cd src && npm install
# Or change WORKDIRs
WORKDIR /src/src
RUN pip install -r requirements.txt
WORKDIR /src
Remember that everything in the Dockerfile happens before any setting in docker-compose.yml outside the build: block is even considered. Environment variables, volume mounts, and networking options for a container have no effect on the image build sequence.
In terms of Dockerfile style, your VOLUME declaration will have some tricky unexpected side effects and probably is unnecessary; I'd remove it. Your Dockerfile is also missing the CMD that the container should run. You should also combine RUN apt-get update && apt-get install into single commands; the way Docker layer caching works and the way the Debian repositories work, it's very easy to wind up with a cached package index that names files from a week ago that don't exist any more.
While the setup you're describing is fairly popular, it also essentially hides everything the Dockerfile does with your local source tree. The npm install you're describing here, for example, will be a no-op because the volume mount will hide /src/src/node_modules. I generally find it easier to just run python, npm, etc. locally while I'm developing, rather than write and debug this 50-line YAML file and run sudo docker-compose up.
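Pulling those recommendations together, a trimmed-down sketch of the Dockerfile might look like this (the pip/npm/pelican commands are taken from the question; treat the exact CMD as an assumption about how you want to serve the site):
FROM python:3
# One apt-get update && install per RUN, so the package index is never stale
RUN apt-get update \
 && apt-get install -y build-essential curl \
 && curl -sL https://deb.nodesource.com/setup_8.x | bash \
 && apt-get install -y nodejs \
 && rm -rf /var/lib/apt/lists/*
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
WORKDIR /src
COPY . .
# requirements.txt and package.json live in the src/ subdirectory
RUN cd src && pip install -r requirements.txt && npm install
WORKDIR /src/src
# Assumption: run pelican's dev server as the container's main process
CMD ["make", "devserver"]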

Download data in Dockerfile and use it in volume

I want to clone a git repo, install a fresh neos.io or something like that in the Dockerfile via RUN. Later on I want to mount the directory, including those files, into my local filesystem.
According to this GitHub issue it's not possible, since
mounting the volume will remove the data.
How can I achieve the wanted behaviour? Using CMD or ENTRYPOINT would, for example, clone the git repo on every start. That's not necessary.
Dockerfile
FROM debian:stable
RUN apt-get update \
&& apt-get install -y git
WORKDIR /home/app
RUN git clone https://github.com/libgit2/libgit2
CMD ["sleep", "infinity"]
docker-compose.yml
version: "2"
services:
app:
build: .
# Uncomment this will remove data on docker-compose up
# volumes:
# - ./app:/home/app
You can clone the repo to an arbitrary directory, say /home/app-clone for example, then in your ENTRYPOINT or CMD, you copy the files from this directory to your volume directory, so something like:
.
.
RUN git clone https://github.com/libgit2/libgit2 /home/app-clone
CMD cp -r /home/app-clone /home/app && sleep infinity
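If the copy on every start bothers you (the question's original concern), you can guard it so it only runs when the mounted directory hasn't been populated yet, for example:
CMD ["sh", "-c", "test -e /home/app/app-clone || cp -r /home/app-clone /home/app; sleep infinity"]
(/home/app/app-clone is where the cp above puts the files.)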
Dockerfile
FROM debian:stable
RUN apt-get update \
&& apt-get install -y git
WORKDIR /home/app
RUN git clone https://github.com/libgit2/libgit2.git
CMD ["cp", "-r", "libgit2", "/tmp"]
docker-compose.yml
version: "2"
services:
app:
build: .
volumes:
- ./app:/tmp
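Note that with this second variant the container's only job is the copy: after docker-compose up, the cloned libgit2 directory ends up in ./app on the host, and the container exits as soon as the cp finishes.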
