Use directory in docker-compose.yml's parent folder as volume - docker

I have the following directory structure:
.
├── README.md
├── alice
├── docker
│   ├── compose-prod.yml
│   ├── compose-stage.yml
│   ├── compose.yml
│   └── dockerfiles
├── gauntlet
├── nexus
│   ├── Procfile
│   ├── README.md
│   ├── VERSION.txt
│   ├── alembic
│   ├── alembic.ini
│   ├── app
│   ├── poetry.lock
│   ├── pyproject.toml
│   └── scripts
nexus.Dockerfile
FROM python:3.10
RUN addgroup --system app && adduser --system --group app
WORKDIR /usr/src/pdn/nexus
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
ARG INSTALL_DEV=true
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
COPY ../../nexus .
RUN chmod +x scripts/run.sh
ENV PYTHONPATH=/usr/src/pdn/nexus
RUN chown -R app:app $HOME
USER app
CMD ["./run.sh"]
The relevant service in compose.yml looks like this:
services:
  nexus:
    platform: linux/arm64
    build:
      context: ../
      dockerfile: ./docker/dockerfiles/nexus.Dockerfile
    container_name: nexus
    restart: on-failure
    ports:
      - "8000:8000"
    volumes:
      - ../nexus:/usr/src/pdn/nexus:ro
    environment:
      - DATABASE_HOSTNAME=${DATABASE_HOSTNAME?}
    env_file:
      - .env
When I run compose up, I get the following error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./scripts/run.sh": permission denied: unknown
The service starts OK without the volume definition. I think it might be because of the location of nexus relative to the Dockerfile or compose file, but the context is set to the parent.
I tried defining the volume as follows:
volumes:
- ./nexus:/usr/src/pdn/nexus:ro
But I get a similar error, in this case that run.sh is not found, and a directory named nexus gets created in the docker directory:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./run.sh": stat ./run.sh: no such file or directory: unknown
Not sure what I'm missing.

I have two comments; I'm not sure whether they solve your issue.
First: although you are allowed to reference parent directories in your compose.yml, that is not the case in your Dockerfile. You can't COPY from outside the build context you specified in your compose.yml file (.., which resolves to your app root). So you should change these lines:
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
COPY ../../nexus .
to
COPY ./nexus/pyproject.toml ./nexus/poetry.lock* ./
COPY ./nexus .
Second: the volume replaces whatever is in /usr/src/pdn/nexus with the content of ../nexus, which makes everything you COPYed into /usr/src/pdn/nexus useless. That may not be a problem if the contents are the same, but any permissions you set on files in the image may be gone; in particular, the chmod +x on run.sh no longer applies. So if the contents are the same, the remaining issue is your start script: put it in a separate directory outside /usr/src/pdn/nexus so the bind mount doesn't hide it, and don't forget to reference it at its new path in CMD.
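For example (a sketch, not tested against the original project; the /usr/local/bin location is my assumption), the end of the Dockerfile could become:

```dockerfile
# Install the start script outside the directory the bind mount will cover.
# Paths are relative to the compose build context (..), not the Dockerfile.
COPY ./nexus/scripts/run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
USER app
# Reference the script at its new, mount-proof location.
CMD ["/usr/local/bin/run.sh"]
```

With this layout the read-only bind mount over /usr/src/pdn/nexus no longer affects the script or its execute bit.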

Related

Creating an image with Docker and resources outside the main directory

I'm trying to build an image with Docker (10.20.13 on RH 7.9), but some of my resources are outside the Dockerfile's directory. Below is my tree:
/dir1
├── dir2
│   ├── dir3
│   │   ├── dir4
│   │   │   ├── boost
│   │   │   │   └── lib
│   │   │   │       ├── linuxV2_6_18A32
│   │   │   │       │   ├── libboost_atomic-mt.a
│   │   │   │       │   ├── ....
/home/myproject/myDockerfile
I want to add to my image the resources in /dir1/dir2/dir3/dir4/boost, which are not necessarily my resources (but I do have at least read access).
My first try was to build an image from /home/myproject/myDockerfile with the following command:
/home/myproject/myDockerfile$ docker build -t myimage:1.0 .
But it failed, saying:
ADD failed: file not found in build context or excluded by .dockerignore: stat dir1: file does not exist
Okay, dir1 is not in the context. So I tried making a link to dir1 in the Dockerfile directory and ran the same command again, but got a different issue:
ADD failed: forbidden path outside the build context: netdata ()
For my third try, I launched the command from the root directory (to get all of the context, as I understand it), with the following command:
docker build -t myimage:1.0 -f /home/myproject/myDockerfile
This time I get this response:
error checking context: 'no permission to read from '/boot/System.map-3.10.0-1160.31.1.el7.x86_64''
So I thought of adding that last directory to my .dockerignore, but it would have to live in the context (the root directory), which is impossible.
So is there a solution to my problem, apart from copying all the resources I need into the project directory?
You have to copy all of the resources you need into the project directory. You can't really build a Docker image containing files from completely unrelated parts of the filesystem (for example, you can't include a library from /usr/lib in a Dockerfile that lives in your home directory).
Since what you're trying to include is a static library, you have a couple of options to get it. The easiest is to just install it via your base image's package manager:
FROM debian:stable
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      libboost-atomic1.74-dev
A second, harder option is to use a multi-stage build or a similar technique to build the library from source.
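A sketch of the multi-stage approach (the source URL and install paths here are illustrative assumptions, not taken from the question):

```dockerfile
# Stage 1: build the library from source in a throwaway image.
FROM debian:stable AS boost-build
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      build-essential wget ca-certificates
# Hypothetical source fetch and build; pin whatever version you actually need.
RUN wget -O boost.tar.gz https://example.com/boost-source.tar.gz \
 && tar -xzf boost.tar.gz -C /opt

# Stage 2: copy only the built artifacts into the final image;
# the build tools and sources never reach this stage.
FROM debian:stable
COPY --from=boost-build /opt/boost/stage/lib/ /usr/local/lib/
```

The key point is that COPY --from reads from an earlier stage's filesystem, so nothing has to be present in the host build context.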
Since the specific file you're referencing is a static library, another option could be to build the binary on the host system and COPY it into the image unmodified. This requires you to be on a native-Linux host with a similar base Linux distribution.
FROM debian:stable
COPY ./myapp /usr/local/bin
CMD ["myapp"]
gcc -o myapp ... -lboost_atomic-mt
docker build -t myapp .
If all else fails then you can make a copy locally. You might write a script to do this; this is also a place where Make works well since it's largely dealing with concrete files.
#!/bin/sh
# Stage the build context in ./docker, including the out-of-tree library.
mkdir -p ./docker
cp -a Dockerfile .dockerignore src ./docker
cp /dir1/dir2/.../libboost_atomic-mt.a ./docker
docker build -t myapp ./docker
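The same staging step expressed as a Makefile rule (a sketch under the same assumptions; the elided library path still needs to be filled in):

```makefile
# Copy the out-of-tree static library into the build context,
# re-copying only when the source file changes.
docker/libboost_atomic-mt.a: /dir1/dir2/.../libboost_atomic-mt.a
	mkdir -p docker
	cp -a $< $@

.PHONY: image
image: docker/libboost_atomic-mt.a
	cp -a Dockerfile .dockerignore src docker/
	docker build -t myapp ./docker
```

Make fits here because each staged file is a concrete target with a concrete prerequisite, so unchanged files aren't recopied.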

COPY failed while using docker

I'm building an Express app, but when I use the command sudo docker build - < Dockerfile I get the error COPY failed: file not found in build context or excluded by .dockerignore: stat package.json: file does not exist.
This is what my project structure looks like:
.
├── build
│   ├── server.js
│   └── server.js.map
├── Dockerfile
├── esbuild.js
├── package.json
├── package-lock.json
├── Readme.md
└── src
    ├── index.ts
    ├── navigate.ts
    ├── pages.ts
    ├── responses
    │   ├── Errors.ts
    │   └── index.ts
    └── server.ts
And this is my Dockerfile content
FROM node:14.0.0
WORKDIR /usr/src/app
RUN ls -all
COPY [ "package.json", \
"./"]
COPY src/ ./src
RUN npm install
RUN node esbuild.js
RUN npx nodemon build/server.js
EXPOSE 3001
CMD ["npm", "run", "serve", ]
At the moment of running the command, I'm located in the root of the project.
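A likely cause (my reading, not part of the original post): docker build - < Dockerfile sends only the Dockerfile on stdin, so the build runs with no build context at all, and package.json is simply not there to COPY. Passing the project directory as the context avoids that:

```shell
# Piping the Dockerfile gives the daemon an empty context; COPY has nothing to read.
# Instead, pass the project root as the context (the Dockerfile is found there):
sudo docker build -t my-express-app .
```

With a real context, COPY package.json and COPY src/ resolve against the project root as intended.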

Can you have a non top-level Dockerfile when invoking COPY?

I have a Dockerfile to build releases for an Elixir/Phoenix application. The directory tree structure is as follows, where the Dockerfile (which has a dependency on this other Dockerfile) is in the "infra" subfolder and needs access to all the files one level above "infra".
.
├── README.md
├── assets
│   ├── css
│   ├── js
│   ├── node_modules
│   ├── package-lock.json
│   ├── package.json
├── lib
├── infra
│   ├── Dockerfile
│   ├── config.yaml
│   ├── deployment.yaml
The Dockerfile looks like:
# https://github.com/bitwalker/alpine-elixir
FROM bitwalker/alpine-elixir:latest
# Set exposed ports
EXPOSE 4000
ENV PORT=4000
ENV MIX_ENV=prod
ENV APP_HOME /app
ENV APP_VERSION=0.0.1
COPY ./ ${HOME}
WORKDIR ${HOME}
RUN mix deps.get
RUN mix compile
RUN MIX_ENV=${MIX_ENV} mix distillery.release
RUN echo $HOME
COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
RUN tar -xzvf my_app.tar.gz
USER default
CMD ./bin/my_app foreground
The command "mix distillery.release" is what builds the my_app.tar.gz file in the path indicated by the COPY command.
I invoke the docker build as follows in the top-level directory (the parent directory of "infra"):
docker build -t my_app:local -f infra/Dockerfile .
I basically then get an error with COPY:
Step 13/16 : COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
COPY failed: stat /var/lib/docker/tmp/docker-builder246562111/opt/app/_build/prod/rel/my_app/releases/0.0.1/my_app.tar.gz: no such file or directory
I understand that the COPY command depends on the build context, but I thought that issuing docker build in the parent directory of "infra" meant I had the appropriate context set for the COPY; clearly that isn't the case. Is there a way to have a Dockerfile one level below the parent directory that contains all the files needed to build an Elixir/Phoenix release (the my_app.tar.gz and associated files created via mix distillery.release)? What bits am I missing?
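For what it's worth (my reading, not an answer from the original thread): COPY sources are resolved against the build context on the host, while my_app.tar.gz is produced by a RUN step and therefore exists only inside the image's filesystem. Unpacking it with RUN instead of COPY is one way to sketch a fix:

```dockerfile
# ...after RUN MIX_ENV=${MIX_ENV} mix distillery.release
# The tarball lives inside the image (under WORKDIR ${HOME}), so use RUN
# (image filesystem), not COPY (host build context), to unpack it.
RUN tar -xzvf _build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz
```

The build context choice (-f infra/Dockerfile with . as context) was already correct; it only governs what COPY/ADD can see on the host.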

How to copy a folder from a dockerfile's parent into workdir

So I have a tree that looks like this:
.
├── README.md
├── dataloader
│   ├── Dockerfile
...
│   ├── log.py
│   ├── logo.py
│   ├── processors
...
│   └── tests
├── datastore
│   ├── datastore.py
and the Dockerfile inside the dataloader application looks like this:
FROM python:3.7
WORKDIR /var/dsys-2uid-dataloader
COPY assertions/ ./assertions/
COPY events/ ./events/
COPY processors/ ./processors/
COPY requirements.txt ./
<*>COPY datastore/ ./datastore/
COPY *.py ./
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python", "dataloader.py"]
The line with the asterisk doesn't work, since the datastore folder is in the parent directory of the Dockerfile. What can be done? I need this Dockerfile to be correct, because it's going to be used as the image in the Kubernetes deployment.
You can't access a file outside of your build context, but you can "trick" Docker into using a different build context.
Just run docker build -t foo -f dataloader/Dockerfile . from the root directory (where you have the README and the dirs):
$ tree
.
├── bar
│   └── wii
└── foo
    └── Dockerfile

2 directories, 2 files
$ cat foo/Dockerfile
FROM ubuntu
COPY bar/wii .
$ docker build -t test -f foo/Dockerfile .
Sending build context to Docker daemon 3.584kB
Step 1/2 : FROM ubuntu
---> cf0f3ca922e0
Step 2/2 : COPY bar/wii .
---> c3ff3f652b4d
Successfully built c3ff3f652b4d
Successfully tagged test:latest

MLflow 1.2.0 define MLproject file

I'm trying to run mlflow run by specifying an MLproject and code that lives in a different location than the MLproject file.
I have the following directory structure:
/root/mflow_test
.
├── conda
│   ├── conda.yaml
│   └── MLproject
├── docker
│   ├── Dockerfile
│   └── MLproject
├── README.md
├── requirements.txt
└── trainer
    ├── __init__.py
    ├── task.py
    └── utils.py
When I run, from /root/:
mlflow run mlflow_test/docker
I get:
/root/miniconda3/bin/python: Error while finding module specification for 'trainer.task' (ImportError: No module named 'trainer')
because my MLproject file can't find the Python code.
When I moved MLproject up to mflow_test, this worked fine.
This is my MLproject entry point:
name: mlflow_sample
docker_env:
  image: mlflow-docker-sample
entry_points:
  main:
    parameters:
      job_dir:
        type: string
        default: '/tmp/'
    command: |
      python -m trainer.task --job-dir {job_dir}
How can I run mlflow run, passing it the MLproject, and ask it to look for the code in a different folder?
I tried:
"cd .. && python -m trainer.task --job-dir {job_dir}"
and I get:
/entrypoint.sh: line 5: exec: cd: not found
Dockerfile
# docker build -t mlflow-gcp-example -f Dockerfile .
FROM gcr.io/deeplearning-platform-release/tf-cpu
RUN git clone https://github.com/GoogleCloudPlatform/ml-on-gcp.git
WORKDIR ml-on-gcp/tutorials/tensorflow/mlflow_gcp
RUN pip install -r requirements.txt
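One observation (my assumption, not from the original post): the /entrypoint.sh: exec: cd: not found error suggests the project command is being exec'd directly, and cd is a shell built-in, not an executable. Wrapping the command in an explicit shell is a common workaround:

```yaml
# MLproject entry point (sketch): run the command through a real shell so
# that built-ins like `cd` work; the path adjustment is illustrative.
command: |
  /bin/sh -c "cd .. && python -m trainer.task --job-dir {job_dir}"
```

Whether the relative cd lands in the right directory depends on the working directory MLflow sets inside the container, so this needs to be verified against the actual image.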
