MLflow 1.2.0 define MLproject file - docker

I'm trying to run mlflow run with an MLproject file whose code lives in a different directory than the MLproject file itself.
I have the following directory structure:
/root/mlflow_test
.
├── conda
│   ├── conda.yaml
│   └── MLproject
├── docker
│   ├── Dockerfile
│   └── MLproject
├── README.md
├── requirements.txt
└── trainer
    ├── __init__.py
    ├── task.py
    └── utils.py
When I run the following from /root/:
mlflow run mlflow_test/docker
I get:
/root/miniconda3/bin/python: Error while finding module specification for 'trainer.task' (ImportError: No module named 'trainer')
This happens because the MLproject file can't find the Python code.
When I move the MLproject file to mlflow_test, it works fine.
This is my MLproject entry point:
name: mlflow_sample
docker_env:
  image: mlflow-docker-sample
entry_points:
  main:
    parameters:
      job_dir:
        type: string
        default: '/tmp/'
    command: |
      python -m trainer.task --job-dir {job_dir}
How can I run mlflow run, point it at this MLproject file, and have it look for the code in a different folder?
I tried:
"cd .. && python -m trainer.task --job-dir {job_dir}"
and I get:
/entrypoint.sh: line 5: exec: cd: not found
Dockerfile
# docker build -t mlflow-gcp-example -f Dockerfile .
FROM gcr.io/deeplearning-platform-release/tf-cpu
RUN git clone https://github.com/GoogleCloudPlatform/ml-on-gcp.git
WORKDIR ml-on-gcp/tutorials/tensorflow/mlflow_gcp
RUN pip install -r requirements.txt
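The "exec: cd: not found" error happens because the image's entrypoint execs the entry-point command directly, and cd is a shell built-in rather than an executable. One workaround worth trying (a sketch, not verified against MLflow 1.2.0's docker runner) is to wrap the command in an explicit shell; the more reliable fix, as you already found, is to keep the MLproject file at the project root so that trainer/ is part of the project directory:
    command: |
      sh -c "cd .. && python -m trainer.task --job-dir {job_dir}"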

Related

Use directory in docker-compose.yml's parent folder as volume

I have the following directory structure:
.
├── README.md
├── alice
├── docker
│   ├── compose-prod.yml
│   ├── compose-stage.yml
│   ├── compose.yml
│   └── dockerfiles
├── gauntlet
├── nexus
│   ├── Procfile
│   ├── README.md
│   ├── VERSION.txt
│   ├── alembic
│   ├── alembic.ini
│   ├── app
│   ├── poetry.lock
│   ├── pyproject.toml
│   └── scripts
nexus.Dockerfile
FROM python:3.10
RUN addgroup --system app && adduser --system --group app
WORKDIR /usr/src/pdn/nexus
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
ARG INSTALL_DEV=true
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
COPY ../../nexus .
RUN chmod +x scripts/run.sh
ENV PYTHONPATH=/usr/src/pdn/nexus
RUN chown -R app:app $HOME
USER app
CMD ["./run.sh"]
The relevant service in compose.yml looks like this:
services:
  nexus:
    platform: linux/arm64
    build:
      context: ../
      dockerfile: ./docker/dockerfiles/nexus.Dockerfile
    container_name: nexus
    restart: on-failure
    ports:
      - "8000:8000"
    volumes:
      - ../nexus:/usr/src/pdn/nexus:ro
    environment:
      - DATABASE_HOSTNAME=${DATABASE_HOSTNAME?}
    env_file:
      - .env
When I run compose up, I get the following error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./scripts/run.sh": permission denied: unknown
The service starts fine without the volume definition. I think it might be because of the location of nexus relative to the Dockerfile or compose file, but the context is set to the parent.
I tried defining the volume as follows:
volumes:
- ./nexus:/usr/src/pdn/nexus:ro
But I get a similar error; in this case run.sh is not found, and a directory named nexus gets created in the docker directory:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./run.sh": stat ./run.sh: no such file or directory: unknown
Not sure what I'm missing.
I have two comments; I'm not sure if they solve your issue.
First, although your compose.yml is allowed to reference parent directories, that is not the case in your Dockerfile: you can't COPY from outside the build context you specified in compose.yml (.., which resolves to your app root). So you should change these lines:
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
COPY ../../nexus .
to
COPY ./nexus/pyproject.toml ./nexus/poetry.lock* ./
COPY ./nexus .
Second, the volume overrides whatever is in /usr/src/pdn/nexus with the contents of ../nexus. This makes everything you copied into /usr/src/pdn/nexus useless. That may not be an issue if the contents are the same, but any permissions you set in the image (such as the chmod on run.sh) are gone. So if the contents are the same, the only problem left is your start script: put it in a separate directory outside /usr/src/pdn/nexus so that it isn't overridden, and don't forget to reference the new path in CMD.
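A rough sketch of that second point (hypothetical paths, assuming run.sh does not depend on living inside /usr/src/pdn/nexus): copy the start script somewhere outside the mounted directory so the bind mount can't hide it or drop its execute bit:
# in nexus.Dockerfile
COPY ./nexus/scripts/run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]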

Building a Docker container for Golang code: package PACKAGE_NAME is not in GOROOT

I built a small Golang application and I want to run it on a Docker container.
I wrote the following Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.16-alpine
WORKDIR /app
COPY go.mod ./
COPY go.sum ./
RUN go mod download
COPY ./* .
RUN go env -w GO111MODULE=on
RUN go build -o /docker-gs-ping
EXPOSE 8080
CMD [ "/docker-gs-ping" ]
However, when I run the command:
docker build --tag docker-gs-ping .
I get the errors:
#16 0.560 found packages controllers (controller.go) and repositories (csv_file_repository.go) in /app
#16 0.560 main.go:4:2: package MyExercise/controllers is not in GOROOT (/usr/local/go/src/MyExercise/controllers)
I want to mention that the controllers package exists in my working directory, and all of its files are placed in the MyExercise/controllers folder.
Do you know how to resolve this error?
Edit:
This is the directory tree:
.
├── Dockerfile
├── README
├── controllers
│   └── controller.go
├── go.mod
├── go.sum
├── logging
│   └── logger.go
├── main.go
├── models
│   └── location.go
├── output.log
├── repositories
│   ├── csv_file_repository.go
│   ├── csv_file_repository_builder.go
│   ├── csv_file_repository_builder_test.go
│   ├── csv_file_repository_test.go
│   ├── repository_builder_interface.go
│   ├── repository_interface.go
│   └── resources
│       └── ip_address_list.txt
└── services
    ├── ip_location_service.go
    ├── ip_location_service_test.go
    ├── rate_limiter_service.go
    ├── rate_limiter_service_interface.go
    ├── rate_limiter_service_test.go
    └── time_service.go
import section in main.go:
import (
	"MyExercise/controllers"
	"MyExercise/logging"
	"MyExercise/repositories"
	"MyExercise/services"
	"errors"
	"github.com/gin-gonic/gin"
	"os"
	"strconv"
	"sync"
)
Run go mod vendor in your app directory (see the documentation).
To build the container: docker build -t app:v1 .
Dockerfile
FROM golang:1.16-alpine
WORKDIR /app/
ADD . .
RUN go build -o /app/main
EXPOSE 5055
CMD [ "/app/main" ]
There is actually an issue with your Dockerfile.
COPY ./* .
does not actually do what you think. It will copy all files recursively in a flat structure to the /app directory.
Modify your Dockerfile to something like:
# syntax=docker/dockerfile:1
FROM golang:1.16-alpine
WORKDIR /app
ADD . /app
RUN go mod download
RUN go env -w GO111MODULE=on
RUN go build -o /docker-gs-ping
EXPOSE 8080
CMD [ "/docker-gs-ping" ]
Basically, remove all of the COPY directives and replace them with a single ADD directive.
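As a side note (general Docker practice, not something this question requires): for plain local files COPY is usually preferred over ADD, so the single directive could equally be written as:
COPY . /app
A .dockerignore file next to the Dockerfile can keep build artifacts such as output.log out of the context if needed.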

COPY failed while using docker

I'm building an Express app, but when I use the command sudo docker build - < Dockerfile I get the error COPY failed: file not found in build context or excluded by .dockerignore: stat package.json: file does not exist.
This is what my project structure looks like:
.
├── build
│   ├── server.js
│   └── server.js.map
├── Dockerfile
├── esbuild.js
├── package.json
├── package-lock.json
├── Readme.md
└── src
    ├── index.ts
    ├── navigate.ts
    ├── pages.ts
    ├── responses
    │   ├── Errors.ts
    │   └── index.ts
    └── server.ts
And this is my Dockerfile content:
FROM node:14.0.0
WORKDIR /usr/src/app
RUN ls -all
COPY [ "package.json", \
"./"]
COPY src/ ./src
RUN npm install
RUN node esbuild.js
RUN npx nodemon build/server.js
EXPOSE 3001
CMD ["npm", "run", "serve", ]
At the moment I run the command, I'm located in the root of the project.
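A likely cause (an assumption based on the error message, not something stated in the post): docker build - < Dockerfile sends only the Dockerfile on stdin and no build context, so COPY has no package.json to copy. Building with the project root as the context should avoid that, for example (my-express-app is a hypothetical tag):
sudo docker build -t my-express-app .
# or, reading the Dockerfile explicitly while keeping . as the context
sudo docker build -t my-express-app -f Dockerfile .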

Can you have a non top-level Dockerfile when invoking COPY?

I have a Dockerfile to build releases for an Elixir/Phoenix application. The directory tree is as follows: the Dockerfile (which builds on the bitwalker/alpine-elixir base image) is in the "infra" subfolder and needs access to all the files one level above "infra".
.
├── README.md
├── assets
│   ├── css
│   ├── js
│   ├── node_modules
│   ├── package-lock.json
│   ├── package.json
├── lib
├── infra
│   ├── Dockerfile
│   ├── config.yaml
│   ├── deployment.yaml
The Dockerfile looks like:
# https://github.com/bitwalker/alpine-elixir
FROM bitwalker/alpine-elixir:latest
# Set exposed ports
EXPOSE 4000
ENV PORT=4000
ENV MIX_ENV=prod
ENV APP_HOME /app
ENV APP_VERSION=0.0.1
COPY ./ ${HOME}
WORKDIR ${HOME}
RUN mix deps.get
RUN mix compile
RUN MIX_ENV=${MIX_ENV} mix distillery.release
RUN echo $HOME
COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
RUN tar -xzvf my_app.tar.gz
USER default
CMD ./bin/my_app foreground
The command "mix distillery.release" is what builds the my_app.tar.gz file in the path indicated by the COPY command.
I invoke the docker build as follows in the top-level directory (the parent directory of "infra"):
docker build -t my_app:local -f infra/Dockerfile .
I basically then get an error with COPY:
Step 13/16 : COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
COPY failed: stat /var/lib/docker/tmp/docker-builder246562111/opt/app/_build/prod/rel/my_app/releases/0.0.1/my_app.tar.gz: no such file or directory
I understand that the COPY command depends on the "build context", but I thought that by issuing docker build in the parent directory of infra I had the appropriate context set for the COPY; clearly that doesn't seem to be the case. Is there a way to have the Dockerfile one level below the parent directory that contains all the files needed to build an Elixir/Phoenix "release" (the my_app.tar.gz and associated files created via mix distillery.release)? What bits am I missing?
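One possible fix (a sketch, assuming the release paths above are correct): my_app.tar.gz is produced inside the image by RUN mix distillery.release, so it never exists in the host build context, and COPY only reads from the context. Copying within the image avoids the problem:
# replace the failing COPY with an in-image copy; the file already exists in the image
RUN cp ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .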

How to copy a folder from a dockerfile's parent into workdir

So I have a tree that looks like this:
.
├── README.md
├── dataloader
│   ├── Dockerfile
...
│   ├── log.py
│   ├── logo.py
│   ├── processors
...
│   └── tests
├── datastore
│   ├── datastore.py
and the Dockerfile inside the dataloader application looks like this:
FROM python:3.7
WORKDIR /var/dsys-2uid-dataloader
COPY assertions/ ./assertions/
COPY events/ ./events/
COPY processors/ ./processors/
COPY requirements.txt ./
<*>COPY datastore/ ./datastore/
COPY *.py ./
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python", "dataloader.py"]
The line with the asterisk doesn't work, since the datastore folder is in the parent directory of the Dockerfile. What can be done? I need this Dockerfile to be correct because it's going to be used as the image in the Kubernetes deployment.
You can't access a file outside of your build context, but you can "trick" Docker into using a different build context.
Just run docker build -t foo -f dataloader/Dockerfile . from the root directory (where you have the README and the dirs).
$ tree
.
├── bar
│   └── wii
└── foo
    └── Dockerfile
2 directories, 2 files
$ cat foo/Dockerfile
FROM ubuntu
COPY bar/wii .
$ docker build -t test -f foo/Dockerfile .
Sending build context to Docker daemon 3.584kB
Step 1/2 : FROM ubuntu
---> cf0f3ca922e0
Step 2/2 : COPY bar/wii .
---> c3ff3f652b4d
Successfully built c3ff3f652b4d
Successfully tagged test:latest
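One caveat with this approach (an assumption based on the tree above): once the build context is the repository root, every COPY source in dataloader/Dockerfile has to be written relative to that root, for example:
COPY dataloader/assertions/ ./assertions/
COPY dataloader/processors/ ./processors/
COPY dataloader/requirements.txt ./
COPY datastore/ ./datastore/
COPY dataloader/*.py ./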
