How do I specify a path in CMD in Dockerfile - docker

I am trying to deploy an app in AWS Lambda. I am having an issue accessing the handler that is deep inside the directory structure. Here is a simplified version of the issue.
My dir structure looks something like:
.
├── app.py
├── Dockerfile
├── events
│   ├── event.json
├── requirements.txt
├── template.yaml
And here are the contents of the files:
app.py
from fastapi import FastAPI
from mangum import Mangum
from pydantic import BaseModel

app = FastAPI(title='Serverless Lambda FastAPI', root_path="/Prod/")

class BodyModelAdd(BaseModel):
    num1: int
    num2: int

@app.post("/add", tags=["Add Two Numbers"])
def add(item: BodyModelAdd):
    return {'result': {'num1': item.num1, 'num2': item.num2, 'sum': item.num1 + item.num2}}

handler = Mangum(app=app)
Dockerfile
FROM amazon/aws-lambda-python:3.9
COPY ./requirements.txt .
RUN pip3 install --no-cache-dir --upgrade -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
COPY . ${LAMBDA_TASK_ROOT}
CMD ["app.handler"]
event.json
{
  "body": "{\"num1\": 1, \"num2\": 2}",
  "resource": "/add",
  "path": "/add",
  "httpMethod": "POST",
  "requestContext": {
    "path": "/add",
    "resourcePath": "add",
    "httpMethod": "POST"
  }
}
requirements.txt
fastapi==0.85.1
uvicorn==0.19.0
mangum==0.16.0
pydantic==1.10.2
template.yaml
Note: I am very very new to Docker and AWS Lambda, and do not really have a good grasp of the contents of the yaml, but I grabbed this from a tutorial and it seems to work.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  python3.9
  Sample SAM Template for api_qna

Globals:
  Function:
    Timeout: 600
    MemorySize: 1024

Resources:
  PracticeFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      Architectures:
        - x86_64
      Events:
        Root:
          Type: Api
          Properties:
            Path: /
            Method: ANY
        NonRoot:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: ./
      DockerTag: python3.9-v1

Outputs:
  PracticeApi:
    Description: "API Gateway endpoint URL for Prod stage"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod"
  PracticeFunction:
    Description: "Practice Tab Lambda Function ARN"
    Value: !GetAtt PracticeFunction.Arn
  PracticeFunctionIamRole:
    Description: "Implicit IAM Role created"
    Value: !GetAtt PracticeFunctionRole.Arn
To test it locally:
Once I have all these files, I first run sam build and then sam local invoke PracticeFunction --event events/event.json, and it gives the expected result as follows:
<some details deleted>
...
{"statusCode": 200, ..., "body": "{\"result\":{\"num1\":1,\"num2\":2,\"sum\":3}}", ...}%
So far so good. However, let me move app.py inside a directory, as shown below:
.
├── ... <-- app.py was here earlier
├── Dockerfile
├── events
│   ├── event.json
├── requirements.txt
├── src
│   ├── dir1
│      ├── __init__.py
│      ├── app.py <-- app.py moved here inside src/dir1/
├── template.yaml
The contents of all the files are still the same (as given above). Now if I run sam build and then sam local invoke PracticeFunction --event events/event.json, it gives me an error as follows:
<some details deleted>
...
{"errorMessage": "Unable to import module 'app': No module named 'app'", "errorType": "Runtime.ImportModuleError", "requestId": "6cad0c02-73f4-4484-98fd-86c851d13548", "stackTrace": []}%
So it was expecting app.py in the root dir and could not find it. Now my question is: how do I handle this case when my app.py is not in the root project directory? Where do I specify the path to app.py?
If I simply replace CMD ["app.handler"] with CMD ["./src/dir1/app.handler"] in the Dockerfile, it gives an error saying {"errorMessage": "the 'package' argument is required to perform a relative import for '..src.dir1.app'", "errorType": "TypeError", .... I have also done some searching but no luck yet (I must be missing something very trivial).
Sorry about the rather long post, but I wanted to include all the details. Thank you so much for any help.
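For what it's worth, here is a likely fix as a minimal sketch: the AWS Lambda Python base images interpret CMD as a Python import path relative to ${LAMBDA_TASK_ROOT} (package.module.attribute), not as a filesystem path, which is why the slash-based attempt fails with a relative-import error. With app.py at src/dir1/app.py (and __init__.py files making src and src/dir1 importable packages), the Dockerfile change would be:

# the handler is given as an import path, not a file path:
# package "src.dir1", module "app", attribute "handler"
CMD ["src.dir1.app.handler"]

The COPY . ${LAMBDA_TASK_ROOT} line can stay as-is, since it already places src/ under the task root.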

Related

Use directory in docker-compose.yml's parent folder as volume

I have the following directory structure:
.
├── README.md
├── alice
├── docker
│   ├── compose-prod.yml
│   ├── compose-stage.yml
│   ├── compose.yml
│   └── dockerfiles
├── gauntlet
├── nexus
│   ├── Procfile
│   ├── README.md
│   ├── VERSION.txt
│   ├── alembic
│   ├── alembic.ini
│   ├── app
│   ├── poetry.lock
│   ├── pyproject.toml
│   └── scripts
nexus.Dockerfile
FROM python:3.10
RUN addgroup --system app && adduser --system --group app
WORKDIR /usr/src/pdn/nexus
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
ARG INSTALL_DEV=true
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
COPY ../../nexus .
RUN chmod +x scripts/run.sh
ENV PYTHONPATH=/usr/src/pdn/nexus
RUN chown -R app:app $HOME
USER app
CMD ["./run.sh"]
The relevant service in compose.yml looks like this:
services:
  nexus:
    platform: linux/arm64
    build:
      context: ../
      dockerfile: ./docker/dockerfiles/nexus.Dockerfile
    container_name: nexus
    restart: on-failure
    ports:
      - "8000:8000"
    volumes:
      - ../nexus:/usr/src/pdn/nexus:ro
    environment:
      - DATABASE_HOSTNAME=${DATABASE_HOSTNAME?}
    env_file:
      - .env
When I run compose up, I get the following error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./scripts/run.sh": permission denied: unknown
The service starts OK without the volume definition. I think it might be because of the location of nexus in relation to the Dockerfile or compose file, but the context is set to the parent.
I tried defining the volume as follows:
volumes:
  - ./nexus:/usr/src/pdn/nexus:ro
But I get a similar error; in this case run.sh is not found, and a directory named nexus gets created in the docker directory:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./run.sh": stat ./run.sh: no such file or directory: unknown
Not sure what I'm missing.
I have two comments; I'm not sure if they can solve your issue.
First, although you are allowed to reference parent directories in your compose.yml, that is not the case in your Dockerfile: you can't COPY from outside the context you specified in your compose.yml file (.., which resolves to your app root). So you should change these lines:
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
COPY ../../nexus .
to
COPY ./nexus/pyproject.toml ./nexus/poetry.lock* ./
COPY ./nexus .
Second, the volume overrides whatever is in /usr/src/pdn/nexus with the content of ../nexus. This renders everything you copied to /usr/src/pdn/nexus useless. That may not be an issue if the contents are the same, but whatever permissions you set on your files may be gone. So if your contents are the same, the only remaining issue is your start script: you can put it in a separate directory outside /usr/src/pdn/nexus so that it won't be overridden, and don't forget to reference it correctly in the CMD, as in the sketch below.
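For illustration, a minimal sketch of that last suggestion; the /usr/local/bin destination is just an assumption, any path outside the mounted directory works:

# keep the start script outside the bind-mounted path so the volume
# at /usr/src/pdn/nexus can neither hide it nor strip its exec bit
COPY ./nexus/scripts/run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]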

AWS SAM Golang app using containers, error when building the image with local modules

I have an AWS SAM application consisting of Lambda functions in Golang that are built and deployed as Docker images. I have a problem building the Lambda image along with other local modules.
The structure of the project looks like this:
├── src
│   ├── configuration # module1
│   │   ├── go.mod
│   │   └── values.go
│   ├── logger # module2
│   │   ├── go.mod
│   │   ├── go.sum
│   │   └── logger.go
│   └── dockerTestLambda # lambda function that use module#1, module#2
│      ├── Dockerfile
│      ├── go.mod
│      ├── go.sum
│      └── main.go
└── template.yml
There are two separate modules, configuration and logger, which are used in the dockerTestLambda function.
The go.mod in the dockerTestLambda module looks like this:
module my-private-repo.io/my-app/dockerTest

go 1.18

replace my-private-repo.io/my-app/logger => ../logger

replace my-private-repo.io/my-app/configuration => ../configuration

require (
    github.com/aws/aws-lambda-go v1.31.1
    my-private-repo.io/my-app/configuration v0.0.0-00010101000000-000000000000
    my-private-repo.io/my-app/logger v0.0.0-00010101000000-000000000000
)

require (
    github.com/pkg/errors v0.9.1 // indirect
    github.com/rs/zerolog v1.26.1 // indirect
)
And the lambda itself (file main.go in dockerTestLambda) looks like this:
package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/cfn"
    "github.com/aws/aws-lambda-go/lambda"

    "my-private-repo.io/my-app/configuration"
    "my-private-repo.io/my-app/logger"
)

func HandleRequest(ctx context.Context, event cfn.Event) error {
    logger.Initialize()
    fmt.Println(configuration.GetRegion())
    logger.Log(nil, logger.LOG_LEVEL_INFO, "Hello from external deps, logger")
    return nil
}

func main() {
    lambda.Start(HandleRequest)
}
With the definition in template.yml:
DockerTestFunction:
  Type: AWS::Serverless::Function
  Properties:
    PackageType: Image
  Metadata:
    DockerTag: docker-test-v1
    DockerContext: ./src/dockerTestLambda
    Dockerfile: Dockerfile
The Dockerfile in the lambda function looks like this:
FROM golang:1.18 AS build
WORKDIR /app
COPY go.mod go.mod
COPY go.sum .
RUN go mod download
COPY main.go .
RUN go build -o /usr/local/bin/lambda .
FROM ubuntu:latest
COPY --from=build /usr/local/bin/lambda /usr/local/bin/lambda
CMD [ "/usr/local/bin/lambda" ]
During the sam build command I get an error:
go: my-private-repo.io/my-app/configuration#v0.0.0-00010101000000-000000000000 (replaced by ../configuration): reading /configuration/go.mod: open /configuration/go.mod: no such file or directory
I have no idea how to build an image so that each lambda has its own Dockerfile and each lambda function has all external dependencies packed into it.
Do you have any ideas how to solve this, or some resources that will allow me to better structure the application code to make this problem easier to solve?
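One common approach, offered as a sketch rather than a tested fix: widen the Docker build context to src/ so the replaced sibling modules are inside it, i.e. set DockerContext: ./src and Dockerfile: dockerTestLambda/Dockerfile in the template, and copy all three modules into the builder stage so the replace directives (../logger, ../configuration) resolve inside the image:

FROM golang:1.18 AS build
WORKDIR /app
# copy the sibling modules first so the replace directives
# ../logger and ../configuration resolve inside the image
COPY configuration/ configuration/
COPY logger/ logger/
COPY dockerTestLambda/ dockerTestLambda/
WORKDIR /app/dockerTestLambda
RUN go mod download
RUN go build -o /usr/local/bin/lambda .

FROM ubuntu:latest
COPY --from=build /usr/local/bin/lambda /usr/local/bin/lambda
CMD [ "/usr/local/bin/lambda" ]

This keeps one Dockerfile per lambda while letting each image pack in the shared local modules.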

COPY failed while using docker

I'm building an Express app, but when I use the command sudo docker build - < Dockerfile I get the error COPY failed: file not found in build context or excluded by .dockerignore: stat package.json: file does not exist.
This is what my project structure looks like:
.
├── build
│   ├── server.js
│   └── server.js.map
├── Dockerfile
├── esbuild.js
├── package.json
├── package-lock.json
├── Readme.md
└── src
    ├── index.ts
    ├── navigate.ts
    ├── pages.ts
    ├── responses
    │   ├── Errors.ts
    │   └── index.ts
    └── server.ts
And this is my Dockerfile content
FROM node:14.0.0
WORKDIR /usr/src/app
RUN ls -all
COPY [ "package.json", \
"./"]
COPY src/ ./src
RUN npm install
RUN node esbuild.js
RUN npx nodemon build/server.js
EXPOSE 3001
CMD ["npm", "run", "serve", ]
At the moment of running the command, I'm located in the root of the project.
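A note that likely explains the error: docker build - < Dockerfile pipes only the Dockerfile to the daemon and sends an empty build context, so there is no package.json for COPY to find. Building from the project root with the directory as the context, for example sudo docker build -t express-app . (the tag name here is just an illustration), gives COPY access to the files.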

Can you have a non top-level Dockerfile when invoking COPY?

I have a Dockerfile to build releases for an Elixir/Phoenix application. The directory tree structure is as follows: the Dockerfile (which builds on the base image linked in its first line) is in the "infra" subfolder and needs access to all the files one level above "infra".
.
├── README.md
├── assets
│   ├── css
│   ├── js
│   ├── node_modules
│   ├── package-lock.json
│   ├── package.json
├── lib
├── infra
│   ├── Dockerfile
│   ├── config.yaml
│   ├── deployment.yaml
The Dockerfile looks like:
# https://github.com/bitwalker/alpine-elixir
FROM bitwalker/alpine-elixir:latest
# Set exposed ports
EXPOSE 4000
ENV PORT=4000
ENV MIX_ENV=prod
ENV APP_HOME /app
ENV APP_VERSION=0.0.1
COPY ./ ${HOME}
WORKDIR ${HOME}
RUN mix deps.get
RUN mix compile
RUN MIX_ENV=${MIX_ENV} mix distillery.release
RUN echo $HOME
COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
RUN tar -xzvf my_app.tar.gz
USER default
CMD ./bin/my_app foreground
The command "mix distillery.release" is what builds the my_app.tar.gz file in the path indicated by the COPY command.
I invoke the docker build as follows in the top-level directory (the parent directory of "infra"):
docker build -t my_app:local -f infra/Dockerfile .
I basically then get an error with COPY:
Step 13/16 : COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
COPY failed: stat /var/lib/docker/tmp/docker-builder246562111/opt/app/_build/prod/rel/my_app/releases/0.0.1/my_app.tar.gz: no such file or directory
I understand that the COPY command depends on the "build context", and I thought that issuing the docker build in the parent directory of "infra" meant I had the appropriate context set for the COPY, but clearly that doesn't seem to be the case. Is there a way to have a Dockerfile one level below the parent directory that contains all the files needed to build an Elixir/Phoenix "release" (the my_app.tar.gz and associated files created via the command mix distillery.release)? What bits am I missing?
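A likely cause, offered as a sketch: COPY sources always come from the host build context, never from the image's own filesystem, and my_app.tar.gz is created inside the image by the mix distillery.release RUN step, so COPY can never see it. Moving the file with RUN instead should avoid the stat error:

# the release tarball already exists inside the image, so use RUN, not COPY
RUN cp ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz . \
 && tar -xzvf my_app.tar.gz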

MLflow 1.2.0 define MLproject file

Trying to run mlflow run with an MLproject whose code lives in a different location from the MLproject file.
I have the following directory structure:
/root/mlflow_test
.
├── conda
│   ├── conda.yaml
│   └── MLproject
├── docker
│   ├── Dockerfile
│   └── MLproject
├── README.md
├── requirements.txt
└── trainer
    ├── __init__.py
    ├── task.py
    └── utils.py
When I run, from /root/:
mlflow run mlflow_test/docker
I get:
/root/miniconda3/bin/python: Error while finding module specification for 'trainer.task' (ImportError: No module named 'trainer')
This is because my MLproject file can't find the Python code.
I moved MLproject to mlflow_test and this works fine.
This is my MLproject entry point:
name: mlflow_sample
docker_env:
  image: mlflow-docker-sample
entry_points:
  main:
    parameters:
      job_dir:
        type: string
        default: '/tmp/'
    command: |
      python -m trainer.task --job-dir {job_dir}
How can I run mlflow run, pass it the MLproject, and ask it to look for the code in a different folder?
I tried:
"cd .. && python -m trainer.task --job-dir {job_dir}"
and I get:
/entrypoint.sh: line 5: exec: cd: not found
Dockerfile
# docker build -t mlflow-gcp-example -f Dockerfile .
FROM gcr.io/deeplearning-platform-release/tf-cpu
RUN git clone https://github.com/GoogleCloudPlatform/ml-on-gcp.git
WORKDIR ml-on-gcp/tutorials/tensorflow/mlflow_gcp
RUN pip install -r requirements.txt
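For what it's worth: mlflow run treats the directory containing the MLproject file as the project root and copies only that directory into the container, which is why moving MLproject up to mlflow_test fixed the import. A sketch of an alternative that keeps MLproject in docker/ would be to bake the trainer package into the image instead; the paths below are assumptions, and the image would need to be built from /root/mlflow_test so trainer/ is in the Docker build context:

# hypothetical additions to the Dockerfile, assuming a build like:
#   docker build -t mlflow-docker-sample -f docker/Dockerfile /root/mlflow_test
COPY trainer/ /app/trainer/
ENV PYTHONPATH=/app

Since PYTHONPATH points at the copied package's parent directory, python -m trainer.task resolves regardless of the working directory MLflow sets.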
