I am experimenting with automating parts of a project that I'd ideally deploy and forget about. The project consists of an XML parser and a small Flask website. At the moment the folder structure looks like this:
.
├── init.sql
└── parser
    ├── cloudbuild.yaml
    ├── cloud_func
    │   └── main.py
    ├── Dockerfile
    ├── feed_parse.py
    ├── get_so.py
    ├── requirements.txt
    └── utils.py
Now, I can correctly set up the trigger to look at /parser/cloudbuild.yaml, but building the image with the following command raises an error:
build . --build-arg "CLIENT_CERT=$CSQL_CERT CLIENT_KEY=$CSQL_KEY SERVER_CA=$CSQL_CA SERVER_PW=$CSQL_PW SERVER_HOST=$CSQL_IP" -t gcr.io/and-reporting/appengine/so-parser:latest
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
It looks to me like GCP has trouble locating my Dockerfile, which is in the same folder as cloudbuild.yaml.
What am I missing?
For the sake of completeness, the Dockerfile looks like this:
FROM python:3.7-alpine
RUN apk update \
    && apk add gcc python3-dev musl-dev libffi-dev \
    && apk del libressl-dev \
    && apk add openssl-dev
COPY requirements.txt /
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
ADD . /parser
WORKDIR /parser/
RUN mkdir -p certs
# Set env variables from secrets
ARG CLIENT_CERT
ENV CSQL_CERT=${CLIENT_CERT}
ARG CLIENT_KEY
ENV CSQL_KEY=${CLIENT_KEY}
ARG SERVER_CA
ENV CSQL_CA=${SERVER_CA}
ARG SERVER_PW
ENV CSQL_PW=${SERVER_PW}
ARG SERVER_HOST
ENV CSQL_IP=${SERVER_HOST}
# Get ssl certs in files
RUN echo $CLIENT_CERT > ./certs/ssl_cert.pem \
    && echo $CLIENT_KEY > ./certs/ssl_key.pem \
    && echo $SERVER_CA > ./certs/ssl_ca.pem
CMD python get_so.py
Edit: here is the cloudbuild.yaml I'm using for the build:
steps:
# Building image
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-f',
    'Dockerfile',
    '--build-arg',
    'CLIENT_CERT=$$CSQL_CERT CLIENT_KEY=$$CSQL_KEY SERVER_CA=$$CSQL_CA SERVER_PW=$$CSQL_PW SERVER_HOST=$$CSQL_IP',
    '-t',
    'gcr.io/$PROJECT_ID/appengine/so-parser:latest',
    '.'
  ]
  secretEnv: ['CSQL_CERT', 'CSQL_KEY', 'CSQL_CA', 'CSQL_PW', 'CSQL_IP']
# Push Images
# - name: 'gcr.io/cloud-builders/docker'
#   args: ['push', 'gcr.io/$PROJECT_ID/appengine/so-parser:latest']
secrets:
- kmsKeyName: projects/myproject/locations/global/keyRings/so-jobs/cryptoKeys/board
  secretEnv:
    CSQL_CERT: [base64 string]
    CSQL_KEY: [base64 string]
    CSQL_CA: [base64 string]
    CSQL_PW: [base64 string]
    CSQL_IP: [base64 string]
Because of the dot (the build context) in your cloudbuild.yaml, Docker is not able to find the Dockerfile, which lives in the parser directory. Point the build context at that directory instead:
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/mynodejs:$SHORT_SHA", "./parser"]
If you want to specify the Dockerfile name explicitly:
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/mynodejs:$SHORT_SHA", "-f", "./parser/your-dockerfile", "./parser"]
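Applied to the layout in the question, the build step might look like the sketch below. This keeps the original build-args and secretEnv and only moves the Dockerfile path and build context to ./parser; treat it as a starting point rather than a verified config:
steps:
# Building image
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-f',
    './parser/Dockerfile',
    '--build-arg',
    'CLIENT_CERT=$$CSQL_CERT CLIENT_KEY=$$CSQL_KEY SERVER_CA=$$CSQL_CA SERVER_PW=$$CSQL_PW SERVER_HOST=$$CSQL_IP',
    '-t',
    'gcr.io/$PROJECT_ID/appengine/so-parser:latest',
    './parser'
  ]
  secretEnv: ['CSQL_CERT', 'CSQL_KEY', 'CSQL_CA', 'CSQL_PW', 'CSQL_IP']
Note that the -f path is resolved from the working directory (/workspace), while files referenced by COPY/ADD are resolved from the ./parser context.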
Related
I have a question regarding an approach and how secure it is. Basically, I have a Next.js app which I push to Cloud Run via GitHub Actions. In GitHub Actions I have defined secrets, which I then pass via the GitHub Actions YAML file to Docker. In Docker I pass them to environment variables at build time, and then I use next.config.js to make them available to the app at build time. Here are my files.
GitHub Actions workflow file
name: nextjs-cloud-run
on:
  push:
    branches:
      - main
env:
  GCP_PROJECT_ID: ${{ secrets.GCP_PROJECT_ID }}
  GCP_REGION: europe-west1
  # project-name but it can be anything you want
  REPO_NAME: some-repo-name
jobs:
  build-and-deploy:
    name: Setup, Build, and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # This step is where our service account will be authenticated
      - uses: google-github-actions/setup-gcloud@v0.2.0
        with:
          project_id: ${{ secrets.GCP_PROJECT_ID }}
          service_account_key: ${{ secrets.GCP_SA_KEYBASE64 }}
          service_account_email: ${{ secrets.GCP_SA_EMAIL }}
      - name: Enable the necessary APIs and enable docker auth
        run: |-
          gcloud services enable containerregistry.googleapis.com
          gcloud services enable run.googleapis.com
          gcloud --quiet auth configure-docker
      - name: Build, tag image & push
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: "gcr.io/${{ secrets.GCP_PROJECT_ID }}/collect-opinion-frontend:${{ github.sha }}"
          secrets: |
            "NEXT_PUBLIC_STRAPI_URL=${{ secrets.NEXT_PUBLIC_STRAPI_URL }}"
            "NEXTAUTH_SECRET=${{ secrets.NEXTAUTH_SECRET }}"
            "NEXTAUTH_URL=${{ secrets.NEXTAUTH_URL }}"
      - name: Deploy Image
        run: |-
          gcloud components install beta --quiet
          gcloud beta run deploy $REPO_NAME --image gcr.io/$GCP_PROJECT_ID/$REPO_NAME:$GITHUB_SHA \
            --project $GCP_PROJECT_ID \
            --platform managed \
            --region $GCP_REGION \
            --allow-unauthenticated \
            --quiet
This is my Dockerfile for Next.js:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
# get the environment vars from secrets
RUN --mount=type=secret,id=NEXT_PUBLIC_STRAPI_URL \
    --mount=type=secret,id=NEXTAUTH_SECRET \
    --mount=type=secret,id=NEXTAUTH_URL \
    export NEXT_PUBLIC_STRAPI_URL=$(cat /run/secrets/NEXT_PUBLIC_STRAPI_URL) && \
    export NEXTAUTH_SECRET=$(cat /run/secrets/NEXTAUTH_SECRET) && \
    export NEXTAUTH_URL=$(cat /run/secrets/NEXTAUTH_URL) && \
    yarn build
# RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
My next.config.js file
// next.config.js
module.exports = {
  // ... rest of the configuration.
  output: "standalone",
  env: {
    // Will only be available on the server side
    NEXTAUTH_SECRET: process.env.NEXTAUTH_SECRET,
    NEXTAUTH_URL: process.env.NEXTAUTH_URL, // Pass through env variables
  },
};
My question is: does this create a security issue, in the sense that the environment variables could somehow be accessed from the client?
According to the Next.js documentation that should not be the case, or at least that's how I understand it. A snippet from the site:
Note: environment variables specified in this way will always be included in the JavaScript bundle, prefixing the environment variable name with NEXT_PUBLIC_ only has an effect when specifying them through the environment or .env files.
I would appreciate it if you could advise me on this.
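For what it's worth, one common alternative (not part of the original post) is to keep the server-only values out of the image entirely and set them on the Cloud Run service at deploy time, reading them from process.env at runtime. That would mean removing NEXTAUTH_SECRET and NEXTAUTH_URL from the env block in next.config.js, while NEXT_PUBLIC_STRAPI_URL would still need to be present at build time because it is inlined into the client bundle. A rough sketch of such a deploy step:
- name: Deploy Image
  run: |-
    gcloud run deploy $REPO_NAME --image gcr.io/$GCP_PROJECT_ID/$REPO_NAME:$GITHUB_SHA \
      --project $GCP_PROJECT_ID \
      --platform managed \
      --region $GCP_REGION \
      --allow-unauthenticated \
      --set-env-vars "NEXTAUTH_SECRET=${{ secrets.NEXTAUTH_SECRET }},NEXTAUTH_URL=${{ secrets.NEXTAUTH_URL }}" \
      --quiet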
I want to use the Serverless Framework to deploy a Lambda that is built from a Docker container image, but the Dockerfile that builds the image lives in a separate folder from the source code. When I run sls deploy I get an error saying the Dockerfile cannot find the src folder with the code it needs to copy in. I understand that Dockerfiles cannot access files/folders outside the build context, and that to work around this you need to set the context correctly, e.g. with Docker Compose, something like:
instagram_image:
  image: instagram_image
  build:
    context: ../.
    dockerfile: ./build/instagram/Dockerfile
  volumes:
    - ./bin:/out
However, I don't know how to do this in the context of the Serverless Framework. Does anyone know how? All the examples have the Dockerfile in the same directory as the code itself.
I have a project structure like so
▾ build/
  ▾ instagram/
      Dockerfile
  docker-compose.yml
  Dockerfile
  lambda-env-tokens.yml
  serverless.yml
▾ src/my_selector/
    ... code in here
My Dockerfile:
# Install tar and xz
RUN yum install tar xz unzip -y
# Install ffmpeg
RUN curl https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz -o ffmpeg.tar.xz -s
RUN tar -xf ffmpeg.tar.xz
RUN mv ffmpeg-*-amd64-static/ffmpeg /usr/bin
RUN mkdir ./download_videos
RUN mkdir ./media
COPY src ${LAMBDA_TASK_ROOT}
# Copy function code
COPY pyproject.toml .
RUN python -m pip install --target "${LAMBDA_TASK_ROOT}" .
CMD [ "my_selector.handlers.instagram.handler" ]
serverless.yml
service: MySelector
provider:
  name: aws
  profile: personal-profile
  region: eu-west-1
  runtime: python3.9
  ecr:
    images:
      instagramdockerimage:
        path: ./instagram
functions:
  instagram:
    image:
      name: instagramdockerimage
    timeout: 900 # Max out timeout of 15 mins
    memorySize: 200
    events:
      - schedule: rate(2 hours)
    environment:
      ...
According to the Serverless docs, you can specify the Dockerfile to use with the file property:
provider:
  name: aws
  ecr:
    images:
      baseimage:
        path: ../.
        file: build/instagram/Dockerfile
Set path to the root of your repo and file to the path of the Dockerfile within it.
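Mapped onto the layout above (serverless.yml sitting in build/ and the code in src/), that might look something like the sketch below; the exact relative paths are an assumption based on the tree shown in the question:
provider:
  name: aws
  profile: personal-profile
  region: eu-west-1
  runtime: python3.9
  ecr:
    images:
      instagramdockerimage:
        path: ../.                        # repo root, so src/ is inside the build context
        file: build/instagram/Dockerfile
With the context at the repo root, the existing COPY src ${LAMBDA_TASK_ROOT} line should resolve as-is, without any ../../ paths.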
Can you try changing COPY src ${LAMBDA_TASK_ROOT} to COPY ../../src ${LAMBDA_TASK_ROOT}?
By the way, regarding these steps:
RUN curl https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz -o ffmpeg.tar.xz -s
RUN tar -xf ffmpeg.tar.xz
RUN mv ffmpeg-*-amd64-static/ffmpeg /usr/bin
I think the archive should be stored locally instead of downloaded with curl on every build, for better performance; and after that, use cp instead of mv.
This is the project structure:
Project
  /deployment
    /Dockerfile
    /docker-compose.yml
  /services
    /ui
      /widget
Here is the Dockerfile:
FROM node:14
WORKDIR /app
USER root
# create new user (only root can do this) and assign ownership to the newly created user
RUN echo "$(date '+%Y-%m-%d %H:%M:%S'): ======> Setup Appusr" \
    && groupadd -g 1001 appusr \
    && useradd -r -u 1001 -g appusr appusr \
    && mkdir /home/appusr/ \
    && chown -R appusr:appusr /home/appusr/ \
    && chown -R appusr:appusr /app
# switch to the newly created user so that appusr owns all files and has access
USER appusr:appusr
COPY ../services/ui/widget/ /app/
COPY ../.env /app/
# installing deps
RUN npm install
and docker-compose
version: "3.4"
x-env: &env
  HOST: 127.0.0.1
services:
  widget:
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 3002:3002
    command:
      npm start
    environment:
      <<: *env
    restart: always
and running docker-compose up from project/deployment shows:
Step 6/8 : COPY ../services/ui/widget/ /app/
ERROR: Service 'widget' failed to build : COPY failed: forbidden path outside the build context: ../services/ui/widget/ ()
Am I setting the wrong context?
You cannot COPY or ADD files from outside the build context, which here is the directory containing the Dockerfile (context: .).
You should either move these two directories to where the Dockerfile is and then change your Dockerfile to:
COPY ./services/ui/widget/ /app/
COPY ./.env /app/
Or use volumes in docker-compose, and remove the two COPY lines.
So, your docker-compose should look like this:
x-env: &env
  HOST: 127.0.0.1
services:
  widget:
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 3002:3002
    command:
      npm start
    environment:
      <<: *env
    restart: always
    volumes:
      - /absolute/path/to/services/ui/widget/:/app/
      - /absolute/path/to/.env:/app/.env
And this should be your Dockerfile if you use volumes in docker-compose:
FROM node:14
WORKDIR /app
USER root
# create new user (only root can do this) and assign ownership to the newly created user
RUN echo "$(date '+%Y-%m-%d %H:%M:%S'): ======> Setup Appusr" \
    && groupadd -g 1001 appusr \
    && useradd -r -u 1001 -g appusr appusr \
    && mkdir /home/appusr/ \
    && chown -R appusr:appusr /home/appusr/ \
    && chown -R appusr:appusr /app
# switch to the newly created user so that appusr owns all files and has access
USER appusr:appusr
# installing deps
RUN npm install
Your problem is that you are referencing a file that is outside the Dockerfile's build context. By default, the context is the location from which you execute the build command.
From the Docker documentation, COPY section:
The path must be inside the context of the build; you cannot COPY ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
However, you can use the -f parameter to specify the Dockerfile independently of the folder you run the build from. So you could use the following line, executing it from Project:
docker build -f ./deployment/Dockerfile .
You will need to modify your copy lines as well to point at the right location.
COPY ./services/ui/widget/ /app/
COPY ./.env /app/
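Since the build here is driven by docker-compose, the equivalent change is to move the build context up to the project root and point dockerfile into deployment/. A sketch, assuming docker-compose.yml stays in the deployment folder and only the build section changes:
services:
  widget:
    build:
      context: ..                        # Project/ becomes the build context
      dockerfile: deployment/Dockerfile  # resolved relative to the context
With that context, the COPY lines can reference ./services/ui/widget/ and ./.env as shown above.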
While getting a Django environment set up, I was working out how to containerize it. In doing so, I can't get the entrypoint to work on Docker for Windows or Linux.
Successfully built e9cb8e009d91
Successfully tagged avengervision_web:latest
avengervision_db_1 is up-to-date
Starting avengervision_web_1 ... done
CONTAINER ID   IMAGE               COMMAND                  CREATED      STATUS                      PORTS   NAMES
1da83169ba41   avengervision_web   "sh /usr/src/app/ent…"   44 minutes   Exited (2) 20 seconds ago           avengervision_web_1
docker logs 1da83169ba41
sh: can't open '/usr/src/app/entrypoint.sh': No such file or directory
Have simplified the entrypoint.sh to just get it to execute.
Have tried:
ENTRYPOINT ["sh","/usr/src/app/entrypoint.sh"] and
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Made sure the line endings in git and VS Code are set to LF and ran the code through dos2unix
Ran the same Docker Compose on Windows and Linux and got the same exception on both
Added a step to the Dockerfile as an extra precaution to strip carriage returns, and made sure to chmod +x the script
Commented out the ENTRYPOINT, ran docker run -tdi, and I was able to docker attach and execute the script from within the container without any issue
*****docker-compose.yml*****
version: '3.7'
services:
  web:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
    #command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./main/:/usr/src/app/
    ports:
      - 8000:8000
    environment:
      - DEBUG=1
      - SECRET_KEY=foo
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=hello_django_dev
      - SQL_USER=hello_django
      - SQL_PASSWORD=hello_django
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
  db:
    image: postgres:11.2-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
      - POSTGRES_DB=hello_django_dev
volumes:
  postgres_data:
*****Dockerfile*****
# pull official base image
FROM python:3.7-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps
# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./docker/Pipfile /usr/src/app/Pipfile
RUN pipenv install --skip-lock --system --dev
# copy entrypoint.sh
COPY ./docker/entrypoint.sh /usr/src/app/entrypoint.sh
#RUN chmod +x /usr/src/app/entrypoint.sh
# copy project
COPY main /usr/src/app/main
COPY manage.py /usr/src/app
#RUN /usr/src/app/entrypoint.sh
RUN sed -i 's/\r$//' /usr/src/app/entrypoint.sh && \
    chmod +x /usr/src/app/entrypoint.sh
# run entrypoint.sh
ENTRYPOINT ["sh","/usr/src/app/entrypoint.sh"]
*****entrypoint.sh*****
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi
echo "Testing"
#python /usr/src/app/manage.py flush
#python /usr/src/app/manage.py migrate
#python /usr/src/app/manage.py collectstatic --no-input --clear
exec "$@"
The goal in the end is that the container would be up and running with the django application created.
Leveraging the layout listed here - https://github.com/testdrivenio/django-on-docker - it worked. The difference in what I was doing is that I had created a new docker directory at the root and then had Docker Compose use that. Everything seemed to copy into the container as it was supposed to, but for some reason the entrypoint would not work. Without changing any of the code other than updating the references to the new file locations, everything worked. Below are the changes made:
web:
  build:
    context: .
    dockerfile: ./docker/Dockerfile
to
web:
  build: ./app
and then changing the directory structure from
Project Layout:
├───.vscode
├───docker
│ └───Dockerfile
│ └───entrypoint.sh
│ └───Pipfile
│ └───nginx
└───main
├───migrations
├───static
│ └───images
├───templates
├───Artwork
├───django-env
│ ├───Include
│ ├───Lib
│ └───Scripts
└───docker-compose.yml
└───manage.py
to
Project Layout:
├───.vscode
├───app
│ └───main
│ ├───migrations
│ ├───static
│ │ └───images
│ ├───templates
│ └───Dockerfile
│ └───entrypoint.sh
│ └───manage.py
│ └───Pipfile
├───Artwork
├───django-env
│ ├───Include
│ ├───Lib
│ └───Scripts
└───nginx
└───docker-compose.yml
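For completeness, the web service in the relocated docker-compose.yml would presumably end up looking something like the sketch below (this mirrors the linked django-on-docker layout; the volume path is an assumption, since only the build: ./app change is shown above, and the rest of the service stays as before):
web:
  build: ./app
  volumes:
    - ./app/:/usr/src/app/
  ports:
    - 8000:8000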
I have to configure a custom build process for a GC App Engine application with GC Cloud Build.
First of all, I have an internal Python repository on a GC Compute Engine instance. It's accessible only through the internal network, and I use remote-builder to run the pip install command on the internal GC instance.
After downloading the dependencies from the internal repository, I have to deploy the result to GC App Engine.
Cloudbuild.yaml:
steps:
# Download dependencies from the internal repository
- name: gcr.io/${ProjectName}/remote-builder
  env:
    - COMMAND=sudo bash workspace/download-dependencies.bash
    - ZONE=us-east1-b
    - INSTANCE_NAME=remote-cloud-build
    - INSTANCE_ARGS=--image-project centos-cloud --image-family centos-7
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/${ProjectName}/app', '.']
- name: gcr.io/cloud-builders/docker
  args: ['push', 'gcr.io/${ProjectName}/app']
- name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/${ProjectName}/${ProjectName}']
images: ['gcr.io/${ProjectName}/${ProjectName}']
app.yaml:
runtime: python
env: flex
entrypoint: python main.py
service: service-name
runtime_config:
  python_version: 3
Dockerfile:
FROM gcr.io/google-appengine/python
WORKDIR /app
COPY . /app
download-dependencies.bash:
#!/usr/bin/env bash
easy_install pip
pip install --upgrade pip
pip install --upgrade setuptools
pip install -r workspace/requirements.txt
After running gcloud builds submit --config cloudbuild.yaml, a new version of the application is deployed on App Engine, but it doesn't work.
Maybe the issue is the wrong image? As far as I understand, I need to configure the Dockerfile to collect all the custom Python dependencies into the image.
Could you please help me with it?
Thanks in advance!
Update
I changed my Dockerfile according to the Google guideline:
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD . /app
CMD main.py
And the new error is: /bin/sh: 1: main.py: not found
If I change the last line to CMD app/main.py, it creates a version but it doesn't work.
Finally, I got it working. There were a few issues, and I will share the right configs below. I hope it helps someone.
steps:
# Move our code to an instance inside the project to have access to the private repo
- name: gcr.io/${PROJECT_NAME}/remote-builder
  env:
    - COMMAND=sudo bash workspace/download-dependencies.bash
    - ZONE=us-east1-b
    - INSTANCE_NAME=remote-cloud-build
    - INSTANCE_ARGS=--image-project centos-cloud --image-family centos-7
# Build image with downloaded deps
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/${PROJECT_NAME}/${APP_NAME}', '.']
# Push image to project repo
- name: gcr.io/cloud-builders/docker
  args: ['push', 'gcr.io/${PROJECT_NAME}/${APP_NAME}']
# Deploy image to AppEngine
- name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/${PROJECT_NAME}/${APP_NAME}']
images: ['gcr.io/${PROJECT_NAME}/${APP_NAME}']
timeout: '1800s'
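One note on the placeholders (not from the original answer): ${PROJECT_NAME} and ${APP_NAME} are not built-in Cloud Build substitutions (the built-in project value is $PROJECT_ID), so in practice they would be declared as user-defined substitutions, whose names must start with an underscore and which are then referenced as, e.g., ${_APP_NAME} in the steps. A rough sketch with hypothetical values:
substitutions:
  _PROJECT_NAME: my-project   # hypothetical; the built-in $PROJECT_ID can often be used instead
  _APP_NAME: my-app           # hypothetical application name
These can also be overridden at build time with gcloud builds submit --substitutions=_APP_NAME=my-app.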
download-dependencies.bash:
#!/usr/bin/env bash
easy_install pip
pip install --upgrade pip
pip install --upgrade setuptools
pip install wheel
#Download private deps and save it to volume (share folder between steps)
pip wheel --no-deps -r workspace/private-dependencies.txt -w workspace/lib --no-binary :all:
Dockerfile:
FROM gcr.io/google-appengine/python
COPY . /${APP_NAME}
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN pip install -r /${APP_NAME}/workspace/public-dependencies.txt
#Install private deps from volume
RUN pip install -f /${APP_NAME}/workspace/lib --no-index ${LIBRARY_NAME}
CMD gunicorn -b :$PORT main:app