Docker build different in GitHub Actions

When I build my Dockerfile locally and push, my application runs correctly. However, when I build through GitHub Actions I get an error that 'flask' is not installed.
It seems that the pip install step does nothing in GitHub Actions - it just shows:
Step 8/13 : RUN pip install --trusted-host pypi.python.org -r /app/requirements.txt
---> Running in 6b0816c1bdc8
Removing intermediate container 6b0816c1bdc8
However, on my local machine I get the full pip install output.
Is there something I am missing with GitHub Actions?
Dockerfile:
FROM python:3.8-alpine
WORKDIR /app
ARG DB_PASSWORD
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
ADD ./requirements.txt /app
ADD ./src /app
RUN cat /app/requirements.txt
RUN pip install -r /app/requirements.txt
ENV DEBUG=false
ENV FLASK_DEBUG=false
ENV TESTING=false
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
Action Step:
- name: Build docker image and push to ECR
  run: /bin/bash $GITHUB_WORKSPACE/scripts/build_and_push.sh
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.aws_access_key_id }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.aws_secret_access_key }}
    AWS_DEFAULT_REGION: "eu-west-1"
Build Script:
pipenv run pip freeze > requirements.txt
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin {{ ECR Address}}
docker build -t {{ image name }} .
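One thing worth ruling out (an assumption, since the output of the freeze step isn't shown in the question): pipenv run pip freeze only lists packages already installed in pipenv's virtualenv, and on a fresh GitHub Actions runner that virtualenv may not exist yet, so the redirect can produce an empty requirements.txt. That would make the pip install layer a silent no-op exactly like the log above. A minimal guard for scripts/build_and_push.sh, assuming a Pipfile/Pipfile.lock is committed:
# Hypothetical guard in scripts/build_and_push.sh: make sure the generated
# requirements file is not empty before building the image.
pipenv install --deploy                    # recreate the virtualenv from Pipfile.lock on the runner
pipenv run pip freeze > requirements.txt
if [ ! -s requirements.txt ]; then
    echo "requirements.txt is empty - aborting build" >&2
    exit 1
fi
docker build -t {{ image name }} .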

Related

Travis-CI Deployment (sort of/technically) fails because it cannot find decrypted file

One of my five Docker images (the client/frontend) fails to build in the deploy section of my travis.yml config because it cannot find a decrypted file (mdb5-react-ui-kit-pro-essential-master.tar.gz). In my before_install script I build a test image, and Dockerfile.travis successfully runs COPY mdb5-react-ui-kit-pro-essential-master.tar.gz ./. But when Travis proceeds to the deploy stage, which runs a deploy.sh script that builds with the regular Dockerfile, that same COPY instruction fails. I have added the unencrypted secret files to .gitignore, but the Travis CI Linux VM seems to ignore this file and warns that I have untracked changes after decrypting the files.
travis.yml:
sudo: required
language: generic
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
    - secure: <some secret>
before_install:
  - openssl aes-256-cbc -K $encrypted_<some key>_key -iv $encrypted_<some iv>_iv
    -in secretfiles.tar.enc -out secretfiles.tar -d
  - tar xvf secretfiles.tar
  - ls && cd packages/SPA && ls
  - cd ../../
  - sudo apt-get update
  - sudo apt-get -y install libxml2-dev
  - sudo apt install git openssh-client bash
  - git config --global user.name "theCosmicGame"
  - git config --global user.email bridgetmelvin42@gmail.com
  - mkdir -p -m 0600 ~/.ssh && ssh-keyscan git.mdbootstrap.com >> ~/.ssh/known_hosts
  - curl https://sdk.cloud.google.com | bash > /dev/null;
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components update kubectl
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud config set project mavata
  - gcloud config set compute/zone us-east1-b
  - gcloud container clusters get-credentials mavata-cluster
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  - docker build -t bridgetmelvin/react-test -f ./packages/SPA/Dockerfile.travis ./packages/SPA
    --build-arg MDB_INSTALL="$MDB_INSTALL" --build-arg MDB_TOKEN="$MDB_TOKEN" --build-arg
    MDB_URL="$MDB_URL"
script:
  - docker run -e CI=true bridgetmelvin/react-test npm test
before_script:
  - echo \n .env \n mdb5-react-ui-kit-pro-essential-master.tar.gz \n secretfiles.tar \n service-account.json \n > .gitignore
deploy:
  provider: script
  script: bash ./deploy.sh # THIS FAILS
  on:
    branch: main
Dockerfile.travis:
FROM node:16-alpine as builder
WORKDIR /app
ENV DOCKER_TRAVIS_RUNNING=true
ENV SKIP_PREFLIGHT_CHECK=true
ARG MDB_INSTALL
ENV MDB_INSTALL $MDB_INSTALL
ARG MDB_URL
ARG MDB_TOKEN
ENV MDB_URL $MDB_URL
ENV MDB_TOKEN $MDB_TOKEN
COPY package.json ./
COPY postinstall.js ./
COPY postinstall-travis.sh ./
COPY mdb5-react-ui-kit-pro-essential-master.tar.gz ./ # THIS WORKS
COPY .env ./
RUN echo ${MDB_TOKEN}
RUN npm install
RUN npm install mdb5-react-ui-kit-pro-essential-master.tar.gz --save
RUN ls node_modules
COPY . .
RUN ls
# FROM nginx
EXPOSE 9000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# COPY --from=builder /app/build /usr/share/nginx/html
CMD ["npm", "run", "start"]
Dockerfile (SPA):
# syntax=docker/dockerfile:1.2
FROM node:16-alpine as builder
WORKDIR /app
ENV DOCKER_RUNNING_PROD=true
RUN ls
COPY package.json ./
COPY postinstall.js ./
COPY mdb5-react-ui-kit-pro-essential-master.tar.gz ./ # THIS FAILS
RUN npm install
RUN npm install mdb5-react-ui-kit-pro-essential-master.tar.gz --save
RUN ls node_modules
COPY . .
RUN npm run build
FROM nginx
EXPOSE 9000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
project directory:
.github
packages/
| infra/
| k8s/
| k8s-dev/
| k8s-prod/
| server/
| SPA/
|   config/
|   nginx/
|   public/
|   src/
|   Dockerfile
|   Dockerfile.travis
|   (when decrypted) mdb5-react-ui-kit-pro-essential-master.tar.gz
|   package.json
|   postinstall.js
.gitignore
travis.yml
deploy.sh
secretfiles.tar.enc
service-account.json.enc
skaffold.yaml
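deploy.sh itself isn't shown, so only as a hedged diagnostic (the file names come from the question, the rest is assumed): it can help to confirm inside deploy.sh that the decrypted archive still exists at deploy time, that it sits inside the build context passed to docker build, and that no .dockerignore entry excludes it:
#!/bin/bash
# deploy.sh (diagnostic sketch): verify the decrypted file survived into the deploy phase
ls -l packages/SPA/mdb5-react-ui-kit-pro-essential-master.tar.gz || {
  echo "decrypted archive missing at deploy time" >&2
  exit 1
}
# Build with the same context directory that contains the archive, mirroring
# the Dockerfile.travis invocation used in before_install.
docker build -t bridgetmelvin/spa -f ./packages/SPA/Dockerfile ./packages/SPA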

Flask app deployed to GCP Cloud Run through Bitbucket

I am struggling to deploy my dockerised Flask app to a GCP Cloud Run instance through Bitbucket Pipelines.
Here is my bitbucket-pipeline.yml:
image: python:3.9
pipelines:
  default:
    - parallel:
        - step:
            name: Build and test
            caches:
              - pip
            script:
              - pip install -r requirements.txt
              - pytest
        - step:
            name: Linter
            script:
              - pip install flake8
              - flake8 . --extend-exclude=dist,build --show-source --statistics
  branches:
    develop:
      - parallel:
          - step:
              name: Build and test
              caches:
                - pip
              script:
                - pip install -r requirements.txt
                - pytest
          - step:
              name: Linter
              script:
                - pip install flake8
                - flake8 . --extend-exclude=dist,build --show-source --statistics
      - step:
          name: Deploy to Development
          deployment: Development
          image: google/cloud-sdk:latest
          caches:
            - docker
          script:
            - echo ${KEY_FILE_AUTH} | base64 --decode --ignore-garbage > /tmp/gcloud-api.json
            - gcloud auth activate-service-account --key-file /tmp/gcloud-api.json
            - gcloud config set project PROJECT
            - gcloud builds submit --tag eu.gcr.io/PROJECT/APP
            - gcloud beta run deploy APP --image eu.gcr.io/PROJECT/APP --platform managed --region europe-west2 --allow-unauthenticated
options:
  docker: true
My Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.9
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
and, the error from bitbucket pipelines:
Uploading tarball of [.] to [gs://PROJECT_cloudbuild/source/1645481495.615291-f6c287df7e6d4fd8991bed1fe6a5b9ca.tgz]
ERROR: (gcloud.builds.submit) PERMISSION_DENIED: The caller does not have permission
I don't know which permission I am missing - if anyone can give any pointers on this, that would be awesome!
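The question doesn't show which roles the service account behind KEY_FILE_AUTH already holds, so only as a hedged pointer: gcloud builds submit needs permission to create builds and to upload the source tarball to the PROJECT_cloudbuild bucket, and gcloud beta run deploy needs Cloud Run and service-account-user rights. A sketch of the usual grants, with SA_EMAIL and PROJECT as placeholders:
# Hypothetical IAM grants for the pipeline's service account (SA_EMAIL and PROJECT are placeholders)
# Needed by "gcloud builds submit":
gcloud projects add-iam-policy-binding PROJECT \
  --member="serviceAccount:SA_EMAIL" --role="roles/cloudbuild.builds.editor"
gcloud projects add-iam-policy-binding PROJECT \
  --member="serviceAccount:SA_EMAIL" --role="roles/storage.admin"
# Needed by "gcloud beta run deploy":
gcloud projects add-iam-policy-binding PROJECT \
  --member="serviceAccount:SA_EMAIL" --role="roles/run.admin"
gcloud projects add-iam-policy-binding PROJECT \
  --member="serviceAccount:SA_EMAIL" --role="roles/iam.serviceAccountUser"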

GitLab CI Split Docker Build Into Multiple Stages

I have a React/Django app that's dockerized. There are two stages to the GitLab CI process: Build_Node and Build_Image. Build_Node just builds the React app and stores it as an artifact. Build_Image runs docker build to build the actual Docker image, and relies on the node step because it copies the built files into the image.
However, the build process for the image takes a long time if package dependencies have changed (apt or pip), because it has to reinstall everything.
Is there a way to split the docker build job into multiple parts, so that I can, say, install the apt and pip packages in the Dockerfile while build_node is still running, then finish the Docker build once that stage is done?
gitlab-ci.yml:
stages:
  - Build Node Frontend
  - Build Docker Image
services:
  - docker:18.03-dind
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
build_node:
  stage: Build Node Frontend
  only:
    - staging
    - production
  image: node:14.8.0
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - frontend/node_modules
  script:
    - cd frontend
    - yarn install --network-timeout 100000
    - CI=false yarn build
    - mv build ../http
build_image:
  stage: Build Docker Image
  only:
    - staging
    - production
  image: docker
  script:
    #- sleep 10000
    - tar -cvf app.tar api/ discordbot/ helpers/ http/
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    #- docker pull $CI_REGISTRY_IMAGE:latest
    #- docker build --network=host --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker build --network=host --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Dockerfile:
FROM python:3.7-slim
# Add user
ARG APP_USER=abc
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
WORKDIR /app
ENV PYTHONUNBUFFERED=1
EXPOSE 80
EXPOSE 8080
ADD requirements.txt /app/
RUN set -ex \
&& BUILD_DEPS=" \
gcc \
" \
&& RUN_DEPS=" \
ffmpeg \
postgresql-client \
nginx \
dumb-init \
" \
&& apt-get update && apt-get install -y $BUILD_DEPS \
&& pip install --no-cache-dir --default-timeout=100000 -r /app/requirements.txt \
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
&& apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
&& rm -rf /var/lib/apt/lists/*
# Set uWSGI settings
ENV UWSGI_WSGI_FILE=/app/api/api/wsgi.py UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy PYTHONUNBUFFERED=1 UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
ENV PYTHONPATH=$PYTHONPATH:/app/api:/app
ENV DB_PORT=5432 DB_NAME=shittywizard DB_USER=shittywizard DB_HOST=localhost
ADD nginx.conf /etc/nginx/nginx.conf
# Set entrypoint
ADD entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["dumb-init", "--", "/entrypoint.sh"]
ADD app.tar /app/
RUN python /app/api/manage.py collectstatic --noinput
Sure! Check out the GitLab docs on stages and on building Docker images with GitLab CI.
If you have multiple jobs defined within a stage, they will run in parallel. For example, the following pipeline would build the node and base-image artifacts in parallel and then build the final image using both artifacts.
stages:
  - build
  - bundle
build-node:
  stage: build
  script:
    - # steps to build node and push to artifact registry
build-base-image:
  stage: build
  script:
    - # steps to build image and push to artifact registry
bundle-node-in-image:
  stage: bundle
  script:
    - # pull image artifact
    - # download node artifact
    - # build image on top of base image with node artifacts embedded
Note that all the pushing and pulling and starting and stopping might not save you time depending on your image size relative to build time, but this will do what you're asking for.
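As a rough sketch of what the bundle job could build (the registry path and artifact directory here are placeholders, not taken from the question), the final Dockerfile would start from the pre-built base image and copy the node artifact in:
# Dockerfile.bundle (sketch): base image and paths are placeholders
# The FROM image is the one pushed by build-base-image, with the slow apt/pip layers already baked in
FROM registry.example.com/mygroup/myapp/base:latest
WORKDIR /app
# http/ is the artifact produced by the node job, downloaded into the build context beforehand
COPY http/ /app/http/
COPY . /app/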

GitHub Actions Pylint step unable to create directory with test job

I'm trying to finish my first GitHub Actions workflow with CI/CD and a Heroku deploy, and I get this error.
Error image:
This is my public repo.
https://github.com/jovicon/the_empire_strikes_back_challenge
Everything is up to date on the "development" branch.
This is my test job: (full file)
Note: When I comment Pylint step everything works fine.
test:
  name: Test Docker Image
  runs-on: ubuntu-latest
  needs: build
  steps:
    - name: Checkout master
      uses: actions/checkout@v1
    - name: Log in to GitHub Packages
      run: echo ${GITHUB_TOKEN} | docker login -u ${GITHUB_ACTOR} --password-stdin docker.pkg.github.com
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    - name: Pull image
      run: |
        docker pull ${{ env.IMAGE }}:latest || true
    - name: Build image
      run: |
        docker build \
          --cache-from ${{ env.IMAGE }}:latest \
          --tag ${{ env.IMAGE }}:latest \
          --file ./backend/Dockerfile.prod \
          "./backend"
    - name: Run container
      run: |
        docker run \
          -d \
          --name fastapi-tdd \
          -e PORT=8765 \
          -e ENVIRONMENT=dev \
          -e DATABASE_TEST_URL=sqlite://sqlite.db \
          -p 5003:8765 \
          ${{ env.IMAGE }}:latest
    - name: Pytest
      run: docker exec fastapi-tdd python -m pytest .
    - name: Pylint
      run: docker exec fastapi-tdd python -m pylint app/
    - name: Black
      run: docker exec fastapi-tdd python -m black . --check
    - name: isort
      run: docker exec fastapi-tdd /bin/sh -c "python -m isort ./*/*.py --check-only"
Here is my Dockerfile.prod too:
# pull official base image
FROM python:3.8.3-slim-buster
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup --system app && adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV ENVIRONMENT prod
ENV TESTING 0
# install system dependencies
RUN apt-get update \
&& apt-get -y install netcat gcc postgresql \
&& apt-get clean
# install python dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
COPY ./dev-requirements.txt .
RUN pip install -r requirements.txt
RUN pip install -r dev-requirements.txt
# add app
COPY . .
RUN chmod 755 $HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run gunicorn
CMD gunicorn --bind 0.0.0.0:$PORT app.main:app -k uvicorn.workers.UvicornWorker
You're setting the $HOME directory permissions to 755 while still running as the default (root) user, and chown -R app:app $APP_HOME targets only $APP_HOME, which is just a subdirectory of $HOME.
As a consequence, the app user doesn't have write permission on $HOME, so pylint can't create the directory /home/app/.pylint.d.
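A minimal sketch of one possible fix, assuming pylint only needs a writable data directory (either option on its own should be enough):
# Option 1: give the app user ownership of its entire home directory, not just $APP_HOME
RUN chown -R app:app $HOME
# Option 2 (alternative): leave ownership as-is and point pylint's data directory
# at a path the app user can already write to (pylint honours PYLINTHOME)
ENV PYLINTHOME=$APP_HOME/.pylint.d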

How to optimise the Docker build process in a Jenkins Pipeline

I am having trouble optimizing my Docker build step.
Below is my use case:
In my Jenkinsfile I am building 3 Docker images: the Test image from docker/test/Dockerfile, and the DEV and QA images from docker/dev/Dockerfile.
stage('Build') {
    steps {
        sh 'docker build -t Test -f docker/test/Dockerfile .'
        sh 'set +x && eval $(/usr/local/bin/aws-login/aws-login.sh $AWS_ACCOUNT jenkins eu-west-2) \
            && docker build -t DEV --build-arg S3_FILE_NAME=environment.dev.ts \
            --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
            --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
            --build-arg AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
            --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            -f docker/dev/Dockerfile .'
        sh 'set +x && eval $(/usr/local/bin/aws-login/aws-login.sh $AWS_ACCOUNT jenkins eu-west-2) \
            && docker build -t QA --build-arg S3_FILE_NAME=environment.qa.ts \
            --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
            --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
            --build-arg AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
            --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            -f docker/dev/Dockerfile .'
    }
}
stage('Test') {
    steps {
        sh 'docker run --rm TEST npm run test'
    }
}
Below are my two Dockerfiles:
docker/test/Dockerfile:
FROM node:lts
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-key update && apt-get update && apt-get install -y google-chrome-stable
COPY . /usr/src/app
RUN npm install
CMD sh ./docker/test/docker-entrypoint.sh
docker/dev/Dockerfile:
FROM node:lts as dev-builder
ARG CONFIG_S3_BUCKET_URI
ARG S3_FILE_NAME
ARG AWS_SESSION_TOKEN
ARG AWS_DEFAULT_REGION
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_ACCESS_KEY_ID
RUN apt-get update
RUN apt-get install python3-dev -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
RUN pip3 install awscli --upgrade
RUN mkdir /app
WORKDIR /app
COPY . .
RUN aws s3 cp "$CONFIG_S3_BUCKET_URI/$S3_FILE_NAME" src/environments/environment.dev.ts
RUN cat src/environments/environment.dev.ts
RUN npm install
RUN npm run build-dev
FROM nginx:stable
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=dev-builder /app/dist/ /usr/share/nginx/html/
Every time it takes 20-25 minutes to build the images.
Is there any way I can optimize the Dockerfiles for a better build process? Suggestions are welcome. RUN npm run build-dev uses package.json to install the dependencies, which is one of the reasons it installs every dependency on every build.
Thanks
You can use a combination of base images and multi-stage builds to speed up your builds.
Base image with pre-installed packages/dependencies
Stuff like installing python3, pip, google-chrome and awscli need not be done on every build. These layers might get cached if you are building on a single machine, but if you have multiple build machines or you clean the cache, you will be re-building these layers unnecessarily. You can build a base image which already has these packages and use that new image as the base for your app.
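A rough sketch of such a base image (the image name and exact package list are assumptions; keep whatever your builds actually need):
# Dockerfile.base (sketch): build and push this once, then reuse it as the FROM line
FROM node:lts
RUN apt-get update \
    && apt-get install -y python3-dev curl \
    && curl -O https://bootstrap.pypa.io/get-pip.py \
    && python3 get-pip.py \
    && pip3 install --upgrade awscli \
    && rm -rf /var/lib/apt/lists/*
docker/dev/Dockerfile could then start with FROM my-registry/node-awscli:lts as dev-builder instead of repeating those steps, and docker/test/Dockerfile could do the same for its Chrome installation.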
Multi-stage builds
You are copying your source code and then doing npm install. Even if package.json has not changed, that layer will be re-built whenever any other file in the source code has changed.
You can create a multi-stage Dockerfile where you copy only package.json in the first stage and run npm install and other such commands there. That layer will be re-built only if package.json changes.
In your second stage, you can just copy the installed node_modules from the first stage.
FROM node:lts as node_cache
WORKDIR /cache/
COPY package.json .
RUN npm install
FROM NEW_BASE_IMAGE_WITH_CHROME_ETC_DEPENDENCIES
WORKDIR /app
COPY --from=node_cache /cache/node_modules ./node_modules
COPY . .
RUN npm run build-dev
<snip>
Identify any other such optimisations that you can make.
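One concrete example of such an optimisation, sketched here rather than spelled out in the answer above: docker/test/Dockerfile copies the whole source tree before running npm install, so any source change re-installs every dependency. Copying package.json on its own first keeps that layer cached:
# docker/test/Dockerfile (sketch of a cache-friendlier ordering; the Chrome setup lines are unchanged and omitted here)
FROM node:lts
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# Copy only the manifest first so the npm install layer is reused until package.json changes
COPY package.json ./
RUN npm install
# Source changes no longer invalidate the install layer
COPY . /usr/src/app
CMD sh ./docker/test/docker-entrypoint.sh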
