Flask app deployed to GCP Cloud Run through Bitbucket - Docker

I am struggling to get my Dockerised Flask app onto a GCP Cloud Run instance through Bitbucket Pipelines.
Here is my bitbucket-pipelines.yml:
image: python:3.9

pipelines:
  default:
    - parallel:
        - step:
            name: Build and test
            caches:
              - pip
            script:
              - pip install -r requirements.txt
              - pytest
        - step:
            name: Linter
            script:
              - pip install flake8
              - flake8 . --extend-exclude=dist,build --show-source --statistics
  branches:
    develop:
      - parallel:
          - step:
              name: Build and test
              caches:
                - pip
              script:
                - pip install -r requirements.txt
                - pytest
          - step:
              name: Linter
              script:
                - pip install flake8
                - flake8 . --extend-exclude=dist,build --show-source --statistics
      - step:
          name: Deploy to Development
          deployment: Development
          image: google/cloud-sdk:latest
          caches:
            - docker
          script:
            - echo ${KEY_FILE_AUTH} | base64 --decode --ignore-garbage > /tmp/gcloud-api.json
            - gcloud auth activate-service-account --key-file /tmp/gcloud-api.json
            - gcloud config set project PROJECT
            - gcloud builds submit --tag eu.gcr.io/PROJECT/APP
            - gcloud beta run deploy APP --image eu.gcr.io/PROJECT/APP --platform managed --region europe-west2 --allow-unauthenticated

options:
  docker: true
My Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.9
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
and the error from Bitbucket Pipelines:
Uploading tarball of [.] to [gs://PROJECT_cloudbuild/source/1645481495.615291-f6c287df7e6d4fd8991bed1fe6a5b9ca.tgz]
ERROR: (gcloud.builds.submit) PERMISSION_DENIED: The caller does not have permission
I don't know which permission I am missing - if anyone can give any pointers on this, that would be awesome!
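A sketch of the usual fix, assuming the service account behind KEY_FILE_AUTH simply lacks build/deploy roles: gcloud builds submit needs the Cloud Build Editor role plus access to the PROJECT_cloudbuild staging bucket, and gcloud beta run deploy needs Cloud Run Admin and permission to act as the service's runtime identity. The project ID and service account email below are placeholders, not values from the question:
# Placeholders: substitute your project ID and the service account used by the pipeline.
PROJECT=my-project-id
SA=pipeline-deployer@${PROJECT}.iam.gserviceaccount.com
# Allow submitting builds to Cloud Build and uploading the source tarball.
gcloud projects add-iam-policy-binding "$PROJECT" --member="serviceAccount:$SA" --role="roles/cloudbuild.builds.editor"
gcloud projects add-iam-policy-binding "$PROJECT" --member="serviceAccount:$SA" --role="roles/storage.admin"
# Allow deploying to Cloud Run and acting as the service's runtime service account.
gcloud projects add-iam-policy-binding "$PROJECT" --member="serviceAccount:$SA" --role="roles/run.admin"
gcloud projects add-iam-policy-binding "$PROJECT" --member="serviceAccount:$SA" --role="roles/iam.serviceAccountUser"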

Related

GitLab CI: Split Docker Build Into Multiple Stages

I have a React/Django app that's Dockerised. There are two stages to the GitLab CI process: Build_Node and Build_Image. Build_Node just builds the React app and stores it as an artifact; Build_Image runs docker build to build the actual Docker image, and depends on the node step because it copies the built files into the image.
However, the image build takes a long time when package dependencies have changed (apt or pip), because it has to reinstall everything.
Is there a way to split the Docker build job into multiple parts, so that I can install the apt and pip packages in the Dockerfile while build_node is still running, then finish the Docker build once that stage is done?
gitlab-ci.yml:
stages:
  - Build Node Frontend
  - Build Docker Image

services:
  - docker:18.03-dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""

build_node:
  stage: Build Node Frontend
  only:
    - staging
    - production
  image: node:14.8.0
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - frontend/node_modules
  script:
    - cd frontend
    - yarn install --network-timeout 100000
    - CI=false yarn build
    - mv build ../http

build_image:
  stage: Build Docker Image
  only:
    - staging
    - production
  image: docker
  script:
    #- sleep 10000
    - tar -cvf app.tar api/ discordbot/ helpers/ http/
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    #- docker pull $CI_REGISTRY_IMAGE:latest
    #- docker build --network=host --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker build --network=host --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Dockerfile:
FROM python:3.7-slim
# Add user
ARG APP_USER=abc
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
WORKDIR /app
ENV PYTHONUNBUFFERED=1
EXPOSE 80
EXPOSE 8080
ADD requirements.txt /app/
RUN set -ex \
    && BUILD_DEPS=" \
        gcc \
    " \
    && RUN_DEPS=" \
        ffmpeg \
        postgresql-client \
        nginx \
        dumb-init \
    " \
    && apt-get update && apt-get install -y $BUILD_DEPS \
    && pip install --no-cache-dir --default-timeout=100000 -r /app/requirements.txt \
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
    && apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
    && rm -rf /var/lib/apt/lists/*
# Set uWSGI settings
ENV UWSGI_WSGI_FILE=/app/api/api/wsgi.py UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy PYTHONUNBUFFERED=1 UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
ENV PYTHONPATH=$PYTHONPATH:/app/api:/app
ENV DB_PORT=5432 DB_NAME=shittywizard DB_USER=shittywizard DB_HOST=localhost
ADD nginx.conf /etc/nginx/nginx.conf
# Set entrypoint
ADD entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["dumb-init", "--", "/entrypoint.sh"]
ADD app.tar /app/
RUN python /app/api/manage.py collectstatic --noinput
Sure! Check out the GitLab docs on stages and on building Docker images with GitLab CI.
If you have multiple jobs defined within a stage, they will run in parallel. For example, the following pipeline would build the node and image artifacts in parallel and then build the final image using both artifacts.
stages:
  - build
  - bundle

build-node:
  stage: build
  script:
    - # steps to build node and push to artifact registry

build-base-image:
  stage: build
  script:
    - # steps to build image and push to artifact registry

bundle-node-in-image:
  stage: bundle
  script:
    - # pull image artifact
    - # download node artifact
    - # build image on top of base image with node artifacts embedded
Note that all the pushing and pulling and starting and stopping might not save you time depending on your image size relative to build time, but this will do what you're asking for.
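As a rough, hedged sketch of that bundle job: the job names, the base tag, and the assumption that the final Dockerfile starts FROM the pre-built base image and copies in the http/ artifact are illustrative, not taken from the question:
bundle-node-in-image:
  stage: bundle
  image: docker
  services:
    - docker:18.03-dind
  needs:
    - build-node         # downloads the http/ artifact produced earlier
    - build-base-image
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:base
    - docker build --cache-from $CI_REGISTRY_IMAGE:base --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA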

GCP Cloud Run error on deploying my Docker image to Google Container Registry / Cloud Run

I'm quite new to Docker and GCP and am trying to find a working way to deploy my Laravel app on GCP.
I already set up CI and selected "cloudbuild.yaml" as the build configuration. I followed innumerable tutorials and read the Google Container docs, so I created a "cloudbuild.yaml" which includes arguments to use my docker-compose.yaml to create the stack of my app (app code, database, server).
During the Google Cloud Build process I get:
Step #0: Creating workspace_app_1 ...
Step #0: Creating workspace_web_1 ...
Step #0: Creating workspace_db_1 ...
Step #0: Creating workspace_app_1 ... done
Step #0: Creating workspace_web_1 ... done
Step #0: Creating workspace_db_1 ... done
Finished Step #0
Starting Step #1
Step #1: Already have image (with digest): gcr.io/cloud-builders/docker
Step #1: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #1
ERROR
ERROR: build step 1 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
docker-compose.yml:
version: "3.8"
volumes:
php-fpm-socket:
db-store:
services:
app:
build:
context: .
dockerfile: ./infra/docker/php/Dockerfile
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
environment:
- DB_CONNECTION=mysql
- DB_HOST=db
- DB_PORT=3306
- DB_DATABASE=${DB_NAME:-laravel_local}
- DB_USERNAME=${DB_USER:-phper}
- DB_PASSWORD=${DB_PASS:-secret}
web:
build:
context: .
dockerfile: ./infra/docker/nginx/Dockerfile
ports:
- ${WEB_PORT:-80}:80
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
db:
build:
context: .
dockerfile: ./infra/docker/mysql/Dockerfile
ports:
- ${DB_PORT:-3306}:3306
volumes:
- db-store:/var/lib/mysql
environment:
- MYSQL_DATABASE=${DB_NAME:-laravel_local}
- MYSQL_USER=${DB_USER:-phper}
- MYSQL_PASSWORD=${DB_PASS:-secret}
- MYSQL_ROOT_PASSWORD=${DB_PASS:-secret}
cloudbuild.yaml:
steps:
  # running docker-compose
  - name: 'docker/compose:1.28.4'
    args: ['up', '-d']
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/MY_PROJECT_ID/laravel-docker-1']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'laravel-docker-1', '--image', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '--region', 'europe-west3', '--platform', 'managed']
images:
  - gcr.io/MY_PROJECT_ID/laravel-docker-1
What is wrong in this configuration?
I solved this issue and deployed a running Laravel 8 application to Google Cloud with the following Dockerfile. PS: Any optimisations regarding the FROM and RUN steps are appreciated:
#
# PHP Dependencies
#
FROM composer:2.0 as vendor
WORKDIR /app
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --no-dev \
    --prefer-dist
COPY . .
RUN composer dump-autoload
#
# Frontend
#
FROM node:14.9 as frontend
WORKDIR /app
COPY artisan package.json webpack.mix.js package-lock.json ./
RUN npm audit fix
RUN npm cache clean --force
RUN npm cache verify
RUN npm install -f
COPY resources/js ./resources/js
COPY resources/sass ./resources/sass
RUN npm run development
#
# Application
#
FROM php:7.4-fpm
WORKDIR /app
# Install PHP dependencies
RUN apt-get update -y && apt-get install -y build-essential libxml2-dev libonig-dev
RUN docker-php-ext-install pdo pdo_mysql opcache tokenizer xml ctype json bcmath pcntl
# Install Linux and Python dependencies
RUN apt-get install -y curl wget git file ruby-full locales vim
# Run definitions to make Brew work
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
RUN useradd -m -s /bin/zsh linuxbrew && \
    usermod -aG sudo linuxbrew && \
    mkdir -p /home/linuxbrew/.linuxbrew && \
    chown -R linuxbrew: /home/linuxbrew/.linuxbrew
USER linuxbrew
RUN /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
USER root
#RUN chown -R $CONTAINER_USER: /home/linuxbrew/.linuxbrew
ENV PATH "$PATH:/home/linuxbrew/.linuxbrew/bin"
#Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt install -y ./google-chrome-stable_current_amd64.deb
# Install Python modules (dependencies) of scraper
RUN brew install python3
RUN pip3 install selenium
RUN pip3 install bs4
RUN pip3 install pandas
# Copy Frontend build
COPY --from=frontend app/node_modules/ ./node_modules/
COPY --from=frontend app/public/js/ ./public/js/
COPY --from=frontend app/public/css/ ./public/css/
COPY --from=frontend app/public/mix-manifest.json ./public/mix-manifest.json
# Copy Composer dependencies
COPY --from=vendor app/vendor/ ./vendor/
COPY . .
RUN cp /app/drivers/chromedriver /usr/local/bin
#COPY .env.prod ./.env
COPY .env.local-docker ./.env
# Copy the scripts to docker-entrypoint-initdb.d which will be executed on container startup
COPY ./docker/ /docker-entrypoint-initdb.d/
COPY ./docker/init_db.sql .
RUN php artisan config:cache
RUN php artisan route:cache
CMD php artisan serve --host=0.0.0.0 --port=8080
EXPOSE 8080
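One hedged side note on the original error itself: Step #1 runs docker build with the repository root as context and therefore expects a Dockerfile at /workspace/Dockerfile. If the Dockerfile lives under infra/docker/ (as in the docker-compose file above), one option is to point the build step at it explicitly with -f, for example:
  # Build the container image from a Dockerfile that is not at the repo root
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '-f', 'infra/docker/php/Dockerfile', '.']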

How to get a report file from Docker and add it to the GitLab repository

I created a Docker image which runs automated tests, and I run it from a GitLab CI script. Everything works except the report file: I cannot get it out of the container and into the repository artifacts. The docker cp command is not working. My GitLab script and Dockerfile:
GitLab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    # See https://github.com/docker-library/docker/pull/166
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run --name authContainer "rrr/image:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts $CI_PROJECT_DIR/artifacts/
  artifacts:
    when: always
    paths:
      - $CI_PROJECT_DIR/artifacts/test-result.xml
    reports:
      junit:
        - $CI_PROJECT_DIR/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /Spinelle.AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt install unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel Spinelle.AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
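A hedged guess at the docker cp failure: docker cp refuses a destination path that ends with / but does not exist yet, and $CI_PROJECT_DIR/artifacts/ is never created before the copy. Creating the directory first, copying the single report file, and keeping the artifact paths relative to the project directory is worth trying; a sketch of the adjusted job sections:
  after_script:
    - mkdir -p "$CI_PROJECT_DIR/artifacts"
    - docker cp authContainer:/artifacts/test-result.xml "$CI_PROJECT_DIR/artifacts/"
  artifacts:
    when: always
    paths:
      - artifacts/test-result.xml
    reports:
      junit:
        - artifacts/test-result.xml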

Using node dependency caching from Bitbucket when docker build runs the Dockerfile

I'm trying to use custom dependency caches from Bitbucket Pipelines together with a Dockerfile.
This is my bitbucket-pipelines.yml:
hml:
  - step:
      caches:
        - node-cache
      name: Tests and build
      services:
        - docker
      volumes:
        - "$BITBUCKET_CLONE_DIR/node_modules:/root/node_modules"
        - "$BITBUCKET_CLONE_DIR:/code"
      script:
        # - apt update
        # - apt-get install -y curl
        # - curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
        # - chmod +x /usr/local/bin/docker-compose
        # - echo 'DIALOGFLOW_PROJECT_ID=a' > .env
        # - docker-compose up -d
        # - docker exec api npm run test
        - docker image inspect $(docker image ls -aq) --format {{.Size}} | awk '{totalSizeInBytes += $0} END {print totalSizeInBytes}'
        - echo $BITBUCKET_CLONE_DIR/node_modules
        - docker build -t cloudia/api .
        - docker save --output api.docker cloudia/api
      artifacts:
        - api.docker
  - step:
      name: Deploy
      services:
        - docker
      deployment: staging
      script:
        - apt-get update
        - apt-get install -y curl unzip python jq
        - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
        - mv awscli-bundle.zip /tmp/awscli-bundle.zip
        - unzip /tmp/awscli-bundle.zip -d /tmp
        - /tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
        - docker load --input ./api.docker
        - chmod +x ./deploy_hml.sh
        - ./deploy_hml.sh

definitions:
  caches:
    node-cache: node_modules
  services:
    docker:
      memory: 2048
And here's my Dockerfile:
FROM node:10.15.3
WORKDIR /code
# Using some comments for tests
COPY [ "package*.json", "/code/" ]
RUN npm install --silent
COPY . /code
RUN npm run build
EXPOSE 5000
CMD npm start
My pipeline runs without problems, but the cache is not working.
Message received when I ran the pipeline:
Cache "node-cache": Downloading
Cache "node-cache": Not found
How can I set up the pipeline when docker build runs a Dockerfile?
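A hedged sketch of one way around this: because npm install runs inside docker build, nothing ever writes to node_modules on the pipeline host, so the custom node-cache cache never gets populated. Caching Docker layers instead, via Bitbucket's predefined docker cache, lets the COPY package*.json / RUN npm install layers be reused between runs (assuming the Dockerfile stays as shown above):
hml:
  - step:
      name: Tests and build
      services:
        - docker
      caches:
        - docker           # predefined cache: reuses built image layers between runs
      script:
        - docker build -t cloudia/api .
        - docker save --output api.docker cloudia/api
      artifacts:
        - api.docker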

Docker build behaves differently in GitHub Actions

When I build my Dockerfile locally and push it, my application runs correctly. However, when I build through GitHub Actions I get an error that 'flask' is not installed.
It seems that the pip install step does nothing in GitHub Actions - it just shows:
Step 8/13 : RUN pip install --trusted-host pypi.python.org -r /app/requirements.txt
---> Running in 6b0816c1bdc8
Removing intermediate container 6b0816c1bdc8
However, on my local machine I get the full pip install output.
Is there something I am missing with GitHub Actions?
Dockerfile:
FROM python:3.8-alpine
WORKDIR /app
ARG DB_PASSWORD
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
ADD ./requirements.txt /app
ADD ./src /app
RUN cat /app/requirements.txt
RUN pip install -r /app/requirements.txt
ENV DEBUG=false
ENV FLASK_DEBUG=false
ENV TESTING=false
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
Action Step:
- name: Build docker image and push to ECR
  run: /bin/bash $GITHUB_WORKSPACE/scripts/build_and_push.sh
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.aws_access_key_id }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.aws_secret_access_key }}
    AWS_DEFAULT_REGION: "eu-west-1"
Build Script:
pipenv run pip freeze > requirements.txt
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin {{ ECR Address}}
docker build -t {{ image name }} .
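A hedged hunch, given that the pip layer finishes instantly in the log: on a fresh Actions runner there is no pre-existing pipenv virtualenv, so pipenv run pip freeze freezes an empty environment and writes an (almost) empty requirements.txt, which makes the pip install step a no-op. Exporting the pinned dependencies from the lockfile instead avoids that; a sketch of the adjusted build script (the registry and image placeholders are left as in the question):
# Export pinned dependencies from Pipfile.lock instead of freezing an empty env.
# (`pipenv requirements` on recent pipenv; older releases used `pipenv lock -r`.)
pipenv requirements > requirements.txt
cat requirements.txt   # sanity check: Flask should be listed here
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin {{ ECR Address}}
docker build -t {{ image name }} .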
