I have to configure a custom build process for a Google Cloud App Engine application with Google Cloud Build.
First of all, I have an internal Python repository on a Google Compute Engine instance. It's accessible only through the internal network, and I use remote-builder to run the pip install command on the internal instance.
After downloading the dependencies from the internal repository, I have to deploy the result to App Engine.
cloudbuild.yaml:
steps:
# Download dependencies from the internal repository
- name: gcr.io/${ProjectName}/remote-builder
  env:
  - COMMAND=sudo bash workspace/download-dependencies.bash
  - ZONE=us-east1-b
  - INSTANCE_NAME=remote-cloud-build
  - INSTANCE_ARGS=--image-project centos-cloud --image-family centos-7
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/${ProjectName}/app', '.']
- name: gcr.io/cloud-builders/docker
  args: ['push', 'gcr.io/${ProjectName}/app']
- name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/${ProjectName}/${ProjectName}']
images: ['gcr.io/${ProjectName}/${ProjectName}']
app.yaml:
runtime: python
env: flex
entrypoint: python main.py
service: service-name
runtime_config:
  python_version: 3
Dockerfile:
FROM gcr.io/google-appengine/python
WORKDIR /app
COPY . /app
download-dependencies.bash:
#!/usr/bin/env bash
easy_install pip
pip install --upgrade pip
pip install --upgrade setuptools
pip install -r workspace/requirements.txt
After running gcloud builds submit --config cloudbuild.yaml,
a new version of the application is deployed to App Engine, but it doesn't work.
Maybe the issue is the wrong image? As far as I understand, I need to configure the Dockerfile to collect all the custom Python dependencies into the image.
Could you please help me with it?
Thanks in advance!
Update
I changed my Dockerfile according to the Google guideline:
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD . /app
CMD main.py
And the new error is: /bin/sh: 1: main.py: not found
If I change the last line to CMD app/main.py, it creates a version, but the app still doesn't work.
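Looking back, the shell-form CMD main.py asks /bin/sh to execute main.py as a standalone command, which is neither on the PATH nor marked executable, hence the "not found" error. A minimal fix (assuming main.py sits in /app) would have been an exec-form CMD that invokes the interpreter explicitly:
CMD ["python", "/app/main.py"]
In the end I went with gunicorn instead, as the configs below show.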
Finally, I got it working. There were some issues, and I will share the right configs below. Hope it helps someone.
steps:
# Move our code to an instance inside the project to have access to the private repo
- name: gcr.io/${PROJECT_NAME}/remote-builder
  env:
  - COMMAND=sudo bash workspace/download-dependencies.bash
  - ZONE=us-east1-b
  - INSTANCE_NAME=remote-cloud-build
  - INSTANCE_ARGS=--image-project centos-cloud --image-family centos-7
# Build the image with the downloaded deps
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/${PROJECT_NAME}/${APP_NAME}', '.']
# Push the image to the project repo
- name: gcr.io/cloud-builders/docker
  args: ['push', 'gcr.io/${PROJECT_NAME}/${APP_NAME}']
# Deploy the image to App Engine
- name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/${PROJECT_NAME}/${APP_NAME}']
images: ['gcr.io/${PROJECT_NAME}/${APP_NAME}']
timeout: '1800s'
download-dependencies.bash:
#!/usr/bin/env bash
easy_install pip
pip install --upgrade pip
pip install --upgrade setuptools
pip install wheel
# Download private deps and save them to the volume (a folder shared between steps)
pip wheel --no-deps -r workspace/private-dependencies.txt -w workspace/lib --no-binary :all:
Dockerfile:
FROM gcr.io/google-appengine/python
COPY . /${APP_NAME}
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN pip install -r /${APP_NAME}/workspace/public-dependencies.txt
# Install private deps from the shared volume
RUN pip install -f /${APP_NAME}/workspace/lib --no-index ${LIBRARY_NAME}
CMD gunicorn -b :$PORT main:app
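One caveat about the Dockerfile above: ${APP_NAME} and ${LIBRARY_NAME} only expand inside a Dockerfile if they are declared as build arguments and passed in from the build step; otherwise they resolve to empty strings. A sketch of the missing plumbing (the values my-app and my-private-lib are placeholders):
# In the Dockerfile, right after FROM:
ARG APP_NAME
ARG LIBRARY_NAME

# And in the docker build step of cloudbuild.yaml, pass the values:
args: ['build', '-t', 'gcr.io/${PROJECT_NAME}/${APP_NAME}',
       '--build-arg', 'APP_NAME=my-app',
       '--build-arg', 'LIBRARY_NAME=my-private-lib', '.']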
Related
I'm new to Docker and I want to set up Docker Compose for my Django app. In the backend of my app I also have Golang packages, which I run from Django with the subprocess library.
But when I want to install a package using go install github.com/x/y@latest and then copy its binary to the project directory, it gives me the error: package github.com/x/y@latest: cannot use path@version syntax in GOPATH mode
I searched a lot on the internet but didn't find a solution to my problem. Could you please tell me where I'm wrong?
Here is my Dockerfile:
FROM golang:1.18.1-bullseye as go-build
# Install go package
RUN go install github.com/hakluke/hakrawler@latest \
&& cp $GOPATH/bin/hakrawler /usr/local/bin/
# Install main image for backend
FROM python:3.8.11-bullseye
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install Dist packages
RUN apt-get update \
&& apt-get -y install --no-install-recommends software-properties-common libpq5 python3-dev musl-dev git netcat-traditional golang \
&& rm -rf /var/lib/apt/lists/
# Set work directory
WORKDIR /usr/src/redteam_toolkit/
# Install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Copy project, and then the go package
COPY . .
COPY --from=go-build /usr/local/bin/hakrawler /usr/src/redteam_toolkit/toolkit/scripts/webapp/
docker-compose.yml:
version: '3.3'

services:
  webapp:
    build: .
    command: python manage.py runserver 0.0.0.0:4334
    container_name: toolkit_webapp
    volumes:
      - .:/usr/src/redteam_toolkit/
    ports:
      - 4334:4334
    env_file:
      - ./.env
    depends_on:
      - db

  db:
    image: postgres:13.4-bullseye
    container_name: database
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=redteam_toolkit_db

volumes:
  postgres_data:
The get.py file inside the /usr/src/redteam_toolkit/toolkit/scripts/webapp/ directory just runs the Go package and lists the files in that dir:
import os
import subprocess

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
print(f"Current path is: {BASE_DIR}")

def go(target_url):
    run_go_package = subprocess.getoutput(
        f"echo {target_url} | {BASE_DIR}/webapp/hakrawler -t 15 -u"
    )
    list_files = subprocess.getoutput(f"ls {BASE_DIR}/webapp/")
    print(run_go_package)
    print(list_files)

go("https://example.org")
and then I just run:
$ docker-compose up -d --build
$ docker-compose exec webapp python toolkit/scripts/webapp/get.py
The output is:
Current path is: /usr/src/redteam_toolkit/toolkit/scripts
/bin/sh: 1: /usr/src/redteam_toolkit/toolkit/scripts/webap/hakrawler: not found
__init__.py
__pycache__
scr.py
gather.py
This looks like a really good candidate for a multi-stage build:
FROM golang:1.18.0 as go-build
# Install packages
RUN go install github.com/x/y@latest \
    && cp $GOPATH/bin/package /usr/local/bin/
FROM python:3.8.11-bullseye as release
...
COPY --from=go-build /usr/local/bin/package /usr/src/toolkit/toolkit/scripts/webapp/
...
Your compose file also needs to be updated: it is masking the entire /usr/src/redteam_toolkit folder with the volume mount. Delete that volume mount to see the content of the image.
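For example, the service definition without the mount could look like this (everything else unchanged):
services:
  webapp:
    build: .
    command: python manage.py runserver 0.0.0.0:4334
    container_name: toolkit_webapp
    ports:
      - 4334:4334
    env_file:
      - ./.env
    depends_on:
      - db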
GOPATH mode does not work with Go modules. In your Dockerfile, add:
RUN unset GOPATH
and use RUN go get <package_repository> instead.
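Note that an unset in one RUN does not carry over to later instructions, since each RUN starts a fresh shell. A persistent way to force module mode is an ENV instruction; a sketch against the build stage from the question:
FROM golang:1.18.1-bullseye as go-build
# Module mode is the default outside of GOPATH in recent Go versions; this makes it explicit
ENV GO111MODULE=on
RUN go install github.com/hakluke/hakrawler@latest \
    && cp $GOPATH/bin/hakrawler /usr/local/bin/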
How can I install a private repo inside a Python Docker image? I tried many alternatives, but all were unsuccessful. It seems I can't set SSH credentials inside a Python-based image.
My Docker image:
FROM python:3.8
ENV PATH="/scripts:${PATH}"
# Django files
COPY ./requirements.txt /requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
the requirements file has:
git+ssh://git@github.com/my_repo_name.git@dev
And the build is triggered from a Docker Compose file:
....
django_service:
  build:
    context: ..
    dockerfile: Dockerfile
  volumes:
    - static_data:/vol/web
  environment:
    - SECRET_KEY=${SECRET_KEY}
  depends_on:
....
Perhaps you could use HTTPS instead of SSH:
git clone https://${GH_TOKEN}@github.com/username/my_repo_name.git#dev
To set the token inside the Dockerfile, use: ARG GH_TOKEN
To keep the token outside the Dockerfile, you can build your Docker image passing the arg like this: --build-arg GH_TOKEN=MY_TOKEN
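A sketch of how those pieces could fit together (the repository path and branch are placeholders):
FROM python:3.8
ARG GH_TOKEN
# Install the private package over HTTPS using the build-time token
RUN pip install git+https://${GH_TOKEN}@github.com/username/my_repo_name.git@dev
built with docker build --build-arg GH_TOKEN=MY_TOKEN . Keep in mind that a build arg used this way is recorded in the image history, so it is best suited to short-lived tokens.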
I have developed a dockerized Django web app using docker-compose. It runs fine locally.
The point is that when I define a CI pipeline, specifically CircleCI (I don't know how it works with any other alternative), to upload it to Google Cloud App Engine, the workflow runs fine, but visiting the URL returns nothing (a 500 error).
The code I have and run locally is the following. When I set up the CircleCI pipeline, I have no clue how the app.yaml file interacts with it, or what the steps in .circleci/config.yml should be in order to run docker-compose. Any idea or resource I might use?
My Dockerfile:
FROM python:3.9-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc libc-dev linux-headers
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir -p /app
COPY ./app /app
WORKDIR /app
COPY ./scripts /scripts
#this allows for execute permission in all files inside /scripts/
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
CMD ["entrypoint.sh"]
My docker-compose file:
version: '3.9'

services:
  app:
    build:
      context: .
    volumes:
      - static_data:/vol/web
    environment:
      - SECRET_KEY=samplesecret123
      - ALLOWED_HOSTS=127.0.0.1,localhost

  proxy:
    build:
      context: ./proxy
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:8080"
    depends_on:
      - app

volumes:
  static_data:
Nginx Dockerfile:
FROM nginxinc/nginx-unprivileged:1-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./uwsgi_params /etc/nginx/uwsgi_params
USER root
RUN mkdir -p /vol/static
RUN chmod 755 /vol/static
USER nginx
Nginx default.conf
server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass app:8000;
        include /etc/nginx/uwsgi_params;
    }
}
entrypoint.sh
#!/bin/sh
set -e
python manage.py collectstatic --no-input
uwsgi --socket :8000 --master --enable-threads --module app.wsgi
.circleci/config.yml
version: 2.1

workflows:
  version: 2
  build_and_deploy_workflow:
    jobs:
      - build_and_deploy_job:
          filters:
            branches:
              only:
                - master

jobs:
  build_and_deploy_job:
    docker:
      - image: google/cloud-sdk  # based on Debian
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
      - run:
          name: Install requirements.txt
          command: |
            apt install -y python-pip
            python3 -m pip install -r requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
          paths:
            - "venv"
      - run:
          name: Install Docker Compose
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            apt-get install -y sudo
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker
      - run:
          name: 'Collect static'
          command: |
            docker-compose -f docker-compose-deploy.yml up --build
            # docker-compose build
            # docker-compose run --rm app
            # docker-compose run --rm app sh -c "python manage.py collectstatic"
      - run:
          name: 'Deploy to app engine'
          command: |
            echo ${GCLOUD_SERVICE_KEY} > /tmp/sa_key.json | \
            gcloud auth activate-service-account --key-file=/tmp/sa_key.json
            rm /tmp/sa_key.json
            gcloud config set project [projectname]
            gcloud config set compute/region [region]
            gcloud app deploy app.yaml
app.yaml (GCloud App Engine):
runtime: python39
#entrypoint: gunicorn -b :$PORT --chdir app/ app.wsgi:application
#entrypoint: gunicorn -b :$PORT app:wsgi
entrypoint: uwsgi --socket :8000 --master --enable-threads --module app.wsgi
handlers:
- url: /static
  static_dir: static/
- url: /.*
  script: auto
Here is a link that could help you, with an example of an app.yaml file for a Python 3 application:
https://cloud.google.com/appengine/docs/standard/python3/config/appref
Code example:
runtime: python39  # or another supported version

instance_class: F2

env_variables:
  BUCKET_NAME: "example-gcs-bucket"

handlers:
# Matches requests to /images/... to files in static/images/...
- url: /images
  static_dir: static/images

- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto
For Python 3, the app.yaml is required to contain at least a runtime: python39 entry.
For a brief overview, see defining runtime settings:
https://cloud.google.com/appengine/docs/standard/python3/configuring-your-app-with-app-yaml
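Also, one thing worth checking in that app.yaml: on App Engine the app has to answer plain HTTP on the port given in the $PORT environment variable, whereas uwsgi --socket :8000 speaks the binary uwsgi protocol on a fixed port (that only works behind the Nginx container, which App Engine replaces). A sketch of an entrypoint that serves HTTP directly, assuming the WSGI module stays app.wsgi:
entrypoint: uwsgi --http :$PORT --master --enable-threads --module app.wsgi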
To deploy to Google App Engine with CircleCI, I found this article that may help you with your main issue:
https://medium.com/#1555398769574/deploy-to-google-app-engine-with-circleci-or-github-actions-cb1bab15ca80
Code example:
.circleci/config.yaml
version: 2
jobs:
  build:
    working_directory: ~/workspace
    docker:
      - image: circleci/php:7.2-stretch-node-browsers
    steps:
      - checkout
      - run: |
          cp .env.example .env &&
          php artisan key:generate
      - persist_to_workspace:
          root: .
          paths:
            - .
  deploy:
    working_directory: ~/workspace
    docker:
      - image: google/cloud-sdk
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Service Account Key
          command: echo ${GCLOUD_SERVICE_KEY} > ${HOME}/gcloud-service-key.json
      - run:
          name: Set gcloud command
          command: |
            gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
            gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
      - run:
          name: deploy to Google App Engine
          command: |
            gcloud app deploy app.yaml
workflows:
  version: 2
  build:
    jobs:
      - build
      - deploy:
          context: gcp
          requires:
            - build
          filters:
            branches:
              only: master
Adding additional documentation on how to create a CI/CD pipeline for Google App Engine with CircleCI 2.0:
https://runzhuoli.me/2018/12/21/ci-cd-gcp-gae-circleci.html
I am trying to implement Travis CI in my Django/Vue.js project.
I added this .travis.yml file to my root folder:
language: python
python:
  - '3.7.3'
sudo: required
before_install:
  - chmod +x ./pizza/manage.py
before_script:
  - pip install -r requirements.txt
env: DJANGO_SETTINGS_MODULE="pizzago.settings"
services:
  - postgresql
script:
  - ./pizza/manage.py test --keepdb
But as I run the build I get this output:
pip install -r requirements.txt
./pizza/manage.py test --keepdb
System check identified no issues (0 silenced).
Ran 0 tests in 0.000s
OK
The command "./pizza/manage.py test --keepdb" exited with 0.
Done. Your build exited with 0.
Running my tests locally with 'python3 manage.py test --keepdb' works perfectly.
My manage.py is not in my root folder.
It looks like my tests are not found… How can I fix this?
If I get it right, your manage.py is not in your root directory but in the /pizza/ directory. Travis should run the script from inside this directory.
Change your .travis.yml this way:
language: python
python:
  - '3.7.3'
sudo: required
before_install:
  - chmod +x ./pizza/manage.py
before_script:
  - pip install -r requirements.txt
  - cd ./pizza/
env: DJANGO_SETTINGS_MODULE="pizzago.settings"
services:
  - postgresql
script:
  - python manage.py test --keepdb
I have a Dockerfile for a Django and Vue.js app that I use along with GitLab.
The problem that I'm about to describe only happens when deploying via GitLab CI and the corresponding .gitlab-ci.yml file. When running the docker-compose up command on my local machine, this doesn't happen.
So I run docker-compose up and all the instructions in the Dockerfile apparently run fine. But when I check the production server, the dist folder (where bundle.js and bundle.css should be stored) doesn't exist.
The logs that are spit out while running the Dockerfile confirm that the npm install and npm run build commands are run, and even that the dist/bundle.js and dist/bundle.css files have been generated. But for some reason they seem to be deleted.
This is my Dockerfile:
FROM python:3.7-alpine
MAINTAINER My Name
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
# make the 'app' folder the current working directory
WORKDIR /app
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY ./app .
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
# copy both 'package.json' and 'package-lock.json' (if available)
COPY app/frontend/package*.json ./frontend/
# Install npm
RUN apk add --update nodejs && apk add --update nodejs-npm
# install project dependencies
WORKDIR /app/frontend
RUN npm install
# build app for production with minification
RUN npm run build
RUN adduser -D user
USER user
CMD ["sh ../scripts/entrypoint.sh"]
This is the .gitlab-ci.yml file:
image: docker:latest

services:
  - docker:dind

before_script:
  - echo "Running before_script"
  - sudo apt-get install -y python-pip
  - sudo apt-get install -y nodejs
  - pip install docker-compose

stages:
  - test
  - build
  - deploy

test:
  stage: test
  script:
    - echo "Testing the app"
    - docker-compose run app sh -c "python /app/manage.py test && flake8"

build:
  stage: build
  only:
    - develop
    - production
    - feature/gitlab_ci
  script:
    - echo "Building the app"
    - docker-compose build

deploy:
  stage: deploy
  only:
    - master
    - develop
    - feature/gitlab_ci
  script:
    - echo "Deploying the app"
    - docker-compose up --build -d
This is the content of the docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python /app/manage.py runserver 0.0.0.0:8000"
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgres
- DB_PASS=postgres
depends_on:
- db
db:
image: postgres:10-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
This is the content of the entrypoint.sh file:
#!/bin/bash
(cd .. && ./manage.py collectstatic --noinput)
# Migration files are committed to git. Makemigrations is not needed.
# ./manage.py makemigrations app_name
(cd .. && ./manage.py migrate)
I would like to know why the dist/ folder disappears and how to keep it.
When your docker-compose.yml file says
volumes:
  - ./app:/app
that hides everything that your Dockerfile builds in the /app directory and replaces it with whatever's on your local system. If your host doesn't have an ./app/frontend/dist, then your container won't have that path either, regardless of what the Dockerfile does.
I would generally recommend just deleting this volumes: block entirely. It introduces an awkward live-development path (where all of your tooling needs to know that the actual service runs in Docker) and simultaneously isn't what you'd run in production (you want the image to be self-contained and not to need to copy the application separately from the image).
In your compose file, you set a volume which is going to replace the /app directory in your container with your local one, even after npm run build:
volumes:
  - ./app:/app
You can either run the build locally or remove the volumes block; one way to drop it only in CI is sketched below.
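If you want to keep the live-reload mount for local work but drop it in CI, one option is to move it into a docker-compose.override.yml that exists only on your machine (docker-compose merges the override file automatically, and CI would use the base file alone); a sketch:
# docker-compose.override.yml (local development only)
version: "3"
services:
  app:
    volumes:
      - ./app:/app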
We had a similar issue with a NestJS build. Later we noticed that we had excluded the src folder in the .dockerignore.
The issue is not with the Dockerfile. It's an issue with your dependencies; please check the package.json file in the root folder.