Tests in Travis CI are not found

I am trying to set up Travis CI for my Django/Vue.js project.
I added this .travis.yml file to my root folder:
language: python
python:
  - '3.7.3'
sudo: required
before_install:
  - chmod +x ./pizza/manage.py
before_script:
  - pip install -r requirements.txt
env: DJANGO_SETTINGS_MODULE="pizzago.settings"
services:
  - postgresql
script:
  - ./pizza/manage.py test --keepdb
But when I run the build I get this output:
pip install -r requirements.txt
./pizza/manage.py test --keepdb
System check identified no issues (0 silenced).
Ran 0 tests in 0.000s
OK
The command "./pizza/manage.py test --keepdb" exited with 0.
Done. Your build exited with 0.
Running my tests locally with 'python3 manage.py test --keepdb' works perfectly.
My manage.py is not in my root folder.
Looks like my tests are not found… How can I fix it?

If I understand correctly, your manage.py is not in your root directory but in a /pizza/ directory, so Travis needs to run the test command from inside that directory.
Change your .travis.yml this way:
language: python
python:
  - '3.7.3'
sudo: required
before_install:
  - chmod +x ./pizza/manage.py
before_script:
  - pip install -r requirements.txt
  - cd ./pizza/
env: DJANGO_SETTINGS_MODULE="pizzago.settings"
services:
  - postgresql
script:
  - python manage.py test --keepdb
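For context on why the cd matters: Django's test runner discovers tests by walking the current working directory for files matching test*.py inside importable app packages, so when Travis runs from the repository root it finds nothing to import from ./pizza/. As a quick sanity check, a minimal test like the following should show up in the Travis log once the command runs from inside ./pizza/ (the app name orders is only an illustration, not taken from the question):
# pizza/orders/tests.py  (hypothetical app name)
from django.test import TestCase

class SmokeTest(TestCase):
    def test_discovery_works(self):
        # If this test appears in the "Ran N tests" count, discovery is fixed.
        self.assertTrue(True)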

Related

How to deploy dockerized Django+uWSGI+Nginx app to Google App Engine using CircleCI

I have developed a dockerized Django web app using docker-compose. It runs fine locally.
The problem is that when I define a CI pipeline, specifically with CircleCI (I don't know how it works with any other alternative), to upload it to GCloud App Engine, the workflow completes fine, but visiting the URL returns nothing (500 error).
The code I have and run locally is the following. When I set up the CircleCI pipeline, I have no clue how the app.yaml file interacts with it or what the steps in .circleci/config.yml should be in order to run docker-compose. Any idea or resource I might use?
My Dockerfile:
FROM python:3.9-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc libc-dev linux-headers
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir -p /app
COPY ./app /app
WORKDIR /app
COPY ./scripts /scripts
#this allows for execute permission in all files inside /scripts/
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
CMD ["entrypoint.sh"]
My docker-compose file:
version: '3.9'

services:
  app:
    build:
      context: .
    volumes:
      - static_data:/vol/web
    environment:
      - SECRET_KEY=samplesecret123
      - ALLOWED_HOSTS=127.0.0.1,localhost

  proxy:
    build:
      context: ./proxy
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:8080"
    depends_on:
      - app

volumes:
  static_data:
Nginx Dockerfile:
FROM nginxinc/nginx-unprivileged:1-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./uwsgi_params /etc/nginx/uwsgi_params
USER root
RUN mkdir -p /vol/static
RUN chmod 755 /vol/static
USER nginx
Nginx default.conf
server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass app:8000;
        include /etc/nginx/uwsgi_params;
    }
}
entrypoint.sh
#!/bin/sh
set -e
python manage.py collectstatic --no-input
uwsgi --socket :8000 --master --enable-threads --module app.wsgi
.circleci/config.yml
version: 2.1

workflows:
  version: 2
  build_and_deploy_workflow:
    jobs:
      - build_and_deploy_job:
          filters:
            branches:
              only:
                - master

jobs:
  build_and_deploy_job:
    docker:
      - image: google/cloud-sdk ##based in Debian
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
      - run:
          name: Install requirements.txt
          command: |
            apt install -y python-pip
            python3 -m pip install -r requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
          paths:
            - "venv"
      - run:
          name: Install Docker Compose
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            apt-get install -y sudo
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker
      - run:
          name: 'Collect static'
          command: |
            docker-compose -f docker-compose-deploy.yml up --build
            # docker-compose build
            # docker-compose run --rm app
            # docker-compose run --rm app sh -c "python manage.py collectstatic"
      - run:
          name: 'Deploy to app engine'
          command: |
            echo ${GCLOUD_SERVICE_KEY} > /tmp/sa_key.json | \
            gcloud auth activate-service-account --key-file=/tmp/sa_key.json
            rm /tmp/sa_key.json
            gcloud config set project [projectname]
            gcloud config set compute/region [region]
            gcloud app deploy app.yaml
app.yaml GCloud App Engine:
runtime: python39
#entrypoint: gunicorn -b :$PORT --chdir app/ app.wsgi:application
#entrypoint: gunicorn -b :$PORT app:wsgi
entrypoint: uwsgi --socket :8000 --master --enable-threads --module app.wsgi
handlers:
- url: /static
  static_dir: static/
- url: /.*
  script: auto
Here is a link that could help you with an example of app.yaml file for a Python 3 application:
https://cloud.google.com/appengine/docs/standard/python3/config/appref
Code example:
runtime: python39 # or another supported version

instance_class: F2

env_variables:
  BUCKET_NAME: "example-gcs-bucket"

handlers:
# Matches requests to /images/... to files in static/images/...
- url: /images
  static_dir: static/images

- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto
For Python 3, the app.yaml is required to contain at least a runtime: python39 entry.
For a brief overview, see defining runtime settings:
https://cloud.google.com/appengine/docs/standard/python3/configuring-your-app-with-app-yaml
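Note also that App Engine standard expects the service to listen for HTTP requests on the port given in the $PORT environment variable, so an entrypoint that only opens a uwsgi socket on port 8000 typically won't receive any traffic. A minimal sketch that reuses the gunicorn entrypoint commented out in your app.yaml (this assumes gunicorn is listed in requirements.txt and that app/app/wsgi.py is your WSGI module):
runtime: python39
entrypoint: gunicorn -b :$PORT --chdir app/ app.wsgi:application

handlers:
- url: /static
  static_dir: static/
- url: /.*
  script: auto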
To deploy to Google App Engine with CircleCI, I found this article that may help you with your main issue:
https://medium.com/#1555398769574/deploy-to-google-app-engine-with-circleci-or-github-actions-cb1bab15ca80
Code example:
.circleci/config.yaml
version: 2
jobs:
  build:
    working_directory: ~/workspace
    docker:
      - image: circleci/php:7.2-stretch-node-browsers
    steps:
      - checkout
      - run: |
          cp .env.example .env &&
          php artisan key:generate
      - persist_to_workspace:
          root: .
          paths:
            - .
  deploy:
    working_directory: ~/workspace
    docker:
      - image: google/cloud-sdk
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Service Account Key
          command: echo ${GCLOUD_SERVICE_KEY} > ${HOME}/gcloud-service-key.json
      - run:
          name: Set gcloud command
          command: |
            gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
            gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
      - run:
          name: deploy to Google App Engine
          command: |
            gcloud app deploy app.yaml

workflows:
  version: 2
  build:
    jobs:
      - build
      - deploy:
          context: gcp
          requires:
            - build
          filters:
            branches:
              only: master
Here is additional documentation on how to create a CI/CD pipeline for Google App Engine with CircleCI 2.0:
https://runzhuoli.me/2018/12/21/ci-cd-gcp-gae-circleci.html

How to install docker-compose along with openjdk in gitlab-ci file?

I have a spring boot application I want to test via .gitlab-ci.yml.
It's set up already like this:
image: openjdk:12

# services:
#   - docker:dind

stages:
  - build

before_script:
  # - apk add --update python py-pip python-dev && pip install docker-compose
  # - docker version
  # - docker-compose version
  - chmod +x mvnw

build:
  stage: build
  script:
    # - docker-compose up -d
    - ./mvnw package
  artifacts:
    paths:
      - target/rest-SNAPSHOT.jar
The commented-out portions are from the answer to Run docker-compose build in .gitlab-ci.yml, which I noticed uses a completely different Docker image.
Obviously I need Java installed to run my Spring Boot application, so does that mean Docker is just not an option?

Installing NPM during build fails Docker build

I'm trying to get the GitLab CI runner to build my project off a Docker image and install an NPM package during the build. My .gitlab-ci.yml file was inspired by the topic Gitlab CI with Docker and NPM, where the OP was dealing with an identical problem:
image: docker:stable

services:
  - docker:dind

stages:
  - build

cache:
  paths:
    - node_modules/

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

compile:
  image: node:8
  stage: build
  script:
    - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
    - pip install docker-compose
    - docker-compose up -d
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py seed_db
    - npm install
    - bash test.sh
  after_script:
    - docker-compose down
Sadly, that solution didn't work, but I feel like I'm a little bit closer to the actual solution now. I'm getting two errors during the build:
/bin/bash: line 89: apk: command not found
Running after script...
$ docker-compose down
/bin/bash: line 88: docker-compose: command not found
How can I troubleshoot this?
Edit:
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - test

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

compile:
  stage: build
  script:
    - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
    - pip install docker-compose
    - docker-compose up -d
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py seed_db

testing:
  image: node:alpine
  stage: test
  script:
    - npm install
    - bash test.sh
  after_script:
    - docker-compose down
I moved the tests into a separate testing stage, which I should've done anyway, and I defined the image there to separate it from the build stage. No change: docker-compose can't be found and the bash test also can't be run:
$ bash test.sh
/bin/sh: eval: line 87: bash: not found
Running after script...
$ docker-compose down
/bin/sh: eval: line 84: docker-compose: not found
The node:8 image is not based on Alpine, so as a result you got the error:
apk: command not found
node:<version>
These are the suite code names for releases of Debian and indicate
which release the image is based on. If your image needs to install
any additional packages beyond what comes with the image, you'll
likely want to specify one of these explicitly to minimize breakage
when there are new releases of Debian.
Just replace the image with node:alpine and it should work.
The second error is because docker-compose is not installed.
You can check this answer for more details about installing docker-compose.
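Putting both fixes together, the compile job could look roughly like this (a sketch based on the job from your question: it assumes docker-compose.yml and test.sh sit in the repository root, and it adds bash explicitly because node:alpine ships only with sh; the other package names are carried over from your script and may need adjusting for the Alpine release behind the node:alpine tag):
compile:
  image: node:alpine
  stage: build
  script:
    # apk works here because node:alpine is Alpine-based
    - apk add --no-cache bash py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
    - pip install docker-compose
    - docker-compose up -d
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py seed_db
    - npm install
    - bash test.sh
  after_script:
    - docker-compose down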

"npm run build" in Dockerfile: dist folder is generated but disappears

I have a Dockerfile for a Django and Vue.js app that I use along with Gitlab.
The problem that I'm about to describe only happens when deploying via Gitlab CI and the corresponding .gitlab-ci.yml file. When running the docker-compose up command on my local machine, this doesn't happen.
So I run docker-compose up and all the instructions in the Dockerfile run apparently fine. But when I check the production server, the dist folder (where the bundle.js and bundle.css should be stored) doesn't exist.
The logs that are spit out while running the Dockerfile confirm that the npm install and npm run build commands are run, and they even confirm that the dist/bundle.js and dist/bundle.css files have been generated. But for some reason those files seem to be deleted.
This is my Dockerfile:
FROM python:3.7-alpine
MAINTAINER My Name
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
# make the 'app' folder the current working directory
WORKDIR /app
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY ./app .
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
# copy both 'package.json' and 'package-lock.json' (if available)
COPY app/frontend/package*.json ./frontend/
# Install npm
RUN apk add --update nodejs && apk add --update nodejs-npm
# install project dependencies
WORKDIR /app/frontend
RUN npm install
# build app for production with minification
RUN npm run build
RUN adduser -D user
USER user
CMD ["sh ../scripts/entrypoint.sh"]
This is the .gitlab-ci.yml file:
image: docker:latest

services:
  - docker:dind

before_script:
  - echo "Runnig before_script"
  - sudo apt-get install -y python-pip
  - sudo apt-get install -y nodejs
  - pip install docker-compose

stages:
  - test
  - build
  - deploy

test:
  stage: test
  script:
    - echo "Testing the app"
    - docker-compose run app sh -c "python /app/manage.py test && flake8"

build:
  stage: build
  only:
    - develop
    - production
    - feature/gitlab_ci
  script:
    - echo "Building the app"
    - docker-compose build

deploy:
  stage: deploy
  only:
    - master
    - develop
    - feature/gitlab_ci
  script:
    - echo "Deploying the app"
    - docker-compose up --build -d
This is the content of the docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python /app/manage.py runserver 0.0.0.0:8000"
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgres
- DB_PASS=postgres
depends_on:
- db
db:
image: postgres:10-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
This is the content of the entrypoint.sh file:
#!/bin/bash
(cd .. && ./manage.py collectstatic --noinput)
# Migration files are commited to git. Makemigrations is not needed.
# ./manage.py makemigrations app_name
(cd .. && ./manage.py migrate)
I would like to know why the dist/ folder disappears and how to keep it.
When your docker-compose.yml file says
volumes:
  - ./app:/app
that hides everything that your Dockerfile builds in the /app directory and replaces it with whatever's in your local system. If your host doesn't have a ./app/frontend/dist then your container won't have that path either, regardless of whatever the Dockerfile does.
I would generally recommend just deleting this volumes: block entirely. It introduces an awkward live-development path (where all of your tooling needs to know that the actual service runs in Docker) and simultaneously isn't what you'd run in production (you want the image to be self-contained and not to need to copy the application separately from the image).
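If you do want to keep the bind mount for live development, another common workaround (not from the answer above, just a general Docker pattern) is to add an anonymous volume for the dist directory so the bundle built into the image is not shadowed by the host folder:
services:
  app:
    build:
      context: .
    volumes:
      - ./app:/app
      # anonymous volume: keeps the image's /app/frontend/dist instead of
      # shadowing it with the (possibly empty) host directory
      - /app/frontend/dist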
In your compose file, you set a volume that replaces your container's /app directory with the one from your local machine, even after npm run build:
volumes:
  - ./app:/app
You can either run the build locally or remove the volumes block.
We had a similar issue with a NestJS build. Later we noticed that we had excluded the src folder in the .dockerignore.
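For reference, a .dockerignore for a setup like this would typically exclude dependency and build artifacts but must not exclude the frontend sources that npm run build needs (a generic sketch, not the poster's actual file):
# .dockerignore (illustrative)
**/node_modules
**/.git
# do NOT list the frontend src/ folder here, or npm run build has nothing to compile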
The issue is not with the Dockerfile. It's an issue with your dependencies. Please check the package.json file in the root folder.

GC Cloud Build custom build process with internal repository

I have to configure a custom build process for a GC AppEngine application with GC Cloud Build.
First of all, I have an internal Python repository on a GC ComputeEngine instance. It's accessible only through the internal network, and I use Remote-builder to run the pip install command on the internal GC instance.
After downloading the dependencies from the internal repository, I have to deploy the results to GC AppEngine.
Cloudbuild.yaml:
steps:
# Download dependencies from the internal repository
- name: gcr.io/${ProjectName}/remote-builder
  env:
    - COMMAND=sudo bash workspace/download-dependencies.bash
    - ZONE=us-east1-b
    - INSTANCE_NAME=remote-cloud-build
    - INSTANCE_ARGS=--image-project centos-cloud --image-family centos-7
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/${ProjectName}/app', '.']
- name: gcr.io/cloud-builders/docker
  args: ['push', 'gcr.io/${ProjectName}/app']
- name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/${ProjectName}/${ProjectName}']
images: ['gcr.io/${ProjectName}/${ProjectName}']
app.yaml:
runtime: python
env: flex
entrypoint: python main.py
service: service-name

runtime_config:
  python_version: 3
Dockerfile:
FROM gcr.io/google-appengine/python
WORKDIR /app
COPY . /app
download-dependencies.bash:
#!/usr/bin/env bash
easy_install pip
pip install --upgrade pip
pip install --upgrade setuptools
pip install -r workspace/requirements.txt
After running gcloud builds submit --config cloudbuild.yaml, a new version of the application is deployed on AppEngine, but it doesn't work.
Maybe the issue is the wrong image? As far as I understand, I need to configure the Dockerfile to collect all custom Python dependencies into the image.
Could you please help me with it?
Thanks in advance!
Update
I changed my Dockerfile according to the Google guideline:
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD . /app
CMD main.py
And the new error is: /bin/sh: 1: main.py: not found
If I change the last line to CMD app/main.py, it creates a version but it doesn't work.
Finally, I finished. There were some issues, and I will share the working configs below. Hope it will help someone.
steps:
# Move our code to an instance inside the project to have access to the private repo
- name: gcr.io/${PROJECT_NAME}/remote-builder
  env:
    - COMMAND=sudo bash workspace/download-dependencies.bash
    - ZONE=us-east1-b
    - INSTANCE_NAME=remote-cloud-build
    - INSTANCE_ARGS=--image-project centos-cloud --image-family centos-7
# Build image with downloaded deps
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/${PROJECT_NAME}/${APP_NAME}', '.']
# Push image to project repo
- name: gcr.io/cloud-builders/docker
  args: ['push', 'gcr.io/${PROJECT_NAME}/${APP_NAME}']
# Deploy image to AppEngine
- name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/${PROJECT_NAME}/${APP_NAME}']
images: ['gcr.io/${PROJECT_NAME}/${APP_NAME}']
timeout: '1800s'
download-dependencies.bash:
#!/usr/bin/env bash
easy_install pip
pip install --upgrade pip
pip install --upgrade setuptools
pip install wheel
#Download private deps and save it to volume (share folder between steps)
pip wheel --no-deps -r workspace/private-dependencies.txt -w workspace/lib --no-binary :all:
Dockerfile:
FROM gcr.io/google-appengine/python
COPY . /${APP_NAME}
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN pip install -r /${APP_NAME}/workspace/public-dependencies.txt
#Install private deps from volume
RUN pip install -f /${APP_NAME}/workspace/lib --no-index ${LIBRARY_NAME}
CMD gunicorn -b :$PORT main:app
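One caveat worth adding: ${PROJECT_NAME}, ${APP_NAME} and ${LIBRARY_NAME} are not built-in Cloud Build variables, so in cloudbuild.yaml they would have to be user-defined substitutions, which Cloud Build requires to start with an underscore (e.g. _APP_NAME); variables inside the Dockerfile are not substituted by Cloud Build at all and would need ARG/--build-arg instead. A hedged example of submitting such a build (names are illustrative):
# assumes cloudbuild.yaml references _APP_NAME / _LIBRARY_NAME instead of APP_NAME / LIBRARY_NAME
gcloud builds submit --config cloudbuild.yaml \
  --substitutions=_APP_NAME=myapp,_LIBRARY_NAME=my-private-lib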
