How to deploy a Quasar Vue application on GitLab CI with Docker

I have a basic Quasar page that was created using $ quasar create .
I want to deploy the application on GitLab CI, but the deployment keeps giving me errors. I have managed to fix the build and test errors, but I can't figure out the deployment part of it.
.gitlab-ci.yml
build site:
  image: node:10
  stage: build
  script:
    - npm install -g @quasar/cli
    - npm install --progress=false
    - quasar build
  artifacts:
    expire_in: 1 week
    paths:
      - dist

unit test:
  image: node:10
  stage: test
  script:
    - npm install --progress=false

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" >> ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - rsync -rav --delete dist/ user@server.com:/your/project/path/
The error happens during the deployment phase.
I tried adding rsync -av -e "ssh -vv" --delete ...
This is the error I get:

Try running your rsync with ssh verbose mode active, in order to see more about the error:
rsync -av -e "ssh -vv" --delete ...
Also check the permissions on your ssh files.
For instance:
chmod 700 ~/.ssh
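Putting those two suggestions together, here is a sketch of the deploy job (same image, variable, and placeholder paths as in the question); note the use of '>' rather than '>>' when writing the key, so a retried job does not append a second copy:

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    - mkdir -p ~/.ssh
    # ssh refuses to use keys kept in a group- or world-accessible directory
    - chmod 700 ~/.ssh
    # '>' instead of '>>': a retried job must not append a second key
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    # -e "ssh -vv" keeps the verbose ssh output for debugging
    - rsync -rav -e "ssh -vv" --delete dist/ user@server.com:/your/project/path/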

Related

The gitlab-ci job does not fail but the test does

I need a failed test in my pipeline to fail the job so that I can have control over it. The problem is that the tests run in "docker in docker", so the job doesn't fail: the outer container ran correctly, and the inner test run doesn't surface an error code (even if a test fails).
The script "docker:test" runs my test suite in a container, and my pipeline looks like this:
image: docker:dind # Alpine

stages:
  - install
  - test
  # - build
  - deploy

env:
  stage: install
  script:
    - chmod +x ./setup_env.sh
    - ./setup_env.sh
  artifacts:
    paths:
      - .env
    expire_in: 1 days

tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml

# docker image:
#   stage: build
#   script:
#     - npm run docker:build

remove .env:
  stage: deploy
  script:
    - rm .env

pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r coverage/* .public
    - mv .public public
  artifacts:
    paths:
      - public
  # only:
  #   - main
And my npm script is:
"docker:test": "npm i && tsc && docker build -t extractos-bancarios-test --target test . && docker run -d --name extractos-bancarios-test extractos-bancarios-test && docker logs -f extractos-bancarios-test >> logs.log",
I need to fail the pipeline when a test fails while using docker in docker.
I was able to solve the problem on my own, and I leave it documented so that no one wastes as much time as I did.
For the container inside the first container to fail the job, I needed it to return exit code 1 when there is an error in the report. So I added a conditional with a grep in the script section of my .gitlab-ci.yml:
tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
    - rm junit.xml || true
    - rm -r coverage || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
    - if grep '<failure' junit.xml; then exit 1; else exit 0; fi
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml
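An alternative sketch (my addition, not part of the original answer): because "docker run -d" detaches and "docker logs -f" exits 0 regardless of the tests, you can also propagate the test container's own exit code with "docker wait", assuming the test runner inside the image exits non-zero on failure:

script:
  - apk add --update nodejs npm
  - npm run docker:test
  - docker cp extractos-bancarios-test:/usr/src/coverage .
  - docker cp extractos-bancarios-test:/usr/src/junit.xml .
  # docker wait blocks until the container stops and prints its exit code;
  # exiting with that code fails the job exactly when the tests fail
  - exit $(docker wait extractos-bancarios-test)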

How to deploy dockerized Django+uWSGI+Nginx app to Google App Engine using CircleCI

I have developed a dockerized Django web app using docker-compose. It runs fine on my local machine.
The point is that when I define a CI pipeline, specifically on CircleCI (I don't know how it works with any other alternative), to upload it to GCloud App Engine, the workflow runs fine, but visiting the URL returns nothing (a 500 error).
The code I have, and that I run locally, is the following. When I set up the CircleCI pipeline, I have no clue how the app.yaml file interacts with it, or what the steps in .circleci/config.yml should be in order to run docker-compose. Any idea or resource I might use?
My Dockerfile:
FROM python:3.9-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc libc-dev linux-headers
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir -p /app
COPY ./app /app
WORKDIR /app
COPY ./scripts /scripts
#this allows for execute permission in all files inside /scripts/
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
CMD ["entrypoint.sh"]
My docker-compose file:
version: '3.9'

services:
  app:
    build:
      context: .
    volumes:
      - static_data:/vol/web
    environment:
      - SECRET_KEY=samplesecret123
      - ALLOWED_HOSTS=127.0.0.1,localhost

  proxy:
    build:
      context: ./proxy
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:8080"
    depends_on:
      - app

volumes:
  static_data:
Nginx Dockerfile:
FROM nginxinc/nginx-unprivileged:1-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./uwsgi_params /etc/nginx/uwsgi_params
USER root
RUN mkdir -p /vol/static
RUN chmod 755 /vol/static
USER nginx
Nginx default.conf
server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass app:8000;
        include /etc/nginx/uwsgi_params;
    }
}
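For reference, the uwsgi_params file copied into the proxy image is normally the stock one shipped with nginx; its usual contents are roughly the following (listed for completeness, exact contents may vary by nginx version, so copy yours from the nginx distribution):

uwsgi_param  QUERY_STRING       $query_string;
uwsgi_param  REQUEST_METHOD     $request_method;
uwsgi_param  CONTENT_TYPE       $content_type;
uwsgi_param  CONTENT_LENGTH     $content_length;
uwsgi_param  REQUEST_URI        $request_uri;
uwsgi_param  PATH_INFO          $document_uri;
uwsgi_param  DOCUMENT_ROOT      $document_root;
uwsgi_param  SERVER_PROTOCOL    $server_protocol;
uwsgi_param  REMOTE_ADDR        $remote_addr;
uwsgi_param  REMOTE_PORT        $remote_port;
uwsgi_param  SERVER_PORT        $server_port;
uwsgi_param  SERVER_NAME        $server_name;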
entrypoint.sh
#!/bin/sh
set -e
python manage.py collectstatic --no-input
uwsgi --socket :8000 --master --enable-threads --module app.wsgi
.circleci/config.yml
version: 2.1

workflows:
  version: 2
  build_and_deploy_workflow:
    jobs:
      - build_and_deploy_job:
          filters:
            branches:
              only:
                - master

jobs:
  build_and_deploy_job:
    docker:
      - image: google/cloud-sdk  # based on Debian
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
      - run:
          name: Install requirements.txt
          command: |
            apt install -y python-pip
            python3 -m pip install -r requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
          paths:
            - "venv"
      - run:
          name: Install Docker Compose
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            apt-get install -y sudo
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker
      - run:
          name: 'Collect static'
          command: |
            docker-compose -f docker-compose-deploy.yml up --build
            # docker-compose build
            # docker-compose run --rm app
            # docker-compose run --rm app sh -c "python manage.py collectstatic"
      - run:
          name: 'Deploy to app engine'
          command: |
            echo ${GCLOUD_SERVICE_KEY} > /tmp/sa_key.json
            gcloud auth activate-service-account --key-file=/tmp/sa_key.json
            rm /tmp/sa_key.json
            gcloud config set project [projectname]
            gcloud config set compute/region [region]
            gcloud app deploy app.yaml
app.yaml GCloud App Engine:
runtime: python39
#entrypoint: gunicorn -b :$PORT --chdir app/ app.wsgi:application
#entrypoint: gunicorn -b :$PORT app:wsgi
entrypoint: uwsgi --socket :8000 --master --enable-threads --module app.wsgi

handlers:
  - url: /static
    static_dir: static/
  - url: /.*
    script: auto
Here is a link that could help you with an example of app.yaml file for a Python 3 application:
https://cloud.google.com/appengine/docs/standard/python3/config/appref
Code example:
runtime: python39  # or another supported version

instance_class: F2

env_variables:
  BUCKET_NAME: "example-gcs-bucket"

handlers:
  # Matches requests to /images/... to files in static/images/...
  - url: /images
    static_dir: static/images
  - url: /.*
    secure: always
    redirect_http_response_code: 301
    script: auto
For Python 3, the app.yaml is required to contain at least a runtime: python39 entry.
For a brief overview, see defining runtime settings:
https://cloud.google.com/appengine/docs/standard/python3/configuring-your-app-with-app-yaml
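One detail worth checking, beyond the configuration reference (my observation, not from the linked docs): App Engine sends plain HTTP requests to your app on the port in $PORT, while "uwsgi --socket" speaks the binary uwsgi protocol, which on its own would produce 500s when there is no Nginx in front. A sketch of an entrypoint that serves HTTP directly:

entrypoint: uwsgi --http-socket :$PORT --master --enable-threads --module app.wsgi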
To deploy to Google App Engine with CircleCI, I found this article that may help you with your main issue:
https://medium.com/@1555398769574/deploy-to-google-app-engine-with-circleci-or-github-actions-cb1bab15ca80
Code example:
.circleci/config.yaml
version: 2

jobs:
  build:
    working_directory: ~/workspace
    docker:
      - image: circleci/php:7.2-stretch-node-browsers
    steps:
      - checkout
      - run: |
          cp .env.example .env &&
          php artisan key:generate
      - persist_to_workspace:
          root: .
          paths:
            - .

  deploy:
    working_directory: ~/workspace
    docker:
      - image: google/cloud-sdk
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Service Account Key
          command: echo ${GCLOUD_SERVICE_KEY} > ${HOME}/gcloud-service-key.json
      - run:
          name: Set gcloud command
          command: |
            gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
            gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
      - run:
          name: deploy to Google App Engine
          command: |
            gcloud app deploy app.yaml

workflows:
  version: 2
  build:
    jobs:
      - build
      - deploy:
          context: gcp
          requires:
            - build
          filters:
            branches:
              only: master
Adding additional documentation on how to create a CI/CD pipeline for Google App Engine with CircleCI 2.0:
https://runzhuoli.me/2018/12/21/ci-cd-gcp-gae-circleci.html
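Also note (a hedged addition of mine, since your setup is Docker-based): the standard python39 runtime builds the app from source and ignores your Dockerfile and docker-compose entirely. If you want App Engine to run the container you built, that is the flexible environment with a custom runtime, whose app.yaml starts roughly like this (the Dockerfile must sit next to app.yaml):

runtime: custom
env: flex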

Is it possible to copy a file from docker container to gitlab repository by gitlab-ci.yml

I created a docker image with automated tests that generates an XML report file. After the test run, this file is generated. I want to copy this file to the repository, because the pipeline needs it to show the test results:
My gitlab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxxx" -p "yyyy" docker.io
  script:
    - docker run --name authContainer "xxxx/dockerImage:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts/test-result.xml .
  artifacts:
    when: always
    paths:
      - test-result.xml
    reports:
      junit:
        - test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt install unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
Your .gitlab-ci.yml file looks fine. You can have the XML report as an artifact and GitLab will populate the results from it. Below is the script that I've used, and I could see the results with it.
script:
  - pytest -o junit_family=xunit2 --junitxml=report.xml --cov=. --cov-report html
  - coverage report
coverage: '/^TOTAL.+?(\d+\%)$/'
artifacts:
  paths:
    - coverage
  reports:
    junit: report.xml
  when: always
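One caveat worth adding (my note, not part of the original answer): after_script runs even when the script section fails, so in the question's job the report copy still happens when the container's tests exit non-zero. Sketched on the question's own job:

script:
  # no -d flag: docker run blocks and returns the container's exit code,
  # so a failing test suite fails the job
  - docker run --name authContainer "xxxx/dockerImage:0.0.1"
after_script:
  # runs even after a failed script, so the JUnit report is always collected
  - docker cp authContainer:/artifacts/test-result.xml .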

Why does GitLab CI sometimes create the project directory with a root owner (though I have specified another user), and how can I solve it?

I set up GitLab CI/CD for my test project. I use docker containers with postgres and go, and sometimes I need to change the SQL init script (which creates the tables in the database), so I use these commands:
docker-compose stop
docker system prune
docker system prune --volumes
sudo rm -rf pay
Then on my PC I push the changes to GitLab, and it runs the pipelines.
But sometimes after step 5, GitLab CI throws a permission denied error in the deploy step (see below), because the pay directory has been created with root as its owner.
Here is my project structure:
Here is my .gitlab-ci.yml file:
stages:
  - tools
  - build
  - docker
  - deploy

variables:
  GO_PACKAGE: gitlab.com/$CI_PROJECT_PATH
  REGISTRY_BASE_URL: registry.gitlab.com/$CI_PROJECT_PATH

# ######################################################################################################################
# Base
# ######################################################################################################################

# Base job for docker build and push in private gitlab registry.
.docker:
  image: docker:latest
  services:
    - docker:dind
  stage: docker
  variables:
    IMAGE_SUBNAME: ''
    DOCKERFILE: Dockerfile
    BUILD_CONTEXT: .
    BUILD_ARGS: ''
  script:
    - adduser --disabled-password --gecos "" builder
    - su -l builder
    - su builder -c "whoami"
    - echo "$CI_JOB_TOKEN" | docker login -u gitlab-ci-token --password-stdin registry.gitlab.com
    - IMAGE_TAG=$CI_COMMIT_REF_SLUG
    - IMAGE=${REGISTRY_BASE_URL}/${IMAGE_SUBNAME}:${IMAGE_TAG}
    - docker build -f ${DOCKERFILE} ${BUILD_ARGS} -t ${IMAGE} ${BUILD_CONTEXT}
    - docker push ${IMAGE}
  tags:
    - docker

# ######################################################################################################################
# Stage 0. Tools
# ######################################################################################################################

# Job for building base golang image.
tools:golang:
  extends: .docker
  stage: tools
  variables:
    IMAGE_SUBNAME: 'golang'
    DOCKERFILE: ./docker/golang/Dockerfile
    BUILD_CONTEXT: ./docker/golang/
  only:
    refs:
      - dev
    # changes:
    #   - docker/golang/**/*

# ######################################################################################################################
# Stage 1. Build
# ######################################################################################################################

# Job for building golang backend in single image.
build:backend:
  image: ${REGISTRY_BASE_URL}/golang
  stage: build
  # TODO: enable cache
  # cache:
  #   paths:
  #     - ${CI_PROJECT_DIR}/backend/vendor
  before_script:
    - cd backend/
  script:
    # Install dependencies
    - go mod download
    - mkdir bin/
    # Build binaries
    - CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -ldflags "-linkmode external -extldflags '-static' -s -w" -o bin/backend ./cmd/main.go
    - cp -r /usr/share/zoneinfo .
    - cp -r /etc/ssl/certs/ca-certificates.crt .
    - cp -r /etc/passwd .
  artifacts:
    expire_in: 30min
    paths:
      - backend/bin/*
      - backend/zoneinfo/**/*
      - backend/ca-certificates.crt
      - backend/passwd
  only:
    refs:
      - dev
    # changes:
    #   - backend/**/*
    #   - docker/golang/**/*

# ######################################################################################################################
# Stage 2. Docker
# ######################################################################################################################

# Job for building backend (written on golang). Only change backend folder.
docker:backend:
  extends: .docker
  variables:
    IMAGE_SUBNAME: 'backend'
    DOCKERFILE: ./backend/Dockerfile
    BUILD_CONTEXT: ./backend/
  only:
    refs:
      - dev
    # changes:
    #   - docker/golang/**/*
    #   - backend/**/*

# ######################################################################################################################
# Stage 3. Deploy on Server
# ######################################################################################################################

deploy:dev:
  stage: deploy
  variables:
    SERVER_HOST: 'here is my server ip'
    SERVER_USER: 'here is my server user (it is not root, but in root group)'
  before_script:
    ## Install ssh-agent if not already installed, it is required by Docker.
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    ## Run ssh-agent
    - eval $(ssh-agent -s)
    ## Add the SSH key stored in SSH_PRIVATE_KEY_DEV variable to the agent store
    - echo "$SSH_PRIVATE_KEY_DEV" | tr -d '\r' | ssh-add - > /dev/null
    ## Create the SSH directory and give it the right permissions
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    ## Enable host key checking (to prevent man-in-the-middle attacks)
    - ssh-keyscan $SERVER_HOST >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    ## Git settings
    - git config --global user.email ""
    - git config --global user.name ""
    ## Install rsync if not already installed to upload files to server.
    - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
  script:
    - rsync -r deploy/dev/pay $SERVER_USER@$SERVER_HOST:/home/$SERVER_USER/dev/backend
    - ssh -tt $SERVER_USER@$SERVER_HOST 'cd dev/backend/pay && ./up.sh'
  only:
    refs:
      - dev
I have already tried turning off the change triggers and clearing the GitLab container registry, but it didn't help.
I have also found an interesting thing: when the tools pipeline starts (it is the first pipeline), my server immediately creates the pay folder, owned by root and with empty sub-folders.
What am I doing wrong? Thank you.
Hey there, GitLab team member here: I am looking into your post to help troubleshoot your issue. Linked here is a doc on what to do when you encounter Permissions Problems with GitLab+Docker.
It's likely that you have tried some of these steps, so please let me know! I'll keep researching while I wait to hear back from you. Thanks!
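In the meantime, a hedged guess at the cause, given that the pay folder appears the moment the first pipeline runs: if up.sh's docker-compose file bind-mounts host paths under pay/ that do not exist yet, dockerd creates them as root, and the next rsync then fails with permission denied. Pre-creating the tree as the deploy user would avoid that (a sketch against the deploy:dev job above; the paths are the question's own):

script:
  # pre-create the target tree as $SERVER_USER so dockerd never creates it as root
  - ssh $SERVER_USER@$SERVER_HOST 'mkdir -p ~/dev/backend/pay'
  - rsync -r deploy/dev/pay $SERVER_USER@$SERVER_HOST:/home/$SERVER_USER/dev/backend
  - ssh -tt $SERVER_USER@$SERVER_HOST 'cd dev/backend/pay && ./up.sh'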

How to use GitLab CI with a custom Docker image?

I made a simple Dockerfile:
FROM openjdk
EXPOSE 8080
and built an image using:
docker build -t test .
I installed and configured a docker GitLab CI runner and now I would like to use this runner with my test image. So I wrote the following .gitlab-ci.yml file:
image: test

run:
  script:
    - echo "Hello world!"
But to my disappointment, the local test image that I can use on my machine was not found.
Running with gitlab-ci-multi-runner 9.4.2 (6d06f2e)
on martin-docker-rawip (70747a61)
Using Docker executor with image test ...
Using docker image sha256:fa91c6ea64ce4b9b44672c6e56eed8312d0ec2afc80730cbee7754bc448ea22b for predefined container...
Pulling docker image test ...
ERROR: Job failed: Error response from daemon: repository test not found: does not exist or no pull access
I do not even know what is going on anymore. How can I make the runner aware of this image that I made?
I had the same question, and I found the answer here: https://forum.gitlab.com/t/runner-cant-use-local-docker-images/5507/6
Add the following to /etc/gitlab-runner/config.toml:
[runners.docker]
  # more config for the runner here...
  pull_policy = "if-not-present"
More info here: https://docs.gitlab.com/runner/executors/docker.html#how-pull-policies-work
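For context, a sketch of where that line sits in a complete runner entry (the surrounding keys are illustrative, not from the original answer; the runner name is taken from the job log above):

concurrent = 1

[[runners]]
  name = "martin-docker-rawip"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = false
    # use the local image if it exists instead of always pulling
    pull_policy = "if-not-present"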
My Dockerfile
FROM node:latest
RUN apt-get update -y && apt-get install openssh-client rsync -y
On the runner I build the image:
docker build -t node_rsync .
The .gitlab-ci.yml in the project using this runner.
image: node_rsync

job:
  stage: deploy
  before_script:
    # now in the custom docker image
    #- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - mkdir -p ~/.ssh
    - eval $(ssh-agent -s)
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - ssh-add <(tr '#' '\n' <<< "$STAGING_PRIVATE_KEY" | base64 --decode)
    # now in the custom docker image
    #- apt-get install -y rsync
  script:
    - rsync -rav -e ssh --exclude='.git/' --exclude='.gitlab-ci.yml' --delete-excluded ./ $STAGING_USER@$STAGING_SERVER:./deploy/
  only:
    - master
  tags:
    - ssh
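Reading the ssh-add line above: it assumes $STAGING_PRIVATE_KEY holds the base64-encoded key with its newlines swapped for '#' characters, so that the value fits on a single line in the CI variable. You would generate that value locally with the inverse transformation, e.g.:

base64 ~/.ssh/id_rsa | tr '\n' '#'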
