I'm learning about .gitlab-ci.yml and started reading about environments, which makes me wonder: is it possible to access a folder on the host machine and perform a git pull in the deploy stage?
image: php:5.6

# The folders we should cache
cache:
  paths:
    - vendor/

before_script:
  # Install git, the php image doesn't have it installed
  - apt-get update -yqq
  - apt-get install git mysql-client -yqq
  # Install mysql driver
  - docker-php-ext-install pdo_mysql
  # Install composer
  - curl -sS https://getcomposer.org/installer | php
  # Install all project dependencies
  - php composer.phar install

services:
  - mysql

variables:
  MYSQL_DATABASE: test
  MYSQL_ROOT_PASSWORD: root

test:
  stage: test
  script:
    - echo "CREATE DATABASE IF NOT EXISTS test;" | mysql --user=root --password="${MYSQL_ROOT_PASSWORD}" --host=mysql
    - mysql --user=root --password="${MYSQL_ROOT_PASSWORD}" --host=mysql test < tests/assets/test.sql
    - php composer.phar tests

deploy_staging:
  stage: deploy
  script:
    - echo "Deploying to staging server..."
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - master
Basically, all my script does is import the database file and run the tests against the PDO service. Just a random test to learn about the CI.
Now instead of:
- echo "Deploying to staging server..."
I was wondering how I could achieve something like:
- cd /path/on/host/machine/to/repository
- git pull
If not, is there an alternative to accomplish what I want to do?
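One common alternative (a sketch, not from this thread: the deployer user, host and SSH_PRIVATE_KEY variable are placeholders you would have to supply) is to let the deploy job SSH into the host and run git pull there, instead of the runner touching the host's filesystem directly:

deploy_staging:
  stage: deploy
  before_script:
    # Assumption: a private key with access to the host is stored in a CI/CD variable named SSH_PRIVATE_KEY
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - ssh-keyscan staging.example.com >> ~/.ssh/known_hosts   # placeholder host
  script:
    - ssh deployer@staging.example.com 'cd /path/on/host/machine/to/repository && git pull'
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - master

Another option along the same lines is to register a shell-executor runner on the host itself, so the deploy job already runs on that machine.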
I have a Spring Boot application I want to test via .gitlab-ci.yml.
It's already set up like this:
image: openjdk:12

# services:
#   - docker:dind

stages:
  - build

before_script:
  # - apk add --update python py-pip python-dev && pip install docker-compose
  # - docker version
  # - docker-compose version
  - chmod +x mvnw

build:
  stage: build
  script:
    # - docker-compose up -d
    - ./mvnw package
  artifacts:
    paths:
      - target/rest-SNAPSHOT.jar
The commented-out portions are from the answer to Run docker-compose build in .gitlab-ci.yml, which I noticed uses a completely different docker image.
Obviously I need Java installed to run my Spring Boot application, so does that mean Docker is just not an option?
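It doesn't have to be all or nothing. If docker-compose is only there to bring up backing services (say, a database), one workaround is to keep openjdk:12 for the Maven build and let GitLab's services: provide those containers instead. A sketch, assuming a Postgres dependency; the image tag and credentials are placeholders:

image: openjdk:12

stages:
  - build

build:
  stage: build
  services:
    - postgres:11              # placeholder for whatever docker-compose.yml was starting
  variables:
    POSTGRES_DB: test          # assumed credentials; the app's test config would
    POSTGRES_USER: test        # need to point at host "postgres" with these values
    POSTGRES_PASSWORD: test
  script:
    - chmod +x mvnw
    - ./mvnw package
  artifacts:
    paths:
      - target/rest-SNAPSHOT.jar

If you really need the docker CLI and docker-compose themselves, that usually means either a custom image containing both Java and the docker client, or a separate job based on the docker image with the docker:dind service.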
I set up GitLab CI/CD for my test project. I use docker containers with postgres and go, and sometimes I need to change the sql init script (which creates the tables in the database), so I run these steps:
1. docker-compose stop
2. docker system prune
3. docker system prune --volumes
4. sudo rm -rf pay
5. then on my PC I push the changes to GitLab and it runs the pipelines
But sometimes after step 5, GitLab CI throws a permission denied error in the deploy step (see below), because it creates the pay directory with root as the owner.
Here is my project structure:
Here is my .gitlab-ci.yml file:
stages:
  - tools
  - build
  - docker
  - deploy

variables:
  GO_PACKAGE: gitlab.com/$CI_PROJECT_PATH
  REGISTRY_BASE_URL: registry.gitlab.com/$CI_PROJECT_PATH

# ######################################################################################################################
# Base
# ######################################################################################################################

# Base job for docker build and push to the private gitlab registry.
.docker:
  image: docker:latest
  services:
    - docker:dind
  stage: docker
  variables:
    IMAGE_SUBNAME: ''
    DOCKERFILE: Dockerfile
    BUILD_CONTEXT: .
    BUILD_ARGS: ''
  script:
    - adduser --disabled-password --gecos "" builder
    - su -l builder
    - su builder -c "whoami"
    - echo "$CI_JOB_TOKEN" | docker login -u gitlab-ci-token --password-stdin registry.gitlab.com
    - IMAGE_TAG=$CI_COMMIT_REF_SLUG
    - IMAGE=${REGISTRY_BASE_URL}/${IMAGE_SUBNAME}:${IMAGE_TAG}
    - docker build -f ${DOCKERFILE} ${BUILD_ARGS} -t ${IMAGE} ${BUILD_CONTEXT}
    - docker push ${IMAGE}
  tags:
    - docker

# ######################################################################################################################
# Stage 0. Tools
#
# ######################################################################################################################

# Job for building the base golang image.
tools:golang:
  extends: .docker
  stage: tools
  variables:
    IMAGE_SUBNAME: 'golang'
    DOCKERFILE: ./docker/golang/Dockerfile
    BUILD_CONTEXT: ./docker/golang/
  only:
    refs:
      - dev
    # changes:
    #   - docker/golang/**/*

# ######################################################################################################################
# Stage 1. Build
#
# ######################################################################################################################

# Job for building the golang backend into a single image.
build:backend:
  image: ${REGISTRY_BASE_URL}/golang
  stage: build
  # TODO: enable cache
  # cache:
  #   paths:
  #     - ${CI_PROJECT_DIR}/backend/vendor
  before_script:
    - cd backend/
  script:
    # Install dependencies
    - go mod download
    - mkdir bin/
    # Build binaries
    - CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -ldflags "-linkmode external -extldflags '-static' -s -w" -o bin/backend ./cmd/main.go
    - cp -r /usr/share/zoneinfo .
    - cp -r /etc/ssl/certs/ca-certificates.crt .
    - cp -r /etc/passwd .
  artifacts:
    expire_in: 30min
    paths:
      - backend/bin/*
      - backend/zoneinfo/**/*
      - backend/ca-certificates.crt
      - backend/passwd
  only:
    refs:
      - dev
    # changes:
    #   - backend/**/*
    #   - docker/golang/**/*

# ######################################################################################################################
# Stage 2. Docker
#
# ######################################################################################################################

# Job for building the backend image (written in golang). Triggered only by changes to the backend folder.
docker:backend:
  extends: .docker
  variables:
    IMAGE_SUBNAME: 'backend'
    DOCKERFILE: ./backend/Dockerfile
    BUILD_CONTEXT: ./backend/
  only:
    refs:
      - dev
    # changes:
    #   - docker/golang/**/*
    #   - backend/**/*

# ######################################################################################################################
# Stage 3. Deploy on Server
#
# ######################################################################################################################

deploy:dev:
  stage: deploy
  variables:
    SERVER_HOST: 'here is my server ip'
    SERVER_USER: 'here is my server user (it is not root, but in root group)'
  before_script:
    ## Install ssh-agent if not already installed, it is required by Docker.
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    ## Run ssh-agent
    - eval $(ssh-agent -s)
    ## Add the SSH key stored in the SSH_PRIVATE_KEY_DEV variable to the agent store
    - echo "$SSH_PRIVATE_KEY_DEV" | tr -d '\r' | ssh-add - > /dev/null
    ## Create the SSH directory and give it the right permissions
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    ## Enable host key checking (to prevent man-in-the-middle attacks)
    - ssh-keyscan $SERVER_HOST >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    ## Git settings
    - git config --global user.email ""
    - git config --global user.name ""
    ## Install rsync if not already installed to upload files to the server.
    - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
  script:
    - rsync -r deploy/dev/pay $SERVER_USER@$SERVER_HOST:/home/$SERVER_USER/dev/backend
    - ssh -tt $SERVER_USER@$SERVER_HOST 'cd dev/backend/pay && ./up.sh'
  only:
    refs:
      - dev
I have already tried turning off the change triggers and clearing the GitLab container registry, but it didn't help.
I have also found an interesting thing: when the tools pipeline starts (it is the first pipeline), my server immediately creates the pay folder, owned by root and with empty sub-folders.
What am I doing wrong? Thank you.
Hey there—GitLab team member here: I am looking into your post to help troubleshoot your issue. Linked here is a doc on what to do when you encounter Permissions Problems with GitLab+Docker.
It's likely that you have tried some of these steps, so please let me know! I'll keep researching while I wait to hear back from you. Thanks!
I have the GitLab repository set up with the frontend and backend folders inside it. Basically my folder structure is as below:
--repo
  - frontend folder
  - backend folder
  - gitlab-ci.yml
According to the docs, the gitlab-ci.yml file is placed in the root folder, as shown in the image.
I am getting an error while running the pipeline: the "npm install" command does not get executed; instead it errors out with "no such file or directory". The package.json file is placed inside the backend folder.
I need to change the directory before running npm install, and also for the deploy.
My gitlab-ci.yml file is as below:
# Node docker image on which this would be run
image: node:8.10.0

cache:
  paths:
    - node_modules/

stages:
  - test
  - deploy_production

# Job 1:
Test:
  stage: test
  script:
    - npm install

# Job 2:
# Deploy to staging
Production:
  image: ruby:latest
  only:
    - master
  stage: deploy_production
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=XXXXXXX --api-key=XXXXXXXXXXXXXXXXXXXXXXXXXX
Any help would be really appreciated! Thanks
npm install needs to run in a folder containing a package.json file. I suspect this file might be present in your subfolders (frontend and/or backend).
You should add
before_script:
  - cd backend # or frontend
to your Test job.
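For example (a sketch, assuming package.json lives in backend/):

Test:
  stage: test
  before_script:
    - cd backend   # assumption: this is where package.json lives
  script:
    - npm install

Note that the cache path would then normally be backend/node_modules/ rather than node_modules/ at the repository root.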
I am using a GitLab server and have two gitlab-runners (one on my local machine and one on a VServer); both work perfectly with echo and simple stuff like building an Ubuntu server with MySQL and PHP:
stages:
  - dbserver
  - deploy

build:
  stage: dbserver
  image: ubuntu:16.04
  services:
    - mysql:5.7
    - php:7.0
  variables:
    MYSQL_DATABASE: test
    MYSQL_ROOT_PASSWORD: test2
  script:
    - apt-get update -q && apt-get install -qqy --no-install-recommends mysql-client
    - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < test.sql
I want to import a DB now, but I do not get the idea or the technique behind it. How do I import a .sql file lying on my local PC or server? Do I need to create a Dockerfile myself, or can I do that just with the .gitlab-ci.yml file?
You can use scp to copy the .sql file to the runner.
You may need to add commands to install openssh-client, e.g.:
script:
  - apt-get update -y && apt-get install openssh-client -y
and then just add the scp line before invoking mysql, e.g.:
- scp user@server:/path/to/file.sql /tmp/temp.sql
- mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < /tmp/temp.sql
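For scp to run non-interactively inside the job you also need an SSH key available; a sketch along the lines of the ssh-agent setup used elsewhere in this thread (SSH_PRIVATE_KEY, user, server and the file path are placeholders):

before_script:
  - apt-get update -y && apt-get install -y openssh-client
  # Assumption: the private key is stored in a CI/CD variable named SSH_PRIVATE_KEY
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh && chmod 700 ~/.ssh
  - ssh-keyscan server >> ~/.ssh/known_hosts
script:
  - scp user@server:/path/to/file.sql /tmp/temp.sql
  - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < /tmp/temp.sql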
I found a solution which binds a directory from the gitlab-runner machine to the actual container I am using:
sudo nano /etc/gitlab-runner/config.toml
There you change the volumes line to something like this:
volumes = ["/home/ubuntu/test:/cache"]
/home/ubuntu/test is the directory on the runner machine and /cache is the one inside the container.
Before you do so, I recommend stopping the runner and then starting it again.
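In context, that line sits in the [runners.docker] section of config.toml, roughly like this (the other keys stay as they already are for your runner):

[[runners]]
  executor = "docker"
  [runners.docker]
    volumes = ["/home/ubuntu/test:/cache"]

A .sql file placed in /home/ubuntu/test on the runner machine then shows up under /cache inside the job container, so the mysql import line can read it from there.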
I develop a PHP Symfony project and I use GitLab.
I want to run the unit tests on GitLab; for that I use a .gitlab-ci.yml file:
image: php:5.6

# Select what we should cache
cache:
  paths:
    - vendor/

before_script:
  # Install git, the php image doesn't have it installed
  - apt-get update -yqq
  - apt-get install git -yqq
  # Install mysql driver
  - docker-php-ext-install pdo_mysql
  # Install composer
  - curl -sS https://getcomposer.org/installer | php
  # Install all project dependencies
  - php composer.phar install

services:
  - mysql

variables:
  # Configure mysql service (https://hub.docker.com/_/mysql/)
  MYSQL_DATABASE: hello_world_test
  MYSQL_ROOT_PASSWORD: mysql

# We test PHP 5.6 with MySQL
test:mysql:
  script:
    - phpunit --configuration phpunit_mysql.xml --coverage-text -c app/
It currently doesn't work because my hostname isn't resolved in the docker container.
I found the solution here: How can you make the docker container use the host machine's /etc/hosts file?
My question is: where do I write the --net=host option?
Thanks.
You need to use the network_mode parameter of the docker executor, by editing the config.toml file as (somewhat poorly) described in the gitlab-runner advanced configuration docs.
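Roughly, that means adding it to the [runners.docker] section (a sketch; the other keys stay as they are in your existing config.toml):

[[runners]]
  executor = "docker"
  [runners.docker]
    network_mode = "host"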
You can also set it when you register the runner:
gitlab-runner register --docker-network-mode 'host'
I don't believe you can set it directly from the .gitlab-ci.yml file.