I have successfully installed GitLab CI and linked it with my GitLab instance. I have also configured shared runners using Docker with a ruby:2.2 image and a MySQL service.
Here is what I executed to configure a runner, following https://about.gitlab.com/2015/04/17/unofficial-gitlab-ci-runner/:
$ gitlab-ci-multi-runner register \
--non-interactive \
--url "https://my.gitlab.ip/" \
--registration-token "REGISTRATION_TOKEN" \
--description "ruby-mysql" \
--executor "docker" \
--docker-image ruby:2.2 --docker-mysql latest
I have a sample Ruby / Rails application and for some reason the runner doesn't run the build. Here is my .gitlab-ci.yml:
image: ruby:2.2

services:
  - mysql:latest

before_script:
  - ruby -v
  - gem install bundler
  - cp config/database.yml.example config/database.yml
  - cp config/secrets.yml.example config/secrets.yml
  - bundle install

spec:
  script:
    - bundle exec rspec
  tags:
    - ruby-mysql
Try removing the first line, image: ruby:2.2, of your .gitlab-ci.yml.
I had a similar issue where the CI reported a success, yet didn't do any work.
I used the lint tool provided at http://my.domain/lint
It flagged "image" and "stage/stages" as invalid keywords.
That's why I think removing the first line will help you.
I think the problem is that the Community Edition doesn't recognize these keywords yet.
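For reference, here is the OP's file with only that line removed (untested; whether image is recognized depends on the GitLab CI version):

services:
  - mysql:latest

before_script:
  - ruby -v
  - gem install bundler
  - cp config/database.yml.example config/database.yml
  - cp config/secrets.yml.example config/secrets.yml
  - bundle install

spec:
  script:
    - bundle exec rspec
  tags:
    - ruby-mysql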
I'm using AWS ECR to host a private Docker image, and I would like to use it in GitLab CI.
According to the documentation, I need to set up docker-credential-ecr-login to fetch the private image, but I have no idea how to do that before anything else. Here is my .gitlab-ci.yml file:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Thank you.
I confirm the feature at stake is not yet available in GitLab CI; however, I've recently found it is possible to implement a generic workaround to run a dedicated CI script within a container taken from a private Docker image.
The template file .gitlab-ci.yml below is adapted from the OP's example, using the Docker-in-Docker approach I suggested in this other SO answer, itself inspired by the GitLab CI doc dealing with dind:
stages:
  - test

variables:
  IMAGE: "0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest"
  REGION: "ap-northeast-1"

tests:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    # GIT_STRATEGY: none # uncomment if "git clone" is unneeded for this job
  before_script:
    - ': before_script'
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $(aws ecr get-login --no-include-email --region "$REGION")
    - docker pull "$IMAGE"
  script:
    - ': script'
    - |
      docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash -c "
      export PS4='+ \e[33;1m($CI_JOB_NAME @ line \$LINENO) \$\e[0m ' # optional
      set -ex
      ## TODO insert your multi-line shell script here ##
      echo \"One comment\" # quotes must be escaped here
      : A better comment
      echo $PWD # interpolated outside the container
      echo \$PWD # interpolated inside the container
      bundle install
      bundle exec rspec
      ## (cont'd) ##
      "
    - ': done'
  allow_failure: true # for now as we do not have tests
This example assumes the Docker $IMAGE contains the /bin/bash binary, and relies on the so-called block style of YAML.
The above template already contains comments, but to be self-contained:
You need to escape double quotes if your Bash commands contain them, because the whole code is surrounded by docker run … " and ";
You also need to escape local Bash variables (cf. the \$PWD above); otherwise these variables would be expanded by the outer shell prior to running the docker run … "$IMAGE" /bin/bash -c "…" command itself (a minimal standalone sketch follows the next remark).
I replaced the echo "stuff" and similar commands with their leaner colon counterpart:
set -x
: stuff
: note that these three shell commands do nothing
: but printing their args thanks to the -x option.
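As a minimal standalone illustration of the two escaping rules above (hypothetical values; runnable on any host with Docker):

OUTER='expanded on the host'
docker run --rm debian:stable /bin/bash -c "
  echo \"double quotes must be escaped\"
  echo $OUTER      # expanded by the host shell, before docker run starts
  echo \$HOSTNAME  # expanded by bash inside the container
"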
[Feedback is welcome as I can't directly test this config (I'm not an AWS ECR user), but I'm puzzled by the fact that the OP's example contained both apt and apk commands at the same time…]
Related remark on a pitfall of set -e
Beware that the following script is buggy: if command1 fails, set -e does not abort the script, because a failure on the left-hand side of && (or ||) is exempt from errexit, so execution silently falls through to command3:
set -e
command1 && command2
command3
Instead, write:
set -e
command1 ; command2
command3
or:
set -e
( command1 && command2 )
command3
To be convinced about this, you can try running:
bash -e -c 'false && true; echo $?; echo this should not be run'
→ 1
→ this should not be run
bash -e -c 'false; true; echo $?; echo this should not be run'
→ (no output: false aborts the script immediately)
bash -e -c '( false && true ); echo $?; echo this should not be run'
→ (no output: the subshell exits with status 1, which aborts the script)
From the GitLab documentation: in order to interact with your AWS account, GitLab CI/CD pipelines require both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be defined in your GitLab settings, under Settings > CI/CD > Variables. Then add the login call to your before_script:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $( aws ecr get-login --no-include-email )
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Also, you had a typo: it is awscli, not awsclir. Then add the build, test, and push steps accordingly.
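For context, the $( … ) wrapper is needed because aws ecr get-login (AWS CLI v1) prints a ready-to-run docker login command rather than performing the login itself; its output looks roughly like this (token elided):

docker login -u AWS -p <authorization-token> https://0222822883.dkr.ecr.us-east-1.amazonaws.com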
I think you have a logic error here: image in the job configuration is the image the CI script runs in, not the image you build and deploy.
You normally don't need your application image there at all, since the job image only has to provide the build utilities and the connection to GitLab CI; it shouldn't carry any of your project's dependencies.
Please check examples like this one, https://gist.github.com/jlis/4bc528041b9661ae6594c63cd2ef673c, to get a clearer picture of how to do it correctly.
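As a sketch of the pattern the gist follows (a hypothetical job; it assumes a privileged runner with dind and a job image that has the AWS CLI available):

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - $(aws ecr get-login --no-include-email --region us-east-1)
  script:
    - docker build -t 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest .
    - docker push 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest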
I faced the same problem using the docker executor mode of GitLab Runner.
SSHing into the EC2 instance showed that docker-credential-ecr-login was present in /usr/bin/. To make it available to the jobs, I had to mount this binary into the GitLab Runner container.
gitlab-runner register -n \
--url '${gitlab_url}' \
--registration-token '${registration_token}' \
--template-config /tmp/gitlab_runner.template.toml \
--executor docker \
--tag-list '${runner_name}' \
--description 'gitlab runner for ${runner_name}' \
--docker-privileged \
--docker-image "alpine" \
--docker-disable-cache=true \
--docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
--docker-volumes "/cache" \
--docker-volumes "/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login" \
--docker-volumes "/home/gitlab-runner/.docker:/root/.docker"
More information in this thread as well: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1583#note_375018948
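For reference, the registration above should produce a [runners.docker] section in /etc/gitlab-runner/config.toml roughly like this (a sketch; names and defaults will differ per setup):

[[runners]]
  name = "gitlab runner for my-runner"
  executor = "docker"
  [runners.docker]
    image = "alpine"
    privileged = true
    disable_cache = true
    volumes = [
      "/var/run/docker.sock:/var/run/docker.sock",
      "/cache",
      "/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login",
      "/home/gitlab-runner/.docker:/root/.docker"
    ]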
We have a similar setup where we need to run CI jobs based off of an Image that is hosted on ECR.
Steps to follow:
Follow this guide: https://github.com/awslabs/amazon-ecr-credential-helper
The gist of the above link, if you are on Amazon Linux 2:
sudo amazon-linux-extras enable docker
sudo yum install amazon-ecr-credential-helper
Open ~/.docker/config.json on your GitLab runner host in an editor and paste in this code:
{
  "credHelpers": {
    "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
  }
}
source ~/.bashrc
systemctl restart docker
Also remove any references to DOCKER_AUTH_CONFIG from your GitLab Settings > CI/CD > Variables.
That's it.
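To verify the helper is wired up (hypothetical registry URL), a manual pull on the runner host should now succeed without any prior docker login:

docker pull aws_account_id.dkr.ecr.region.amazonaws.com/some-image:latest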
I am trying to set up a Bitbucket pipeline for a PHP-based (Laravel Lumen) app intended to be deployed on nanobox.io. I want this pipeline to deploy my app as soon as code changes are committed.
My bitbucket-pipelines.yml looks like this:
image: php:7.1.29

pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            # - vendor/bin/phpunit
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy
This gives the following error:
+ nanobox deploy
Failed to validate provider - missing docker - exec: "docker": executable file not found in $PATH
Using nanobox with native requires tools that appear to not be available on your system.
docker
View these requirements at docs.nanobox.io/install
I then followed this page and changed the second-to-last line to look like this:
sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
Having done that, I get the following error:
+ sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
bash: sudo: command not found
I have run out of tricks here, and I don't have much experience in this area. Any help is very much appreciated.
First, you can't use sudo in Pipelines, but that's probably not relevant here. The issue is that the nanobox CLI wants to execute docker, which isn't installed. You should enable the docker service for your step.
image: php:7.1.29

pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Enable docker service
          services:
            - docker
          caches:
            - composer
          script:
            - docker version
You might want to have a look at the Pipelines docs as well: Run Docker commands in Bitbucket Pipelines
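With the service enabled, docker is on the PATH, so the original bootstrap and deploy steps from the question should work again; a sketch of the full script section (same URLs as in the question, untested against nanobox):

script:
  - apt-get update && apt-get install -y unzip
  - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
  - composer install
  - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
  - nanobox deploy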
I am trying to set up gitlab-runner on an AWS instance.
My project is in a group, hence I am setting up a group runner so that I can use it for other projects in the group.
# gitlab-runner version
Version: 11.7.0
Git revision: 8bb608ff
Git branch: 11-7-stable
GO version: go1.8.7
Built: 2019-01-22T11:46:13+0000
OS/Arch: linux/amd64
# docker
Docker version 18.09.1, build 4c52b90
docker-machine version 0.16.0, build 702c267f
I registered the runner with:
sudo gitlab-runner register
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com )
https://gitlab.com
Please enter the gitlab-ci token for this runner
<group runner token>
Please enter the gitlab-ci description for this runner
[hostname] my-runner
Please enter the executor: ssh, docker+machine, docker-ssh+machine, kubernetes, docker, parallels, virtualbox, docker-ssh, shell:
docker
Please enter the Docker image (eg. ruby:2.1):
ruby:2.5
I can see the my-runner runner registered in the GitLab UI.
However, whenever I run/retry the pipeline, it always executes on GitLab's auto-scaling shared runners:
Running with gitlab-runner 11.7.0-rc1 (6e20bd76)
on docker-auto-scale ed2dce3a
Using Docker executor with image ruby:2.5 ...
Pulling docker image ruby:2.5 ...
Using docker image sha256:feea8cad6f9e7cc58f7ae793ac92bd80fa1ce4da54a381921f161447e978021f for ruby:2.5 ...
Running on runner-ed2dce3a-project-10682917-concurrent-0 via runner-ed2dce3a-srm-1549352595-5d0f29b8...
Cloning repository...
Cloning into '/builds/dr5nn/gitlab-ci-demo'...
What am I missing to get jobs running on my custom gitlab-runner machine?
Do I need to add an IP address somewhere, or open some port on my AWS instance?
Below is my .gitlab-ci.yml:
before_script:
  - apt-get update -qq && apt-get install -y -qq sqlite3 libsqlite3-dev nodejs
  - ruby -v
  - which ruby
  - gem install bundler -v 2.0.1
  - bundle install --jobs $(nproc) "${FLAGS[@]}"

rubocop:
  script:
    - bundle exec rubocop
You are on the right track; all you need is to disable the shared GitLab runners for your group or for the particular project.
Registering a group runner enables it for the group, but that doesn't disable all the other runners; the pipeline still chooses the most convenient one based on tags and other criteria.
Another way is to use your own private tags (not generic ones like docker) to select the runner. Runners pick up jobs according to the tags specified on those jobs: for example, if a job has the tags docker and linux, only runners with those tags can pick it up. So you can simply mark the jobs you want to execute on your group runner (and not on shared runners) with a tag like private-runner, and add that tag to your runner.
With the suggestion from grapes, I'm able to run it with the configuration below.
While registering gitlab-runner on my AWS instance:
...
...
Please enter the gitlab-ci tags for this runner (comma separated):
my-tag
...
...
And I changed the .gitlab-ci.yml file to:
before_script:
  - apt-get update -qq && apt-get install -y -qq sqlite3 libsqlite3-dev nodejs
  - ruby -v
  - which ruby
  - gem install bundler -v 2.0.1
  - bundle install --jobs $(nproc) "${FLAGS[@]}"

rubocop:
  tags:
    - my-tag
  script:
    - bundle exec rubocop
I am building my project on CircleCI and I have a build job that looks like this:
build:
  <<: *defaults
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install pip
        command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
    - run:
        name: Install AWS CLI
        command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
    - run:
        name: Login to Docker Registry
        command: aws ecr get-login --no-include-email --region us-east-1 | sh
    - run:
        name: Install Dep
        command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
    - run:
        name: Save Version Number
        command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
    - run:
        name: Build App
        command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
    - run:
        name: Test App
        command: |
          git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
          dep ensure
          go test -v ./...
    - run:
        name: Push Image
        command: |
          if [[ "${CIRCLE_TAG}" =~ ^v[0-9]+(\.[0-9]+)*-[a-z]*$ ]]; then
            source deployment/dev/.env
            docker-compose -f deployment/dev/docker-compose.yml push
          else
            echo 'No tag, not deploying'
          fi
    - persist_to_workspace:
        root: .
        paths:
          - deployment/*
          - tools/*
When I push a change to a branch, the build fails every time with "Couldn't connect to Docker daemon at ... - is it running?" when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
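For illustration, here is a hypothetical docker-compose.yml (not the OP's) showing how this bites: if VERSION_NUM starts with a dot on branch builds because CIRCLE_TAG is empty, the image reference becomes an invalid tag, and docker-compose surfaces the misleading daemon error instead of a tag validation error:

version: "3"
services:
  api:
    build: .
    # expands to e.g. "myrepo/api:.123" when CIRCLE_TAG is empty, an invalid tag
    image: myrepo/api:${VERSION_NUM}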
The problem was occurring in the Save Version Number step: sometimes the version would be .${CIRCLE_BUILD_NUM}, since no tag was passed. Docker dislikes tags starting with ., so I added a conditional check on whether CIRCLE_TAG was empty and, if it was, used a default version: v0.1.0-build.
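A sketch of that conditional, adapted to the Save Version Number step above (the default version string v0.1.0-build is the one described; any valid tag would do):

- run:
    name: Save Version Number
    command: |
      if [ -z "${CIRCLE_TAG}" ]; then
        echo "export VERSION_NUM=v0.1.0-build.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      else
        echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      fi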
I'm trying to set up Codecov as the code coverage tool in my repository. I referred to this link to pass reports through a Docker container:
Link: https://github.com/codecov/support/wiki/Testing-with-Docker
But Travis CI fails and gives this error:
docker: Error parsing reference: "..." is not a valid repository/tag.
Here is my .travis.yml:
sudo: required
dist: trusty
language: node_js
node_js:
  - 6
before_install:
  - export CHROME_BIN=chromium-browser
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - docker run -v "$PWD/shared:/shared" ...
before_script:
  - ng build
script:
  - ng test --watch=false
  - ng lint
  - >
    docker run -ti -v $(pwd):/app --workdir=/app coala/base coala --version
after_success:
  - bash ./deploy.sh
  - bash <(curl -s https://codecov.io/bash)
  - mv -r coverage/ shared
cache:
  bundler: true
  directories:
    - node_modules
    - .coala-cache
services: docker
branches:
  only:
    - angular
How should I solve this? Thanks!
I assume you refer to Codecov Outside Docker. The current error message already tells you that the three dots ... need to be replaced with a real Docker repository name, e.g. node:6-alpine.
What you're still missing is the part that runs the tests (including coverage reports) inside the Docker container, so that you can mv the reports to the shared folder. You could achieve that by adding a custom Dockerfile based on Node, similar to the one below. I chose a more or less full base image including Chrome and other tools to make your use case work:
FROM markadams/chromium-xvfb-js:7

WORKDIR /proj

CMD npm install && \
    node_modules/.bin/ng build && \
    node_modules/.bin/ng test --watch=false && \
    node_modules/.bin/ng lint && \
    mkdir -p shared && \
    mv coverage.txt shared
That custom image needs to be built and then run like this (assuming the Dockerfile is in your project root directory):
docker build -t ci-build .
docker run --rm -v "$(pwd):/proj" ci-build
I suggest changing the .travis.yml as follows:
sudo: required
dist: trusty
language: node_js
node_js:
  - 6
before_install:
  - docker build -t ci-build .
script:
  - >
    docker run --rm -v $(pwd):/proj ci-build
  - >
    docker run -ti -v $(pwd):/app --workdir=/app coala/base coala --version
after_success:
  - bash ./deploy.sh
  - bash <(curl -s https://codecov.io/bash)
cache:
  bundler: true
  directories:
    - node_modules
    - .coala-cache
services: docker
branches:
  only:
    - angular
Another note: the coala/base image works similarly.