Google compute and service account for registry access - docker

I've got a private GitLab host and want to add some runners on GCP.
So, I've:
created a service account (with Editor rights on the project)
created a compute instance (named gitlab-runner) with Ubuntu 16.04 on it and the service account associated
installed gitlab-runner / kubectl / docker-ce on it
registered a runner of type shell
registered a runner of type docker
The shell runner has no problem whatsoever.
The docker runner? Well... it works with something like this:
exemple:
  stage: build
  image: google/cloud-sdk:latest
  tags:
    - runner-docker
  script:
    - # do something here
My problem is when I want to use an image I previously built, like this:
exemple2:
  stage: build
  image: eu.gcr.io/project/image_name:$CI_COMMIT_SHA
  tags:
    - runner-docker
  script:
    - # do something here
When I do this, gitlab-runner can't pull the image.
So, I've tried something like this: Access google container registry without the gcloud client
Then, when I connect to the gitlab-runner instance (via ssh), I have no problem doing a pull.
But the runner can't.
Any idea what's going wrong?
I've written a temporary .gitlab-ci.yml like this:
stages:
  - build
  - test
variables:
  CI_DEBUG_TRACE: "true"
test_gcloud_shell:
  stage: build
  tags:
    - shell
  before_script:
    - echo "disable before script"
  script:
    - docker run --rm eu.gcr.io/project/image_name:latest
test_gcloud_docker:
  stage: test
  image: eu.gcr.io/project/image_name:latest
  tags:
    - docker
  before_script:
    - echo "disable before script"
  script:
    - echo "hello"
The job test_gcloud_shell works without any problem, but test_gcloud_docker does not.
Any idea?

Have you set DOCKER_AUTH_CONFIG? See GitLab's docs and a similar issue.
You probably need to use the service account's JSON key file if you want long-lived credentials.
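A minimal sketch of that, assuming the service account key has been exported to a file named gitlab-runner-key.json (the file name is a placeholder): GCR accepts the special username _json_key with the key file contents as the password, so you can build the auth entry once and store the resulting JSON as the value of a DOCKER_AUTH_CONFIG CI/CD variable (or in the runner's environment), which the runner then uses to pull job images.

# Base64-encode "_json_key:<key file contents>" for docker's config (run once, anywhere):
AUTH=$(echo -n "_json_key:$(cat gitlab-runner-key.json)" | base64 -w 0)
# This JSON becomes the value of the DOCKER_AUTH_CONFIG variable,
# so the runner can pull job images from eu.gcr.io:
echo "{\"auths\": {\"eu.gcr.io\": {\"auth\": \"${AUTH}\"}}}"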

Related

How to setup Gitlab CI E2E tests using Multiple dockers

I am a bit lost with the automated testing using GitLab CI. I hope I can explain my problem so somebody can help me. I'll try to explain the situation first, after which I'll try to ask a question (which is harder than it sounds).
Situation
Architecture
React frontend with Jest unit tests and Cypress e2e tests
Django API server 1 including a Postgres database and tests
Django API server 2 with a MongoDB database (which communicates with the other API)
Gitlab
For the two APIs, there is a Dockerfile and a docker-compose file. These work fine and are set up correctly.
We are using GitLab for CI/CD; there we have the following stages, in this order:
Build: where the Docker images for 1, 2 & 3 are built separately and pushed to the private registry
Test: where the unit tests and e2e tests (should) run
Release: where the Docker images are released
Deploy: where the Docker images are deployed
Goal
I want to set up the GitLab CI such that it runs the Cypress tests. But for this, all of the built Docker images are needed. Currently, I am not able to run all of the containers together when performing the end-to-end tests.
Problem
I don't really get how I would achieve this.
Can I use the images that are built in the build stage for my e2e tests, and can somebody give me an example of how this would be achieved? (By running the built images as services?)
Do I need one docker-compose file including all containers and databases?
Do I even need dind?
I hope somebody can give me some advice on how to achieve this. An example would be even better but I don't know if somebody would want to do that.
Thanks for taking the time to read!
(if needed) Example of the API server 1
build-api:
  image: docker:19
  stage: build
  services:
    - docker:19-dind
  script:
    - cd api
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:latest || true
    - docker build -f ./Dockerfile --cache-from $IMAGE_TAG_API:latest --tag $IMAGE_TAG_API:$CI_COMMIT_SHA .
    - docker push $IMAGE_TAG_API:$CI_COMMIT_SHA
test-api:
  image: docker:19
  stage: test
  services:
    - postgres:12.2-alpine
    - docker:19-dind
  variables:
    DB_NAME: project_ci_test
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker run $IMAGE_TAG_API:$CI_COMMIT_SHA sh -c "python manage.py test"
  after_script:
    - echo "Pytest tests complete"
  coverage: "/TOTAL.+ ([0-9]{1,3}%)/"
release-api-staging:
  image: docker:19
  stage: release
  services:
    - docker:19-dind
  only:
    refs: [ master ]
    changes: [ ".gitlab-ci.yml", "api/**/*" ]
  environment:
    name: staging
  script:
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker tag $IMAGE_TAG_API:$CI_COMMIT_SHA $IMAGE_TAG_API:latest
    - docker push $IMAGE_TAG_API:latest
The answer is a bit late, but I'll still try to explain the approach briefly for other developers with the same issues. I also created an example project in GitLab containing 3 microservices, where Server A runs end-to-end tests and depends on Server B and Server C.
When e2e testing full-stack applications you have to either:
mock all the responses of the microservices;
test against a deployed environment;
or spin up the environment temporarily in the pipeline.
As you noted, you want to spin up the environment temporarily in the pipeline. The following steps should be taken:
Deploy all backends as Docker images to GitLab's private registry;
Mimic your docker-compose.yml services in one job in the pipeline;
Connect the dots together.
Deploy backends as docker images in GitLab private registry
First you have to publish your Docker images to the private registry of GitLab. You do this because you can then reuse those images in another job. For this approach you need docker:dind. A simple example job to publish to a private registry on GitLab looks like:
before_script:
  - echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin $CI_REGISTRY
publish:image:docker:
  stage: publish
  image: docker
  services:
    - name: docker:dind
      alias: docker
  variables:
    CI_DOCKER_NAME: ${CI_REGISTRY_IMAGE}/my-docker-image
  script:
    - docker pull $CI_REGISTRY_IMAGE || true
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE --tag $CI_DOCKER_NAME --file Dockerfile .
    - docker push $CI_DOCKER_NAME
  only:
    - master
To see a real-world example, I have an example project that is publicly available.
Mimic your docker-compose.yml services in 1 job in the pipeline
Once you have dockerized all backends and published the images to the private registry, you can start to mimic your docker-compose.yml with a GitLab job. A basic example:
test:e2e:
  image: ubuntu:20.04
  stage: test
  services:
    - name: postgres:12-alpine
      alias: postgress
    - name: mongo
      alias: mongo
    # my backend image
    - name: registry.gitlab.com/[MY_GROUP]/my-docker-image
      alias: server
  script:
    - curl http://server:3000 # expecting the server to expose port 3000, this should work
    - curl http://mongo:27017 # should work
    - curl http://postgress:5432 # should work!
Run the tests
Now that everything is running in a single job in GitLab, you can simply start your front-end in detached mode and run Cypress to test it. Example:
script:
  - npm run start & # start in detached mode
  - wait-on http://localhost:8080 # see: https://www.npmjs.com/package/wait-on
  - cypress run # make sure cypress is available as well
Conclusion
Your docker-compose.yml is not meant to run in a pipeline. Mimic it instead using GitLab services. Dockerize all backends and store them in GitLab's private registry. Spin up all services in your pipeline and run your tests.
This article might shed some light:
https://jessie.codes/article/running-cypress-gitlab-ci/
Essentially, you create two docker-compose files, one for your Cypress tests and one for the items that are to be tested. This gets around the issues with images being able to access Node and Docker.
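A rough sketch of that split (image names, ports and the Cypress version below are placeholders, not taken from the article):

# docker-compose.yml -- the stack under test
version: "3"
services:
  frontend:
    image: registry.gitlab.com/my-group/frontend:latest
    ports:
      - "8080:8080"
  api:
    image: registry.gitlab.com/my-group/api:latest

# docker-compose.cypress.yml -- the test runner, merged into the same project
version: "3"
services:
  cypress:
    image: cypress/included:12.17.4
    environment:
      - CYPRESS_baseUrl=http://frontend:8080
    depends_on:
      - frontend

# Run both files together so the Cypress container can reach the stack:
# docker-compose -f docker-compose.yml -f docker-compose.cypress.yml up --exit-code-from cypress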

Automate local deployment of docker containers with gitlab runner and gitlab-ci without privileged user

We have a prototype-oriented development environment, in which many small services are being developed and deployed to our on-premise hardware. We're using GitLab to manage our code and GitLab CI/CD for continuous integration. As a next step, we also want to automate the deployment process. Unfortunately, all documentation we find uses a cloud service or Kubernetes cluster as the target environment. However, we want to configure our GitLab runner in a way that deploys Docker containers locally. At the same time, we want to avoid using a privileged user for the runner (as our servers are so far fully maintained via Ansible / services like Portainer).
Typically, our .gitlab-ci.yml looks something like this:
stages:
  - build
  - test
  - deploy
dockerimage:
  stage: build
  # builds a docker image from the Dockerfile in the repository, and pushes it to an image registry
sometest:
  stage: test
  # uses the docker image from build stage to test the service
production:
  stage: deploy
  # should create a container from the above image on system of runner without privileged user
TL;DR: How can we configure our local GitLab runner to locally deploy Docker containers from images defined in GitLab CI/CD without using privileged mode?
The build stage is usually the one people use Docker-in-Docker (dind) for. To avoid the privileged user you can use the kaniko executor image in GitLab.
Specifically, you would use the kaniko debug image like this:
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
You can find examples of how to use it in GitLab's documentation.
If you want to use that image in the deploy stage you simply need to reference the created image.
You could do something like this:
production:
  stage: deploy
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
With this method, you do not need a privileged user. But I assume this is not what you are looking to do in your deployment stage. Usually, you would just use the image you created in the container registry to deploy the container locally. The last method explained would only deploy the image in the GitLab runner.
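As a hedged sketch of that last point, assuming a second runner registered with the shell executor on the target machine (whose user is allowed to talk to the local Docker daemon) and a placeholder container name, the deploy job could look like:

production:
  stage: deploy
  tags:
    - shell   # a shell-executor runner registered on the target host
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
    - docker rm -f my-service || true   # "my-service" is a placeholder container name
    - docker run -d --name my-service "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  rules:
    - if: $CI_COMMIT_TAG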

Gitlab's CI docker in docker login and test containers

I have a project that needs Testcontainers running to execute end-to-end tests.
The container's image comes from another project, whose Docker image is pushed to GitLab's Container Registry. This means that, whenever I want to docker pull this image, I need to do a docker login first.
Locally it works fine: I just do a login, run my tests and everything's OK... the pipeline is another story.
In GitLab's documentation, in the pipeline's configuration file .gitlab-ci.yml, they use image: docker:19.03.12. The problem with that is that I need to run ./gradlew, and said image doesn't have Java for it to run. Otherwise, if I set the image to image: gradle:jdk14, even if I set up Docker-in-Docker, when I run docker login it says docker is not recognized as a command.
I tried creating a custom image with Docker and Java14, but still get the following error:
com.github.dockerjava.api.exception.NotFoundException: {"message":"pull access denied for registry.gitlab.com/projects/projecta, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"}
As you can see in the gitlab-ci file, it runs docker login before executing the tests, and according to the pipeline's output it is successful.
.gitlab-ci.yml
image: gradle:jdk14
variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"
stages:
  - build
  - test
before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
assemble:
  stage: build
  script:
    - ./gradlew assemble
  only:
    changes:
      - "**/*.gradle.kts"
      - gradle.properties
  cache:
    key: $CI_PROJECT_NAME
    paths:
      - .gradle/wrapper
      - .gradle/caches
    policy: push
cache:
  key: $CI_PROJECT_NAME
  paths:
    - .gradle/wrapper
    - .gradle/caches
  policy: pull
test:
  stage: test
  image: registry.gitlab.com/project/docker-jdk14:latest #<-- my custom image
  dependencies:
    - checkstyle
  services:
    - docker:dind
  variables:
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - ./gradlew test
I have the feeling that I'm missing something, but so far the only explanation I can come up with is that the docker login the pipeline is executing doesn't set the credentials for the inner docker instance.
Is there any way to call the login in the inner instance instead of the outer one?
I thought about doing the login call inside the tests... but that would be my last option.
If I'm reading your question correctly, you're trying to run CI for project gitlab.com/projects/projectb, which uses an image built in project gitlab.com/projects/projecta during tests.
You're attempting to pull the image registry.gitlab.com/projects/projecta using the username and password from the predefined variables $CI_DEPLOY_USER and $CI_DEPLOY_PASSWORD.
It doesn't work because that user only has permission to access gitlab.com/projects/projectb. What you need to do is create a deploy token for project gitlab.com/projects/projecta with permission to access the registry, supply it to your CI in gitlab.com/projects/projectb via custom variables, and use those to log in to $CI_REGISTRY.
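A minimal sketch of the test job with that change (the variable names PROJECTA_DEPLOY_USER and PROJECTA_DEPLOY_TOKEN are placeholders for custom CI/CD variables holding projecta's deploy token with read_registry scope):

test:
  stage: test
  image: registry.gitlab.com/project/docker-jdk14:latest
  services:
    - docker:dind
  script:
    # log in with projecta's deploy token instead of projectb's deploy user
    - docker login -u "$PROJECTA_DEPLOY_USER" -p "$PROJECTA_DEPLOY_TOKEN" "$CI_REGISTRY"
    - ./gradlew test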

Use docker without registry for gitlab-ci

My school has a personal gitlab setup, but it doesn't have a registry setup for docker images.
What I want to do is run my pipeline with Docker, so that I can build, test, etc. in a Docker environment.
Right now I am trying random stuff because I don't know what I am doing. This is what I have now:
Gitlab-ci:
image: docker:latest
services:
  - docker:dind
before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
My secret variables on gitlab:
My error message in the pipeline:
Something else I tried uses a GitLab repo. This uses the Docker image for ROS correctly, but in my application I also use OpenCV, so I want to add more to the Docker image. If I knew how to do that in the example below, that would also be an option. On top of this, in the example below I can't run tests.
Gitlab-ci:
image: ros:kinetic-ros-core
stages:
  - build
variables:
  ROS_PACKAGES_TO_INSTALL: ""
  USE_ROSDEP: "true"
cache:
  paths:
    - ccache/
before_script:
  - git clone https://gitlab.com/VictorLamoine/ros_gitlab_ci.git
  - source ros_gitlab_ci/gitlab-ci.bash
catkin_make:
  stage: build
  script:
    - catkin_make
catkin_build:
  stage: build
  script:
    - catkin build --summarize --no-status --force-color
As I said, I have tried many things; this is just the latest thing I have tried. How can I run my runners and gitlab-ci with Docker without a GitLab registry?
Just use it without a registry.
You only need to insert this into the GitLab runner config file:
pull_policy = "if-not-present"
That's enough. Then remove commands like:
docker push ...
docker pull ...
Or append "|| true" to the push/pull command if you want to keep it, like this:
docker pull ... || true;
which lets the job continue if the command fails.
Just don't forget pull_policy = "if-not-present", which allows you to run a Docker image without push and pull.
Since the image is built locally when it is missing, this works.
example:
[[runners]]
  name = "Runner name"
  url = ...
  ...
  executor = "docker"
  [runners.docker]
    image = ...
    pull_policy = "if-not-present"
    ...
Alternatively, you can change these secret variables to point to the Docker Hub registry.
You have to create an account on https://hub.docker.com/ and then use those details to configure the GitLab secret variables.
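A rough sketch of those variables, assuming Docker Hub (all values are placeholders):

# CI/CD secret variables pointing at Docker Hub instead of a GitLab registry
CI_REGISTRY: "docker.io"
CI_REGISTRY_IMAGE: "docker.io/<your-dockerhub-user>/<image-name>"
CI_REGISTRY_USER: "<your-dockerhub-user>"
CI_REGISTRY_PASSWORD: "<a Docker Hub access token>"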

gitlab-runner - deploy docker image to a server

I can set up a gitlab-runner with a Docker image as below:
stages:
  - build
  - test
  - deploy
image: laravel/laravel:v1
build:
  stage: build
  script:
    - npm install
    - composer install
    - cp .env.example .env
    - php artisan key:generate
    - php artisan storage:link
test:
  stage: test
  script: echo "Running tests"
deploy_staging:
  stage: deploy
  script:
    - echo "What shall I do?"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master
It can pass the build and test stages, and I believe a Docker image/container is ready for deployment. From Google, I gather the next step may be a "docker push", for example to AWS ECS or somewhere on GitLab. Actually, I wish to understand whether I can push it directly to another remote server (e.g. by scp)?
A Docker image is a combination of different layers which are built when you use the docker build command. It reuses existing layers and gives the combination of layers a name, which is your image name. They are usually stored somewhere in /var/lib/docker.
In general all necessary data is stored on your system, yes. But it is not advised to copy these layers directly to a different machine, and I am not quite sure if this would work properly. Docker advises you to use a "docker registry". Installing your own registry on your remote server is very simple, because the registry can also be run as a container (see the docs).
I'd advise you to stick to the proposed solutions from the Docker team and use the public DockerHub registry, or your own registry if you have sensitive data.
You are using GitLab. GitLab provides its own registry. You can push your images to your own GitLab registry and pull them from your remote server. Your remote server only needs to authenticate against your registry and you're done. GitLab CI can directly build and push your images to your own registry on each push to the master branch, for example. You can find many examples in the docs.
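As a rough sketch of that approach (the SSH_PRIVATE_KEY and STAGING_HOST variables, the deploy user, the container name and the published port are all placeholders; the remote server is assumed to have run docker login against the registry once), the deploy job could look like:

deploy_staging:
  stage: deploy
  script:
    # the job image needs an ssh client; the key is stored in a CI/CD variable
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - ssh -o StrictHostKeyChecking=no deploy@"$STAGING_HOST" "docker pull $CI_REGISTRY_IMAGE:latest; docker rm -f laravel-app || true; docker run -d --name laravel-app -p 80:80 $CI_REGISTRY_IMAGE:latest"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master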
