GitLab CI docker-in-docker login and Testcontainers - docker

I have a project that needs Testcontainers running to execute end-to-end tests.
The container's image comes from another project whose Docker image is pushed to GitLab's Container Registry. This means that whenever I want to docker pull this image, I need to do a docker login first.
Locally it works fine: I just log in, run my tests and everything's OK. On the pipeline it's another story.
In GitLab's documentation, the pipeline configuration file .gitlab-ci.yml uses image: docker:19.03.12. The problem with that is that I need to run ./gradlew, and said image doesn't have Java for it to run. Conversely, if I set the image to image: gradle:jdk14, even if I set up Docker-in-Docker, when I run docker login it says docker is not recognized as a command.
I tried creating a custom image with Docker and Java 14, but I still get the following error:
com.github.dockerjava.api.exception.NotFoundException: {"message":"pull access denied for registry.gitlab.com/projects/projecta, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"}
As you can see in the gitlab-ci file, it runs docker login before executing the tests, and according to the pipeline's output the login is successful.
.gitlab-ci.yml
image: gradle:jdk14

variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"

stages:
  - build
  - test

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle

assemble:
  stage: build
  script:
    - ./gradlew assemble
  only:
    changes:
      - "**/*.gradle.kts"
      - gradle.properties
  cache:
    key: $CI_PROJECT_NAME
    paths:
      - .gradle/wrapper
      - .gradle/caches
    policy: push

cache:
  key: $CI_PROJECT_NAME
  paths:
    - .gradle/wrapper
    - .gradle/caches
  policy: pull

test:
  stage: test
  image: registry.gitlab.com/project/docker-jdk14:latest # <-- my custom image
  dependencies:
    - checkstyle
  services:
    - docker:dind
  variables:
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - ./gradlew test
I have the feeling that I'm missing something, but so far the only explanation I can come up with is that the docker login the pipeline executes doesn't set the credentials on the inner Docker instance.
Is there any way to call the login on the inner instance instead of the outer one?
I thought about doing the login call inside the test... but that would be my last option.

If I'm reading your question correctly, you're trying to run CI for project gitlab.com/projects/projectb, which uses an image built in project gitlab.com/projects/projecta during tests.
You're attempting to pull the image registry.gitlab.com/projects/projecta using the username and password from the predefined variables $CI_DEPLOY_USER and $CI_DEPLOY_PASSWORD.
It doesn't work because that user only has permission to access gitlab.com/projects/projectb. What you need to do is create a deploy token for project gitlab.com/projects/projecta with permission to access the registry, supply it to your CI in gitlab.com/projects/projectb via custom variables, and use those to log in to $CI_REGISTRY.
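For illustration, the consuming project's test job could then look roughly like this; the variable names PROJECTA_DEPLOY_USER and PROJECTA_DEPLOY_TOKEN are hypothetical custom CI/CD variables holding the deploy token created in projecta:

```yaml
test:
  stage: test
  services:
    - docker:dind
  variables:
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_TLS_CERTDIR: ""
  script:
    # log in with projecta's deploy token instead of $CI_DEPLOY_USER,
    # so the dind daemon can pull registry.gitlab.com/projects/projecta
    - docker login -u $PROJECTA_DEPLOY_USER -p $PROJECTA_DEPLOY_TOKEN $CI_REGISTRY
    - ./gradlew test
```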

Related

Cannot pull images using authenticated user in GitlabCI

I'm trying to mitigate the Docker Hub pull limit by logging in to a Docker Hub account in my gitlab-runner. I'm not using methods like GitLab's Dependency Proxy because I would have to edit hundreds of files, so I decided to log in to Docker in the gitlab-runner.
.gitlab-ci.yml:
image: docker

services:
  - docker:dind

stages:
  - base

docker-build:
  stage: base
  tags:
    - experimental
  script:
    - docker build -t grex:alpine_${CI_PIPELINE_ID} ./alpine
    - docker info
The alpine folder contains a Dockerfile containing just FROM alpine.
The config.toml of the gitlab-runner has the line pre_build_script = "docker login -u grex -p <password>"
The docker info line states that my user is logged in.
I followed all of the options from the docs, but to no avail. After each pipeline run I checked the current rate limit for my user and it remained unchanged, leading me to infer that the pipeline made an unauthenticated docker pull. Any help is appreciated!
After some experimentation, it seems GitLab caches images, which is why the number of pulled images did not change.
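Note also that pre_build_script runs in the job container, while the pulls for image: and services: are done by the runner itself; for those, GitLab documents the DOCKER_AUTH_CONFIG variable. A sketch, assuming the base64 placeholder is filled in with your own credentials:

```yaml
# DOCKER_AUTH_CONFIG makes the runner's own pulls (for image: and
# services:) authenticated against Docker Hub. The auth value is
# base64("user:password") - set it as a masked CI/CD variable in the
# project settings rather than committing it to the repository.
variables:
  DOCKER_AUTH_CONFIG: '{"auths": {"https://index.docker.io/v1/": {"auth": "<base64 of user:password>"}}}'
```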

How to set up GitLab CI E2E tests using multiple Dockers

I am a bit lost with automated testing using GitLab CI. I hope I can explain my problem so somebody can help me. I'll try to explain the situation first, after which I'll try to ask a question (which is harder than it sounds).
Situation
Architecture
React frontend with Jest unit tests and Cypress e2e tests
Django API server 1, including a Postgres database and tests
Django API server 2 with a MongoDB database (which communicates with the other API)
GitLab
For the 2 APIs, there is a Dockerfile and a docker-compose file. These work fine and are set up correctly.
We are using GitLab for CI/CD, where we have the following stages in this order:
Build: where Dockers for 1, 2 & 3 are built separately and pushed to the private registry
Test: where the unit tests and e2e tests (should) run
Release: where the docker images are released
Deploy: where the docker images are deployed
Goal
I want to set up the GitLab CI such that it runs the Cypress tests. But for this, all built Dockers are needed. Currently, I am not able to use all Dockers together when performing the end-to-end tests.
Problem
I don't really get how I would achieve this.
Can I use the Dockers that are built in the build stage for my e2e tests, and can somebody give me an example of how this would be achieved? (By running the built docker containers as a service?)
Do I need one docker-compose file including all Dockers and databases?
Do I even need dind?
I hope somebody can give me some advice on how to achieve this. An example would be even better, but I don't know if somebody would want to do that.
Thanks for taking the time to read!
(if needed) Example of the API server 1
build-api:
  image: docker:19
  stage: build
  services:
    - docker:19-dind
  script:
    - cd api
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:latest || true
    - docker build -f ./Dockerfile --cache-from $IMAGE_TAG_API:latest --tag $IMAGE_TAG_API:$CI_COMMIT_SHA .
    - docker push $IMAGE_TAG_API:$CI_COMMIT_SHA

test-api:
  image: docker:19
  stage: test
  services:
    - postgres:12.2-alpine
    - docker:19-dind
  variables:
    DB_NAME: project_ci_test
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker run $IMAGE_TAG_API:$CI_COMMIT_SHA sh -c "python manage.py test"
  after_script:
    - echo "Pytest tests complete"
  coverage: "/TOTAL.+ ([0-9]{1,3}%)/"

release-api-staging:
  image: docker:19
  stage: release
  services:
    - docker:19-dind
  only:
    refs: [ master ]
    changes: [ ".gitlab-ci.yml", "api/**/*" ]
  environment:
    name: staging
  script:
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker tag $IMAGE_TAG_API:$CI_COMMIT_SHA $IMAGE_TAG_API:latest
    - docker push $IMAGE_TAG_API:latest
The answer is a bit late, but I'll still try to explain the approach briefly for other developers with the same issues. I also created an example project in GitLab containing 3 microservices, where Server A runs end-to-end tests and depends on Server B and Server C.
When e2e testing full-stack applications you have to either:
mock all the responses of the microservices;
test against a deployed environment;
or spin up the environment temporarily in the pipeline.
As you noted, you want to spin up the environment temporarily in the pipeline. The following steps should be taken:
Deploy all backends as Docker images in GitLab's private registry;
Mimic your docker-compose.yml services in 1 job in the pipeline;
Connect the dots together.
Deploy backends as docker images in GitLab private registry
First you have to publish your Docker images to the private registry of GitLab. You do this because you can then reuse those images in another job. For this approach you need docker:dind. A simple job that publishes an image to a private registry on GitLab looks like:
before_script:
  - echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin $CI_REGISTRY

publish:image:docker:
  stage: publish
  image: docker
  services:
    - name: docker:dind
      alias: docker
  variables:
    CI_DOCKER_NAME: ${CI_REGISTRY_IMAGE}/my-docker-image
  script:
    - docker pull $CI_REGISTRY_IMAGE || true
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE --tag $CI_DOCKER_NAME --file Dockerfile .
    - docker push $CI_DOCKER_NAME
  only:
    - master
To see a real-world example, I have an example project that is publicly available.
Mimic your docker-compose.yml services in 1 job in the pipeline
Once you have dockerized all backends and published the images to a private registry, you can start to mimic your docker-compose.yml with a GitLab job. A basic example:
test:e2e:
  image: ubuntu:20.04
  stage: test
  services:
    - name: postgres:12-alpine
      alias: postgress
    - name: mongo
      alias: mongo
    # my backend image
    - name: registry.gitlab.com/[MY_GROUP]/my-docker-image
      alias: server
  script:
    - curl http://server:3000 # expecting the server to expose port 3000; this should work
    - curl http://mongo:27017 # should work
    - curl http://postgress:5432 # should work!
Run the tests
Now that everything is running in a single job in GitLab, you can simply start your front-end in detached mode and run Cypress to test it. Example:
script:
  - npm run start & # start in detached mode
  - wait-on http://localhost:8080 # see: https://www.npmjs.com/package/wait-on
  - cypress run # make sure cypress is available as well
Conclusion
Your docker-compose.yml is not meant to run in a pipeline. Mimic it instead using GitLab services. Dockerize all backends and store the images in GitLab's private registry. Spin up all services in your pipeline and run your tests.
This article might shed some light.
https://jessie.codes/article/running-cypress-gitlab-ci/
Essentially, you make two docker-compose files: one for your Cypress tests and one for the items to be tested. This gets around the issues with images being able to access node and docker.
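The two-compose idea could be sketched roughly like this; the image names are placeholders, and cypress/included is the Cypress-maintained image whose entrypoint runs the tests:

```yaml
# docker-compose.e2e.yml - hypothetical sketch of the second compose file
services:
  app:
    image: registry.example.com/my-app:latest   # the item under test
    ports:
      - "8080:8080"
  cypress:
    image: cypress/included:12.17.4             # entrypoint runs `cypress run`
    depends_on:
      - app
    environment:
      - CYPRESS_BASE_URL=http://app:8080        # point Cypress at the app service
```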

Docker with BitBucket

I'm trying to set up automatic publishing using Docker + Bitbucket Pipelines; unfortunately, I have a problem. I read the Pipelines deployment instructions on Docker Hub and created the following template:
# This is a sample build configuration for Docker.
# Check our guides at https://confluence.atlassian.com/x/O1toN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2

pipelines:
  default:
    - step:
        services:
          - docker
        script: # Modify the commands below to build your repository.
          # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
          - export IMAGE_NAME=paweltest/tester:$BITBUCKET_COMMIT
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t paweltest/tester .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push paweltest/tester:tagname
I have completed the data, but after doing the push, I get the following error when the build starts:
unable to prepare context: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such file or directory
What do I want to achieve? After pushing changes to the repository, I'd like an image to be automatically built and sent to Docker Hub, preferably on to the target server where the application is.
I've looked for a solution and tried different combinations. So far, I have about 200 commits with Failed status and no further ideas.
Bitbucket Pipelines is a CI/CD service: you can build your applications and deploy resources to production or test server instances. You can build and deploy Docker images too - it shouldn't be a problem unless you do something wrong...
All scripts defined in the bitbucket-pipelines.yml file run in a container created from the indicated image (atlassian/default-image:2 in your case).
You need a Dockerfile in the project, and from this file you can build and publish a Docker image.
I created a simple repository without a Dockerfile and started a build:
unable to prepare context: unable to evaluate symlinks in Dockerfile
path: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such
file or directory
I need a Dockerfile in my project to build an image (at the same level as the bitbucket-pipelines.yml file):
FROM node:latest
WORKDIR /src/
EXPOSE 4000
In the next step I created a public Docker Hub repository.
I also changed your bitbucket-pipelines.yml file (you forgot to tag the new image):
image: atlassian/default-image:2

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t appngpl/stackoverflow-question-56065689 .
          # add new image tag
          - docker tag appngpl/stackoverflow-question-56065689 appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
Result:
Everything works fine :)
Bitbucket repository: https://bitbucket.org/krzysztof-raciniewski/stackoverflow-question-56065689
Docker Hub image repository: https://hub.docker.com/r/appngpl/stackoverflow-question-56065689

Google Compute and service account for registry access

I've got a private GitLab host and want to add some runners on GCP.
So, I've:
created a service account (with Editor rights on the project)
created a compute instance (named gitlab-runner) with Ubuntu 16.04 on it and the service account associated
installed gitlab-runner / kubectl / docker-ce on it
registered a runner of type shell
registered a runner of type docker
The shell runner has no problem whatsoever.
The docker runner? Well... it works with something like this:
exemple:
  stage: build
  image: google/cloud-sdk:latest
  tags:
    - runner-docker
  script:
    - # do something here
My problem is when I want to use an image I previously built, like this:
exemple2:
  stage: build
  image: eu.gcr.io/project/image_name:$CI_COMMIT_SHA
  tags:
    - runner-docker
  script:
    - # do something here
When I do this, gitlab-runner can't pull the image.
So I've tried something like this: Access google container registry without the gcloud client
Then, when I connect to the gitlab-runner (via ssh), I have no problem doing a pull.
But the runner can't.
Any idea what's going wrong?
I've made a temporary gitlab-ci.yml like this:
stages:
  - build
  - test

variables:
  CI_DEBUG_TRACE: "true"

test_gcloud_shell:
  stage: build
  tags:
    - shell
  before_script:
    - echo "disable before script"
  script:
    - docker run --rm eu.gcr.io/project/image_name:latest

test_gcloud_docker:
  stage: test
  image: eu.gcr.io/project/image_name:latest
  tags:
    - docker
  before_script:
    - echo "disable before script"
  script:
    - echo "hello"
The task test_gcloud_shell works without any problem, but test_gcloud_docker doesn't.
Any idea?
Have you set DOCKER_AUTH_CONFIG? See GitLab's docs and a similar issue.
You probably need to use the service account's JSON key file if you want long-lived credentials.
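A sketch of how that could look: since the runner itself pulls the job image from eu.gcr.io, set DOCKER_AUTH_CONFIG as a CI/CD variable whose auth field is base64 of the literal username _json_key, a colon, and the service account key file contents (_json_key is the username Google documents for key-file logins):

```yaml
# Set as a masked CI/CD variable rather than committing the key to the repo.
# auth = base64("_json_key:" + contents of the service account JSON key)
variables:
  DOCKER_AUTH_CONFIG: '{"auths": {"eu.gcr.io": {"auth": "<base64 of _json_key:KEY_FILE_CONTENTS>"}}}'
```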

gitlab-runner - deploy docker image to a server

I can set up a gitlab-runner with a docker image as below:
stages:
  - build
  - test
  - deploy

image: laravel/laravel:v1

build:
  stage: build
  script:
    - npm install
    - composer install
    - cp .env.example .env
    - php artisan key:generate
    - php artisan storage:link

test:
  stage: test
  script: echo "Running tests"

deploy_staging:
  stage: deploy
  script:
    - echo "What shall I do?"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master
It passes the build and test stages, and I believe a Docker image/container is then ready for deployment. From Google, I see the next step may use docker push, such as pushing to AWS ECS or to GitLab's registry. What I actually wish to understand is: can I push it directly to another remote server (e.g. by scp)?
A Docker image is a combination of different layers which are built when you use the docker build command. Docker reuses existing layers and gives the combination of layers a name, which is your image name. They are usually stored somewhere under /var/lib/docker.
In general all the necessary data is stored on your system, yes. But it is not advisable to copy these layers directly to a different machine, and I am not quite sure this would work properly. Docker advises you to use a "docker registry" instead. Installing your own registry on your remote server is very simple, because the registry can itself be run as a container (see the docs).
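As a sketch, the official registry image can be kept running on the remote server with a minimal compose file like this (port 5000 is simply the registry's default; a production setup would add TLS and authentication):

```yaml
# docker-compose.yml on the remote server - runs the official registry image
services:
  registry:
    image: registry:2
    restart: always
    ports:
      - "5000:5000"
```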
I'd advise you to stick to the solutions proposed by the Docker team and use the public Docker Hub registry, or your own registry if you have sensitive data.
You are using GitLab, and GitLab provides its own registry. You can push your images to your own GitLab registry and pull them from your remote server. Your remote server only needs to authenticate against your registry and you're done. GitLab CI can build and push your images directly to your own registry on each push to the master branch, for example. You can find many examples in the docs.
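As a rough sketch of that flow (STAGING_HOST, the deploy user, and the container name app are hypothetical; the remote host could equally authenticate with a deploy token): the build job pushes to the GitLab registry, and the deploy job tells the remote server over ssh to pull and restart the container:

```yaml
build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest

deploy_staging:
  stage: deploy
  script:
    # the remote server pulls the fresh image and replaces the running container
    - ssh deploy@$STAGING_HOST "docker pull $CI_REGISTRY_IMAGE:latest && (docker rm -f app || true) && docker run -d --name app $CI_REGISTRY_IMAGE:latest"
  only:
    - master
```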
