GitLab CI: run a Docker container from another repository

I am generally still relatively new to the GitLab CI topic and unfortunately I cannot test this myself yet, so this is more of a theoretical attempt.
I want to start a Docker container from one of my other projects in Gitlab in the CI pipeline of my main project.
This container (I'll call it the Mock-Container from now on) is created and published in the GitLab CI pipeline of the corresponding project and contains various mocked services.
The project in which I want to run the Mock-Container should be able to start that container in its GitLab CI.
I know it is possible to use a build of the project in a different stage in the same pipeline, like here for example:
variables:
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest
Is it possible, for example, if the $CI_REGISTRY_IMAGE used in the CONTAINER_*_IMAGE variables resolves to something like:
registry.gitlab.com/foo/bar/mainproject
to add a variable here like:
MOCK_CONTAINER_IMAGE: registry.gitlab.com/foo/bar/mockproject:latest
so that I could, for example, use it in the services list of the test stage:
build:
  stage: build
  image: quay.io/podman/stable
  script:
    - podman login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY --log-level=debug
    - podman build --format docker --pull -t $CONTAINER_TEST_IMAGE .
    - podman push $CONTAINER_TEST_IMAGE
test:
  stage: test
  image:
    name: postman/newman
    entrypoint: [ "" ]
  services:
    - name: $CONTAINER_TEST_IMAGE
      alias: main-project
    - name: $MOCK_CONTAINER_IMAGE
      alias: mock-container
  ...
Is this possible, or is there a better way to achieve this?

If you're asking whether you can set a variable in the .gitlab-ci.yml file with the registry URL of the other container, like this:
variables:
  MOCK_CONTAINER_IMAGE: registry.gitlab.com/foo/bar/mockproject:latest
then yes, you can. You can use the variable in different stages of your file as you please. If you want to pull this image from the registry, you can do that in a stage as well.
Check this reference for more info: https://docs.gitlab.com/ee/ci/yaml/#variables
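For completeness, a minimal sketch of what that could look like, assuming the mockproject image is public or that the job is otherwise authorized to pull it (for example via a DOCKER_AUTH_CONFIG CI/CD variable); the collection name and port in the script are assumptions:
variables:
  MOCK_CONTAINER_IMAGE: registry.gitlab.com/foo/bar/mockproject:latest

test:
  stage: test
  image:
    name: postman/newman
    entrypoint: [ "" ]
  services:
    # the runner pulls both images before the job starts;
    # it must be allowed to pull them (public image or configured registry auth)
    - name: $CONTAINER_TEST_IMAGE
      alias: main-project
    - name: $MOCK_CONTAINER_IMAGE
      alias: mock-container
  script:
    # hypothetical call: the mocked services are reachable via their alias
    - newman run collection.json --env-var mockUrl=http://mock-container:8080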

Related

Can't connect to Docker daemon in my GitLab CI pipeline

I am trying to build a super-simple CI/CD pipeline using GitLab CI.
Upon running it I get presented with the error:
Server:
ERROR: Cannot connect to the Docker daemon at tcp://docker:2375.
Is the docker daemon running?
My .gitlab-ci.yml is:
image: docker:latest

variables:
  DOCKER_HOST: tcp://docker:2375

services:
  - name: docker:dind
    entrypoint: ["env", "-u", "DOCKER_HOST"]
    command: ["dockerd-entrypoint.sh"]

before_script:
  - docker --version

docker_build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t arieltar/hubsec:1.1 .
    - docker push arieltar/hubsec:1.1
Based on the error message I would ask, does the gitlab-runner user belong to the docker group?
You will need to decide whether you want to use Docker-in-Docker with or without TLS. This requires changing the /etc/gitlab-runner/config.toml settings and assigning DOCKER_TLS_CERTDIR in your .gitlab-ci.yml file. See the Docker-in-Docker section of the GitLab docs.
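For reference, a sketch of the TLS-enabled Docker-in-Docker setup described in the GitLab docs (the image name is kept from the question; the runner must run the dind service privileged and share the /certs volume):
image: docker:latest

variables:
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:dind

docker_build:
  stage: build
  script:
    - docker build -t arieltar/hubsec:1.1 .
    - docker push arieltar/hubsec:1.1
Without TLS, use DOCKER_HOST: tcp://docker:2375 and DOCKER_TLS_CERTDIR: "" instead, and make sure the runner's config.toml matches (privileged = true for the Docker executor).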
Please check the following as preliminaries:
Whether Docker is running or not.
Log in as the GitLab user (if you are running the pipeline with that user) and check whether that user can run docker ps without sudo :).
If both of the above are satisfied, add the entry below.
services:
  - name: docker:dind
    entrypoint: ["dockerd-entrypoint.sh", "--tls=false"]
script:
  - export DOCKER_HOST=tcp://127.0.0.1:2375 && docker build -t arieltar/hubsec:1.1 .

Gitlab CI docker cannot login to docker hub

I have two projects on GitLab with the same CI config file and CI variables. When I try to build the Dockerfile, one project passes, but the second says:
Error: Cannot perform an interactive login from a non TTY device
config:
image: docker:latest

services:
  - docker:dind

stages:
  - build

variables:
  CONTAINER_IMAGE: sleezy/go-hello-world:${CI_COMMIT_SHORT_SHA}

build:
  stage: build
  script:
    - docker login -u ${DOCKER_USER} -p ${DOCKER_PASSWORD}
    - docker build -t ${CONTAINER_IMAGE} .
    - docker tag ${CONTAINER_IMAGE} ${CONTAINER_IMAGE}
    - docker tag ${CONTAINER_IMAGE} sleezy/go-hello-world:latest
    - docker push ${CONTAINER_IMAGE}
As I said, everything is the same: variables, Docker Hub account (username, password), config, even the GitLab Runner version, so I really don't know why. Any help? Thanks.
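One thing worth checking (an assumption, since it is not stated in the thread): whether ${DOCKER_USER} and ${DOCKER_PASSWORD} actually have values in the failing project, for example because they are protected variables and the branch is unprotected. With an empty password, docker login falls back to an interactive prompt and fails with exactly this non-TTY error. A non-interactive login avoids the prompt entirely:
script:
  # read the password from stdin instead of prompting
  - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USER}" --password-stdin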

A locally built Docker image within a Bitbucket Pipeline

What I need is a way to build a Dockerfile within the repository as an image and use this as the image for the next step(s).
I've tried the Bitbucket Pipeline configuration below but in the "Build" step it doesn't seem to have the image (which was built in the previous step) in its cache.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
          services:
            - docker
          caches:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World"
            - composer --version
          services:
            - docker
          caches:
            - docker
I've tried the answer on the StackOverflow question below but the context in that question is pushing the image in the following step. It's not about using the image which was built for the step itself.
Bitbucket pipeline use locally built image from previous step
There are a few conceptual mistakes in your current pipeline. Let me first run through those before giving you some possible solutions.
Clarifications
Caching
Bitbucket Pipelines uses the cache keyword to persist data across multiple pipelines. Whilst it will also persist across steps, the primary use-case is for the data to be used on separate builds. The cache takes 7 days to expire, and thus will not be updated with new data during those 7 days. You can manually delete the cache on the main Pipelines page. If you want to carry data across steps in the same pipeline, you should use the artifacts keyword.
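As a small sketch of the artifacts keyword (the file name is a placeholder), a step declares the files it produces and the following steps receive them automatically:
- step:
    name: Produce artifact
    script:
      - echo "some output" > build-output.txt
    artifacts:
      - build-output.txt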
Docker service
You only need the docker service when you want a Docker daemon available to your build, most commonly when you need to run a docker command in your script. Your second step does not run any docker commands, so it does not need the docker service.
Solution 1 - Combine the steps
Combine the steps, and run composer within the created image by using the docker run command.
pipelines:
  branches:
    main:
      - step:
          name: Docker image and build
          script:
            - docker build -t foo/bar .docker/composer
            # Replace <destination> with the working directory of the foo/bar image.
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Solution 2 - Using two steps with DockerHub
This example keeps the two-step approach. In this scenario, you push your foo/bar image to a public repository on Docker Hub. Pipelines will then pull it to use in the subsequent step.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASSWORD
            - docker push foo/bar
          services:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
If you'd like to use a private repository instead, you can replace the second step with:
...
- step:
    name: Build
    image:
      name: foo/bar
      username: $DOCKERHUB_USERNAME
      password: $DOCKERHUB_PASSWORD
      email: $DOCKERHUB_EMAIL
    script:
      - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
      - composer --version
To expand on phod's answer: if you really want two steps, you can transfer the image from one step to another.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker image save foo/bar -o foobar.tar.gz
          services:
            - docker
          caches:
            - docker
          artifacts:
            - foobar.tar.gz
      - step:
          name: Build
          script:
            - docker image load -i foobar.tar.gz
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Note that this will upload all the layers and dependencies for the image. It can take quite a while to execute and may therefore not be the best solution.
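If the archive size is a concern, one option (not part of the original answer) is to compress it before it is stored as an artifact; docker load auto-detects gzipped archives, so the second step can stay the same:
- docker build -t foo/bar .docker/composer
- docker image save foo/bar | gzip > foobar.tar.gz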

How to setup Gitlab CI E2E tests using Multiple dockers

I am a bit lost with the automated testing using GitLab CI. I hope I can explain my problem so somebody can help me. I'll try to explain the situation first, after which I'll try to ask a question (which is harder than it sounds).
Situation
Architecture
React frontend with Jest unit tests and Cypress e2e tests
Django API server 1 including a Postgres database and tests
Django API server 2 with a MongoDB database (which communicates with the other API)
Gitlab
For the two APIs, there is a Dockerfile and a docker-compose file each. These work fine and are set up correctly.
We are using GitLab for CI/CD, where we have the following stages in this order:
Build: where the Docker images for 1, 2 & 3 are built separately and pushed to the private registry
Test: where the unit tests and e2e tests (should) run
Release: where the Docker images are released
Deploy: where the Docker images are deployed
Goal
I want to set up the GitLab CI such that it runs the Cypress tests. But for this, all of the built Docker images are needed. Currently, I am not able to run all of the containers together when performing the end-to-end tests.
Problem
I don't really get how I would achieve this.
Can I use the Docker images that are built in the build stage for my e2e tests, and can somebody give me an example of how this would be achieved? (By running the built containers as services?)
Do I need one docker-compose file including all containers and databases?
Do I even need dind?
I hope somebody can give me some advice on how to achieve this. An example would be even better but I don't know if somebody would want to do that.
Thanks for taking the time to read!
(if needed) Example of the API server 1
build-api:
  image: docker:19
  stage: build
  services:
    - docker:19-dind
  script:
    - cd api
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:latest || true
    - docker build -f ./Dockerfile --cache-from $IMAGE_TAG_API:latest --tag $IMAGE_TAG_API:$CI_COMMIT_SHA .
    - docker push $IMAGE_TAG_API:$CI_COMMIT_SHA

test-api:
  image: docker:19
  stage: test
  services:
    - postgres:12.2-alpine
    - docker:19-dind
  variables:
    DB_NAME: project_ci_test
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker run $IMAGE_TAG_API:$CI_COMMIT_SHA sh -c "python manage.py test"
  after_script:
    - echo "Pytest tests complete"
  coverage: "/TOTAL.+ ([0-9]{1,3}%)/"

release-api-staging:
  image: docker:19
  stage: release
  services:
    - docker:19-dind
  only:
    refs: [ master ]
    changes: [ ".gitlab-ci.yml", "api/**/*" ]
  environment:
    name: staging
  script:
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker tag $IMAGE_TAG_API:$CI_COMMIT_SHA $IMAGE_TAG_API:latest
    - docker push $IMAGE_TAG_API:latest
The answer is a bit late, but I'll still try to explain the approach briefly for other developers with the same issue. I also created an example project in GitLab containing 3 microservices, where Server A runs end-to-end tests and depends on Server B and Server C.
When end-to-end testing full-stack applications you have to either:
mock all the responses of the microservices;
test against a deployed environment;
or spin up the environment temporarily in the pipeline.
As you noted, you want to spin up the environment temporarily in the pipeline. The following steps should be taken:
deploy all backends as Docker images in GitLab's private registry;
mimic your docker-compose.yml services in one job in the pipeline;
connect the dots together.
Deploy backends as docker images in GitLab private registry
First you have to publish your Docker images to GitLab's private registry. You do this because you can then reuse those images in another job. For this approach you need docker:dind. A simple example job that publishes to a private registry on GitLab looks like:
before_script:
  - echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin $CI_REGISTRY

publish:image:docker:
  stage: publish
  image: docker
  services:
    - name: docker:dind
      alias: docker
  variables:
    CI_DOCKER_NAME: ${CI_REGISTRY_IMAGE}/my-docker-image
  script:
    - docker pull $CI_REGISTRY_IMAGE || true
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE --tag $CI_DOCKER_NAME --file Dockerfile .
    - docker push $CI_DOCKER_NAME
  only:
    - master
To see a real-world example, I have an example project that is publicly available.
Mimic your docker-compose.yml services in 1 job in the pipeline
Once you have dockerized all backends and published the images to the private registry, you can start to mimic your docker-compose.yml with a GitLab job. A basic example:
test:e2e:
  image: ubuntu:20.04
  stage: test
  services:
    - name: postgres:12-alpine
      alias: postgress
    - name: mongo
      alias: mongo
    # my backend image
    - name: registry.gitlab.com/[MY_GROUP]/my-docker-image
      alias: server
  script:
    - curl http://server:3000 # expecting the server to expose port 3000, this should work
    - curl http://mongo:27017 # should work
    - curl http://postgress:5432 # should work!
Run the tests
Now that everything is running in a single job in GitLab, you can simply start your front-end in detached mode and run Cypress to test it. Example:
script:
  - npm run start & # start in detached mode
  - wait-on http://localhost:8080 # see: https://www.npmjs.com/package/wait-on
  - cypress run # make sure cypress is available as well
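Putting those pieces together, a rough sketch of what the complete e2e job could look like (the job image, the port, and the npm scripts are assumptions, not from the original answer):
test:e2e:
  stage: test
  image: cypress/base   # assumption: any image with Node and the Cypress OS dependencies works
  services:
    - name: registry.gitlab.com/[MY_GROUP]/my-docker-image
      alias: server
  script:
    - npm ci
    - npm run start &                   # start the front-end in detached mode
    - npx wait-on http://localhost:8080 # wait until the front-end responds
    - npx cypress run                   # Cypress is assumed to be a dev dependency of the project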
Conclusion
Your docker-compose.yml is not meant to run in a pipeline. Mimic it instead using GitLab services. Dockerize all backends and store them in GitLab's private registry. Spin up all services in your pipeline and run your tests.
This article might shed some light.
https://jessie.codes/article/running-cypress-gitlab-ci/
Essentially, you create two docker-compose files, one for your Cypress tests and one for the items that are to be tested. This gets around the issues with images being able to access Node and Docker.
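A rough sketch of that idea, with hypothetical image names and ports (the exact layout is in the linked article):
# docker-compose.yml - the stack under test
version: "3.8"
services:
  web:
    image: registry.gitlab.com/foo/bar/frontend:latest
    ports:
      - "8080:8080"

# docker-compose.cypress.yml - the test runner, composed on top of the stack
version: "3.8"
services:
  cypress:
    image: cypress/included   # official Cypress image with the binary pre-installed
    working_dir: /e2e
    volumes:
      - ./:/e2e               # mount the project so Cypress can find the specs
    environment:
      - CYPRESS_baseUrl=http://web:8080
    depends_on:
      - web
They would then be run together, for example with docker-compose -f docker-compose.yml -f docker-compose.cypress.yml up --exit-code-from cypress.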

Use docker without registry for gitlab-ci

My school has its own GitLab setup, but it doesn't have a registry set up for Docker images.
What I want to do is run my pipeline with docker, so that I can build, test etc in a docker environment.
Right now I am trying random stuff because I don't know what I am doing. This is what I have now:
Gitlab-ci:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
My secret variables on GitLab and the error message in the pipeline were attached as screenshots (not reproduced here).
Something else I tried uses a GitLab repo. This uses the Docker image for ROS correctly, but in my application I also use OpenCV, so I want to add more to the Docker image. If I knew how to do that in the example below, that would also be an option. On top of this, in the example below I can't run tests.
Gitlab-ci:
image: ros:kinetic-ros-core

stages:
  - build

variables:
  ROS_PACKAGES_TO_INSTALL: ""
  USE_ROSDEP: "true"

cache:
  paths:
    - ccache/

before_script:
  - git clone https://gitlab.com/VictorLamoine/ros_gitlab_ci.git
  - source ros_gitlab_ci/gitlab-ci.bash

catkin_make:
  stage: build
  script:
    - catkin_make

catkin_build:
  stage: build
  script:
    - catkin build --summarize --no-status --force-color
As I said, I have tried many things; this is just the latest thing I have tried. How can I run my runners and GitLab CI with Docker without a GitLab registry?
Just use it without a registry.
You only need to add this to the GitLab Runner config file:
pull_policy = "if-not-present"
That's enough. Then remove commands like:
docker push ...
docker pull ...
Or append "|| true" to a push or pull command if you want to keep it anyway, like this:
docker pull ... || true;
which lets the job continue if the command fails.
Just don't forget pull_policy = "if-not-present", which allows you to run a Docker image without pulling and pushing it.
Since the image is built locally when it is missing, this works.
example:
[[runners]]
  name = "Runner name"
  url = ...
  ...
  executor = "docker"
  [runners.docker]
    image = ...
    pull_policy = "if-not-present"
    ...
You can change these secret variables to point to the Docker Hub registry instead. You have to create an account at https://hub.docker.com/ and then use those details to configure the GitLab secret variables.
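A sketch of what that could look like, assuming you define DOCKER_HUB_USER and DOCKER_HUB_PASSWORD as secret variables and pick your own image name (docker login without a registry argument defaults to Docker Hub):
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_PASSWORD"

build-master:
  stage: build
  script:
    - docker build --pull -t "$DOCKER_HUB_USER/myimage:latest" .
    - docker push "$DOCKER_HUB_USER/myimage:latest"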
