CI/CD Gitlab with Harbor Registry - docker

I have 3 servers:
GitLab
GitLab Runner
Harbor Registry
When I run CI/CD on GitLab, it cannot log in to the Harbor Registry. This is the error:
Get https://172.21.5.247/v1/users/: x509: cannot validate certificate for 172.21.5.247 because it doesn't contain any IP SANs
When I try docker login directly on the GitLab and GitLab Runner servers, it succeeds. I added "insecure-registries" to the Docker daemon configuration on both servers.
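For context, the "insecure-registries" setting referred to here lives in /etc/docker/daemon.json on each host. A minimal sketch (the address matches the one in the error; adjust for your environment):

{
  "insecure-registries": ["172.21.5.247:443"]
}

Restart the Docker daemon after editing (e.g. systemctl restart docker). Note that this only affects the host daemons; the docker:dind service the CI job talks to needs its own --insecure-registry flag, as in the config below.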
The .gitlab-ci.yml file:
image: docker:18-git

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DOCKER_HOST: tcp://localhost:2375

stages:
  - build
  - push

services:
  - name: docker:dind
    command: ["--insecure-registry=172.21.5.247:443"]

before_script:
  - echo $HARBOR_USERNAME
  - echo -n $HARBOR_PASSWORD | docker login -u $HARBOR_USERNAME -p $HARBOR_PASSWORD $HARBOR_REGISTRY
  - docker version
  - docker info

after_script:
  - docker logout $HARBOR_REGISTRY

Build:
  stage: build
  script:
    - docker pull $HARBOR_REGISTRY_IMAGE:latest || true
    - >
      docker build
      --pull
      --cache-from $HARBOR_REGISTRY_IMAGE:latest
      --tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA

Push_When_tag:
  stage: push
  only:
    - tags
  script:
    - docker pull $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
    - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
The error occurs at the docker login step.
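The x509 message itself means the registry certificate does not list the IP address in its Subject Alternative Names, which Docker requires when clients connect by IP. A sketch of regenerating a self-signed certificate with an IP SAN (assuming OpenSSL 1.1.1+ for the -addext option; file names are placeholders):

# self-signed cert whose SAN covers the registry's IP address
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout harbor.key -out harbor.crt \
  -subj "/CN=172.21.5.247" \
  -addext "subjectAltName = IP:172.21.5.247"

After installing the new certificate in Harbor, docker login should validate the connection; alternatively, make sure $HARBOR_REGISTRY matches the host:port covered by the dind --insecure-registry flag.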

Since the Harbor 2.2 minor release you are able to create a Harbor robot account;
afterwards, write these credentials to Settings -> CI/CD -> Variables:
- HARBOR_ROBOT_USER (Important! You have to escape the $ in the robot username, e.g. robot$$myuser, because a robot account name containing "$" would otherwise be expanded as a variable reference by GitLab)
- HARBOR_ROBOT_PASSWORD
Now you are able to use these variables in before_script as follows:
before_script:
  - HARBOR_ROBOT_PASSWORD=${HARBOR_ROBOT_PASSWORD}
  - HARBOR_ROBOT_USER=${HARBOR_ROBOT_USER}
  # login process to harbor docker registry
  - echo $HARBOR_ROBOT_PASSWORD | docker login --username $HARBOR_ROBOT_USER --password-stdin ${HARBOR_REGISTRY}
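Putting it together, a minimal sketch of a complete job using the robot account (the image path myproject/myimage is a placeholder, not from the original post):

docker_build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - echo "$HARBOR_ROBOT_PASSWORD" | docker login --username "$HARBOR_ROBOT_USER" --password-stdin "$HARBOR_REGISTRY"
  script:
    # tag with the commit SHA so every pipeline produces a traceable image
    - docker build -t "$HARBOR_REGISTRY/myproject/myimage:$CI_COMMIT_SHORT_SHA" .
    - docker push "$HARBOR_REGISTRY/myproject/myimage:$CI_COMMIT_SHORT_SHA"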

Related

Which gitlab-executor to choose so that I can use many docker images in a pipeline?

I have this pipeline to execute:
stages:
  - build-gitlab
  - deploy-uat

build:
  image: node:14-alpine
  stage: build-gitlab
  services:
    - docker
  before_script:
    - docker login $CI_REGISTRY_URL -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
  script:
    - docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_FRONTEND_REGISTRY_URL
    - docker push $CI_FRONTEND_REGISTRY_URL

deploy:
  image:
    name: bitnami/kubectl:latest
  stage: deploy-uat
  before_script:
    - kubectl config set-cluster deploy-cluster --server="$K8S_SERVER" --insecure-skip-tls-verify
    - kubectl config set-credentials gitlab --token=$(echo $K8S_TOKEN | base64 -d)
    - kubectl config set-context deploy-cluster --cluster=deploy-cluster --namespace=ns-frontend-dev --user=gitlab
    - kubectl config use-context deploy-cluster
  script:
    - envsubst < deploy.tmpl > deploy.yaml
    - kubectl apply -f deploy.yaml
Initially I defined a runner for my GitLab with the shell executor. Docker is installed on that runner, which is why the build stage executed successfully. But since I would like to use multiple Docker images, as you can see in my gitlab-ci.yaml file, the shell executor is not the appropriate one.
I saw this documentation about GitLab executors, but it is not explicit enough.
I registered a new runner with the docker executor, and then I got this result:
Preparing the "docker" executor
Using Docker executor with image node:14-alpine ...
Starting service docker:latest ...
Pulling docker image docker:latest ...
Using docker image sha256:0f8d12a73562adf6588be88e37974abd42168017f375a1e160ba08a7ee3ffaa9 for docker:latest with digest docker#sha256:75026b00c823579421c1850c00def301a6126b3f3f684594e51114c997f76467 ...
Waiting for services to be up and running (timeout 30 seconds)...
*** WARNING: Service runner-jdn9pn3z-project-33-concurrent-0-0e760484a3d3cab3-docker-0 probably didn't start properly.
Health check error:
service "runner-jdn9pn3z-project-33-concurrent-0-0e760484a3d3cab3-docker-0-wait-for-service" health check: exit code 1
Health check container logs:
2023-01-18T15:50:31.037166246Z FATAL: No HOST or PORT found
and the deploy part did not succeed. What is the right executor to choose between:
docker, shell, ssh, kubernetes, custom, parallels, virtualbox, docker+machine, docker-ssh+machine, instance, docker-ssh
And how do I use it?
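For what it's worth, a common setup for pipelines like this is the docker executor plus a privileged docker:dind service. A sketch under those assumptions, first in the runner's config.toml (the runner must be registered with privileged = true):

[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true

and in the pipeline, use docker:dind (the daemon) as the service rather than the bare docker image, which is only the client and exposes no port; that mismatch is one plausible source of the "No HOST or PORT found" health-check failure above:

build:
  image: node:14-alpine
  stage: build-gitlab
  services:
    - docker:dind
  variables:
    # point the client at the dind service; disable dind's TLS bootstrap
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""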

GitLab CI docker cannot log in to Docker Hub

I have two projects on GitLab with the same CI config file and CI variables. When I try to build the Dockerfile, one project passes, but the second says:
Error: Cannot perform an interactive login from a non TTY device
Config:
image: docker:latest

services:
  - docker:dind

stages:
  - build

variables:
  CONTAINER_IMAGE: sleezy/go-hello-world:${CI_COMMIT_SHORT_SHA}

build:
  stage: build
  script:
    - docker login -u ${DOCKER_USER} -p ${DOCKER_PASSWORD}
    - docker build -t ${CONTAINER_IMAGE} .
    - docker tag ${CONTAINER_IMAGE} ${CONTAINER_IMAGE}
    - docker tag ${CONTAINER_IMAGE} sleezy/go-hello-world:latest
    - docker push ${CONTAINER_IMAGE}
As I said, everything is the same: variables, Docker Hub account (username, password), config, even the GitLab Runner version, so I really don't know why. Any help? Thanks.
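This error usually appears when one of the login arguments expands to empty, so docker login falls back to prompting interactively; a CI variable that is not available to the second project (for example, a protected variable on an unprotected branch) is a common cause. A hedged workaround that also keeps the password out of the process list:

- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USER" --password-stdin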

Gitlab docker in docker deployment unable to access private registry

I set up a private GitLab registry on a Docker host. On the same host, I'm trying to build test images and push them to said registry.
For some reason, this is not working. Here is my GitLab CI config:
stages:
  - build_testing
  - analytics
  - testing
  - build_deployment

variables:
  MYSQL_RANDOM_ROOT_PASSWORD: 'true'
  MYSQL_USER: 'dev'
  MYSQL_PASSWORD: 'dev'
  MYSQL_DATABASE: 'debitor_management_test'

# image: 10.11.12.41/laravel:v1
# services:
#   - name: mariadb:10.1
#     alias: mysql

image: docker:stable

services:
  - name: docker:dind
    command: ["--insecure-registry=10.11.12.41:443"]

build_testing:
  stage: build_testing
  script:
    - docker build -t 10.11.12.41/debitor_management_testing .
    - ping -c 5 10.11.12.41
    - docker push 10.11.12.41/debitor_management_testing
The ping command is working, but the docker push fails with
$ docker push 10.11.12.41/debitor_management_testing
The push refers to repository [10.11.12.41/debitor_management_testing]
Get https://10.11.12.41/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
ERROR: Job failed: exit code 1
How can I get this to work?
The error suggests that the CI runner cannot communicate with 10.11.12.41.
Every GitLab repository has an associated Container Registry for storing Docker images. You might be better off using that rather than running a custom registry for storing images. GitLab CI provides predefined variables to your CI jobs, such as CI_REGISTRY, CI_REGISTRY_IMAGE, CI_REGISTRY_USER, and CI_REGISTRY_PASSWORD, to help you access the registry associated with your repository.
If you use the built-in registry, you can write your build_testing job like the following.
build_testing:
  stage: build_testing
  script:
    - docker login --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker image build --tag $CI_REGISTRY_IMAGE .
    - docker image push $CI_REGISTRY_IMAGE
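A slightly hardened variant of the same job (a sketch using the same predefined variables) pins the tag to the commit and feeds the password over stdin:

build_testing:
  stage: build_testing
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login --username $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
    - docker image build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker image push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA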

Build docker images with GitLab CI and push to a self-signed HTTPS Nexus repo

I have a GitLab CI setup where I would like to build and push Docker images; the first problem was that my Nexus repo wasn't HTTPS.
The actual error message was this:
Error response from daemon: Get http://some.host:port/v2/: http: server gave HTTP response to HTTPS client
We use the docker:latest image to build Docker images, and I can't find a way to add our host as an insecure registry in .gitlab-ci.yml.
So I self-signed my Nexus repository in the hope that it would solve this, but that didn't work either and gives the following error message:
Error response from daemon: Get https://some.host:port/v2/: x509: certificate signed by unknown authority
This is my current CI setup:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker login -u USER -p PASSWORD some.host:port

stages:
  - build

build-image:
  stage: build
  script:
    - docker build -t some.host:port/image:alpine .
    - docker push some.host:port/image:alpine
  only:
    - master
  when: manual
So is there a simple solution, or an existing Docker image where I can configure insecure registries, or maybe some Docker magic on the command line? Or do I really need to create my own image to solve this?
You can launch docker:dind with a different command; see this URL for more details:
https://docs.gitlab.com/ce/ci/docker/using_docker_images.html#setting-a-command-for-the-service
So you need to update your .gitlab-ci.yml:
image: docker:latest

services:
  - name: docker:dind
    command: [ "--insecure-registry=some.host:port" ]

before_script:
  - docker info
  - docker login -u USER -p PASSWORD some.host:port

stages:
  - build

build-image:
  stage: build
  script:
    - docker build -t some.host:port/image:alpine .
    - docker push some.host:port/image:alpine
  only:
    - master
  when: manual
Then you can use an insecure HTTP registry.
This worked for me with a slight modification in syntax; command expects an array:
services:
  - name: docker:dind
    command: ["--insecure-registry=some.host:port"]

Docker container does not run on an EC2 instance that is part of an EC2 cluster

I am trying to automate the deployment process of my project. The environment looks like this:
we use GitLab to store our code
we execute a CI/CD pipeline within GitLab to build a Docker image and store it in an Amazon repository
once the build stage is completed, Docker has to run the latest image in the deployment stage on the first of two instances, and after successful execution scale the containers onto the second instance.
This is how the .gitlab-ci.yml file looks:
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2

testBuild:
  stage: build
  script:
    - docker login -u AWS -p <password> <link to Amazons' repo>
    - docker build -t <repo/image:latest> app/
    - docker push <repo/image:latest>

testDeploy:
  stage: deploy
  variables:
    AWS_DEFAULT_REGION: "us-east-2"
    AWS_ACCESS_KEY_ID: "access key"
    AWS_SECRET_ACCESS_KEY: "ssecretAK"
    AWS_CLUSTER: "testCluster"
    AWS_SIZE: "2"
  before_script:
    - apk add --update curl
    - curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
    - chmod +x /usr/local/bin/ecs-cli
  script:
    - docker login -u AWS -p <password> <repo_link>
    - docker run --rm --name <name-ofcontainer> -p 80:8000 -i <repo/image:latest>
    - ecs-cli configure --region $AWS_DEFAULT_REGION --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY --cluster $AWS_CLUSTER
    - ecs-cli scale --capability-iam --size $AWS_SIZE
  only:
    - development
Now, when the script executes successfully and I SSH into the instances and enter docker ps -a, it does not list a running container; it also does not find the image with docker images.
If I enter the commands manually on one of the instances, the website is available.
My question is: how do I make the container available?
EDIT 1:
We use a shared runner, if that is what you are asking. The reason we use docker:dind is that when we do not use it, the following error occurs and we cannot go further:
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
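This is the expected behavior of docker:dind: inside the job, the Docker CLI talks to the dind service container, so docker run starts the container next to the CI job, not on the EC2 instances, and it disappears when the job ends. One quick way to confirm from the job script (docker info's --format flag takes a Go template):

# prints the dind service's hostname, not an EC2 instance name
- docker info --format '{{.Name}}'

To actually run the image on the cluster, the deploy stage needs to go through ECS itself (a task definition or service referencing the pushed image) rather than a plain docker run.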
