GitLab CI - registry and nginx - docker

I am trying to set up a self-hosted GitLab CI instance with its own registry. I am using self-signed certificates for TLS; the certificate is signed by my own CA, which is installed as a trusted CA on my host machine.
GitLab CE 13.6.3 is installed on Ubuntu 18.04, and a snap-based microk8s cluster runs on the same host.
Questions (some very basic):
Does the GitLab registry use the Docker daemon?
How is the connectivity achieved?
Docker client --> NGINX (5050) --> GitLab registry (5000)
I have the following configuration in my gitlab.rb file:
registry['enable'] = true
registry['registry_http_addr'] = "127.0.0.1:5000"
registry['log_directory'] = "/var/log/gitlab/registry"
registry['env'] = {
'SSL_CERT_DIR' => "/etc/gitlab/ssl"
}
# Below you can find settings that are exclusive to "Registry NGINX"
registry_nginx['enable'] = true
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.local.crt"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.local.key"
registry_nginx['proxy_set_headers'] = {
"Host" => "$http_host",
"X-Real-IP" => "$remote_addr",
"X-Forwarded-For" => "$proxy_add_x_forwarded_for",
"X-Forwarded-Proto" => "https",
"X-Forwarded-Ssl" => "on"
}
# When the registry is automatically enabled using the same domain as `external_url`,
# it listens on this port
registry_nginx['listen_port'] = 5050
registry_nginx['listen_addresses'] = ['*', '[::]']
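As a side check, the chain this configuration implies can be probed from the host itself. A minimal sketch, assuming the bundled registry speaks plain HTTP on 127.0.0.1:5000 while the registry NGINX terminates TLS on 5050 and proxies to it:
# direct to the registry process, bypassing nginx (a 401 with a JSON body means it is up and expects token auth)
curl -si http://127.0.0.1:5000/v2/
# through the registry NGINX; uses the host's CA store, where the custom CA is already trusted
curl -si https://gitlab.local:5050/v2/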
When I try to docker login, the following results are observed. Is this expected given the above configuration?
- with URL: https://127.0.0.1:5000 -> login success
- with URL: https://127.0.0.1:5050 -> login success
- with URL: https://gitlab.local:5050 -> x509: certificate signed by unknown authority
I have GitLab k8s and Docker runners. Can they access the GitLab registry (NGINX) port 5050 from within a container?
[[runners]]
name = "docker"
token = "xxxxxxx"
executor = "docker"
[runners.docker]
image = "docker:stable"
privileged = true
volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
Note: I have tried suggestions from various GitLab forum posts about certificate issues when building/pushing images to the GitLab registry, but with no success.
Thank you

Try placing the certificate where the Docker daemon looks for per-registry CA certificates:
sudo mkdir -p /etc/docker/certs.d/gitlab.local:5050
sudo cp /yourcerts/gitlab.local.crt /etc/docker/certs.d/gitlab.local:5050/ca.crt
sudo service docker reload
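If the CA is picked up, a login against the hostname (rather than 127.0.0.1) should now pass certificate verification; a quick check:
docker login gitlab.local:5050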

Related

gitlab docker x509 certificate error on login

We have CI running in a Docker executor with docker-dind on GitLab. Here it is:
docker-build-job:
  stage: build
  image: docker:20.10.6
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.our.ru
config.toml:
[runners.docker]
image = "docker:20.10.6"
tls_verify = false
privileged = true
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/cache", "/certs/client", "/usr/share/ca-certificates:/certs"]
shm_size = 0
[[runners.docker.services]]
alias = "docker"
name = "docker:20.10.12-dind"
volumes = ["/cache", "/certs/client", "/etc/gitlab-runner/certs:/certs/ca:ro"]
command = ['/bin/sh', '-c', 'ls -alh /certs/client && dockerd-entrypoint.sh || exit']
I have the following questions, please help:
There is docker:20.10.12-dind in the runners.docker.services section of config.toml. As far as I understand, the scripts of all CI jobs will be executed inside the docker-dind container, regardless of whether the job itself declares 'services: docker:19.03.12-dind'. Am I right?
So, will this filled-in [[runners.docker.services]] section automatically execute job scripts inside the dind container?
As far as I understand, this command is executed in the dind container:
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.our.ru
The following error appears:
Error response from daemon: Get "https://registry.our.ru/v2/": x509: certificate signed by unknown authority
I also executed openssl s_client -showcerts -connect registry.our.ru:443 and got a response with 'Verification: OK', so I understand that all my certificates are right. I can log in to registry.our.ru from my gitlab-runner machine with no problem.
Please tell me what I am doing wrong.
There is the following text in the Docker registry config file (config.yml):
auth:
  token:
    realm: https://gitlab.our.ru/jwt/auth
    service: container_registry
    issuer: omnibus-gitlab-issuer
    rootcertbundle: /etc/docker/registry/ssl/gitlab-registry.crt
Do I understand correctly that $CI_BUILD_TOKEN is involved in the creation of the certificate? Where should this certificate be located in the dind container? Is this certificate verified against the root certificate located at /etc/docker/registry/ssl/gitlab-registry.crt?
Thank you in advance!
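For context, a minimal sketch of one common way to deal with this in dind setups: the dockerd inside the docker:dind service only trusts CAs it can see in its own filesystem, typically under /etc/docker/certs.d/<registry>/ca.crt. The file name and host path below are assumptions, not taken from the post:
# on the runner host: keep the registry CA somewhere you can bind-mount (hypothetical path and name)
sudo mkdir -p /etc/gitlab-runner/certs
sudo cp registry.our.ru.crt /etc/gitlab-runner/certs/
# then the dind service in config.toml would need a mount such as:
#   volumes = ["/etc/gitlab-runner/certs/registry.our.ru.crt:/etc/docker/certs.d/registry.our.ru/ca.crt:ro"]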

Failed to pull image from private docker registry in Kubernetes Cluster [duplicate]

Trying to add an insecure registry to the containerd config as below:
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
max_conf_num = 1
conf_template = ""
[plugins."io.containerd.grpc.v1.cri".registry]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io"]
[plugin."io.containerd.grpc.v1.cri".registry.mirrors."test.http-registry.io"]
endpoint = ["http://v048011.dom600.lab:5000"]
Even after adding it to config.toml, pulling an image from the insecure registry fails:
sudo ctr image pull v048011.dom600.lab:5000/myjenkins:latest
ctr: failed to resolve reference "v048011.dom600.lab:5000/myjenkins:latest": failed to do request: Head https://v048011.dom600.lab:5000/v2/myjenkins/manifests/latest: http: server gave HTTP response to HTTPS client
In Docker we could just add the insecure registry to the daemon.json file and Docker would pull images from it. How can I achieve the same in containerd?
I am replacing Docker as the runtime in my k8s cluster.
ctr does not read the /etc/containerd/config.toml config file; that config is used by the CRI plugin, which means kubelet or crictl would use it.
The error http: server gave HTTP response to HTTPS client shows that the registry is serving HTTP, but ctr is trying to connect to it over HTTPS. So if you want to pull the image over HTTP, you should pass the --plain-http flag to ctr, like this:
$ ctr i pull --plain-http <image>
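Applied to the image from the question (assuming the registry really serves plain HTTP on that port):
$ sudo ctr image pull --plain-http v048011.dom600.lab:5000/myjenkins:latest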
The registry config doc is here.
You should be able to pull the image with crictl; remember to restart containerd after changing the config.
$ sudo crictl -r /run/containerd/containerd.sock pull <image>
# or configure the runtime endpoint once and for all
$ sudo crictl config runtime-endpoint /run/containerd/containerd.sock
$ sudo crictl pull <image>
Config example:
# /etc/containerd/config.toml
# change <IP>:5000 to your registry url
[plugins."io.containerd.grpc.v1.cri".registry]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."<IP>:5000"]
endpoint = ["http://<IP>:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."<IP>:5000".tls]
insecure_skip_verify = true
Restart the service after configuration modification.
$ sudo systemctl restart containerd
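With that config in place, the image from the question should be pullable through CRI (the <IP>:5000 placeholder matches the config above):
$ sudo crictl pull <IP>:5000/myjenkins:latest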
Adding the following config:
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
max_conf_num = 1
conf_template = ""
[plugins."io.containerd.grpc.v1.cri".registry]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."test.http-registry.io"]
endpoint = ["http://v048011.dom600.lab:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."test.http-registry.io".tls]
insecure_skip_verify = true
should skip TLS verification for the test registry. See also the documentation on registry TLS communication configuration.
Edit: Please note the "s" in plugins, there is a typo in your configuration.
NOTE: Be sure to restart containerd afterwards:
$ sudo systemctl restart containerd
In my case, I simply added a [[registry]] entry to the /etc/containers/registries.conf file, because I was using CRI-O:
[[registry]]
insecure = true
location = "IP ADDRESS"
and restarted CRI-O:
systemctl restart crio.service
Please refer to https://github.com/cri-o/cri-o/blob/main/docs/crio.conf.5.md

DinD configuration for Gitlab CI with private Docker registry in Sonatype Nexus 3

I have set up my own GitLab (-p 7022:22, 7080:9080), GitLab Runner (-p 7093:8093), and Sonatype Nexus 3 (Maven, Docker, Helm) (-p 10081:8081, 10082:10082, 10083:10083, 10084:10084). All of them run as Docker containers. Everything runs great up to docker build (the snippets below do not include the docker build code). The problem is that I want to upload the final Docker image to my Nexus 3 Docker registry, which I am unable to do.
My GitLab Runner config is below:
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "testing dind runner"
url = "http://192.168.0.250:7080/" ----> Gitlab git repo external url
token = "SOME TOKEN"
executor = "docker"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
tls_verify = false
image = "docker:19.03.12"
privileged = true
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
shm_size = 0
My Docker registry in Nexus 3 is hosted as below:
Web UI = http://192.168.0.250:10081/ (container internal port is 8081 and exposed to host on 10081)
Docker(Group) = 10084 (exposed as the same port through docker)
Docker(Hosted) = 10082 (exposed as the same port through docker)
Docker(Proxy) = 10083 (exposed as the same port through docker)
My project CI config is below:
image: docker:19.03.12
services:
  - name: docker:19.03.12-dind
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
before_script:
  - docker info
  - docker login -u nx-uploader -p 1234 192.168.0.250:10082
stages:
  - test docker reg
test-docker:
  stage: test docker reg
  script:
    - docker images
    - docker search httpd
I am constantly getting errors like this:
$ docker login -u nx-uploader -p 1234 192.168.0.250:10082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
error during connect: Post http://docker:2376/v1.40/auth: dial tcp: lookup docker on 192.168.0.1:53: no such host
ERROR: Job failed: exit code 1
Need help/pointers to fix this.
Thanks in advance.
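Two things worth checking, sketched here with placeholder credentials (not taken from the original thread):
# 1) confirm the Nexus hosted Docker repository accepts logins at all, straight from the runner host
#    (Nexus typically needs the Docker Bearer Token realm enabled for this to work)
docker login -u nx-uploader -p '<password>' 192.168.0.250:10082
# 2) since the runner bind-mounts /var/run/docker.sock, jobs already talk to the host daemon; dropping the
#    dind service and the DOCKER_HOST/DOCKER_TLS_CERTDIR variables from .gitlab-ci.yml avoids the failing
#    DNS lookup of the "docker" alias entirely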

GitLab with Docker runner on localhost: how to expose host to container?

I'm learning to use GitLab CI.
Right now I'm using GitLab on localhost (external_url "http://localhost"), and I've registered a Docker runner with the vanilla ubuntu:20.04 image and tried to run a test job on it.
Alas, it tries to clone my repo from the localhost repository inside the container, but cannot, because my host's port 80 is not visible from the container.
Running with gitlab-runner 13.5.0 (ece86343)
on docker0 x8pHJPn7
Preparing the "docker" executor
Using Docker executor with image ubuntu:20.04 ...
Pulling docker image ubuntu:20.04 ...
Using docker image sha256:d70eaf7277eada08fca944de400e7e4dd97b1262c06ed2b1011500caa4decaf1 for ubuntu:20.04 with digest ubuntu@sha256:fff16eea1a8ae92867721d90c59a75652ea66d29c05294e6e2f898704bdb8cf1 ...
Preparing environment
Running on runner-x8phjpn7-project-6-concurrent-0 via gigant...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/root/ci_fuss/.git/
fatal: unable to access 'http://localhost:80/root/ci_fuss.git/': Failed to connect to localhost port 80: Connection refused
Uploading artifacts for failed job
Uploading artifacts...
WARNING: report.xml: no matching files
ERROR: No files to upload
Cleaning up file based variables
ERROR: Job failed: exit code 1
How can I get my Docker runner to expose the host's localhost:80 as the container's localhost:80?
Well, I have coped with this.
I have added network_mode = "host" to my runner configuration in /etc/gitlab-runner/config.toml to make my containers use the host's network connections.
I've also added pull_policy = "if-not-present" to first search for the container image locally, then in the remote repo.
[[runners]]
name = "docker0"
url = "http://localhost/"
token = "TTBRFis_W_yJJpN1LLzV"
executor = "docker"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
tls_verify = false
image = "exposed_ctr:latest"
privileged = false
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/cache"]
shm_size = 0
network_mode = "host"
pull_policy = "if-not-present"
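A quick way to see what network_mode = "host" buys you: a container on the host network reaches the host's port 80 as localhost. A small sketch using the stock busybox image:
# with host networking, localhost inside the container is the host itself
docker run --rm --network host busybox wget -q -O /dev/null http://localhost:80/ && echo "host port 80 reachable"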

GitLab Runner is unable to access the repository

I am trying to set up a GitLab Runner to use GitLab CI instead of my Jenkins.
I set up a Docker container with the Docker socket bind-mounted:
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /config/file/on/host:/etc/gitlab-runner \
gitlab/gitlab-runner:latest
After the container was running, I registered a new runner with the GitLab server, which resulted in the following configuration:
concurrent = 1
check_interval = 0
[[runners]]
name = "lianli"
url = "<https://gitlab_server.de"
token = "<secret>"
executor = "docker"
[runners.docker]
tls_verify = false
image = "debian:latest"
privileged = false
disable_cache = false
volumes = ["/cache"]
shm_size = 0
[runners.cache]
So now everything is connected. But when the pipeline runs, it ends with an access error:
remote: Git access over HTTP is not allowed
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@gitlab_server.de/group/project.git/': The requested URL returned error: 403
ERROR: Job failed: exit code 1
My .gitlab-ci.yml looks like:
stages:
  - test
variables:
  NGINX: nginx:stable-alpine
before_script:
  - docker info
test:
  stage: test
  script:
    - docker build -t nginx_test .
I do not understand why it cannot access the repository.
Note: the runner is version 9.3.0 and GitLab is version 9.3.2.
Is your GitLab instance configured to accept HTTP requests to repositories?
Are you a member of the project?
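One way to narrow down the first question is to reproduce the clone outside CI with the same URL shape (username and token below are placeholders):
# if this also prints "Git access over HTTP is not allowed", HTTP(S) Git access is disabled instance-wide
# (the "Enabled Git access protocols" setting in the GitLab admin area)
git clone https://<username>:<personal-access-token>@gitlab_server.de/group/project.git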
