I have a private GitLab server running in the cloud (Bitnami image). I have a custom domain registered to the public IP of the GitLab server and a Let's Encrypt certificate generated for this domain. I can access the GitLab server at https://mycustomdomain/.
I have installed gitlab-runner on a Linux host and successfully registered it (Docker executor) with the GitLab server (https://mycustomdomain/).
Now when I run the pipeline, it fails with the following message:
Pulling docker image node:latest ...
Using docker image sha256:2a0d8959c8e1b967d926059e555fdd23926c8fff809a0cf5fab373e694bbce64 for node:latest ...
Running on runner-PcudM7CB-project-1-concurrent-0 via my-gitlab-worker...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/root/microcities/.git/
Created fresh repository.
fatal: unable to access 'https://<my gitlab public IP>/root/microcities.git/': SSL: no alternative certificate subject name matches target host name 'my gitlab public IP'
ERROR: Job failed: exit code 1
Why does the runner/Docker container refer to the GitLab server by its IP rather than by the domain name?
The solution is to update the GitLab server configuration. In my case that means running:
cd /opt/bitnami/apps/gitlab
sudo ./bnconfig --machine_hostname DOMAIN-NAME
This is well covered in the Bitnami documentation; my bad, I missed this step.
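After the hostname change, a quick sanity check from the runner host is to list the repository over the domain; this sketch reuses the project path from the log above, so adjust it to your own project:
# verify the repository is reachable over the custom domain (project path taken from the log above)
git ls-remote https://mycustomdomain/root/microcities.git
If that lists refs, newly created CI jobs should also be handed the domain-based clone URL instead of the IP.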
I am trying to connect my Jenkins server to my private repository on gitlab.com.
I have already added a GitLab API access token to my Jenkins server and added the Jenkins public key to the SSH keys of my GitLab account.
Upon adding my GitLab repository to my Jenkins pipeline I get the error below:
Failed to connect to repository : Command "git ls-remote -h -- git@gitlab.com:user_name/repo_name.git HEAD" returned status code 128:
stdout:
stderr: Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
When I try to run the following command on my Jenkins server:
ssh -T git@gitlab.com:user_name/repo_name.git
I get the following error:
ssh: Could not resolve hostname gitlab.com:user_name/repo_name.git: Name or service not known
I am not able to figure out why my Jenkins server is unable to access the repository even after providing the SSH keys and the access token.
"API access token of gitlab"
That would be used for HTTPS access.
"added the Jenkins public key to the ssh-keys of gitlab account"
That is relevant for an SSH URL, and means you need to set the right credentials in your Jenkins job (the credential referencing the Jenkins private key, whose public key was published to GitLab).
I would first test on the Jenkins server:
ssh -Tv git@gitlab.com
Also check the content of the ~/.ssh/config file of the account running Jenkins for any gitlab.com Host entry.
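If the key pair used by Jenkins is not that account's default id_rsa, a minimal ~/.ssh/config entry for the account running Jenkins could look like the sketch below (the key path is an assumption, point it at the key whose public half was added to GitLab):
Host gitlab.com
  HostName gitlab.com
  User git
  # assumed location of the Jenkins private key
  IdentityFile ~/.ssh/jenkins_gitlab
  IdentitiesOnly yes
Note that the ssh -T test takes only user@host; the repository path (user_name/repo_name.git) belongs in the git URL, not in the SSH hostname, which is why the earlier command could not resolve the host.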
I am trying to test my Python project and run it via GitLab CI. I have installed a runner on my Ubuntu notebook and completed registration with a local GitLab server.
So I have two separate machines, one being the runner and the other the GitLab server. Both machines can communicate with each other.
Notebook(192.168.100.10) ---- GitLab(172.16.10.100)
Once I commit a test, my job fails with the message below:
Reinitialized existing Git repository in /builds/dz/mytest/.git/
fatal: unable to access 'http://gitlab.lab01.ng/dz/mytest.git/': Could not resolve host: gitlab.lab01.ng
Uploading artifacts for failed job
ERROR: Job failed: exit code 1
From my notebook CLI, I can ping the GitLab server IP but not the hostname; even curl does not know the hostname.
I believe this has something to do with DNS not being able to resolve the name.
I added the hostname to my notebook's /etc/hosts; I can then ping the hostname, but the job still fails with the same message.
Following a suggestion, I added the lines below to the gitlab-runner config.toml (not sure whether this is the correct place to add them):
[[runners]]
dns_search = [""]
It still fails with the same "could not resolve host" message.
What can I do in my notebook/runner settings? I don't have admin access to GitLab to check further.
Has anyone faced the same problem? Any help and support is appreciated, thank you.
--For information: I have tried testing the runner on my notebook against public GitLab (gitlab.com) and I can run the job successfully without any error message--
I'm assuming you are using Docker as the executor for your GitLab runner, since you did not specify it in your question. The Docker executor does not share the host machine's /etc/hosts, but you can use the extra_hosts parameter inside your config.toml to let the containers created by the runner know about the custom hostname:
[runners.docker]
extra_hosts = ["gitlab.lab01.ng:172.16.10.100"]
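For context, a minimal sketch of where that section sits inside /etc/gitlab-runner/config.toml; the runner name, URL, token and default image are placeholders, only the extra_hosts line is the actual fix:
[[runners]]
  name = "my-notebook-runner"             # placeholder
  url = "http://gitlab.lab01.ng/"         # placeholder, the GitLab URL used at registration
  token = "REGISTRATION-TOKEN"            # placeholder
  executor = "docker"
  [runners.docker]
    image = "python:3"                    # placeholder default image
    extra_hosts = ["gitlab.lab01.ng:172.16.10.100"]
After editing, restart the runner (for example with gitlab-runner restart) so the new configuration is picked up.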
I'm trying to configure and execute GitLab CI jobs, which is why I have two Docker containers:
gitlab/gitlab-runner:latest
gitlab/gitlab-ce:latest
Now every GitLab CI build fails with the error:
Running with gitlab-runner 12.3.0 (a8a019e0) on sonar-runner rRZ6XQ2Y
Using Docker executor with image alpine:latest ...
Pulling docker image alpine:latest ...
Using docker image sha256:cc0abc535e36a7e for alpine:latest ...
Running on runner-rRZ6XQ2Y-project-2-concurrent-0 via pc-user...
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/root/sonar-gitlab-kotlin/.git/
fatal: unable to access 'http://gitlab-ci-token:[MASKED]@localhost/root/sonar-gitlab-kotlin.git/': Failed to connect to localhost port 80: Connection refused
Reinitialized existing Git repository in /builds/root/sonar-gitlab-kotlin/.git/
In order to fix the problem,
I changed the url parameter in /etc/gitlab-runner/config.toml,
then restarted the runner inside the gitlab-runner container: root@pc-user:/# gitlab-runner restart
and restarted the container itself: $ docker restart gitlab-runner
So now I have:
root@pc-user:/# gitlab-runner list
Runtime platform arch=amd64 os=linux pid=45 revision=a8a019e0 version=12.3.0
Listing configured runners ConfigFile=/etc/gitlab-runner/config.toml
sonar-runner Executor=docker Token=rRZ6XQ2YWXxxxxxxx URL=http://192.168.74.12/
However, I'm still getting the error:
fatal: unable to access 'http://gitlab-ci-token:[MASKED]@localhost/root/sonar-gitlab-kotlin.git/': Failed to connect to localhost port 80: Connection refused
But I expected to have
http://gitlab-ci-token:[MASKED]@192.168.74.12/root/sonar-gitlab-kotlin.git/
Could you please clarify: am I wrong in my expectation, or is there another parameter that should be changed?
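For reference, the url parameter the runner uses only affects runner-to-GitLab API traffic; the clone address jobs use normally comes from the GitLab instance's configured external URL, unless it is overridden on the runner side. A minimal sketch of such an override in /etc/gitlab-runner/config.toml (the address is the one from the expectation above):
[[runners]]
  url = "http://192.168.74.12/"
  clone_url = "http://192.168.74.12"   # overrides the address jobs use for git fetches
Setting external_url on the gitlab-ce container to a reachable address is the other common way to stop the clone URL from pointing at localhost.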
I am new to JFrog Artifactory. I am doing a Docker registry setup and having the issue below. Please help me with this.
Infrastructure setup:
We have Artifactory installed on a VM in our datacenter, running on Tomcat port 8082, so if I need to access it from localhost I can just call https://localhost:8082/artifactory. We also have a URL set up for it in an Apache server (which does the reverse proxying): https://dev-tools.xyz.com/artifactory, for which we have our company (xyz) wildcard certs.
Docker registry setup: In the GUI I clicked "Admin", then "Repositories", then "Local", clicked "New", selected Docker as the repo type, set the repository name to "docker-local", left everything else at the defaults and saved it. I hope at this point I am all done with the setup.
Now I went to my Docker server; it is a different server from my Artifactory server. There I ran:
[root@linuxvm1234 ~]# docker login -u adm -p xxxx https://devtools.xyz.com/artifactory/api/docker/docker-local
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: login attempt to https://devtools.xyz.com/v2/ failed with status: 404 Not Found
My question: what is wrong with this URL?
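One thing worth noting: the Docker client only uses the host part of the registry address and always probes https://<host>/v2/, which matches the 404 in the error above, since Apache is not forwarding /v2/ to Artifactory. If you stay with the repository-path approach, the usual pattern is a rewrite in the Apache reverse proxy along these lines (a sketch only; the repository name and paths are assumptions to adapt to your VirtualHost):
RewriteEngine On
# forward the Docker client's /v2/ requests to the docker-local repository API in Artifactory
RewriteRule "^/v2/(.*)$" "/artifactory/api/docker/docker-local/v2/$1" [PT]
With something like that in place, the login is done against the plain host, for example: docker login -u adm devtools.xyz.com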
All,
I am using DCOS and the associated Jenkins.
My company has a proxy for any external traffic.
Jenkins is running properly and can access the internal network as well as any external network.
I can get jobs to curl a URL on the internet if I set the HTTP proxy. I can pass this proxy to the mesosphere/jenkins-dind:0.3.1 container as an environment variable; however, I can't run any docker pull or docker run while in docker-in-docker mode.
I managed to reproduce the issue on one of the agent boxes.
sudo docker run hello-world
Hello from Docker!
This works!!
However, sudo docker run --privileged mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world" will fail with
docker: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate is valid for FG3K6C3A13800607, not index.docker.io.
This typically shows that the Docker daemon does not have access to the proxy.
Do you know how to ensure that the dind container gets access to the proxy settings?
Antoine
This error can also manifest itself if the Docker daemon is unauthenticated against your registry, but it looks like you're running against the public image, so that's not likely to be the problem.
You could try creating a new parameter on the Jenkins node (see the instructions here for an example of how to set an environment variable called DOCKER_EXTRA_OPTS: https://docs.mesosphere.com/1.8/usage/service-guides/jenkins/advanced-configuration/).
In this case, we want to do the same (with the Name set to env) but with the contents of Value set to something like HTTP_PROXY=http://proxy.example.com:80/.
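As a quick check outside Jenkins, the proxy variables can also be passed straight to the dind container with docker run -e. This is only a sketch: it assumes wrapper.sh starts the inner Docker daemon with the environment it inherits, and the proxy address is the example one from above:
# pass proxy settings into the dind container so the inner daemon can reach index.docker.io
sudo docker run --privileged \
  -e HTTP_PROXY=http://proxy.example.com:80/ \
  -e HTTPS_PROXY=http://proxy.example.com:80/ \
  -e NO_PROXY=localhost,127.0.0.1 \
  mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world"
If this pulls hello-world successfully, the same variables set through the node's env parameter should work for the Jenkins-launched agents as well.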