GitLab-Runner cannot clone from local GitLab - docker

This is my setup:
I run GitLab using Docker and expose it on port 10080 to my machine.
I have a gitlab-runner on my machine that is configured to use the Docker executor.
When I connect the runner to my GitLab instance, I use localhost:10080 as the URL which works fine.
When the runner runs a job inside a Docker container, it tries to clone the code from localhost:10080, which obviously fails: inside the container, localhost does not refer to my local machine.
Now what are my options? Docker for Mac has a host.docker.internal DNS entry that refers to the host machine from inside a container, but I can't use it when I register the runner, because the runner itself runs directly on my machine.

I found a solution that works for me, but it could depend on the system.
In ~/.gitlab-runner/config.toml, under the [runners.docker] section, I just needed to add extra_hosts = ["localhost:172.17.0.1"] to override the IP for localhost inside job containers. The 172.17.0.1 address is the default docker0 bridge gateway and might vary on other people's machines.
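For reference, a minimal sketch of what that part of config.toml could look like (the URL matches the setup above; the image is just a placeholder):
[[runners]]
  url = "http://localhost:10080"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"                  # placeholder job image
    extra_hosts = ["localhost:172.17.0.1"]   # resolve localhost to the docker0 gateway inside job containers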

Related

AEM clean install -PautoInstallPackage with Jenkins in docker

Situation:
I set up Jenkins with Docker to install packages on an AEM instance.
In my pom.xml, the targetURL of the autoInstallPackage profile is "http://localhost:4502/crx/packmgr/service.jsp".
Problem:
When I set targetURL to an IP address, it works.
When I set targetURL to localhost, it fails.
What I want to do:
Run autoInstallPackage with a targetURL that uses localhost, not an IP address.
My guess at the cause (not sure):
Jenkins is running inside Docker, so localhost could be resolving to the Docker container itself.
Please help me. Thank you.
Your guess is right: inside the container, localhost refers to the container itself, not your machine.
To bypass this behaviour you can run the Docker image like so:
docker run --network host --name jenkins your_jenkins_image
The --network host parameter makes the container share the host machine's network stack, so localhost inside the container refers to the host (the machine running Docker).
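A quick sanity check (assuming curl is available in the image; the container name jenkins comes from the command above): with --network host, a request to localhost:4502 from inside the container should reach the AEM instance on the host:
docker exec jenkins curl -s -o /dev/null -w "%{http_code}\n" http://localhost:4502/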

Cannot connect to Docker daemon. Is the Docker daemon running?

I'm using Jenkins in Docker on my local Mac machine.
I'm also running Docker on an Ubuntu VirtualBox VM, so there are two Docker daemons: one on my Mac and one on the Ubuntu VM. Jenkins runs in Docker on the Mac. Now, in the Jenkins pipeline, I want to build an image on the Ubuntu machine.
I've configured a Jenkins Docker cloud, and the Docker host URL points to the Ubuntu docker-machine.
But while building a new image, I'm getting the error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I've even tried adding ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
in /lib/systemd/system/docker.service.
When I check ps -aux,
Can someone please help me out?
Help is appreciated.
First, personally, if I had a setup like that I would not bother connecting to the remote Docker daemon; I would just install a Jenkins agent on the Ubuntu machine and make it talk to the Jenkins master.
But if you want to do it the way you have it set up right now, with Jenkins talking from inside one Docker host to another Docker host, I suggest looking into the following:
Your Jenkins master and the Ubuntu machine are very isolated; they might as well be on different machines, not even in the same room. Unix domain sockets, the ones identified by unix://*, are made for communication within a single local OS kernel; trying to bridge them to a remote machine will lead to disaster.
So the only way Jenkins can communicate with the remote host is via a network protocol like TCP. Most of the time, when you install Docker with the default settings, it doesn't listen on TCP at all, mostly for security reasons.
The first thing you should do is configure the Docker daemon on the Ubuntu machine to listen on a TCP port and accept connections from remote hosts. You can use netstat -nat to see whether anything is listening on TCP 4243. When things are configured correctly, the netstat output will contain a line that starts with 0.0.0.0:4243 or something like that.
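As a sketch of that first step (assuming a systemd-based Ubuntu and port 4243, as in the question), one common approach is a systemd drop-in that overrides ExecStart, followed by a restart and the netstat check:
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243

# apply and verify
sudo systemctl daemon-reload
sudo systemctl restart docker
netstat -nat | grep 4243    # expect a LISTEN entry on 0.0.0.0:4243

# note: exposing the daemon over plain TCP is insecure; restrict access with a
# firewall or set up TLS before doing this anywhere that matters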
Second, you need to make sure the firewall/iptables/netfilter configuration on the Ubuntu host lets connections in from outside. A good test is to run telnet <ubuntu-ip> 4243 from a terminal session on your Mac.
Then you need to make sure that Docker networking is configured correctly, so that connections from inside the container running Jenkins end up on your Ubuntu box. To test this, docker exec -it into your Jenkins container and repeat the telnet test. On modern Linuxes telnet is usually not installed, so you can use curl -vvv instead; it will usually end with an error, so look at the verbose output to see whether the error occurred because things cannot communicate (timeout, connection reset, etc.) or because curl talked HTTP to the Docker daemon and got back a response it didn't expect. In the latter case you can consider things to be set up correctly.
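A sketch of that in-container test (assuming the Jenkins container is named <jenkins-container>, the Ubuntu box is reachable at <ubuntu-ip>, and curl is available in the image); the Docker Engine API's /_ping endpoint simply answers OK when you reach a live daemon:
docker exec <jenkins-container> curl -vvv http://<ubuntu-ip>:4243/_ping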
Finally, you need to tell Jenkins and the Docker CLI to communicate with the remote Docker daemon via TCP. Usually that address is given on the command line to docker run, docker ps, docker exec, and so on.
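For example (with <ubuntu-ip> as a placeholder, assuming the daemon now listens on 4243), you can point the Docker CLI at the remote daemon per command with -H or via the DOCKER_HOST environment variable, and use the same tcp:// URI as the Docker host URL in the Jenkins Docker cloud configuration:
docker -H tcp://<ubuntu-ip>:4243 ps
# or, for the whole shell session:
export DOCKER_HOST=tcp://<ubuntu-ip>:4243
docker ps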
I've configured it by defining the agent (slave) label in my Jenkins Pipeline.
Jenkins agents can run in a variety of environments, such as physical machines, virtual machines, Kubernetes clusters, and Docker images.
In your Jenkins Pipeline (Jenkinsfile), you have to set the agent according to what you're using, whether that's a Docker image or a virtual machine.
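For instance, a minimal declarative Jenkinsfile sketch; the label ubuntu-docker is a made-up example and has to match a label you actually assign to the agent or Docker cloud template:
pipeline {
    // run on any agent carrying the (hypothetical) label below
    agent { label 'ubuntu-docker' }
    stages {
        stage('Build') {
            steps {
                sh 'docker version'   // placeholder step
            }
        }
    }
}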
Also, thank you so much @Vlad, everything you told me was really helpful.

gitlab running inside docker container

I have a machine with SSH running on it. I wanted to run GitLab inside a Docker container, so I followed the instructions here: https://docs.gitlab.com/omnibus/docker/. The instructions say to bind the container's SSH port 22 to the host machine's SSH port (22). I was unable to do this because port 22 was already bound to the OpenSSH server on the host machine, so I bound the container's SSH port to another port, say 222. GitLab got set up, but when I try to clone a project over SSH, I am not able to.
Is there a way to fix this issue? What could be the reason? I suspect it's the port mapping. I want to keep SSH running on my host machine, run GitLab inside the container, and still be able to use SSH to clone, commit, and push code.
Docker port mapping is one thing, but you also need to adapt the GitLab Rails configuration in gitlab.rb to specify the custom SSH port:
gitlab_rails['gitlab_shell_ssh_port'] = 222
and restart the container.
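Putting the pieces together, a sketch of what this could look like (the hostname and project path are placeholders): map the port in docker run, set the port in gitlab.rb, then clone with an ssh:// URL that carries the custom port:
docker run -d -p 222:22 -p 80:80 -p 443:443 --name gitlab gitlab/gitlab-ce   # container SSH mapped to host port 222
# in /etc/gitlab/gitlab.rb inside the container:
#   gitlab_rails['gitlab_shell_ssh_port'] = 222
# then: docker exec gitlab gitlab-ctl reconfigure   (or restart the container)
git clone ssh://git@my-gitlab-host:222/mygroup/myproject.git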

How to configure dynamically provisioned Docker agents

I installed Docker on Windows 10 and pulled the Jenkins image from Docker Hub. Next, I started my Jenkins container:
docker run --rm -u root -p 8080:8080 -v my_host_path:/var/jenkins_home jenkins
Next, I used Manage Jenkins > Manage Plugins to install the Docker plugin, then went to the Configure page and tried to add a Docker Cloud.
After I entered the Docker Host URI tcp://127.0.0.1:2375, I clicked "Test Connection", but unfortunately it failed.
I tried to follow the instructions from this link:
How to find "Docker Host URI" to be used in Jenkins "Docker Plugin"?
But I can't find any Docker settings file under /etc/default/* in my Jenkins container, so I can't set the DOCKER_OPTS argument.
Could someone give me some advice? Thank you!
Problem context: the exercise at the end of Chapter 3 of the book "Continuous Delivery with Docker and Jenkins" by Rafal Leszko.
From the Configure and troubleshoot the Docker daemon page:
Important: Setting hosts in the daemon.json is not supported on Docker Desktop for Windows or Docker Desktop for Mac.
Setting the Docker host URI this way does NOT work on Windows, so neither of these will work in the Settings > Daemon tab:
"hosts" : "-H tcp://0.0.0.0:2375"
"DOCKER_OPTS" : "-H tcp://0.0.0.0:2375"
Exposing the daemon without TLS (the checkbox on the General tab), as recommended in some places, did not work for me either.
The solution for connecting the Docker plugin in Jenkins to the Docker host is:
use the special DNS name host.docker.internal
From the docs:
How do I connect from a container to a service on the host?
Windows has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows.
The gateway is also reachable as gateway.docker.internal.
For more information about the networking features in Docker Desktop for Windows, see Networking.
While the 'will not work in a production environment outside of Docker Desktop for Windows' disclaimer might bother some, I believe Docker for Windows is not meant for production use cases anyway.
Additionally, publish the mapping -p 50000:50000 so that Jenkins agents can communicate with the master.
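Putting it together, a sketch that assumes the daemon is actually reachable on port 2375 from the host: start Jenkins with both ports published (the run command is the one from the question plus the agent port), then enter the host URI using the special DNS name in the Docker Cloud settings:
docker run --rm -u root -p 8080:8080 -p 50000:50000 -v my_host_path:/var/jenkins_home jenkins
# Docker Host URI in the Jenkins Docker Cloud configuration:
#   tcp://host.docker.internal:2375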

Local Docker connection to Kubernetes Cluster

I want to connect a Docker container running locally to a service running on a Kubernetes cluster. To do so, I have exposed the service by reserving some static IP addresses.
I have also saved those IP addresses locally, in the /etc/hosts file:
123.123.123.12 host1
456.456.456.45 host2
I want to link my container to those hosts so that all the traffic is routed to those addresses and can be processed by the cluster. I am using Docker's link feature, but it isn't working.
Can I connect directly using the IP? How should I do this?
It makes no difference whether or not the client runs in Docker. However you have the service exposed from Kubernetes, you'd make the same connection to it from a process running on an external host or from a process running in a Docker container on that host.
Say, as in the example in the Kubernetes documentation, you're running a NodePort service that's accessible on port 31496 on every node in the cluster, and you're trying to connect to it from outside the cluster. Maybe as in the question 123.123.123.12 is some node in the cluster. A typical setup would be to get the location of the service from an environment variable (JavaScript process.env.THE_SERVICE_URL; Ruby ENV['THE_SERVICE_URL']; Python os.environ['THE_SERVICE_URL']; ...).
When you're developing, you could set that variable in your local shell:
export THE_SERVICE_URL=http://123.123.123.12:31496
cd here && ./kubernetes_client_script.py
When you go to deploy your application, you can set the same environment variable:
docker run -e THE_SERVICE_URL=http://123.123.123.12:31496 me:k8s-client

Resources