Trigger remote Jenkins job on GCE VM - jenkins

I am currently running Jenkins on a GCE VM. As a build step, I want to trigger a Jenkins job on another VM in the same project. The problem is that HTTP and HTTPS access to the VMs is disabled, so I cannot use curl to trigger the job remotely. An SSH tunnel remains the only option, but SSH-ing onto a VM requires a google_compute_engine private key file, which logs you in as a particular user. I am confused about how to use this file for the Jenkins user, which does not have a separate shell, and was hoping for some advice. Thanks in advance!

The easiest way is to apply the default firewall rules that allow HTTP and HTTPS traffic to the instance, either by ticking the Allow HTTP traffic and Allow HTTPS traffic checkboxes in the instance's detail view in the Developers Console, or by adding the http-server and https-server tags manually with the gcloud command:
gcloud compute instances add-tags INSTANCE --tags http-server https-server
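Once HTTP traffic is allowed, the build step can trigger the remote job with a plain curl call against Jenkins' remote-trigger endpoints. A minimal sketch, assuming a hypothetical job named remote-job with a build token configured, Jenkins listening on 8080, and placeholder values for the IP and token (add --user USER:API_TOKEN if your security setup requires authentication). Note that the default http-server/https-server rules only open ports 80 and 443, so if Jenkins listens on 8080 directly you will also need a rule for that port (see the last option below):
# trigger the job
curl -X POST "http://REMOTE_VM_IP:8080/job/remote-job/build?token=MY_TOKEN"
# or, for a parameterized job (TARGET_ENV is just an example parameter)
curl -X POST "http://REMOTE_VM_IP:8080/job/remote-job/buildWithParameters?token=MY_TOKEN&TARGET_ENV=staging"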
Setting up an SSH tunnel to the Jenkins service port is also a possibility, and it doesn't necessarily require using the google_compute_engine key. You can configure an additional key and copy the public part of that key into the Jenkins user's authorized_keys file directly, as you would with any other server. See this link for more details. If you use a custom SSH key, remember to specify the corresponding private key when setting up the tunnel.
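A minimal sketch of that route, with assumed key paths, user names and ports (adjust them to your setup); the -N flag only sets up the forwarding and does not request a shell, which matters if the jenkins account has no login shell:
# on the VM that runs the build step: create a dedicated key
ssh-keygen -t rsa -f ~/.ssh/jenkins_tunnel -N ""
# append ~/.ssh/jenkins_tunnel.pub to the jenkins user's ~/.ssh/authorized_keys on the target VM, then:
ssh -i ~/.ssh/jenkins_tunnel -f -N -L 18080:localhost:8080 jenkins@TARGET_VM_IP
# trigger the job through the tunnel
curl -X POST "http://localhost:18080/job/remote-job/build?token=MY_TOKEN"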
Another, more straightforward option would be creating new firewall rules for the Jenkins ports and applying them to the Jenkins instance.
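A sketch of that with gcloud, assuming Jenkins listens on 8080; the rule name, tag and source range below are placeholders you would replace with your own:
gcloud compute firewall-rules create allow-jenkins --allow tcp:8080 --target-tags jenkins-server --source-ranges 10.0.0.0/8
gcloud compute instances add-tags INSTANCE --tags jenkins-server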

Related

github webhook fails to connect to jenkins with public ip

I am trying to configure GitHub webhooks with my Jenkins server but I keep getting "failed to connect". Note that I am using a public IP and not a private or localhost address. At first, the ICMP protocol was blocked on my firewall, but even after allowing it, it still doesn't work.
However, when I proxy my server (using the smee client) and use the proxied URL in the webhook instead, it works fine. So I thought the problem was the Jenkins URL (in Jenkins' system configuration) and changed that to the public IP, but it doesn't have any effect. Now I'm clueless.
It might be relevant to mention that Jenkins is running in a Docker container.
Apparently the webhook must pass through a web server rather than going to Jenkins directly, so I configured nginx as a reverse proxy in front of the Jenkins server and it worked fine.
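A minimal sketch of such a reverse proxy, assuming Jenkins listens on port 8080 on the same host; the server_name is a placeholder:
server {
    listen 80;
    server_name jenkins.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The GitHub payload URL then points at http://jenkins.example.com/github-webhook/ rather than at the container's mapped port.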

How to fix "We couldn’t deliver this payload: Couldn't connect to server" in github webhook while setting up a jenkins trigger?

I'm setting up a webhook in GitHub to trigger a Jenkins build for every push event. I'm running Jenkins from a Blue Ocean Docker container.
https://imgur.com/bNf5dMd
I'm able to access Jenkins from http://192.168.99.101:32771/, as specified in the Docker container.
I have specified the git repository and checked the "GitHub hook trigger for GITScm polling" checkbox.
I am able to manually kick off the build process after a commit, but when I set up the webhook in GitHub with the payload URL http://192.168.99.101:32771/github-webhook/ and commit something,
I get the error "We couldn’t deliver this payload: Couldn't connect to server"
Other solutions I've looked at:
Using ngrok. But I'm not running this on localhost.
I also tried using a Personal Access Token and letting Jenkins create the webhook in GitHub automatically, and I got the same error.
What am I missing, or what am I doing wrong?
GitHub will never reach your Jenkins server, as your server is only accessible within your own network.
The error is very clear.
We couldn’t deliver this payload: Couldn't connect to server
http://192.168.99.101:32771 is effectively the same as localhost in terms of reaching it from outside the network.
Possible solutions:
Run Jenkins on a remote server with Internet access and use that server's IP in the GitHub webhook.
Give GitHub your public IP; you may also need to define port forwarding on your router or firewall if access still fails.
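A quick way to check whether GitHub can actually reach the endpoint is to call it from a machine outside your network; the host and port below are placeholders, and the path matches the webhook URL pattern from the question:
curl -i -X POST http://YOUR_PUBLIC_IP:FORWARDED_PORT/github-webhook/
If this times out from outside while the same URL works from inside the LAN, the problem is the firewall or port forwarding, not Jenkins.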
For anyone whose Jenkins server is on an AWS EC2 instance in a private subnet, you can do one of two things:
Move your private EC2 instance into a public subnet.
Create an Application Load Balancer (ALB) in a public subnet and attach your private EC2 instance to it, then use the ALB's address for your GitHub hook (see the sketch below).
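A rough sketch of the ALB option with the AWS CLI; every ID, ARN and name below is a placeholder to be replaced with your own values, and Jenkins is assumed to listen on port 8080:
# target group that forwards to Jenkins
aws elbv2 create-target-group --name jenkins-tg --protocol HTTP --port 8080 --vpc-id vpc-0123456789abcdef0
# register the private Jenkins instance
aws elbv2 register-targets --target-group-arn TG_ARN --targets Id=i-0123456789abcdef0
# internet-facing ALB in two public subnets
aws elbv2 create-load-balancer --name jenkins-alb --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-0123456789abcdef0
# listener on port 80 forwarding to the target group
aws elbv2 create-listener --load-balancer-arn ALB_ARN --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=TG_ARN
The GitHub hook then uses the ALB's DNS name for its payload URL.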

Webhook execution failed: execution expired

I am trying to trigger a Jenkins build whenever there is a push to GitLab.
I am referring to https://github.com/jenkinsci/gitlab-plugin.
When I test the connection for the webhook, it shows "execution expired".
I am using:
Jenkins ver. 2.60.1
GitLab version 9.4.0-rc2-ee
GitLab plugin 1.4.6
The exact error message, when clicking "Test setting" from GitLab:
We tried to send a request to the provided URL but an error occurred: execution expired
As mentioned in issue 128:
This looks and sounds like a configuration or network error.
Maybe your machine is not publicly available on the webhook address (firewall etc).
For instance, on a DigitalOcean server, you would need to open up the port (mentioned in git-auto-deploy.conf.json) in the firewall:
sudo ufw allow 8866/tcp
Double-check, though, what you put in Manage Jenkins > Configure in terms of GitLab information (connection name, host URL, credentials), as mentioned in jenkinsci/gitlab-plugin issue 391.
See GitLab Integration Jenkins: Configure the Jenkins server
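A quick way to tell a network problem apart from a plugin configuration problem is to POST to the plugin's trigger endpoint yourself; jenkins_project_name is the job name, and the host and port are placeholders for your setup (the exact response depends on your authentication settings, but a timeout from the GitLab side points to the network):
# from the Jenkins host itself: does the endpoint answer at all?
curl -i -X POST http://localhost:8080/project/jenkins_project_name
# from the GitLab server: can GitLab reach Jenkins?
curl -i -X POST http://JENKINS_HOST:8080/project/jenkins_project_name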
It means there is a connectivity issue between the Jenkins server and the GitLab (or GitHub) server.
For example, here is what I did:
I had set my local-IP:port/project/jenkins_project_name
http://192.168.1.21:8080/project/jenkins_project_name
and set the above URL in the GitLab webhook. It shouldn't work, right?
Because it's a private IP and not routable.
So later I realized this, set the public IP, and then the hook worked:
http://public_IP:8080/project/jenkins_project_name
Note: for the public IP to be routable, you should expose the port in your router (e.g. 8080 in my case, or whichever port you use).
Hope this works.
I have faced the same issue.
In my case Jenkins is running on an AWS EC2 instance. I resolved the issue by whitelisting GitLab's public IP addresses on port 443 in the instance's security group.
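A sketch of that rule with the AWS CLI; the security group ID and the CIDR are placeholders, and the actual GitLab address ranges should be taken from GitLab's own documentation or from your self-hosted instance:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr GITLAB_CIDR
Repeat the command once per CIDR range if GitLab publishes several.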

how to make Jenkins create a tunnel before polling from SVN

I need to create a tunnel like the following before code can be checked out from SVN:
ssh -L 9898:some_server.com:9898 user@another_server.com
Now, I added the pre-scm-buildstep plugin and wrote a script to open the tunnel before updating the repository, as explained here, but it doesn't work with polling. It only works if I ask Jenkins to 'Build now'. In the setup where I have it polling, it goes red saying that it is unable to access the repository URL, which can only happen if the tunnel was not created.
Is there any plugin that would let me execute a script before it polls, so that I can open the tunnel before polling starts?
Use ProxyCommand in your SSH config to have ssh automatically create the tunnel for you, e.g.:
Host another_server.com
ProxyCommand ssh some_server.com exec nc %h %p
With the above in ~jenkins/.ssh/config (or that of whatever user Jenkins runs as), when Jenkins tries to ssh to another_server.com it will actually ssh to some_server.com and run nc there to forward the SSH connection on to another_server.com.
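If polling also needs the port 9898 forward itself rather than just SSH access, another option (an assumption on my part, not something the ProxyCommand answer covers) is to keep a persistent tunnel open outside of Jenkins, for example with autossh, so the forward is already up whenever polling runs:
# -f: background, -N: no remote command, -M 0: disable autossh's extra monitor port
autossh -M 0 -f -N -L 9898:some_server.com:9898 user@another_server.com
Starting this at boot (e.g. from a systemd unit or an @reboot crontab entry for the jenkins user) means neither polling nor builds have to create the tunnel themselves.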

What does "Jenkins URL" mean in configuration settings?

On the Jenkins configuration page, in the "Jenkins URL" section, I've set this option to "http://name_of_my_machine.jenkins:8080/".
Usually I open Jenkins at "http://localhost:8080/".
But this new option did not work for me - Jenkins does not open. So what does it mean?
Jenkins can't determine its URL on its own, so when it needs to create full links, that setting is where the URL is taken from. In general, even if you specify the wrong URL it should not affect the way Jenkins works in any significant way, and it certainly has no effect on the URL you enter in your browser to connect to the Jenkins server. To connect, you can use either http://localhost:8080 (from your own machine, assuming you started Jenkins on port 8080) or http://<machine_hostname>:8080 from anywhere.
So no matter what you specify, it has no effect on connecting to Jenkins; http://name_of_my_machine.jenkins:8080/ won't work in the browser because .jenkins is not part of the hostname (e.g. ping name_of_my_machine.jenkins won't find the host).
Whenever Jenkins needs to create a URL that points to itself, Jenkins picks it up from the "Jenkins URL" setting in the global configuration.
Jenkins could try to guess the URL by, e.g., taking the hostname and combining it with the port it is running on. But sometimes the hostname is not the same as the DNS name. And what if you have placed a front-end or proxy before Jenkins that, e.g., terminates SSL connections, and you would really like people to use Jenkins at https://company.com/jenkins/? Jenkins running on port 8080 cannot know about that front-end. The only reliable way for Jenkins to get its own URL is for an administrator to set it in the Jenkins configuration.
Jenkins needs to know its own URL when it is creating links that point back to itself. It does this, e.g., when it sends out emails containing direct links to build results. Also, if you have a JNLP-type slave, the slave initiates the connection to the master and the master returns a message which contains a link back to Jenkins for downloading the slave agent software.
Do you mean the option in the E-mail configuration section? This is only to generate the links in emails Jenkins sends (see the help for the option -- click the symbol with the question mark). If after changing it you cannot access your server anymore, it must be something else.
