Passing client IP information to Jenkins Job

In Jenkins, I want to pass the IP of the client that initiated the build to the Jenkins job, so I can access that information inside a class that extends "Builder", as an environment variable, or through anything else that works.
So for example, in the console log of every build, I can print something like: "Job started by user1 from ip: 10.101.101.1"
I know I can get an audit trail by using the "Audit Trail" plugin for Jenkins, but I would like to print that information to the build console so it's more straightforward.
Thanks in advance.
Edit: I want the IP of the user/client that started the job/build, which is not necessarily the IP of the Jenkins slave the job is running on.

Jenkins is not able to pass the client IP to a variable at the moment; the client IP you see in the output log comes from the build's remote cause message.
I would say it is indeed useful to have the remote IP in a job, for example to enable callback functions. As a workaround, you can pass the IP as a parameter when making the remote call:
Enable a string parameter (for example ipaddr) in the Jenkins job:
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Build
Make the call:
curl -k -u user:apitoken -X POST https://jenkins.local/job/yourjob/build \
  --data token=jobtoken \
  --data-urlencode json='{"parameter": [{"name":"ipaddr", "value":"x.x.x.x"}]}'
Then echo $ipaddr in the shell script of your Jenkins job.
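A minimal end-to-end sketch of that workaround from the caller's side, assuming the job is called yourjob and the string parameter is named ipaddr; how the caller determines its own address (here with hostname -I) depends on your network setup:
# Determine the caller's own IP and pass it as the ipaddr parameter
ipaddr=$(hostname -I | awk '{print $1}')
curl -k -u user:apitoken -X POST https://jenkins.local/job/yourjob/build \
  --data token=jobtoken \
  --data-urlencode json="{\"parameter\": [{\"name\":\"ipaddr\", \"value\":\"$ipaddr\"}]}"
The build console of yourjob can then print the value with echo $ipaddr.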

You can use the EnvInject plugin and use the "Prepare an environment for the run" option.
In the "Evaluated Groovy script" section, copy this code:
return [IP_ADDRESS: InetAddress.localHost.canonicalHostName]
Then you can use $IP_ADDRESS in your build step section.
echo $IP_ADDRESS
Build log:
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Evaluation the following Groovy script content:
return [IP_ADDRESS: InetAddress.localHost.canonicalHostName]
[EnvInject] - Injecting contributions.
Building on master in workspace /var/lib/jenkins/jobs/Test Groovy IP address/workspace
[workspace] $ /bin/sh -xe /tmp/hudson6447343457570437614.sh
+ echo 172.16.203.72
172.16.203.72
Notifying upstream projects of job completion
Finished: SUCCESS
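To get the exact console line asked for in the question, a hedged build-step sketch that combines the two answers: it assumes the ipaddr string parameter from the first answer and the Build User Vars plugin, which exposes $BUILD_USER (both are assumptions, not something EnvInject provides):
# Print who started the build and from which IP (parameter supplied by the caller)
echo "Job started by ${BUILD_USER:-unknown} from ip: ${ipaddr:-$IP_ADDRESS}"
If neither the parameter nor the plugin is available, the line falls back to the node address injected above, which is not the client's IP.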

Related

Triggering a Jenkins job from a GitLab pipeline stage and moving to the next stage only on successful completion of the job

Can you please help? I have the following scenario; I went through many videos and blogs but could not find anything matching my use case.
Requirement:
Write a CI/CD pipeline in GitLab that runs the following stages in this order:
- verify # unit test, sonarqube, pages
- build # package
- publish # copy artifact in repository
- deploy # deploy the artifact to a test environment
- integration # run Postman/integration tests
All other stages are fine and working, but for the deploy stage, because of a few restrictions, I have to trigger an existing Jenkins job through the Jenkins remote API with the following script. The problem is that the call is asynchronous: it only starts the Jenkins job, so the deploy stage completes immediately and the pipeline moves on to the next stage (integration).
Run Jenkins Job:
  image: maven:3-jdk-8
  tags:
    - java
  environment: development
  stage: deploy
  script:
    - artifact_no=$(grep -m1 '<version>' pom.xml | grep -oP '(?<=>).*(?=<)')
    - curl -X POST http://myhost:8081/job/fpp/view/categorized/job/fpp_PREP_party/build --user mkumar:1121053c6b6d19bf0b3c1d6ab604f22867 --data-urlencode json="{\"parameter\":[{\"name\":\"app_version\",\"value\":\"$artifact_no\"}]}"
Note: I am using the GitLab CE edition, and the Jenkins CI project service is not available.
I am looking for a way to trigger the Jenkins job from the pipeline so that the integration stage starts executing only on successful completion of the Jenkins job.
Thanks for the help!
Retrieving the status of a Jenkins job that is triggered programmatically through the remote access API is notoriously convoluted.
Normally you would expect the response header to contain, under the Location attribute, a URL that you can poll to get the status of your request, but unfortunately there are some in-between steps to reach that point. You can find a guide in this post; you may also have a look at this older post.
Once you have the URL, you can poll it and parse the job status, then either exit 1 or exit 0 in your script to force the job that is invoking the external job to fail or succeed, depending on how you want to assert the result of the remote job. A sketch of the whole sequence follows.
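A hedged sketch of that trigger-and-wait sequence for the deploy stage's script section, assuming the job URL and user from the question (APITOKEN stands in for the real token), that the parameter is submitted via buildWithParameters, and that jq is available in the image:
# Trigger the Jenkins job and capture the queue item URL from the Location header
queue_url=$(curl -s -i --user mkumar:APITOKEN -X POST \
  "http://myhost:8081/job/fpp/view/categorized/job/fpp_PREP_party/buildWithParameters" \
  --data-urlencode "app_version=$artifact_no" \
  | grep -i '^Location:' | tr -d '\r' | awk '{print $2}')
# Poll the queue item until Jenkins assigns a build to it
build_url=""
while [ -z "$build_url" ]; do
  sleep 5
  build_url=$(curl -s --user mkumar:APITOKEN "${queue_url}api/json" | jq -r '.executable.url // empty')
done
# Poll the build until it finishes; result stays null while it is running
result="null"
while [ "$result" = "null" ]; do
  sleep 10
  result=$(curl -s --user mkumar:APITOKEN "${build_url}api/json" | jq -r '.result')
done
# Fail this stage (and stop the pipeline) unless the Jenkins build succeeded
[ "$result" = "SUCCESS" ] || exit 1
In .gitlab-ci.yml each of these lines would go into the script: list, or into a small helper script kept in the repository.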

User Interactive shell script not running in Jenkinsfile

I have a user-interactive shell script that runs successfully on my Linux server, but when I try to run it via Jenkins, it doesn't run.
I have created a Jenkinsfile.
Jenkinsfile
node('slaves') {
    try {
        def app
        stage('Remove Docker service') {
            sh 'sshpass ssh docusr@10.26.13.12 "/path/to/shell/script"'
        }
    } catch (err) {
        // re-throw so the build is still marked as failed
        throw err
    }
}
Shell Script
#!/bin/bash
read -p "Hi kindly enter the api name : " api
docker service logs $api --raw
The shell script runs successfully on my local server, but when I run it on Jenkins via the Jenkinsfile, the user-interactive read of the $api variable never gets an answer.
If I understood correctly, what you are trying to achieve defeats the purpose of automating your job with Jenkins: the job is asking for user input, so it is in your best interest to have a parameterized Jenkins build in this case.
For your case, you can still pass $api as an argument to the sshpass/ssh command and have the script read it from there, or, better, make your Jenkins build parameterized and use the user input $api as the parameter, as sketched below.
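A minimal sketch of the script change under that approach; keeping the interactive prompt as a fallback is an assumption so the script still works when run by hand:
#!/bin/bash
# Take the api name from the first argument (passed from Jenkins);
# prompt only when no argument was supplied (manual use).
api="$1"
if [ -z "$api" ]; then
    read -p "Hi kindly enter the api name : " api
fi
docker service logs "$api" --raw
The sh step in the Jenkinsfile would then append the value, for example sh "sshpass ssh docusr@10.26.13.12 '/path/to/shell/script ${params.API_NAME}'", assuming an API_NAME string parameter is defined on the job (the parameter name here is hypothetical).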

How to trigger a Jenkins job remotely which has a List Subversion tags parameter

I have a requirement where I need to trigger build jobs remotely using a curl command. I am unable to pass the branch/tag name as a parameter when triggering the build.
I used the below command:
& $CURLEXE -k -X POST $dst_job_url --user username:token --data-urlencode json='{"parameters": [{"name":"branch","branch":"branches"}]}'
If I run the above command, it triggers the build for trunk (the default).
You omitted the URL, so it's hard to be certain. Jenkins has two URLs for building: "build" and "buildWithParameters". If you're not using buildWithParameters, switching to it will probably help.
See:
How to trigger Jenkins builds remotely and to pass parameters
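A hedged example of such a call with plain curl, assuming $dst_job_url points at the job itself (.../job/yourjob) and that the parameter defined on the job is named branch as in the question; the exact value a List Subversion tags parameter expects depends on the plugin configuration:
curl -k -X POST "$dst_job_url/buildWithParameters" \
  --user username:token \
  --data-urlencode "branch=branches/my-branch"
In PowerShell the same call can be made through & $CURLEXE, as in the question.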

Jenkins pipeline job gets triggered as anonymous but not as a user/Admin

A Jenkins pipeline job doesn't trigger another pipeline job using the Jenkins CLI. When I run Jenkins as anonymous this works, but when I create a user/admin it fails.
I have a job A which has parameters and passes them to the pipeline job. This is a master-slave setup. This is how I run it:
sudo java -jar /home/user/jenkins-cli.jar -s $JENKINS_URL build pipeline_job -p parameter_Name="$parameter_Name" -p parameter_Name2="$parameter2_Name"
1.) I tried using the options "-auth" and "-username -password", but they don't work.
errors:
No such command: -auth
No such command: -ssh
2.) Another option is to paste the public key in the SSH section at http://jenkin_url/me/configure, but it still fails.
error:
java.io.IOException: Invalid PEM structure, '-----BEGIN...' missing
Is there anything I am missing?
I found the solution.
1.) Use the SSH CLI.
In my case I was using a master-slave environment, and the connection was made with SSH keys in both directions. In order to trigger the build using the Jenkins CLI, paste the public SSH key at http://jenkinsURL/user/username/configure and keep the matching private key on the node that runs the CLI.
Here, username is the user used to connect the nodes.
Trigger the job as below:
java -jar /home/username/jenkins-cli.jar -s $JENKINS_URL -i /home/username/.ssh/id_rsa build JOBNAME
Note: This is one way, but CloudBees doesn't encourage this approach.
2.) There is a newer approach, i.e. using API token authentication.
Go to http://jenkinsURL/user/username/configure
Copy the API token
Trigger the build as below:
java -jar /home/username/jenkins-cli.jar -s $JENKINS_URL -auth username:apitoken build JOBNAME
Note: to use the API token (-auth) option, download the latest jenkins-cli.jar.
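For the original parameterized trigger, a hedged sketch that combines the API-token authentication with the parameters from the question (parameter names are the ones shown there):
java -jar /home/username/jenkins-cli.jar -s $JENKINS_URL -auth username:apitoken \
  build pipeline_job -p parameter_Name="$parameter_Name" -p parameter_Name2="$parameter2_Name"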

Log in on OpenShift using Jenkins

I have a Docker container with Jenkins deployed using OpenShift Origin.
Now I want to use Jenkins to build/test and deploy other OpenShift apps.
So I try to log in to my OpenShift server (from inside my Jenkins), but then I get the following error. Can someone help me?
Started by user Jenkins Admin
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/jobs/s2i-build-deploy/workspace
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content
APP_HOSTNAME=http://test.apps.example.com
USER_NAME=admin
PASSWORD=admin
OSO_SERVER=ip.compute.internal:8443
DEVEL_PROJ_NAME=test
SERVICE=test
[EnvInject] - Variables injected successfully.
[workspace] $ /bin/sh -xe /tmp/hudson1352752763797328747.sh
+ oc login -uadmin -padmin --server=ip.compute.internal:8443
error: x509: certificate signed by unknown authority
Build step 'Execute shell' marked build as failure
Finished: FAILURE
The oc login command works when I run it directly on my server.
That error means you also need to specify the CA that was used to sign the API server's certificate. Pass the master's ca.crt via --certificate-authority (check the oc login options) in order to log in.
As mentioned in this comment, try:
oc login $OPENSHIFT_URL --insecure-skip-tls-verify=true
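A hedged sketch of both options, using the variables injected in the build log above; the path to the CA file is an assumption and must point to the master's ca.crt copied onto the Jenkins node:
# Preferred: trust the master's CA explicitly
oc login -u $USER_NAME -p $PASSWORD --server=$OSO_SERVER \
  --certificate-authority=/var/lib/jenkins/ca.crt
# Or, less securely, skip TLS verification entirely
oc login -u $USER_NAME -p $PASSWORD --server=$OSO_SERVER --insecure-skip-tls-verify=true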
