How to programmatically get the Jenkins user id?

I would like to pass the user id of the person who started a Jenkins job to a script. The output of 'env' indicates no such environment variable is set, and the output of 'ant -v' indicates no such property is set. How do I access the user id of the person who started a job? (I understand that triggers can start jobs, but this job will always be started by a person.)

To get the user who started the build:
curl -s "${BUILD_URL}/api/json" | \
python -c '\
import json; \
import sys; \
obj = json.loads(sys.stdin.read()); \
print [ \
cause["userId"] \
for action in obj["actions"] \
if "causes" in action \
for cause in action["causes"] \
if "userId" in cause][0];'
Also see "How to set environment variables in Jenkins?", which explains how to pass the value into a script.

BUILD_START_USER=$(curl -s "${JOB_URL}lastBuild/api/json" | python3 -c 'import json,sys; obj=json.load(sys.stdin); print(obj["actions"][1]["causes"][0]["userId"])')
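If jq is available on the agent, a similar lookup can be done without the hard-coded action index; here is a minimal sketch, assuming the standard layout of actions and causes in the build's /api/json:
# Print the first cause that carries a userId, wherever it sits in the actions array.
curl -s "${BUILD_URL}/api/json" \
  | jq -r '[.actions[].causes[]?.userId | select(. != null)][0]'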

Related

Ansible ad-hoc inventory not working when executed in a shell command in a Jenkins pipeline?

Ansible v2.11.x
I have a Jenkins pipeline that does this. All the $VARIABLES are passed in from the job's parameters.
withCredentials([string(credentialsId: "VAULT_PASSWORD", variable: "VAULT_PASSWORD")]) {
    stage("Configure $env.IP_ADDRESS") {
        sh """
        ansible-playbook -i \\"$env.IP_ADDRESS,\\" \
            -e var_host=$env.IP_ADDRESS \
            -e web_branch=$env.WEB_BRANCH \
            -e web_version=$env.WEB_VERSION \
            site.yml
        """
    }
}
My playbook is this:
---
- hosts: "{{ var_host | default('site') }}"
  roles:
    - some_role
I have a groups_vars/all.yml file meant to be used by ad-hoc inventories like this one. When I run the pipeline, I simply get the following, and the run does nothing:
22:52:29 + ansible-playbook -i "10.x.x.x," -e var_host=10.x.x.x -e web_branch=development -e web_version=81cdedd6fe-20210811_2031 site.yml
22:52:31 [WARNING]: Could not match supplied host pattern, ignoring: 10.x.x.x
If I go on the build node and execute exactly the same command, it works. I can also execute the same command on my Mac, and it works too.
So why does the ad-hoc inventory not work when executed in the pipeline?
This post gave me a clue.
The correct syntax that worked for me is:
withCredentials([string(credentialsId: "VAULT_PASSWORD", variable: "VAULT_PASSWORD")]) {
    stage("Configure $env.IP_ADDRESS") {
        sh """
        ansible-playbook -i $env.IP_ADDRESS, \
            -e var_host=$env.IP_ADDRESS \
            -e web_branch=$env.WEB_BRANCH \
            -e web_version=$env.WEB_VERSION \
            site.yml
        """
    }
}
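The escaped quotes appear to be the culprit: inside a Groovy triple-quoted string, \\" becomes a literal \" in the generated shell script, so the shell hands ansible-playbook a host name that contains the quote characters, and the pattern 10.x.x.x supplied via var_host no longer matches it. A minimal sketch of the difference as the shell sees it (hypothetical debug commands, not part of the original pipeline):
# Broken form: the quotes become part of the inventory host name.
printf '%s\n' \"10.x.x.x,\"    # prints "10.x.x.x,"
# Fixed form: a clean host pattern followed by the trailing comma Ansible needs.
printf '%s\n' 10.x.x.x,        # prints 10.x.x.x,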

"unrecognized arguments" error while executing a Dataflow job with gcloud cli

I have created a job in Dataflow UI and it works fine. Now I want to automate it from the command line with a small bash script:
#GLOBAL VARIABLES
export PROJECT="cf-businessintelligence"
export GCS_LOCATION="gs://dataflow-templates/latest/Jdbc_to_BigQuery"
export MAX_WORKERS="15"
export NETWORK="businessintelligence"
export REGION_ID="us-central1"
export STAGING_LOCATION="gs://dataflow_temporary_directory/temp_dir"
export SUBNETWORK="bidw-dataflow-usc1"
export WORKER_MACHINE_TYPE="n1-standard-96"
export ZONE="us-central1-a"
export JOBNAME="test"
#COMMAND
gcloud dataflow jobs run $JOBNAME --project=$PROJECT --gcs-location=$GCS_LOCATION \
--max-workers=$MAX_WORKERS \
--network=$NETWORK \
--parameters ^:^query="select current_date":connectionURL="jdbc:mysql://mysqldbhost:3306/bidw":user="xyz",password="abc":driverClassName="com.mysql.jdbc.Driver":driverJars="gs://jdbc_drivers/mysql-connector-java-8.0.16.jar":outputTable="cf-businessintelligence:bidw.mytest":tempLocation="gs://dataflow_temporary_directory/tmp" \
--region=$REGION_ID \
--staging-location=$STAGING_LOCATION \
--subnetwork=$SUBNETWORK \
--worker-machine-type=$WORKER_MACHINE_TYPE \
--zone=$ZONE
When I run it, it fails with the following error:
ERROR: (gcloud.dataflow.jobs.run) unrecognized arguments:
--network=businessintelligence
Following the instructions in gcloud topic escaping, I believe I correctly escaped my parameters, so I am really confused. Why is it failing on the NETWORK parameter?
Try getting help for your command, to see which options are currently accepted by it:
gcloud dataflow jobs run --help
For me, this displays a number of options, but not the --network option.
I then checked the beta channel:
gcloud beta dataflow jobs run --help
And it does display the --network option. So you'll want to launch your job with gcloud beta dataflow....
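For example, the question's invocation moved to the beta track looks like this (assuming the installed SDK only exposes --network there; flag availability shifts between releases):
gcloud beta dataflow jobs run $JOBNAME --project=$PROJECT --gcs-location=$GCS_LOCATION \
--max-workers=$MAX_WORKERS \
--network=$NETWORK \
--parameters ^:^query="select current_date":connectionURL="jdbc:mysql://mysqldbhost:3306/bidw":user="xyz",password="abc":driverClassName="com.mysql.jdbc.Driver":driverJars="gs://jdbc_drivers/mysql-connector-java-8.0.16.jar":outputTable="cf-businessintelligence:bidw.mytest":tempLocation="gs://dataflow_temporary_directory/tmp" \
--region=$REGION_ID \
--staging-location=$STAGING_LOCATION \
--subnetwork=$SUBNETWORK \
--worker-machine-type=$WORKER_MACHINE_TYPE \
--zone=$ZONE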
Both the network and subnetwork arguments need to be specified as complete URLs.
Source: https://cloud.google.com/dataflow/docs/guides/specifying-networks#example_network_and_subnetwork_specifications
Example for the subnetwork flag:
https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION_NAME/subnetworks/SUBNETWORK_NAME
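Wired into the flags, that would look roughly like this (HOST_PROJECT_ID and the network/subnetwork names are placeholders; networks are global resources while subnetworks are regional):
gcloud beta dataflow jobs run $JOBNAME \
--gcs-location=$GCS_LOCATION \
--region=$REGION_ID \
--network=https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME \
--subnetwork=https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION_NAME/subnetworks/SUBNETWORK_NAME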

Jenkins job status via curl

I need to get the job build status (failure or success) via a curl command.
I tried this :
curl --silent http://user:TokenID@Jenkins-BuildURL/job/job_number/api/json | jq -r '.result'
I am unable to execute the curl command.
Try the command below.
FYI, you are missing JOB_NAME in your curl command:
curl --silent http://user:TokenID@Jenkins-BuildURL/job/${JOB_NAME}/${BUILD_NUMBER}/api/json
Note: JOB_NAME and BUILD_NUMBER are Jenkins environment variables; when the command is executed from a Jenkins job, it will pick up the current build's details.
You can always pass your credentials using the '-u' option:
Example:
curl --silent -u username:user_pwd http://Jenkins-BuildURL/job/${JOB_NAME}/${BUILD_NUMBER}/api/json
A simple trick is to first check in a browser whether the URL is valid; if it is, half of the problem is eliminated and you can focus on the curl command.
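Putting it together with the original goal of reading the build result, a minimal sketch, assuming jq is installed and the token has read access (the result field is null while the build is still running):
curl --silent -u username:api_token "${JENKINS_URL}job/${JOB_NAME}/${BUILD_NUMBER}/api/json" \
  | jq -r '.result'    # SUCCESS, FAILURE, ABORTED, or null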

Creates package but no export

My job completes with no error. The logs show "accuracy", "auc", and other statistical measures of my model. ML Engine creates a package subdirectory, and a tar under that, as expected. But there's no export directory, checkpoint, eval, graph, or any other artifact that I'm accustomed to seeing when I train locally. Am I missing something simple with the command I'm using to call the service?
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir $OUTPUT_PATH \
--runtime-version 1.0 \
--module-name trainer.task \
--package-path trainer/ \
--region $REGION \
-- \
--model_type wide \
--train_data $TRAIN_DATA \
--test_data $TEST_DATA \
--train_steps 1000 \
--verbose-logging true
The logs show this: model directory = /tmp/tmpS7Z2bq
But I was expecting my model to go to the GCS bucket I defined in $OUTPUT_PATH.
I'm following the steps under "Run a single-instance trainer in the cloud" from the getting started docs.
Maybe you could show where and how you declare $OUTPUT_PATH?
Also, the "model directory" might be a directory within $OUTPUT_PATH where you could find the model for that specific job.
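For comparison, the getting-started guide declares the output location along these lines (bucket and job names here are placeholders):
BUCKET_NAME=your-bucket-name
JOB_NAME=census_test_1
OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_NAME
A "model directory" of /tmp/... in the logs usually means the trainer code is writing to a local temp directory instead of honoring the --job-dir it receives, so it is worth checking that trainer.task actually uses --job-dir (or $OUTPUT_PATH) as its model/export directory.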

Generate JIRA release notes through a Jenkins job without plugins

I know that this is possible through the JIRA-Jenkins plugin. But I'm not an administrative user in either JIRA or Jenkins. Therefore I want to know: is it possible to generate JIRA release notes through a Jenkins job without installing any plugins in JIRA or Jenkins?
OK, I did it just now. Here is my solution (a mix of several partial solutions I found while googling):
In your deploy jobs, add a shell execution step at the end of the job and replace all parameters of the following script with the correct values:
version=<your_jira_version> ##(for example 1.0.71)
project_name=<your_jira_project_key> ##(for example PRJ)
jira_version_id=$(curl --silent -u <jira_user>:<jira_password> -X GET -H "Content-Type: application/json" "https://<your_jira_url>/rest/api/2/project/${project_name}/versions" | jq "map(select(.[\"name\"] == \"$version\")) | .[0] | .id" | sed -e 's/^"//' -e 's/"$//')
project_id=$(curl --silent -u <jira_user>:<jira_password> -X GET -H "Content-Type: application/json" "https://<your_jira_url>/rest/api/2/project/${project_name}" | jq .id | sed -e 's/^"//' -e 's/"$//')
release_notes_page="https://<your_jira_url>/secure/ReleaseNote.jspa?version=${jira_version_id}&styleName=Text&projectId=${project_id}"
release_notes=$(curl --silent -D- -u <jira_user>:<jira_password> -X GET -H "Content-Type: application/json" "$release_notes_page")
rm -rf releasenotes.txt
echo "$release_notes" | sed -n "/<textarea rows=\"40\" cols=\"120\">/,/<\/textarea>/p" | grep -v "textarea" > releasenotes.txt
You can use the maven-changes-plugin. You have to create a small maven project (doesn't need any sources) and include the plugin in the plugins section with the necessary configuration (see here: http://maven.apache.org/plugins/maven-changes-plugin/jira-report-mojo.html)
Then you create a Jenkins job, and just execute the maven goals you need (most probably just "mvn changes:jira-report").
