Is it possible to interface Tivoli Workload Scheduler with Jenkins, so that TWS job streams can be triggered from Jenkins?
There's no out-of-the-box Jenkins plugin, but there are several IBM Workload Scheduler interfaces that you can leverage to submit the job stream from Jenkins (a sketch of the conman option follows the list):
The conman sbs command line.
https://www.ibm.com/support/knowledgecenter/en/SSGSPN_9.4.0/com.ibm.tivoli.itws.doc_9.4/distr/src_ref/awsrgsubmitsched.htm
REST APIs, if you are running TWSd 9.3 FP2 or later.
https://start.wa.ibmserviceengage.com/twsd/
Java APIs.
https://www.ibm.com/support/knowledgecenter/en/SSGSPN_9.4.0/com.ibm.tivoli.itws.doc_9.4/common/src_dgd/awsddjapi.htm
A custom event generated with sendevent, plus an event rule that handles that event by performing a submit.
https://www.ibm.com/support/knowledgecenter/en/SSGSPN_9.4.0/com.ibm.tivoli.itws.doc_9.4/distr/src_ref/awsrgeventsend4dyn.htm
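For example, here is a minimal sketch of the conman option, run from a Jenkins "Execute shell" build step; the installation path, workstation name MY_CPU, and job stream name MY_STREAM are placeholder assumptions, not values from the question:

# Hypothetical sketch: submit a TWS job stream from a Jenkins shell step.
# The TWS installation path, MY_CPU, and MY_STREAM are all placeholders.
. /opt/IBM/TWA/TWS/tws_env.sh
conman "sbs MY_CPU#MY_STREAM"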
Is it possible to interface Jenkins with Tivoli Workload Scheduler, so that a TWS job stream can trigger Jenkins to run a job?
The trigger should wait for the Jenkins job to complete and return a proper return code to TWS.
But I haven't been able to find any info as yet!
PS. I'm just the interface guy and I have no Jenkins knowledge whatsoever, so any pointers are welcome.
We have multiple Jenkins masters, have enabled the Jenkins Prometheus plugin, and have connected these masters as data sources to Grafana. I am currently interested in finding jobs that have been waiting for executors for more than a certain time, and in creating an alert based on this. I looked at the Jenkins metrics but did not find any metric suitable for monitoring this use case. How can I achieve this?
You can access the Jenkins build queue via Groovy and check for entries that have been waiting too long.
It is possible to run Groovy scripts via the Jenkins REST API, which can then serve as your interface for polling for such "blocked" jobs, as sketched below.
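A minimal sketch, assuming an admin account with an API token and a placeholder Jenkins URL; it POSTs a short Groovy script to the /scriptText endpoint and prints queue items that have been waiting longer than ten minutes:

# Hypothetical sketch: poll the Jenkins script console endpoint for queue
# items that have been waiting too long. User, token, and URL are placeholders.
curl -s -u "admin:${JENKINS_API_TOKEN}" \
  --data-urlencode 'script=
    long threshold = 10 * 60 * 1000             // 10 minutes, in milliseconds
    long now = System.currentTimeMillis()
    Jenkins.instance.queue.items.each { item ->
      long waited = now - item.inQueueSince     // time spent waiting in the queue
      if (waited > threshold) {
        println "${item.task.name} has waited ${waited.intdiv(1000)}s: ${item.why}"
      }
    }' \
  http://jenkins.example.com:8080/scriptText

The script's output could then be scraped by whatever does the alerting, or reshaped into a format Grafana can consume.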
I am trying to write a script to automate the deployment of a Java Dataflow job. The script creates a template and then uses the command
gcloud dataflow jobs run my-job --gcs-location=gs://my_bucket/template
The issue is that I want to update the job if it already exists and is running. I can do the update if I run the job via Maven, but I need to do this via gcloud so that I can have one service account for deployment and another for running the job. I tried different things (such as adding --parameters update to the command line), but I always get an error. Is there a way to update a Dataflow job exclusively via gcloud dataflow jobs run?
Referring to the official documentation, which describes gcloud beta dataflow jobs (a group of subcommands for working with Dataflow jobs), there is no way to use gcloud to update the job.
As of now, the Apache Beam SDKs provide a way to update an ongoing streaming job on the Dataflow managed service with new pipeline code; you can find more information here. Another way of updating an existing Dataflow job is the REST API, for which a Java example is available.
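For instance, a hedged curl sketch of that REST route, using the templates.launch method (v1b3) with its update flag; the project, region, bucket path, and job name are placeholders:

# Hypothetical sketch: relaunch a templated job in update mode via the
# Dataflow REST API. Project, region, bucket, and job name are placeholders;
# authentication piggybacks on the active gcloud credentials.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"jobName": "my-job", "update": true, "parameters": {}}' \
  "https://dataflow.googleapis.com/v1b3/projects/my-project/locations/us-central1/templates:launch?gcsPath=gs://my_bucket/template"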
Additionally, you can follow the Feature Request regarding recreating a job with gcloud.
I am trying to stress test my Jenkins infrastructure using JMeter. I have created a JMeter test plan that uses the HTTP Request component to trigger Jenkins builds through the Jenkins REST API. The idea is to trigger a large number of builds and monitor system health. When I run the test plan with a single thread it works fine, but with multiple threads the HTTP request to trigger the build should run once per thread... yet each build is triggered only once on Jenkins, no matter what the thread count is. The JMeter results show the HTTP request succeeding for all threads, but on Jenkins the build seems to be triggered for only one thread group.
A well-behaved JMeter test must represent real system usage; if you want to simulate a user clicking the Jenkins "Build Now" button, you need to send a request like:
http://jenkins_host:port/job/jobname/build?delay=0sec
The delay=0sec parameter is critically important: without it, only the first request will trigger the job. With it, you will have as many concurrent jobs as there are available executors:
If there are not enough executors to serve all the jobs, the jobs will be put into the queue.
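For a quick sanity check outside JMeter, the same request can be reproduced with curl; user, token, host, and job name are placeholders, and depending on the Jenkins version and security settings a CSRF crumb may also be needed:

# Hypothetical sketch: trigger a parameterless Jenkins job exactly as the
# JMeter HTTP Request sampler should. All names and credentials are placeholders.
curl -X POST -u "user:${JENKINS_API_TOKEN}" \
  "http://jenkins_host:8080/job/jobname/build?delay=0sec"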
You can use the JMeter PerfMon Plugin to monitor Jenkins node health (CPU, RAM, JVM metrics, etc.).
I have a requirement to scale down OpenShift pods automatically at the end of each business day.
How can I schedule this?
OpenShift, like Kubernetes, is an API-driven application. Essentially all application functionality is exposed over the control-plane API running on the master hosts.
You can use any orchestration tool that is capable of making API calls to perform this activity. Information on calling the OpenShift API directly can be found in the official documentation in the REST API Reference Overview section.
Many orchestration tools have plugins that allow you to interact with the OpenShift/Kubernetes API more natively than making network calls directly. In the case of Jenkins, for example, there is the OpenShift Pipeline Jenkins plugin, which allows you to perform OpenShift activities directly from Jenkins pipelines. In the case of Ansible, there is the k8s module.
If you combine this with Jenkins's capability to run jobs on a schedule, you have something that meets your requirements.
For something much simpler, you could just schedule Ansible or bash scripts on a server via cron to execute the appropriate commands against the OpenShift API, as in the sketch below.
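For example, a minimal crontab sketch; the project and DeploymentConfig names are placeholders, and it assumes oc is on the PATH and already authenticated (for instance with a service account token):

# Hypothetical crontab entry: scale the frontend DeploymentConfig down to
# zero replicas at 18:00 on weekdays. All names are placeholders.
0 18 * * 1-5 oc scale dc/frontend --replicas=0 -n my-project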
Executing these commands from within OpenShift would also be possible via the CronJob object.