When running a Dataflow job in Google Cloud, how can the Dataflow pipeline be configured to send an email upon failure (or successful completion)? Is there an easy option that can be configured inside the pipeline program, or any other options?
One possible option from my practice is to use Apache Airflow (on GCP, it's Cloud Composer).
Cloud Composer/Apache Airflow can send you failure emails when DAGs (jobs) fail. So you can host your job in Airflow and have it send emails upon failure.
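For illustration, here is a minimal sketch of such a DAG (Airflow 2 style) with failure emails enabled. The DAG id, email address, and the launch command are placeholders, and Composer/Airflow must have an email backend configured (SMTP, or SendGrid on Composer) for mail to actually be delivered:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "email": ["oncall@example.com"],   # placeholder address
    "email_on_failure": True,          # Airflow emails when a task fails
    "email_on_retry": False,
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="dataflow_with_failure_mail",   # placeholder DAG id
    default_args=default_args,
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Placeholder task: launch the Dataflow job here, e.g. with one of
    # Airflow's Dataflow operators or a gcloud command. If it fails,
    # Airflow emails the addresses configured above.
    run_job = BashOperator(
        task_id="run_dataflow_job",
        bash_command="echo 'launch Dataflow job here'",
    )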
You can check the quickstart [1] to see if it satisfies your need.
[1] https://cloud.google.com/composer/docs/quickstart
I am trying to trigger a build in Jenkins directly from Xray. I have successfully created a trigger in Jira and have provided the webhook URL and other information needed to run the build. But when triggering the build from any Test Plan, I get the following error:
Error publishing web request. Response HTTP status:503
ERROR: The requested URL could not be retrieved.
When I hit the same webhook URL in a browser, the build is triggered in Jenkins, so there seems to be no issue with the webhook URL itself.
One thing to note is that our Xray is in Jira Cloud, whereas Jenkins is running behind a VPN. Can anyone help me resolve this issue?
If your Xray on Jira Cloud is trying to access Jenkins, the Jenkins server needs to have a public URL. If it is behind a firewall, then it is not possible for Xray, or any other cloud tool, to trigger it directly.
You can call the Jenkins URL from your browser because you are probably connected to the VPN.
A possible workaround is to use a tool such as ngrok, as mentioned in the docs, to create a tunnel, but beware of the security implications of exposing your Jenkins URL to the world.
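Assuming Jenkins listens on its default port 8080, the tunnel itself is a one-liner; ngrok prints a public URL that you would then register as the webhook in Xray:

ngrok http 8080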
For reference, Xray cloud provides documentation/examples showcasing how to take advantage of Jira Cloud automation capabilities to trigger jobs, for example in Jenkins.
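For illustration, such an automation rule ultimately boils down to an HTTP request against Jenkins' remote build trigger. A minimal sketch of the equivalent call, where the host, job name "my-job", token, and credentials are all placeholders and the job has "Trigger builds remotely" enabled:

import requests

resp = requests.post(
    "https://jenkins.example.com/job/my-job/build",  # placeholder public URL
    params={"token": "MY_TRIGGER_TOKEN"},            # placeholder build token
    auth=("automation-user", "API_TOKEN"),           # placeholder credentials
    timeout=30,
)
resp.raise_for_status()  # Jenkins answers 201 Created when the build is queued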
We have multiple Jenkins masters, have enabled the Jenkins Prometheus plugin, and have connected these masters as data sources to Grafana. I am currently interested in finding jobs that have been waiting for executors for more than a certain time, and creating an alert based on this. I looked at the Jenkins metrics but did not find any metric suitable for this use case. How can I achieve this?
You can access the Jenkins build queue via Groovy and check for entries that have been waiting too long.
Groovy scripts can be run via the Jenkins REST API, which then becomes your interface for polling for such "blocked" jobs.
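A minimal sketch of that approach, posting a Groovy snippet to Jenkins' /scriptText endpoint. The host, credentials, and the 10-minute threshold are placeholders, and the authenticating user needs permission to run scripts:

import requests

GROOVY = """
long now = System.currentTimeMillis()
Jenkins.instance.queue.items.each { item ->
    long waitedMin = (now - item.inQueueSince).intdiv(60000)
    if (waitedMin > 10) {
        println "${item.task.name} waiting ${waitedMin} min: ${item.why}"
    }
}
"""

resp = requests.post(
    "https://jenkins.example.com/scriptText",  # placeholder master URL
    data={"script": GROOVY},
    auth=("monitor-user", "API_TOKEN"),        # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
print(resp.text)  # one line per queue item stuck past the threshold

An alert can then be raised whenever the response body is non-empty.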
Our company policy requires the policy constraint "compute.requireShieldedVm" to be enabled. However, when running a Cloud Dataflow job, it fails to create a worker with the error:
Constraint constraints/compute.requireShieldedVm violated for project projects/********. The boot disk's 'initialize_params.source_image' field specifies a non-Shielded image: projects/dataflow-service-producer-prod/global/images/dataflow-dataflow-owned-resource-20200216-22-rc00. See https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints for more information."
Is there any way when running a Dataflow job to request that a Shielded VM be used for the worker compute?
It is not possible to provide a custom image, as there is no such parameter that can be provided during job submission, as can be seen here: Job Submission Parameters.
Alternatively, if you are running a Python-based Dataflow job, you can set up the worker environment through setup files. An example can be found here: Dataflow - Custom Python Package Environment.
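For illustration, a minimal sketch of such a setup file, passed to the job with the --setup_file pipeline option; the package name and dependency list are placeholders:

# setup.py
import setuptools

setuptools.setup(
    name="my_dataflow_job",          # placeholder package name
    version="0.1.0",
    install_requires=["requests"],   # placeholder worker dependencies
    packages=setuptools.find_packages(),
)

Note, however, that this only controls the Python packages installed on the workers, not the worker boot image itself.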
I am trying to integrate Bitbucket Cloud and on-premise Jenkins, but once I enter the IP of my local Jenkins in Bitbucket Cloud, it shows an error that the URL is not valid.
Is there a way to solve this, or do I need to buy Jenkins cloud license?
Your local Jenkins server is not seen by a cloud Bitbucket server because it is an internal server.
You can solve it with one of these alternatives:
Ask your system administrator to expose your Jenkins server with a global IP address along with the Jenkins port (e.g. 8080) so the Bitbucket server will be able to access it. Note that this is not totally secure, as it exposes Jenkins to the internet.
Activate the Jenkins job that pulls from the remote Bitbucket server on a time interval: in the job's 'Build Triggers' section, check the 'Poll SCM' checkbox and set the cron schedule (for example 'H/15 * * * *' to build every 15 minutes; note that it will not build if there were no code changes).
I want a way to automatically discover Jenkins master servers and automatically monitor the health of the jobs on those servers, so that I can look at a single console (a Nagios host) to detect issues when a job is failing anywhere in integration.
Could someone help me out with finding Jenkins master servers using Nagios?
There's a Nagios plugin for retrieving job health information from Jenkins, but it looks like it requires manual configuration for each job; see the Nagios Jenkins plugin.
I'm not familiar enough with Nagios to know how any built-in auto-discovery works, but it looks like there are several example scripts (check_find_new_hosts and device discovery) for generating the necessary configuration from a network scan. You'll have to do some work to integrate the results of the scan into your Nagios instance. (IIRC, you need to restart Nagios after writing new configuration.)
To get the list of Jenkins servers, you can build on one of the existing network scan scripts for Nagios. The script should scan an IP range and identify hosts that respond to http://IP:8080/api/xml. The resulting XML document (JSON responses are also supported) should contain a root tag named <hudson> (on my instance; this may change to <jenkins> in a future release). If the server responds to this request, your script should generate the Nagios configuration for monitoring it, as in the sketch below.
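A minimal sketch of that discovery step, assuming a placeholder /24 subnet and Jenkins on its default port:

import xml.etree.ElementTree as ET

import requests

def find_jenkins_masters(subnet="192.168.1"):   # placeholder IP range
    masters = []
    for host in range(1, 255):
        url = f"http://{subnet}.{host}:8080/api/xml"
        try:
            root = ET.fromstring(requests.get(url, timeout=2).content)
        except (requests.RequestException, ET.ParseError):
            continue  # nothing listening, or not an XML API
        if root.tag in ("hudson", "jenkins"):
            masters.append(url)   # a Jenkins (or Hudson) master
    return masters

Each discovered URL would then be turned into a Nagios host definition.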
In addition, the XML response will contain a list of jobs, like:
<job>
<name>My Job</name>
<url>http://jenkins:8080/job/My%20job/</url>
<color>blue</color>
</job>
By iterating through this list, you get the job names, job URLs (for more details or status polling), and the current statuses (blue means success). This list of jobs can provide input to the Nagios Jenkins plugin configuration.
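A short sketch of that iteration against one discovered master (the hostname is a placeholder):

import xml.etree.ElementTree as ET

import requests

resp = requests.get("http://jenkins:8080/api/xml", timeout=5)
root = ET.fromstring(resp.content)
for job in root.findall("job"):
    name = job.findtext("name")
    url = job.findtext("url")
    color = job.findtext("color")  # "blue" = success, "red" = failing
    print(f"{name}: {color} ({url})")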
The Jenkins Remote API is documented on your Jenkins instance; just go to http://jenkins:8080/api.