Updating a Dataflow job from the REST API - google-cloud-dataflow

I am trying to programmatically update a Cloud Dataflow job using the REST API as described here.
I have a PubSub to BigQuery job, and my end goal is to replace the BigQuery output table.
I've tried updating the current job with a new job by using the replacedByJobId field, but I always get this error:
{
  "error": {
    "code": 400,
    "message": "(b7fd8310f1b85ccf): Could not modify workflow; invalid modifier value: 0",
    "status": "INVALID_ARGUMENT"
  }
}
Request body:
{
  "id": "jobid",
  "projectId": "projectId",
  "replacedByJobId": "newJobId"
}
Is there another way to either replace a running job's parameters (OutputTable) or replace a running job with a new similar job?

In order to update a job you also need to provide a compatible replacement job. Note that update is currently only supported using the Java SDK.
You can find documentation on updating using the Java SDK at: Updating an Existing Pipeline: Launching Your Replacement Job.

java -jar pipeline/build/libs/pipeline-service-1.0.jar \
--project=my-project \
--zone=us-central1-f \
--streaming=true \
--stagingLocation=gs://my-bucket/tmp/dataflow/staging/ \
--runner=DataflowPipelineRunner \
--numWorkers=5 \
--workerMachineType=n1-standard-2 \
--jobName=ingresspipeline \
--update

Related

Creating / Getting a Cloud Run Job using the Python API Client Library

I created a Cloud Run Job using the command line:
gcloud --verbosity=debug beta run jobs create my-job \
--image=us-docker.pkg.dev/cloudrun/container/job:latest
When I list the jobs using the API Client Library, my-job is returned:
import googleapiclient.discovery

with googleapiclient.discovery.build('run', 'v1') as client:
    request = client.namespaces().jobs().list(parent='namespaces/my-project')
    response = request.execute()
    print(response)
However, when I try to get the job using the following snippet, I get 404 "Requested entity was not found":
...
request = client.namespaces().jobs().get(name='namespaces/my-project/jobs/my-job')
response = request.execute()
...
I am also unable to create a job using the following snippet; this again returns 404 "Requested entity was not found":
request = client.namespaces().jobs().create(
    parent='namespaces/my-project',
    body={
        "metadata": {
            "name": "my-job2",
        },
        "spec": {
            "template": {
                "spec": {
                    "template": {
                        "spec": {
                            "containers": [{
                                "image": "us-docker.pkg.dev/cloudrun/container/job:latest"
                            }],
                        }
                    }
                }
            }
        },
    })
I have Cloud Run Admin permissions for the project.
What am I missing?
Looking at the API reference, it appears you are using the calls correctly. However, the get and create calls must be sent to the correct regional endpoint, whereas list also works against the global endpoint. It may be that you are using the global endpoint for the list, create, and get calls; make sure you use the regional endpoint for get and create.
The global endpoint's documentation for v1 states: "For v1, this endpoint only supports Global List: use regional endpoints instead."
You can see the difference using these commands from Cloud Shell (this assumes your region is 'us-central1'; if not, that needs to be updated). Against the regional endpoint:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  https://us-central1-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/my-project/jobs/my-job

And against the global endpoint:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  https://run.googleapis.com/apis/run.googleapis.com/v1/namespaces/my-project/jobs/my-job
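If you are using the Python API Client Library, the same fix applies: point the discovery client at the regional endpoint. A minimal sketch, assuming the job lives in us-central1:

import googleapiclient.discovery

# Build the client against the regional endpoint (assumption: the job
# lives in us-central1; adjust the region to match your deployment).
with googleapiclient.discovery.build(
        'run', 'v1',
        client_options={'api_endpoint': 'https://us-central1-run.googleapis.com'},
) as client:
    request = client.namespaces().jobs().get(
        name='namespaces/my-project/jobs/my-job')
    response = request.execute()
    print(response)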

Excluding Draft PRs from Jenkins

I would like to be able to choose to run Jenkins builds only on PRs that are not marked as draft. Is there currently a way to do that?
I found something like this: https://github.com/jenkinsci/github-branch-source-plugin/pull/416, but cannot seem to find any place in the Jenkins dashboard that will allow me to exclude draft PRs.
Thanks!
The GitHub API allows you to view details of a PR and see if the PR is a draft:
Request:
curl \
-H "Accept: application/vnd.github+json" \
-H "Authorization: token <TOKEN>" \
https://api.github.com/repos/OWNER/REPO/pulls/PULL_NUMBER
Response:
{
  "url": "https://api.github.com/repos/octocat/Hello-World/pulls/1347",
  "id": 1,
  "node_id": "MDExOlB1bGxSZXF1ZXN0MQ==",
  ...
  "auto_merge": null,
  "draft": false,  <---- This is what you want
  "merged": false,
  ...
  "deletions": 3,
  "changed_files": 5
}
You will have to modify the Jenkins job itself to perform this API call, parse the response to get the value of draft, and continue or abort the build depending on that value.
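A minimal sketch of such a check (it assumes the requests package is available on the agent, and the environment variable names carrying the PR coordinates and token are hypothetical):

import os
import sys

import requests

# Hypothetical environment variables supplied by the Jenkins job.
owner = os.environ['PR_OWNER']
repo = os.environ['PR_REPO']
number = os.environ['PR_NUMBER']
token = os.environ['GITHUB_TOKEN']

resp = requests.get(
    f'https://api.github.com/repos/{owner}/{repo}/pulls/{number}',
    headers={'Accept': 'application/vnd.github+json',
             'Authorization': f'token {token}'})
resp.raise_for_status()

# Exit non-zero to abort the build while the PR is still a draft.
if resp.json().get('draft', False):
    print('PR is a draft; aborting build.')
    sys.exit(1)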

Resource not found - Triggering BitBucket Pipeline using curl

I created a new project and added a repository to it in my workspace. I then added a bitbucket-pipelines.yml to build a pipeline. I am able to trigger the pipeline manually; however, when trying to execute it using the Bitbucket API via curl, I get the error below every time:
Can anyone suggest what I am missing here?
NOTE: The same curl command (below) is able to run other pipelines in different repositories in the same workspace, so do I need to enable something in my current repository to access the pipeline using the Bitbucket APIs? TIA
Error:
{"type": "error", "error": {"message": "Resource not found"}}%
cURL command:
curl -X POST -is -u username:password \
-H 'Content-Type: application/json' \
https://api.bitbucket.org/2.0/repositories/workspace-name/repo-name/pipelines/ \
-d '{
  "target": {
    "type": "pipeline_ref_target",
    "ref_type": "branch",
    "ref_name": "master",
    "selector": {
      "type": "custom",
      "pattern": "create-tenant"
    }
  }
}'
It turned out the token I was using in the curl command had not been added under Repository Settings -> User and group access. Once it was added, I was able to execute the pipeline successfully.
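For reference, the same trigger as a Python sketch (assuming the requests package; the workspace, repository, and credentials are placeholders, and the token must have the repository access described above):

import requests

url = ('https://api.bitbucket.org/2.0/repositories/'
       'workspace-name/repo-name/pipelines/')
payload = {
    'target': {
        'type': 'pipeline_ref_target',
        'ref_type': 'branch',
        'ref_name': 'master',
        'selector': {'type': 'custom', 'pattern': 'create-tenant'},
    }
}

# Placeholder credentials; use an app password or token that has been
# granted access to the repository.
resp = requests.post(url, json=payload, auth=('username', 'password'))
resp.raise_for_status()
print(resp.json())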

How can I use a File Spec in an API call in JFrog

I have a question about how to use a File Spec in an API call in JFrog.
I used the Jenkins Artifactory Plugin to upload and download artifacts to JFrog, and I am trying to rewrite that function to do the same thing using the JFrog API (GET/PUT) directly.
The problem is that for some artifacts I used a File Spec to set some properties, and then uploaded with that File Spec:
"files": [
{
"pattern": "${file}",
"target": "${target}" """
if (runID) {
uploadSpec += """,
"props": "artifactId=${runID}"
"""
}
uploadSpec += """
}
]
As you can see, this sets artifactId.
In this case, when I use the JFrog API to upload artifacts, how should I set the properties?
sh """
curl -sSf -u user:pw -X PUT -T ${zipFile} 'https://${config.artifactory.name}.xxxx:443/artifactory/${path}'
"""
How can I call the PUT API and also set "props": "artifactId=${runID}"?
Any solutions?
First: if you can use the JFrog CLI, you should, because it makes things simpler and provides some advanced features out-of-the-box, such as batch parallel uploads/downloads, File Specs, attaching properties, build-info, and authentication.
If you still want to use the Artifactory API directly for setting properties, which is indeed a viable option, you can do one of the following:
Add the properties as matrix parameters as part of the upload (deploy) API call.
In your case, it should be something like:
sh """
curl -sSf -u user:pw -X PUT -T ${zipFile} 'https://${config.artifactory.name}.xxxx:443/artifactory/${path};artifactId=${runID}'
"""
Note the ;key=value at the end of the URL.
Alternatively, do a second call, after the upload, to set the item properties.
In your case, using the set item properties API, it would be something like:
sh """
curl -sSf -u user:pw -X PUT 'https://${config.artifactory.name}.xxxx:443/artifactory/api/storage/${path}?properties=artifactId=${runID}'
"""
Or, using the update item properties API:
sh """
curl -sSf -u user:pw -X PATCH 'https://${config.artifactory.name}.xxxx:443/artifactory/api/metadata/${path}' -d '{ "props": { "artifactId" : "${runID}" } }'
"""
For more information, see:
Working with JFrog Properties
Using Properties in Deployment and Resolution
Artifactory REST API - Item Properties

Jenkins Webhook Header as Argument to Shell script

I'm trying to trigger a Jenkins job through its webhook using a curl command whenever there is an EC2 Spot Instance Interruption Warning, with the sample event below. All of this is done in an AWS Lambda function invoked by a CloudWatch Events trigger.
{
  "version": "0",
  "id": "1e5527d7-bb36-4607-3370-4164db56a40e",
  "detail-type": "EC2 Spot Instance Interruption Warning",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "1970-01-01T00:00:00Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ec2:us-east-1b:instance/i-0b662ef9931388ba0"
  ],
  "detail": {
    "instance-id": "i-0b662ef9931388ba0",
    "instance-action": "terminate"
  }
}
My aim is to get the instance-id from the event and pass it as a header to the Jenkins webhook that triggers the job; that header then has to be passed as an argument to the underlying Python script in the Jenkins job.
I tried the approach below, which didn't give me success, and I am not sure if this is how it is done. :)
curl -H 'param: instance' https://jenkins.url/generic-webhook-trigger/invoke\?token\=jenkins-job
The Generic Webhook Trigger is configured as below.
The final Python script invocation in the job is configured as below.
python maintenance.py $.param
I'm trying to get the final invocation to look like the one below. Please let me know if you know of any approach to get this done. TIA
python maintenance.py i-0b662ef9931388ba
I fixed this by adding the following.
Add a string parameter under This project is parameterized.
Next, under Generic Webhook Trigger, I added a header parameter named param.
Now we can directly pass this parameter to the build command using $param.
The curl command is now modified to the below (note that the backslash escapes before ? and = are dropped, since the URL is already inside single quotes).
curl --location --request POST 'https://jenkins.url/generic-webhook-trigger/invoke?token=jenkins-job' --header 'Content-Type: application/json' --header "param: $EVENT_DATA" --data-raw ''
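Putting it together on the Lambda side, a minimal handler sketch (the Jenkins URL and token are placeholders, and it assumes the header parameter is named param as above):

import urllib.request

# Placeholder webhook URL and token.
JENKINS_URL = ('https://jenkins.url/generic-webhook-trigger/'
               'invoke?token=jenkins-job')

def handler(event, context):
    # The CloudWatch event shown above carries the id under detail.instance-id.
    instance_id = event['detail']['instance-id']
    req = urllib.request.Request(
        JENKINS_URL,
        data=b'',
        method='POST',
        headers={'Content-Type': 'application/json',
                 'param': instance_id},  # picked up by the header parameter
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status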
What did I learn from this?
I was too lazy to read the documentation and finally figured it out only after reading it.
