When I try to open the job details from the Google Cloud web UI, the job never opens; instead, this message is shown in the main jobs listing:
A job with ID "2017-10-17_23_35_15-3310306527853724439" doesn't exist
gcloud dataflow jobs cancel repeatedly returns this kind of message:
Failed to cancel job [2017-10-17_23_35_15-3310306527853724439]:
(882c3a8a1f6e0d10): Workflow modification failed. Causes:
(19752e1d053cad56): Operation cancel not allowed for job 2017-10-17_23_35_15-3310306527853724439.
Job is not yet ready for canceling. Please retry in a few minutes.
Updating or deploying a new job with the same name doesn't work either. So how can I force-kill the job?
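Since the error only says to retry, all I have so far is looping on the same cancel command. A rough sketch, assuming gcloud exits with a non-zero status while the cancel keeps being rejected:

# Retry the cancel until the service accepts it (job ID from above)
while ! gcloud dataflow jobs cancel 2017-10-17_23_35_15-3310306527853724439; do
  echo "Cancel not accepted yet, retrying in 60s..."
  sleep 60
done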
Related
I have some jobs that trigger each other after they finish, but in some cases the last job, test3, isn't started, and I have no error message or idea why.
Most of the time the test2 build logs "Triggering a new build of test3", but in some cases that line simply isn't there. It doesn't matter whether test2 failed or passed, and there is no obvious pattern to when it skips triggering the last job. Does anyone have ideas on what to look at? I'm a little lost.
All runs are on Jenkins 2.332.3.
Settings and builds: (screenshots not included)
What can make a job not trigger the next one?
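For what it's worth, if I moved the trigger into a Pipeline job instead of the post-build action, it could be made explicit, so at least a failure to schedule test3 would show up in the log. A sketch, assuming the job names above (not what I currently run):

// Jenkinsfile sketch for test2: trigger test3 explicitly, regardless of the build result
pipeline {
    agent any
    stages {
        stage('test2') {
            steps {
                echo 'running the test2 checks'
            }
        }
    }
    post {
        always {
            // wait: false just queues test3; an error scheduling it is reported in this build's log
            build job: 'test3', wait: false
        }
    }
}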
I am attempting to cancel a Google Cloud Dataflow job using gcloud, and it's failing for a reason I don't understand:
# gcloud dataflow --project=XXXX jobs cancel --region=europe-west2 bhp-dp-pubsub-to-datalake
Failed to cancel job [bhp-dp-pubsub-to-datalake]: (fe9655fb12e69cb6): Could not cancel
workflow; user does not have sufficient permissions on project: XXX, or the job does not
exist in the project. Please ensure you have permission to access the job and the
`--region` flag, europe-west2, matches the job's region.
I know that I have permission to cancel jobs because I can do it from the UI.
Does anyone have any idea what might be wrong here?
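One thing I'm now double-checking: gcloud dataflow jobs cancel takes a job ID as its positional argument, not the display name, so passing bhp-dp-pubsub-to-datalake may itself be the problem. A sketch of looking the ID up first (JOB_ID is a placeholder):

# List jobs in the region and note the JOB_ID column for bhp-dp-pubsub-to-datalake
gcloud dataflow jobs list --project=XXXX --region=europe-west2

# Cancel using that JOB_ID, not the job name
gcloud dataflow jobs cancel --project=XXXX --region=europe-west2 JOB_ID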
I use Jenkins for my continuous integration testing, plus the possibility of starting manual checks.
My setup is a job that:
Polls a Mercurial repo every 10 minutes
Once a commit is detected, starts a build (clone + make)
Launches a Python test suite script
Gathers the results
Sometimes I want to be able to gracefully stop a job without using the "Stop" button, which simply aborts the test.
I managed to do it using a trick that checks for the presence of a file in the log directory used by the Python test suite.
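Roughly, the Python side of the trick looks like this (a sketch; the flag path and tests are illustrative):

import os

# Illustrative path: the flag file someone creates to request a graceful stop
STOP_FLAG = "/var/log/mytests/stop_requested"

def should_stop():
    # True once a graceful stop has been requested
    return os.path.exists(STOP_FLAG)

def run_suite(tests):
    # Run each test, but stop cleanly between tests if the flag file appears
    for name, test in tests:
        if should_stop():
            print("Stop requested; exiting gracefully after the last completed test")
            break
        print("Running", name)
        test()

if __name__ == "__main__":
    run_suite([("smoke", lambda: None), ("regression", lambda: None)])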
But I'm looking for a way to do it inside the Jenkins job itself.
Is there a way to have a customizable button for that purpose?
I tried the "Batch Task" plugin, which would have been perfect, BUT it waits for the Python script to complete before executing... so it's useless in my case (though the code works).
Thanks in advance for your help
A queued job in Jenkins automatically starts when the currently running job finishes or is aborted, which is the normal behaviour. I found a plugin called "Block Queued Job Plugin" which seems to do the same thing.
It blocks the queue of a project until the running job of that project has finished.
So what's the use of the "Block Queued Job Plugin"? I want to use it if it solves the following problem:
A build flow has a restart-slave-machine step in it. Example:
build( "RestartMachine" , MachineName: "ctttest-06" )
build( "CheckForShutdown")
build( "RunTest" )
Suppose there is an ongoing build flow and another build flow is in the queue. Whenever the running job executes "RestartMachine", the queued job starts executing and takes over the slave, which is not expected, since the ongoing job has not finished.
Is there a way to achieve that, i.e. getting the status of other jobs and checking whether their last run was a "success"?
I don't want to run this deployment job automatically on upstream success, but to trigger it manually, while still safeguarding it by checking that the (multiple) upstream jobs succeeded.
Thanks for the help
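In case it helps frame the question, this is the kind of check I have in mind, as a scripted Pipeline sketch (the upstream job names are placeholders, and Jenkins.instance access usually needs script approval):

// Refuse to deploy unless every upstream job's last completed run was SUCCESS
import hudson.model.Result

def upstreamJobs = ['build-app', 'integration-tests']   // placeholder upstream job names

node {
    stage('Check upstream results') {
        upstreamJobs.each { name ->
            def job = Jenkins.instance.getItemByFullName(name)
            def last = job?.lastCompletedBuild
            if (last == null || last.result != Result.SUCCESS) {
                error "Upstream job '${name}' last result was ${last?.result}; aborting deployment"
            }
        }
    }
    stage('Deploy') {
        echo 'All upstream jobs succeeded; deploying'
    }
}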