Dataflow job create from template failing with error: Workflow failed - google-cloud-dataflow

I am trying to get started with Cloud Dataflow by running the simple WordCount template, but the job is failing without any stated reason. In past runs I saw informative errors on Dataflow jobs, which I was able to fix, but now my jobs consistently fail with the following status in the log. Job ID: 2022-10-29_04_23_45-16586439030273551039
No errors are shown, so I have nothing to go on.
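When the job page itself shows no error, the worker and job-message logs in Cloud Logging usually do; `gcloud dataflow jobs describe <job-id> --region=<region>` and the job's step logs are the usual places to look. One small aid when searching logs: the Dataflow job ID embeds the submission timestamp, which narrows the Cloud Logging time window. A minimal sketch (the job-ID layout here is an observation from IDs like the one above, not a documented contract):

```python
from datetime import datetime, timezone

def job_submission_time(job_id: str) -> datetime:
    """Extract the UTC submission timestamp embedded in a Dataflow job ID.

    Observed format: '2022-10-29_04_23_45-16586439030273551039' --
    a date, an underscore-separated time, then a numeric suffix.
    """
    stamp = job_id.rsplit("-", 1)[0]  # '2022-10-29_04_23_45'
    return datetime.strptime(stamp, "%Y-%m-%d_%H_%M_%S").replace(tzinfo=timezone.utc)

print(job_submission_time("2022-10-29_04_23_45-16586439030273551039"))
# -> 2022-10-29 04:23:45+00:00
```

With that timestamp you can bracket a Cloud Logging query to the minutes around worker startup, which is where template jobs most often fail silently (e.g. missing permissions on the temp bucket or staging location).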

Related

Zephyr For Jira Test Management Plugin of Jenkins hangs while sending test report to Jira

I am trying to use the Zephyr For Jira Test Management Plugin in Jenkins to create test cases automatically and update their status. Jenkins creates the test case fine, but it then hangs and never updates the Jira issue's status (pass/fail). Does anyone know if this is a bug, or am I doing something wrong?
Connection is working and validated:
Connection image
Plugin gets information at post build actions: Post build config
Jenkins console output says that build is successful, and then hangs: Success message and hanging
Error message from cmd that I don't understand: Error message
I got it working. In the post-build actions (second picture), under "Publish test results to Zephyr for Jira", I changed the cycle from Ad hoc to New Cycle, and it worked!

Apache Airflow Job Fails and Thinks Successful Dataflow Jobs are Zombies

Airflow job fails after detecting successful Dataflow job as a zombie.
I run an hourly Dataflow job that is triggered by an external Airflow instance using the Python DataflowTemplateOperator. A couple of times a week, Dataflow becomes completely unresponsive to status pings. When I've caught the error in real time and tried looking at the status of the Dataflow job in the GCP UI, the page won't load, despite my having a network connection and being able to view other pages on the GCP site. After a few minutes, everything returns to normal working order. This seems to happen towards the end of a job's run or when workers are shutting down. The Dataflow jobs don't fail and don't report any errors. Airflow thinks they've failed because, when Dataflow becomes unresponsive, Airflow assumes the jobs are zombies. I needed a fast solution and just increased my number of retries, but I would like to understand the problem and find a better solution.
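The underlying issue is that a single missed status ping is being treated as terminal. A more tolerant poller only gives up after several consecutive failed pings, which rides out transient API unresponsiveness like the above. This is only a sketch of the retry idea, not Airflow's internal zombie detection; `get_status` is a placeholder for whatever status call your orchestrator makes:

```python
import time

def poll_until_done(get_status, interval_s=60, max_consecutive_failures=5,
                    sleep=time.sleep):
    """Poll a job's status, tolerating transient unresponsiveness.

    get_status() returns 'RUNNING', 'DONE', or 'FAILED', or raises when a
    status ping fails. Only after max_consecutive_failures back-to-back
    failed pings do we give up, instead of declaring a zombie on the first miss.
    """
    failures = 0
    while True:
        try:
            status = get_status()
        except Exception:
            failures += 1
            if failures >= max_consecutive_failures:
                raise RuntimeError("job unresponsive; giving up")
            sleep(interval_s)
            continue
        failures = 0  # any successful ping resets the failure budget
        if status in ("DONE", "FAILED"):
            return status
        sleep(interval_s)

# Demo with a flaky status source: two failed pings, then recovery.
_responses = iter([RuntimeError("ping failed"), RuntimeError("ping failed"),
                   "RUNNING", "DONE"])
def _flaky_status():
    r = next(_responses)
    if isinstance(r, Exception):
        raise r
    return r

result = poll_until_done(_flaky_status, interval_s=0, sleep=lambda _: None)
print(result)  # -> DONE
```

In Airflow terms, increasing retries (as the asker did) approximates this at the task level; a poller like the above does it at the ping level, so a healthy job is never marked failed by one unresponsive API call.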

Dataflow appears to be stuck. Can someone cancel the job?

Got an error message from Dataflow. I tried to cancel the job manually, but that didn't work. Error:
2017-12-13 (03:44:56) Workflow failed. Causes: The Dataflow appears to be stuck. Please reach out to the Dataflow team at http://stackoverflow.com/questions/tagged/google-cloud-dataflow.
Can someone help on this job? Thanks!
Had a similar problem. Maybe this is the reason: when the default VPC network has been renamed or changed on a project, you need to pass the new network name through WorkerOptions.network (--network on the CLI) when running a job remotely (DataflowRunner); otherwise the job gets stuck with no clue or clear error message about what happened.
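For a Beam Python pipeline, that means including --network (and, if you use a non-default subnet, --subnetwork) among the DataflowRunner options. A minimal sketch of assembling those options; the project, bucket, and network names below are placeholders:

```python
def dataflow_args(project, region, temp_location, network=None, subnetwork=None):
    """Build DataflowRunner command-line options, including the --network
    flag that must name the (possibly renamed) VPC network explicitly."""
    args = [
        "--runner=DataflowRunner",
        f"--project={project}",
        f"--region={region}",
        f"--temp_location={temp_location}",
    ]
    if network:
        args.append(f"--network={network}")
    if subnetwork:
        args.append(f"--subnetwork={subnetwork}")
    return args

print(dataflow_args("my-project", "us-central1",
                    "gs://my-bucket/tmp", network="my-renamed-vpc"))
```

These args would then be passed to `apache_beam.options.pipeline_options.PipelineOptions` (or straight onto the command line); the point is simply that the network is named explicitly rather than defaulting to a VPC called "default" that no longer exists.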

Cloudbees Jenkins job fails with no console output

We are testing out the CloudBees Jenkins service, but our jobs have stopped showing any console output. This happens both for jobs that succeed and jobs that fail.
We have created a new job, and cannot see why it is failing since the console output appears empty.
I'm not sure if we have hit some limit in the free version of the service, or if there is some current bug in the service that is preventing the console output from being visible. Has anyone else seen this?
Restarting your Jenkins should resolve this (this is the 2nd report we've had of this behaviour recently) - please raise a support ticket if that is not the case.
To restart, browse to https://[account].ci.cloudbees.com/restart/

Jenkins Log to see what is causing a job to fail

I have a job in Jenkins that runs NUnit tests for a project. The Jenkins job fails, although all the unit tests pass.
So on Jenkins it says the build fails - but test results show no failures.
I cannot seem to figure out what is causing the job to fail. Is there some way to see why a Jenkins job is marked as failed, i.e. a detailed log file for the job? Any suggestions would be much appreciated.
Have you checked the Console Output for the failed job?
That said, errors in the Console Output can be hard to find, and then harder to understand. Sometimes I need to log in/remote to the build machine and build the solution, or run the unit tests, manually to see the error in an uncluttered, non-abstracted way (i.e., in the VisualStudio IDE or the NUnit GUI).
Oh, and the Log Parser Plugin makes finding errors in Jenkins much easier.
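When the console output is long, a quick offline scan for the usual failure markers can save time; Jenkins itself ends every failed build's console log with the line "Finished: FAILURE". The patterns below are illustrative, not the Log Parser Plugin's actual rules:

```python
import re

# Common failure markers in Jenkins console output (illustrative list).
ERROR_PATTERNS = [
    re.compile(r"\bERROR\b", re.IGNORECASE),
    re.compile(r"\bexception\b", re.IGNORECASE),
    re.compile(r"Finished: FAILURE"),  # Jenkins' final status line
]

def scan_console(text):
    """Return (line_number, line) pairs that match any error pattern."""
    hits = []
    for n, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in ERROR_PATTERNS):
            hits.append((n, line))
    return hits

sample = """Started by user admin
Running tests...
All tests passed
Finished: FAILURE"""
print(scan_console(sample))  # -> [(4, 'Finished: FAILURE')]
```

In a case like the asker's (tests pass but the build fails), the match is often not in the test section at all but in a post-build step, so scanning the whole log rather than just the test output matters.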