I got an error message from Dataflow. I tried to cancel the job manually, but that didn't work. The error:
2017-12-13 (03:44:56) Workflow failed. Causes: The Dataflow appears to be stuck. Please reach out to t...: Workflow failed. Causes: The Dataflow appears to be stuck. Please reach out to the Dataflow team at http://stackoverflow.com/questions/tagged/google-cloud-dataflow.
Can someone help with this job? Thanks!
I had a similar problem. Maybe this is the reason: when the default VPC network has been renamed or changed on a project, you need to pass the new network name through WorkerOptions.network (--network on the CLI) when running a job remotely (DataflowRunner); otherwise the job gets stuck with no clear error message about what happened.
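For reference, a minimal sketch of passing the network explicitly with the Beam Python SDK (the project, bucket, and network names are placeholders; on the command line the equivalent flag is --network):

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # All project, bucket and network names below are placeholders.
    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
        network="my-renamed-network",  # same setting as WorkerOptions.network / --network
    )

    with beam.Pipeline(options=options) as p:
        p | beam.Create(["hello"])  # trivial transform so the pipeline is not empty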
I am trying to set up a CI/CD project. Every time I try to create a job and save it, an error pops up. I have gone through a few issues available on the internet, but none of them resolved the problem.
Can anybody help in this regard? Thanks
Tomcat 9 Server
Java 16.0.1
Jenkins 2.361.1
I would suggest the following:
Check the Jenkins logs on the server and try to fix the underlying error.
If that doesn't work, try restarting Jenkins.
We are using Terraform to deploy the resource "google_dataflow_job" on Google Cloud.
We did a successful deployment a few weeks ago (the Dataflow API has been enabled for years).
We now get this error when executing a "terraform plan":
Error: Error when reading or editing Dataflow job 2020-10-07_06_20_01-18099947630909311965: googleapi: Error 400: (10a8bef84dbdde13): There is no cloudservices robot account for your project. Please ensure that the Dataflow API is enabled for your project., failedPrecondition
We have these accounts:
Does anyone know how to add this cloudservices robot account?
I got the same error when running Terraform, and it was solved by recreating the job.
I did the following steps:
Cancel the job from the Dataflow console
Remove the job from the Terraform state (terraform state rm google_dataflow_job.<name>)
Rerun Terraform (terraform apply) so the Dataflow job is created again
From time to time my pipeline fails with an HTTP 401 error while trying to get the Jenkinsfile from BitBucket. In this case, instead of crashing with an error message, I'd like to print a friendly error message and possibly send an email.
Is there a way to check for connectivity to the BitBucket server, before trying to run the pipeline? If this is not possible, can I catch this error somehow, and take action on it?
I'm guessing this would have to be something built into Jenkins, or on the Jenkins server, since my Jenkinsfile resides within the repository on the BitBucket server.
Any thoughts would be greatly appreciated, as I'm a Jenkins novice.
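As an illustration of the "check connectivity first" idea, here is a minimal sketch of a pre-flight probe in Python (the URL and names are placeholders, and this is not something Jenkins provides out of the box; it would have to run as a separate step or wrapper before triggering the pipeline):

    import urllib.error
    import urllib.request

    BITBUCKET_URL = "https://bitbucket.example.com"  # placeholder

    def bitbucket_reachable(url: str, timeout: int = 10) -> bool:
        """Return True if the server answers at all; a 401/403 still means it is up."""
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except urllib.error.HTTPError:
            # The server responded (e.g. 401), so connectivity itself is fine.
            return True
        except urllib.error.URLError:
            # DNS failure, refused connection, timeout, etc.
            return False

    if not bitbucket_reachable(BITBUCKET_URL):
        print("BitBucket is unreachable - skip the pipeline and send a notification instead.")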
I was using the Delivery Pipeline plugin in Jenkins and it was working fine, but when I try to create a new pipeline it gives an error like:
"Error communicating to server!"
NOTE: My Jenkins is running fine with build-pipeline.
Could you please provide your input on why this is happening?
You need to re-install the Delivery Pipeline plugin and restart Jenkins. That resolved the issue.
We are testing out the CloudBees Jenkins service, but all of our jobs have stopped showing any console output. This seems to happen both in jobs that succeed and jobs that fail.
We have created a new job, and cannot see why it is failing since the console output appears empty.
I'm not sure if we have hit some limit in the free version of the service, or if there is some current bug in the service that is preventing the console output from being visible. Has anyone else seen this?
Restarting your Jenkins instance should resolve this (this is the 2nd report we've had of this behaviour recently) - please log a support ticket if that is not the case.
To restart, browse to https://[account].ci.cloudbees.com/restart/