Our pipeline failed, but the graph in the Developers Console still shows it as having succeeded. However, the results were not written to BigQuery, and the job log clearly shows that it failed.
Shouldn't the graph show it as failed too?
Another example:
This was a bug with the handling of BigQuery outputs that was fixed in a release of the Dataflow service. Thank you for your patience.
Related
I am integrating JMeter with Jenkins and displaying the result/report using the Performance Plugin. So far it works fine: the report shows whether each request succeeded or failed, how long it took, the error percentage, and so on.
I would like to display more info when a request fails, such as the response data, or, if it failed because of an assertion, the assertion message. In other words, it should act like the View Results Tree listener in the JMeter GUI, but I do not know how to attach that to the Performance Plugin in Jenkins.
Does anyone have an idea how to achieve it? Thank you in advance!
What you can do is add a View Results Tree to your plan and:
Fill in the file name property.
Click Configure and check everything except CSV Column Names.
Set Errors only.
This way, when an error occurs, you'll have all the details in this XML file, which you can load into JMeter after the test to analyze the errors.
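If you prefer to configure this once rather than per test plan, roughly the same behaviour can be set through JMeter's save-service properties (a sketch using standard property names from jmeter.properties; put the overrides in user.properties):

```properties
# user.properties -- write results as XML and keep full detail for failed samples
jmeter.save.saveservice.output_format=xml
# save response data only when a sample fails
jmeter.save.saveservice.response_data.on_error=true
# record all assertion results (including failure messages)
jmeter.save.saveservice.assertion_results=all
```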
My Dataflow batch job did not finish within 5 hours; it is still cancelling.
I run this type of job from a scheduler every 10 minutes. Normally it finishes in 10 minutes, but this time it took over 5 hours!
My job is
2018-08-26_13_30_17-1172470278423820020
The error log is here (Stackdriver):
2018-08-27 (06:33:14) Workflow failed. Causes: The Dataflow job appears to be stuck because no worker activity has been se...
2018-08-27 (08:34:08) Workflow failed. Causes: The Dataflow job appears to be stuck because no worker activity has been se...
2018-08-27 (10:34:58) Workflow failed. Causes: The Dataflow job appears to be stuck because no worker activity has been se...
Workflow failed. Causes: The Dataflow job appears to be stuck because no worker activity has been seen in the last 1h. You can get help with Cloud Dataflow at https://cloud.google.com/dataflow/support.
Generally, "The Dataflow job appears to be stuck because no worker activity has been seen in the last 1h" is caused by setup taking too long. Just increase worker resources (via the --machine_type parameter) to overcome the issue.
In my case I was installing several dependencies that required building wheels (pystan, fbprophet), and it took more than an hour on the minimal machine (n1-standard-1 with 1 vCPU and 3.75GB RAM). Using a more powerful instance (n1-standard-4, which has 4 times more resources) solved my problem.
This is possibly not the cause if only a single job gets stuck, but it may help someone else getting the same error.
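As a sketch, for a Python pipeline the machine type is passed on the command line when launching the job (the script name, project, and region below are hypothetical; only --machine_type is the flag being discussed):

```shell
python my_pipeline.py \
  --runner DataflowRunner \
  --project my-gcp-project \
  --region us-central1 \
  --machine_type n1-standard-4
```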
Such a situation can happen in three major cases:
a) Some task takes more than an hour to process.
b) Some task got stuck processing.
This is generally caused by transforms that take too long to process, or enter blocking state.
The best way to debug is to check the previous logs and see whether there were any errors or unexpected states. It seems that you have tried to re-run the job and it still failed. In that case you can add extra logging to the step that got stuck and see which data it got stuck on.
c) There is failure on Apache Beam / Dataflow side.
This is a rare case. Please create a support ticket if you believe this is the issue.
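For case (b), one low-tech way to find the stuck element is to wrap the per-element callable so every element is logged just before processing; the last element logged in Cloud Logging before the worker goes quiet is the suspect. A minimal sketch (the beam.Map usage and parse_row are hypothetical):

```python
import logging

def logged(fn):
    """Wrap a per-element function so each element is logged before processing."""
    def wrapper(element):
        # The last element logged before the worker stops reporting is the one to inspect.
        logging.info("Processing element: %r", element)
        return fn(element)
    return wrapper

# Hypothetical usage inside a Beam pipeline:
#   lines | beam.Map(logged(parse_row))
```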
I just recovered from this issue.
As mentioned in other responses, this error is due to setup taking too long. This is probably pip being unable to resolve the dependencies in the setup.py file. It does not fail outright; it retries "forever" looking for suitable versions, leading to a timeout.
Recreate the environment locally using the necessary packages.
Mark the versions that satisfy the dependencies.
Define these versions explicitly in the setup.py.
I don't know why this worked but updating the package dependencies in setup.py got the job to run.
import setuptools

dependencies = [
    'apache-beam[gcp]==2.25.0',
    'google-cloud-storage==1.32.0',
    'mysql-connector-python==8.0.22',
]

setuptools.setup(install_requires=dependencies)
I'm using TFS 2017 and have tried SonarQube 5.6.2, 5.6.4, and 6.2.
I'm trying to set up SonarQube integration for pull requests. On pull requests that don't appear to have any issues, the SonarQube analysis runs fine. It looks like it only fails when issues are found and it tries to read sonar-report.json to post the issues to the pull request. I'm receiving the following error on builds that appear to find issues:
------------------------------------------------------------------------
EXECUTION SUCCESS
------------------------------------------------------------------------
Total time: 5:25.577s
Final Memory: 56M/600M
------------------------------------------------------------------------
The SonarQube Scanner has finished
Creating a summary markdown file...
Analysis results: http://somedomain:9000/dashboard/index/CP
Post-processing succeeded.
Fetching code analysis issues and posting them to the PR...
System.Management.Automation.RuntimeException: Could not find the SonarQube issue report at D:\agent2-TFS-Build01\_work\13\.sonarqube\out\.sonar\sonar-report.json. Unable to post issues to the PR. ---> System.IO.FileNotFoundException: Could not find the SonarQube issue report at D:\agent2-TFS-Build01\_work\13\.sonarqube\out\.sonar\sonar-report.json. Unable to post issues to the PR.
--- End of inner exception stack trace ---
at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke)
at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync)
at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings)
at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings)
at Microsoft.TeamFoundation.DistributedTask.Handlers.PowerShellHandler.Execute(ITaskContext context, CancellationToken cancellationToken, Int32 timeoutInMinutes)
at Microsoft.TeamFoundation.DistributedTask.Worker.JobRunner.RunTask(ITaskContext context, TaskWrapper task, CancellationTokenSource tokenSource)
As far as I can tell, there is no way to configure the location of this report. One thing I read was that if you aren't failing the build on quality gate failures then you should also not enable the option to include full analysis report in the build summary. There were no specifics on why this is the case. I currently have the option to include the report enabled. Could this be the cause of my problem? We are not failing builds on quality gate failures yet because I was trying to give devs some time to adjust to the changes. Anyone know what's going on here? Here is a screenshot of my prepare analysis settings.
You can find a copy of the logs for the end analysis step here. I've redacted the domain and username; everything else in the log is left untouched.
Edit 1/19:
Since I don't have "fail on quality gate failure" enabled, I am not getting an error message if the build fails the quality gate. I'm receiving the error message I posted above about the missing sonar-report.json. Here is what I see for pull requests with zero issues:
Fetching code analysis issues and posting them to the PR...
SonarQube found 0 issues out of which 0 are new
True
Processing 0 new messages
No new messages were posted
Uploading the legacy summary report. The new report is not uploaded if not enabled, if the SonarQube server version is 5.2 or lower or if the build was triggered by a pull request
The build was not set to fail if the associated quality gate fails.
This is why I think it only happens on builds that do have some issues to post to the PR. When is the sonar-report.json that it's looking for supposed to be written? I looked in the workspace on the build server and the file is definitely not there.
Edit 1/30/17:
Here is some additional info that may help figure out what is going on. Currently I have 26 projects that run an SQ analysis on both PR and CI builds. All CI builds are working as expected, and 24 of the 26 PR builds are working as expected. The only two builds that are failing are both web app projects. One of them is made up of C#, TypeScript, and JavaScript; the other is C#, VB, TypeScript, and JavaScript. All the projects that succeed are strictly C# applications. What I have noticed is that on all of the C# applications I see a log message similar to this:
INFO: Performing issue tracking
INFO: 1033/1033 components tracked
INFO: Export issues to D:\agent2-TFS-Build01\_work\35\.sonarqube\out\.sonar\sonar-report.json
INFO: ANALYSIS SUCCESSFUL
This little Export bit is missing from the failed builds. Here is a log snippet from one of the failed PR builds. I would expect the logs to be in a somewhat similar order, but the "Export issues to ..." section is missing. Also, it looks like it's uploading a full analysis report even though this is a PR build:
INFO: Analysis report generated in 2672ms, dir size=10 MB
INFO: Analysis reports compressed in 2750ms, zip size=4 MB
INFO: Analysis report uploaded in 583ms
INFO: ANALYSIS SUCCESSFUL, you can browse http://~:9000/dashboard/index/AppName
I'm getting tons of "Missing blame information for the following files: [long list of files]" errors during the analysis for the JS files generated by the TypeScript compiler. I don't know if those errors have anything to do with why this is failing, but they are the only errors I see in the logs for that build step.
So, now I guess the problem to figure out is why the issues for the two web app projects are not being exported to sonar-report.json. As you can see in the following log messages, both projects are started as PR builds with the analysis mode set to issues and the export path set to sonar-report.json, but by the time the scan is done the export step is skipped.
##[debug]Calling InvokeGetRestMethod "/api/server/version"
##[debug]Variable read: MSBuild.SonarQube.HostUrl = http://somedomain:9000/
##[debug]Variable read: MSBuild.SonarQube.ServerUsername = ********
##[debug]Variable read: MSBuild.SonarQube.ServerPassword =
##[debug]GET http://somedomain:9000/api/server/version with 0-byte payload
##[debug]received 3-byte response of content type text/html;charset=utf-8
##[debug]/d:sonar.ts.lcov.reportPath="C:\TFS-Build02-Agent2\_work\11\s\Source\AppName.Web\CodeCoverage\lcov\lcov.info" /d:sonar.ts.tslintconfigpath="C:\TFS-Build02-Agent2\_work\11\s\Source\AppName.Web\tslint.json" /d:sonar.ts.tslintruledir="C:\TFS-Build02-Agent2\_work\11\s\Source\AppName.Web\tslint-rules\" /d:sonar.ts.tslintpath="C:\TFS-Build02-Agent2\_work\11\s\Source\AppName.Web\node_modules\tslint\bin\tslint" /d:sonar.analysis.mode=issues /d:sonar.report.export.path=sonar-report.json
And the second one that is failing:
##[debug]Calling InvokeGetRestMethod "/api/server/version"
##[debug]Variable read: MSBuild.SonarQube.HostUrl = http://somedomain:9000/
##[debug]Variable read: MSBuild.SonarQube.ServerUsername = ********
##[debug]Variable read: MSBuild.SonarQube.ServerPassword =
##[debug]GET http://somedomain:9000/api/server/version with 0-byte payload
##[debug]received 3-byte response of content type text/html;charset=utf-8
##[debug]/d:sonar.ts.lcov.reportpath="D:\agent2-TFS-Build01\_work\13\s\Source\AppName.WebApp\CodeCoverage\lcov\lcov.info" /d:sonar.ts.tslintconfigpath="D:\agent2-TFS-Build01\_work\13\s\Source\AppName.WebApp\tslint.json" /d:sonar.ts.tslintruledir="D:\agent2-TFS-Build01\_work\13\s\Source\AppName.WebApp\tslint-rules\" /d:sonar.ts.tslintpath="D:\agent2-TFS-Build01\_work\13\s\Source\AppName.WebApp\node_modules\tslint" /d:sonar.analysis.mode=issues /d:sonar.report.export.path=sonar-report.json
I have finally had some time to circle back around to figuring out this problem. It appears that the SonarQube VSTS task doesn't export the issues report if it encounters an error during the end analysis step. I went through the build logs and made sure to exclude any of the JS files that are generated by the TypeScript compiler. This helped get rid of the many "missing blame information" errors. Once I had those files excluded from analysis, I updated SonarTsPlugin to the latest version along with all the associated build variables, and PR integration started working again.
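For reference, excluding compiler-generated JS from analysis is a one-line scanner property; a sketch (the path pattern below is hypothetical; adjust it to wherever your TypeScript output lands), set either in the Prepare Analysis task's additional settings or in sonar-project.properties:

```properties
# skip generated JavaScript so it is neither analyzed nor blamed
sonar.exclusions=**/Scripts/generated/**/*.js
```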
I know this worked because I no longer see the RuntimeException: Could not find the SonarQube issue report error. Now I see this in the logs:
SonarQube found 9264 issues out of which 102 are new
Processing 102 new messages
102 message(s) were filtered because they do not belong to files that were changed in this PR
If you are experiencing a problem like this, I suggest looking at fixing any build errors you see in the logs for the end analysis step. Once I had them all resolved, it started working as intended.
I am trying to add a new independent pipeline to a running job, and it keeps failing. It also fails quite ungracefully, sitting in a "-" state for a long time before moving to "not started" and then failing with no errors reported.
I suspect that this may be impossible, but I cannot find confirmation anywhere.
Job id for the latest attempt is 2016-02-02_02_08_52-5813673121185917804 (it's still sitting at not started)
Update: It should now be possible to add a PubSub source when updating a pipeline.
Original Response:
Thanks for reporting this problem. We have identified an issue when updating a pipeline with additional PubSub sources.
For the time being we suggest either running a new (separate) pipeline with the additional PubSub source, or stopping and starting the job over with the additional sources installed from the beginning.
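For context, updating a running pipeline in place (as opposed to launching a separate one) is done with the update flag; a sketch for a Python pipeline (the script name and job name are hypothetical):

```shell
python streaming_pipeline.py \
  --runner DataflowRunner \
  --update \
  --job_name my-existing-job
```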
So currently I am trying to progress my JIRA workflow with the Jira Issue Updater plugin in Jenkins. Attached are my config screenshot and my workflow. However, I get this error when I execute a commit-triggered build:
JIRA Update Results Recorder
Unable to connect to REST service
java.io.IOException: Server returned HTTP response code: 400 for URL: http://*******:9055/rest/api/2/search?jqlFinished: SUCCESS
This does not have any effect on my Jira workflow.
Thanks for the help in advance and let me know if more information is needed.
Hadi
EDIT: I got a 404, meaning the JQL is incorrect, but when I try that URL in incognito mode I get an empty string. However, if I am logged in locally, I get all issues in XML format.
Workflow
Jenkins Jira Config
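On the 400/404: the failing URL in the log ends in "?jql" with no encoded query, which suggests the JQL parameter is empty or unescaped. As a sketch of what a well-formed search URL should look like (the host, project key, and JQL below are hypothetical), the query string must be URL-encoded:

```python
from urllib.parse import urlencode

base = "http://jira.example.com:9055/rest/api/2/search"
# Spaces, '=' and quotes in JQL must be percent-encoded, or the server returns 400.
url = base + "?" + urlencode({"jql": 'project = PROJ AND status = "Done"'})
```

Also note that anonymous requests may legally return zero issues (as seen in incognito mode), so the Jenkins job needs credentials configured as well.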
I ended up using the JIRA plugin instead.
I used the "Progress JIRA issues by workflow action" step after the build-successful phase.
Attaching a screenshot of the configuration.
I am still trying to figure out how to pull the issue number from the commit message for this action.
JIRA PLUGIN CONFIG
This is a follow-up question and answer in case anybody else gets stuck:
How can I get JIRA issue number from a commit message in Jenkins
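For pulling the issue number out of a commit message, Jira issue keys follow a predictable pattern (an uppercase project key, a dash, a number), so a small regex does it; a sketch assuming project keys are uppercase letters/digits:

```python
import re

# Jira issue keys look like "PROJ-123".
JIRA_KEY = re.compile(r'\b[A-Z][A-Z0-9]+-\d+\b')

def issue_keys(commit_message):
    """Return all Jira issue keys found in a commit message."""
    return JIRA_KEY.findall(commit_message)
```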