TFS 2013 Report error

We have a whole bunch of reports sitting in TFS 2010, and we decided to upgrade directly to TFS 2013.
The upgrade and configuration completed successfully.
However, the old reports are not working. I fixed the data source connection string and rebuilt the data warehouse successfully.
But I still get the error:
An error has occurred during report processing. (rsProcessingAborted)
Query execution failed for dataset 'IterationParam'. (rsErrorExecutingCommand)
The dimension '[Iteration]' was not found in the cube when the string, [Iteration].[Parent_ID].[XXX], was parsed.
I looked in the Analysis Services database and could not find the Iteration dimension.
Is something wrong? Please give me some advice.
Cheers,

Did you rebuild just the warehouse, or the cube as well? You need to do both, and in order: rebuild the warehouse first, then the cube.
I use the TFS Web Services to do this:
Fire up the web service page; for me it's: http://localhost:8080/tfs/TeamFoundation/Administration/v3.0/WarehouseControlService.asmx
Execute GetProcessingStatus, passing false for includeOnlineHostsOnly
Make sure that all the jobs are currently Idle
Execute ProcessWarehouse, leaving the args blank
Rerun GetProcessingStatus until all the warehouse jobs are idle again
Execute ProcessAnalysisDatabase, passing Full for processingType
Rerun GetProcessingStatus until all the jobs are idle again (a small polling loop for this is sketched below)
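If you end up rerunning GetProcessingStatus a lot, a small polling loop saves the clicking. This is only a sketch: the URL and the includeOnlineHostsOnly argument are taken from the steps above, and it assumes the .asmx HTTP GET binding is reachable from localhost (the default for local requests) and that the account running it may call the service; if Windows/NTLM authentication gets in the way, stick with the browser page.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WarehouseStatusPoller {
    // Endpoint from the answer above; adjust host/port for your instance.
    private static final String STATUS_URL =
            "http://localhost:8080/tfs/TeamFoundation/Administration/v3.0/"
            + "WarehouseControlService.asmx/GetProcessingStatus?includeOnlineHostsOnly=false";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(STATUS_URL)).GET().build();
        while (true) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // Print the raw status XML; keep polling until every job reports Idle.
            System.out.println(response.body());
            Thread.sleep(30_000); // poll every 30 seconds
        }
    }
}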


NullPointerException when retrieving job executions from Spring Cloud Data Flow 1.1.2 UI or API

I'm seeing a problem when I try to get a job execution list either in the UI 'Jobs' tab ('Error fetching data. Is the Data Flow server running?') or via the REST API (500 NullPointerException).
The error from the log is
java.lang.NullPointerException: null
at org.springframework.cloud.dataflow.server.service.impl.DefaultTaskJobService.getTaskJobExecution(DefaultTaskJobService.java:231) ~[spring-cloud-dataflow-server-core-1.1.2.RELEASE.jar!/:1.1.2.RELEASE]
which seems to be caused by the code:
taskExplorer.getTaskExecutionIdByJobExecutionId(jobExecution.getId())
Looking into this, it seems some of my jobs have not been associated with task IDs; that is, there is no entry in the task_task_batch table, and if I try to retrieve one of those jobs, or a list of all jobs, I get the NullPointerException.
Retrieving a job directly by ID that does have an association in the task_task_batch table works fine.
Investigating why this is happening with some of my job tasks, it seems to be because some of them use XML instead of Java config to configure the jobs.
(We have some pre-existing complex jobs that we are moving from XD to Spring Cloud Data Flow, and keeping the XML is initially the quickest way to do this.)
Otherwise these jobs are running fine, and logging job/step executions to the DB.
When using XML, it seems the taskBatchExecutionListener is not added to the job automatically, so the task/batch association never gets registered.
The code for this is in TaskBatchExecutionListenerBeanPostProcessor, and the cause may be that with XML config the job bean is of type JobParserJobFactoryBean rather than AbstractJob.
If I add the listener manually, i.e.
<listeners>
    <listener ref="taskBatchExecutionListener"/>
</listeners>
in the job XML, the problem is fixed.
I have a few questions:
1) Is this a bug in Spring Cloud Task, i.e. is it just not handling XML configuration correctly? If so, I can raise an issue for it.
2) Should Spring Cloud Data Flow handle this better? It seems a badly behaved task can effectively corrupt the data and block retrieval of lists that contain 'good' jobs too.
I had the same problem, and in my case it was solved by adding @EnableTask to the job configuration.
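For comparison, here is a minimal sketch of the Java-config side, assuming a Spring Cloud Task 1.x setup; the class name and the imported XML path are hypothetical. @EnableTask switches on Spring Cloud Task's auto-configuration, which (per the analysis above) is what normally registers the listener that records the task/batch association:

import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportResource;

@Configuration
@EnableTask
@EnableBatchProcessing
@ImportResource("classpath:jobs/my-job.xml") // hypothetical path to the XML job definition
public class JobConfiguration {
    // No beans needed here for the association itself; @EnableTask's
    // auto-configuration wires the batch listener when a job runs as a task.
}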
I remember getting this error as well, but in my case I had tried deleting a task execution via the Spring Cloud Data Flow server API (see https://docs.spring.io/spring-cloud-dataflow/docs/current-SNAPSHOT/reference/htmlsingle/#api-guide-resources-task-executions-delete).
When I noticed it is actually a no-op delete (it does nothing, as confirmed by the developers: https://github.com/spring-cloud/spring-cloud-dataflow/issues/1844), I tried manually deleting records from the database, and I missed some records.
There are some FK dependencies and my clean-up was not thorough. Once I started over (created a new DB schema and all), the issue went away.

TFS - Could not find the SonarQube issue report

I'm using TFS 2017 and SonarQube 6.2 (previously 5.6.2, then 5.6.4).
I'm trying to set up SonarQube integration for pull requests. On pull requests that don't appear to have any issues, the SonarQube analysis runs fine. It looks like it only fails when there are issues found and it tries to read sonar-report.json to post the issues to the pull request. I'm receiving the following error on builds that appear to find issues:
------------------------------------------------------------------------
EXECUTION SUCCESS
------------------------------------------------------------------------
Total time: 5:25.577s
Final Memory: 56M/600M
------------------------------------------------------------------------
The SonarQube Scanner has finished
Creating a summary markdown file...
Analysis results: http://somedomain:9000/dashboard/index/CP
Post-processing succeeded.
Fetching code analysis issues and posting them to the PR...
System.Management.Automation.RuntimeException: Could not find the SonarQube issue report at D:\agent2-TFS-Build01\_work\13\.sonarqube\out\.sonar\sonar-report.json. Unable to post issues to the PR. ---> System.IO.FileNotFoundException: Could not find the SonarQube issue report at D:\agent2-TFS-Build01\_work\13\.sonarqube\out\.sonar\sonar-report.json. Unable to post issues to the PR.
--- End of inner exception stack trace ---
at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke)
at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync)
at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings)
at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings)
at Microsoft.TeamFoundation.DistributedTask.Handlers.PowerShellHandler.Execute(ITaskContext context, CancellationToken cancellationToken, Int32 timeoutInMinutes)
at Microsoft.TeamFoundation.DistributedTask.Worker.JobRunner.RunTask(ITaskContext context, TaskWrapper task, CancellationTokenSource tokenSource)
As far as I can tell, there is no way to configure the location of this report. One thing I read was that if you aren't failing the build on quality gate failures, then you should also not enable the option to include the full analysis report in the build summary; there were no specifics on why this is the case. I currently have the option to include the report enabled. Could this be the cause of my problem? We are not failing builds on quality gate failures yet because I wanted to give devs some time to adjust to the changes. Does anyone know what's going on here? Here is a screenshot of my prepare analysis settings.
You can find a copy of the logs for the end analysis step here. I've redacted the domain and username; everything else in the log is untouched.
Edit 1/19:
Since I don't have "fail on quality gate failure" enabled, I am not getting an error message if the build fails the quality gate; I'm receiving the error I posted above about sonar-report.json being missing. Here is what I am seeing for pull requests with zero issues:
Fetching code analysis issues and posting them to the PR...
SonarQube found 0 issues out of which 0 are new
True
Processing 0 new messages
No new messages were posted
Uploading the legacy summary report. The new report is not uploaded if not enabled, if the SonarQube server version is 5.2 or lower or if the build was triggered by a pull request
The build was not set to fail if the associated quality gate fails.
This is why I think it only happens on builds that do have issues to post to the PR. When is the sonar-report.json it's looking for supposed to be written? I looked in the workspace on the build server and the file is definitely not there.
Edit 1/30/17:
Here is some additional info that may help figure out what is going on. Currently I have 26 projects that run an SQ analysis on both PR and CI builds. All CI builds are working as expected, and 24 of the 26 PR builds are working as expected. The only two builds that are failing are both web app projects: one is made up of C#, TypeScript, and JavaScript; the other is C#, VB, TypeScript, and JavaScript. All the projects that are succeeding are strictly C# applications. What I have noticed is that on all of the C# applications I see a log message similar to this:
INFO: Performing issue tracking
INFO: 1033/1033 components tracked
INFO: Export issues to D:\agent2-TFS-Build01\_work\35\.sonarqube\out\.sonar\sonar-report.json
INFO: ANALYSIS SUCCESSFUL
This Export line is missing from the failed builds. Here is a log snippet from one of the failed PR builds. I would expect the logs to be in roughly the same order, but the "Export issues to ..." section is missing. It also looks like it's uploading a full analysis report even though this is a PR build:
INFO: Analysis report generated in 2672ms, dir size=10 MB
INFO: Analysis reports compressed in 2750ms, zip size=4 MB
INFO: Analysis report uploaded in 583ms
INFO: ANALYSIS SUCCESSFUL, you can browse http://~:9000/dashboard/index/AppName
I'm getting tons of "Missing blame information for the following files: [long list of files]" errors during the analysis for the JS files that are generated by the TypeScript compiler. I don't know whether those errors have anything to do with why this is failing, but they are the only errors I see in the logs for that build step.
So now the problem to figure out is why the issues for the two web app projects are not being exported to sonar-report.json. As you can see in the following log messages, both projects are started as PR builds with the analysis mode set to issues and the export path set to sonar-report.json, but by the time the scan is done it has skipped the export step.
##[debug]Calling InvokeGetRestMethod "/api/server/version"
##[debug]Variable read: MSBuild.SonarQube.HostUrl = http://somedomain:9000/
##[debug]Variable read: MSBuild.SonarQube.ServerUsername = ********
##[debug]Variable read: MSBuild.SonarQube.ServerPassword =
##[debug]GET http://somedomain:9000/api/server/version with 0-byte payload
##[debug]received 3-byte response of content type text/html;charset=utf-8
##[debug]/d:sonar.ts.lcov.reportPath="C:\TFS-Build02-Agent2\_work\11\s\Source\AppName.Web\CodeCoverage\lcov\lcov.info" /d:sonar.ts.tslintconfigpath="C:\TFS-Build02-Agent2\_work\11\s\Source\AppName.Web\tslint.json" /d:sonar.ts.tslintruledir="C:\TFS-Build02-Agent2\_work\11\s\Source\AppName.Web\tslint-rules\" /d:sonar.ts.tslintpath="C:\TFS-Build02-Agent2\_work\11\s\Source\AppName.Web\node_modules\tslint\bin\tslint" /d:sonar.analysis.mode=issues /d:sonar.report.export.path=sonar-report.json
And the second one that is failing:
##[debug]Calling InvokeGetRestMethod "/api/server/version"
##[debug]Variable read: MSBuild.SonarQube.HostUrl = http://somedomain:9000/
##[debug]Variable read: MSBuild.SonarQube.ServerUsername = ********
##[debug]Variable read: MSBuild.SonarQube.ServerPassword =
##[debug]GET http://somedomain:9000/api/server/version with 0-byte payload
##[debug]received 3-byte response of content type text/html;charset=utf-8
##[debug]/d:sonar.ts.lcov.reportpath="D:\agent2-TFS-Build01\_work\13\s\Source\AppName.WebApp\CodeCoverage\lcov\lcov.info" /d:sonar.ts.tslintconfigpath="D:\agent2-TFS-Build01\_work\13\s\Source\AppName.WebApp\tslint.json" /d:sonar.ts.tslintruledir="D:\agent2-TFS-Build01\_work\13\s\Source\AppName.WebApp\tslint-rules\" /d:sonar.ts.tslintpath="D:\agent2-TFS-Build01\_work\13\s\Source\AppName.WebApp\node_modules\tslint" /d:sonar.analysis.mode=issues /d:sonar.report.export.path=sonar-report.json
I have finally had some time to circle back to this problem. It appears that the SonarQube VSTS task doesn't export the issues report if it encounters an error during the end analysis step. I went through the build logs and made sure to exclude the JS files generated by the TypeScript compiler, which got rid of the many "missing blame information" errors. Once those files were excluded from analysis, I updated SonarTsPlugin to the latest version, along with the associated build variables, and PR integration started working again.
I know this worked because I no longer see the RuntimeException: Could not find the SonarQube issue report error. Now I see this in the logs:
SonarQube found 9264 issues out of which 102 are new
Processing 102 new messages
102 message(s) were filtered because they do not belong to files that were changed in this PR
If you are experiencing a problem like this, I suggest you look at fixing any build errors you see in the logs for the end analysis step. Once I had them all resolved, it started working as intended.
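For anyone applying the same fix: the generated files can be excluded by passing an additional property to the begin/prepare step, using the same /d: convention visible in the debug logs above. sonar.exclusions is a standard SonarQube analysis property; the glob below is only a placeholder for wherever your compiled TypeScript output lands:
/d:sonar.exclusions="**/Scripts/Generated/**/*.js"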

Is it possible to update a Google Dataflow job with a new pipeline?

I am trying to add a new, independent pipeline to a running job, and it keeps failing. It also fails quite ungracefully: it sits in a "-" state for a long time before moving to "Not started" and then "Failed", with no errors reported.
I suspect that this may be impossible, but I cannot find confirmation anywhere.
The job ID for the latest attempt is 2016-02-02_02_08_52-5813673121185917804 (it's still sitting at "Not started").
Update: It should now be possible to add a PubSub source when updating a pipeline.
Original Response:
Thanks for reporting this problem. We have identified an issue when updating a pipeline with additional PubSub sources.
For the time being we suggest either running a new (separate) pipeline with the additional PubSub source, or stopping and starting the job over with the additional sources installed from the beginning.
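For anyone reading this after update support landed: the replacement pipeline is submitted under the same job name with the update option set, so the service swaps the running job in place. A minimal sketch, assuming the Dataflow Java SDK 1.x of that era; the project and job names are placeholders:

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner;

public class UpdateJob {
    public static void main(String[] args) {
        DataflowPipelineOptions options =
                PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
        options.setProject("my-project");       // placeholder project id
        options.setJobName("my-streaming-job"); // must match the running job's name
        options.setUpdate(true);                // ask the service to update in place
        options.setRunner(DataflowPipelineRunner.class);

        Pipeline p = Pipeline.create(options);
        // ... rebuild the original graph plus the new PubSub source/branch here ...
        p.run();
    }
}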

Dataflow job fails, but the step still shows successful

Our pipeline failed, but the graph in the Developers Console still shows it as successful. However, the results were not written to BigQuery, and the job log clearly shows that it failed.
Shouldn't the graph show it as failed too?
Another example:
This was a bug with the handling of BigQuery outputs that was fixed in a release of the Dataflow service. Thank you for your patience.

Restore TFS collection from mdf/ldf

My server running TFS Express crashed. I managed to mount the disk and extract the mdf/ldf files for my TFS collection. Here is what I did next:
Built a new machine (with the same name/IP address) and installed SQL Server Express and TFS Express.
From SQL Server Management Studio, attached the mdf/ldf files. I can now see TFS_MyCollection as a new database.
From TFS Administrative console, clicked on "Attach Collection."
However, the new database is not being listed.
I went through a bunch of links on the Internet. https://social.msdn.microsoft.com/Forums/en-US/d949edf3-1795-448a-a1cc-39555ce87b50/tfs-2010-installation-error describes a similar situation, and based on the suggestion there I attached the database. I also looked at https://msdn.microsoft.com/en-us/library/ms404869(VS.80).aspx, but that one talks about using backup/restore, which is not my case.
I must be missing some configuration step. Please advise. Regards.
You can't just attach a collection that was never detached.
You need to unconfigure your TFS instance (TfsConfig.exe setup /uninstall:ALL) and then restore all of the databases.
You will need to restore each collection database and the configuration DB; they form a set. Once you have all of the databases attached/restored, run the setup again and choose "configure application tier only".
https://msdn.microsoft.com/en-us/library/ms404869.aspx
You need to follow the documentation for moving hardware. Make sure that you follow each step.
Note: You should take backups!
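In outline, the recovery goes roughly like this (a sketch, assuming TFS Express with the databases already attached in SQL Server; the uninstall command is the one quoted above, the rest paraphrases the hardware-move documentation):
1) Run TfsConfig.exe setup /uninstall:ALL on the new machine to unconfigure the half-configured instance.
2) Restore or attach TFS_Configuration and every TFS_<CollectionName> database in SQL Server; they must be a matching set.
3) Re-run the TFS configuration wizard and choose "Application-Tier Only", pointing it at the restored configuration database.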
