Can you access TFS 2017 build definitions through the database?

I had a few TFS 2017 build definitions created at one point, and through some messing around with the install and databases, I have lost access to those definitions. They don't show up anywhere in the web portal. I also can't restore and attach the project collection that was associated with them, though I do have an actual SQL backup of the project collection database.
So, is there a way to query either the Tfs_Configuration, Tfs_Warehouse, or the Tfs_projectcollection databases to retrieve build definition information?
If not, I will end up recreating them, but just curious if there was some other way.

I'd recommend recreating them if the definitions are simple ones; that won't take much time.
But if they are complicated, you can restore the backup database to another SQL instance, query out the missing build definitions, and then insert them into the current [Tfs_DefaultCollection].[Build].[tbl_Definition] table. (Make sure to back up the databases before doing this in case you run into any problems.)
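For example, assuming the backup is restored under an illustrative name such as Tfs_DefaultCollection_Restored (adjust to whatever you call it), the missing rows can be pulled out with a query along these lines:

SELECT *
FROM [Tfs_DefaultCollection_Restored].[Build].[tbl_Definition]
-- Optionally narrow the result down by name; the exact column names can vary
-- between TFS schema versions, so check the table's columns first, e.g.:
-- WHERE Name = 'MyBuildDefinition'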
Just select a row from the queried definitions and copy it (right-click > Copy in SQL Server Management Studio); you will then need to manually wrap each value in single quotes (' ').
Then insert the values, something like this:
insert into [Tfs_DefaultCollection].[Build].[tbl_Definition]
values
('1', '85', '17', '1', 'D4', '1', '7', '0', '5', '8', '1', '', '$(date:yyyyMMdd)$(rev:.r)', '1', '60', '', '1F739003-44A9-4AB6-B1A2-D3CD2A291588', '2017-04-21 19:53:49.733', '', '[{"enabled":false,"definition":{"id":"7c555368-ca64-4199-add6-9ebaf0b0137d"},"inputs":{"multipliers":"[]","parallel":"false","continueOnError":"true","additionalFields":"{}"}},{"enabled":false,"definition":{"id":"a9db38f9-9fdc-478c-b0f9-464221e58316"},"inputs":{"workItemType":"106","assignToRequestor":"true","additionalFields":"{}"}},{"enabled":false,"definition":{"id":"57578776-4c22-4526-aeb0-86b6da17ee9c"},"inputs":{"additionalFields":"{}"}}]', '{"properties":{"labelSources":"0","tfvcMapping":"{\"mappings\":[{\"serverPath\":\"$/6553c041-5e50-4ace-bec2-c1dba2b812ca\",\"mappingType\":\"map\",\"localPath\":\"\\\\\"},{\"serverPath\":\"$/6553c041-5e50-4ace-bec2-c1dba2b812ca/Drops\",\"mappingType\":\"cloak\",\"localPath\":\"\\\\\"}]}","cleanOptions":"0"},"id":"$/","type":"TfsVersionControl","name":"6553c041-5e50-4ace-bec2-c1dba2b812ca","url":"http://win-kev0061habi:8080/tfs/DefaultCollection/","defaultBranch":"$/6553c041-5e50-4ace-bec2-c1dba2b812ca","rootFolder":"$/6553c041-5e50-4ace-bec2-c1dba2b812ca","clean":"false","checkoutSubmodules":false}', '', '[{"enabled":true,"continueOnError":false,"alwaysRun":false,"displayName":"NuGet restore **\\*.sln","timeoutInMinutes":0,"task":{"id":"333b11bd-d341-40d9-afcf-b32d5ce6f23b","versionSpec":"0.*","definitionType":"task"},"inputs":{"solution":"**\\*.sln","nugetConfigPath":"","restoreMode":"restore","noCache":"false","nuGetRestoreArgs":"","verbosity":"-","nuGetVersion":"3.3.0","nuGetPath":""}},{"enabled":true,"continueOnError":false,"alwaysRun":false,"displayName":"Build solution **\\*.sln","timeoutInMinutes":0,"task":{"id":"71a9a2d3-a98a-4caa-96ab-affca411ecda","versionSpec":"1.*","definitionType":"task"},"inputs":{"solution":"**\\*.sln","msbuildArgs":"","platform":"$(BuildPlatform)","configuration":"$(BuildConfiguration)","clean":"false","vsVersion":"15.0","maximumCpuCount":"false","restoreNugetPackages":"false","msbuildArchitecture":"x86","logProjectEvents":"true","createLogFile":"false"}},{"enabled":true,"continueOnError":false,"alwaysRun":false,"displayName":"Test Assemblies **\\$(BuildConfiguration)\\*test*.dll;-:**\\obj\\**","timeoutInMinutes":0,"task":{"id":"ef087383-ee5e-42c7-9a53-ab56c98420f9","versionSpec":"1.*","definitionType":"task"},"inputs":{"testAssembly":"**\\$(BuildConfiguration)\\*test*.dll;-:**\\obj\\**","testFiltercriteria":"","runSettingsFile":"","overrideTestrunParameters":"","codeCoverageEnabled":"false","runInParallel":"false","vstestLocationMethod":"version","vsTestVersion":"14.0","vstestLocation":"","pathtoCustomTestAdapters":"","otherConsoleOptions":"","testRunTitle":"","platform":"$(BuildPlatform)","configuration":"$(BuildConfiguration)","publishRunAttachments":"true"}},{"enabled":true,"continueOnError":true,"alwaysRun":false,"displayName":"Publish symbols path: ","timeoutInMinutes":0,"task":{"id":"0675668a-7bba-4ccb-901d-5ad6554ca653","versionSpec":"1.*","definitionType":"task"},"inputs":{"SymbolsPath":"","SearchPattern":"**\\bin\\**\\*.pdb","SymbolsFolder":"","SkipIndexing":"false","TreatNotIndexedAsWarning":"false","SymbolsMaximumWaitTime":"","SymbolsProduct":"","SymbolsVersion":"","SymbolsArtifactName":"Symbols_$(BuildConfiguration)"}},{"enabled":true,"continueOnError":false,"alwaysRun":true,"displayName":"Copy Files to: 
$(build.artifactstagingdirectory)","timeoutInMinutes":0,"task":{"id":"5bfb729a-a7c8-4a78-a7c3-8d717bb7c13c","versionSpec":"2.*","definitionType":"task"},"inputs":{"SourceFolder":"$(build.sourcesdirectory)","Contents":"**\\bin\\$(BuildConfiguration)\\**","TargetFolder":"$(build.artifactstagingdirectory)","CleanTargetFolder":"false","OverWrite":"false","flattenFolders":"false"}},{"enabled":true,"continueOnError":false,"alwaysRun":true,"displayName":"Publish Artifact: drop","timeoutInMinutes":0,"task":{"id":"2ff763a7-ce83-4e1f-bc89-0ae63477cebe","versionSpec":"1.*","definitionType":"task"},"inputs":{"PathtoPublish":"$(build.artifactstagingdirectory)","ArtifactName":"drop","ArtifactType":"Container","TargetPath":"\\\\my\\share\\$(Build.DefinitionName)\\$(Build.BuildNumber)"}}]', '{"system.debug":{"value":"false","allowOverride":true},"BuildConfiguration":{"value":"release","allowOverride":true},"BuildPlatform":{"value":"any cpu","allowOverride":true}}', '', '[{"branches":["+refs/heads/*"],"artifacts":[],"artifactTypesToDelete":["FilePath","SymbolStore"],"daysToKeep":10,"minimumToKeep":1,"deleteBuildRecord":true,"deleteTestResults":true}]', '0', '0', '1', '')
If you have lots of definitions to migrate, you can also try merging the databases; refer to the link below for details:
http://byalexblog.net/merge-sql-databases

Related

Where does Postman store current value of environment variables?

I've been looking around my machine to see where the postman environment variables are stored. I've looked under AppData\Local\Postman, and C:\Users\username\Postman folders, and haven't found a config file that has a last modified date matching my change of environment variables.
I know I can export the environment variables, but I want to search over the current variables. And the exports don't include the current values, unless they replace the initial value, which I want to keep.
There are still ways to get around this, but I want to write a simple command to fetch some current environment variables via cmd, e.g. using grep.
So is there a way to check for the current environment variables? Where are they stored?
I don't think this will be a successful attempt. Postman seems to use a database, e.g. leveldb, according to information I found here. That will be stored as a binary file on your disk.
You can, however, have a look into the DB by going to View => Developer => Show DevTools and then navigating to Storage => IndexedDB => variable_sessions => workspace. I can find the current value of an environment variable that way, but I don't see a way to search it other than by keys, which are UUIDs rather than variable names or values.
All in all, exporting your environments into a text file might be the easiest option.

Does apoc.import use merge or create to add new data?

CALL apoc.import.csv(
[{fileName: 'file:/persons.csv', labels: ['Person']}],
[{fileName: 'file:/knows.csv', type: 'KNOWS'}],
{delimiter: '|', arrayDelimiter: ',', stringIds: false}
)
For this example, does the import internally use MERGE or CREATE to add nodes, relationships and properties? From my testing, it seems to use CREATE to add new rows even for a record with an ID it has seen before. Is there a way to control this? And when should apoc.load be used versus apoc.import? It seems apoc.load is a lot more flexible, since users can choose exactly which Cypher commands to run for their purposes. Right?
From the source of CsvEntityLoader (which seems to be doing the work under the covers), nodes are blindly created rather than being merged.
While there's an ignoreDuplicateNodes configuration property you can set, it just ignores IDs duplicated within the incoming CSV (i.e. it's not de-duplicating the incoming records against your existing graph). You could protect yourself from creating duplicate nodes by creating an appropriate unique constraint on any uniquely-identifying properties, which would at least prevent you accidentally running the same import twice.
Personally I'd only use apoc.import.csv to do a one-off bulk load of data into a fresh graph (or to load a dump from another graph that was exported as a CSV by something like apoc.export.csv.*). And even then, you've got the batch import tool that'll do that job with higher performance for large datasets.
I tend to use either the built-in LOAD CSV command or apoc.load.csv for most things, as you can control exactly what you do with each record coming in from the file (such as performing a MERGE rather than a CREATE).
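As a rough sketch of that approach (the file name, column names and uniqueness property below are placeholders, not taken from the question):

// Guard against duplicates and speed up the MERGE lookup
CREATE CONSTRAINT ON (p:Person) ASSERT p.id IS UNIQUE;

// Placeholder file and columns; '|' delimiter to match the question's config
LOAD CSV WITH HEADERS FROM 'file:///persons.csv' AS row FIELDTERMINATOR '|'
MERGE (p:Person {id: row.id})
ON CREATE SET p.name = row.name;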
As indicated by @Pablissimo's answer, the ignoreDuplicateNodes config option (when explicitly set to true) does not actually check for duplicates in the DB; it only checks within the file. A request to address this gap was raised before, but nothing has been done about it yet. So, if this is a concern for your use case, you should not use apoc.import.csv.
The rest of this answer applies iff your files never specify nodes that already exist in your DB.
If your node CSV file follows the neo4j-admin import command's import file header format and has a header that specifies the :ID field for the column containing the node's unique ID, then the apoc.import.csv procedure should, by default, fail when it encounters duplicate node IDs (within the same file). That is because the procedure's ignoreDuplicateNodes config value defaults to false (you can specify true to skip duplicate IDs instead of failing).
However, since your node imports are not failing but are generating duplicate nodes, that implies your node CSV file does not specify the :ID field as appropriate. To fix this, you need to add the :ID field and call the procedure with the config option ignoreDuplicateNodes:true. Or, you can modify those CSV files somehow to remove duplicate rows.
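For illustration, a minimal node file following that header format might look like this (the column names are placeholders, with '|' as the delimiter to match the question's config):

id:ID|name
1|Alice
2|Bob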

XPO Import - Relation is incomplete due to missing fields

I have been setting up builds for quite some time now. To do this, I use the scripts Microsoft provided for AX 2012 (Build and deploy scripts for Microsoft Dynamics AX 2012)
There were some tweaks to be done in the scripts to get TFS working the way it should, and it also involved some extra actions because we have code in the startupPost (e.g. precompilation with a message window instead of the compiler output form, due to a modification in the sysSetupFormRun class).
But what is haunting me for some weeks now is the XPO import. The provided script uses the latest CombineXPO tool to combine all of the XPO files that were fetched from TFS into one big XPO. Once that is done, the XPO is imported in Ax.
And the real problem here is that I do not trust the XPO import, because we have frequently been seeing huge numbers of errors like:
Compiler ERROR: \Data Dictionary\Tables\EPSICParameters\EPSICParameters : Relation Currency is incomplete due to missing fields
And indeed the fields aren't there in Ax, but when I look in the XPO that was supposed to be imported, the relation fields are present, which indicates that the sources were fetched fine from TFS:
REFERENCE #Currency
  PROPERTIES
    Name                    #Currency
    Table                   #Currency
    RelatedTableCardinality #ZeroOne
    Cardinality             #ZeroMore
    RelationshipType        #Association
    UseDefaultRoleNames     #Yes
  ENDPROPERTIES
  FIELDREFERENCES
    REFERENCETYPE PKFK
    PROPERTIES
      Field        #CurrencyCode
      RelatedField #CurrencyCode
      SourceEDT    #CurrencyCode
    ENDPROPERTIES
  ENDFIELDREFERENCES
ENDREFERENCE
Can anyone help me out here? This is really blocking our automated builds with Ax, because we simply cannot tell whether the next build is going to run fine.
I had this error as well. I believe the root cause is the relation being auto-created when you drag and drop an EDT onto a table to create a field, and a subsequent rename of that field breaking the table relation. However, the EDT relation will still work on the field, and the front end/GUI will not break. For example, dragging the HcmApprover EDT onto a table will prompt you to ask whether you want to add the foreign key relation from the EDT to the current table. If you say yes and then rename the field from HcmApprover to something else, the table relation will break. However, the front end will appear to work correctly (you will likely still be able to see a working dropdown of hired workers from the HCM module).
I'm not positive, but I think the GUI still works because the EDT relation on the field itself keeps the front end operating correctly.
Either way, if you drag and drop EDTs (this goes for more than just EDTs) to create fields and do any renaming, make sure that the appropriate auto/framework-generated "stuff" is also renamed manually (i.e. by you).
Try doing the import twice, ignore any errors from the first run.

Unable to Generate Script for 3 Views in SQL server management studio 2008

I have a strange problem
When I generate an object script (a script to drop and create stored procedures, views, and functions) from SQL Server 2008, it misses 3 views, and I don't know why.
I am performing the following steps to create the object script:
1) Open SQL Server 2008 Management Studio.
2) Connect to the server.
3) Right-click the selected database, then click Tasks -> Generate Scripts, select the database from the list, and click Next.
4) On the options page, change three options: Include If Not Exists = true, Script Drop = true, Script Use Database = false, then click Next.
5) Select the stored procedures, views and functions and click Next.
6) Click Select All on all the following screens.
7) Finally, click Finish.
Is there any limitation, special condition, or convention that I am not following that causes these views not to be included in the generated script?
Please let me know if I am missing something; I have tried many approaches.
I also found that this problem exists not only with views, but also with functions and stored procedures.
If we rename them it works fine. For example, a function named dbo.SeperateElementsInt was working fine but, strangely, Generate Script ignored it; after we renamed it to dbo.SeperateElementsInteger it started being scripted.
We cannot change the view names, as they are used in many places.
The views giving problems are dbo.DivisionInfo and dbo.CustomerDivisonOfficeInfo.
The stored procedure giving problems is dbo.procsync_get_zVariable.
The problem exists with SSMS 2005 too.
Thanks
We didn't understand each other on the INFORMATION_SCHEMA/Profiler issue. I was suggesting turning Profiler on because SSMS does a SELECT on INFORMATION_SCHEMA with some WHERE clauses, and I suspect that the query itself cuts off your views. Once you have the query that SSMS executes to get the list of objects, you should be able to find out why it doesn't see some views.
Here are the scripts that SSMS executes when you select all views and start scripting. Check whether any of them fails to return the DivisionInfo view (I've created a DivisionInfo view in my database to reproduce your case). For a quick check, execute them one by one and read my comments after each query. Please note that you should actually capture the queries on your environment with Profiler, because they may differ from mine.
Before showing the screen to select views, procedures, etc., SSMS executes the following script to get the list of views:
exec sp_executesql N'SELECT
''Server[@Name='' + quotename(CAST(
serverproperty(N''Servername'')
AS sysname),'''''''') + '']'' + ''/Database[@Name='' + quotename(db_name(),'''''''') + '']'' + ''/View[@Name='' + quotename(v.name,'''''''') + '' and @Schema='' + quotename(SCHEMA_NAME(v.schema_id),'''''''') + '']'' AS [Urn],
v.name AS [Name],
SCHEMA_NAME(v.schema_id) AS [Schema]
FROM
sys.all_views AS v
WHERE
(v.type = @_msparam_0)and(CAST(
case
when v.is_ms_shipped = 1 then 1
when (
select
major_id
from
sys.extended_properties
where
major_id = v.object_id and
minor_id = 0 and
class = 1 and
name = N''microsoft_database_tools_support'')
is not null then 1
else 0
end
AS bit)=0)
ORDER BY
[Schema] ASC,[Name] ASC',N'@_msparam_0 nvarchar(4000)',@_msparam_0=N'V'
Is your view listed? You can add the condition WHERE v.name = 'DivisionInfo' to filter for it. If DivisionInfo is not listed, check which part of this query eliminates it from the result set.
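As a quick manual check (a simplified form of the captured query above, keeping only the filters that matter here), you can run:

SELECT v.name, SCHEMA_NAME(v.schema_id) AS [Schema], v.is_ms_shipped
FROM sys.all_views AS v
WHERE v.type = 'V'
  AND v.name = 'DivisionInfo';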
Once you select the objects to script and start scripting, SSMS creates a temp table, stores the objects in it, and executes scripts to find related objects.
It creates the temp table and inserts the DivisionInfo view into it:
CREATE TABLE #tempdep (objid int NOT NULL, objname sysname NOT NULL, objschema sysname NULL, objdb sysname NOT NULL, objtype smallint NOT NULL)
exec sp_executesql N'INSERT INTO #tempdep
SELECT
v.object_id AS [ID],
v.name AS [Name],
SCHEMA_NAME(v.schema_id) AS [Schema],
db_name(),
2
FROM
sys.all_views AS v
WHERE
(v.type = @_msparam_0)and(v.name=@_msparam_1 and SCHEMA_NAME(v.schema_id)=@_msparam_2)',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)',@_msparam_0=N'V',@_msparam_1=N'DivisionInfo',@_msparam_2=N'dbo'
Did this query insert anything into #tempdep? If not, check why. Once again, you have to use Profiler to capture the queries from your environment instead of using the queries I put here, because these come from my environment.
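If you are replaying the two statements above in your own query window, a quick way to verify is to query the temp table directly in the same session:

SELECT * FROM #tempdep WHERE objname = N'DivisionInfo';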
When you start profiling, there should be many inserts like the one above, and you need to find the one that relates to DivisionInfo. You can use the Find option to locate it, because you will see many queries in Profiler since you have a lot of other views; to make the Profiler log smaller, script only the views.
As you can see, the idea is to start profiling and then start scripting. Once scripting is finished, stop Profiler and check the scripts executed by SSMS; you should find why it doesn't see DivisionInfo. If there is no DivisionInfo in the Profiler log but you can still select it for scripting in the wizard, then take the captured scripts for DivisionInfo and for one view that scripts correctly and compare them, paying close attention to the differences in the queries SSMS uses to retrieve them.
In short, for some reason SSMS discards this view based on the data it extracts with these queries (captured from Profiler).
I just ran into this exact issue. We were trying to script out the schema of one database (call it Database_A) and many views wouldn't script out.
We'd decommissioned another database (call it Database_B), and all the views that wouldn't script (in Database_A) pointed to that database (Database_B), which was accessed through a linked server and was offline. Since all the connection strings were now pointing to the new server that Database_A had moved to, I brought Database_A on the old server online in read-only mode just long enough to script out the views, and it worked. Then I took the database offline again, and we had what we needed.
The script I threw together to find the linked server reference in the views was this:
use Database_B
go
select so.name, sc.text
from sysobjects so, syscomments sc
where so.id = sc.id
and sc.text like '%Database_A%'
That's what worked for me, I hope it works for you as well.
Take care,
Tom

Tfs custom work item value migration

I have a task to create reports about various work items from a Team Foundation Server 2010 instance. They are looking for more information than the query tools seem to expose, which is why I am not using the out-of-the-box reporting capabilities. The documentation on creating custom reports against TFS identifies the Tfs_Analysis cube and the Tfs_Warehouse database as the intended sources for reporting.
They have created a custom work item, "Deployment Requests", to track requests for code migrations. This work item has custom urgency levels (critical, medium, low).
According to Manually Process the Data Warehouse and Analysis Services Cube for Team Foundation Server, every two minutes my operational store (Tfs_DefaultCollection) should sync with Tfs_Warehouse, and every two hours the data is processed into the Tfs_Analysis cube. The basic work items correctly show up in my Tfs_Warehouse, except that not all of the data makes it over; in particular, the urgency isn't getting migrated.
As a concrete example, work item 19301 is a deployment request, and its urgency is visible in the native query tool in the web front-end.
I can find it in Tfs_DefaultCollection, where "Urgency" is mapped to Fld10176.
SELECT
Fld10176 AS Urgency
, *
FROM Tfs_DefaultCollection.dbo.WorkItemsAre
WHERE ID = 19301
trimmed results...
Urgency Not A Field Changed Date
1 - Critical - (Right Away) 58 2011-09-07 15:52:29.613
If I query the warehouse, I see the deployment request and the "standard" data (people, time, area, etc)
SELECT
DWI.System_WorkItemType
, DWI.Microsoft_VSTS_Common_Priority
, DWI.Microsoft_VSTS_Common_Severity
, *
FROM
Tfs_Warehouse.dbo.DimWorkItem DWI
WHERE
DWI.System_Id = 19301
Trimmed results
System_WorkItemType Microsoft_VSTS_Common_Priority Microsoft_VSTS_Common_Severity
Deployment Request NULL NULL
I am not the TFS admin (first exposure to TFS is at this new gig) and thus far, they've been rather ...unhelpful.
Is there a way to map that custom field over to an existing field in the Tfs_Warehouse? (Backfilling legacy values would be great, but fixing current/future data is all I need.)
Is there a different approach I should be using?
Did you mark the field as reportable? See http://msdn.microsoft.com/en-us/library/ee921481.aspx for more information about this topic.
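For reference, a field's reportability can be changed from the command line with witadmin changefield; the field reference name below is only an illustration, so check witadmin help changefield on your server for the exact options:

witadmin changefield /collection:http://SomeServer/tfs /n:MyCompany.Urgency /reportingtype:dimension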
Based on Ewald Hofman's link, I ran
C:\Program Files\Microsoft Visual Studio 10.0\VC>witadmin listfields /collection:http://SomeServer/tfs > \tmp\witadmin.txt
and discovered a host of things not configured
Reportable As: None
At this point, I punted the ticket to the TFS admins and indicated they needed to fix things. In particular, they need to examine these two fields:
Field: Application.Changes
Name: ApplicationChanges
Type: PlainText
Use: Project1, Project2
Indexed: False
Reportable As: None
or
Field: Microsoft.VSTS.Common.ApplicationChanges
Name: Application Changes
Type: Html
Use: Project1, Project2
Indexed: False
Reportable As: None
It will be a while before the TFS admins do anything, but I'm happy to accept Ewald's answer.
