I have been setting up builds for quite some time now. To do this, I use the scripts Microsoft provided for AX 2012 (Build and deploy scripts for Microsoft Dynamics AX 2012).
There were some tweaks to be done in the scripts to get TFS working the way it should, and it also involved some extra actions because we have code in the startupPost (e.g. precompilation with a message window instead of the compiler output form, due to a modification in the sysSetupFormRun class).
But what has been haunting me for some weeks now is the XPO import. The provided script uses the latest CombineXPO tool to combine all of the XPO files fetched from TFS into one big XPO. Once that is done, the XPO is imported into AX.
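For context, the combine step boils down to a single CombineXPOs.exe call, roughly like the sketch below. The paths are placeholders and the parameter names are assumptions from memory; check them against the usage output of your copy of the tool.
CombineXPOs.exe -XpoDir C:\Build\Workspace\xpo -CombinedXpoFile C:\Build\combined.xpo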
And the real problem here is that I do not trust the XPO import, because we have frequently been seeing huge numbers of errors like:
Compiler ERROR: \Data Dictionary\Tables\EPSICParameters\EPSICParameters : Relation Currency is incomplete due to missing fields
And indeed the fields aren't there in AX, but when I look in the XPO that was supposed to be imported, the relation fields are present, which indicates that the sources were fetched fine from TFS:
REFERENCE #Currency
  PROPERTIES
    Name                    #Currency
    Table                   #Currency
    RelatedTableCardinality #ZeroOne
    Cardinality             #ZeroMore
    RelationshipType        #Association
    UseDefaultRoleNames     #Yes
  ENDPROPERTIES
  FIELDREFERENCES
    REFERENCETYPE PKFK
    PROPERTIES
      Field        #CurrencyCode
      RelatedField #CurrencyCode
      SourceEDT    #CurrencyCode
    ENDPROPERTIES
  ENDFIELDREFERENCES
ENDREFERENCE
Can anyone help me out here? This thing is really blocking our automated builds with AX, because we simply cannot tell whether the next build is going to run fine :s
I had this error as well. I believe the root cause is the relation being auto-created when you drag and drop an EDT onto a table to create a field, and a subsequent rename of that field breaking the table relation. However, the EDT relation will still work on the field, so the front end/GUI will not break. For example, dragging the HcmApprover EDT onto a table will prompt you to ask whether you want to add the foreign-key relation from the EDT to the current table. If you say yes and then rename the field from HcmApprover to something else, the table relation will break; the front end, however, will appear to work correctly (you will likely still see a working dropdown to view hired workers from the HCM module).
I'm not positive, but I think the GUI still works because the EDT relations on the field itself keep the front end operating correctly.
Either way, if you drag and drop EDTs to create fields (this goes for more than just EDTs) and do any renaming, make sure the appropriate auto/framework-generated "stuff" is also renamed manually (i.e. by you).
Try doing the import twice, and ignore any errors from the first run.
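A minimal sketch of what that could look like in the build script, assuming the import is driven through the standard aotimport startup command (verify the exact syntax against the SysStartupCmdAOTImport handling in your environment; the path is a placeholder):
rem First pass: import everything; forward references may fail to compile.
ax32.exe -startupcmd=aotimport_C:\Build\combined.xpo
rem Second pass: re-import so objects created in the first pass now resolve.
ax32.exe -startupcmd=aotimport_C:\Build\combined.xpo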
I have been given the task of creating a DXL script. The first problem is that I have never used DXL before, even though I have many years' experience with DOORS itself. I have been surfing the Net for guidance on my particular problem, and I also have a few specimen DXL scripts for reference.
My new client requires that for each View of a given Module, of which there are many Views, new "reduced" Modules are to be produced reflecting each View.
By "reduced", I mean that these new Modules are to contain nothing that isn't actually needed for that View., i.e. Columns, Attributes etc. These new Modules will only have the single View.
So, the way forward as I see it is to take copies of the single master Module, one for each View, rename those copies to reflect a given Master Module/Required View, select the required View in each copy Module, and then delete everything that is not needed by that View, i.e. available Columns, Attributes etc.
This would be simple if I had the required DXL knowledge, which I am endeavouring to pick up as fast as I can.
If at all possible, this script has to be generic and be able to work upon any of the master Module copies to produce the associated "reduced" Module reflecting a particular View.
The client aims to use the script periodically for View archiving (I know, that's the way they want it).
Clarification
Some clarification of what I believe is required, given the following text from my original question:
If at all possible, this script has to be generic and be able to work upon any of the master Module copies to produce the associated "reduced" Module reflecting a particular View.
So, say there are ten Views of the master Module. Outside of the DXL script, I would copy the master Module ten times, renaming each copy to reflect each of the ten Views. Unless you know different, each of those ten copies will have the same "Absolute Number"s as the master Module, so no problem there?
So, starting with the first of the copied Modules, each named to reflect the View it will eventually represent: its View would be set, from the ten Views available to it, to the one that matches its title.
The single generic DXL script would then be run against that first copy Module, the aim being to delete everything not actually needed for that view, i.e. Attributes, Columns etc. Would some kind of purging command be required in the script for any aforementioned deleted items?
The single generic DXL script would then delete ALL Views from that copy Module. The log produced when running the script also needs capturing, but I'm not sure whether this should be done from within the script, if possible, or as a separate manual task outside of it.
The aforementioned (indented) process would then be repeated, using the same generic script, against the remaining nine copied Modules. The intention is to leave us with ten copy Modules, each reflecting one of the ten possible Views, and each containing only the Attributes, Columns etc. required for that View.
Creating a mirror of a module with this approach is not so easy IMO. Think e.g. about "Absolute Number": if the original module contains the numbers 15 (level 1), 2000 (level 2) and 1 (level 1), you will have to create 2000 objects, purge 1997 of them and move the rest to the correct place.
There is a "duplicate" tool at https://www.ibm.com/developerworks/community/forums/html/topic?id=43862118-113d-4eac-b3f1-21d3b73959d1 which tries to do this, but as stated there, this script is said not to work correctly in all situations.
So, I would rather use the approach "string clipCopy (Item i); string clipPaste(Folder folderRef)". It should be faster and less error prone. But: all Out-Links will also be copied with this method, so you will probably have to delete these after the copy, or else the link target module(s) will end up with lots of In-Links.
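A minimal sketch of that copy step, using exactly those signatures (the module and folder paths are placeholders for your own):
// Copy the master module and paste it into a target folder via the clipboard functions.
// Both calls return an error string, or null on success.
Item src = item "/Project/Master Module"
Folder dest = folder "/Project/View Archive"
string err = clipCopy(src)
if (!null err) ack err
err = clipPaste(dest)
if (!null err) ack err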
The problem is still not so easy to solve, as every view might have DXL columns that rely on some attribute or other, and it might contain DXL attributes which in turn might rely on something else. I doubt that there is a way to analyze DXL code "on the fly" and find out which columns may be deleted.
Perhaps a totally different approach would be feasible: open each view and create an export to Excel; this way you will get rid of any dynamic dependencies. Then re-import the Excel sheet into a new DOORS module. You will still have the "Absolute Number" problem, but perhaps you can make a deal that you will have a pseudo attribute "Original Absolute Number" and disregard the "new" "Absolute Number".
Quite a big task for a DXL beginner....
Update: On second thought, perhaps you might want to combine these approaches:
- agree with your employer that you will use an alternative attribute for Absolute Number
- use a loop like Russel suggested; when creating objects, remember that objects might have to be created "below" or "after" their predecessor or sibling
- for DXL attributes, do not copy the DXL code but the actual current value of the object (see the sketch after this list)
- for DXL columns, create pseudo attributes _ and create a new view that uses these pseudo attributes instead of the original value
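For the third point, a tiny sketch of copying the evaluated value instead of the DXL code (both attribute names are placeholders; the plain-text pseudo attribute is assumed to exist already on the module):
// Freeze each object's computed DXL attribute value into a plain attribute.
Module m = current
Object o
for o in entire m do {
    string v = o."MyDxlAttribute" ""    // evaluated value, cast to string
    o."MyDxlAttribute_Value" = v
}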
Copying the entire module, then deleting everything not in that view, seems worse than just copying the things you need from each particular view.
I would take the following as the outline of your program:
for view in main module do {
    for column in view do {
        find the attribute for the column and store it (possibly in a skip list?)
        store the name of the column
    }
    create new module
    create needed types / attributes in new module
    create new view in new module
    for object in main module {
        create object in new module
        for attribute in main module {
            check if attribute is in new module {
                copy info from old object to new
            }
        }
    }
}
Each of these "for X in Y" loops should be in the DXL reference manual in some form or another.
If you need more help, let me know!
I am trying to add a new "tab" to the Bug work item in TFS 2017. Looking at the existing "tabs", you see things like "Steps to Reproduce", "System", etc.
I have found information on changing work item types, but nothing about adding a new "tab" across the top where you see Steps to Repro, System, Test Cases, Tasks. The change I want to make may not be possible, or maybe I just don't know the correct verbiage to use when asking Google. The thing I want to change may not be a tab control at all; it may be something else entirely.
Thanks
***************** Updated questions after posting *****************************
After playing around with Process Editor -> WIT -> Open WIT from server -> Bug
as suggested by Andy Li-MSFT, I don't see a lot of control over the formatting on the tab. I was planning to add fields in a grid-like pattern, like the table shown below. I am able to get the values in the drop-down list for field1 and add the fields. However, I have a couple of follow-up questions if you have time.
When setting either the control or a column of the control to read-only, the column will not render when adding a new bug. I have a little more control if I set AllowedValues and Frozen for the column, but the value can still be changed. Is there a better way to set read-only?
There is not much control over the layout. I am OK with adding a lot of fields, but I would like them displayed in a table-like structure. Is there a way to control the look of the fields on the form?
Is there a way to add the fields in a grid? This would be ideal so I only have one header for each column.
Last updated by and last updated date: is it possible to track at the row level who made a change? If not, I would be OK with just adding a last-updated-by and a last-updated-date field to the new tab, but row-level tracking would be nice.
<pre>
Field 1                          Field 2 (Read-only)                                 Field 3               Last Updated By   Last Updated Date
Status (completed, empty, N/A)   "Some text here which describes something to do"   "Optional comments"   tfs user name     date/time
Status (completed, empty, N/A)   "Some text here which describes something to do"   "Optional comments"   tfs user name     date/time
</pre>
You need to modify the WIT definition file (Bug work item type in your scenario).
You can try the following ways to do that:
Export the WIT definition file with witadmin commands, add a new tab under <TabGroup> and add a new control for it, then save and import the file. See Import, export, and manage work item types for details.
e.g.:
<Tab Label="Tab0501">
  <Control FieldName="System.ChangedDate" Type="DateTimeControl" Label="Test0501:" LabelPosition="Left" />
</Tab>
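For reference, the witadmin export/import round-trip could look like this (the collection URL, project and file names are placeholders):
witadmin exportwitd /collection:http://YourServer:8080/tfs/DefaultCollection /p:YourProject /n:Bug /f:Bug.xml
witadmin importwitd /collection:http://YourServer:8080/tfs/DefaultCollection /p:YourProject /f:Bug.xml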
You can also use the TFS Power Tools to export/import WIT definition files or directly modify the files from the server:
Visual Studio 2015: Microsoft Visual Studio Team Foundation Server 2015 Power Tools
Visual Studio 2017: TFS Process Template Editor
Another way is to write an extension to extend the work item form; you can reference my answer in another thread to do that.
In Informatica mapping design there must be a target table, but in my design I only use Informatica to call stored procedures, and after they are called all the work is done, so I don't need a target table to be inserted or updated.
I used a non-existent table as the target table, and one nonsense field as the input port (since there must be at least one input port!), then unchecked the options (insert, update, delete) in the session configuration so that Informatica would not generate DML SQL statements, avoiding "no table" errors.
But then Informatica treats the input row as a rejected row and tries to write it into a bad file. And because I unchecked the insert option, the session log showed an error that the row couldn't be inserted into the bad file!
Strangely, this error never showed in the monitor, and all sessions ran successfully! It only appeared in Informatica's meta table.
Is there a better way to avoid this problem, even though it has no effect on my result? Is there a possibility to use a non-existent table and do nothing with it (including rejecting the input rows)?
Use a Filter transformation just before the target and set the filter condition to FALSE.
No rows will go to the target.
I ran into this same issue when I wanted to just execute a stored procedure and nothing else.
I solved this by creating a dummy source object that had one port and a dummy target with one port of the same datatype. In the Source Qualifier I added the SQL statement select 1 from dual (since it's Oracle).
I then added a Filter object whose condition was set to FALSE. Then I connected the single port from the source/qualifier through the filter and finally to the target.
When the mapping is run, the Source Qualifier will return one row with one value; this will pass through to the filter, but nothing will come out of the filter because its condition is FALSE. This mapping will always be successful and valid, because all ports are connected and nothing makes it to the "dummy" target, so no bad-file logs, failures, etc.
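For clarity, the Source Qualifier override described above is just the following (Oracle syntax; the stored procedure call itself is configured elsewhere in the mapping):
-- One dummy row to feed the always-false filter.
SELECT 1 AS DUMMY FROM dual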
Let me know if you need any clarification and I can update this answer.
No, you always need a target for the mapping to be valid. But I would rather work with a flat-file target instead of a database table; you'll have much less work to do.
If you're on Linux / Unix, you can even route the file to /dev/null (use folder:/dev/, file:null) so the file is not actually written to the filesystem.
And using one dummy port is the right way. As you have said, you need at least one port, even if you don't really use it.
As odd as this may sound (on Unix systems): neither the source nor the target needs to exist.
Source (flat file): /dev/null, column DUMMY
Target (flat file): /dev/null, column DUMMY
And you don't need to use any databases for the session to succeed, nor use any filters. It runs.
I have a task to create reports about various work items from a Team Foundation Server 2010 instance. They are looking for more information than the query tools seem to expose which is why I am not using the OOB reporting capabilities. The documentation on creating custom reports against TFS identify the Tfs_Analysis cube and the Tfs_Warehouse database as the intended sources for reporting.
They have created a custom work item, "Deployment Requests", to track requests for code migrations. This work item has custom urgency levels (critical, medium, low).
According to Manually Process the Data Warehouse and Analysis Services Cube for Team Foundation Server, every two minutes my ODS (Tfs_DefaultCollection) should sync with the Tfs_Warehouse, and every two hours it hits the Tfs_Analysis cube. The basic work items correctly show up in my Tfs_Warehouse, except that not all of the data makes it over; in particular, the urgency isn't getting migrated.
As a concrete example, work item 19301 was a deployment request. This is what they can see using the native query tool from the web front-end.
I can find it in the Tfs_DefaultCollection and the "Urgency" is mapped to Fld10176.
SELECT
Fld10176 AS Urgency
, *
FROM Tfs_DefaultCollection.dbo.WorkItemsAre
WHERE ID = 19301
trimmed results...
Urgency                       Not A Field   Changed Date
1 - Critical - (Right Away)   58            2011-09-07 15:52:29.613
If I query the warehouse, I see the deployment request and the "standard" data (people, time, area, etc.):
SELECT
DWI.System_WorkItemType
, DWI.Microsoft_VSTS_Common_Priority
, DWI.Microsoft_VSTS_Common_Severity
, *
FROM
Tfs_Warehouse.dbo.DimWorkItem DWI
WHERE
DWI.System_Id = 19301
Trimmed results
System_WorkItemType Microsoft_VSTS_Common_Priority Microsoft_VSTS_Common_Severity
Deployment Request NULL NULL
I am not the TFS admin (this is my first exposure to TFS, at this new gig) and thus far they've been rather... unhelpful.
Is there a way to map that custom field over to an existing field in the Tfs_Warehouse? (Backfilling legacy values would be great, but fixing current/future is all I need.)
Is there a different approach I should be using?
Did you mark the field as reportable? See http://msdn.microsoft.com/en-us/library/ee921481.aspx for more information about this topic.
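If it isn't, a sketch of the fix via witadmin (the reference name below is a placeholder for your custom Urgency field; dimension suits pick-list values, detail suits plain text):
witadmin changefield /collection:http://SomeServer/tfs /n:MyCompany.Urgency /reportingtype:dimension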
Based on Ewald Hofman's link, I ran
C:\Program Files\Microsoft Visual Studio 10.0\VC>witadmin listfields /collection:http://SomeServer/tfs > \tmp\witadmin.txt
and discovered a host of things not configured
Reportable As: None
At this point, I punted the ticket to the TFS admins and indicated they needed to fix things, in particular to examine these two fields:
Field: Application.Changes
Name: ApplicationChanges
Type: PlainText
Use: Project1, Project2
Indexed: False
Reportable As: None
or
Field: Microsoft.VSTS.Common.ApplicationChanges
Name: Application Changes
Type: Html
Use: Project1, Project2
Indexed: False
Reportable As: None
It will be a while before the TFS admins do anything, but I'm happy to accept Ewald's answer.
Our application is an MVC application, and we are using Entity Framework. When I update the model to add a table from the database, I receive an exception that says:
"An exception of type 'System.ArgumentException' occurred while attempting to update from the database. The exception message is: 'An entry with the same key already exists'."
I am not able to figure out what the problem is. How can I get past it?
I had it just as you did. You probably have two identical EntitySetMapping nodes. You should remove one, and everything will be OK.
I ran into this issue today. It means that you have two definitions of some sort with the same name. In my case, it was a duplicate EntitySetMapping. It happened as a result of me migrating some customizations from an old version of my model into a new version. I copied an EntitySetMapping with custom insert/delete/update mappings, but I didn't think to delete the mapping that had been previously auto-generated by the model designer.
Unfortunately, you won't know you have this problem until the next time you attempt to update from the database, meaning there's potential for this one to go undetected for some time.
In the future, when making significant changes to the model via the XML editor, I would recommend that you do a test database update just to make sure that all is well.
I recently ran into a similar issue with EF6 where this error was occurring and there was no duplicate key anywhere that was visible in the edmx. What I had to do was right-click in the edmx and select Model Browser. In the Model Browser view, under Model/Entity Types, some lingering entities were present. For some reason, deleting all the entities in the edmx didn't actually do what you would think. Removing these lingering entities in the Model Browser solved my problem. Hopefully this will solve some people's problems, because this type of issue is easy to fix but hard to find.
Probably another table with the same key exists. Can we see the code? Read more on this exception here.
Well, it seems I figured out a fix for my case.
Step 1: I deleted the table that contained the reference (or whatever it was that was causing the error) from the edmx model.
Step 2: I right-clicked the designer and clicked "Update Model from Database" again, and all was fixed.
FYI, the only way I knew which table to delete was that it was the last one modified since "Update Model from Database" last worked.
Taken From Here
This is a more detailed answer based on Tazos333's answer.
Finding the actual duplicate is the hardest part. Using PowerShell, it can be done with:
Get-Content .\models.edmx | Group-Object | Where-Object { $_.Count -gt 1 } | Select -ExpandProperty Name
In order to get as few false positives as possible, run this only against the <!-- C-S mapping content --> section (copy-pasted into another text file).
Besides duplicate scalar properties (quite normal, since tables might contain the same column names), the duplicate <EntitySetMapping Name="..."> will be very obvious.
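A sketch of scripting that scoping step instead of copy-pasting by hand (assumes the edmx contains the standard <!-- C-S mapping content --> comment marker):
# Scan only the section after the C-S mapping marker for duplicate lines.
$text = Get-Content .\models.edmx -Raw
$mapping = ($text -split '<!-- C-S mapping content -->')[1]
$mapping -split "`r?`n" | Group-Object | Where-Object { $_.Count -gt 1 } | Select-Object -ExpandProperty Name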