I've got a class library project set up to do template editing, and I wanted to drop in another copy and customize it. However, I get
The type 'TfsBuild.Process' already contains a definition for '_contentLoaded'
plus 22 more of the same for the other underscore-prefixed names.
Also Type 'TfsBuild.Process' already defines a member called 'Process' with the same parameter types C:\Projects\MSBuild.Tasks\TechnicalDebtTaskLib\BuildProcessTemplate\obj\Release\CodeMetric.g.cs 62 16 BuildProcessTemplate
I've tried hand-editing different parts of the XAML to find which key might need to be more unique, but no luck.
How do I work on multiple build process templates in the same solution, or copy a process template in a way that makes it unique?
Summary:
<Activity mc:Ignorable="sap" x:Class="Tfs2008Template.Process" ... xmlns:this="clr-namespace:Tfs2008Template">
Two attributes must be changed so that they stay in sync; for example:
<Activity mc:Ignorable="sap" x:Class="Tfs2008Template2.Process" ... xmlns:this="clr-namespace:Tfs2008Template2">
For whatever reason, the two related attributes that need to be kept in sync (x:Class and xmlns:this) usually sit at the beginning and end of a very long list of attributes.
Full Article on my blog
I have a working set of JJB YAML files successfully creating jobs and folders.
I now want to make certain values I use inside those YAML files configurable, i.e. when running jenkins-jobs test|update -r jobfolder I want to set values for folder prefixes (so as not to damage existing production jobs), branch names, nodes, etc.
I don't want to use JJB's defaults approach for this, since I'm already using it for configuration elsewhere and it results in conflicts when used in projects and jobs together.
The ideal way of doing this that I can think of would be calling JJB like this:
jenkins-jobs test|update --define "folder-prefix=experimental/,node=test-node" -r jobfolder
Giving me variables I can use in the actual job definition files.
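For illustration only, this is roughly how I'd expect to consume such variables inside a job template (the job name and git URL below are made up):

# Hypothetical job template; {folder-prefix}, {node} and {branch-name} are the
# values I'd like to inject from the command line or a config file.
- job-template:
    name: "{folder-prefix}myproject-build"
    node: "{node}"
    scm:
      - git:
          url: "https://example.org/myproject.git"
          branches:
            - "{branch-name}"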
Since this option seemingly doesn't exist, I'm currently trying to provide files which contain those variables and somehow 'inject' them into my project.
Those are the approaches I can think of:
1 - having different configuration folders with YAML files inside, which I would use like this:
jenkins-jobs test -r experimental-config:jobfolder
jenkins-jobs test -r production-config:jobfolder
with experimental-config and production-config being folders containing additional files with the configuration I want to switch between.
But unfortunately I don't know how I would reference values defined in different YAML files. Is that even possible?
2 - using include files as described in the documentation
While that sounds promising, I didn't manage to actually make it work. I tried to turn the following 'configuration header' I'm already using:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: "experimental/"
    node: "test-node"
[Rest of the file making use of dynamic-config]
into something making use of the !include statement like this:
!include: dynamic-config.yaml.inc
[Rest of the file making use of stuff defined in dynamic-config.yaml.inc]
giving me a seemingly unrelated parser error:
yaml.parser.ParserError: expected '<document start>', but found '<block sequence start>'
in "/home/me/my/project.yml", line 11, column 1
so I tried this snippet, which looks more like the documentation's example, putting the !include inside an existing element:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    !include: dynamic-config.yaml.inc
which gave me a different error, but still an error:
yaml.scanner.ScannerError: while scanning a simple key
in "/home/me/my/project.yml", line 7, column 5
could not find expected ':'
in "/home/me/my/project.yml", line 8, column 5
In both cases it makes no difference whether or not the specified include file exists, which makes me doubt you can just 'include' a file like this at all.
What am I doing wrong here? Is there a more obvious / straightforward way to customize a jenkins-jobs run?
Update:
I somehow managed to use the !include tag for individual items now, like this:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: !include: job-configs/active/folder-prefix.inc
    branch-name: !include: job-configs/active/branch-name.inc
    node-name: !include: job-configs/active/node-name.inc
But I wasn't able to put the whole dynamic-config element (with the anchor) into an include file yet.
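To illustrate what I mean, each of those .inc files would hold nothing more than a single YAML scalar, e.g. a hypothetical job-configs/active/folder-prefix.inc:

# hypothetical job-configs/active/folder-prefix.inc - just a single YAML scalar
"experimental/"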
2nd update:
Looks like I'm trying something similar to the person in this question.
Can someone confirm that this is still a problem? What's the JJB way of handling this?
I have been given the task of creating a DXL script. The first problem is that I have never used DXL before, even though I have many years' experience with DOORS itself. I have been surfing the Net to seek guidance on my particular problem. I also have a few specimen DXL scripts for reference.
My new client requires that for each View of a given Module, of which there are many Views, new "reduced" Modules are to be produced reflecting each View.
By "reduced", I mean that these new Modules are to contain nothing that isn't actually needed for that View., i.e. Columns, Attributes etc. These new Modules will only have the single View.
So, the way forward as I see it, is to take copies of the single master Module, one for each View, rename those copies to reflect a given Master Module/Required View, select that required View in the given copy Module and then delete everything that is not needed by that View, i.e. available Columns, Attributes etc.
This would be simple if I had the required DXL knowledge, which I am endeavouring to pick up as fast as I can.
If at all possible, this script has to be generic and be able to work upon any of the master Module copies to produce the associated "reduced" Module reflecting a particular View.
The client aims to use the script periodically for View archiving (I know, that's the way they want it).
Clarification
Some clarification of what I believe is required, given the following text from my original question:
If at all possible, this script has to be generic and be able to work upon any of the master Module copies to produce the associated "reduced" Module reflecting a particular View.
So, say there are ten Views of the master Module. Outside of the DXL script, I would copy the master Module ten times, renaming each copy to reflect one of the ten Views. Unless you know differently, each of those ten copies will carry the same "Absolute Number" values as the master Module, so no problem there?
So, starting with the first of the copied Modules, each named to reflect the View it will eventually represent, its View would be set to the one of the ten available Views that matches its title.
The single generic DXL script would then be run against that first copy Module, the aim being to delete everything not actually needed for that view, i.e. Attributes, Columns etc. Would some kind of purging command be required in the script for any aforementioned deleted items?
The single generic DXL script would then delete ALL Views from that copy Module. The log that is produced when running the script also needs capturing, but I'm not sure whether this should be done from within the script, if that is possible, or as a separate manual task outside of the script.
The aforementioned process would then be repeated, using the same generic script, against the remaining nine copied Modules. The intention is to leave us with ten copy Modules, each one reflecting one of the ten possible Views, and each one containing only the Attributes, Columns etc. required for that View.
Creating a mirror of a module with this approach is not so easy IMO. Think e.g. about "Absolute Number". If the original module contains the numbers 15 (level 1), 2000 (level 2), 1 (level 1), you will have to create 2000 objects, purge 1997 of them and move the remaining ones to the correct places.
There is a "duplicate" tool at https://www.ibm.com/developerworks/community/forums/html/topic?id=43862118-113d-4eac-b3f1-21d3b73959d1 which tries to do this, but as stated there, this script is said not to work correctly in all situations.
So, I would rather use the approach "string clipCopy (Item i); string clipPaste(Folder folderRef)". It should be faster and less error-prone. But: all Out-Links will also be copied with this method; you will probably have to delete these after the copy, or else the link target module(s) will have lots of In-Links.
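A minimal sketch of that clipboard-based copy (the item and folder paths are placeholders, and error handling is reduced to a message box):

// Sketch only: copy one module item and paste it into a target folder.
Item masterMod = item("/My Project/Master Module")        // placeholder path
string err = clipCopy(masterMod)
if (!null err) ack(err)
err = clipPaste(folder("/My Project/View Archive"))       // placeholder path
if (!null err) ack(err)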
The problem is still not so easy to solve, as every view might have DXL columns that rely on some attribute or other, and it might contain DXL attributes which again might rely on something else. I doubt that there is a way to analyze DXL code "on the fly" and find out which columns may be deleted.
Perhaps a totally different approach would be feasible: open each view and create an export to Excel; this way you will get rid of any dynamic dependencies. Then re-import the Excel sheet into a new DOORS module. You will still have the "Absolute Number" problem, but perhaps you can make a deal that you will have a pseudo attribute "Original Absolute Number" and disregard the "new" "Absolute Number".
Quite a big task for a DXL beginner....
Update: On second thought, perhaps you might want to combine these approaches:
agree with your employer that you will use an alternative attribute for Absolute Number
use a loop like Russel suggested; when creating objects, remember that they might have to be created "below" or "after" their predecessor or sibling
for DXL attributes do not copy the DXL code but the actual current value of the object
for DXL columns create pseudo attributes _ and create a new view that uses these pseudo attributes instead of the original value
Copying the entire module, then deleting everything not in that view, seems worse than just copying the things you need from each particular view.
I would take the following as the outline of your program:
for view in main module do {
    for column in view do {
        Find attribute for each column and store (possibly in a skip list?)
        Store name of column
    }
    create new module
    create needed types / attributes in new module
    create new view in new module
    for object in main module {
        create object in new module
        for attribute in main module {
            check if attribute is in new module {
                copy info from old object to new
            }
        }
    }
}
Each of these "for X in Y" loops should be in the DXL reference manual in some form or another.
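As a very rough DXL skeleton for the outer loops (the module paths are placeholders, and object creation / hierarchy handling as well as the actual attribute copy are only hinted at in comments):

// Skeleton only - not a complete solution.
Module src = read("/My Project/Master Module", false)      // master module
Module dst = edit("/My Project/Reduced Module", false)     // module to fill

Object o
AttrDef ad
for o in src do {            // objects visible under the current view/filter
    // create the matching object in dst here (create / create below / create after)
    for ad in src do {       // attribute definitions of the source module
        if (ad.object) {     // only object-level attributes
            string aname = ad.name
            // if aname is one of the attributes needed by the view,
            // copy the value of o.aname into the newly created object
        }
    }
}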
If you need more help, let me know!
My Grails (2.4.2) app was created with a bunch of "default/standard" resource bundles:
myapp/
    grails-app/
        i18n/
            messages.properties
            messages_fr.properties
I would now like to create my own "custom" resource bundle, that is, define properties in a file outside of these standard messages*.properties files that myapp was created with.
According to the i18n documentation, all bundles need to be prefixed with messages and suffixed .properties. So I added two new props files, one for English and one for French:
myapp/
    grails-app/
        i18n/
            messages.properties
            messages_fr.properties
            messages_myapp.properties
            messages_myapp_fr.properties
For one, I'm not 100% sure I'm interpreting the docs correctly. So if anything about my 2 new props files jumps out at you as being incorrect, please start by letting me know!
Having said that, in all the examples from those docs, I don't see where you specify the bundle to use. All of the examples look like this:
<g:message code="fizz.buzz.foo" />
But what if I have a fizz.buzz.foo property defined in both messages_blah.properties and messages_bar.properties?
So I ask: How do I add my own custom resource bundles, and how do I properly refer to them from inside a GSP?
To answer your question, you have to understand what Grails (well, Spring really) is doing to accomplish this.
You are on the right path with the multiple files. What you have outlined there matches the documentation and will work.
However, under the covers they are all combined into a single bundle (per language), so there is no need to tell Grails/Spring which bundle to use.
Finally, what happens when the same key is defined multiple times? The first one matched wins. I seem to recall that the bundles are combined in file-name order, though you should be able to test this pretty quickly.
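As a quick way to see this in action (using the file names from your question and made-up values), define the same key in two of the bundles:

# messages.properties
fizz.buzz.foo=Value from the default bundle

# messages_myapp.properties
fizz.buzz.foo=Value from the custom bundle

Then the <g:message code="fizz.buzz.foo" /> tag from your example will render whichever value wins the merge, without you ever naming a bundle.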
Hope this helps, and best of luck!
When I open up a build definition I can see the arguments are split into sections with a number prefix e.g. 1. Basic, 2. Misc etc.
However, when I edit the XAML there is no indication as to where these categories are defined.
Can someone provide some guidance as to where they are located within the arguments list?
Here is a similar question, except the poster has asked about a different parameter, based on the build settings, which I believe is a different case from regular parameters:
Missing ProcessParameterMetadata in TFS DefaultTemplate.xaml: where is for e.g. Items to Build in the Required category
You can open the build template and edit the Metadata argument as shown in the picture below:
Then you can edit the category:
In the XAML there is no standalone category element. You can define the category in the Process.Metadata section:
<this:Process.Metadata>
    <mtbw:ProcessParameterMetadataCollection>
        <mtbw:ProcessParameterMetadata Category="#300 Advanced" Description="Enable MSBuild Multi-proc to build your solutions' projects in parallel, when possible, using all available processors on the build server." DisplayName="MSBuild Multi-Proc" ParameterName="MSBuildMultiProc" />
        ...
    </mtbw:ProcessParameterMetadataCollection>
</this:Process.Metadata>
This is how you add new categories, but I don't know where the standard categories are defined.
Hope that helped you.
Cheers
I have been trying to make a task in my TFS builds more generic, and one of the things I am trying to do is copy some files to different directories depending on the build that is using the task. I toyed with the idea of using properties, but I couldn't think of a way to do that cleanly, so I went with item metadata, as I've already done elsewhere in the same targets file I'm working on; only this time, I'd like to use properties inside the metadata.
Here's what I want to do:
<ItemGroup>
    <DestinationParent Include="$(DeploymentPath)">
        <DestinationParentPath>$(DeploymentPath)</DestinationParentPath>
    </DestinationParent>
</ItemGroup>
And later in the build, I tried to copy some files to the destination folder by referencing the item metadata:
<Copy SourceFiles="@(FilesToCopy)" DestinationFiles="@(FilesToCopy->'%(DestinationParentPath)\Destination\%(RecursiveDir)%(Filename)%(Extension)')" ContinueOnError="false" />
Unfortunately, after the build runs, my BuildLog shows the following:
Copying file from "$(BinariesRoot)\%(ConfigurationToBuild.FlavorToBuild)\<File being copied>" to "\Destination\<File being copied>".
%(DestinationParentPath) had expanded to an empty string, for whatever reason. Using %(DestinationParent.DestinationParentPath) produced an error, telling me that I should simply be using %(DestinationParentPath). $(DeploymentPath) is expanded to the correct string as expected in several other places in the build.
Another source of confusion is that using %(ConfigurationToBuild.FlavorToBuild) yielded the correct value, i.e. Test, as can be seen in the following:
EDIT: this is defined under the root node Project, whereas the ItemGroup with DestinationParentPath is defined under a Target node. Does this also make a difference?
<ItemGroup>
    <ConfigurationToBuild Include="Test|Any CPU">
        <FlavorToBuild>Test</FlavorToBuild>
        <PlatformToBuild>Any CPU</PlatformToBuild>
    </ConfigurationToBuild>
</ItemGroup>
It does not seem as though the Include attribute is relevant when you're only interested in the string in the item's metadata since I'm pretty sure "Test|Any CPU" does not reference any actual file.
So once again, why is %(DestinationParentPath) expanding to an empty string?
EDIT: I forgot to mention that I also tried hard-coding the actual path for DestinationParentPath, but this still resulted in %(DestinationParentPath) expanding to an empty string.
EDIT: this is defined under the root node Project, whereas the ItemGroup with DestinationParentPath is defined under a Target node. Does this also make a difference?
Yes, it makes a difference. The ability to define an ItemGroup inside a Target is new to MSBuild 3.5. Despite looking declarative, it's actually executed at runtime just as if you'd called the old-style CreateItem / CreateProperty tasks. That alone leads to potential issues: you need to consider when the containing target is (first) executed. Order of operations is not always obvious to the naked eye. It may be wise to make the target where you use %(DestinationParentPath) depend on the target where it's created, even if there's no "logical" dependency.
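A rough sketch of that dependency (the target names and Include pattern are made up). Note that, as far as I know, a transform over FilesToCopy only sees metadata defined on the FilesToCopy items themselves (which is also part of why it came out empty), so this sketch deliberately attaches the metadata to those items rather than to a separate DestinationParent item:

<!-- Sketch only: target names and the Include pattern are illustrative. -->
<Target Name="PrepareFilesToCopy">
  <ItemGroup>
    <FilesToCopy Include="$(BinariesRoot)\**\*">
      <DestinationParentPath>$(DeploymentPath)</DestinationParentPath>
    </FilesToCopy>
  </ItemGroup>
</Target>

<Target Name="CopyFilesToDestination" DependsOnTargets="PrepareFilesToCopy">
  <Copy SourceFiles="@(FilesToCopy)"
        DestinationFiles="@(FilesToCopy->'%(DestinationParentPath)\Destination\%(RecursiveDir)%(Filename)%(Extension)')"
        ContinueOnError="false" />
</Target>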
In addition, there are the age-old msbuild scoping quirks/bugs. Dynamically created properties & items are not visible to "sibling" tasks. Also, items updated in nested builds aren't always bubbled up.
Check out the workarounds in the links, you should be able to find something that works for you even if it's icky.