How do I open a previous version of a DOORS module? - ibm-doors

Problem
While running a long DXL script to generate a DOORS module, I accidentally overwrote an old version of a DOORS module with the same name that I meant to keep for comparison. By "overwrote" I mean the script generates a whole new document, then saves it under the same name.
Question
Can I open a previous iteration of a DOORS module?

What do you mean by "overwrote"? Did you delete and purge the old module? In that case, you will have to revert to a backup of the database.
Or did you just set new values on existing objects? In that case, if you created a baseline, just open it; you might want to use the "Smart History Viewer" (http://www.smartdxl.com/content/?p=418). If you overwrote attributes in the current version, it's more difficult: as far as I know, there is no ready-made script that reverts a module to its state from some x hours ago.
Perhaps you can write your own script that walks the history records of the module, like in "DXL DOORS Retrieve Redlines from Specific History Version" or in http://www-01.ibm.com/support/docview.wss?uid=swg21444153, and use that information to display and perhaps restore the old content.
It gets more complicated if your script moved, deleted, purged, or linked objects.
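If you do write such a script, a minimal sketch of walking a module's history in DXL might look like the following. This is an untested outline, not a working solution: it assumes the standard DXL history properties (h.type, h.attrName, h.oldValue, h.newValue, h.author) and only reports attribute modifications.

```dxl
// Sketch: print attribute changes recorded in the current module's history.
// Untested outline -- adapt to your DOORS version before relying on it.
Module m = current
Object o
History h
for o in entire m do {
    for h in o do {
        // Only report attribute modifications; other history types
        // (createObject, deleteObject, ...) would need their own handling.
        if (h.type == modifyObject) {
            print identifier(o) ": '" h.attrName "' changed from \"" h.oldValue "\" to \"" h.newValue "\" by " h.author "\n"
        }
    }
}
```

From this output you could decide which old values to restore by hand, or extend the loop to write them back.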

Related

Visual Studio schema compare extra parentheses

We have a database project we deploy using Visual Studio. The project includes a schema compare to help see the changes before they are published. It will sporadically highlight file differences such as extra parentheses, index fill factors, and extended properties. Sometimes if we clone the branch again and then run the compare, they go away.
In schema compare options...
General: We have ignore fill factor checked.
Object Types -- Application-scoped: Extended Properties is not checked.
What is worse, though, is that if you run the update it says it's completed, but no changes are made to the database. They are still there on the next compare, and clicking the Generate Script button says there is no script to generate. We are working in VS2019 and the DB server is SQL Server 2014. Any thoughts? Thanks.

TFS always has conflicts on project files for simple ADD changes that it refuses to auto merge like it does for code files

EVERY SINGLE TIME I unshelve a changeset with add/rename/move/remove changes, or have pending changes with such changes, I have to manually merge the parent project file, which receives a lot of activity (the majority of the codebase is nested under the one project, so its project file changes very frequently).
This is a multiple-times-per-day frustration, and it seems that TFS should be able to do this for me since the changes are simple (e.g. remove the line for the deleted file in the latest version and add the line for the new file in the local version). I conclude this because TFS automerges code files intelligently in exactly this way.
So why does automerge behave differently with project files than with code files?
For example, developer A creates a changeset adding a file to the project and shelves it for review. Then another developer B checks-in changes to the project that also adds a file (unrelated), so when I go to unshelve developer A's changes I have to resolve conflicts on the project file.
Also, if a group of files are moved/renamed/added and I want to unshelve with only a single affecting change, it is much easier to just take the server version of the project file and manually reapply the single change (e.g. add existing file) instead of merging a dozen spread out changes across thousands of lines. (And god forbid you have a rename change and took server version of the project file, because then you need to manually edit it with a text editor to rename otherwise you'll get a sequence of errors due to the renamed file already existing on disk when trying to rename from solution explorer).
Update: I'm escalating this, because now it's screwing us over.
We're using VS2014, and our codebase mostly lives under one giant database project (*.sqlproj).
Most likely, the different files get added on the same line of the project file. Since project files are auto-generated, unlike regular code it is hard to tell on which line VS will insert a new file item when someone adds it from the front end. If two people add new items to the project at the same time, it is very likely that both versions of the project file have the new items written on the same line, which causes the conflict. Sometimes (e.g. if you add a reference) it's more than one line per item, which makes merging even worse.
I would suggest that each developer who changes the project file checks it out with a lock (exclusively), so there is only one change at a time. Also, make sure other developers get latest before making their changes (there is a setting in Visual Studio under Tools > Options > Source Control > Visual Studio Team Foundation Server). This will let auto-merge work.
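From the command line, that exclusive-lock workflow could look roughly like this (the .sqlproj name is just an example):

```shell
# Check out the project file with an exclusive checkout lock,
# so nobody else can check it out until you check in.
tf checkout /lock:checkout MyProject.sqlproj

# ... add/remove items in Visual Studio ...

# Check in, which releases the lock.
tf checkin MyProject.sqlproj /comment:"Added new project items"
```

The same lock can be taken from the Visual Studio UI via the Check Out dialog's lock type dropdown.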

Understanding Label limitations

A blog states
Labels come with a big warning though - labels themselves are not
version controlled, meaning that there is no way to track or audit the
history of activity on a label. Plus, labels don't keep copies of
versions in case of a file deletion, so if a file gets deleted, any
label that relies on a version of that file is essentially hosed.
This is not the only place I've read similar information regarding TFS labels. The lack of history is clear enough. The second part, "labels don't keep copies of versions...", is unclear. In fact, I created a test project > labeled it > deleted a file > performed a Get by Label, and the file came back. So what is this referring to? Has the label functionality in TFS changed recently?
I realize that a file deletion does not actually remove history; is that the cause? In other words, if I run
tf destroy "$/MyTeamProject/Project/FileName.cs"
Is that what it means to delete a file? If so, that seems an extraordinary circumstance to even consider. I mean, it's an intentional, unrecoverable deletion of history. Changesets are not going to be any improvement over labels in such a case.
When we apply a label, we do so to a version of source control at a point in time. Intuitively, because we created a snapshot of source control at a point in time, one may assume the label always represents the source code at that point in time.
This is incorrect. Labels can be edited after creation.
Conceptually, a label defines a product and the product’s bug-fixes (source). A real world example may help. Let’s say we have a product called AlphaBoogerBear. AlphaBoogerBear is a product, not a version (think pre-release Windows names). AlphaBoogerBear can be made into a Label, AlphaBoogerBearLabel. We perform a release of AlphaBoogerBear. There are some bugs. We fix them.
Now, we go back and edit AlphaBoogerBearLabel to include the bugfixes. The label no longer represents a snapshot at a point in time. Instead, it represents the most stable release of AlphaBoogerBear.
Finally, we move to BetaBoogerBear. We have the option to go back and grab a label that represents the old product at its best version in time.
In my opinion, if one requires a true snapshot of source control, it's better to branch. If one requires an editable snapshot that represents a product release, then a label is useful. It is, admittedly, a difficult balance of trust and convenience.
As far as the author's intentions, I really can't say for sure. He could mean that items can be deleted from a label, and thus when you Get by Label the item will be gone. The item is still stored in TFS history, though, so although it is a confusing situation, not all is lost.
I'm not sure what is meant by the sentence about labels getting affected by file deletions. But you have it right, a regular file delete won't affect labels, but a destroy will.
What it's cautioning you about with respect to not being version controlled though, is that somebody can come and edit a label, by including or excluding files from the label, or changing the versions of files included in the label. And there will be no history of these changes to the label definition.
As I understand it, a label in TFS is basically a set/collection of changesets.
Let's say you label a directory with two files in it. The label would then consist of three changeset references: one for the directory and one for each file. Deleting one of these files in TFS produces a new changeset for the directory, so doing a Get by Label at this stage would still get the deleted file "back", since the label points at the changeset prior to the deletion. Destroying a file, however, removes it from any changeset records it appeared in, thus also destroying the information in the label.
Since a label is identified by its name only, it is also very easy to overwrite it with a new label, destroying the old information. The /child parameter to the tf label command can change this behavior somewhat: /child:merge keeps the changesets that were previously recorded along with the new one, while /child:replace exchanges the old changeset for the new. In the example above, neither alternative would make any difference, since Get by Label would still retrieve the highest changeset.
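For reference, the behavior described here maps onto the tf label command roughly as follows (server path and label name are made up):

```shell
# Apply a label to a folder and everything under it
tf label MyLabel $/MyTeamProject/Project /recursive

# Re-apply the label later, keeping previously recorded item versions
# alongside the newly labeled ones
tf label MyLabel $/MyTeamProject/Project /recursive /child:merge

# Re-apply the label, replacing previously recorded versions
tf label MyLabel $/MyTeamProject/Project /recursive /child:replace
```

Running the first command a second time against changed content is what silently overwrites the old label information.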

jenkins version conflict with findbugs

I am getting the following every time I start Jenkins. I also can't get the Hudson FindBugs graph, even though I activated it.
Manage Old Data
When there are changes in how data is stored on disk, Jenkins uses the following strategy: data is migrated to the new structure when it is loaded, but the file is not resaved in the new format. This allows for downgrading Jenkins if needed. However, it can also leave data on disk in the old format indefinitely. The table below lists files containing such data, and the Jenkins version(s) where the data structure was changed.
Sometimes errors occur while reading data (if a plugin adds some data and that plugin is later disabled, if migration code is not written for structure changes, or if Jenkins is downgraded after it has already written data not readable by the older version). These errors are logged, but the unreadable data is then skipped over, allowing Jenkins to startup and function properly.
Type Name Version
The form below may be used to resave these files in the current format. Doing so means a downgrade to a Jenkins release older than the selected version will not be able to read the data stored in the new format. Note that simply using Jenkins to create and configure jobs and run builds can save data that may not be readable by older Jenkins releases, even when this form is not used. Also if any unreadable data errors are reported in the right side of the table above, note that this data will be lost when the file is resaved.
Eventually the code supporting these data migrations may be removed. Compatibility will be retained for at least 150 releases since the structure change. Versions older than this are in bold above, and it is recommended to resave these files.
No old data was found.
Unreadable Data
It is acceptable to leave unreadable data in these files, as Jenkins will safely ignore it. To avoid the log messages at Jenkins startup you can permanently delete the unreadable data by resaving these files using the button below.
Type Name Error
hudson.maven.MavenModuleSet nov 7 latest NonExistentFieldException: No such field hudson.plugins.findbugs.FindBugsReporter.isRankActivated
Discard Unreadable Data
According to the main FindBugs author, this is expected behavior when you downgrade FindBugs from a newer version to an older one:
Once you upgrade to a new version you can't downgrade without getting
such kind of exceptions (I only ensure backward compatibility). Can't
you use the "manage old data wizard" in Jenkins to remove these new
fields from your persisted Jenkins build files?
Nabble discussion.

Is it possible to configure TFS not to mark file as read-only?

The title pretty much says it all.
I'm using RFT, a VS add-in that allows me to edit a proprietary data file with a GUI. The problem is that this file doesn't show up in VS, and when I start editing it via the GUI, VS doesn't check it out automatically (probably a bug in the VS add-in). So I have to check it out manually before editing it; otherwise the add-in will crash when trying to save the file (because it is read-only), and sometimes will also corrupt the local working copy of this project.
Everything would be much easier if TFS didn't mark files that are not checked out as read-only.
Do you know if there is a way to instruct TFS to keep all the files as not read-only?
No. You can exclude it from source control, but that's probably not what you want.
I have the same issues with TFS. Our project has a few small SQL Server database files that we have chosen to put under source control. We handle the read-only issue by adding these to the post-build event of the project. I suppose we could have done this pre-build as well.
attrib $(TargetDir)*.mdf -r
attrib $(TargetDir)*.ldf -r
It has been a while, but - I think this link is actually the answer to that.
When you do a check out, what you are actually doing is saying “TFS, I
would like to edit the version of the file that I have already
downloaded, is that ok?” TFS then looks at that version, and tells you
if you can edit it or not (based on your security permissions at that
point in time and if anyone else has placed a lock on the file). If
you can edit the file, the TFS marks the file as read/write on your
local machine and allows you to proceed.
I.e., when I right-clicked the project and selected "Check Out and Edit", the read-only flag was automatically removed, and I could compile (with both pre- and post-build events) and then check in again.
Well, you can get latest to a Samba share, which eats the read-only bit.