What is sent to PlasticSCM server in-between merge and replicate that affects other users, and why? - plasticscm

Person A does a merge
Person A spends some time QAing the merged code
Person B tries to check in a change on one of the merged branches, and has to go through the "there is a pending merge" dialog
Why is it that this happens, even when Person A has not yet replicated their merge to the server?
More specifically, what information is sent to the server implicitly on merge, and why? It seems odd from a user perspective that this is separate from replication, especially as it affects other users and forces them to merge from something that they do not yet have access to.

If Person A performs a merge and spends some time QAing it without checking the result in, it will not affect Person B in any way. That is impossible.
If Person A performs the merge and then checks the changes in, Person B might face a merge operation if their changes are based on a previous changeset and conflict with the changes in the merged changeset. If Person B's changes do not conflict with the merged ones, Person B can simply update-merge; Plastic will let you do it via the Update-Merge dialog.
You also mentioned the "there is a pending merge" dialog. This dialog is only displayed when you start a merge operation, don't finish it, and then try to check in the partial result. In that scenario you should repeat the merge to process the remaining merge candidates and finish the operation; Plastic will then let you check the merge in.
Finally, answering your question:
What is sent to PlasticSCM server in-between merge and replicate that affects other users, and why?
I'm afraid nothing is. If you start a merge at the central repository or at a distributed repository, it won't affect third-party users working with replicated repositories; they are completely independent until you push the merged changeset to external repositories.

Related

Rails rollback DB and revert select commits

We have a feature (that contained some DB migrations and 50+ changed files) that was merged into master a few commits ago. The powers that be now want that whole feature removed.
There have been no new migrations since this feature was added though there have been some new commits that we want to keep.
What's the best way to unwind this quickly, assuming a small team and that it's OK to force-push to origin? Is it possible (read: recommended?) to:
rollback the migrations to the point right before this feature (PR) was merged
revert the git commits back to the same point
replay the more recent commits (by hash?) we want to keep (unrelated to the unwanted feature) back on the codebase
This will wreak havoc with other developers but we're a very small team at the moment and working together on this.
Or maybe there is a better way?
I realize rollbacks & reverts are covered in other topics here, and I've read many of them, but our situation is somewhat different: we want to roll back, revert, and then replay certain commits, and then bring origin up to date so it appears as though that bad feature never happened (or, if easier, a merge commit reverting that feature PR would be acceptable).
Thanks for any help!
I'd suggest adding a single migration that contains the code to revert those six obsolete migrations, plus one commit that reverts the feature merge.
This way you'll:
still have the feature in git history, in case you need to reapply it or reuse part of its functionality
not need to carefully reset git and the DB to some earlier state on every dev environment plus staging/production, which avoids human mistakes

The Date & Time for TFS locks

Is there a way to see the date and time when a file was locked in TFS?
Just to be clear, I am not talking about check-ins, only locks/check-outs.
In TFS, Locks appear as a "pending change" for the person who locked the item. As long as the lock is in effect, it will appear as a pending change. When a commit is made of that pending change, the lock is released.
While the lock is in effect, the locked branch is effectively read-only, since (to simplify) the locker is the only user who can make commits. The act of committing is what releases any locks on the branch.
So the lock operation first occurs on the local side and is not recorded by TFS. You can only see who locked the branch and check whether it is currently locked. There is no way to query when a file/branch was locked in TFS. You could also take a look at this similar question: when a file was locked in TFS

Save NSUndoManager transactions one by one

I need to save changes not only locally into Core Data, but on the server too.
My concern is that, in my case, the user can perform a bunch of interactions in a short time. Between interactions there is not enough time to receive the success message returned from the server. So either I lock the GUI until the next message returns (this is the case now), or I choose a different approach.
My new approach would be to let the user perform many interactions and put the transactions onto the undo stack provided by NSUndoManager (enabled on the NSManagedObjectContext), BUT save/commit ONLY the transaction for which a success message was received. How can I move the undo "cursor" one step at a time and commit records one by one, even though the context already contains plenty of unsaved changes?
NSUndoManager is not really suited to this task. You can tell it to undo or redo actions, but you can't inspect those actions or selectively save data in the current undo stack.
What I've done in the past is create my own queue of outgoing changes. Whenever changes are saved locally, add them to a list of un-synced outgoing changes. Then process that queue separately, sending the changes to the server and, if the server reports success, clearing them out. You can use NSManagedObjectContextWillSaveNotification and/or NSManagedObjectContextDidSaveNotification to monitor changes and update the outbound queue.
This means that the iOS device may have queued changes that the server doesn't know about, especially if the network is unreliable or unavailable. That's pretty much unavoidable in those situations though, unless you do something awful like refuse to let people make new changes until the network comes back up.
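A minimal sketch of that outbound-queue idea, written in Swift and assuming a hypothetical SyncServer protocol for the networking layer (only the NSManagedObjectContextDidSave notification, its userInfo keys, and the standard Core Data types are real API here):

```swift
import CoreData

// Hypothetical server interface; swap in your real networking layer.
protocol SyncServer {
    func upload(_ objectIDs: [NSManagedObjectID], completion: @escaping (Bool) -> Void)
}

// Collects locally saved changes and pushes them to the server one batch at a time.
final class OutboundChangeQueue {
    private var pending: [NSManagedObjectID] = []
    private var isSending = false
    private var observer: NSObjectProtocol?
    private let server: SyncServer

    init(context: NSManagedObjectContext, server: SyncServer) {
        self.server = server
        // Record what changed every time the context saves locally.
        observer = NotificationCenter.default.addObserver(
            forName: .NSManagedObjectContextDidSave,
            object: context,
            queue: .main
        ) { [weak self] note in
            guard let self = self, let userInfo = note.userInfo else { return }
            let inserted = userInfo[NSInsertedObjectsKey] as? Set<NSManagedObject> ?? []
            let updated  = userInfo[NSUpdatedObjectsKey]  as? Set<NSManagedObject> ?? []
            self.pending.append(contentsOf: inserted.union(updated).map { $0.objectID })
            self.sendNext()
        }
    }

    deinit {
        if let observer = observer { NotificationCenter.default.removeObserver(observer) }
    }

    // Sends the queued changes; the queue is only trimmed when the server confirms success.
    private func sendNext() {
        guard !isSending, !pending.isEmpty else { return }
        isSending = true
        let batch = pending
        server.upload(batch) { [weak self] success in
            guard let self = self else { return }
            self.isSending = false
            if success {
                self.pending.removeFirst(batch.count)
                self.sendNext() // drain anything queued while the batch was in flight
            }
            // On failure, leave the queue intact and retry on the next local save.
        }
    }
}
```

The design point that matters, as in the answer above, is that entries only leave the queue after the server confirms success, so changes made while offline simply accumulate until connectivity returns.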

Show iterations for work item in TFS

I made a mistake in Team Foundation Server 2013 when trying to clean up our iterations. Our iteration path setup for the longest time was:
TFS PROJECT
  Sprint 1
  Sprint 2
  ...
There were discussions within the team, so I changed the iteration path setup to this:
TFS PROJECT
  Iteration Group
    Sprint 1
    Sprint 2
    ...
  Readied Work
Well, after experimenting, I decided to move all the sprints under their parent Iteration Group back to the main TFS PROJECT parent. Unfortunately (this is where the mistake occurred), I deleted the Iteration Group container, thinking the Iterations would be re-parented. In doing so, all the child iterations were deleted and the work items that had been previously associated with each sprint were reallocated to the top parent, TFS PROJECT. The iteration path structure now looks like this:
TFS PROJECT
  Readied Work
I have already recreated the iterations, as we did not have a backup of the project/collection to which I could have rolled back. The DBA team is hands off the TFS database, so they are not available to assist. I know how Areas/Teams/etc work in TFS, but I am not familiar with the database structure.
Given that I am able to see all the work items on the TFS portal, is there a way to show all IterationIDs each Product Backlog Item has been associated with, in a list?
I would prefer to NOT look at the history of each PBI, as there are a lot.
First, I'd strongly recommend against touching the SQL database directly.
Using the TFS API you can query work items, and the 'AsOf' operator lets you get their state from a historical point in time. With that, it wouldn't take much work to query the area/iteration paths of all your work items from two days ago and then write them back to the current work items.
As you only have a few iterations, you could use the 'was ever' operator. If you create a query and add a filter of IterationPath Was Ever '/project/group/iteration 1', you will see the work items that were ever under that node. You can then bulk-edit everything you find under the desired path.

How do you handle multiple tasks on the same file in TFS?

We are using Team Foundation Server 2010 at work and all of our assignments come from TFS tasks.
Right now I have 2 tasks that relate to the same source file. They are two separate feature requests, but I will end up writing common methods for both. I check in the code changes and link the task when I am finished with the task.
There's an issue right now, though, with the test database that is preventing me from actually finishing the first task and checking it in. And the next task is on the same file.
I am curious how other people handle this issue. I know I could shelve the change and work on the other, but I kind of need some of the other methods I wrote in the first task. The body in some of these methods will be changing a little bit to handle the next feature.
Do I shelve my changes, copy my methods over, and modify them for the new task? If I do that, how would the merge process work when I un-shelve my changes? How do any of you handle this issue? Am I better off just doing both tasks in the same changeset? However, then the 2nd task has a dependency on the 1st. If for some reason the database doesn't get fixed, the first task is holding up the 2nd from being deployed.
Thanks for your input in advance.
If both tasks are destined for the same release, work on them in the same branch and just associate both task work items with the check-in. If they are fundamentally separate changes and may move to the production codebase separately, then you should have two separate branches for the code.
