Our company (internal projects) used version control (TFS, now 2015) simply to keep an audit trail of released code. I have brought in the use of branching and merging, and it has completely changed the way we look at bottlenecks in the development pipeline and has generally been well received, but now I am looking for the next step.
Our code consists of one large piece of software and several other accompanying business applications.
We have four environments we keep up at all times, and our 'pipeline' is like so.
Developer does work locally.
Pushes code to a 'Development' environment (so we can all look at the code, see how well it integrates with the environment, etc.)
When testing is ready we push to 'Test'; this is code that has been approved to move up the pipeline, so the environment is a lot more stable than 'Development'.
Next, we pass it to the UAT server, which is essentially a mimic of the live server, to be as stable and representative of a live release as possible. Approval to move code here is NOT frequent.
Finally, production environments.
Now I simply took the approach of having a branch for each environment, allowing for easy comparisons, for people to quickly grab the source and whatnot, and to see the progression of the codebase up the chain.
MAIN -> STAGE -> TEST -> DEV
This is one single, linear line, and we can simply view the history of the MAIN branch to see all the different released builds.
From the DEV branch we splinter off into our local branches, and any hotfixes come directly off the UAT (STAGE) branch.
This works for us - but it works in the sense that a procedural program can work - it may not be the most effective approach.
I'm just very curious whether there are better ways to do this. After reading loads of material online, I get the impression that people don't split their branches by environment, but I don't really understand how that works better, even though it is a pain to merge four times to release some code (although most of the time it is a rather slow pipeline; we have weekly releases).
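To put a number on that pain, here is a rough sketch of what the promotion path costs per release; the branch names match our setup, but the code is only an illustration of the merge count (the fourth merge in practice is the developer's local branch into DEV), not how TFS actually performs the merges.

    # Rough model of our environment-per-branch chain (illustration only).
    # Each branch is promoted by merging into its parent, so releasing a
    # change that starts in DEV costs one merge per hop up to MAIN.

    CHAIN = ["DEV", "TEST", "STAGE", "MAIN"]  # child -> ... -> parent

    def merges_to_release(start: str = "DEV") -> list[str]:
        """List the merges needed to get a change from `start` into MAIN."""
        idx = CHAIN.index(start)
        return [f"merge {CHAIN[i]} -> {CHAIN[i + 1]}" for i in range(idx, len(CHAIN) - 1)]

    if __name__ == "__main__":
        for step in merges_to_release("DEV"):
            print(step)
        # A hotfix made on STAGE only needs "merge STAGE -> MAIN",
        # plus a merge back down so DEV picks up the fix.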
Any help much appreciated.
You are correct when you mention that the more complex the branching strategy, the more overhead there is to maintain.
But if the situation demands it, there is no escaping it. If you have not gone through the branching strategy document by the ALM Rangers for TFS, please take a look; it should help you.
I think the strategy you are following is not linear branching but rather the one in the image below.
In more complex enterprise software, the branching strategy boils down to this.
I think it's a lot of overhead to maintain different branches for each environment as you rightly mentioned above (especially the number of merges). The simplest branching strategy is the one shown below (similar to what we use):
         Main
        /    \
     DEV      Release
Development happens in the DEV branch; once it is ready for UAT we merge it into MAIN and then create a Release branch. You can use the DEV branch for the next release's development at that point, and all the bug fixes for the current release will now happen in the Release branch. The Release branch will be used for the PROD deployment as well.
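To illustrate the shape of the flow, cutting a release looks roughly like this. The sketch uses git commands purely because they are compact (the answer is about TFVC), and the branch names and release number are taken from the diagram or made up.

    # Sketch of cutting a release under the Main/DEV/Release model.
    # Assumes a repository with 'main' and 'dev' branches; the answer
    # above is about TFVC, so treat this as an illustration only.
    import subprocess

    def run(*args: str) -> None:
        subprocess.run(["git", *args], check=True)

    def cut_release(version: str) -> None:
        # 1. DEV is ready for UAT: merge it into Main.
        run("checkout", "main")
        run("merge", "dev")
        # 2. Create the Release branch from Main; UAT bug fixes and the
        #    PROD deployment happen from this branch.
        run("branch", f"release/{version}", "main")
        # 3. DEV is now free for the next release's development.

    if __name__ == "__main__":
        cut_release("1.4")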
Whether this would work for you will depend on your specific needs, but 80% of the projects I have worked with use the above branching strategy.
This is not a programming question, but I don't know a more active forum, and besides, programmers are the best people to answer my question.
I am trying to understand the rationale behind continuous integration. On the one hand, I understand that it is good practice to commit your code daily before heading home, whether or not the coding and testing are complete. On the other hand, there is the continuous integration concept, where the minute something is committed, it triggers a build and all the test cases are run. Aren't the two things contradictory? If we commit whatever coding is done daily, it will cause daily failed builds. Why don't we manually trigger builds once the coding and testing are complete?
Usually, the point of saving your code daily is to be sure that your work will not be lost.
On the other hand, CI (Continuous Integration) is there to test whether what you produced is OK. In the majority of projects CI isn't applied to individual branches (i.e. feature, bugfix); it's applied to the major branches (i.e. master, develop, release, etc.). These branches aren't updated daily, as they need a pull request to be updated and someone to approve that pull request.
The use case for having CI on individual branches (feature, bugfix) is to check, before a pull request is merged into a major branch, that the tests pass and the code builds.
So, summing up: yes, you need to commit your code daily, but you don't need to apply CI to every one of those commits.
I suggest you check out the Gitflow workflow: https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow
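To make the distinction concrete, a build server is typically configured to react only to pushes on the long-lived branches and to pull requests targeting them. Below is a minimal sketch of that filtering logic; the branch names are assumptions, and no particular CI product's configuration syntax is implied.

    # Minimal sketch: decide whether a push should trigger a CI build.
    # Branch names are assumptions; real CI servers express this as
    # configuration rather than code.

    MAJOR_BRANCHES = {"master", "develop"}
    MAJOR_PREFIXES = ("release/",)

    def should_build(branch: str, is_pull_request: bool) -> bool:
        """Always build major branches; build feature/bugfix branches only via PR."""
        if branch in MAJOR_BRANCHES or branch.startswith(MAJOR_PREFIXES):
            return True
        return is_pull_request  # validate feature branches before merging

    if __name__ == "__main__":
        assert should_build("develop", is_pull_request=False)
        assert not should_build("feature/login-page", is_pull_request=False)
        assert should_build("feature/login-page", is_pull_request=True)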
The answer is obvious.
1. Committing code: In general, code is committed only after testing it locally against the environment.
Consider Developer_A working on Component_A; they commit with only minimal verification, because their scope is just to develop Component_A.
Now imagine a complex system with 50 developers developing Component_B through Component_Z and beyond.
If someone commits code without even that minimal testing, it is most probably going to give you a failed result.
Alternatively, the developer might commit it on a development branch; that altogether depends on the SCM strategy adopted in the project.
2. Continuous Integration test scope:
On the other hand, the integrator principally collects and combines the different pieces of code (software components) into one container and performs different tests.
Most importantly, the integrator needs to ensure that all the components developed by the different developers fit together and that, in the end, the software works as expected. To ensure that, the integrator has acceptance criteria, and to proactively catch anything that can go wrong, it is important to automate those criteria with the help of continuous integration.
Among all factors, it is important to give the developers feedback on the quality of the software. It is economically in the project's best interest to know about a bug as early as possible; hence continuous integration and DevOps.
In a complex system it is worth having an automated watcher to catch the mistakes that sneak in from developers.
3. Tools and automation:
To create a human-independent system, automation tools like Jenkins are helpful.
Based on the testing strategy, different testing levels can be performed with the help of automation tools.
We have TFS 2013; currently, for the team project we are working on, we don't have any branches, basically just a plain folder structure with various solutions in it.
With the goal of introducing release management, the intention is to create several branches, e.g. development/main/release.
I was told I cannot 'disturb' the project team developers in their day-to-day work, since there are other projects being worked on. So the question is: what is the best practice for doing this? Create a separate team project? How can we adopt the branching practice without involving all the developers?
Please point me in a direction or share some thoughts on this; any help is appreciated!
Unfortunately, to do this you'll have to "disturb" the development team. You have 2 options. 1. Come up with a process in isolation and then disturb them when it goes live. 2. Collaborate with them on a process and work together to meet your requirements. (Being able to release your code to production is a key requirement for any project; sadly it's so obvious that it never gets added to the backlog / project plan and is always treated as an afterthought.)
I recommend option 2. Without collaboration you're just going to end up causing resentment that you're imposing something on the Devs and they'll fight it tooth and nail. Also without the development team being involved you'll miss something important that will make the process brittle and difficult to maintain.
You'll need to get buy-in from the developers to implement a branching strategy, as it will have a significant impact on them. They need to understand why you're doing this and what the benefits are, both to them and to the business. They don't necessarily have to do any of the work, but they need to know what you're doing and why, and they will also need to know when the changes are coming so that they can plan for them.
Firstly, you need to read the ALM Rangers version control guidance.
Secondly you need to get the developers to read it as well. They will be responsible for maintaining the code and merging it between branches. They will need to know when and where they need to check in various changes (such as hotfixes), and what process they should follow when code is ready for release.
Finally, regarding your question about where the branches should be located: it would be better to locate all the branches in the same Team Project rather than having your Dev branch in a separate Team Project.
My team is considering doing branch-per-task development in TFS 2010. We are thinking of using shelvesets for small tasks (1-3 days) and creating new branches for anything larger (4 days to 2 months). Once development is complete on the branch it will be merged to main and deleted (not destroyed). Typically it will be only one developer working on a particular branch.
Does anyone have experience working on a project using TFS 2010 with many branches? How did it work? Were there any server performance issues as the number of branches grew? Does it affect the performance of the VS IDE at all?
There are already many answers out there relating to questions such as "TFS sucks at merging and is crushing my soul, what can I do?" and "Why would anyone ever use TFS when x, y, and z are available?" Please try to keep your answers relating to server performance and usability of the system in the presence of a large number of branches.
Here is some background with my history of branching. The project I worked on previously used a branch-per-task strategy with ClearCase and it worked very well. Branch creation was tied to both the defect tracking and build systems. Developers completed units of work each in their own branch. The lifetime of each branch varied from a day up to a couple of months. At the end of each task the code was merged into the main integration branch. This was a large project, and after approximately 10 years of development the system had over 10,000 branches. ClearCase is able to handle this volume of branching quite well (except when viewing popular files in the Version Tree Browser, where load time could be slow).
Basically, the model you describe is Branch by Feature; this is the model that Microsoft's Developer Division uses to develop the Visual Studio product family, so you can tell it scales pretty well with TFS.
I recommend reading this blog post, and you can read the Branching Guide V2 to get more information.
As for the merging, the topic is pretty well covered here and on the web; in my opinion it doesn't suck when you use it correctly (and without the default merge tool).
I have gone through the TFS rangers guides before posting this.
I have the below requirement in our project:
Development of release 1 happens
It is deployed to the Dev environment for integration and sanity testing.
If OK, the code is deployed to the QA environment.
If OK, the QA-approved code gets deployed to production.
Currently we have a code base in TFS under CODE, where developers code.
The above is branched to DEV to mirror the Dev environment code.
The DEV branch is branched to a QA branch.
In case a hotfix is required, it is fixed directly on the QA branch and later reverse-merged into the branches below it.
This was fine for the initial development, but I feel this needs to be restructured to scale better for future release development.
Current issues:
We need to plan to support future release 1.5 development.
At the moment there are some features/fixes which may or may not go into the current release. Developers just shelve them for now so that they can be unshelved in future. The issue is that, with time, shelvesets become a huge pain to merge, as they do not have history.
Sometimes people work on big features in shelvesets for up to a week. Merging becomes a huge pain, because by then those dozens of files have also been worked on by a lot of other people.
Keeping all the above in mind, I am thinking of redesigning our TFS branching strategy as depicted below:
As per this approach:
Development would happen only on a dev branch such as the DevRel1 branch.
If a developer needs to work on a big feature, he would work on a branch such as a Feature 1 branch, branched from the dev branch. On completion he would merge it back to the dev branch.
For 'probable' features, which may or may not go into this release, the developer would work on a probable-feature branch, branched from the Main branch. Depending on the final decision, it would be baseless-merged into the appropriate dev branch (see the command sketch after this list).
Code would be deployed to the Dev environment from the dev branch.
Code would be deployed to the QA environment from Main.
For a release, Main would be branched into a new release branch.
Hotfixes for QA happen on the Main branch, and hotfixes for a release happen on the release branch. In that case an RI (reverse integration) goes from the release branch to Main, and an FI (forward integration) goes to the dev branches.
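For the baseless merge mentioned above, the command would look roughly like the following; the server paths are made up, and you should verify the exact tf.exe options against your TFS 2008 client.

    # Sketch of the baseless merge from the probable-feature branch into
    # the dev branch. Server paths are invented for illustration, and the
    # tf.exe syntax should be double-checked against your client version.
    import subprocess

    def baseless_merge(source: str, target: str) -> None:
        subprocess.run(
            ["tf", "merge", "/baseless", "/recursive", source, target],
            check=True,
        )

    if __name__ == "__main__":
        baseless_merge("$/Project/ProbableFeature1", "$/Project/DevRel1")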
Is this getting too complex? Can it be simplified? Does this look fine process-wise, or does it need correction?
We are using TFS 2008 at the moment.
My advice would be to keep things as simple as you can. IMHO the main things to avoid are:
Complexity confuses people. The more complexity there is, the more time it takes to deal with, the more merges you end up doing, and the more mistakes that are made. If you don't need a process then remove it.
If you work on code in a branch, then you will necessarily need to merge code in the future. The longer branches live before they are merged the more difficult and time consuming the merge becomes.
"Piggybacking" a hierarchy of branches on each other introduces the need to merge through multiple levels to get code back to the dev branch, which makes it time consuming to push a change from an outlying branch back into the main codebase.
So you should definitely use branching to support your dev needs, but try to keep your scheme as simple as possible.
We use a similar, but simpler, approach to yours. That is:
We work on a Dev branch. Daily builds come off this branch for continuous QA.
When a release "code freeze" is required, a Release branch is made. A bugfix can be made in either branch, but if required for the release it is always immediately merged to keep dev & release in sync. We try not to let releases diverge from the main branch at all if possible. We never have more than one release branch active at a time.
When small features are developed, or where features can be developed without becoming "enabled" in the application or otherwise affecting the stability of the code, we continue to develop the code within the Dev branch. E.g. we might use an #if condition to disable the code until it is safe to "activate" in daily test builds. This minimises the need for branching.
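For illustration, here is the same idea expressed as a runtime feature flag rather than a compile-time #if (shown in Python purely for brevity; the flag and function names are invented):

    # Sketch of keeping an unfinished feature dark on the Dev branch.
    # In C-family languages this would be an #if block; here the same
    # idea is shown as a runtime flag. Names are invented for illustration.

    ENABLE_NEW_EXPORTER = False  # flip to True once the feature is safe for daily builds

    def export_report(rows: list[dict]) -> str:
        if ENABLE_NEW_EXPORTER:
            return _export_v2(rows)   # new, still-unstable code path
        return _export_v1(rows)       # existing, shipped behaviour

    def _export_v1(rows: list[dict]) -> str:
        return "\n".join(str(row) for row in rows)

    def _export_v2(rows: list[dict]) -> str:
        raise NotImplementedError("feature under development")

    if __name__ == "__main__":
        print(export_report([{"id": 1}]))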
When a large feature is developed that could potentially break the Dev branch, we work on a separate Feature branch. We try to plan the feature to minimise the time during which the feature/dev branches are allowed to coexist, and if possible stop developers working on related areas of the code while the feature is developed, to minimise the merge problems when the feature completes. If possible features are developed between releases to minimise the overlap (the number of concurrent branches).
Our other key strategies are:
Use Continuous Integration, Unit Tests, Regression Tests, Gated Checkins and continuous QA testing to keep the Dev branch as stable as possible. Our aim is that any daily build "should" be good enough to ship straight to a customer. In reality there are occasional short periods (a few days) where that stability is lost, but most of the time when these happen we are still within a few days of having a releasable build.
Defer making branches until absolutely needed. In TFS you can retroactively create a branch from any point in the codebase history. So when we are ready to start a release branch, we don't actually create the branch but just send the current release build to the QA department for testing. If they are happy with the quality of that build, it goes to customers as is, with no branch having been created. It is only when we need to fix a bug for that release that we actually create the branch (from the point in time where the original release-candidate was built, so we know it starts from the well tested code snapshot) and incur the (small) costs that entails.
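To make the retroactive branch concrete: the TFVC client can branch from a historical changeset, so the Release branch can be created from the exact snapshot that went to QA. A rough sketch follows; the server paths and changeset number are made up, and the tf.exe options should be checked against your TFS version.

    # Sketch: create the Release branch only when the first fix is needed,
    # branching Main at the changeset the release candidate was built from.
    # Paths and the changeset number are invented; verify tf.exe syntax
    # for your TFS version.
    import subprocess

    def branch_release_from_changeset(changeset: int, version: str) -> None:
        subprocess.run(
            [
                "tf", "branch",
                "$/Project/Main",
                f"$/Project/Release-{version}",
                f"/version:C{changeset}",
            ],
            check=True,
        )
        # The branch is pended in the workspace; check it in to create it.
        subprocess.run(["tf", "checkin", "/comment:Create release branch"], check=True)

    if __name__ == "__main__":
        branch_release_from_changeset(changeset=4711, version="2.3")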
As a side note, we also tried a Dev branch with gated checkins to a QA branch, with gated checkins from there to a Release branch, but this worked poorly. Primarily we found that it added considerable overhead to all development: we like to check in frequently, and the two extra test-and-merge steps for every checkin are expensive. In the worst cases, if you delete, move or rename files, TFS becomes very flaky and even simple merges fail; these are difficult and time consuming to sort out. We decided that merging in TFS is still not lightweight and robust enough to support this sort of approach unless you are prepared to invest a lot of time into managing the branches. On the other hand, if devs are careful with their checkins, there is far less need for such a "rigorous" approach. So we swapped back to the above lightweight approach, which increases the risks but minimises the need to merge; for us (with a smallish and disciplined/competent team) this approach works better.
Thanks for all your responses. I have since simplified my design and we plan to tweak it a bit before doing a test run on it.
The new design is as shown below (comments on it are still always welcome!)
My 2 cents ...
For correctness, I would suggest having the PossibleFeature1 branch merge back to the same branch it originates from, i.e. Main. Talking of which, do not differentiate between a Feature and a PossibleFeature branch; they are the same. Features are always subject to delay, reprioritization, or whatever other reason why they won't end up in the planned release. Allow this flexibility in principle for every Feature branch, so treat every Feature the same.
To further simplify your model (and your life), think about having only a main branch for development and QA. The additional overhead and complexity of having both is not worth the effort. Deploy stable Main versions to QA from the main development line, and label the shipped versions.
So my (personal) model would be to have a main development branch. Big, challenging features are put on their own branch; they might end up in this release, they might go into the next, no worries. Keep merging from Main into the feature branches on a regular basis to keep them in sync. It is good to have separate Release branches if you have multiple versions out in the field that you need to support for a while. Start the Release branch before your stabilization, alpha and beta stages. You might consider deployment to QA as part of these Release branches.
I am quite new to Lean/Kanban, but I have pored over online resources over the last few weeks and have come up with a question that I haven't found a good answer for. Lean/Kanban otherwise seems such a good fit for our company, which is already using Scrum but has reached some limitations inside that methodology. I hope someone here can give me a good idea.
As I see it, one of the biggest advantages of Scrum over Waterfall is the use of sprints. By having everything ready every 14 days you get short feedback cycles and can release often. However, as I have understood from reading about Lean, there are some costs associated with this (for example, time spent in sprint planning meetings and team commitment meetings, and some problems with finding something useful for everyone at the end of the sprints).
Lean/Kanban will remove these wastes, but only at the cost of not being able to release every 14 days. Or have I missed an important point? For, in Kanban, how can you work on new development tasks and release at the same time? How do you make sure you don't ship something that is only halfway done? And how can you test it properly?
My best "solutions/ideas" so far are:
Don't release often and allow the waste associated with running out of new development tasks. Not really a solution to the question asked though.
Develop in branches and then merge into the main trunk. Makes you have to support at least two branches continuously internally.
Use some smart automatic labelling system to automatically build only certain finished tasks and not others.
As a summary, my question is: When you use Lean/Kanban, can you release often without introducing waste? Or is release often not part of Lean/Kanban?
Additional info specific to my company:
We use Team Foundation Server & Source Control and have previously had some bad experiences with branching and merging. Could this be solved simply by bringing in some expertise in this area?
The problem you describe seems to be more about source control (how to separate finished features from features in progress) than about Kanban. You seem to put a heavy penalty on running many branches, which is the case for source control systems not built around the idea of multiple branches. In distributed source control systems such as Git and Mercurial, everything is a branch, and having and working with branches is lightweight.
I assume you read this blog about Kanban vs SCRUM, and the associated practical guide?
And, in answer to your question, yes, you can release often with Kanban.
You need to understand pull systems, which is what Kanban is designed to manage.
A customer (or product owner or similar) request for a feature in the running system is what triggers the process.
The request is a signal that goes to deployment. Deployment looks for a tested item with properties that match the request. If none is there, you write the tests and look at development to see if there is a development slot that can be used to implement something that fulfils the tests. When development has done its work (maybe looking for a suitable analysis first, and so on), test does its testing, and deployment deploys.
The requests going backwards through the system are permissions to start working. As soon as the request has arrived, this triggers a lot of activity, where each activity should be completed as quickly as possible. There you have your turbo deployment.
Just like a request for a car goes to the dealer, who looks in the shop, who signals the car factory, which signals the suppliers.
Kanban is not about pushing requests through a system. It is about pulling functionality out of the system in exchange for a request that enters via the last step.
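One way to picture this pull (my own toy sketch, not anything from a Kanban standard; stage names and buffer sizes are made up): each stage keeps a small buffer of finished items, and a request at the downstream end pulls the nearest finished item while sending a replenishment signal upstream.

    # Toy pull-system model: a request enters at the downstream end, walks
    # upstream until it finds a stage with a finished item, pulls it, and
    # sends a replenishment signal further upstream. Illustration only.

    class Stage:
        def __init__(self, name: str, ready: int = 1) -> None:
            self.name = name
            self.ready = ready  # finished items waiting to be pulled downstream

    def fulfil_request(stages: list[Stage]) -> None:
        for i in range(len(stages) - 1, -1, -1):
            if stages[i].ready > 0:
                stages[i].ready -= 1
                downstream = [s.name for s in stages[i + 1:]]
                upstream = [s.name for s in stages[:i]]
                print(f"Pulled from {stages[i].name}; still to pass through: {downstream or 'nothing, it ships'}")
                print(f"Replenishment signal sent upstream to: {upstream or 'nobody'}")
                return
        print("Nothing is ready anywhere; work starts at the very first stage")

    if __name__ == "__main__":
        pipeline = [Stage("Analysis"), Stage("Development"), Stage("Test"), Stage("Deployment")]
        fulfil_request(pipeline)  # a deployable item exists: ship immediately
        fulfil_request(pipeline)  # next request pulls from Test; Deployment finishes it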
The team I manage uses Kanban and we release around every two weeks. If you're strict about what gets integrated into your mainline code branch (tests passing, customer approved, etc.), Kanban allows you to release whenever you want. You need to make sure that the stories moving through your system aren't co-dependent in order to do this, but on my team that's usually not a problem - a large part of our work involves maintenance, which consists of several unrelated bug fixes / features per release.
The way we handled weekly releases on a sustained engineering project that used Kanban was to implement a branching strategy. The devs worked in a sandbox branch, and made one checkin per work item. Our testers would test the work item in the sandbox; if it passed the regression tests the checkin would be migrated to our release branch. We locked the release branch from noon Monday until the release went out (usually by Wednesday, occasionally by Thursday, the drop dead date was Friday), and re-ran the regression tests for all migrated checkins as well as integration tests for the product, dropping a release once all of the tests passed.
This strategy let devs continually be working on issues without being frozen out of their branch during the release process. It also let them work on issues that took more than a week to resolve; if it wasn't checked in and tested/approved it didn't get migrated.
If I were running Kanban for a new version of a project, I'd use a similar strategy but group all related checkins as a 'feature', migrating a feature en masse to the release branch once the feature was done and then performing additional unit/integration/acceptance/regression testing in the release branch before dropping a release with that feature. Note that a key concept of Kanban is limiting work in progress, so I might restrict my team to work on one feature at a time (this would probably be several work items/user stories).
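In git terms, migrating a single approved work-item checkin from the sandbox branch to the release branch is essentially a cherry-pick; we did it with TFS merges of specific changesets, but the shape is the same. In the sketch below the branch name and commit id are placeholders.

    # Sketch of migrating one approved work-item checkin to the release
    # branch. We used TFS changeset merges; the git cherry-pick below
    # shows the same shape. Branch name and commit id are placeholders.
    import subprocess

    def migrate_checkin(commit_id: str) -> None:
        subprocess.run(["git", "checkout", "release"], check=True)
        subprocess.run(["git", "cherry-pick", commit_id], check=True)

    if __name__ == "__main__":
        migrate_checkin("abc1234")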
There's more to this than just source control, but your choice of TFS is going to limit you. When the Burton project was conceived back in 2004, Microsoft wasn't paying attention to Agile, much less Lean. It's going to be your weakest mechanical link for some time. Your hackles should have been raised by CodePlex's own adoption of Mercurial after having been offered to the Microsoft community as the poster child of TFS implementation.
A more salient issue here is Work Design. It encompasses the order that you choose to implement features (work schedule), as well as prioritization and cost of delay, and the shape and size of work items.
Scrum is commonly interpreted to say that non-technical "Product Owners" can determine the work schedule based solely on their own concerns. If you follow this path, you're going to incur a lot of waste by not taking the opportunities to do work together that belongs together. Work that belongs together can't be determined just by Product Owner wishes; technical and workforce (skills) opportunities must also be taken into consideration.
For work to be done in the most productive way, the work itself has to be designed that way. This means that in a Lean Product Development team, decisions are made not by a non-technical worker, but by what Toyota calls someone of "Towering Technical Competence" who is close to the product, close to the customers, and close to the team.
This role is a stark contrast to Scrum's proposition. A Chief Engineer on a Lean team is himself (or herself) the voice of the customer, and the role of Product Owner is unnecessary.
Scrum's "Product Owner" is a recognition of an under-developed role in software development organizations, but it's far from a sustainable solution that consistently avoids waste. The role of "Software Architect" is often insufficient as well, as in some developer sub-cultures, the architect has become far too removed from the work.
Your issues of continuous deployment are only partially addressed with technology and tools. Look also to organizational issues, and perhaps give some thought to Scrum's purpose as a transitional approach from waterfall rather than one that can serve your organization indefinitely.
For source control I'd highly recommend Perforce. It makes branching and integrating changes from other branches relatively straightforward, and provides the best interface for source control that I've seen so far.
Continuous integration helps as well - i.e. lots of small, more than daily commits, instead of huge and potentially challenging merges. Tools like CruiseControl can help highlight when the source gets broken by a bad commit. Also, if everyone makes many small changes then conflicting changes will be rare.
I'd also advise not trying to follow things like Lean, Scrum, Kanban and co. too closely. Just solve the problems yourself, looking to these ideas for guidance rather than instruction. The specifics of your problems will more than likely require some flexibility for the best management.
How we do it:
We have a pipeline with the following stages
Backlog
TODO
In progress (Develop and quick testing)
Code review
Test (Rigorous testing)
Integration test and general acceptance tests
Deploy
Each story is developed as a branch based on the latest version to leave the Deploy stage. They are then integrated as part of preparing the integration test.
QA pulls from the code review stage and can prepare releases at whatever pace they want. I think we have a pace of roughly one release every week.
By removing the "master" branch from git and not doing any merge before the code review stage we've made sure that there is no possibility to "sneak" code into releases. Which, as an interesting by-product, has forced us to visualize a lot of the work that used to be hidden.