Software development lifecycle for an update or upgrade to an existing project or system? - sdlc

I have read about software development lifecycles and the different approaches for an initial project that needs to be developed. Can anyone suggest and explain different approaches for a software upgrade or update to an existing project?

Agile methodologies can be used here as well if you want to deliver incremental updates. Making smaller changes is especially important in an existing code base, since you can break old functionality that is not guarded by tests.
If this is something you are doing frequently, "Working Effectively with Legacy Code" might have some more answers.
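If you need a concrete picture of what "guarded by tests" can look like before you start changing things, here is a minimal characterization-test sketch in C# with NUnit (the InvoiceCalculator class and its numbers are hypothetical, purely for illustration, not from your project). The idea, in the spirit of that book, is to pin down what the existing code does today so that an update which accidentally changes that behaviour fails the build:

    using NUnit.Framework;

    // Hypothetical stand-in for an existing legacy class that is about to be updated.
    public class InvoiceCalculator
    {
        private readonly decimal _taxRate;

        public InvoiceCalculator(decimal taxRate)
        {
            _taxRate = taxRate;
        }

        public decimal TotalWithTax(decimal net)
        {
            return net * (1 + _taxRate);
        }
    }

    // A characterization test in the spirit of "Working Effectively with Legacy Code":
    // it documents what the code does today, so a later change that alters this
    // behaviour is caught by the build rather than by users.
    [TestFixture]
    public class InvoiceCalculatorCharacterizationTests
    {
        [Test]
        public void TotalWithTax_MatchesCurrentBehaviour()
        {
            var calculator = new InvoiceCalculator(taxRate: 0.25m);

            decimal total = calculator.TotalWithTax(net: 100m);

            // 125 is simply the value the legacy code produces right now.
            Assert.That(total, Is.EqualTo(125m));
        }
    }

Once a handful of such tests cover the behaviour you care about, incremental updates become much safer to make.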

Related

Test case repository for BDD [closed]

We are transitioning to BDD. We currently use SpecFlow and Visual Studio to run our automated tests via Jenkins, and we have over 1,000 tests in Quality Centre written in a more traditional fashion, of which the regression tests will be converted to BDD in time.
I'm looking for a repository (similar to a test plan in Quality Centre) to house all our test cases/feature files. It must be compatible with SpecFlow and Jira. What do people use as a manageable test case repository for their tests?
Cheers.
I'm not 100% sure I understand your issue, not being familiar with some of the tools you mention, but when you have executable specifications your test cases are in the feature files, which are stored in the code base. That's part of the point: your test cases are the things that get executed, so they are always up to date.
What @Sam-Holder said is good; I'm adding to that because I'm familiar with the issue and the tools you're talking about.
You're probably used to the idea that Quality Centre contains a bunch of test scripts, some of which pass and some of which don't (yet).
When you're doing BDD with automated scenarios, they pretty much always pass, all the time. Half the things that QC does simply aren't needed with modern Agile processes.
A pretty common practice is to put the scenarios into the Jira story until they're automated. They're transient. Nobody ever looks at Jira once the story's done. The codebase is the single repository of truth, and anything that lives in Jira gets ignored.
The automated scenarios are checked into the same codebase as the code. If the scenarios go red (fail), the team makes them green ASAP. They provide living documentation for the code. See if you can find someone to show you what Jenkins looks like in action and you'll get a better picture. Commonly the Jira number is added to the check-in comment to provide some level of traceability.
I think it's good practice to keep any manual test cases checked in alongside the automated ones (though please do question why they're not automated; if you automated them in QC you can usually do it with SpecFlow). This helps the test cases (scenarios) to provide living documentation for the code. In fact, getting rid of the word "test" was part of the reason why BDD came about, because BDD's really about analysis and exploration through conversation. It provides tests as a nice by-product.
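To make "living documentation" a little more concrete, here is a minimal sketch (the feature text, step wording and class names are hypothetical, not taken from your project) of how a SpecFlow scenario and its C# step bindings sit in the same repository as the code they describe:

    // Hypothetical example - the scenario below would live in a .feature file
    // (e.g. Login.feature) checked into the same repository as the production code:
    //
    //   Scenario: Registered user logs in
    //     Given a registered user "alice"
    //     When she logs in with a valid password
    //     Then she sees her dashboard
    //
    using TechTalk.SpecFlow;
    using NUnit.Framework;

    [Binding]
    public class LoginSteps
    {
        private string _userName;
        private bool _loggedIn;

        [Given(@"a registered user ""(.*)""")]
        public void GivenARegisteredUser(string userName)
        {
            // In a real binding this would set up test data for the system under test.
            _userName = userName;
        }

        [When(@"she logs in with a valid password")]
        public void WhenSheLogsInWithAValidPassword()
        {
            // Stand-in for driving the real application (an API call, UI automation, etc.).
            _loggedIn = _userName == "alice";
        }

        [Then(@"she sees her dashboard")]
        public void ThenSheSeesHerDashboard()
        {
            Assert.That(_loggedIn, Is.True);
        }
    }

When Jenkins runs the build, SpecFlow executes the scenarios against bindings like these, which is what keeps the feature files up to date and honest.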
To answer the question, the most commonly used tool at the moment is Git (at least for newcomers). It's version control that the devs are using. SVN / Mercurial are also OK flavours of version control. Get the devs to help you.
If you're still working in a silo and not talking to the devs, don't use BDD tools like SpecFlow - you'll find them harder to keep up-to-date, because your steps will probably be too detailed and English is trickier to refactor than code.
Better still, use the BDD tools and go talk to the devs and the BAs / SMEs / Product Owner who understands the problem. Get them to help you write the scenarios. When you start having the conversations, even over legacy code, you'll start understanding why BDD works.
Here's a blog post I wrote on using BDD with legacy systems that might also be helpful to you. Here's a blog post on the BDD lifecycle: exploration, specification and test by example. And here's one to get you started on how to derive that "Given, When, Then" syntax using real conversations.

Best practice for introducing branching to an existing project in TFS 2013

We have TFS 2013. Currently the team project has no branches, basically just a plain folder structure with various solutions in it.
With the goal of introducing release management, the intention is to create several branches, e.g. development/main/releases.
I was told I cannot 'disturb' the project team developers in their day-to-day work, since there are other projects being worked on. So the question is: what is the best practice for doing this? Create a separate team project? How can we adopt the branching practice without involving all the developers?
Please point me in a direction or share some thoughts on this; any help is appreciated!
Unfortunately, to do this you'll have to "disturb" the development team. You have two options:
1. Come up with a process in isolation and then disturb them when it goes live.
2. Collaborate with them on a process and work together to meet your requirements.
(Being able to release your code to production is a key requirement for any project; sadly it's so obvious that it never gets added to the backlog / project plan and is always treated as an afterthought.)
I recommend option 2. Without collaboration you're just going to cause resentment that you're imposing something on the devs, and they'll fight it tooth and nail. Also, without the development team involved, you'll miss something important and end up with a process that is brittle and difficult to maintain.
You'll need to get buy-in from the developers to implement a branching strategy, as it will have a significant impact on them. They need to understand why you're doing this and what the benefits are, both to them and to the business. They don't necessarily have to do any of the work, but they need to know what you're doing and why, and when the changes are coming, so that they can plan for them.
Firstly, you need to read the ALM Rangers version control guidance.
Secondly you need to get the developers to read it as well. They will be responsible for maintaining the code and merging it between branches. They will need to know when and where they need to check in various changes (such as hotfixes), and what process they should follow when code is ready for release.
Finally, regarding your question about where the branches should be located: it would be better to locate all the branches in the same Team Project rather than having your Dev branch in a separate Team Project.

TFS2012 vs Jetbrains TeamCity+YouTrack [closed]

We have used TFS2012 on the cloud and we don't like that there's no reporting service so we're looking to move to on-premise TFS2012. At the same time, we're starting to like Git and we're thinking that it may make more sense than TFS version control.
This obviously requires research and developers to "play admin" so we're taking the time to evaluate whether Jetbrains' highly-appraised solutions are a better fit.
Given a team of 6-8 people that run with Scrum that is eager to be on the "best practice" train for agile and a project that combines .NET technologies for the back-end and Javascript (AngularJS) on the front-end, considering a move from TFS2012 to a TeamCity/YouTrack/Git stack for scrum planning, source control, continuous integration and quality control and issue tracking:
What would/could we miss from TFS2012?
What are we going to enjoy from the new stack?
Is the new stack falling short in any respect that TFS is not and vice versa?
Note: This is a question specific to TFS2012 - there are several comparisons on SO and elsewhere for previous TFS versions and TeamCity, perhaps YouTrack too.
Here's a brief account of my two-week long experience with Git/YouTrack vs 6 months of TFS.
The new stack feels a lot more lightweight than TFS. Both installing (we tried on-premise TFS briefly) and using TFS give the sense of a very heavyweight enterprise suite for no reason. This is partly an illusion created by the UI design, but it seems that with YouTrack:
It takes fewer clicks to do anything, and even fewer if you learn some shortcuts and how to use the commands.
It's easier to navigate between the views - there are fewer of them, but they give a better overview than TFS. This is not because they present more info - in most cases they present less - but because they give the key information in a visually clean way.
The ability to run ad-hoc searches in YouTrack makes such a big difference! In TFS you have to create a query with a UI that tries to make it easier but ends up making it harder than just typing the query params. I mean, we are developers after all.
I've enjoyed the local commits of Git and how pull requests work to integrate work from other people into a main branch vs merging on TFS.
TeamCity has also been very lightweight to use - though I have no experience with CI on TFS. Having said that, it's an area I didn't delve into much because I was already spending a lot of time managing TFS.
Hiccups and things that I miss from TFS:
It's a little harder to manage releases with YouTrack, or I haven't figured out how to do it effectively yet. The management and separation of product backlog, release backlog and sprint backlog is easier in TFS.
There's no way to plan a sprint based on capacity of developers - I believe JetBrains is working on that though.
You gotta pay for a private Git - though YouTrack/TeamCity are free and full-featured for a few users.
I'll try and keep this up to date as I go.

Considering Porting App from .NET to Erlang - need advice [closed]

I am looking at Erlang for a future version of a distributed soft-real-time hosted web-based telephony app (i.e. Erlang looks like absolutely the perfect choice for this kind of app). I come from a .NET background and the current version of this app uses a combination of C#, WCF and JQuery to deliver the service. I now need Erlang to allow me to add extra 9s to my up-time and to allow me to get more bang for my server bucks.
Previously I'd set up a development process here combining VS.NET, GIT, TeamCity and auto-deployment of MSI files to the various environments we maintain. It's not perfect, but we're all now pretty comfortable with it. I'm wondering whether a process like we have is even appropriate for such a radically different technology stack (LYME)?
I'm confident that all of the programming challenges we previously solved using .NET can be better solved in less code with Erlang, so I'm completely sold on the language choice. What I don't yet understand from reading the Pragmatic and O'Reilly books on Erlang, is how I should adapt my software engineering and application life-cycle management (ALM) processes to suit the new platform. I see that in-place code updates could make my (and my testing and ops team's) life much easier (compared to the god-awful misery of trying to deploy MSI files across a windows network) but I am not sure how things should change when I use Erlang.
How would you:
do continuous integration in Erlang (is it commonly used?)
use it during a QA cycle (we often run concurrent topic branches using GIT, that get their own mini-QA cycle, so they all get deployed into a test environment)
build and distribute your code to DEV, TEST, UAT, STAGING, and PROD environments
integrate code generation phases into your build cycle (we currently use MSBUILD + T4 templates)
centralize logging for a bunch of different servers (we currently use Log4Net, MSMQ, etc)
do alerting with tools like SCOM
determine whether someone/something has misconfigured your production servers
allow production hot-fixes only after adequate QA (only by authorized personnel)
profile the performance (computation and communication) of your apps
interact with windows-based active directory servers
I guess I need to know what worked for you and why! What tools and frameworks did you use? What did you try that failed? What would you do differently if you could start over, knowing what you know now?
Whoa, what a long post. First, you should be aware that the 99.9%-and-better kool-aid is a bit dangerous to drink blind. Yes, you can get some astounding stability figures, but you need to write your program in a way that facilitates this. It does not come for free, and it does not happen by magic either. Your application must be designed in such a way that subsystems can fail and recover. OTP will help you a lot - but it still takes time to learn.
Continuous integration: Easily done. If you can call rebar or make through your build-bot you are probably set here already. Look into eunit, cover and Erlang QuickCheck (the mini variant is free for starters) - all can be run from rebar.
QA Cycle: I have not had any problems here. Again, if you use rebar you can build embedded releases, which are minimized Erlang VMs you can copy anywhere and run (they are self-contained). You can even hot-deploy fixes to such a system pretty easily by altering the code path a bit so you have an overlay of newer fixes. Your options are numerous. Git already helps you a lot here.
Environmentalization: Easily done.
Logging centralization: Look into SASL and the error_logger. You can do anything you want here.
Alerting: The system can be probed for all you need (introspection is strong in Erlang). But you might have to code a bit to hook it up to the system of your choice.
Misconfiguration: Configuration files are Erlang terms. If it can be computed, it can be done.
Security: Limit who has access. It is a people problem, not a technical one in my opinion.
Profiling: cprof, cover, eprof, fprof, instrument + a couple of distributed systems for doing the same. Random sampling is also easy (introspection is strong in Erlang).
Windows interaction: Dunno. (Bias: last time I used windows professionally was in 1998 or so).
Some personal observations:
Your largest problem might end up being that you try to cram Erlang into your existing process and it resists. It is a new environment, so new approaches will be needed in places, and you should expect to adapt and work around limitations you find along the way. The general consensus is that it can work (it is working for several big sites).
It looks like you have a well-established and strict process. How much is that process allowed to be sacrificed to give way to a new kind of thinking?
Are your programmers willing to throw out almost all of their OO knowledge? If not, you will end up with a social problem rather than a technical one. If they are like me, however, they will cheer, clap their hands and get a constant high from working with an interesting language, solving an interesting problem in a new way.
How many Erlang-experienced programmers do you have? If you have rather few, then it is better to cut your teeth on some smaller subsystems first and work towards the larger goal. Getting the full benefit of the system takes months, if not years; partial benefit can be had in weeks, though.

How to release often with Lean/Kanban? [closed]

I am quite new to Lean/Kanban, but have pored over online resources over the last few weeks and have come up with a question that I haven't found a good answer for. Lean/Kanban otherwise seems such a good fit for our company, which is already using Scrum but has reached some limitations within that methodology. I hope someone here can give me a good idea.
As I see it, one of the biggest advantages of Scrum over Waterfall is the use of sprints. By having everything ready every 14 days you get short feedback cycles and can release often. However, as I have understood from reading about Lean, there are some costs associated with this (for example, time spent in sprint planning meetings, team commitment meetings & some problems with finding something useful for everyone at the end of the sprints).
Lean/Kanban will remove these wastes, but only at the cost of not being able to release every 14 days. Or have I missed an important point? After all, in Kanban, how can you work on new development tasks and release at the same time? How do you make sure you don't ship something that is only halfway done? And how can you test it properly?
My best "solutions/ideas" so far are:
Don't release often and allow the waste associated with running out of new development tasks. Not really a solution to the question asked though.
Develop in branches and then merge into the main trunk. This means you have to maintain at least two branches internally at all times.
Use some smart automatic labelling system to automatically build only certain finished tasks and not others.
As a summary, my question is: When you use Lean/Kanban, can you release often without introducing waste? Or is release often not part of Lean/Kanban?
Additional info specific to my company:
We use Team Foundation Server and its source control, and have previously had some bad experiences with branching and merging. Could this be solved simply by bringing in some expertise in this area?
The problem you describe seems to be more a source control problem -- how to separate done features from features in progress -- than a Kanban problem. You seem to put a heavy penalty on running many branches, which is the case for source control systems not built around the idea of multiple branches. On distributed version control systems, such as Git and Mercurial, everything is a branch, and creating and working with branches is lightweight.
I assume you read this blog about Kanban vs SCRUM, and the associated practical guide?
And, in answer to your question, yes, you can release often with Kanban.
You need to understand pull systems, which is what Kanban is designed to manage.
A customer (or product owner or similar) request for a feature in the running system is what triggers the process.
The request is a signal that goes to deployment. Deployment looks for a tested item with properties that match the request. If none is there, you write the tests and check with development whether there is a development slot that can be used to implement something that fulfils the test. When development has done its work (perhaps after looking for a suitable analysis first, and so on), test does its testing, and deployment deploys.
The requests going backwards through the system are permissions to start working. As soon as the request has arrived, this triggers a lot of activity, where each activity should be completed as quickly as possible. There you have your turbo deployment.
Just like the request for a car goes to the dealer, who looks in the shop, who signals to the car factory, who signals to the suppliers.
Kanban is not about pushing requests through a system. It is about pulling functionality out of the system in exchange for a request that enters via the last step.
The team I manage uses Kanban and we release around every two weeks. If you're strict about what gets integrated into your mainline code branch (tests passing, customer approved, etc.), Kanban allows you to release whenever you want. You need to make sure that the stories moving through your system aren't co-dependent in order to do this, but on my team that's usually not a problem - a large part of our work involves maintenance, which consists of several unrelated bug fixes / features per release.
The way we handled weekly releases on a sustained engineering project that used Kanban was to implement a branching strategy. The devs worked in a sandbox branch, and made one checkin per work item. Our testers would test the work item in the sandbox; if it passed the regression tests the checkin would be migrated to our release branch. We locked the release branch from noon Monday until the release went out (usually by Wednesday, occasionally by Thursday, the drop dead date was Friday), and re-ran the regression tests for all migrated checkins as well as integration tests for the product, dropping a release once all of the tests passed.
This strategy let devs continually be working on issues without being frozen out of their branch during the release process. It also let them work on issues that took more than a week to resolve; if it wasn't checked in and tested/approved it didn't get migrated.
If I were running Kanban for a new version of a project, I'd use a similar strategy but group all related checkins as a 'feature', migrating a feature en masse to the release branch once the feature was done and then performing additional unit/integration/acceptance/regression testing in the release branch before dropping a release with that feature. Note that a key concept of Kanban is limiting work in progress, so I might restrict my team to work on one feature at a time (this would probably be several work items/user stories).
There's more to this than just source control, but your choice of TFS is going to limit you. When the Burton project was conceived back in 2004, Microsoft wasn't paying attention to Agile, much less Lean. It's going to be your weakest mechanical link for some time. Your hackles should have been raised by CodePlex's own adoption of Mercurial after it had been offered to the Microsoft community as the poster child of TFS implementation.
A more salient issue here is Work Design. It encompasses the order that you choose to implement features (work schedule), as well as prioritization and cost of delay, and the shape and size of work items.
Scrum is commonly interpreted to say that non-technical "Product Owners" can determine the work schedule based solely on their own concerns. If you follow this path, you're going to incur a lot of waste by missing opportunities to do work together that belongs together. Work that belongs together can't be determined by Product Owner wishes alone; technical and workforce (skills) opportunities must also be taken into consideration.
For work to be done in the most productive way, the work itself has to be designed that way. This means that in a Lean Product Development team, decisions are made not by a non-technical worker, but by what Toyota calls someone of "Towering Technical Competence" who is close to the product, close to the customers, and close to the team.
This role is a stark contrast to Scrum's proposition. A Chief Engineer on a Lean team is himself (or herself) the voice of the customer, and the role of Product Owner is unnecessary.
Scrum's "Product Owner" is a recognition of an under-developed role in software development organizations, but it's far from a sustainable solution that consistently avoids waste. The role of "Software Architect" is often insufficient as well, as in some developer sub-cultures, the architect has become far too removed from the work.
Your issues of continuous deployment are only partially addressed with technology and tools. Look also to organizational issues, and perhaps give some thought to Scrum's purpose as a transitional approach from waterfall rather than one that can serve your organization indefinitely.
For source control I'd highly recommend Perforce. It makes branching and integrating changes from other branches relatively straightforward, and provides the best interface for source control that I've seen so far.
Continuous integration helps as well - i.e. lots of small, more than daily commits, instead of huge and potentially challenging merges. Tools like CruiseControl can help highlight when the source gets broken by a bad commit. Also, if everyone makes many small changes then conflicting changes will be rare.
I'd also advise not trying to follow things like Lean, Scrum, Kanban & co. too closely. Just solve the problems yourself, looking to these ideas for guidance rather than instruction. The specifics of your problems will more than likely require some flexibility for the best management.
How we do it:
We have a pipeline with the following stages
Backlog
TODO
In progress (Develop and quick testing)
Code review
Test (Rigorous testing)
Integration test and general acceptance tests
Deploy
Each story is developed as a branch based on the latest version to leave the Deploy stage. They are then integrated as part of preparing the integration test.
QA pulls from the code review stage and can prepare releases at any pace they want. I think we have a pace of roughly one release every week.
By removing the "master" branch from git and not doing any merge before the code review stage we've made sure that there is no possibility to "sneak" code into releases. Which, as an interesting by-product, has forced us to visualize a lot of the work that used to be hidden.
