Where does Specification by Example complement or replace traditional requirements documentation?

I'm trying to understand where SBE complements or replaces traditional requirements documentation. The diagram "Levels of Requirements" shows three levels of traditional software requirements.
Which of the items below (from the diagram) does SBE replace and which ones does it complement:
Vision and Scope Document
Business Requirements
Use Case Document
User Requirements
Business Rules
Software Requirements Specification
System Requirements
Functional Requirements
Quality Attributes
External Interfaces
Constraints
My naive understanding of SBE would say that the examples are just an alternative form of the Software Requirements Specification. Is this correct?

BDD and SBE are normally used by Agile teams, who don't focus as much on documentation as traditional software development teams do.
BDD is the art of using examples in conversation to illustrate behaviour. SBE then uses those examples as a way of specifying the behaviour you decide to address (I always think of it as a subset of BDD, since talking through examples often ends up eliminating scope, discovering uncertainty or finding different options, none of which end up as specifications).
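To make that concrete, here is a minimal sketch of how one such conversational example might be captured as an executable specification. It assumes Cucumber-JVM and JUnit 5, and the refund rule is a hypothetical domain invented purely for illustration; the answer itself doesn't prescribe any tooling.

// The example as it might be told in conversation, in Gherkin form:
//   Scenario: Refund within the returns window
//     Given a customer bought a jumper for 50 euros
//     When the customer returns it after 10 days
//     Then they are refunded 50 euros

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class RefundSteps {

    // Hypothetical domain object, defined inline to keep the sketch self-contained.
    static class Order {
        final int priceInEuros;
        Order(int priceInEuros) { this.priceInEuros = priceInEuros; }
        int refundAfter(int days) {
            return days <= 30 ? priceInEuros : 0; // full refund within 30 days
        }
    }

    private Order order;
    private int refund;

    @Given("a customer bought a jumper for {int} euros")
    public void aCustomerBoughtAJumperFor(int price) {
        order = new Order(price);
    }

    @When("the customer returns it after {int} days")
    public void theCustomerReturnsItAfter(int days) {
        refund = order.refundAfter(days);
    }

    @Then("they are refunded {int} euros")
    public void theyAreRefunded(int expected) {
        assertEquals(expected, refund);
    }
}

The point is not the tooling; it is that the example stays in the customer's language while remaining precise enough to execute.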
There are a couple of things that are hard to do with BDD. One of them is anything which isn't discrete in nature, or which needs to be true throughout the lifetime of the system: non-functionals, quality attributes, constraints, etc. It's hard to talk through examples of these. These continuous aspects of requirements lend themselves better to monitoring, and monitoring itself is discrete (an alert either fires or it doesn't), so BDD can even be used to help manage these.
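For instance, a continuous quality attribute like "the home page responds quickly" can be restated as a discrete, checkable example. Here is a hedged sketch as a plain JUnit 5 test; the 200 ms budget and the pingHomePage() stub are assumptions made up for illustration:

import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class ResponseTimeCheck {

    @Test
    void homePageRespondsWithinBudget() {
        long start = System.nanoTime();
        pingHomePage();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // The 200 ms budget is a made-up number; a real team would agree on one.
        assertTrue(elapsedMs < 200, "home page took " + elapsedMs + " ms");
    }

    // Stand-in for a real HTTP call to the system under test.
    private void pingHomePage() {
    }
}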
Since an initial vision is usually created to help the company make money, save money, or protect existing revenue (stopping customers going elsewhere, for instance), you can even come up with examples of how the project will do this. In fact, if you can't, the project is likely to fail anyway. So BDD / SBE can also be used to help complement an initial vision and scope.
Therefore, BDD / SBE can complement all of these documents, and in Agile teams, the documents themselves are usually replaced by conversations about the requirements and rules (illustrated by examples), story cards to represent placeholders for those conversations, and perhaps some lightweight capture of those conversations on a Wiki.
It is unlikely that any Agile team captures all of their examples up-front, as this leads to excessive investment in the requirements and tends to turn the project into a traditional Waterfall/SDLC one instead.
This blog post I wrote about BDD in the Large may also be of interest.

Related

How to align the BPMN models with the Technology Architecture?

I'm stuck on how to proceed and need some new ideas to align these BPMN models, which I have drawn for Customer Relationship Management (CRM) and Human Resources (HR).
As far as the BPMN model is concerned, it's mainly used for Business Architecture (BA). For Technical Architecture (TA) I could possibly use the Rational Unified Process (RUP), but when I researched it I could only find IBM Rational Rose software, which is not free...
My questions:
Are there open-source RUP tools which I can use? I looked up OpenUP but could not make it work (which is a different issue).
Is this the right approach: BPMN for BA and RUP for TA?
The scope section of the BPMN specification (section 1, "Scope") states:
The primary goal of BPMN is to provide a notation that is readily understandable by all business users, from the business analysts that create the initial drafts of the processes, to the technical developers responsible for implementing the technology that will perform those processes, and finally, to the business people who will manage and monitor those processes. Thus, BPMN creates a standardized bridge for the gap between the business process design and process implementation.
There are Business Process Management (BPM) tools which provide both process modeling and process execution conformance, effectively making the models executable (at least to a certain depth).
In the free/open-source world you can find jBPM, Activiti, etc.
I have tried out jBPM; it is fairly mature and compliant with the standard notation. It also supports modeling, execution and operational functionality.
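As a sketch of what "executable model" means in practice: with jBPM 6.x, a BPMN file on the classpath can be started in a few lines through the KIE API (the process id below is a placeholder, not from any real project):

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class ProcessRunner {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        // Builds a container from the BPMN/kmodule resources found on the classpath.
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession();
        // "com.sample.hello" is a placeholder for the id declared in the BPMN model.
        session.startProcess("com.sample.hello");
        session.dispose();
    }
}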

Are there concrete tools or methods for visualizing the structure of a program/project?

I'm working on some beginner programming tutorials and am finding it difficult to keep track of the many modules and functions involved, their purpose (abstractly), and their interrelationships. I'd like to see everything from a bird's-eye view to better envision how I can more elegantly reorganize and refactor the code.
Is there a specialized tool (other than a whiteboard and marker) that professionals use to manage this complexity? Are programmers expected to just rely on mental models? Do professionals use flowchart software like Lucidchart for this kind of thing?
Structure Charts have been around since the mid-70s. Data Flow Diagrams are useful too for structured (non-OO) work, if you only draw them down to one level above the leaves. If doing non-OO, look at the Yourdon Method. Also look at Essential Systems Analysis as the basis for event partitioning. There are various CASE tools still in use.
UML has been around for many years and can work well if you are doing OO, provided you don't go "diagram-happy".
There are ERDs for data relationships.
Graphical modeling tools have never penetrated more than about 18% of the general programmer population. I think that is partly due to a lack of proper training for developers, a lack of training for managers in running projects using models, and over-promise/under-delivery by CASE tool vendors. I started using graphical tools in college, beginning with structure charts. I am always amazed at how "professional developers" can write large programs with no visual model of the interrelationships and dependencies.
How do they remember all that? How do they bring new people up to speed when they join the project?
Those of us who ask the questions you ask seem to be in a minority. I don't think it's a "tool-thing." I think some developers want that "higher level of abstraction" and visualization, and some don't.
There's always UML, although I am not a huge fan.
You also didn't tag the question with the language you are talking about.
For .Net, Visual Studio can actually auto-generate code from such diagrams.
You can also check this similar post on Quora.

A production-ready, real-time recommendation engine that's easy to set up

I want to store a large number of data points for user actions, such as likes, tags, etc. (I have plans for both e-commerce and document management).
With the data points, I want to support functions such as
"users who loved X loved Y,Z" recommendations
"fetch more stuff similar to X,Y" clustering.
By production-ready and real-time, I mean that I can enter data points and make queries at the same time; the server will take care of answering queries and updating scores by itself.
I searched around the interwebs, and the solutions that come up fall into one of two camps:
Data-mining libraries that are mostly academic-oriented and are meant for large batch operations, not for heavy real-time queries
Hadoop/Mahout, which is production-ready and supports real-time updates and queries, but has a steep learning curve and is tough to administer.
For recommenders, Mahout has a non-distributed recommender implementation that does not use Hadoop. In fact, this is the only part that is real-time; the Hadoop-based parts are not.
I think there is little learning curve to it; see here and here for a pretty complete writeup.
Mahout in Action chapters 2-5 cover this quite well too.
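To give a feel for the size of that learning curve, here is a minimal user-based recommender along the lines of the Mahout in Action examples; the ratings file name, the neighbourhood size of 10 and the user id are illustrative assumptions, not recommended values:

import java.io.File;
import java.util.List;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class RecommenderSketch {
    public static void main(String[] args) throws Exception {
        // ratings.csv holds lines of userID,itemID,preference — e.g. "1,101,4.5".
        DataModel model = new FileDataModel(new File("ratings.csv"));
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        // Consider each user's 10 nearest neighbours when predicting preferences.
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
        Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);
        // "users who loved X loved Y, Z": the top 2 items for user 1.
        List<RecommendedItem> items = recommender.recommend(1, 2);
        for (RecommendedItem item : items) {
            System.out.println(item.getItemID() + " : " + item.getValue());
        }
    }
}

The Taste interfaces also expose a refresh(...) hook (Refreshable), so the model can pick up new preferences without a restart, which is the real-time part mentioned above.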
Please understand that for useful recommendations, the various parameters of such a system must be carefully fine-tuned. The out-of-the-box functionality many systems offer (Oracle Data Mining, Microsoft data mining extensions, etc.) covers just the core functionality.
So in the end, you will not get around the "steep learning curve", I guess. That is why you need experts for data mining. If there were a point-and-click solution, it would already be integrated everywhere.
Example "similar items". I laughed hard, when Amazon once recommended me to buy two products: Debian Linux Administrators Handbook and ... Debian Linux Admininstrators Handbook WITH CD.
I hope you get the key point of this example: to a plain algorithm, the two books appear "similar", and thus a sensible combination. To a human, it it pointless to buy the same book twice. You need to teach such rules to any recommendation system, as they cannot be trivially learned from the data. There will always be good results and useless results, and you need to tune and parameterize the system carefully.

What are the alternatives to the Waterfall model?

Can you please suggest a methodology that alleviates the disadvantages of the Waterfall model?
The problem with Waterfall is that it consists of monolithic stages, each building on the previous stage. So the code is developed in one chunk after the entire system has been designed, which in turn happened after all the requirements have been gathered and signed off.
This is a problem because any change has to be ratified by a complex procedure and rippled through all the stages. But the lesson of history is: change happens. The requirements are always incomplete, mis-specified or simply out of date by the time we get to coding. Too often, design and build proceed on the basis of assumptions which are nullified when the system gets to UAT. This leads to frantic rework and slippage.
The truth is that not many customers are good at the sort of abstract thinking required to envisage a working software system, and too many IT professionals lack the experience necessary to understand business logic. Waterfall refuses to accept these truths.
The only honest requirement specification is "I'll know it when I see it". So it is crucial to get working software in front of real users as soon as possible. Any methodology which focuses on delivering working software incrementally in short iterations will "alleviate the disadvantages of waterfall model".
Originally that was RAD or DSDM. Then XP took up the banner. Now there is Agile and related things like Scrum and Kanban.
So why do people persist with the Waterfall method?
There is a common perception that Agile is just a cover for cowboy hackers to ditch all the boring process stuff and get on with what they enjoy most: writing code. The branding of "Extreme Programming" certainly encourages this thought, and, let's be honest, it is not an unfounded allegation. That is, some coders pretend to be agile as an excuse not to plan, design or document. This does not reflect the actual practice of Agile, which requires just as much rigour as any other methodology.
Also Agile requires a much greater commitment of time from the customer's staff, which many organizations are loath to accept. Also the people footing the bill may be unwilling to empower their junior staff to make decisions. There is an important distinction between Customer and User.
When it comes to outsourcing, the Waterfall model provides an easy framework for matching deliverables to staged payments. Indeed, the contractual aspect may be stronger than that: in the EU, Waterfall is mandated for all projects valued at EUR 100m or more.
Finally, there are projects where Waterfall works well. These projects have knowledge domains which are stable and well-understood by both the customers and the developers.
A last word: despite its failings, Waterfall has delivered many projects successfully. This is because hard work, aptitude and integrity are more important than methodology.
The Waterfall model was documented in 1970 by Dr Winston Royce in a paper titled 'Managing the Development of Large Software Systems', which outlined his ideas on sequential development. His idea was that software could be produced in a similar fashion to an automobile, where the vehicle is pieced together in sequential/linear phases.
This linear approach doesn't really allow for changes in a piece of software once development begins. There is no tight relationship with the end user/client, so it's harder to outline possible problem areas.
It's worth noting that some phases of the Waterfall model allow for 'splashback', whereby there is enough time in the development period to go back and make small changes. However, time constraints, the amount of work involved and budgets don't really allow for much change, if any, under this model.
The Waterfall model is old, and as time goes by software paradigms change. Object-oriented programming is popular now; back then it was barely alive. Through use of the Waterfall model its flaws have been spotted, and this has led to the alternative development methodologies.
OK, so now for the alternatives. The incremental model is described by Alistair Cockburn (2008) as a staging and scheduling strategy in which the various parts are developed at different times or rates and integrated upon completion of each specific part.
Basically incremental looks a lot like this:
Analysis->Design->Code->Test
Analysis->Design->Code->Test
Analysis->Design->Code->Test
Benefits include a flexible lifecycle that allows for change from the get-go.
Working software, or at least parts of it, is generated quickly and early on. The code produced is easier to test and manage thanks to the small increments of progress. Not all of the requirements of the system are gathered up front, just an outline; this allows a quick start, although it can be a disadvantage in some systems, as things like the supporting system architecture may be missed.
Iterative, on the other hand, allows parts of the system to be reworked and revised to improve it, and time is set aside for this. The iterative approach does not start with a full specification of requirements; development begins by specifying and implementing just part of the software, which is then reviewed in order to identify further requirements. This is more of a top-down approach. A disadvantage of this methodology is making sure all the iterations are compatible. As each new iteration is approved, developers may employ a technique known as backwards engineering: a systematic review-and-check procedure to make sure each new iteration is compatible with previous ones. A major benefit of the constant iterations is that the client is kept in the loop, and the final product should meet the requirements.
[Iterative approach diagram]
Other methodologies include prototyping, both evolutionary and throwaway; these are also deemed more of a top-down approach. Both processes are borrowed from engineering, where it is common to construct scale models of objects to be built. Building models allows the engineer to test certain aspects of the design. The software development prototyping methodology provides the same ideology. Prototyping is not seen as a standalone, complete development methodology, but rather as an approach to handling selected portions of a larger, more traditional development methodology.
Throwaway Prototyping - Throwaway prototyping does not preserve the prototype that has been developed; there is never any intention to convert the prototype into a working system. Instead, the prototype is developed quickly to demonstrate some aspect of a system design that is unclear. It can also be developed to help users or clients decide between different features or interface characteristics. Once any problems or uncertainties have been addressed, the prototype can be 'thrown away' and the principles learned used in the design and documentation of the actual product.
Evolutionary Prototyping - In evolutionary prototyping you begin by modeling parts of the target system, and if the prototyping process is successful, you evolve the rest of the system from those parts. One key aspect of this approach is that the prototype becomes the actual production system. This process allows difficult parts of the system to be modeled successfully in prototypes and dealt with early in a project.
Other areas to look into include Agile: Scrum, Extreme Programming, pair programming, etc.
I've tried to keep it short, but people write books on this sort of stuff and there is so much to discuss.
Might be worth having a look at:
Incremental and Iterative
The alternative to the waterfall method is "doing it the correct way".
Waterfall seems to make sense if you are on a factory floor assembly line. But I've never seen it work as part of the design process... and software development is ALL a design process. So the Waterfall method never really works, in the sense that it doesn't help facilitate the creation of a high-quality product; it focuses on process instead. Process can be great, but what's the point if the product it produces is second-rate?
Kanban and Scrum are two of the most commonly used alternatives to Waterfall. I tried to give a good overview and comparison of the different SDLC approaches.
Waterfall relies heavily on massive monolithic phases as mentioned by APC. This is a huge weak point because trying to determine the end product from the start is a fruitless endeavor.
Kanban is slightly cowboy, but I find that if you couple it with standups it certainly still has its place.
Scrum is great for putting pressure on the team and getting ownership of tickets. I've found most places have been going with this one, but its downfall is that some people go overboard with meetings for everything: sprint planning meetings, sprint kickoff meetings, daily standup meetings that last an hour with 20+ people present, demo meetings, and then finally the post-mortem.
Remember that agile is only as good as you make it and you can easily sink any methodology if you go wild with unrestrained meetings which aren't adding value. Keep it as lean as you can without it being chaotic.
Off the top of my head, I can think of a few ways to mitigate the shortcomings of the Waterfall model:
Have the coder concentrate on automating the process itself. Automate the transitions between one step and another, so that changes will flow more or less automatically.
Make the process more bidirectional. One principal characteristic in the waterfall model is that changes flow from top to bottom. This is a unidirectional process, and that is part of the problem.
Another thing which would help (as someone mentioned in an earlier answer) is for the developer to gain a better understanding of the business logic involved and of what the customer wants, and for the customer to gain knowledge about the characteristics of the development process.
Here are some links about Waterfall model:
http://www.cs.odu.edu/~zeil/cs451/Lectures/01overview/process2/process2_htsu2.html
http://www.buzzle.com/editorials/3-13-2005-67039.asp

Why do Ruby developers appear not to use UML?

I always hear about UML being used in Java projects but never in Ruby ones. Is this just a cultural difference or is there less of a need for modeling in Ruby development because it's part of a more 'agile' culture?
Obviously you can't generalize this to everybody, but programmers in languages like Ruby and Python tend to be less drawn to large design documents and UML because they view their language of choice as being concise and expressive enough that it isn't always necessary. There's a feeling of, "I could spend time and plot all this out in UML...or I could just write some Python that actually implements the design and expresses it in a language I like to read and lots of people can read." Java programs tend to feel "heavier" than their Ruby or Python counterparts — it's part of the design of the language.
Note that I'm not saying this is true of your project or even that it's true at all as a whole — this is just what I've observed about these programming cultures.
Call me crazy but UML isn't for me regardless of the application stack.
(Note, tongue sometimes placed in cheek.)
Probably one of the biggest cultural differences is that Java is often used in projects with large numbers of programmers, led by PHBs, where the high-level system design is done by people with the title "software architect". On these sort of projects the people in the "software architect" role will often generate a large amount of documentation (including UML relationship and state diagrams) during the initial planning phase of the project. These and other documentation artifacts are then expected to be implemented by the hordes of non-architect-programmers.
Ruby, on the other hand, is the new hotness and is therefore more often chosen by people who want to program in it. Since the "architect" is the implementer, there is less need for complex upfront documentation. The implementers jot down a few notes on general design guidelines and then sit down to program, rather than designing upfront for others to program.
This isn't to say that you won't find a few scattered UML diagrams here or there in projects built in Ruby or other snazzy languages -- such as when someone is trying to describe a complex concept -- but such things just aren't needed as much if you are doing the work yourself.
One of the obvious reasons is that well-designed Ruby programs rely heavily on mixins, which AFAIK simply cannot be modeled in UML at all. I know that Schärli et al. developed an extension to UML that can represent Traits, which, given the close relationship between Traits and mixins, could probably be adapted or simply reused for representing mixins, but then it's not UML anymore.
This is a comment on the answer about mixins. Mixins can actually be modelled in UML quite easily using several different methods; typically one uses multiple inheritance, interfaces or stereotypes (or some combination of these). Choosing the method depends on the project and personal taste. Let us not forget that the main reason for modeling is to conquer complexity, better understand reality and communicate more effectively, so each model needs to fit a particular problem and audience. Models are, by definition, pragmatic, and so must be the process of creating them.
Let us not forget that UML is extensible using profiles and stereotypes. Such extended UML is still valid UML.
In general, UML is more expressive and less restrictive than programming languages, so if something can be written down in some programming language, it can also be modeled in UML.
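As an illustrative sketch (my own, not from the answers above) of the "interfaces" option: in Java terms a mixin can be approximated with interfaces and default methods, and the corresponding UML is simply two realized interfaces, optionally tagged with a «mixin» stereotype. The Walks/Swims/Duck names are invented for the example.

interface Walks {
    default String walk() { return "walking"; }
}

interface Swims {
    default String swim() { return "swimming"; }
}

// Duck "mixes in" both behaviours; in a UML class diagram this is two
// realization arrows from Duck to the interfaces, each stereotyped «mixin».
class Duck implements Walks, Swims { }

public class MixinDemo {
    public static void main(String[] args) {
        Duck d = new Duck();
        System.out.println(d.walk() + ", " + d.swim()); // walking, swimming
    }
}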
