Test case repository for BDD [closed] - jenkins

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
We are transitioning to BDD. We currently use SpecFlow and Visual Studio to run our automated tests via Jenkins, and we have over 1000 tests in Quality Centre written in a more traditional fashion; the regression tests among them will be converted to BDD in time.
I'm looking for a repository (similar to the test plan in Quality Centre) to house all our test cases/feature files. It must be compatible with SpecFlow and Jira. What do people use as a manageable test case repository for their tests?
Cheers.

I'm not 100% sure I understand your issue, not being familiar with some of the tools you talk about, but when you have executable specifications your test cases are in the feature files which are stored in the code base. This is part of the point, that your test cases are the things that get executed, so they are always up to date.
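In case "executable specification" sounds abstract: the scenario text lives in the repository alongside the code and runs against it. Here's a toy sketch in Python rather than SpecFlow/C# (the step patterns and the account example are invented purely for illustration):

```python
import re

# A Gherkin-style scenario, stored as plain text in the codebase.
SCENARIO = """\
Given a balance of 100
When 30 is withdrawn
Then the balance is 70
"""

steps = []  # registry of (pattern, handler) pairs


def step(pattern):
    """Register a step definition, roughly the way SpecFlow bindings do."""
    def decorator(fn):
        steps.append((re.compile(pattern), fn))
        return fn
    return decorator


class Account:
    def __init__(self):
        self.balance = 0


@step(r"Given a balance of (\d+)")
def given_balance(ctx, amount):
    ctx.balance = int(amount)


@step(r"When (\d+) is withdrawn")
def when_withdrawn(ctx, amount):
    ctx.balance -= int(amount)


@step(r"Then the balance is (\d+)")
def then_balance(ctx, expected):
    assert ctx.balance == int(expected)


def run(scenario):
    """Match each scenario line to a step definition and execute it."""
    ctx = Account()
    for line in scenario.strip().splitlines():
        for pattern, fn in steps:
            m = pattern.fullmatch(line.strip())
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise ValueError(f"No step matches: {line!r}")
    return ctx


run(SCENARIO)  # raises if the specification no longer holds
```

The point is that the prose scenario and the assertions are the same artifact, so the "test plan" can never drift out of date the way a separate document can.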

What @Sam-Holder said is good; I'm adding to it because I'm familiar with the issue and the tools you're talking about.
You're probably used to the idea that Quality Centre contains a bunch of test scripts, some of which pass and some of which don't (yet).
When you're doing BDD with automated scenarios, they pretty much always pass, all the time. Half the things that QC does simply aren't needed with modern Agile processes.
A pretty common practice is to put the scenarios into the Jira story until they're automated. They're transient. Nobody ever looks at Jira once the story's done. The codebase is the single repository of truth, and anything that lives in Jira gets ignored.
The automated scenarios are checked into the same codebase as the code. If the scenarios go red (fail), the team makes them green ASAP. They provide living documentation for the code. See if you can find someone to show you what Jenkins looks like in action and you'll get a better picture. Commonly the Jira number is added to the check-in comment to provide some level of traceability.
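That check-in convention is easy to enforce or report on mechanically. A sketch in Python (the PROJ project key is invented; real Jira keys follow the same uppercase-key-dash-number shape):

```python
import re

# Jira issue keys look like PROJECT-123: an uppercase project key,
# a hyphen, and a number. The "PROJ" key below is hypothetical.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")


def jira_keys(commit_message):
    """Return the Jira issue keys mentioned in a commit message."""
    return JIRA_KEY.findall(commit_message)


print(jira_keys("PROJ-42: make withdrawal scenario green"))  # ['PROJ-42']
```

A server-side hook could reject commits whose messages yield an empty list, which gives you the story-to-commit traceability without any extra tooling.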
I think it's good practice to keep any manual test cases checked in alongside the automated ones (though please do question why they're not automated; if you automated them in QC you can usually do it with SpecFlow). This helps the test cases (scenarios) to provide living documentation for the code. In fact, getting rid of the word "test" was part of the reason why BDD came about, because BDD's really about analysis and exploration through conversation. It provides tests as a nice by-product.
To answer the question, the most commonly used tool at the moment is Git (at least for newcomers). It's version control that the devs are using. SVN / Mercurial are also OK flavours of version control. Get the devs to help you.
If you're still working in a silo and not talking to the devs, don't use BDD tools like SpecFlow - you'll find them harder to keep up-to-date, because your steps will probably be too detailed and English is trickier to refactor than code.
Better still, use the BDD tools and go talk to the devs and the BAs / SMEs / Product Owner who understands the problem. Get them to help you write the scenarios. When you start having the conversations, even over legacy code, you'll start understanding why BDD works.
Here's a blog post I wrote on using BDD with legacy systems that might also be helpful to you. Here's a blog post on the BDD lifecycle: exploration, specification and test by example. And here's one to get you started on how to derive that "Given, When, Then" syntax using real conversations.

Related

Where to start using BDD on existing system?

We have been using waterfall to develop and enhance the system in our company and recently we are moving into Agile and the management is interested in BDD.
My understanding is that BDD is a tool for business and development team to refine the stories and at the same time these scenarios will become test cases for the feature. This sounds perfect to me but since we already have the features available, how can BDD work in this type of situation?
Should we just write up the stories and scenarios per our knowledge of the feature?
My only concern with the above is the coverage of the scenarios. Or should we not worry, and just keep adding new scenarios and testing them whenever the team comes up with new ones?
Prompted by yet another person who mailed me with the same question today, I've written up an answer for this.
Short version, you can use BDD to help you understand what the system actually does, and why, but you'll be clarifying the requirements rather than exploring them.
Additionally, you asked, "Should we just write up the stories and scenarios per our knowledge of the feature?"
I'd speak with any stakeholders you can find, ask them what the system should do, then look to see if it actually does it. Systems designed before adopting a practice of conversations with examples often don't do what the originators intended. You can then separate the behaviour you've actually got from the behaviour you want, creating a new backlog from the latter.
I advise grabbing someone who's good at asking questions and spotting missing scenarios to have these discussions with (usually a tester). Because you already have some knowledge of the system, it's likely you'll be very good at describing what you think it does, while missing gaps.
If you don't have any automated testing yet and you want to start using BDD, I'd suggest starting by writing scenarios for some of your manual test scripts; I find it a good way to practise writing in the BDD style. Then, as Lunivore said, you should work with the business people and QA to find out more about the behaviour of your system, and preferably write the scenarios with them.

Tool suggestions for specification by example where analysts - not developers - write the tests?

We are looking to initiate a BDD-style approach, inspired by Gojko Adzic's Specification by Example. The implementation is in Java and the devs are already writing JUnit tests.
Key requirement is that specifications (acceptance tests) can be written, read and maintained by non-developers. The project will run as an agile team - so it's fine if devs have to instrument the specs. However, I don't want developers, testers or domain experts having to read or write something that looks like code.
So far I've looked at FitNesse, Concordion and various others (e.g. Spock). I've rejected Spock and similar tools because they target developers as the primary audience. FitNesse seems to meet most of the requirements.
Concordion is probably the current favourite, however: its specs look cleaner and simpler.
So my question (actually three):
Any suggestions for other tools I should look at?
Has anyone been successful using Concordion (or another tool) in this way?
Is Concordion still actively developed/supported? It's difficult to tell from the website, and most related SO questions are several years old.
Thanks.
I've worked with a number of teams implementing Specification by Example with Concordion. We train our whole team up to write Concordion specifications in HTML. Only a small subset of HTML is required, so we can train a newbie in about 30 minutes. Typically we have the testers writing the HTML specification, with the BA or Scrum Master sometimes writing them.
We've used Eclipse (Web Page Editor) for editing the HTML. This works well, except that Concordion requires valid XHTML, and Eclipse does not allow HTML to be validated as XHTML. This mostly shows with <br> tags being used rather than <br/>. We cover this off in training. We also train the whole team in the use of source control. By using Eclipse, we have a single user interface for editing and source control. We also find that having the team using the same IDE is a step on the journey to a cross-functional team.
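A cheap guard here is to check each spec for XML well-formedness before commit; an unclosed <br> will fail the parse while <br/> passes. A minimal sketch using Python's standard library (this is not part of Concordion itself, just an illustrative pre-commit check):

```python
import xml.etree.ElementTree as ET


def is_valid_xhtml(text):
    """True if the document parses as well-formed XML.

    An unclosed <br> (rather than <br/>) will fail here, which is
    exactly the mistake Concordion specs most often contain.
    """
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False


print(is_valid_xhtml("<html><body>ok<br/></body></html>"))   # True
print(is_valid_xhtml("<html><body>bad<br></body></html>"))   # False
```

Wiring something like this into a pre-commit hook means the whole team gets immediate feedback instead of a broken Concordion run later.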
I know of another team where the BA is writing the specifications using a Mac-based HTML editor.
Concordion is actively maintained, with rapid responses on the mailing list (Yahoo) and the issues list. The Concordion codebase is stable. The active development over the last year or so has focussed on the extension mechanism, allowing users to add commands and listeners (eg. for capturing screenshots on test failure).
Also take a look at Cucumber and JBehave, both of which allow specifications to be written in plain text.
If you choose to use FitNesse, it may also be worth looking at Slim, which sits behind FitNesse in place of Fit. It provides slightly different table formats to Fit, and I've found it suits BDD much better.
Just to update the topic: you can also consider Jnario.

Looking for open source Delphi project with good unit test coverage [closed]

Closed 8 years ago.
For educational reasons I am looking for an open source Delphi project with good unit test coverage. Projects which are under Test Driven Development are preferred. The size of the project doesn't matter.
Subject should be business or game development, but no web projects.
Any hints?
Edit:
Thanks for the suggestions, but I prefer projects where the requests come from "normal" users rather than from programmers, such as a CRM or ERP system. For example, a task planner or a jump-and-run game. Has anybody seen something like this?
Take a look at DeHL. It makes heavy use of generics, and the author has an extensive test suite to make sure that the collections will work properly and not break the compiler.
We tried to implement test driven development for all root classes of our ORM Framework.
All low-level (numerical or UTF-8 text conversion) and high-level features (RTTI, ORM, JSON, database, client/server) were tested before their implementation.
We even made some basic regression tests about the encryption or pdf generation part.
And the tests were then inserted into the main documentation of a medical project (to follow the IEC 62304 requirements). Every release triggered more than 1,000,000 individual tests. Then manual tests (human-driven on real hardware with working robotic workstations) were performed. Those high-level tests were written using the same documentation tool, which generated a cross-reference matrix to track that all tests passed before any release.
Perhaps not a perfect match, but at least a real use case, in a real world Delphi application, developed for the medical area (and if you know about FDA regulation, you know what I mean). :)
See this article in our forum.
DelphiWebScript boasts about its coverage: http://code.google.com/p/dwscript/
I recommend DeHL.
From its introduction page:
DeHL is an abbreviation that stands for Delphi Helper Library. DeHL is a library which makes use of the newly introduced features in Delphi 2009; features like Generics and Anonymous Methods. It tries to fill in the gaps in the Delphi RTL by providing what most developers already have in other development platforms.
IIRC tiOPF has a large test suite.
Free Pascal may have an even larger one, but it depends on whether it can be regarded as Delphi enough for your purposes (3860 tests that pretty much pinpoint "Delphi, the language"). They also have their own unit testing framework, fpcunit.
I can recommend JediCodeFormat & DelphiCodeToDoc.
Both are open source projects with many automated tests built with the DUnit framework.
I think InstantObjects is a good place to study. It is one of the best OPFs in Delphi, and it also contains unit tests built with DUnit.
OmniThreadLibrary also has a lot of tests
http://otl.17slon.com/index.htm

Considering Porting App from .NET to Erlang - need advice [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 5 years ago.
I am looking at Erlang for a future version of a distributed soft-real-time hosted web-based telephony app (i.e. Erlang looks like absolutely the perfect choice for this kind of app). I come from a .NET background and the current version of this app uses a combination of C#, WCF and JQuery to deliver the service. I now need Erlang to allow me to add extra 9s to my up-time and to allow me to get more bang for my server bucks.
Previously I'd set up a development process here combining VS.NET, Git, TeamCity and auto-deployment of MSI files to the various environments we maintain. It's not perfect, but we're all now pretty comfortable with it. I'm wondering whether a process like ours is even appropriate for such a radically different technology stack (LYME)?
I'm confident that all of the programming challenges we previously solved using .NET can be better solved in less code with Erlang, so I'm completely sold on the language choice. What I don't yet understand from reading the Pragmatic and O'Reilly books on Erlang, is how I should adapt my software engineering and application life-cycle management (ALM) processes to suit the new platform. I see that in-place code updates could make my (and my testing and ops team's) life much easier (compared to the god-awful misery of trying to deploy MSI files across a windows network) but I am not sure how things should change when I use Erlang.
How would you:
do continuous integration in Erlang (is it commonly used?)
use it during a QA cycle (we often run concurrent topic branches using GIT, that get their own mini-QA cycle, so they all get deployed into a test environment)
build and distribute your code to DEV, TEST, UAT, STAGING, and PROD environments
integrate code generation phases into your build cycle (we currently use MSBUILD + T4 templates)
centralize logging for a bunch of different servers (we currently use Log4Net, MSMQ, etc)
do alerting with tools like SCOM
determine whether someone/something has misconfigured your production servers
allow production hot-fixes only after adequate QA (only by authorized personnel)
profile the performance (computation and communication) of your apps
interact with windows-based active directory servers
I guess I need to know what worked for you and why! What tools and frameworks did you use? What did you try that failed? What would you do differently if you could start over, knowing what you know now?
Whoa, what a long post. First, you should be aware that the 99.9%-and-better kool-aid is a bit dangerous to drink blind. Yes, you can get some astounding stability figures, but you need to write your program in a way that facilitates this. It does not come for free, and it does not happen by magic either. Your application must be designed so that subsystems can recover when other subsystems fail. OTP will help you a lot - but it still takes time to learn.
Continuous integration: Easily done. If you can call rebar or make through your build-bot you are probably set here already. Look into eunit, cover and Erlang QuickCheck (the mini variant is free for starters) - all can be run from rebar.
QA Cycle: I have not had any problems here. Again, if you're using rebar you can build embedded releases, which are minimized Erlang VMs you can copy anywhere and run (they are self-contained). You can even hot-deploy fixes to such a system pretty easily by altering the code path a bit, so you have an overlay of newer fixes. Your options are numerous. Git already helps you here a lot.
Environmentalization: Easily done.
Logging centralization: Look into SASL and the error_logger. You can do anything you want here.
Alerting: The system can be probed for all you need (introspection is strong in Erlang). But you might have to code a bit to hook it up to the system of your choice.
Misconfiguration: Configuration files are Erlang terms. If it can be computed, it can be done.
Security: Limit who has access. It is a people problem, not a technical one in my opinion.
Profiling: cprof, cover, eprof, fprof, instrument + a couple of distributed systems for doing the same. Random sampling is also easy (introspection is strong in Erlang).
Windows interaction: Dunno. (Bias: last time I used windows professionally was in 1998 or so).
Some personal observations:
Your largest problem might end up being that you try to cram Erlang into your existing process and it resists. It is a new environment, so new approaches will be needed in places, and you should expect to adapt to and work around limitations you find along the way. The general consensus is that it can work (it is working for several big sites).
It looks like you have a well-established and strict process. How much is that process allowed to be sacrificed to give way to a new kind of thinking?
Are your programmers willing to throw out almost all of their OO knowledge? If not, you will end up with a social problem rather than a technical one. If they are like me, however, they will cheer, clap their hands and get a constant high from working with an interesting language, solving an interesting problem in a new way.
How many Erlang-experienced programmers do you have? If you have rather few, then better cut your teeth on some smaller subsystems first and then work towards the larger goal. Getting the full benefit of the system takes months if not years. Getting partial benefit can be had in weeks though.

What is "Continuous Implementation"? [closed]

Closed 5 years ago.
Is "Continuous Implementation" the name of a software development methodology? If so, what is it exactly?
Do you have experience using it?
Note that I know what continuous integration is, but not continuous implementation.
Background: today I learned (second hand) of a company that uses "Continuous Implementation" in the context of their software development. Is it formally defined, or is it part of some agile software development methodology?
The best I could find was this paper in the European Journal of Information Systems:
Agility Through Scenario Development And Continuous Implementation
"... a business and IS/IT initiative at Volvo ... development and implementation of an agile aftermarket supply chain. ... to create a platform, Web services, and a Web portal for selling spare parts over the Internet."
Try searching for "Continuous Integration". It's a Good Thing(TM), in my opinion. "Continuous Implementation" would only be a good development methodology in the Dilbert universe. ;)
Edit:
The original question was simply asking what "continuous implementation" is. Since this site is StackOverflow, not EconomicsOverflow or PolymerEngineeringOverflow, the correct answer is "nothing."
The question was edited afterward to expand the scope, but that doesn't really change my answer.
All references to this term I can find in the realm of software development appear to be mistakes where the author really meant continuous integration, a common agile technique.
The OP has now referenced a paper using the term in the context of an "agile" supply-chain management implementation. Even so, despite the publication, the term has not entered common parlance in SCM, much less software development, and thus has no generally accepted definition.
I think the OP is referring to 'Continuous Implementation' only. It is not a commonly used term.
I hadn't heard the term, but in the Agile or Scrum methodology, implementations happen more frequently than in the traditional waterfall model (though obviously not continuously, as in 'Continuous Implementation').
At the company where I work, we follow the Scrum methodology to deliver a new version every 6 months. Since ours is a product company offering Software-as-a-Service, the implementations are in our control, and we eventually plan to have more frequent ones. This is much different from the pre-Scrum days, when a new version typically came every 2 years.
Continuous implementation is a term used in game theory. See here for example. I doubt that this is what you're after, but there you are anyway.
MIKE, an information systems management approach, also uses the term; see here. The Volvo reference in the OP may be referring to MIKE or something similar.
Richard is likely correct that you mean Continuous Integration, a practice whose primary element is frequent builds to ensure the incremental addition of working functionality to your software.
The seminal article on this practice is "Continuous Integration" by Martin Fowler (this is the original, there is a link at the top to an updated version).
Sounds like marketing people mismatched the terminology. Happens all the time.
Actually, I think that this new animal comes from a Lean background (which makes sense in the context of Volvo). Nothing formal though. In other words, it sounds Agile, it tastes Agile, but nobody knows exactly what it means and, for these reasons, I'm sure Volvo's C-level managers like it a lot :) This makes my bullshit detector ring very loudly, actually.
