In the Cucumber tutorials I see, the feature examples are things like "manage users" with scenarios such as add user, delete user, etc. This is all very well when starting a project.
However, I would like to use something like Pivotal Tracker with third-party tools such as Pickler, and have features as stories (the Pivotal Tracker concept), which can be derived from requests and bug reports (as they may also be referred to in other project and code management tools).
The problem I see is that the number of feature files could become quite large, because a new one could be started for each request. The number of scenarios in each file could also be low, because they would be spread over multiple feature files created over different periods. So how would you organise them?
Also, will testing become too slow over time, and how can this be reduced?
Have a read of this: Features != Stories
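As a sketch of what that distinction can look like on disk (just one common convention, not taken from the linked article, and the directory and file names below are only illustrative): keep one feature file per capability, and file the scenarios from each new story or bug report under the capability they affect, rather than starting a new feature file per story.

```
features/
  user_management/
    create_account.feature      # scenarios accumulate here as stories arrive
    deactivate_account.feature
  checkout/
    payment.feature
    discounts.feature
  step_definitions/             # shared step code, maintained in one place
  support/                      # hooks and environment setup
```

Tagging each scenario with its tracker story ID (e.g. @PT-12345) preserves the link back to Pivotal Tracker without fragmenting the features, and tags can also be used to run only a subset of scenarios, which helps with the slow-test concern.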
Sourcegraph: this site helps me find usages of libraries and usage examples provided by others, where the official documentation of the libraries only goes so far. This has been incredibly valuable to me as a developer.
I'd like to see how other developers have used some APIs. We work on a really huge team. Of course there are all sorts of permission restrictions across multiple projects; however, most of the code is open. It really is a valuable asset to people like me.
TFS is a source control provider, and I think it should have something like Sourcegraph built in. My question is: what's the best way to do this in TFS?
I suggest you upgrade your TFS to TFS 2017. This new release includes our most recent feature innovations and improvements. One of the important updates is Code Search.
Code Search
Code Search provides fast, flexible, and accurate search across all your code. As your codebase expands and is divided across multiple projects and repositories, finding what you need becomes increasingly difficult. To maximize cross-team collaboration and code sharing, Code Search can quickly and efficiently locate relevant information across all your projects.
From discovering examples of an API's implementation, browsing its definition, to searching for error text, Code Search delivers a one-stop solution for all your code exploration and troubleshooting needs.
Code Search offers:
Search across one or more projects
Semantic Ranking
Rich filtering
Code collaboration
For details, see Search across all your code.
Sourcegraph now supports full-regex search across all code, is deployable as a single Docker image (https://about.sourcegraph.com/docs), and indexes any Git-based code host.
I currently work for a large organisation with about 2,000 developers in our IT department. We maintain many things, including our e-commerce platform, which currently has about 30 projects impacting it.
Recently all of our teams were instructed to deliver a series of automated tests using Concordion and Selenium WebDriver. For a while this went fairly well and many tests were created, but lately, maintaining the existing tests while our e-commerce platform constantly changes has become something of a nightmare. We have thousands of test scripts covering many parts of our website, but there does not seem to be any facility in Concordion to split scripts into reusable components that could be maintained in one place, rather than having to make changes to hundreds of HTML files for one change.
How are other people approaching this?
The goal of Concordion is not to implement test scripts as HTML, but rather for the HTML to describe the behaviour that you are testing (what you are trying to achieve). The implementation details (how it is being tested) are implemented as Java code. This code can then be structured with an appropriate level of abstraction so that each change to the system under test only requires a change to one part of the code.
Your HTML specifications should only need to change on the rare occasions that the business rules change.
These concepts are described further on the Hints and Tips tab of the Concordion home page.
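To make that structure concrete, here is a minimal sketch assuming JUnit 4; the fixture, page-object and method names are hypothetical, not taken from the original poster's test suite:

```java
import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

@RunWith(ConcordionRunner.class)
public class CheckoutFixture {

    // Page-object style abstraction shared by many fixtures; when a locator
    // or page flow changes, it is edited once here rather than in hundreds
    // of HTML specifications.
    private final CheckoutPage checkoutPage = new CheckoutPage();

    // Called from the HTML specification, e.g.
    // <span concordion:assertEquals="totalFor(#basket)">42.00</span>
    public String totalFor(String basket) {
        checkoutPage.addItems(basket);
        return checkoutPage.readTotal();
    }

    // Stand-in for a real Selenium-backed page object.
    static class CheckoutPage {
        private int pennies;

        void addItems(String basket) {
            pennies = 4200; // in a real page object, drive WebDriver here
        }

        String readTotal() {
            return String.format("%.2f", pennies / 100.0);
        }
    }
}
```

The HTML specification states only the expected behaviour and calls totalFor(); the "how" lives in the page object, so a change to the e-commerce site means editing one Java class, and every specification that uses it keeps working.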
Thank you for sharing your experience with us. It’s great to hear / read about large scale application of behavior driven development / specification by example.
One approach that could help you is to focus on key examples (http://gojko.net/2014/05/05/focus-on-key-examples). During specification workshops, the entire team works to get a common understanding of the new user needs and requirements. Then you go on to write specification documents containing key examples. There you should not try to cover everything, but write only as many examples as necessary to express the common understanding.
Additionally, you should try to identify the concepts on which the examples are based. If several examples relate to a similar topic, there is probably an underlying concept behind them. It is often easier to understand examples if they focus on just one concept (e.g. the validation of a card number). Each concept can usually be described with only a few examples.
Do you have any other types of automated tests (e.g. unit tests)? Are you experiencing the same maintainability challenges with these other tests? Could you use good practices from these other test types to improve your Concordion approach?
Could you tell us more about your setup? How many active specifications have you already created within your company?
I don't have any real experience with BDD, and I've recently discovered SpecFlow. I've read a bit about it (and Gherkin), I went through some screencasts, and I must say that I'm moderately convinced. Of course, by nature, the examples provided as an introduction are relatively simple. Is anybody using SpecFlow on real (read "complex") projects and finding the tool helpful?
Gojko Adzic has written a whole book (www.specificationbyexample.com) in which he interviewed various teams around the globe who have been working according to these concepts for several years. The book not only describes their experience but also summarizes very well the common challenges and benefits teams reported. I think this book can help convince management as well as provide some guidance when starting out. It is not a step-by-step cookbook, though, nor does it talk in detail about specific tools (which is not necessary IMHO).
To talk about first-hand experience, we (TechTalk) have been using SpecFlow for several years in projects of different sizes, domains and architectures. We mainly do custom development in various domains (financial sector, government, GIS), and our projects usually run 2-9 months with a size of 150-500 PD. The largest projects we do with SpecFlow are 1800+ PD; these are long-running programs spanning several years with ongoing frequent releases.
We are also using SpecFlow in product development, e.g. in SpecLog (www.speclog.net).
We also coach larger projects in ATDD and Specification by Example in various industries (automotive, financial services, ...), and they are applying these concepts quite successfully. Some of these projects are on other platforms; on Java we have used JBehave so far, although if I were starting a project right now I would strongly consider Cucumber-JVM.
I also recommend checking out the (free) screencasts at skillsmatter.com, which has been running related conferences (BDDX, CukeUp) for several years. These always include some experience reports from various domains and industries.
We've got an Excel spreadsheet floating around right now (globally) at my company to capture various pieces of information about each country's technology usage. The problem is that it goes out and gets changed, but the changes are never obvious and are often conflicting, and then we have to smash them together. To me, the workbook is no more than a garbage-in/garbage-out type application waiting to be written.
In a company that has enough staff and knowledge to dedicate to enterprise projects, agile and languages/frameworks such as Rails, Grails, etc. are, for some reason, frowned upon. That said, I can't help but think that this is almost a perfect fit for the need, given the scaffolding features for extremely simple implementations of capturing raw fields with only a couple of lookups (i.e. a pre-defined category). I'm thinking this would be considered a very appropriate use of these frameworks.
Has anyone worked on these types of quick and dirty apps before in normally large-scale, heavy-handed enterprise environments with success? Any tips for communicating this need/appropriateness to non-technical management?
The only way to get this implemented in a rigid organization is to get this working and demo it -- without approval. It's very hard for management to say no to a finished project.
I work for a really big company & have written many utility apps based on Rails (as well as contributed to some larger Rails projects). That said, the biggest concern is not the quality of the app, but who's going to support/maintain it when you leave or get hit by the bus.
IMHO, the major fear an enterprise organization has - especially if the application becomes more critical to its core business - is how to support it. If it doesn't fit into its neat little box of supported technologies, it's less likely to happen.
Corporations have been bitten by this many times in the past and are cautious when bringing in new technology.
So, if you can drum up more folks to learn Ruby/Rails in your group (or elsewhere in your company), you may be able to make a good case for it. Otherwise, sad to say, you're probably better off implementing something on SharePoint :-(.
If you already have a Java infrastructure, then creating a Grails app will require little to no additional IT ramp up to support and maintain. The support and maintenance cost and effort should be the same as for a Java application (i.e. Grails apps run on Tomcat, use the same JVM, use the same diagnostic/profiling tools, etc.).
In my experience, larger IT organizations have a harder time supporting Ruby when it's not already in the toolchain, because it's a new language and a new deployment environment, and it requires a considerable amount of support and maintenance ramp-up.
I would develop a minimal viable product, then make friends with someone in IT who can help you deploy it into a staging or production environment. Then get a few users to hop on board and test it as if it's a beta product. After that, open it up to a larger audience.
So as others have said, forgiveness over permission, but be smart about the impact on the IT organization.
What would be a proper way to simulate a large number of requests to test if my web application can handle it?
You could try using Microsoft's WCAT tools. Look here: http://support.microsoft.com/kb/231282
They're free, too. That's always nice.
Depending on your budget, you may be interested in some load testing software designed for this. A Google search brings up all sorts of alternatives. This is probably the best way to do it.
This one has a free trial version and isn't too pricey, but I would recommend shopping around first.
I've used JMeter in the past, and I find it to be very useful for stress/load testing a website, even ones written in ASP.NET (with or without MVC).
In general, with any tool, you would want to write a script of what an average user of your site would do. You may even end up creating several such scripts. Tools like JMeter even allow a random element to be added to a script. With these scripts created, a load testing tool can then simulate as many users as you desire hitting your site.
I would recommend allowing JMeter to slowly ramp up the number of concurrent users and watching the response time graph. The point where the response time starts climbing sharply is the point where you've hit the maximum number of users (given your scripts) that your site can handle.
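To illustrate the general idea (a minimal sketch only, not JMeter itself; the URL, user count and request count are placeholders), simulating N concurrent users each running a small scripted loop and measuring response times looks roughly like this in plain Java 11+:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        int users = 50;                                     // concurrent simulated users
        int requestsPerUser = 20;                           // length of each user's "script"
        URI target = URI.create("http://localhost:8080/");  // placeholder URL

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(target).GET().build();
        AtomicLong totalMillis = new AtomicLong();
        AtomicLong completed = new AtomicLong();

        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                        completed.incrementAndGet();
                    } catch (Exception e) {
                        // a real test would count and report failures
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);

        System.out.printf("requests: %d, average response time: %d ms%n",
                completed.get(), totalMillis.get() / Math.max(1, completed.get()));
    }
}
```

A tool like JMeter adds the important pieces on top of this basic loop: ramp-up control, think times, assertions, distributed load generation and reporting.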
ab and httperf are two more Unix-flavoured options, if you don't mind delving in that direction.
There's a nice screencast for using httperf by peepcode.
Use the load testing tools from Visual Studio Team System, ideally the 2010 version if you can get it.
The tools are great to use and provide wonderful instrumentation. There is also a programming model to go with the tools, allowing you to build some very complex testing scenarios.
Post the URL on stackoverflow.
Make it sound like a challenge, so lots of people come check it out: "Can you find the hidden performance problem in this app?"