I can think of quite a few components that need to be created when authoring a web application. I know it should probably be done incrementally, but I'd like to see what order you usually tackle these tasks in. Lay out your usual order of events and give some justification.
A few possible components or sections I've thought of:
Stories (e.g. pivotaltracker.com)
Integration tests (RSpec, Cucumber, ...)
Functional tests
Unit Tests
Controllers
Views
JavaScript functionality
...
The question is, do you do everything piecemeal (one story, one integration test, get it passing, move on to the next one, ...), or do you complete all of one component first and then move on to the next?
I'm a BDDer, so I tend to do outside-in. At a high level that means establishing the project vision first (you'd be amazed how few companies actually do this), identifying other stakeholders and their goals (legal, architecture, etc.) then breaking things down into feature sets, features and stories. A story is the smallest usable piece of code on which we can get feedback, and it may be associated with one or more scenarios. This is what Chris Matts calls "feature injection" - creating features because they are needed to support stakeholder goals and the project vision. I wrote an article about this a while back. I justify this because regardless of how good or well-tested your code is, it won't matter if it's the wrong code in the first place.
Once we have the story and scenarios, I tend to write the UI first, followed by the classes which support it. I wrote a blog post about a real-life example here - we were programming in Java so you might have to do things a bit differently with Rails, but the principles remain. I tend to start writing unit tests when there's actually behaviour to describe - that is, a class behaves differently depending on its context, on what has already happened before. Normally the first class will indeed be the controller, which I tend to populate with static data just to get the UI into shape. I'll write the first unit tests to help me get rid of that static data.
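To make that concrete in Rails terms (the question's stack), here is a minimal sketch: a controller action populated with static data to get the UI into shape, followed by the first unit test that forces the static data out. The ProjectsController and Project names are invented for illustration, and the spec assumes an rspec-rails setup where controller specs and assigns are available.

```ruby
# First cut: serve static data just to get the UI into shape.
class ProjectsController < ApplicationController
  def index
    @projects = [Project.new(name: "Sample project")] # placeholder data
  end
end

# The first unit test describes the real behaviour and drives the placeholder out.
RSpec.describe ProjectsController, type: :controller do
  describe "GET #index" do
    it "assigns the projects that actually exist" do
      project = Project.create!(name: "Real project")
      get :index
      expect(assigns(:projects)).to eq([project])
    end
  end
end
```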
Doing the UI first lets me get feedback from stakeholders early, since it's the UI that the users will be interacting with. I then start with the "happy path" - the thing which lets the users do the most valuable thing - followed by the exceptional cases, validation, etc.
Then I do my best to persuade my PM to let us release our code early, because it's only when the users actually get hold of it to play with that you find out what you really did wrong.
I'm starting a massive project for the first time. I was supposed to be one of the developers on this big project, and all of a sudden, the lead developer and his team backed out of the contract. Now I'm left managing this big project myself with a few junior developers under me and I'm trying to get a firm grasp on how this code should be broken up.
Logically, to me, the code should be broken up by the screens it has. I know that might not be how it SHOULD be done, so tell me: how SHOULD it be done? The app has about 6 screens total. It connects to a server, which maintains data about all other instances of the app on other phones. You could think of it as semi-social. It will also use the camera feature in certain parts, and it will definitely use geolocation, probably geofencing. It will obviously need an API to connect to the server, most likely more than one API. I can't say much more about it without breaking an NDA.
So again, my question pertains to how the code should be broken up to make it as efficient as possible. Personally, I'll be doing little coding on the project, probably mostly code reviews, unit testing and planning. Should it have one file per screen, with parts that are repeated getting their own classes? Should it be MVC? We're talking about a 30k-line app here, at its best and most efficient. Is there a better way to break the code apart than the ways I've listed?
I guess my real question is, does anybody have good suggestions for books that would address my current issue? Clean Code was suggested; that's a good start. I've already read The Mythical Man-Month and Code Complete, but they don't really address my current issue. I need suggestions for books that will help me learn how to structure and plan the creation of large code bases.
As I'm sure you know, this is a pretty vague question; you could write a book answering it. In fact, I would recommend you read one, like Clean Code. But I'll take a stab at a 10,000-foot overview.
First, if you are doing an iPhone app, you will want to use MVC, because that is how Apple has set up their framework. That means each screen will have (at least) a view controller, and possibly a custom view or NIB.
In addition you will want your view controllers pointing to your model (your business objects) and not the other way around. These objects should implement the use cases without any user interface logic. That is what your view-controller and view will be doing.
How do you break apart your use cases? Well, that's highly specific to your program and I won't be able to tell you in much detail. There isn't a single right answer. But in general you want to isolate each object from other objects as much as possible. If every object references every other object, then you don't really have an OO design; you have a mess. That matters especially when you are talking about unit tests and TDD: if testing one part means pulling in the whole system, then you are not testing just one small unit, are you?
Really though, get a good book about OO design. It's a large subject that nobody will be able to explain in a SO answer. I think Clean Code is a good start, maybe other people will have other suggestions?
I'm doing a contract job in Ruby on Rails and it is turning into a disaster for productivity largely because of writing tests. This is my first time using TDD in practice and it hasn't gone well because I am spending so much time writing tests that I've hardly gotten any results to show. I'm thinking that perhaps I'm trying to test too much by writing tests for every feature in each model and controller.
If I can't aim for 100% test coverage, what are some criteria that I could use to determine "is this feature worth testing"? For example, would integration tests trump unit tests?
If you're just getting started with testing in the Ruby or Rails world, I'd make the following suggestions:
Start with RSpec. Automated acceptance/integration testing with a tool like Cucumber can be a large time sink for a single developer who has never used it before. Success with those tools is often contingent upon A) quality UI specs that are very, very specific, B) UI conventions which are easily testable with headless browser emulators, and C) familiarity with the tools ahead of go time.
Test individual methods. Test that they return the values you expect. Test that when you feed them bad data, they respond in an appropriate manner. Test any edge cases as you become aware of them. (There's a rough sketch of this after the list.)
Be very careful that you are stubbing and mocking correctly in your tests. It's easy to write a test for 30 minutes only to discover that you're not really testing the thing you need to be testing.
Don't go overboard with micro-managing your TDD - some folks will tell you to test every tiny step in writing a method: first test that the model has a method called 'foo', then test whether it returns non-nil, then test that it returns a string, then test that the string contains a certain substring. While this approach can be helpful when you're implementing something complex, it can be a time sink as well. Just skip the first two steps. That being said, it's easy to go too far in the other direction, specifying a method's behavior with a complex test before you begin implementing it, then beginning the implementation only to find you've botched the test.
Don't write tests that just say 'this is how I wrote the feature, don't change it'. Tests should reflect the essential business logic of a method. If you are writing tests specifically so that they will fail if another developer changes some non-critical part of your implementation, you are wasting time and adding superfluous lines of code.
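To make the "test individual methods" suggestion concrete, here is a rough RSpec sketch. The Order model, its total method, and the LineItem class are invented for the example; the shape of the specs is the point, not the names.

```ruby
# spec/models/order_spec.rb
require "rails_helper"

RSpec.describe Order do
  describe "#total" do
    it "returns the sum of the line item prices" do
      order = Order.new(line_items: [LineItem.new(price: 10), LineItem.new(price: 5)])
      expect(order.total).to eq(15)
    end

    it "returns zero when there are no line items" do
      expect(Order.new(line_items: []).total).to eq(0)
    end

    # An edge case discovered along the way: a line item with no price yet.
    it "ignores line items with a nil price" do
      order = Order.new(line_items: [LineItem.new(price: nil), LineItem.new(price: 5)])
      expect(order.total).to eq(5)
    end
  end
end
```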
These are just a few observations I've made from having been in similar situations. Best of luck, testing can be a lot of fun! No, really. I mean it.
100% test coverage is a fantasy and a waste of time. Your tests should serve a purpose, typically to give you confidence that the code you wrote works. Not absolute confidence, but some amount of confidence. TDD should be a tool, not a restriction.
If it's not making your work come out better, why are you doing it? More importantly, if you fail to produce useful code and lose the contract, those tests weren't too useful after all were they? It's a balance, and it sounds like you're on the wrong side.
If you're new to Rails, you can get a small dose of its opinionated creator's view on testing in this 37signals blog article on the topic. Small rules of thumb, but maybe something to push you in a new direction on the subject.
There are also good references on improving your use of RSpec like betterspecs.org, The RSpec Book and Everyday Rails Testing with RSpec. Using it poorly can result in a lot of headache maintaining the specs.
My advice is to try and get your testing and your writing of code as tightly coupled as possible, combined with an Agile approach to the project.
This way you will constantly have new stuff to show the client, as testing will just be baked in. The biggest mistake I see with teams that are new to testing is continuing to see testing as a separate activity. Most of all, I keep seeing developers say that a feature is done... but that it will need some refactoring and some better tests at "some point". "Some point" rarely comes. One thing is inescapable, though: for at least several months it will be much slower in the short term, but the quality will be much better, and you'll avoid building the "big ball of mud" I've seen in so many larger institutions.
A few things:
Don't
Test the database
Test ActiveRecord or whatever ORM you're using
Do
For models:
Test validations (there's a short sketch after this list)
Test custom logic
For controllers:
Test non-trivial routes
Test redirects
Test authentication
Test instance variable assignment
For views:
I haven't gotten around to testing views, but I've run into situations where I wish I had. For example, testing fields in forms.
More at Rails Guides
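As a rough illustration of the model and controller items above: the Post model, its title validation, and PostsController are hypothetical, and the sketch assumes an rspec-rails setup where controller specs and assigns are available (on newer Rails that means adding the rails-controller-testing gem; the params: keyword form assumes Rails 5 or later).

```ruby
# spec/models/post_spec.rb -- test the validation, not ActiveRecord itself.
RSpec.describe Post do
  it "is invalid without a title" do
    expect(Post.new(title: nil)).not_to be_valid
  end
end

# spec/controllers/posts_controller_spec.rb -- test assignment and the redirect.
RSpec.describe PostsController, type: :controller do
  describe "POST #create" do
    it "saves the post and redirects to it" do
      post :create, params: { post: { title: "Hello" } }
      expect(assigns(:post)).to be_persisted
      expect(response).to redirect_to(assigns(:post))
    end
  end
end
```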
I looked on Stack Overflow and could find one or two questions with a similar title to this one, but none of them answers what I'm asking. Sorry if this is a duplicate.
In unit testing, there is a guideline that says "one assertion per test". From reading around Stack Overflow and the internet, it is commonly accepted that this rule can be relaxed a bit, but every unit test should still test one aspect of the code, or one behavior. This works well because when a test fails you can immediately see what failed, and once you fix it, the test will most likely not fail again for some other reason in the future.
This works well for Rails unit tests, and I have been using it for functional testing as well without any problem. But when it comes to integration tests, it is somewhat implicit that you should have many assertions in your tests. Apart from that, they usually repeat tests that are already done once in functional and in unit tests.
So, what are considered good practices when writing integration tests in these two factors:
Length of the integration tests: how do you decide when an integration test should be split in two? By the number of requests? Or is larger always better?
Number of assertions in integration tests: should it repeat, every time, the assertions already made in unit and functional tests about the current state of the system, or should it have only five or so assertions at the end to check that the correct output was generated?
Hopefully someone will provide a more authoritative answer, but my understanding is that an integration test should be built around a specific feature. For example, in an online store, you might write one integration test to make sure that it's possible to add items to your cart, and another integration test to make sure it's possible to check out.
How long should an integration test be?
As long as it takes to cover a feature, and no more. Some features are small, some are large, and their size is a matter of taste. When they're too big, they can easily be decomposed into several logical sub-features. When they're too small, their integration tests will look like view or controller tests.
How many assertions should they have?
As few as possible, while still being useful. This is true of all tests, but it goes doubly for integration tests because they're so slow. This means testing only the things that are most important, and trying not to test things that are implied by other data. In the case of the checkout feature, I might assert that the order was created for the right person and has the right total, but leave the exact items untested (since my architecture might generate the total from the items). I wouldn't make any assertions along the way that I didn't have to, since traversing the application (filling this field, clicking that button, waiting for this modal to open) covers all the integration behavior I need tested, and anything else could be covered by view tests if it needs to be tested at all.
All together, this generally means that whereas unit tests tend to be only a couple of lines long and preceded by a larger setup block, Rails integration tests tend to be a dozen lines long or more (most of which are interaction) and lack a setup block entirely.
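For example, a checkout feature spec along those lines might look roughly like this (Capybara-style; the paths, labels, models, and the sign_in_as / add_to_cart helpers are all invented for the sketch):

```ruby
# spec/features/checkout_spec.rb -- one feature, few assertions.
require "rails_helper"

RSpec.feature "Checkout" do
  scenario "a signed-in user checks out their cart" do
    sign_in_as "alice@example.com"       # hypothetical helper
    add_to_cart "Wool scarf", price: 20  # hypothetical helper

    visit cart_path
    click_button "Check out"
    fill_in "Card number", with: "4242 4242 4242 4242"
    click_button "Place order"

    # Assert only the essentials: the order belongs to the right person and
    # has the right total. The exact items are implied by the total.
    order = Order.last
    expect(order.user.email).to eq("alice@example.com")
    expect(order.total).to eq(20)
    expect(page).to have_content("Thanks for your order")
  end
end
```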
Length of the integration tests: I agree that length here doesn't matter that much. It's more about the feature you're testing and how many steps it takes to test it. For example, let's say you're testing a wizard of five steps which creates a project. I would put all five steps in one test and check whether the relevant data appeared on screen. However, I would split the test if the wizard allowed for different scenarios that need to be covered.
Number of assertions in integration tests: don't test things that are already tested in other tests, but make sure user expectations are met. So test what the user expects to see on the screen, not back-end-specific code. Sometimes you might still need to check that the right data is in the database, for example when it's not supposed to appear on the screen.
I recently joined a new company with a large existing codebase. Most of my experience has been in developing small to medium-sized apps that I was involved with from the beginning.
Does anyone have any useful tools or advice on how to begin working on a large app?
Do I start at the models and move to the controllers? Are there scripts to visually map out model relationships? Should I jump in, start working on it, and learn the app's structure as I go?
Railroad should help you understand the big picture. It generates diagrams for your models and controllers.
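If I remember the gem's command-line interface correctly, usage is something like the following; the flags are from memory, so check railroad -h for the exact options:

```
railroad -M | dot -Tsvg > models.svg        # model diagram, rendered with Graphviz
railroad -C | neato -Tsvg > controllers.svg # controller diagram
```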
I found unit tests to be the most efficient, effective and powerful tool. So, before making any change, make sure your application has at least a minimal level of test coverage so that you won't break any existing feature while working on it.
You should care about unit tests (of course I'm talking about unit/functional/integration tests) because:
they ensure you won't break any existing feature
they describe the code so that you won't need tons of comments everywhere to understand why that code fragment acts in that way
with tests you'll spend less time debugging and more time coding
When you have tests, you can start refactoring your app.
Here's a list of some tools I usually work with:
Rack::Bug
New Relic
You might want to watch some of Gregg's wonderful videos about Scaling Rails to find more powerful tools.
Also, don't forget to immediately start tracking how your application is performing and whether it is raising exceptions. You can use one of the following tools:
Hoptoad
Exceptional
If you need to fix some bug, don't forget to reproduce the issue with a test first, then fix the bug.
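As a sketch of that workflow (the Invoice model and the bug are hypothetical): write a spec that reproduces the reported behaviour, watch it fail, fix the code, and watch it pass. The spec then stays around as a regression test.

```ruby
# spec/models/invoice_spec.rb
# Written *before* the fix, reproducing a hypothetical bug report:
# "invoices with a 100% discount raise an error instead of totalling zero".
RSpec.describe Invoice do
  it "totals zero when a 100% discount is applied" do
    invoice = Invoice.new(subtotal: 50, discount_percent: 100)
    expect(invoice.total).to eq(0) # fails until the bug is fixed
  end
end
```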
Not specific to Rails, but I would start by reading the requirements and architecture documentation. After that, get familiar with the domain by sketching the models and their relationships on a big sheet of paper.
Then move on to the controllers (maybe look at the routes first).
The views should not contain that much information, I guess you can pretty much skip them.
If you still need to know more, the version control log (assuming they use one) is also a good place to learn how the project evolved.
When I've been in this situation, I try one of three things:
Read all the code top to bottom. This lets you see what code is working, and you can report progress easily (I read through all the view code this week). This means you spend time on things that may not be helpful (unused code) but you get a taste of everything that is there. This is very boring.
Start at the beginning and go to the end. From the login page or splash screen, start looking at that code, then the next page, then the next page. Look at the view, controller, and database code. This takes some time, but it gives you the context for why you need that code or database table. And it means you encounter the pieces that are used in the most places most often. This is more interesting.
Start fixing bugs. This has the benefit of showing progress on your new project (happy boss), taking work from other people (happy co-workers), and learning at the same time (happy developer). It provides the context of number 2, and you can skip the rarely used code from number 1. This is the most interesting way for me.
Also, keep track of what you've learned. Get a cheap spiral-bound notebook and write down an outline of what you've learned. Imagine yourself giving a talk on the code you're learning about or the bug you're fixing. Take enough notes to give that talk, and spice it up with a factoid or two to make it interesting. I give my notebooks dignity and purpose by calling them "Engineering Notebooks", putting a title on the front (my name, company, date), and bringing them to every meeting. It looks very professional compared to the guys who don't show up with paper to take notes. For this, don't use a wiki. It can't be relied upon, and I'd spend a week playing with the wiki instead of learning.
As mentioned above, being the new guy is a good chance to do the things nobody ever got around to, like unit tests, documenting processes, or automating test runs. Document the process to set up a new box and get all the software installed to be productive. Take an old box from under someone's desk, put a continuous integration install on it, and have it email you when the tests fail. You can forward that email whenever someone else checks in code that breaks the tests. Start writing tests to figure out how things work together, especially if there aren't any (or very many) tests.
Try to ask lots of questions in one-on-one situations. This shows you're engaged and learning, and it helps you find the experts in the different parts of the app. On a big project you may need to go to one person for one topic and a different person for other topics. It also helps identify who thinks they know more than they really do or who talks more than you really need.
I'm working on a client server app using the Tracer Bullet approach advocated in The Pragmatic Programmer and would like some advice. I'm working through each use case from initiation on the client through to the server and back to the client again to display the result.
I can see two ways to proceed:
1. Cover the basic use cases, just writing enough code to satisfy the use case I'm working on, then go back and flesh out all the error handling later.
2. Flesh out each use case as much as possible, catching all exceptions and polishing the interface, before going on to the next use case.
I'm leaning towards the first option but I'm afraid of forgetting to handle some exception and having it bite me when the app is in production. Or of leaving in unclear "stub" error messages. However if I take the second option then I think I'll end up making more changes later on.
Questions:
When using tracer bullet development which of these two approaches do you take and why?
Or, is there another approach that I'm missing?
As I understand it, the Tracer Bullet method has two main goals:
address fundamental problems as soon as possible
give the client a useful result as soon as possible
Your motivation in not "polishing" each use case is probably to speed up goal 2 further. The question is whether in doing so you endanger goal 1, and whether the client is actually interested in "unpolished" results. Even if not, there's certainly an advantage in being able to get feedback from the client quickly.
I'd say your idea is OK as long as
You make sure that there aren't any fundamental problems hiding in the "unpolished" parts - this could definitely happen with error handling
You track anything you have to "polish" later in an issue tracker or by leaving TODOs in the source code - and carefully go through those once the use cases are working
The use cases are not so "unpolished" that the client can't/won't give you useful feedback on them
If you take approach #1, you will have 90% of the functionality working pretty quickly. However, your client will also think you are 90% done and will wonder why it is taking you 9 times as long to finish the job.
If you take approach #1 then I would call that nothing more than a prototype and treat it that way. To represent it as anything more than that will lead to nothing but problems later on. Happy day scenarios are only 10% of the job. The remaining 90% is getting the other scenarios to work and the happy day scenario to work reliably. It is very hard to get non-developers to believe that. I usually do something between #1 & #2. I attempt to do a fairly good job of identifying use-cases and all scenarios. I then attempt to identify the most architecturally impacting scenarios and work on those.
For tracer bullets, I would suggest using a combination of positive and negative test cases:
Positive test cases (these will be mentioned in your user stories/feature documents/functional specifications)
Negative test cases (common negative scenarios that can be expected in business-as-usual operation)
(Rare business scenarios can be left out after careful consideration.)
These test cases were run using SpecFlow to automate the testing.
Including common negative scenarios in your test cases provides sufficient confidence that subsequent development can build on the underlying code.
I've shared my experience here: http://softwarecookie.wordpress.com/2013/12/26/tracer-bullet/