I am trying to learn RSpec, but there is one thing I don't understand. Let me explain. I have read many articles and blog posts and was able to understand a few things (basic terms, how to install it, how to use it, and so on). But I don't understand the main thing: what is behavior? The question may seem absurd, but I really don't understand it.
For example, I have a simple Rails app, a blog: creating articles, comments, etc. What is the behavior there?
Maybe this isn't a good example.
I can't grasp the essence of behavior. What does this word mean for an object (articles, comments)?
Can someone explain this to me, maybe with some examples? What behavior needs testing, and what is behavior in the first place?
The simplest explanation of behavior I have seen is the following.
In OOP, objects send and receive messages. On receiving a message, an object behaves: it changes its state or sends messages to other objects.
When testing behavior, you check whether the object responds correctly to a message it received.
BDD says: first define the behavior via a spec, then write the code that makes the object behave as intended.
One good thing about RSpec is that specs are written in terms of behaviour. That also makes them reusable: a piece of behaviour can be specified once and shared between different example groups. RSpec calls these shared examples. Follow these links for a tutorial:
http://blog.davidchelimsky.net/2010/11/07/specifying-mixins-with-shared-example-groups-in-rspec-2/
https://www.relishapp.com/rspec/rspec-core/docs/example-groups/shared-examples
I am all confused about TDD vs BDD. :)
How do TDD and BDD differ on each of the points below?
Development: test case first, development follows next?
REST services (HTTP): don't make real REST calls? If so,
a) do we return only hardcoded JSON using a mock object?
b) how do we handle REST call failures? Should we have a test case for that too?
Especially for item 2, I have googled so many articles but couldn't find a sample (code) approach on how to handle REST calls.
BDD and TDD are not directly comparable, although both are used in test-first development.
BDD is more than just writing tests with an English-like syntax, e.g. Kiwi. BDD (also known as ATDD, Acceptance Test Driven Development) starts with developers, QA, and designers (e.g. business and interaction designers) working together to develop a shared understanding of the proposed solution. It is common to use examples to illustrate the behavior, also known as Specification by Example.
I have found that a useful way to think of abstraction is distinguishing between what you do (abstract, high-level policy), and how you do it (concrete, low-level details). Every concrete detail exists to fulfill a higher-level policy. When you see something concrete, it is beneficial to identify the policy it is serving.
The specification by example can be used to create high-level acceptance tests, which test what the application does, i.e. its behavior.
Unit tests are used to test how the app implements a solution, i.e. test that the appropriate messages are sent to its collaborators/dependencies at the appropriate time.
The phases of the standard TDD cycle are Red, Green, Refactor. During the green phase, your goal is to get the test passing as quickly as possible, by hook or by crook—it is acceptable to write ugly, unorganized code. Once the test passes, you refactor the code to make it more readable/changeable.
Similarly, with a BDD/ATDD cycle, you have Red, Green, Refactor. During the green phase of BDD, just get the acceptance test to pass. All of the code you write can exist within the test itself. During the refactor phase of BDD, you extract test code into production code. You can use TDD to guide the extraction.
So, for a given BDD acceptance test, you might have multiple TDD tests.
Regarding how to test REST calls, let's go back to the premise of abstraction—distinguishing what we do from how we do it.
Calling a REST service is a concrete action. The policy it satisfies may be to provide a list of model objects.
Let's say the use case you are implementing is to invite a friend to lunch. Part of the use case responsibility is to obtain the list of friends from a server; it doesn't care how the server finds the friends.
Your BDD tests would handle getting the list of friends, picking a friend, and completing the invitation. Your BDD tests would not worry about actually making REST calls.
When you use TDD to implement the class that handles communication with the server, you could have tests that retrieve JSON from a remote data source (i.e. the server) and ensure the JSON is properly parsed into User model objects. You could also have tests covering the data source responding with an error, etc.
At the point you actually make a REST call, in the implementation of a remote data source that uses REST to communicate with the backend server, I would classify that as an integration test, as you are testing the integration with a component you don't control, i.e. the actual backend server. The integration tests only need to confirm that the server returns JSON data in the format your app expects, or that errors are returned when appropriate.
BDD is actually derived from TDD, so it's not surprising there's a little confusion! BDD is exactly like TDD (or ATDD if you're doing it for a whole system), but without the word "Test". It turns out that can be pretty powerful.
Particularly, it lets developers have conversations with non-technical business people about what the system should do. You can also use it to have conversations about what a class should do, or a module of code should do, even with a technical expert.
So in the example of your REST service, you can imagine that I'm a dev and you're an expert who knows what the REST service should do.
Me: What should it do?
You: It should let me read a record.
Me: Great! Can you give me an example of a record?
You: I have one here...
Me: Is there any context in which someone shouldn't be able to read the record?
You: Sure, if they don't have permissions.
...
Me: Okay, so I've done Read, let's do Update. Can you give me an example of a typical update?
You: Here you go.
Me: Fantastic, and you want it to respond just with success or fail. Is there any scenario in which it should fail?
You: Sure. The record shows when it was last updated. If someone else has already updated it in the meantime, yours should fail when you submit it.
So you see you can use BDD to explore all kinds of scenarios, including those around a REST service. The trick is to ask, "Can you give me an example?" Then you get a concrete example, which you can then automate if you want to. The conversations help us look for other examples and scenarios which we might have missed.
Don't use BDD tools to automate for a technical audience! BDD tools like Cucumber, JBehave etc. work with real English that's a lot harder to refactor than code. Use JUnit, NUnit etc. if you're just doing something like a REST service. You can put "Given, When, Then" in comments, or make a little DSL.
So now you can see that with your REST call failure, if I were coding it, I'd have an example like:
Me: So, this call failure... can you give me an example?
You: Sure, if you access a record that's been deleted it's going to fail.
Me: Give me a typical example of a record that might get deleted?
You: The one we're using before is good.
Me: Okay, is there a situation in which we shouldn't delete a record?
You: Yes, if it's already been published...
Etc.
You can see that throughout, I'm not really using the word "test". Tests are a nice by-product of BDD; it's used more for exploring and specifying requirements. The conversations are the most important part of BDD.
The reason it's tricky to find examples of using BDD for REST is first because REST is deliberately simple and doesn't often have a lot of behaviour, and second because BDD's scenarios aren't generally phrased in terms of their implementation, focusing instead on the value of what the service or system provides ("read a record").
TDD and ATDD are exactly the same, if they're done well. It's just easier to have conversations about examples and scenarios than it is to have them about tests.
I'm collaborating on a Rails application where the test suite is full of this kind of declarations:
before do
  expect(job_post).to receive(:destroy).and_return(true)
  delete :destroy, params_id
end
(And the test suite is green)
Now, using expect inside a before block doesn't make sense to me. I can't find any documentation on the subject.
What's even weirder is that it seems to act as a sort of stub. If I remove it, the tests fail.
UPDATE:
Someone has already replied with a refactored test.
While appreciated (thanks), that does not answer my question. In fact, I know how to write controller specs and eventually I will completely rewrite this test file.
My question is about:
Does the code I reported make sense?
Is this use of expect documented? Where?
If it does make sense, could someone provide some pointers?
Yup, it doesn't make sense. If they wanted to simulate job_post receiving :destroy, they should have done it with a stub, not by putting a message expectation on the object. And then what is the delete call there for, exactly?
It's not free, but for pointers I'd go to Destroy All Software; there are a lot of cool things about testing there. I also recently watched The Magic Tricks of Testing, which helped me a lot with testing in general.
I have two feature files, feature1.feature and feature2.feature. In feature1.feature, I create a field value and add it to FeatureContext.Current. Is there any way to access that value from feature2.feature?
I know that the FeatureContext is cleared once a particular feature's run is over. Is there any other way to share values between two different feature files?
Please suggest some ideas.
Thanks in advance.
I would strongly advise against that setup. There are a couple of reasons for that:
The technical reason: SpecFlow doesn't guarantee the order in which it runs either features or scenarios. You cannot trust it to always be the same.
The business reason: The scenarios you're writing are first and foremost a communication tool. You want them to be easy to understand by themselves. When you talk about a particular scenario you shouldn't have to read through the other scenarios in the feature in order to understand what this special case does. It clogs up your communication around the scenario.
I suggest that you instead duplicate information in each scenario for readability. If you end up with a lot of repeated information in each scenario, you can use the Background feature of Gherkin. Background steps are run before every scenario in the feature file and can be used for repeated setup.
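As a sketch, here is a hypothetical feature file using Background; the Given steps run again before each scenario, so neither scenario depends on the other:

```gherkin
Feature: Invoicing

  Background: a logged-in user with one unpaid invoice
    Given a user named "Alice" exists
    And "Alice" has an unpaid invoice for 100 EUR
    And I am logged in as "Alice"

  Scenario: Paying an invoice
    When I pay the invoice
    Then the invoice is marked as paid

  Scenario: Reminding about an unpaid invoice
    When the invoice is 30 days overdue
    Then "Alice" receives a reminder email
```

Each scenario stays readable on its own, and no state leaks between them.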
Should you find yourself in a situation where you need to pass information back and forth between scenarios you should probably take a step back and reconsider your scenarios. Are these two different scenarios, really? Or is it maybe just one? How could you express them clearer?
I hope this was useful.
I never tried it, but maybe you can use the [BeforeFeature] and [AfterFeature] hooks to read the value from the FeatureContext and set it in the context of the next feature.
Little by little I am beginning to understand the power of RSpec, though I still do not see why I would need it to test controllers or views (I'm sure there are reasons).
I'm creating a browser game where users attack monsters. In my head, RSpec would be really useful if it could provide a brute-forcing mechanism for me. For instance, say I want a certain user to fight all the monsters one by one, and to define conditions that will make the tests fail.
For example, if a user fights a monster of the same level, HP, and roughly the same strength, it would be really weird if he or she were killed while the monster still had more than 70% of its HP (that's just one scenario).
It seems to me that this kind of behaviour is tested with RSpec in combination with Cucumber? I would really like some insight on this topic.
Seems to me that the example you give is well suited for Cucumber. You are trying to test what happens when a user fights a certain monster. You would set up each scenario and then go through steps to exercise various portions of the user experience.
RSpec is for unit testing, i.e. making sure that each method of your models, controllers, and views does the proper thing and gives you proper results. By definition, a unit test isolates itself to the code under test, so, for example, if a controller method needs data from a model, that data is mocked or stubbed for each condition of the method being tested. That way your test is not affected by parts of the code that are not really under test.
When you start working on an existing Rails project what are the steps that you take to understand the code? Where do you start? What do you use to get a high level view before drilling down into the controllers, models, helpers and views? Do you have any specific techniques, tricks or tools that help speed up the process?
Please don't respond with "Learn Rails & Ruby" (like one of the responses the last guy who asked this got - he also didn't get much response to his question, so I thought I would ask again and prompt a bit more). I'm pretty comfortable with my own code. It's sorting out other people's code that does my head in and takes me a long time to grok.
Look at the models. If the app was written well, this should give you a picture of its domain model, which is where the interesting logic should live. I also look at the tests for the models.
The way that the controllers/views were implemented should be apparent just by using the Rails app and observing the URLs.
Unfortunately, there are many occasions where too much logic lives in controllers and even views. That means you'll have to take a look into those directories too. Doubly unfortunately, tests for these layers tend to be much less clear.
First I use the app, noting the interesting controller and action names.
Then I start reading the code for these controllers, and for the relevant models when necessary. Views are usually less important.
Unlike a lot of the people so far, I actually don't think tests are the place to start. I think they're too narrow, too focused. It'd be like trying to understand basic physics/mechanics by first zooming into intra-molecular forces and quantum mechanics. I also think you're relying too much on well-written tests, and in my experience, a lot of people don't write sufficient tests or write poor tests (which don't give an accurate sense of what the code should actually do).
1) I think the first thing to do is to understand what the hell the app actually does. Use it, at least long enough to build an idea of what its main purpose is and what the different types of data might be and which actions you can perform, and most importantly, why.
2) You need to step back and see the big picture. I think the best way to do that is by starting with schema.rb. This tells you a few really important things:
What are the vocabulary and concepts of this project? What does "User" actually mean in this app? Why does the app have both "User" and "Account" models, and how are they different/related?
You could learn what models there are by looking in app/models but this will actually tell you what data each model holds.
Thanks to *_id fields, you'll learn the associations between the models, which helps you understand how it all fits together.
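For instance, in a hypothetical excerpt from db/schema.rb, the *_id columns alone tell you how models relate:

```ruby
# Hypothetical schema.rb fragment: comments belong to articles and users.
ActiveRecord::Schema.define(version: 20140101000000) do
  create_table "comments" do |t|
    t.text     "body"
    t.integer  "article_id"  # Comment belongs_to :article
    t.integer  "user_id"     # Comment belongs_to :user
    t.datetime "created_at"
  end
end
```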
I'd follow this up by looking at each model's *.rb file for (hopefully) comments, validations, associations, and any additional logic relevant to each. Keep an eye out for regular ol' Ruby classes that might live in lib/.
3) I, personally, would then take a brief glance at routes.rb as it will tell you two key things: a brief survey of all of the actions in the app, and, if the routes and controllers/actions are well named and thought out, a quick sense of where different functionality might live.
At this point you're probably ready to dig into a specific thing you need to learn. Find the controller for the feature you're most interested in and crack it open. Start reading through the relevant actions, see which models are involved, and now maybe start cracking open tests if you want to.
Don't forget to use the rest of your tools: Ruby/Rails debuggers, browser dev tools, logs, etc.
I would say take a look at the tests (or specs if the project uses RSpec) to get an idea at the high-level of what the application is supposed to do. Once you understand from the top level how the models/views/controllers are expected to behave, you can drill into the implementations.
If the Rails project is in a somewhat stable state, then I have always been a big fan of using the debugger to help navigate the code base. I'll fire up the browser and begin interacting with the app, then target some piece of functionality and set a breakpoint at the beginning of the associated function. With that in place, I study the parameters going into the function and the value being returned to get a better understanding of what's going on. Once you get comfortable, you can modify the functionality a little to make sure you understand it. Just performing static analysis on the code can be cumbersome! Good luck!
I can think of two reasons to be looking at an existing app with which I have no previous involvement: I need to make a change or I want to understand one or more aspects because I'm considering using them as input to changes I'm considering making to another app. I include reading-for-education/enlightenment in that second case.
A real benefit of the MVC pattern in particular, and many web apps in general is that they are fairly easily split into request/response pairs, which can to some extent be comprehended in isolation. So you can start with a single interaction and grow your understanding out from that.
When needing to modify or extend existing code, I should have a good idea of what the first change will be; if not, then I probably shouldn't be fooling with the code yet! In a Rails app, the change will most likely involve a view, a model, or a combination of both, and I should be able to identify the relevant items fairly quickly. If there are tests, I check that they run, then attempt to write a test that exposes the missing functionality, and away we go. If there are no tests, it's a bit trickier: I'd worry that I might inadvertently break something, so I'd consider adding tests to give myself more confidence, which will in turn start to build some understanding of the area under study. I should fairly quickly be able to get into a red-green-refactor loop, picking up speed as I learn my way around.
Run the tests. :-)
If you're lucky it'll have been built on RSpec, and that'll describe the behavior regardless of the implementation.
I run rake test in a terminal
If the environment does not load, I take a look at the stack trace to figure out what's going on, fix it so that the environment loads, and run the tests again.
I boot the server, open the app in a browser, and click around.
Start working with the tasks at hand.
If the code rocks, I'm happy. If the code sucks, I hurt it for fun and profit.
Aside from the already posted tips of running specs, and decomposing the MVC, I also like:
rake routes
as another way to get a high-level view of all the routes into the app
./script/console
The Rails irb console is still my favorite way to inspect models and model methods. Grab a few records and work with them in irb. I know it helps me during development and testing.
Look at the documentation; some projects have pretty good documentation.
It's a little bit hard to understand other people's code, but try it... Read the code ;-)