I am building a website with Rails on AWS and I am trying to determine the best way to stress-test it while also getting a rough idea of the cost I will be paying per user. I have looked at tools like Selenium and I am curious whether I could do something similar with Postman.
My objectives are:
Observe what kind of load the server would be under during the test and how the CPU and memory are affected.
See how the generated load would affect the CPU cycles on the system, which in turn drive the cost AWS charges me.
Through Postman I can easily generate REST calls to my Rails server and simulate user interaction. If I created some kind of multithreaded application that made many such calls to the server, would that be an efficient way to measure these objectives?
If not, is there a tool that would help me with either (or both) of these objectives?
thanks,
You can use BlazeMeter to do the load test.
This AWS blog post shows you how you can do it.
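If you also want a quick, rough version of the multithreaded caller you describe, a minimal Ruby sketch could look like the one below (the target URL, thread count, and request count are placeholders to tune); while it runs, watch the instance's CPU and memory in CloudWatch or top to get a feel for the cost side.

require 'net/http'
require 'uri'

THREADS  = 10    # concurrent "users" (placeholder, tune as needed)
REQUESTS = 100   # requests per thread (placeholder)
TARGET   = URI('http://your-rails-app.example.com/api/items')  # placeholder URL

workers = THREADS.times.map do
  Thread.new do
    REQUESTS.times do
      response = Net::HTTP.get_response(TARGET)
      warn "Unexpected status: #{response.code}" unless response.is_a?(Net::HTTPSuccess)
    end
  end
end

workers.each(&:join)
puts "Done: sent #{THREADS * REQUESTS} requests"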
I've been working on a project to which I want to add automated tests. I have already added some unit tests, but I'm not confident in the process I've been using. I don't have much experience with automated tests, so I would like to ask for some advice.
The project is integrated with our web API, so it has a login process. Depending on the logged-in user, the API provides a configuration file which allows or disallows access to certain modules and permissions within the mobile application. We also have a sync process in which the app calls several API methods to download files (PDFs, HTML, videos, etc.) and also receives a lot of data through JSON files. The user basically doesn't have to enter data, just use the information received in the sync process.
What I have done to add unit tests in this scenario so far is to simulate a logged-in user, add some fixture objects to that user, and test them.
I was able to test the web service integration: I used Nocilla to return fake JSON and assert the result. So far I have only been able to test individual requests, and I still don't know how I should test the sync process.
I'm having a hard time creating unit tests for my view controllers. Should I unit test just the business logic and do all the rest with tools like KIF/Calabash?
Is there an easy way to set up the fixture data and files?
Thanks!
Everybody's mileage may vary but here's what we settled on and why.
Unit tests: We use a similar strategy. The only difference is that we use OHTTPStubs instead of Nocilla, because we saw more of the flexibility we needed there and were happy to trade off Nocilla's easier syntax.
Doing more complicated (non-single-query) test cases quickly lost its luster because we were essentially rebuilding whole HTTP request/response flows, and that wasn't very "unit". For functional tests, we did end up adopting KIF (at least for dev-focused efforts, assuming you don't have a separate QA department) for a few reasons:
We didn't buy/need the multi-language abstraction layer afforded by Appium.
We wanted to be able to run tests on many devices per build server.
We wanted more whitebox testing, and while Subliminal was attractive, we didn't want to build hooks into our main app code.
Testing view controller logic (anything that's not unit-oriented) is definitely much more useful with KIF/Calabash or something similar, so that's the path I would suggest.
For bonus points, here are some other things we did. Goes to show what you could do I guess:
We have a basic proof of concept that binds KIF commands to a JSON-RPC server. So you can run a test target on a device and have that device respond to HTTP requests, which will then fire off test cases or KIF commands. One of the advantages of this is that you can reuse some of the test code you wrote for a single device in multiple-device test cases.
Our CI server builds integration tests as a downstream build of our main build (which includes unit tests). When the build starts, we use XCTool to precompile the tests, and then we have some scripts that start a QuickTime screen recording, run the KIF tests, export the results, and archive everything on our CI server so we can see a live test run along with the test logs.
Not really part of this answer but happy to share that if you ping me.
I am currently evaluating how to test a rather big and complex web application, based on Rails 4 on the server side and EmberJS on the client side. In our app, the client exclusively communicates through a restful JSON API with the server.
We have done a lot of unit testing based on Konacha so far and now want to set up integration/acceptance tests too. We are not sure whether we should write end-to-end tests, i.e. tests that include a running instance of our server, or whether we should integration-test the API and the client side separately.
Our preferred choice at the moment is end-to-end testing, because we fear that integration-testing the API and the client separately would mean twice the effort of creating and maintaining tests, and that there might be tiny little peculiarities of the communication between API and client which we could not catch.
Well, we like modern and fast testing frameworks like Konacha, so we don't really want to use Selenium, not only because it feels a little bit old but also because its performance is quite poor. Still, you need to control the instantiation of mock data on the server and the reset of the server, which is why we came up with the following approach:
We implemented a Testing API which conceptually is used to control the state of the server, e.g. it has the following methods:
GET /api/test/setup # Simple bootstrapping of the database, e.g. populate table with ISO language codes etc...
GET /api/test/reset # Reset the database, using `database_cleaner` gem
A Konacha test case would then call setup and reset before and after each test case, respectively.
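Roughly, the Rails side of this Testing API looks like the following (controller and route names are simplified/made up here; we use the database_cleaner gem and only mount these routes in the test environment):

# config/routes.rb -- expose these endpoints only in the test environment
Rails.application.routes.draw do
  if Rails.env.test?
    namespace :api do
      namespace :test do
        get 'setup', to: 'state#setup'
        get 'reset', to: 'state#reset'
      end
    end
  end
end

# app/controllers/api/test/state_controller.rb
module Api
  module Test
    class StateController < ApplicationController
      # GET /api/test/setup -- simple bootstrapping, e.g. ISO language codes
      def setup
        Language.find_or_create_by(code: 'en')   # Language is a made-up model
        head :ok
      end

      # GET /api/test/reset -- wipe the database via the database_cleaner gem
      def reset
        DatabaseCleaner.clean_with(:truncation)
        head :ok
      end
    end
  end
end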
What do you think about this approach?
I am not sure what I would call testing the API and the client separately, but even if you think about running that kind of test, you should still go for the end-to-end tests.
So yes, I think your idea of going for end-to-end testing is very good.
Your idea of setting up simple commands to allow test automation for your system (the setup and reset commands) is very good as well. Be prepared to add more during automation: while an end-to-end test is conceptually a black-box test, in reality it is often a grey-box test, i.e. you will need to access the internal state of your system. I would call this the "operation and maintenance interface" of the system under test.
I am a newbie to Rails. I used feature flags when I was in the Java world, and I found that there are a few gems in Rails (rollout and others) for doing this. But how do I turn a feature on or off on the fly in Rails?
In Java we can use an MBean to toggle features on the fly. Any ideas or pointers on how to do this? I don't want to restart my servers every time code is deployed.
Unless you have a way of communicating with all your processes at once, which is non-standard, you'll need some kind of centralized configuration system. Redis is a really fast key-value store that works well for this, but a database can also do the job if a few milliseconds per page load to figure out which features to enable isn't a big deal.
If you're only deploying on a single server, you could also use a static YAML or JSON configuration file that's read before each request is processed. The overhead of this is almost immeasurable.
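For example, with the rollout gem mentioned in the question backed by Redis, toggling a feature at runtime without a restart could look roughly like this (the feature name and percentage below are made up):

# config/initializers/rollout.rb
require 'redis'
require 'rollout'

$redis   = Redis.new            # point this at the Redis instance shared by all app servers
$rollout = Rollout.new($redis)

# In a controller or view -- the flag is checked per request, so no restart is needed:
#   if $rollout.active?(:new_dashboard, current_user)   # :new_dashboard is a made-up feature
#     ... render the new thing ...
#   end

# From a Rails console (or an admin action) on any machine sharing the same Redis:
#   $rollout.activate(:new_dashboard)                  # turn on for everyone
#   $rollout.activate_percentage(:new_dashboard, 10)   # or for 10% of users
#   $rollout.deactivate(:new_dashboard)                # turn off again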
I am developing a live-update feature for my application. So far I have created almost all the unit tests, but I have no idea how to test a specific class that connects to an FTP server and downloads new versions.
To test this class, should I create an FTP test server and use it in my unit tests? If so, how can I make sure this FTP server is always consistent with my tests? Should I manually create every file I will need before the tests begin, or should I automate this in my test class (setup and teardown methods)?
This question also applies to unit testing classes that connect to any kind of server.
EDIT
I am already mocking my FTP class so I don't always need to connect to the FTP server in other tests.
Let me see if I understood correctly what Warren said in his comment:
I would argue that once you're talking to a separate app over TCP/IP
we should call that "integration tests". One is no longer testing a
unit or a method, but a system.
When a unit test needs to communicate with another app (which could be an HTTP server or FTP server), is it no longer a unit test but an integration test? If so, am I doing it wrong by trying to use unit testing techniques to create this test? Is it correct to say that I should not unit test this class? That would make sense to me, because it seems to be a lot of work for a unit test.
In testing, the purpose is always first to answer the question: what is tested - that is, the scope of the test.
So if you are testing an FTP server implementation, you'll have to create an FTP client.
If you are testing an FTP client, you'll have to create an FTP server.
You'll therefore have to narrow the scope of the test until you reach a unitary level.
For your purpose, it may be, e.g.:
Getting a list of the current files installed for the application;
Getting a list of the files available remotely;
Getting a file update;
Checking that a file is correct (checksum?);
and so on...
Each tested item should have some mocks and stubs. See this article about the difference between the two. In short (AFAIK), a stub is just an emulation object which always works. And a mock (which should be unique in each test) is the element which may change the test result (pass or fail).
For the exact purpose of an FTP connection, you may e.g. (when testing the client side) have some stubs which return a list of files, and a mock which will exercise several possible issues of the FTP server (timeout, connection lost, wrong content). Then your client side shall react as expected. Your mock may be a true FTP server instance, but one which behaves in a controlled way to trigger all potential errors. Typically, each error shall raise an exception, which is to be tracked by the test units in order to pass/fail each test.
It is a bit difficult to write good testing code. A test-driven approach is a bit time-consuming at first, but it is always better in the long term. A good book is mandatory here, or at least some reference articles (like Martin Fowler's, as linked above). In Delphi, using interfaces and the SOLID principles may help you write such code and create stubs/mocks for your tests.
From my experience, every programmer can sometimes get lost writing tests... good test writing can be more time-consuming than feature writing in some circumstances... you have been warned! Each test shall be seen as a feature, and its cost shall be evaluated: is it worth it? Is another test not more suitable here? Is my test decoupled from the feature it is testing? Is it not already tested? Am I testing my code, or a third-party/library feature?
Off topic, but my two cents: HTTP/1.1 may be a better candidate nowadays than FTP, even for file updates. You can resume an HTTP connection, load HTTP content in chunks in parallel, and the protocol is more proxy-friendly than FTP. It is also much easier to host HTTP content than FTP (and some FTP servers have known security issues). Most software updates are performed via HTTP/1.1 these days, not FTP (e.g. Microsoft products or most Linux repositories).
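To illustrate the resume point: with HTTP/1.1 a client can send a Range header to continue a partial download, something plain FTP makes much harder. A rough Ruby sketch (the URL and file name are hypothetical):

require 'net/http'
require 'uri'

uri  = URI('http://updates.example.com/app.zip')            # hypothetical update URL
have = File.exist?('app.zip') ? File.size('app.zip') : 0    # bytes already downloaded

Net::HTTP.start(uri.host, uri.port) do |http|
  request = Net::HTTP::Get.new(uri)
  request['Range'] = "bytes=#{have}-"                        # ask only for the missing part
  http.request(request) do |response|
    if response.is_a?(Net::HTTPPartialContent)               # 206: server honoured the Range
      File.open('app.zip', 'ab') do |file|
        response.read_body { |chunk| file.write(chunk) }     # append the resumed bytes
      end
    end
  end
end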
EDIT:
You may argue that you are making integration tests, when you use a remote protocol. It could make sense, but IMHO this is not the same.
To my understanding, integration tests take place when you let all your components work together as in the real application, then check that they are working as expected. My proposal about FTP testing is that you mock an FTP server in order to explicitly test all potential issues (timeout, connection or transmission error...). This is something other than integration testing: code coverage is much bigger, and you are only testing one part of the code, not the whole code integration. Using a remote connection does not by itself turn your tests into integration tests: this is still unitary testing.
And, of course, integration and system tests shall be performed after the unitary tests. But FTP client unit tests can mock an FTP server, running it locally while exercising all the potential issues which may occur out in the real big world wide web.
If you are using Indy 10's TIdFTP component, then you can utilize Indy's TIdIOHandlerStream class to fake an FTP connection without actually making a physical connection to a real FTP server.
Create a TStream object, such as TMemoryStream or TStringStream, that contains the FTP responses you expect TIdFTP to receive for all of the commands it sends (use a packet sniffer to capture those beforehand to give you an idea of what you need to include), and place a copy of your update file in the local folder where you would normally download to. Create a TIdIOHandlerStream object and assign the TStream as its ReceiveStream, then assign that IOHandler to the TIdFTP.IOHandler property before calling Connect().
For example:
ResponseData := TStringStream.Create(
  '220 Welcome' + EOL +
  ... + // login responses here, etc...
  '150 Opening BINARY mode data connection for filename.ext' + EOL +
  '226 Transfer finished' + EOL +
  '221 Goodbye' + EOL);

IO := TIdIOHandlerStream.Create(FTP, ResponseData); // TIdIOHandlerStream takes ownership of ResponseData by default
FTP.IOHandler := IO;
FTP.Passive := False; // Passive=True does not work under this setup

FTP.Connect;
try
  FTP.Get('filename.ext', 'c:\path\filename.ext');
  // copy your test update file to 'c:\path\filename.ext'...
finally
  FTP.Disconnect;
end;
Unit tests are supposed to be fast, lightning fast. Anything that slows them down discourages you from wanting to run them.
They are also supposed to be consistent from one run to another. Testing an actual file transfer would introduce the possibility for random failures in your unit tests.
If the class you are testing does nothing more than wrap the API of the FTP library you are using, then you've reached one of the boundaries of your application and you don't need to unit test it. (Well, sometimes you do. It's called exploratory testing, but those tests are usually thrown away once you get your answer.)
If, however, there is any logic in the class, you should try to test it in isolation from the actual API. You do this by creating a wrapper for the FTP API. Then in your unit tests you create a test double that can stand in as a replacement for the wrapper. There are lots of variations that go by different names: stub, fake, mock object. The bottom line is that you want to make sure your unit tests are isolated from any external influence. A unit test with sporadic behavior is less than useless.
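To illustrate the shape of that wrapper-plus-test-double setup (sketched in Ruby here purely for brevity; in Delphi the same idea maps onto an interface with a real and a fake implementation, and all names below are made up):

# Thin wrapper: the only class that touches the real FTP library/network.
class FtpGateway
  def fetch(remote_path)
    # the real FTP library call lives here and nowhere else
  end
end

# Hand-rolled fake for unit tests: no sockets, fully deterministic.
class FakeFtpGateway
  def initialize(files = {})
    @files = files    # maps remote path => file contents
  end

  def fetch(remote_path)
    @files.fetch(remote_path) { raise 'connection lost' }   # simulates a failure path
  end
end

# The updater depends only on the gateway's interface, so tests can inject the fake.
class Updater
  def initialize(gateway)
    @gateway = gateway
  end

  def latest_version
    @gateway.fetch('version.txt').strip
  end
end

# In a unit test:
updater = Updater.new(FakeFtpGateway.new('version.txt' => "2.1.0\n"))
raise 'unexpected version' unless updater.latest_version == '2.1.0'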
Testing the actual file transfer mechanism should be done in integration testing, which is usually run less frequently because it's slower. Even in integration testing you'll want to control the test environment as much as possible (i.e. test against an FTP server on the local network that is configured to mimic the production server).
And remember, you'll never catch everything up front. Errors will slip through no matter how good the tests are. Just make sure when they do that you add another test to catch them the next time.
I would recommend either buying or checking out a copy of xUnit Test Patterns by Gerard Meszaros. It's a treasure trove of useful information on what/when/how to unit test.
Just borrow the FTP or HTTP Server demo that comes with whatever socket component set you prefer (Indy, ICS, or whatever). Instant test server.
I would put it into a tools folder to go with my unit tests. I might write some code that checks if TestFtpServer.exe is already live, and if not, launch it.
I would keep it out of my unit test app's process memory space, thus the separate process.
Note that by the time you get to FTP server operations, unit testing should really be called "integration testing".
I would not manually create files from my unit test. I would expect the code to be checked out from version control and built as-is from a batch file, which runs my test program; that program knows about a sub-folder called Tools that contains EXEs, and maybe folders called ServerData and LocalData that hold the data that starts out on the server and gets transferred down to my local unit test app. Maybe you can hack your demo server to make it terminate a session part way through (when you want to test failures), but I still don't think you're going to get good coverage.
Note: if you're doing automatic updates, I think no amount of unit testing is going to cut it. You need to deal with a lot of potential internet-related issues. What happens when your hostname doesn't resolve? What happens when a download gets part way through and fails? Automatic updating is not a great match for the capabilities of unit testing.
Write a couple of focused integration tests for the one component which knows how to communicate with an FTP server. For those tests you will need to start an FTP server before each test, put any files needed by the test on it, and shut the server down after the test.
With that done, in all other tests you won't use the component which really connects to an FTP server, but you will use a fake or mock version of it (which is backed by some in-memory data structure instead of real files and network sockets). That way you can write unit tests, which don't need an FTP server or network connection, for everything else except the FTP client component.
In addition to those tests, it might be desirable to also have some end-to-end tests which launch the whole program (unlike the component-level focused integration tests) and connect to a real FTP server. End-to-end tests can't cover all corner cases (unlike unit tests), but they can help to flush out integration issues.