Front-end testing using React and Selenium-Webdriver with Rails as Backend - ruby-on-rails

I just want to test the Front-End part. So, here is my problem:
Background
I have a robust Ruby on Rails (v3.2) backend app and an entirely new, separate front-end app built with ReactJS (v16.4).
Problem
We began testing the React app with Selenium WebDriver and Jest. We managed to test several views, but the problem arose when we started making POST requests to the Rails API.
I don't want to fill my (development) database with garbage because of the tests.
Ex: what happens when I want to test the creation of a new user?
Possible solutions considered
I was thinking of 3 solutions:
Intercept the API calls and mock them by imitating their responses (e.g. on the submit click driven by selenium-webdriver).
Make use of the Rails test environment from React.
Just revert each API call by doing the opposite, which would mean adding often-undesirable actions to the controller (e.g. doing a DELETE for each POST).

It depends on whether you want to test the whole stack (frontend/backend) or only the frontend part.
Frontend tests
If you only want to test the frontend part, go with your first solution: mock the API calls.
You will be limited if you use selenium-webdriver directly; I would recommend Nightwatch or TestCafe. TestCafe does not depend on Selenium, and Selenium is also optional in the latest versions of Nightwatch.
TestCafe includes a request mocking API: http://devexpress.github.io/testcafe/documentation/test-api/intercepting-http-requests/mocking-http-responses.html
With Nightwatch you could use nock. See Nightwatch Mock HTTP Requests.
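To illustrate the TestCafe route, a minimal sketch might look like this (the page URL, endpoint and selectors are placeholders, not taken from the question):
import { Selector, RequestMock } from 'testcafe';

// Stub the user-creation endpoint so no real record is written to the Rails database.
const createUserMock = RequestMock()
    .onRequestTo(/\/api\/v1\/users$/)
    .respond({ id: 1, email: 'test@example.com' }, 201, { 'access-control-allow-origin': '*' });

fixture `Sign-up form`
    .page `http://localhost:3000/signup`
    .requestHooks(createUserMock);

test('shows a confirmation message without touching the backend', async t => {
    await t
        .typeText('#email', 'test@example.com')
        .click('#submit')
        .expect(Selector('.flash-message').innerText).contains('created');
});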
Full stack tests
If you want to test the whole stack, you may use this approach: implement a custom API endpoint that resets your database to a clean state before or after test execution (e.g. "/myapi/clean").
You should disable access to this endpoint in production environments.
You can then implement test hooks (before/after) to call your custom API endpoint:
http://nightwatchjs.org/guide#using-before-each-and-after-each-hooks
http://devexpress.github.io/testcafe/documentation/test-api/test-code-structure.html#test-hooks
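For example, with TestCafe the hooks could call that hypothetical "/myapi/clean" endpoint before every test (the endpoint, page URL and selectors below are illustrative assumptions):
import { Selector } from 'testcafe';

// Hypothetical reset endpoint exposed by the Rails app in test environments only.
const RESET_URL = 'http://localhost:3000/myapi/clean';

fixture `User creation (full stack)`
    .page `http://localhost:3000/users/new`
    .beforeEach(async () => {
        // Put the backend database back into a known clean state before each test.
        // Uses the global fetch available in Node 18+; swap in axios/node-fetch otherwise.
        await fetch(RESET_URL, { method: 'POST' });
    });

test('creates a user through the real API', async t => {
    await t
        .typeText('#email', 'someone@example.com')
        .click('#submit')
        .expect(Selector('.flash-message').innerText).contains('User created');
});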

You could have a test environment. From my experience, garbage data generated by tests is not such a big deal. You can periodically clean it up. Or you can spin up a new environment for every test run.

In the end I decided to use Enzyme with Jest and Sinon.
Example code:
import React from "react";
import { mount } from "enzyme";
import sinon from "sinon";

// Root, ExampleContainer, ExampleData and AuthData are imported from the app itself.

let server;
let wrapper;

beforeAll(() => {
  // Sinon's fake server intercepts XHR requests made by the mounted component tree.
  server = sinon.fakeServer.create();

  const initialState = {
    example: ExampleData,
    auth: AuthData
  };

  wrapper = mount(
    <Root initialState={initialState}>
      <ExampleContainer />
    </Root>
  );
});

afterAll(() => {
  server.restore();
});

it("example description", () => {
  // Queue a canned JSON response for the POST the component makes on submit.
  server.respondWith("POST", "/api/v1/example", [
    200,
    { "Content-Type": "application/json" },
    JSON.stringify({ message: "Example message OK" })
  ]);
  server.respond();

  wrapper.update();
  expect(wrapper.find(".response").text()).toEqual("Example message OK");
});
In the code above you can see how to intercept API calls using the test DOM created by Enzyme and then mock the API responses using Sinon.

Related

NestJS microservices error with "No matching message handler"

I'm building an application with microservices communicating through RabbitMQ (request-response pattern). Everything works fine, but I still have a problem with the error "There is no matching message handler defined in the remote service." When I send a POST to my Client app, it should simply send the message with the data through the client (ClientProxy), and the Consumer app should respond. This functionality actually works, but only on every second request. I know it sounds strange, but on my first POST request there is always the error from the Client, and every second POST request works. The problem occurs throughout my whole application, so this particular POST request is just an example.
Here is the code:
Client:
@Post('devices')
async pushDevices(
  @Body(new ParseArrayPipe({ items: DeviceDto }))
  devices: DeviceDto[]
) {
  this.logger.log('Devices received');
  return this.client.send(NEW_DEVICES_RECEIVED, devices);
}
Consumer:
@MessagePattern(NEW_DEVICES_RECEIVED)
async pushDevices(@Payload() devices: any, @Ctx() context: RmqContext) {
  console.log('RECEIVED DEVICES');
  console.log(devices);
  const channel = context.getChannelRef();
  const originalMsg = context.getMessage();
  channel.ack(originalMsg);
  return 'ANSWER';
}
The Client has RMQ settings with queueOptions: { durable: true }, and the Consumer likewise has queueOptions: { durable: true } with noAck: false.
Do you have any ideas about what may cause the problem? I have tried sending the data with JSON.stringify and changing the message structure to { data: devices }, but the error is still there.
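For reference, the consumer-side wiring described above would look roughly like this; this is only a sketch, and the connection URL, queue name and module are placeholders rather than the actual project code:
// consumer main.ts (sketch)
import { Module } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';

@Module({
  controllers: [/* the controller holding the @MessagePattern handler above */],
})
class ConsumerModule {}

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(ConsumerModule, {
    transport: Transport.RMQ,
    options: {
      urls: ['amqp://user:password@localhost:5672'], // placeholder credentials
      queue: 'devices_queue',                        // placeholder queue name
      noAck: false,                                  // manual acks, matching channel.ack() in the handler
      queueOptions: { durable: true },               // must match the client's queueOptions
    },
  });
  await app.listen();
}
bootstrap();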
I had the same error and finally solved it today.
In my project there is an api-gateway running as a hybrid application that receives requests and passes data to other systems, and every second request gave the error below.
error: There is no matching message handler defined in the remote service.
Then I removed the hybrid-application part of the api-gateway, as in the code below, and the error was gone. I hope this helps you out.
// api-gateway main.ts
const app = await NestFactory.create(AppModule);

// run as a hybrid app —→ remove it
app.connectMicroservice({
  transport: Transport.RMQ,
  noACK: false,
  options: {
    urls: [`amqp://${rmqUser}:${rmqPassword}@127.0.0.1:5672`],
    queue: 'main_queue',
    queueOptions: {
      durable: false,
    },
  },
});

// run hybrid app
await app.startAllMicroservices(); // —→ remove it

await app.listen(3000);
I solved this issue by placing the @EventPattern decorator on a method of a @Controller-decorated class.
I had this error while NOT using RabbitMQ. I found very little help online around this error message outside of it being related to RabbitMQ.
For me it was an issue where I was importing a DTO from another microservice in my microservice's Controller. I had a new DTO in my microservice with a name similar to one in another microservice, and I accidentally selected the wrong one from the auto-import list.
Since there wasn't any real indicator that my build was bad, just this error, I wanted to share in case others made the same mistake I did.
I encountered this same issue today and could not find any solution online and stumbled upon your question. I solved it in a hacky way and am not sure how it will behave when the application scales.
I basically added one @EventPattern (@MessagePattern in your case) handler in the controller of the producer microservice itself, and I called the client.emit() function twice.
So essentially the first time it gets consumed by the function that is in the producer itself and the second emit actually goes to the actual consumer.
This way only one POST call is sufficient.
Producer Controller:
@EventPattern('video-uploaded')
async test() {
  return 1;
}
Producer client:
async publishEvent(data: VideosDto) {
  this.client.emit('video-uploaded', data);
  this.client.emit('video-uploaded', data);
}
I experienced the same error in another project of mine, and after some research I found out that the problem lies in the way RabbitMQ distributes messages, namely round-robin. In my first project I solved the issue by creating a second queue; in my second project I'm using the package @golevelup/nestjs-rabbitmq instead of the default NestJS library, as it is much more configurable. I recommend reading this question
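A rough sketch of that second-queue fix, assuming the gateway's hybrid part gets its own queue while the consumer keeps the original one (names and credentials are placeholders):
// api-gateway main.ts (sketch)
import { Module } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { Transport } from '@nestjs/microservices';

@Module({})
class AppModule {}

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // The hybrid part of the gateway listens on its own queue, so messages meant
  // for the consumer's 'main_queue' are no longer round-robined to the gateway.
  app.connectMicroservice({
    transport: Transport.RMQ,
    options: {
      urls: ['amqp://user:password@localhost:5672'],
      queue: 'gateway_queue',              // separate queue; the consumer keeps 'main_queue'
      queueOptions: { durable: false },
    },
  });
  await app.startAllMicroservices();
  await app.listen(3000);
}
bootstrap();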

How to purposely delay an AJAX response while testing with Capybara?

I have a React component that mimics the "link preview" feature that most modern social media sites have. You type in a link and it fetches the image, title, etc...
I do this by having the React component make an AJAX call back to my server to fetch the URL preview data.
While it's fetching I show an intermediate "loading" state (i.e. some loading icon or spinning wheel)
The relevant React snippet looks like
this.setState({ isLoadingAttachment: true })
return $.ajax({
  type: "GET",
  url: some_url,
  dataType: "json",
  contentType: "application/json",
}).success(function(response){
  // Successful! Do success stuff
  component.setState({ isLoadingAttachment: false })
}).error(function(response) {
  // Uh oh! Handle failure stuff
  component.setState({ isLoadingAttachment: false })
});
Note how the isLoadingAttachment state variable is only valid for a brief second while the server is doing the fetching. Both the success and error scenarios immediately disable it.
I'd like to test some functionality during my "loading" state with my Capybara feature specs. I've mocked all the web calls and the data to be returned by the server, but it all happens so quickly that it passes through the "loading" state before I can even run any expect().. statement on it. I also purposely don't call wait_for_ajax so the page will go ahead without waiting for the ajax, but it's still too fast.
Lastly I also tried purposefully delaying the server call by 1.0 second, but that didn't work either. I assume because the whole thing is single threaded somehow?
# `foo` is an arbitrary method called during the server-side execution
allow_any_instance_of(MyController).
  to receive(:foo) { sleep(1.0) }.and_call_original
Any thoughts on how I could do this?
Thanks!
Capybara starts the app server in a different thread than the tests. However, if you're using the default Capybara.server setting, you may have issues with your app calling back to itself, since it uses WEBrick by default; you should specify Capybara.server = :puma instead.
Beyond that, mocking responses is generally a bad idea in feature specs (which are meant to be end-to-end tests), since it means you're no longer testing your app's code the way it would run in production. A better solution is to use something like puffing-billy - https://github.com/oesmith/puffing-billy - to mock web responses outside of your app's code, which would allow you to do something like
proxy.stub('https://example.com/proc/').and_return(Proc.new { |params, headers, body|
  sleep 2
  { :text => "Your results" }
})

rails rspec capybara cannot get my internal api to connect

I'm constructing a basic Rails app that I'm refactoring to do the heavy lifting on an external docker/compute-as-a-service, i.e. iron.io - the 'worker'.
In the refactoring I created a Grape API so the remote 'worker' can report processing status and notify the server when processing is done. The user interface then uses AJAX to poll the local server for updates. The API and basic tests are all OK. It also works in development with Delayed::Job running the worker.
However, I cannot seem to get my Capybara tests to work end to end, as the Delayed::Job worker process making the HTTP request back to the server always gets connection refused.
It works fine if I run a Rails server in parallel with the tests (RAILS_ENV="test" rails s -p 3001) and then make sure the ENV variable is set to port 3001.
I have tried
various combinations of Capybara.configure (as below)
in the test: visit url (where url = "http://#{Capybara.server_host}:#{Capybara.server_port}") to see if that perhaps 'kicks off' the server
various webdrivers (poltergeist, selenium etc)
Any thoughts, experience or guidance much appreciated
Ben
note: in the code
the domain & port are populated via ENV[''] variables (these environment variables will be set in the running environment, iron.io)
port & app_host set as below
ENV variables populated in the test
Capybara.configure do |config|
  config.run_server = true
  config.server_port = "9876"
  config.app_host = "http://127.0.0.1:9876"
end
rails 4.1.0
rspec 3.4.0
capybara 2.7.0
poltergeist 1.5.1
selenium 2.53.0
I think you're trying to have your test do too much. I would recommend that you "mock out" the interactions with the other service to make the tests self-sufficient. In the past I have added a test.js that:
Mocks out ajax on the page
Checks for specific requests to have been made (page.evaluate_script)
Responds back to them in the way your external service will (execute_script)
Like this:
// test.js
$.ajax = function(settings) {
  window.__ajaxRequests || (window.__ajaxRequests = []);
  window.__ajaxRequests.push(settings);
  return {
    done: function(cb) { settings.__done = cb; }
  };
}
# spec/features/jobs_spec.rb
visit '/jobs'
click_button 'Start job'
requests = page.evaluate_script('window.__ajaxRequests')
expect(requests.size).to eq(1)
expect(requests[0]['url']).to eq('http://jobs.yourproduct.com/start')
...
expect(page).not_to have_content('Job completed')
page.execute_script('window.__ajaxRequests[0].__done({data:{status:"complete"}})')
expect(page).to have_content('Job completed')

Postman: How to make multiple requests at the same time

I want to POST data from Postman Google Chrome extension.
I want to make 10 requests with different data and it should be at the same time.
Is it possible to do such in Postman?
If yes, can anyone explain to me how this can be achieved?
I guess there's no feature in Postman for running concurrent tests.
If I were you, I would consider Apache JMeter, which is used exactly for such scenarios.
Regarding Postman, the only thing that could more or less meet your needs is Postman Runner.
There you can specify the details:
number of iterations,
upload CSV file with data for different test runs, etc.
The runs won't be concurrent, only consecutive.
Do consider jMeter (you might like it).
Postman doesn't do that but you can run multiple curl requests asynchronously in Bash:
curl url1 & curl url2 & curl url3 & ...
Remember to add an & after each request which means that request should run as an async job.
Postman, however, can generate a curl snippet for your request: https://learning.getpostman.com/docs/postman/sending_api_requests/generate_code_snippets/
I don't know if this question is still relevant, but there is such a possibility in Postman now. They added it a few months ago.
All you need to do is create a simple .js file and run it via Node.js. It looks like this:
var path = require('path'),
    async = require('async'), // https://www.npmjs.com/package/async
    newman = require('newman'),
    parametersForTestRun = {
        collection: path.join(__dirname, 'postman_collection.json'), // your collection
        environment: path.join(__dirname, 'postman_environment.json'), // your env
    };

parallelCollectionRun = function(done) {
    newman.run(parametersForTestRun, done);
};

// Runs the Postman sample collection thrice, in parallel.
async.parallel([
    parallelCollectionRun,
    parallelCollectionRun,
    parallelCollectionRun
],
function(err, results) {
    err && console.error(err);
    results.forEach(function(result) {
        var failures = result.run.failures;
        console.info(failures.length ? JSON.stringify(failures.failures, null, 2) :
            `${result.collection.name} ran successfully.`);
    });
});
Then just run this .js file ('node fileName.js' in cmd).
More details here
Not sure if people are still looking for simple solutions to this, but you are able to run multiple instances of the "Collection Runner" in Postman. Just create a runner with some requests and click the "Run" button multiple times to bring up multiple instances.
Run all collections in a folder in parallel:
'use strict';

global.Promise = require('bluebird');
const path = require('path');
const newman = Promise.promisifyAll(require('newman'));
const fs = Promise.promisifyAll(require('fs'));

const environment = 'postman_environment.json';
const FOLDER = path.join(__dirname, 'Collections_Folder');

let files = fs.readdirSync(FOLDER);
files = files.map(file => path.join(FOLDER, file));
console.log(files);

Promise.map(files, file => {
    return newman.runAsync({
        collection: file, // your collection
        environment: path.join(__dirname, environment), // your env
        reporters: ['cli']
    });
}, {
    concurrency: 2
});
In Postman's Collection Runner you can't make simultaneous asynchronous requests, so use Apache JMeter instead. It allows you to add multiple threads and add a Synchronizing Timer.
If you are only doing GET requests and you need another simple solution from within your Chrome browser, just install the "Open Multiple URLs" extension:
https://chrome.google.com/webstore/detail/open-multiple-urls/oifijhaokejakekmnjmphonojcfkpbbh?hl=en
I've just run 1500 URLs at once; it did lag Google Chrome a bit, but it works.
The Runner option is now on the lower right side of the panel
If you need to generate more consecutive requests (instead of quickly clicking the SEND button), you can use the Runner. Please note it is not a true "parallel request" generator.
File -> New Runner Tab
Now you can drag and drop your requests from a Collection, then keep checked only the requests you would like the Runner to generate, setting 10 iterations (to generate 10 requests) and a delay of, for example, 0 (to make it as fast as possible).
The easiest way is to get the Google Chrome "Talend API Tester" extension.
Go to Help and type in "Create Scenario",
...or just go to this link: https://help.talend.com/r/en-US/Cloud/api-tester-user-guide/creating-scenario
I was able to send several POST API calls simultaneously.
You can use Fiddler with traffic capture started to record manual queries from Postman, then select as many of them as you want in Fiddler's sessions list and replay them (press the R key) - they will run in parallel.
https://docs.telerik.com/fiddler/generate-traffic/tasks/resendrequest
You can run multiple instances of postman Runner and run the same collection with different data files in each instance.
Open multiple Postman instances; they replicate the request and run concurrently.

Ember/Rails end-to-end testing error

I have an Ember CLI app with a Rails back-end API. I am trying to set up end-to-end testing by configuring the Ember app test suite to send requests to a copy of the Rails API. My tests are working, but I am getting the following strange error frequently:
{}
Expected: true
Result: false
at http://localhost:7357/assets/test-support.js:4519:13
at exports.default._emberTestingAdaptersAdapter.default.extend.exception (http://localhost:7357/assets/vendor.js:52144:7)
at onerrorDefault (http://localhost:7357/assets/vendor.js:42846:24)
at Object.exports.default.trigger (http://localhost:7357/assets/vendor.js:67064:11)
at Promise._onerror (http://localhost:7357/assets/vendor.js:68030:22)
at publishRejection (http://localhost:7357/assets/vendor.js:66337:15)
This seems to occur whenever a request is made to the server. An example test script which would recreate this is below. This is a simple test which checks that if a user clicks a 'login' button without entering any email/password information they are not logged in. The test passes, but additionally I get the above error before the test passes. I think this is something to do with connecting to the Rails server, but have no idea how to investigate or fix it - I'd be very grateful for any help.
Many thanks.
import Ember from 'ember';
import { module, test } from 'qunit';
import startApp from 'mercury-ember/tests/helpers/start-app';

module('Acceptance | login test', {
  beforeEach: function() {
    this.application = startApp();
  },
  afterEach: function() {
    Ember.run(this.application, 'destroy');
  }
});

test('Initial Login Test', function(assert) {
  visit('/');
  andThen(function() {
    // Leaving identification and password fields blank
    click(".btn.login-submit");
    andThen(function() {
      assert.equal(currentSession().get('user_email'), null, "User fails to login when identification and password fields left blank");
    });
  });
});
You can check in the Network panel of Chrome or Firefox developer tools that the request is being made. At least with ember-qunit you can do this by getting ember-cli to run the tests within the browser rather than with Phantom.js/command-line.
That would help you figure out if it's hitting the Rails server at all (the URL could be incorrect or using the wrong port number?)
You may also want to see if there is code that needs to be torn down. Remember that in a test environment the same browser instance is used so all objects need to be torn down; timeouts/intervals need to be stopped; events need to be unbound, etc.
We hit that issue a few times: a utility that sent AJAX requests every 30 seconds caused no errors in production, but in testing it was a problem because it bound itself to the window (outside of the iframe), so it kept making requests even after the tests were torn down.
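As a rough illustration of the kind of teardown meant here (the polling utility and its wiring into the QUnit hooks below are hypothetical, not code from the original app):
import { module } from 'qunit';

// Hypothetical polling utility of the sort described above.
class StatusPoller {
  private timer?: ReturnType<typeof setInterval>;

  start(url: string, intervalMs = 30000): void {
    this.timer = setInterval(() => fetch(url), intervalMs);
  }

  stop(): void {
    if (this.timer !== undefined) {
      clearInterval(this.timer);
      this.timer = undefined;
    }
  }
}

const poller = new StatusPoller();

module('Acceptance | polling teardown', {
  beforeEach() {
    poller.start('/api/status');
  },
  afterEach() {
    // Stop the interval so no request fires (and hits the global error hook)
    // after the test and its application instance have been torn down.
    poller.stop();
  }
});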
