rails rspec capybara cannot get my internal api to connect - ruby-on-rails

I'm refactoring a basic Rails app to do its heavy lifting on an external docker/compute-as-a-service platform, i.e. iron.io (the 'worker').
As part of the refactoring I created a Grape API so the remote 'worker' can notify the server when processing is done. The user interface then uses ajax to poll the local server for updates. The API and its basic tests are all ok, and it also works in development with Delayed::Job running the worker.
However, I cannot get my Capybara tests to work end to end: the Delayed::Job process making the HTTP request back to the server always gets connection refused.
It works fine if I run a Rails server in parallel with the tests (RAILS_ENV="test" rails s -p 3001) and make sure the ENV variable is set to port 3001.
I have tried:
various combinations of Capybara.configure (as below)
calling visit url in the test (where url = "http://#{Capybara.server_host}:#{Capybara.server_port}") to see if that 'kicks off' the server
various webdrivers (poltergeist, selenium, etc.)
Any thoughts, experience or guidance much appreciated
Ben
Note: in the code
the domain & port are read from ENV[''] variables (in production these environment variables are set in the iron.io runtime environment)
server_port & app_host are set as below
the ENV variables are populated in the test
Capybara.configure do |config|
  config.run_server = true
  config.server_port = "9876"
  config.app_host = "http://127.0.0.1:9876"
end
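And the ENV variables are populated in the test roughly like this (a sketch only; API_HOST/API_PORT are placeholders for my actual variable names):
# spec/support/capybara.rb (sketch; API_HOST/API_PORT are placeholders)
RSpec.configure do |config|
  config.before(:each, type: :feature) do
    # point the worker's callback URL at the server Capybara runs
    ENV['API_HOST'] = Capybara.server_host
    ENV['API_PORT'] = Capybara.server_port.to_s
  end
end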
rails 4.1.0
rspec 3.4.0
capybara 2.7.0
poltergeist 1.5.1
selenium 2.53.0

I think you're trying to have your test do too much. I would recommend that you "mock out" the interactions with the other service to make the tests self-sufficient. In the past I have added a test.js that:
Mocks out ajax on the page
Checks for specific requests to have been made (page.evaluate_script)
Responds back to them in the way your external service will (execute_script)
Like this:
// test.js
// Replace jQuery's ajax with a stub that records every request and
// exposes its `done` callback so the test can resolve it later.
$.ajax = function(settings) {
  window.__ajaxRequests || (window.__ajaxRequests = []);
  window.__ajaxRequests.push(settings);
  return {
    done: function(cb) { settings.__done = cb; }
  };
};
# spec/features/jobs_spec.rb
visit '/jobs'
click_button 'Start job'

# the stub from test.js recorded the request the page tried to make
requests = page.evaluate_script('window.__ajaxRequests')
expect(requests.size).to eq(1)
expect(requests[0]['url']).to eq('http://jobs.yourproduct.com/start')
...
expect(page).not_to have_content('Job completed')
# resolve the stubbed request the way the external service would
page.execute_script('window.__ajaxRequests[0].__done({data:{status:"complete"}})')
expect(page).to have_content('Job completed')
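One way to get test.js onto the page only during tests (rather than shipping it with the app) is to inject it from the spec itself after the page loads, for example with a small helper like this sketch (the path to test.js is an assumption, adjust it to wherever you keep the file):
# spec/support/ajax_stub.rb (helper sketch)
module AjaxStub
  def stub_ajax
    # inject the ajax stub into the page Capybara just loaded
    page.execute_script File.read(Rails.root.join('spec', 'support', 'test.js'))
  end
end

RSpec.configure { |config| config.include AjaxStub, type: :feature }
Then call stub_ajax right after visit '/jobs' and before clicking anything that fires a request.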

Related

Front-end testing using React and Selenium-Webdriver with Rails as Backend

I just want to test the Front-End part. So, here is my problem:
Background
I have a robust Ruby on Rails (v3.2) backend app and an entirely new and separate front-end app with ReactJS (v16.4).
Problem
We began testing the React app with the help of Selenium-Webdriver and JestJS. We managed to test several views, but the problem arose when we made POST requests to the Rails API.
I don't want to fill my (development) database with garbage because of the tests.
Ex: what happens when I want to test the creation of a new user?
Possible solutions thought
I was thinking of 3 solutions:
Intercept the API calls and mock them by imitating their responses (ex: on the submit click, using selenium-webdriver).
Make use of the Rails test environment through React.
Just revert each API call by doing the opposite, which would mean adding often undesirable actions to the controller (ex: doing a delete for each post).
It depends on whether you want to test the whole stack (frontend/backend) or only the frontend part.
Frontend tests
If you only want to test the frontend part, go with your first solution: mock the API calls.
You will be limited if you use selenium-webdriver directly, so I would recommend using Nightwatch or TestCafe. TestCafe does not depend on Selenium, and Selenium is also optional in the latest versions of Nightwatch.
TestCafe includes a request mocking API: http://devexpress.github.io/testcafe/documentation/test-api/intercepting-http-requests/mocking-http-responses.html
With Nightwatch you could use nock. See Nightwatch Mock HTTP Requests
Full stack tests
If you want to test the whole stack, you may use this approach: implement a custom API endpoint that resets your database to a clean state before or after test execution (like "/myapi/clean").
You should disable access to this endpoint in production environments.
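A minimal sketch of what such an endpoint could look like on the Rails side (controller and route names are invented for the example, and database_cleaner is just one way to do the wiping):
# config/routes.rb (inside the routes.draw block, sketch only)
post '/myapi/clean', to: 'test_support#clean' unless Rails.env.production?

# app/controllers/test_support_controller.rb
class TestSupportController < ActionController::Base
  # Truncates all tables so each end-to-end run starts from a known state.
  # Assumes the database_cleaner gem; any other wiping strategy works too.
  def clean
    return head(:forbidden) if Rails.env.production?
    DatabaseCleaner.clean_with(:truncation)
    head :ok
  end
end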
You can then implement test hooks (before/after) that call your custom API endpoint:
http://nightwatchjs.org/guide#using-before-each-and-after-each-hooks
http://devexpress.github.io/testcafe/documentation/test-api/test-code-structure.html#test-hooks
You could have a test environment. From my experience, garbage data generated by tests is not such a big deal. You can periodically clean it up. Or you can spin up a new environment for every test run.
Finally I decided to use enzyme with jest and sinon.
example code:
import React from "react";
import { mount } from "enzyme";
import sinon from "sinon";
// Root, ExampleContainer, ExampleData and AuthData come from the app under test.

let server;
let wrapper;

beforeAll(() => {
  // fake XHR server that intercepts the component's API calls
  server = sinon.fakeServer.create();
  const initialState = {
    example: ExampleData,
    auth: AuthData
  };
  wrapper = mount(
    <Root initialState={initialState}>
      <ExampleContainer />
    </Root>
  );
});

afterAll(() => {
  server.restore();
});

it("example description", () => {
  // queue a canned response for the POST the component makes
  server.respondWith("POST", "/api/v1/example", [
    200,
    { "Content-Type": "application/json" },
    '{ "message": "Example message OK" }'
  ]);
  server.respond();
  expect(wrapper.find(".response").text()).toEqual("Example message OK");
});
In the code above we can see how to intercept API calls using the test DOM created by Enzyme, and then mock the API responses using Sinon.

Running Pact against test environment in Rails API

I've just been playing around with Pact against my Rails API and noticed that the out-of-the-box Pact setup runs against the "development" environment by default.
How do I configure it to run against the "test" environment without having to specify it on the command line every time I run the task (RAILS_ENV=test)? I couldn't easily find how to do this in the docs.
Using following gems:
pact (1.10.0)
pact-mock_service (0.12.1)
pact-support (0.6.0)
pact_helper.rb:
require 'pact/provider/rspec'

Pact.service_provider 'Auslan API Service' do
  honours_pact_with 'Auslan Web App' do
    # This example points to a local file, however, on a real project with a continuous
    # integration box, you would use a [Pact Broker](https://github.com/bethesque/pact_broker) or publish your pacts as artifacts,
    # and point the pact_uri to the pact published by the last successful build.
    pact_uri './user-specs-user-api.json' # need to update this
  end
end

Pact.configure do |config|
  config.diff_formatter = :embedded
end
Pact.provider_states_for 'User-Specs' do
  provider_state 'there are users already added inside the database' do
    set_up do
      user1 = User.create(email: 'abcd#a.au', first_name: 'Jane', last_name: 'Doe', password: 'abcd#1234')
      # set the Auth token
      token = Knock::AuthToken.new(payload: { sub: user1.id }).token
      pacts = File.join(File.dirname(File.expand_path(__FILE__)), '../../user-specs-user-api.json')
      Dir.glob(pacts).each do |f|
        text = File.read(f)
        output_of_gsub = text.gsub(/\"Authorization\"\s*:\s*\".+\"/) { "\"Authorization\": \"Bearer #{token}\"" }
        File.open(f, "w") { |file| file.puts output_of_gsub }
      end
    end
  end
end
Thanks,
Mo
I haven't written any code to allow that to happen. The part of the code where the app gets loaded is here: https://github.com/pact-foundation/pact-ruby/blob/master/lib/pact/provider/configuration/service_provider_dsl.rb#L16
You can override the app in the configuration if you have a handle to it, but I can't remember how to do that with a Rails app off the top of my head. Maybe you could have a play around with the Rack builder and see if you can pass any environment variables to it. I'd be happy to accept a PR if you can work out how to do it.
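One thing that may be worth trying (an untested sketch, off the top of my head) is forcing the environment at the very top of your pact_helper.rb, before Pact loads the app from config.ru:
# pact_helper.rb (before any other requires)
# Untested sketch: set the environment before the Rails app gets loaded.
ENV['RAILS_ENV'] ||= 'test'
ENV['RACK_ENV']  ||= 'test'

require 'pact/provider/rspec'
# ... rest of the pact_helper as above ...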

Rails: How to listen to / pull from service or queue?

Most Rails applications work by waiting for requests coming from a client and then doing their magic.
But if I want to use a Rails application as part of a microservice architecture (for example) with some asynchronous communication (Service A sends an event into a Kafka or RabbitMQ queue and Service B - my Rails app - is supposed to listen to this queue), how can I tune/start the Rails app so that it immediately listens to a queue and is triggered by events from there? (Meaning the initial trigger does not come from a client, but from the app itself.)
Thanks for your advice!
I just set up RabbitMQ messaging within my application and will be implementing it for decoupled (multiple, distributed) applications in the next day or so. I found this article very helpful (and the RabbitMQ tutorials, too). All the code below is for RabbitMQ and assumes you have a RabbitMQ server up and running on your local machine.
Here's what I have so far - that's working for me:
#Gemfile
gem 'bunny'
gem 'sneakers'
I have a Publisher that sends to the queue:
# app/agents/messaging/publisher.rb
module Messaging
  class Publisher
    class << self
      def publish(args)
        connection = Bunny.new
        connection.start
        channel = connection.create_channel
        queue_name = "#{args.keys.first.to_s.pluralize}_queue"
        queue = channel.queue(queue_name, durable: true)
        channel.default_exchange.publish(args[args.keys.first].to_json, :routing_key => queue.name)
        puts "in #{self}.#{__method__}, [x] Sent #{args}!"
        connection.close
      end
    end
  end
end
Which I use like this:
Messaging::Publisher.publish(event: {... event details...})
Then I have my 'listener':
# app/agents/messaging/events_queue_receiver.rb
require_dependency "#{Rails.root.join('app','agents','messaging','events_agent')}"
module Messaging
class EventsQueueReceiver
include Sneakers::Worker
from_queue :events_queue, env: nil
def work(msg)
logger.info msg
response = Messaging::EventsAgent.distribute(JSON.parse(msg).with_indifferent_access)
ack! if response[:success]
end
end
end
The 'listener' sends the message to Messaging::EventsAgent.distribute, which is like this:
# app/agents/messaging/events_agent.rb
require_dependency "#{Rails.root.join('app','agents','fsm','state_assignment_agent')}"

module Messaging
  class EventsAgent
    EVENT_HANDLERS = {
      enroll_in_program: ["FSM::StateAssignmentAgent"]
    }

    class << self
      def publish(event)
        Messaging::Publisher.publish(event: event)
      end

      def distribute(event)
        puts "in #{self}.#{__method__}, message"
        if event[:handler]
          puts "in #{self}.#{__method__}, event[:handler]: #{event[:handler]}"
          event[:handler].constantize.handle_event(event)
        else
          event_name = event[:event_name].to_sym
          EVENT_HANDLERS[event_name].each do |handler|
            event[:handler] = handler
            publish(event)
          end
        end
        return {success: true}
      end
    end
  end
end
Following the instructions on Codetunes, I have:
# Rakefile
# Add your own tasks in files placed in lib/tasks ending in .rake,
# for example lib/tasks/capistrano.rake, and they will automatically be available to Rake.
require File.expand_path('../config/application', __FILE__)
require 'sneakers/tasks'
Rails.application.load_tasks
And:
# app/config/sneakers.rb
Sneakers.configure({})
Sneakers.logger.level = Logger::INFO # the default DEBUG is too noisy
I open two console windows. In the first, I say (to get my listener running):
$ WORKERS=Messaging::EventsQueueReceiver rake sneakers:run
... a bunch of start up info
2016-03-18T14:16:42Z p-5877 t-14d03e INFO: Heartbeat interval used (in seconds): 2
2016-03-18T14:16:42Z p-5899 t-14d03e INFO: Heartbeat interval used (in seconds): 2
2016-03-18T14:16:42Z p-5922 t-14d03e INFO: Heartbeat interval used (in seconds): 2
2016-03-18T14:16:42Z p-5944 t-14d03e INFO: Heartbeat interval used (in seconds): 2
In the second, I say:
$ rails s --sandbox
2.1.2 :001 > Messaging::Publisher.publish({:event=>{:event_name=>"enroll_in_program", :program_system_name=>"aha_chh", :person_id=>1}})
in Messaging::Publisher.publish, [x] Sent {:event=>{:event_name=>"enroll_in_program", :program_system_name=>"aha_chh", :person_id=>1}}!
=> :closed
Then, back in my first window, I see:
2016-03-18T14:17:44Z p-5877 t-19nfxy INFO: {"event_name":"enroll_in_program","program_system_name":"aha_chh","person_id":1}
in Messaging::EventsAgent.distribute, message
in Messaging::EventsAgent.distribute, event[:handler]: FSM::StateAssignmentAgent
And in my RabbitMQ server's management console, I can see the corresponding queue activity.
It's a pretty minimal setup and I'm sure I'll be learning a lot more in coming days.
Good luck!
I'm afraid that for RabbitMQ at least you will need a client. RabbitMQ implements the AMQP protocol, as opposed to the HTTP protocol used by web servers. As Sergio mentioned above, Rails is a web framework, so it doesn't have AMQP support built into it. You'll have to use an AMQP client such as Bunny in order to subscribe to a Rabbit queue from within a Rails app.
Let's say Service A is sending events to a Kafka queue. You can have a background process running alongside your Rails app which polls the Kafka queue and processes the queued messages. For the background process you could go with something like a cron job or Sidekiq, as sketched below.
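As a rough sketch of that idea (assuming the ruby-kafka gem and a broker on localhost; the topic, group and EventProcessor names are placeholders for whatever your app provides), the background process could boot the Rails environment and then block on a consumer loop:
# lib/kafka_listener.rb - run with: bundle exec rails runner lib/kafka_listener.rb
require 'kafka'

kafka = Kafka.new(['localhost:9092'], client_id: 'service-b')
consumer = kafka.consumer(group_id: 'service-b-events')
consumer.subscribe('service-a-events')

# Blocks forever, yielding each message as it arrives.
consumer.each_message do |message|
  event = JSON.parse(message.value)
  # hand the event to the app's own handling code, e.g. a service object or job
  EventProcessor.call(event)
end
You would keep this process running alongside the Rails server (e.g. under foreman or systemd) so it is always listening.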
Rails is a lot of things. Parts of it handle web requests. Other parts (ActiveRecord) don't care if you are a web request or a script or whatever. Rails itself does not even come with a production-worthy web server; you use other gems (e.g., thin for plain old web browsers, or wash_out for incoming SOAP requests) for that. Rails only gives you the infrastructure/middleware to combine all the pieces regarding servers.
Unless your queue can call out to your application via HTTP in some fashion, for example in the form of SOAP requests, you'll need something that listens to your queueing system, whatever that may be, and translates new "tickets" on your queue into controller actions in your Rails world.

Ember/Rails end-to-end testing error

I have an Ember CLI app with a Rails back-end API. I am trying to set up end-to-end testing by configuring the Ember app test suite to send requests to a copy of the Rails API. My tests are working, but I am getting the following strange error frequently:
{}
Expected: true
Result: false
at http://localhost:7357/assets/test-support.js:4519:13
at exports.default._emberTestingAdaptersAdapter.default.extend.exception (http://localhost:7357/assets/vendor.js:52144:7)
at onerrorDefault (http://localhost:7357/assets/vendor.js:42846:24)
at Object.exports.default.trigger (http://localhost:7357/assets/vendor.js:67064:11)
at Promise._onerror (http://localhost:7357/assets/vendor.js:68030:22)
at publishRejection (http://localhost:7357/assets/vendor.js:66337:15)
This seems to occur whenever a request is made to the server. An example test script which would recreate this is below. This is a simple test which checks that if a user clicks a 'login' button without entering any email/password information they are not logged in. The test passes, but additionally I get the above error before the test passes. I think this is something to do with connecting to the Rails server, but have no idea how to investigate or fix it - I'd be very grateful for any help.
Many thanks.
import Ember from 'ember';
import { module, test } from 'qunit';
import startApp from 'mercury-ember/tests/helpers/start-app';

module('Acceptance | login test', {
  beforeEach: function() {
    this.application = startApp();
  },
  afterEach: function() {
    Ember.run(this.application, 'destroy');
  }
});

test('Initial Login Test', function(assert) {
  visit('/');
  andThen(function() {
    // Leaving identification and password fields blank
    click(".btn.login-submit");
    andThen(function() {
      assert.equal(currentSession().get('user_email'), null, "User fails to login when identification and password fields left blank");
    });
  });
});
You can check in the Network panel of Chrome or Firefox developer tools that the request is being made. At least with ember-qunit you can do this by getting ember-cli to run the tests within the browser rather than with Phantom.js/command-line.
That would help you figure out if it's hitting the Rails server at all (the URL could be incorrect or using the wrong port number?)
You may also want to see if there is code that needs to be torn down. Remember that in a test environment the same browser instance is used so all objects need to be torn down; timeouts/intervals need to be stopped; events need to be unbound, etc.
We hit that issue a few times with a utility that sent AJAX requests every 30 seconds: in production there was no error, but in testing it was a problem because it bound itself to the window (outside of the test iframe) and kept making requests even after the tests were torn down.

Ruby on Rails -- Faye Framework -- private_pub

I'm using private_pub to implement a one-to-one chat-like application.
Here is my story: as a user, I would like to receive a message when my partner leaves the chat – closes the window, etc.
Looking through the Faye Monitoring docs here is my attempt at binding on unsubscribe:
# Run with: rackup private_pub.ru -s thin -E production
require "bundler/setup"
require "yaml"
require "faye"
require "private_pub"
require "active_support/core_ext"
Faye::WebSocket.load_adapter('thin')
PrivatePub.load_config(File.expand_path("../config/private_pub.yml", __FILE__), ENV["RAILS_ENV"] || "development")
wts_pubsub = PrivatePub.faye_app
wts_pubsub.bind(:subscribe) do |client_id, channel|
  puts "[#{Time.now}] Client #{client_id} joined #{channel}"
end

wts_pubsub.bind(:unsubscribe) do |client_id, channel|
  puts "[#{Time.now}] Client #{client_id} disconnected from #{channel}"
  PrivatePub.publish_to channel, { marius_says: 'quitter' }
end
run wts_pubsub
but I keep getting timeouts: [ERROR] [Faye::RackAdapter] Timeout::Error
Prying into PrivatePub#publish_to, data holds what I expect both when I'm publishing from the Rails app and from the private_pub app, but the private_pub app keeps hanging.
How can I get publishing from private_pub to work?
Your second bind should be to the disconnect event instead of unsubscribe.
Also, remember to fire off a Faye/PrivatePub disconnect event in your client-side code when a browser window is closed.
Note: you might need to do this for all open sessions with the Faye server, or just on a channel-by-channel basis, depending on your chat application's design.
In plain JS this might be something like:
window.onbeforeunload = functionThatTriggersFayeDisconnectEvent;
Sorry for not using proper markup, posting from mobile.
After hours of research and numerous attempts, this is the solution I found:
Replace PrivatePub.publish_to channel, { marius_says: 'quitter' } with:
system "curl http://localhost:9292/faye -d 'message={\"channel\":\"#{channel}\", \"data\":{\"channel\":\"#{channel}\",\"data\":{\"message\":{\"content\":\"#{client_id} disconnected from this channel.\"}}}, \"ext\":{\"private_pub_token\":\"ADD_APPROPRIATE_SECRET_HERE\"}}' &"
This will trigger an asynchronous request (curl + &) which will bypass the problem. Not the best fix, but it works.
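If you prefer to stay in Ruby rather than shelling out, the same fire-and-forget POST could be sketched with Net::HTTP in a background thread (same assumptions as the curl call: the Faye endpoint on localhost:9292 and your private_pub_token secret; channel and client_id come from the unsubscribe bind block):
# equivalent of the curl call above, run outside the Faye event loop
require 'net/http'
require 'json'

Thread.new do
  payload = {
    channel: channel,
    data: { channel: channel, data: { message: { content: "#{client_id} disconnected from this channel." } } },
    ext: { private_pub_token: 'ADD_APPROPRIATE_SECRET_HERE' }
  }
  Net::HTTP.post_form(URI('http://localhost:9292/faye'), 'message' => payload.to_json)
end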
