RSpec: run an outside Rails server - ruby-on-rails

This question is about starting a Rails server for an external project from an RSpec environment.
There are two projects.
The first project acts as the Admin Back Office; it's the central application where users interact with web pages. I call it BackOffice.
The second project is a JSON API server which will receive commands from the Admin Back Office through JSON requests. I call it ApiServer.
I am trying to test the API interaction between these two Rails projects, and I would like to set up RSpec so I can write and maintain my spec files in the BackOffice project. Those specs would start an ApiServer Rails server and then play around with it to perform the tests.
My issue is about starting the ApiServer Rails server. After looking at the Rails app initialization files, I assumed I had to add a require of "config/environment".
But when I insert into BackOffice/spec/spec_helper.rb
require File.expand_path('../../../ApiServer/config/environment', __FILE__)
I get the error
`initialize!': Application has been already initialized. (RuntimeError)
# Backtrace to the file:
# ApiServer/config/environment.rb
# Line:
# Rails.application.initialize!
I also tried to simply call the following in backticks
`cd /api/path; bundle exec rails s -p 3002`
but got the same kind of error
Then I took inspiration from the Capybara source code and required "ApiServer/application". I am then able to create an ApiServer.new object, but as soon as I call initialize! on it, I get the same message.
Any help is greatly appreciated. Cheers

Actually, the second app is nothing more than an external service, which is better to stub in the tests.
There is a nice article from thoughtbot about using the VCR gem to mock external web services:
https://robots.thoughtbot.com/how-to-stub-external-services-in-tests
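A minimal sketch of that approach, assuming the vcr and webmock gems are added to BackOffice's test group (the cassette directory, the :vcr tag and the ApiClient class are illustrative, not part of the original setup):
# spec/support/vcr.rb
require "vcr"

VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"
  c.hook_into :webmock
  c.configure_rspec_metadata!   # lets examples opt in with the :vcr tag
end

# In a spec: the first run records the real ApiServer response into a cassette,
# later runs replay it, so no second server has to be running.
describe "pushing a command to the ApiServer", :vcr do
  it "gets a JSON acknowledgement" do
    response = ApiClient.push_command(:sync)   # hypothetical BackOffice client
    expect(response["status"]).to eq("ok")
  end
end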

Obligatory "don't do that unless you really need to".
However, since it seems you know what you need:
Short answer:
You need to isolate both applications at the system-environment level and launch the second one from there using a system call.
Long answer:
What you're trying to do is run two Rails applications in the same environment. Since they are both Rails applications, they share a lot of common names, and running them together ends in the name clash you're experiencing. Your hunch to try simple backticks was a good one; unfortunately you invoked bundler inside the already-loaded environment, which also clashes.
What you have to do in order to make it work is to properly isolate the applications (in terms of code, not in terms of the network, i.e. the communication layer) and then run a launcher from RSpec. There are multiple ways; you could:
Use Ruby process control (check this graph; you could try to combine it with a system-level exec)
Daemonize from Operating System level (init.d etc.)
Encapsulate in VM or one of the wrappers (Virtualbox, Vagrant, etc.)
Go crazy and put code on separate machine and control it remotely (Puppet, Ansible, etc.)
Once there, you can simply run the launcher (e.g. a daemon init script, or spawning a new process in the isolated environment) from RSpec and that's it.
Choosing which way to go is highly dependent on your environment.
Do you run OSX, Linux or Windows? Are you using Docker? Do you manage Ruby libraries through something like RVM? Things like that.
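If you go with the Ruby process-control route, a minimal sketch could look like the following (assuming the ApiServer checkout sits next to BackOffice and should listen on port 3002; the paths, port and readiness check are illustrative):
# BackOffice/spec/spec_helper.rb
require "net/http"

API_SERVER_DIR = File.expand_path("../../../ApiServer", __FILE__)

RSpec.configure do |config|
  config.before(:suite) do
    # Spawn the second app in its own process with a clean Bundler environment,
    # so the two Gemfiles and Rails constants never meet.
    $api_server_pid = Bundler.with_unbundled_env do   # with_clean_env on older Bundler
      Process.spawn("bundle", "exec", "rails", "server", "-p", "3002",
                    chdir: API_SERVER_DIR)
    end

    # Crude readiness check: poll until the server answers.
    deadline = Time.now + 30
    begin
      Net::HTTP.get_response(URI("http://localhost:3002/"))
    rescue Errno::ECONNREFUSED
      raise "ApiServer did not boot in time" if Time.now > deadline
      sleep 0.5
      retry
    end
  end

  config.after(:suite) do
    Process.kill("TERM", $api_server_pid)
    Process.wait($api_server_pid)
  end
end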

Generally it's a bad idea to require launching another service/application to get your unit tests to pass. This type of interaction is usually tested by mocking or VCR-ing responses, or by creating environment tests that run against deployed servers. Launching another server is outside the scope of RSpec and, as you've discovered, will generally cause a lot of headaches to set up and maintain.
However, if you're going to keep these Rails projects tightly coupled and you want them to share resources, I'd suggest investigating Rails engines. Doing this requires a substantial amount of work, but the benefits can be quite high, as the code will share a repository and each app will have access to the other's capabilities while maintaining application isolation.
Engines effectively create a Rails application within another Rails application. Each application has its own namespace and a few isolating guards in place to prevent cross-app contamination. If you have many engines, it becomes ideal to have a shell Rails application with minimal capabilities serving each engine on a different route/namespace.
First you need to create housing for the new API engine.
$ rails plugin new apiserver --mountable
This will provide you with lib/apiserver/engine.rb as well as all the other scaffolding you'll need to run your API as an engine. You'll also notice that config/routes.rb now has a route for your engine; you can copy your existing routes into it to provide a route path for your engine. All of your existing models will need to be moved into the namespace, and you'll need to migrate any associated tables to the new naming convention. You'll also have some custom changes depending on your application and what you need to copy over to the engine, but the Rails guide walks you through these changes (I won't enumerate all of them here).
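For example, the host mounts the engine in its own config/routes.rb, and the engine's routes then live under that mount point (a sketch, assuming the engine module generated above is named Apiserver):
# config/routes.rb of the host application
Rails.application.routes.draw do
  mount Apiserver::Engine, at: "/api"
  # ... the host's own routes ...
end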
It took a coworker about a week of work to get a complicated engine copied into another complicated Rails app while development on both apps was ongoing, and while preserving version control history. A simpler app -- like an API-only service -- would, I imagine, be quicker to establish.
What this gives you is another namespace scope at the application root. You can change this layout around as you add more engines and shared code, to match whatever directory structure makes more sense.
app/
  models/
  ...
apiserver/
  app/
  ...
And once you've moved your code into the engine, you can test against your engine routers:
require "rails_helper"
describe APIServer::UsersController do
routes { APIServer::Engine.routes }
it "routes to the list of all users" do
expect(:get => users_path).
to route_to(:controller => "apiserver/users", :action => "index")
end
end
You should be able to mix and match routes from both services and get cross-application testing done without launching a separate Rails app and without requiring an integration environment for your specs to pass.
TaskRabbit has a great blog post on how to enginize a Rails application as a reference. They dive into the to-dos and not-to-dos of enginizing and go into more depth than can easily be transcribed into an SO post. I'd suggest following their procedure for engine decision making, though it's certainly not required to successfully enginize your API server.

You can stub requests with WebMock like:
stub_request(:get, %r{^#{ENV.fetch("BASE_URL")}/assets/email-.+\.css$})

Related

Automatic Reloading of Sinatra App mounted inside Rails App

I've run into what I hope is an easy-ish fix for someone more experienced than me when adding a Sinatra app to an existing Rails app.
I currently have a large Rails monolith that I am planning to break apart into an SPA backed by a JSON API. Since I will still need to support the monolith until the API is completed, I would like to mount the API (written in Sinatra) inside the existing Rails app as I port functionality over, with the goal of removing the Rails app itself in a few months. The reason I've mounted the Sinatra app inside Rails instead of setting it up as a separate service is that I wanted easy code sharing between the two, as I intend to continue using ActiveRecord as my ORM in Sinatra once the migration is complete.
I've mounted the Sinatra mockup inside the Rails app without any issues using the Rails routes.rb file as:
Rails.application.routes.draw do
  mount API::Core, at: '/api'
  ...
end
I'm not doing any work with config.ru as mounting the API inside the routes.rb file seemed to suit my needs. If I need to put in some more legwork to get this running properly that is not an issue.
The entry point for the Sinatra app is also fairly simple, with just a couple of controllers loaded in to segment the routing:
require 'sinatra/base'
require 'sinatra/json'

require_relative 'controllers/first_controller'
require_relative 'controllers/second_controller'

module API
  class Core < Sinatra::Base
    register Sinatra::FirstControllerApi
    register Sinatra::SecondControllerApi
  end
end
The problem I'm running into is one I expected with Sinatra but haven't been able to flex my google-fu enough to find a solution. Rails automatically reloads code on each change/request as expected but Sinatra does not. Every time I change controller code in the Sinatra API I need to restart the entire Rails server to serve the new content. Since my dev environment runs in docker containers this can take a while to start up each time and is becoming cumbersome.
Is there a 'canonical' solution to the problem of automagically reloading the Sinatra app mounted inside the Rails app or am I just over-complicating the problem? I know there are some gems for Sinatra that apply to this space but I haven't seen any info in how to get them working in this 'odd' edge case.
Please let me know if you need any more info, hopefully I've provided enough for someone to comprehend my issues.
Just like a Sinatra app on its own, a Sinatra app mounted inside of Rails will not automatically reload as you make changes. Typically in Sinatra people use external tools to solve this problem, so they might set up something like Shotgun to watch the directory and restart the server any time a change has happened.
But, like most things in Rails, there is a way to hook into its functionality for your own programming benefit.
ActiveSupport::FileUpdateChecker
Rails has a handy ActiveSupport::FileUpdateChecker that can watch a file or a list of files. When they change, it calls back to a block. You can use this to reload the Sinatra app -- or anything else for that matter.
In your case you might want to add the following to config/environments/development.rb in the config block for the application:
Rails.application.configure do
  # ...
  sinatra_reloader = ActiveSupport::FileUpdateChecker.new(Dir["path/to/sinatra/app/**"]) do
    Rails.application.reload_routes!
  end

  config.to_prepare do
    sinatra_reloader.execute_if_updated
  end
end
Robert Mosolgo has a good write-up on his blog: Watching Files During Rails Development. His approach is a little more thorough, but more complicated.

Rails engine migrations, schema dump, and dependencies.

I am following this tutorial on creating Rails Engines and I'm curious about whether or not I need to list all of my host's dependencies (I'm creating a Rails engine called admin inside a larger Rails application) inside the Engine's gem file (apparently the engine will be accessed via a gem). Why do I need to do this?
Also why does the engine need all of the host's migrations? Or does the engine just need the migrations relevant to the files that I'm moving over to the engine?
An engine should be completely independent from its host. It's isolated in code and data, and should be able to drop into any host and work the same way. That means the engine doesn't have any special knowledge of the inner workings of its host, and the host has no special knowledge of the engine's internals.
If your engine depends on a model called Admin, then it should include a migration template for creating an admins table and 100% of the code needed to interact with Admins. The migration template will be copied into the host's db/migrate folder and run alongside its other migrations. Don't add migrations to the engine itself, because it'll have no way of running them once it's inside a host. Remember: the engine cannot know anything internal to the host, including its database schema.
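Rails generates a rake task for that copy step, named after the engine (assuming an engine namespace of admin here; the task name follows the engine's name), which you run from the host app:
$ rake admin:install:migrations
$ rake db:migrate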
I strongly recommend you create and maintain this separation. It'll save you huge headaches in the future.
Within the engine, you need to include all dependencies and code for the engine alone. Do not add dependencies or code required for the host, because the engine isn't allowed to know about them.
This is harder than it sounds, but there are great examples of engines you can follow. Check out RailsAdmin and Devise for high-quality samples of code organization, data management, and testing.
Testing is important. In order for your engine to actually display pages or interact, you may need to include dependencies like Rails. You can do that, but make sure you add them as development dependencies to your Gemfile. See the above projects for examples of how to do this.
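A sketch of how that can look in the engine's Gemfile (gem names and versions are illustrative; the engine's own runtime dependencies still come from its gemspec):
# Gemfile of the engine
source "https://rubygems.org"

gemspec   # the engine's own runtime dependencies

group :development, :test do
  gem "rails"        # only needed to boot the dummy app and render pages in tests
  gem "rspec-rails"
  gem "sqlite3"      # database for the test/dummy app
end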
I recommend you build your engine outside your host project, because it'll force you to write tests that don't rely on the host app. If your engine is testable and works well on its own, it'll work great when you drop it into your host too.

How to tell RSpec to restart Rails before an example is run?

Is it possible to tell RSpec to restart Rails before an example is run? I'm building an engine that hooks into the Rails initialization process, and users can make configuration changes, in an initializer, that impact how Rails and the engine are configured. I want to be able to simulate those configuration changes, restart Rails and test the result.
I haven't pulled this off myself yet, but as a best practice I think your engine tests should be part of the engine and should have minimal dependencies.
Some approaches I've seen, which you should try to combine:
Mock a minimal parent Rails app to test your engine against.
Write multiple dummy apps to test with.
Instead of loading the entire Rails application, split spec_helper and rails_helper into smaller parts, which also shortens setup time.
Write custom rake tasks to switch the environment before spawning a new test thread.
Override the relevant configuration values at runtime in your tests (plus: use dependency injection!).
If your initializer is complex enough, extract it into a testable helper and wire it up in your test initializers.
Also, there seems to be a gem for that: Combustion.
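For reference, a minimal Combustion setup looks roughly like this (a sketch based on the gem's documented usage; which railties you initialize depends on what your engine actually needs):
# spec/spec_helper.rb of the engine
require "combustion"

# Boots a tiny Rails app from spec/internal/ with only the listed frameworks.
Combustion.initialize! :active_record, :action_controller

require "rspec/rails"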

How to test a Rails engine that depends on its hosting application?

I have a Rails application, and I'm trying to split the code into several engines.
The main application holds one main controller, Api::ApplicationController, and all the controllers in the engines inherit from it, for example:
Api::Products::ProductsController < Api::ApplicationController
I use RSpec for testing.
When I run the application everything works just fine, but when I try to run the engine's RSpec specs, I get an error:
uninitialized constant Api::ApplicationController
As you would expect, the engine cannot work separately from the application, so I tried to stub Api::ApplicationController inside the spec (products_controller_spec.rb), but it fails before it even starts running the spec.
I thought maybe I should implement it in a different way and inject the ApplicationController functionality somewhere instead of inheriting from it, but I don't know where.
Another thing I considered is injecting the engine's specs into the hosting application and running them from there, but that doesn't seem to work, and it is not possible to debug the specs that way.
How can I test it?
Rails engines use small dummy applications for testing purposes. You need to create a minimal application that has the features your engine needs, and the engine will use this to run its specs.
See here:
http://edgeguides.rubyonrails.org/engines.html#testing-an-engine
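In practice, the engine's spec setup boots that dummy app instead of the real host (a sketch, assuming the dummy app generated by rails plugin new lives under spec/dummy):
# The engine's spec/rails_helper.rb
ENV["RAILS_ENV"] ||= "test"
require File.expand_path("../dummy/config/environment", __FILE__)
require "rspec/rails"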
A better solution for this case, I think, is to extract the application code that all the engines rely on into modules in a shared gem that each engine can include.
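A sketch of what that can look like (all names here are made up for illustration): the shared gem exposes a concern, and the engine's controllers include it instead of inheriting from a host controller:
# shared gem: lib/api_common/controller_behaviour.rb
require "active_support/concern"

module ApiCommon
  module ControllerBehaviour
    extend ActiveSupport::Concern

    included do
      before_action :authenticate_api_client
    end

    private

    def authenticate_api_client
      head :unauthorized unless request.headers["X-Api-Key"].present?
    end
  end
end

# engine: app/controllers/api/products/products_controller.rb
class Api::Products::ProductsController < ActionController::Base
  include ApiCommon::ControllerBehaviour
end
The engine's specs then only need the shared gem, not the host application.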

Testing one Rails app within another

On edx.org there is a Software as a Service course that grades all submitted assignments.
You upload a zip-file with a Rails project and they run a bunch of integration and unit tests.
How do they do it?
My thought is that they mount the uploaded application as a Rails engine. Is it possible to test one Rails app within another? I'd like to create a similar service, but I don't know what to start with.
I would imagine you could do this in a similar way to how Jenkins runs continuous testing on a project: each project uploaded to your site gets expanded into a workspace, and then you shell out and execute commands. But that allows for a lot of variable configurations and complexity that probably doesn't make sense in the scope you've proposed. It also doesn't protect your underlying OS from the projects you're testing.
You could also probably use an application container like Docker and manage each uploaded application that way, which would keep everything self-contained and isolate the application from the OS. It also puts the onus on the developer to package and manage dependencies correctly. I'm guessing they are using Docker or something similar; here's an example: Using Docker for MOOCs.
At the point where you want to capture the test results and report them back, I'd think they use something similar to the JUnit formatter for RSpec, or they just parse the RSpec output directly.
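If you go the formatter route, the rspec_junit_formatter gem (an assumption about which formatter is meant; any machine-readable formatter works) can be invoked like this inside the uploaded project:
$ bundle exec rspec --format RspecJunitFormatter --out tmp/rspec.xml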
