Rails engine migrations, schema dump, and dependencies. - rails-engines

I am following this tutorial on creating Rails engines, and I'm curious whether I need to list all of my host's dependencies inside the engine's gemspec (I'm creating a Rails engine called admin inside a larger Rails application, and apparently the engine will be accessed via a gem). Why would I need to do this?
Also, why does the engine need all of the host's migrations? Or does it just need the migrations relevant to the files I'm moving over to the engine?

An engine should be completely independent from its host. It's isolated in code and data, and should be able to drop into any host and work the same way. That means the engine doesn't have any special knowledge of the inner workings of its host, and the host has no special knowledge of the engine's internals.
If your engine depends on a model called Admin, then it should include a migration template for creating an admins table and 100% of the code needed to interact with Admins. The migration template will be copied into the host's db/migrate folder and run alongside its other migrations. Don't add migrations to the engine itself, because it'll have no way of running them once it's inside a host. Remember: the engine cannot know anything internal to the host, including its database schema.
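To make that concrete, here is a minimal sketch of a generator that copies such a migration template into the host (the Admin namespace, file names, and paths are illustrative assumptions, not something from the tutorial); the host would run rails generate admin:install and then rake db:migrate:

# lib/generators/admin/install/install_generator.rb
require "rails/generators"
require "rails/generators/migration"

module Admin
  module Generators
    class InstallGenerator < Rails::Generators::Base
      include Rails::Generators::Migration

      source_root File.expand_path("../templates", __FILE__)

      # Required by Rails::Generators::Migration; a timestamp keeps the
      # copied file ordered alongside the host's own migrations.
      def self.next_migration_number(dirname)
        Time.now.utc.strftime("%Y%m%d%H%M%S")
      end

      # Copies templates/create_admins.rb into the host's db/migrate.
      def copy_migration
        migration_template "create_admins.rb", "db/migrate/create_admins.rb"
      end
    end
  end
end

The engine ships the template; the host owns the resulting migration, which is a common pattern among engines that need their own tables.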
I strongly recommend you create and maintain this separation. It'll save you huge headaches in the future.
Within the engine, you need to include all dependencies and code for the engine alone. Do not add dependencies or code required for the host, because the engine isn't allowed to know about them.
This is harder than it sounds, but there are great examples of engines you can follow. Check out RailsAdmin and Devise for high-quality samples of code organization, data management, and testing.
Testing is important. In order for your engine to actually display pages or interact with data in its tests, you may need to include dependencies like Rails. You can do that, but make sure you add them as development dependencies (in the engine's gemspec or Gemfile). See the above projects for examples of how to do this.
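For example, a minimal sketch of how an engine might declare those dependencies in its gemspec (the name, versions, and choice of test gems are illustrative; the engine's Gemfile would typically just reference the gemspec):

# admin.gemspec
Gem::Specification.new do |s|
  s.name    = "admin"
  s.version = "0.1.0"
  s.summary = "Admin engine"
  s.authors = ["Your Name"]
  s.files   = Dir["{app,config,db,lib}/**/*"]

  # What the engine itself needs at runtime.
  s.add_dependency "rails", ">= 3.1"

  # Only needed to run the engine's own test suite.
  s.add_development_dependency "rspec-rails"
  s.add_development_dependency "sqlite3"
end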
I recommend you build your engine outside your host project, because it'll force you to write tests that don't rely on the host app. If your engine is testable and works well on its own, it'll work great when you drop it into your host too.

Related

RSpec: run an outside Rails server

This question is about starting a Rails server for an external project from an RSpec environment.
There are two projects.
The first project acts as the admin back office; it's the central application where users interact with web pages. I call it BackOffice.
The second project is a JSON API server which will receive commands from the BackOffice through JSON requests. I call it ApiServer.
I am trying to test the API interaction between these two Rails projects, and I would like to set up RSpec so I can write and maintain my spec files in the BackOffice project. Those specs would start an ApiServer Rails server and then play around with it to perform the tests.
My issue is with starting the ApiServer Rails server. After looking at the Rails app initialization files, I assumed I had to add a require of "config/environment".
But when I insert the following into BackOffice/spec/spec_helper.rb
require File.expand_path('../../../ApiServer/config/environment', __FILE__)
I get the error
`initialize!': Application has been already initialized. (RuntimeError)
# Backtrace to the file:
# ApiServer/config/environment.rb
# Line:
# Rails.application.initialize!
I also tried to simply call the following in backticks
`cd /api/path; bundle exec rails s -p 3002`
but got the same kind of error
Then I got inspiration from the Capybara source code and required "ApiServer/application"; I am then able to create an ApiServer.new object, but as soon as I call initialize! on it I get the same message.
Any help is greatly appreciated. Cheers
Actually, the second app is nothing more than an external service, which is better to stub in tests.
There is a nice article from thoughtbot about using the VCR gem to mock external web services:
https://robots.thoughtbot.com/how-to-stub-external-services-in-tests
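For reference, a minimal VCR setup sketch in that spirit (the cassette directory and the :vcr tag convention are illustrative):

# spec/support/vcr.rb
require "vcr"

VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"  # where recorded responses are stored
  c.hook_into :webmock                       # intercept HTTP requests via WebMock
  c.configure_rspec_metadata!                # enables tagging examples with :vcr
  c.ignore_localhost = true
end

# Usage: the first run of an example tagged :vcr records the real ApiServer
# response into a cassette; subsequent runs replay it without a live server.
# it "creates a user through the API", :vcr do ... end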
Obligatory "don't do that unless you really need to".
However, since it seems you know what you need:
Short answer:
You need to isolate both applications at the system-environment level and launch the second one from there using system-call syntax.
Long answer:
What you're trying to do is run two Rails applications in the same process environment. Since they are both Rails applications, they share a lot of common names, so loading them together ends in the name clash you're experiencing. Your hunch to try simple backticks was a good one; unfortunately you ran it through Bundler inside the already-loaded environment, which also clashes.
What you have to do to make it work is properly isolate the application (in terms of code, not in terms of the network/communication layer) and then run a launcher from RSpec. There are multiple ways; you could:
Use Ruby process control (Check this graph, you could try to combine it with system level exec)
Daemonize from Operating System level (init.d etc.)
Encapsulate in VM or one of the wrappers (Virtualbox, Vagrant, etc.)
Go crazy and put code on separate machine and control it remotely (Puppet, Ansible, etc.)
Once there, you can simply run the launcher (e.g. a daemon init script, or spawn a new process in the isolated environment) from RSpec, and that's it.
Choosing which way to go with is highly dependent on your environment.
Do you run OSX, Linux, Windows? Are you using Docker? Do you manage Ruby libraries through things like RVM? Things like this.
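As a rough sketch of the spawn-a-new-process option (the path, port, and sleep are illustrative assumptions; a real setup should poll the port until the server answers):

# BackOffice/spec/support/api_server.rb
api_pid = nil

RSpec.configure do |config|
  config.before(:suite) do
    # Escape BackOffice's bundle so ApiServer boots with its own Gemfile.
    Bundler.with_clean_env do
      api_pid = Process.spawn(
        { "RAILS_ENV" => "test" },
        "bundle exec rails server -p 3002",
        :chdir => "/path/to/ApiServer"   # illustrative path
      )
    end
    sleep 5   # naive wait for boot
  end

  config.after(:suite) do
    if api_pid
      Process.kill("TERM", api_pid)
      Process.wait(api_pid)
    end
  end
end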
Generally it's a bad idea to require launching another service/application to get your unit tests to pass. This type of interaction is usually tested by mocking or VCR-ing responses, or by creating environment tests that run against deployed servers. Launching another server is outside the scope of RSpec and generally, as you've discovered, will cause a lot of headaches to set up and maintain.
However, if you're going to keep these Rails projects tightly coupled and you want them to share resources, I'd suggest investigating Rails engines. Doing this will require a substantial amount of work, but the benefits can be quite high, as the code will share a repository and the apps will have access to each other's capabilities while maintaining application isolation.
Engines effectively create a Rails application within another Rails application. Each application has its own namespace and a few isolating guards in place to prevent cross-app contamination. If you have many engines, it becomes ideal to have a shell Rails application with minimal capabilities serving each engine on a different route/namespace.
First you need to create housing for the new api engine.
$ rails plugin new apiserver --mountable
This will provide you with lib/apiserver/engine.rb as well as all the other scaffolding you'll need to run your API as an engine. You'll also notice that config/routes.rb now has a route for your engine. You can copy your existing routes into this to provide a route path for your engine. All of your existing models will need to be moved into the namespace, and you'll need to migrate any associated tables to the new naming convention. You'll also have some custom changes depending on your application and what you need to copy over to the engine; however, the Rails guide walks you through these changes (I won't enumerate all of them here).
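For reference, a sketch of how the two routes files end up related (the /api mount point and the users resource are illustrative; the module name is the one the apiserver generator above creates):

# Host application's config/routes.rb
Rails.application.routes.draw do
  mount Apiserver::Engine => "/api"
end

# Engine's config/routes.rb
Apiserver::Engine.routes.draw do
  resources :users, :only => [:index]
end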
It took a coworker about a week of work to get a complicated engine copied into another complicated Rails server while development on both apps was ongoing and while preserving version control history. A simpler app -- like an API-only service -- would, I imagine, be quicker to establish.
What this gives you is another namespace scope at the application root. You can change this configuration around as you add more engines and shared code to match various other directory structures that make more sense.
app
  models
  ...
apiserver
  app
  ...
And once you've moved your code into the engine, you can test against your engine's routes:
require "rails_helper"
describe APIServer::UsersController do
routes { APIServer::Engine.routes }
it "routes to the list of all users" do
expect(:get => users_path).
to route_to(:controller => "apiserver/users", :action => "index")
end
end
You should be able to mix and match routes from both services and get cross-application testing done without launching a separate Rails app and without requiring an integration environment for your specs to pass.
TaskRabbit has a great blog post on how to enginize a Rails application, which is worth using as a reference. They dive into the to-dos and not-to-dos of enginizing and go into more depth than can easily be transcribed into an SO post. I'd suggest following their procedure for engine decision-making, though it's certainly not required to successfully enginize your API server.
You can stub requests (with WebMock) like:
stub_request(:get, %r{^#{ENV.fetch("BASE_URL")}/assets/email-.+\.css$})

Nesting Ruby gems inside a Rails project

How do I create a gem project nested inside my current Rails project?
I've got a Rails project with several parts that could easily be gems. I would like to extract these parts into gems but not leave the current Rails project. Creating new source control repos for the gems adds additional complexity that the project or organization is not ready or able to handle. These complexities will be overcome at some point, and I would like to be ready.
So far I can only think of these items.
Relocate code to a single directory root. I'm guessing this would be in the vendor path
Create a <something>.gemspec
Link to the gem in the Gemfile of the Rails app
gem 'my_lib_code', path: 'vendor/my_lib_code'
What else do I need to do? I'm sure I'm missing something important.
If this were a C project, I would create another shared library that the make process spits out. Or if this were a C# project, I would make a .dll. For Java I would...
I'm sure Ruby can do the same as all the other languages: something that is a halfway step between a normal fully extracted gem and just some code sitting in my lib path.
This is a perfectly fine approach for a component-based architecture.
You have a single repository, a single test suite, and a single deployment process, while at the same time you are "forced" to think of clean interfaces and separation of concerns.
Of course if you are planning on sharing this functionality with other projects, an externally hosted (but not necessarily public) Gem would serve better.
Implementation-wise, you can get some nifty ideas from Stephan Hagemann's talk at this year's RailsConf: "Get started with Component-based Rails applications!"
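As a concrete sketch of the nested layout from the question (names are illustrative):

vendor/my_lib_code/
  my_lib_code.gemspec
  lib/
    my_lib_code.rb

# vendor/my_lib_code/my_lib_code.gemspec
Gem::Specification.new do |s|
  s.name    = "my_lib_code"
  s.version = "0.1.0"
  s.summary = "Code extracted from the Rails app"
  s.authors = ["Your Name"]
  s.files   = Dir["lib/**/*.rb"]
end

With gem 'my_lib_code', path: 'vendor/my_lib_code' in the Rails app's Gemfile, Bundler loads it like any other gem, and moving the code to its own repository later only changes that one line.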

Rails Engine migrations to a different folder

Reference question
Our group works on a common application, but we also individually work on engines. Is there a configuration in Rails 3 that allows us to put engine-related migration files in a different folder?
The goal is to track our migrations in Git, but also separate migrations related to the common app from the ones for Engines.
You can actually name migrations whatever you want (see the docs). I would suggest the following convention.
[DATE]_[ENGINE_NAME|CORE]_[DESCRIPTION].rb
There you go!
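If you really want a physically separate folder rather than a naming convention, Rails (3.1 and later) can also pick up migrations from additional directories. This is an alternative to the answer above, and the folder and application names here are illustrative assumptions:

# config/application.rb
module CommonApp   # illustrative application name
  class Application < Rails::Application
    # rake db:migrate now also runs migrations from this folder, so
    # engine-related migrations can be tracked separately in Git.
    config.paths["db/migrate"] << "db/migrate_engines"
  end
end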

How to externalize Rails model (api/gem/plugin)

Currently I am working on a RoR application (2.3.14 with ActiveRecord); let's call it A.
Now I have started another project, B (a remote testing app using Capybara; it looks something like this: https://github.com/searls/remote-capybara-cucumber-example).
But now I need access to application A's model from within B, for test data setup (and possibly test assertions). I would therefore like to use the existing model classes (and some additional libraries like factory_girl if necessary).
I certainly don't want to wrap my project B in a Rails app and copy the model classes. So is there a way to organize A so that B can access the model and create/update/destroy entities?
Are there any keywords for further research? I tried several Google searches containing "rails model as a gem", "rails model as a plugin", "externalize rails model", etc., but nothing useful turned up (mostly the documentation of ActiveRecord).
Rails 2.x does make it very hard to share the model layer between two applications.
If you don't care about maintaining migrations twice, you can put the models into a gem and then require it in your apps.
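A sketch of what the consuming side might look like in project B (the gem name and locations are illustrative):

# B's Gemfile
gem "shared_models", :path => "../shared_models"
# or, once it lives in its own repository:
# gem "shared_models", :git => "https://example.com/yourorg/shared_models.git"

# B's test setup, after Bundler is loaded
require "shared_models"   # loads A's ActiveRecord models packaged in the gem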
Another way is to symlink the db and app/models directories from both applications to a shared folder. This works quite well though you have to be careful because rake tasks and generators now affect both applications.
Rails 3.1 ships with an improved implementation of Rails engines. Engines allow you to isolate parts of a Rails application and package them up as a gem.
You could try using an alias (symbolic link) to A's app/models directory in the B project.
On Mac/Linux:
ln -s /volumes/code/project-a/app/models/ /volumes/code/project-b/models

How to manage differences using same code base for multiple Rails 2.3 websites

We have a website using Rails 2.3.x, bundler, nginx, passenger and git, and would now like to use the same code to deploy a very similar site. Differences between the two will include:
Locale
Databases
Validations in some cases
Views in some cases
What is the best way to manage these differences while using the same code base?
Some ideas we've had:
Create new Rails environments, such as production-a and production-b and handle differences in the appropriate environment files. One potential problem is that many gems and plugins are hardcoded to look for production or development environments.
Use Passenger to set a global variable, or use the domain per request to determine which context to use. The problem with this is rake tasks, cron jobs, etc. that would not have access to this state.
Maintain two versions of the config directory. This would be inconvenient, maintaining two versions of all the config files, many of which would be identical. Also, I'm not sure how to leverage Git to do this correctly.
Any ideas, tips, or examples would be greatly appreciated! Question #6753275 is related but seems incomplete.
One solution I have used in a Rails 2.3.x project was to convert the entire site to an engine. That is actually pretty easy: create a folder under vendor/plugins/ and move all the app stuff there. You can see an explanation for Rails 2.3 here.
If needed you can even move all migrations and stuff there as well, and use a rake task to run those.
Everything that needs to be overridden can then just be placed in the actual Rails project using the engine. So you would have two Rails projects, each with its own configuration, locales and some local overrides, and one big shared plugin/engine.
We used git submodules to keep the code in sync over different projects.
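To make that layout concrete (names are illustrative), each deployable site keeps only its own config and overrides, plus the shared engine as a submodule:

site_a/
  config/
    locales/                      site-specific locale files
    environments/                 site-specific settings and databases
  app/
    views/...                     only the views that override the shared ones
  vendor/
    plugins/
      shared_core/                git submodule: the shared engine (models, views, migrations)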
In Rails 3 this is even easier, since the engine can now just be a gem.
Hope this helps.

Resources