I have an app with a Rails backend and a React frontend. I am deploying it in Docker containers: one for the app, one for Postgres, and one as a data volume container. I have it working, but the app image is huge (3 GB!) and takes a long time to build.
I'd love a way to split it up. The React app needs a bunch of Node packages, but only for development; once it's all webpack-ed the React app is essentially static files. And the Rails app doesn't need Node at all.
I don't need all the development-time tooling in the production image, but as it is, I feel like I need to have it all in the same codebase so I can (eventually) set up a CI/CD environment that can build the app and run all the tests. Is there a way to do this such that I'd have a container for the React/Node app and a container for Rails, and connect them at runtime?
I think that you may have found the answer to your question already - split the code bases.
We all have some kind of knee-jerk reflex to keep everything in a project in the same repo. It feels safe. Dealing with separate repos seems scary, but so does not mashing CSS and JS into the HTML for most beginners.
I feel like I need to have it all in the same codebase so I can (eventually) set up a CI/CD environment that can build the app and run all the tests
Well, that would be nice. However, testing JavaScript through Ruby or automated browsers is painfully slow. You end up with a "fast" suite of unit tests and a "slow" suite of integration tests that takes 15+ minutes.
So what's the alternative?
Your API and your SPA (React, Angular or whatever) actually do very different things.
The API takes HTTP requests and poops out JSON. It runs on a Ruby on Rails server and talks with a database and even other APIs.
You would do integration tests of your API by sending HTTP requests and testing the response.
Your API should not really care whether the request comes from a Fuzzle widget that renders a happy face or not. That's not the API's job. A plain request spec is all it needs:
RSpec.describe 'Pets API', type: :request do
  let!(:pet) { create(:pet) }
  let(:json) { JSON.parse(response.body) }

  describe 'GET /pets' do
    it 'returns the existing pet' do
      get '/pets'
      expect(json.first["name"]).to eq pet.name
    end
  end
end
The SPA server basically just needs to serve static HTML and just enough JavaScript to get things rolling.
A Docker container seems almost overkill here - you just want an nginx server behind a reverse proxy or a load balancer, since you're only serving up one thing.
You should have tests written in JavaScript that either mock out the API server or talk to a fake API server. If you really have to, you could automate a browser and let it talk to a test version of the API.
Your SPA will most likely have its own JS based toolkit and build process, and most importantly - its own test suite.
Of course this is highly opinionated, but think about it - both projects will benefit from having their own infrastructure and a clear focus. Especially the API, which can end up really strange if you start building it around a user interface.
You can take a look at my Rails + React project at github.com/yovasx2/aquacontrol
Don't forget to star and fork it!
Related
In the process of building a SPA, we opted for a combination of Rails API and Ember-cli.
From what we understand, the architecture of the app will be the following:
Rails API will run the back-end of the app, as an API
Ember-cli will run the front-end of the app, as a front-end MV* framework
Data will be served by Rails API to Ember-cli as JSON
What does not seem really clear though, is what should be the development workflow?
In other words, should we:
Build the back-end (rails models, etc), then build the front-end and finally connect both?
Build everything at the same time, but one feature at a time?
Go with another option?
I would recommend building both at the same time, in separate apps (that way you can test your API as an actual API rather than just a backend), but in close proximity to one another. This way you can make sure both play nicely with each other and that you're getting the results you actually need; plus, if something you do on one causes an error on the other, the bug becomes immediately apparent.
Let me know if this answers your question enough; I can clarify or give additional examples from here if you'd like.
I am developing a system composed of two different Rails applications (server and client) which communicate via REST web services.
I have tests for each app individually, but I would like to add some tests for the integration between the two platforms, to assert that one creates a request compatible with what the other is expecting.
Any hints would be appreciated.
I have a similar architecture and we are using VCR to record all server-side responses and avoid making requests every time. It can get annoying, and right now I'm looking for a way to clean data from the server after every request.
I think VCR could be a good starting point for testing the integration between your apps.
You can find documentation here -> Relish Docs
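To give a rough idea, here is a minimal sketch of the kind of VCR setup I mean; the OrdersClient class and the 'pending' status are made up, and you would point it at your own client code.

require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'
  c.hook_into :webmock                 # record/replay at the HTTP level
  c.configure_rspec_metadata!          # lets you tag examples with :vcr
end

RSpec.describe 'Orders client', :vcr do
  it 'fetches an order from the server app' do
    # The first run hits the real server app and records a cassette;
    # later runs replay the recorded response instead of making the request.
    order = OrdersClient.find(1)       # hypothetical client from your app
    expect(order['status']).to eq 'pending'
  end
end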
I think there could be several approaches here, depending on what you have implemented.
If the client Rails app has a user interface, try writing Selenium tests that perform the integration test in your local dev environment or in a staging environment that has both apps deployed (not ideal if the interface is still a prototype and changing frequently).
Maybe part of the client can be written as a Ruby gem (e.g. the REST communication layer lives in a gem). When the server app is in its test environment, it can use that client gem to run integration tests, i.e. call the same module functions the client uses. The client Rails app also uses the gem to make requests to the server. Here's a good guide to start migrating some of your reusable code into a rubygem.
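As a rough sketch of that idea (all names here are hypothetical), the gem could expose a small client that both apps depend on:

require 'net/http'
require 'uri'
require 'json'

module PetsApi
  class Client
    def initialize(base_url)
      @base_url = base_url
    end

    # GET /pets/:id and return the parsed JSON body
    def find_pet(id)
      uri = URI("#{@base_url}/pets/#{id}")
      JSON.parse(Net::HTTP.get(uri))
    end
  end
end

The client app uses this class to talk to the server in production, and the server app's test suite can call the very same methods against itself, so a change in the API surfaces on both sides.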
Let's say I am developing a web application which talks to a RESTful web service for certain things.
The RESTful web service isn't third party, but is being developed in parallel with the main application (a good example would be an e-commerce application and its payment processor, or a social network and an SSO system).
In such a system, acceptance (Cucumber) or functional tests can be done in a few ways:
By mocking out all external calls using an object-level mocking library such as Mocha or JMock.
By doing the mocking at the HTTP level, using libraries such as WebMock (see the sketch after this list).
By letting the main application make the actual call.
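For option #2, a minimal sketch with WebMock in an RSpec suite might look like this; the payments URL and JSON body are made up:

require 'webmock/rspec'

RSpec.describe 'charging a payment' do
  it 'handles a successful charge from the payment processor' do
    # Stub the external service at the HTTP level
    stub_request(:post, 'https://payments.example.com/charges')
      .to_return(status: 201,
                 body: '{"id":"ch_1","status":"paid"}',
                 headers: { 'Content-Type' => 'application/json' })

    # ...exercise whatever code in the main application makes this call
    # and assert on how it handles the stubbed response.
  end
end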
The problem with #1 and #2 is that if the API of the underlying application changes, my tests keep passing while the code actually breaks, defeating the purpose of the tests in the first place.
The problem with #3 is that there is no way to roll back data the way a test suite does on teardown. And since I run my tests in parallel, if I hit the actual web service I will get errors such as "username taken".
So the question to the community is: what is the best practice?
Put your main application in a development or staging environment. Spin up your web service in the same environment. Have the one call the other. Control the fixture data for both. That's not mocking; you're not getting a fake implementation.
Not only will this allow you to be more confident in real-world stability, it will allow you to test performance in a staging environment, and also allow you to test your main application against a variety of versions of the web service. It's important that your tests don't do the wrong thing when your web service changes, but it's even more important that your main application doesn't either. You really want to know this confidently before either component is upgraded in production.
I can't see that there is a middle ground between isolating your client from the service and actually hitting the service. You could have erroneously passing tests because the service has changed behavior, but don't you have some "contract" with the development team working on that service that holds them responsible for breakage?
You might try FakeWeb and get a fresh copy of the expected results each day so your tests won't run against stale responses.
I am building a REST web service layer on top of a Rails app that will be used by an iPhone application. The response format is XML.
I would like to build some acceptance tests that should be external to the rails stack (and should test everything, including the http server). The test scenarios are quite complex, involving the process of searching/posting/reviewing an order. What would be the best solution to accomplish this?
a. Ruby script using curl/curb to make the request and Hpricot to parse the response
b. Selenium
c. ..
It would also be nice if those tests could be used as integration tests (and therefore run on every git commit). What integration solution would you recommend?
a. Integrity
b. CruiseControl
c. something else
I've used three approaches over the last few years:
Active-resource
I found this to be too concerned with looking like active-record to be a great solution. In some cases I had to patch parts of it to behave the way I'd want a REST client to.
Rest-client
This gem is very good - well documented and works as expected. I combined it with my own simple DSL and it has worked out better than a generic testing framework.
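As an illustration (the staging host and /orders.xml endpoint are placeholders, and Nokogiri stands in for whatever XML parser you prefer - the question mentions Hpricot), a check in that style can be as small as:

require 'rest-client'
require 'nokogiri'

# Hit the running app from outside the Rails stack
response = RestClient.get('http://staging.example.com/orders.xml',
                          accept: :xml)

raise "unexpected status #{response.code}" unless response.code == 200

doc = Nokogiri::XML(response.body)
raise 'no orders returned' if doc.xpath('//order').empty?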
XML over HTTP
I use this for performance testing. Very flexible, but the learning curve is higher than Rest-client. If you go down this route you could use the Net::HTTP core class or the HTTParty gem (I haven't tried this but it looks great).
A really good resource is this Net::HTTP cheat-sheet
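Roughly the same request written against the raw Net::HTTP class looks like this (same placeholder URL as above):

require 'net/http'
require 'uri'

uri = URI('http://staging.example.com/orders.xml')

Net::HTTP.start(uri.host, uri.port) do |http|
  request = Net::HTTP::Get.new(uri)
  request['Accept'] = 'application/xml'

  response = http.request(request)
  puts response.code   # e.g. "200"
  puts response.body   # raw XML, ready for your parser
end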
For ad-hoc testing I've also found the Rest Client add-in for Firefox very useful.
Use selenium-rc in Ruby mode and you'll be a happy camper. Webrat/Cucumber already do this for you, so you can just put that in a second project and run the tests that way; all you'll have to do is override the host (so instead of localhost you'll be using your domain).
As to CI I'm afraid I don't know the best one.
You can't possibly mean MKS Integrity... if so, the answer is anything but. CruiseControl is a good CI tool. Really good.
I use a web service to convert files. The service returns the converted file as an HTTP POST, along with identifier data. My app receives the response, updates its database and saves the file to the appropriate location.
At least that's the idea, but how do I develop and test this on a local machine? Since my machine isn't publicly facing, I can't give the service a URL to post back to. What's the best way to handle this? I want to keep the process as clean as possible, and the only ideas I can come up with seem excessively kludgey.
Given how common REST API development is, I assume there are well-established best practices for this. Any help appreciated.
The solution will change a bit depending on which server you're using.
But the generally accepted method is to use the loopback address, 127.0.0.1, in place of a fully qualified domain name. Your server may need to be reconfigured to listen on this IP address, but that's usually a trivial fix.
example: http://127.0.0.1/path/to/resource.html
You can use curl, or even your browser if your application has a proper frontend. There are many other similar command-line tools, and each language has libraries for establishing HTTP connections and transferring data over them.
If your machine isn't accessible to the service you are using, then your only real option is to build a local stand-in for the service that will exercise your API. A rake task that sends the POST with the file and the identifier data would be a nice approach: you could start your Rails app locally, then kick off the task with some params to run your application through its paces.
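Something along these lines, for example; the callback route, params and file path are all placeholders for whatever your app actually expects:

require 'net/http'
require 'uri'

namespace :fake_service do
  desc 'Simulate the converter service posting a result back to the local app'
  task :post_result do
    uri = URI('http://127.0.0.1:3000/conversions/callback')   # hypothetical route

    request = Net::HTTP::Post.new(uri)
    request.set_form([['id', '42'],
                      ['file', File.open('tmp/converted_example.pdf')]],
                     'multipart/form-data')

    response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
    puts "#{response.code} #{response.message}"
  end
end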
This is the case any time you are trying to develop a system that can't connect to a required resource during development. You need to build a development harness of sorts so that you can exercise all the different types of actions the external service will call on your application.
This certainly won't be easy or straightforward, especially if your interface to this external service is complicated. Be sure to have your test cases send bad POSTs to your application so you are sure you handle both what you expect and what you don't.
Also make sure that you do some integration testing with the actual service before you "go-live" with the application. Hopefully you can deploy to an external server that the web service will be able to access in order to test. Amazon's EC2 hosting environment would let you set up a server very quickly, run your tests, and then shut down without much cost at all.
You have 2 options:
Set up dynamic DNS and expose your app to the outside world. This only works if you have full control over your network.
Use something like Webrat to fake the POSTs to your app. Since it's only one request, this seems pretty trivial.
Considering that you should be writing automated tests for this, I'd go with #2 (sketched below). I used to do #1 when developing Facebook apps, since there were far too many requests to mock them all out with Webrat.
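A rough sketch of #2 as a Rails integration test, using modern Rails test syntax; the /conversions/callback route, the params and the fixture file are all hypothetical:

require 'test_helper'

class ConversionCallbackTest < ActionDispatch::IntegrationTest
  test 'handles a converted file posted back by the service' do
    file = Rack::Test::UploadedFile.new('test/fixtures/files/converted_example.pdf',
                                        'application/pdf')

    # Fake the POST the conversion service would send back
    post '/conversions/callback', params: { id: 42, file: file }

    assert_response :success
    # ...assert the record was updated and the file saved where you expect
  end
end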
If your question is about testing, why don't you use mocks to fake the server? It's more elegant than using Webrat, and easier to deploy (you only have one app instead of an app and a test environment).
More info about mocks http://blog.floehopper.org/presentations/lrug-mock-objects-2007-07-09/
You've got some info about mocks with RSpec here: http://rspec.info/documentation/mocks/
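For example, a minimal sketch of the mock approach with RSpec, assuming your app wraps the converter behind some client object (the ConverterService idea and its convert method are made up):

RSpec.describe 'requesting a conversion' do
  it 'asks the converter service to convert the uploaded file' do
    converter = double('converter service')
    expect(converter).to receive(:convert).with('report.doc').and_return('job-123')

    # Inject the double wherever the real service client would be used,
    # then exercise the code under test. Called directly here for brevity:
    job_id = converter.convert('report.doc')
    expect(job_id).to eq 'job-123'
  end
end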