On edx.org there is a Software as a Service course that grades all submitted assignments.
You upload a zip file with a Rails project and they run a bunch of integration and unit tests.
How do they do it?
My thought is that they mount each uploaded application as a Rails engine. Is it possible to test one Rails app within another? I'd like to create a similar service, but I don't know where to start.
I would imagine you could do this in a similar way to how Jenkins runs continuous testing on a project: each project uploaded to your site gets expanded into a workspace, and then you shell out and execute commands. But that allows for a lot of variable configuration and complexity that probably doesn't make sense in the scope you've proposed. It also doesn't protect your underlying OS from the projects you're testing.
You could also probably use an application container like Docker and manage each uploaded application that way, which would keep everything self-contained and isolate the application from the OS. It also puts the onus on the developer to package and manage dependencies correctly. I'm guessing they are probably using Docker or something similar; here's an example: Using Docker for MOOCs.
At the point where you want to capture the test results and report them back, I'd think they probably use something similar to the JUnit formatter for RSpec, or they just parse the RSpec output directly.
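For example, RSpec can already emit machine-readable output; a grader could shell out along these lines (the JUnit-style formatter comes from the separate rspec_junit_formatter gem, so treat that part as an assumption):

$ rspec --format json --out results.json
$ rspec --format RspecJunitFormatter --out results.xml

The JSON file contains per-example statuses and a summary block that is straightforward to parse.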
Is it possible to deploy a Ruby on Rails app via FTP?
If so, how do I run migrations on it?
My app also has a cron job. How do I set that up?
How do I deploy my website via FTP?
Are there any tutorials available?
Technically it is possible to deploy by FTP, but the question is, why would you want to? It's a nightmare compared to a modern, automated deployment system. There are also serious security concerns, since FTP is not encrypted in any way and is extremely easy to break into; using public Wi-Fi exposes you to the risk of your credentials being captured.
The traditional way to deploy a Rails application is with Capistrano, which handles packaging up your application through your version control system and rolling it onto your production system.
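A minimal Capistrano 3 setup looks roughly like this (application name, repository URL, and server path are placeholders):

# config/deploy.rb -- minimal sketch
set :application, "myapp"
set :repo_url, "git@example.com:me/myapp.git"
set :deploy_to, "/var/www/myapp"

After that, a deploy is a single command: cap production deploy.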
If you're not using a version control system, that's the first thing you need to fix. Hacking away at files randomly and throwing them onto a server over FTP produces quick results, but over time it makes it very difficult to get a consistent, tested, reliable build onto your target server.
Remember that Rails is not something that runs automatically the way .php files can; you'll need something like Passenger to handle launching your application.
If all this seems a bit convoluted, it's worth trying Heroku to get started. They have a very streamlined approach.
If I understand correctly what you are asking (is it possible to run a Ruby program using only FTP as the protocol), the answer is no, it is not possible. Ruby files are not static Web content (HTML, JS, CSS) that is executed in the browser, which you could just upload somewhere (for example via FTP) and then access over the Web. In the case of Ruby, apart from uploading content, you need to execute commands on the server (start the interpreter, run rake, etc.), and that is not possible over plain FTP.
Normally you would open an SSH channel to the deployment server to run the program after it has been uploaded. In that case, upload is possible via FTP, but also via its secure variant, SFTP (or SCP, to just copy files between the local and remote machines).
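For example, a bare-bones SCP/SSH deploy might look like this (hostname and paths are placeholders):

$ scp -r myapp/ user@example.com:/var/www/myapp
$ ssh user@example.com 'cd /var/www/myapp && RAILS_ENV=production bundle exec rake db:migrate'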
Hope it helps.
This question is about starting a Rails server for an external project from an RSpec environment.
There are two projects.
The first project acts as the Admin Back Office; it's the central application where users interact with web pages. I call it BackOffice.
The second project is a JSON API server which will receive commands from the Admin Back Office through JSON requests. I call it ApiServer.
I am trying to test the API interaction between these two Rails projects, and I would like to set up RSpec so I can write and maintain my spec files in the BackOffice project. Those specs would start an ApiServer Rails server and then drive it to perform the tests.
My issue is starting the ApiServer Rails server. After looking at the Rails app initialization files, I assumed I had to add a require of "config/environment".
But when I insert the following into BackOffice/spec/spec_helper.rb:
require File.expand_path('../../../ApiServer/config/environment', __FILE__)
I get the error
`initialize!': Application has been already initialized. (RuntimeError)
# Backtrace to the file:
# ApiServer/config/environment.rb
# Line:
# Rails.application.initialize!
I also tried simply calling the following in backticks:
`cd /api/path; bundle exec rails s -p 3002`
but got the same kind of error
Then I took inspiration from the Capybara source code and required "ApiServer/application"; after that I am able to create an ApiServer.new object, but as soon as I call initialize! on it I get the same message.
Any help is greatly appreciated. Cheers
Actually the second app is nothing more than an external service, which is better to stub out for the tests.
There is a nice article from thoughtbot about using the VCR gem to mock external web services:
https://robots.thoughtbot.com/how-to-stub-external-services-in-tests
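A minimal VCR setup, assuming the vcr and webmock gems, looks roughly like this:

# spec/support/vcr.rb -- minimal sketch
require "vcr"

VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"  # where recorded responses live
  c.hook_into :webmock                       # intercept HTTP through WebMock
  c.configure_rspec_metadata!                # enables the :vcr tag in specs
end

Specs tagged with :vcr then record the real HTTP exchange once and replay it on later runs.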
Obligatory "don't do that unless you really need to".
However, since it seems you know what you need:
Short answer:
You need to isolate both applications at the system-environment level and launch them from there using system-call syntax.
Long answer:
What you're trying to do is run two Rails applications in the same environment. Since they are both Rails applications, they share a lot of common names, and running them together ends in the name clash you're experiencing. Your hunch to try simple backticks was a good one; unfortunately you invoked Bundler inside the already-loaded environment, which also clashes.
What you have to do in order to make it work is properly isolate the applications (in terms of code, not in terms of the network, i.e. the communication layer) and then run a launcher from RSpec. There are multiple ways; you could:
Use Ruby process control (check this graph; you could try to combine it with a system-level exec; a sketch follows below)
Daemonize from Operating System level (init.d etc.)
Encapsulate in a VM or one of the wrappers (VirtualBox, Vagrant, etc.)
Go crazy and put code on separate machine and control it remotely (Puppet, Ansible, etc.)
Once there, you can simply run the launcher (e.g. a daemon init script, or spawn a new process in the isolated environment) from RSpec, and that's it.
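For the process-control option, a rough sketch (the path and port are assumptions; on older Bundler versions with_unbundled_env is called with_clean_env):

# BackOffice/spec/spec_helper.rb -- sketch of spawning the second app in isolation
api_dir = File.expand_path("../../ApiServer", __dir__)

api_pid = Bundler.with_unbundled_env do
  spawn("bundle exec rails s -p 3002", chdir: api_dir)  # separate process, clean gem env
end

at_exit { Process.kill("TERM", api_pid) if api_pid }    # stop the server when the suite exits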
Choosing which way to go depends heavily on your environment.
Do you run OS X, Linux, Windows? Are you using Docker? Do you manage your Rubies and gems through something like RVM? Things like that.
Generally it's a bad idea to require launching another service/application to get your unit tests to pass. This type of interaction is usually tested by mocking or VCR-recording responses, or by creating environment tests that run against deployed servers. Launching another server is outside the scope of RSpec and, as you've discovered, will generally cause a lot of setup and maintenance headaches.
However, if you're going to keep these Rails projects tightly coupled and you want them to share resources, I'd suggest investigating Rails engines. Doing this will require a substantial amount of work, but the benefits can be quite high: the code will share a repository and the apps will have access to each other's capabilities, while maintaining application isolation.
Engines effectively create a Rails application within another Rails application. Each application has its own namespace and a few isolating guards in place to prevent cross-app contamination. If you have many engines, it becomes ideal to have a shell Rails application with minimal capabilities serving each engine on a different route/namespace.
First you need to create housing for the new API engine.
$ rails plugin new apiserver --mountable
This will provide you with lib/apiserver/engine.rb as well as all the other scaffolding you'll need to run your API as an engine. You'll also notice that config/routes.rb now has a route for your engine; you can copy your existing routes into it to provide a route path for the engine. All of your existing models will need to move into the namespace, and you'll need to migrate any associated tables to the new naming convention. You'll also have some custom changes depending on your application and what you need to copy over to the engine, but the Rails guide walks you through these changes (I won't enumerate all of them here).
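Mounting the engine in the host application then looks something like this (the APIServer module name is an assumption chosen to match the spec example further down; the generator invocation above would produce Apiserver by default):

# BackOffice/config/routes.rb
Rails.application.routes.draw do
  mount APIServer::Engine => "/api"
end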
It took a coworker about a week of work to get a complicated engine copied into another complicated Rails server while development on both apps was occurring and while preserving version control history. A simpler app -- like an API-only service -- would, I imagine, be quicker to convert.
What this gives you is another namespace scope at the application root. You can change this configuration around as you add more engines and shared code to match various other directory structures that make more sense.
app/
  models/
  ...
apiserver/
  app/
  ...
And once you've moved your code into the engine, you can test against your engine's routes:
require "rails_helper"
describe APIServer::UsersController do
routes { APIServer::Engine.routes }
it "routes to the list of all users" do
expect(:get => users_path).
to route_to(:controller => "apiserver/users", :action => "index")
end
end
You should be able to mix and match routes from both services and get cross-application testing done without launching a separate Rails app and without requiring an integration environment for your specs to pass.
TaskRabbit has a great blog post on how to enginize a Rails application as a reference. They dive into the dos and don'ts of enginizing and go into more depth than can easily be transcribed into an SO post. I'd suggest following their procedure for engine decision-making, though it's certainly not required to successfully enginize your API server.
You can stub requests with WebMock like:
stub_request(:get, %r{^#{ENV.fetch("BASE_URL")}/assets/email-.+\.css$})
I want to create an online admin dashboard that indicates whether our most important tests are passing. I've already written the tests, but how could I run them on demand and see their results in a non-test (production) environment?
I'd like to avoid rewriting the test code if possible.
I have 3 ideas for how to do this (not sure if they're any good):
Using system calls to grep the verbose (-v) output of the tests
Using a different test runner that gives easier-to-parse output via a system call (see the sketch after this list)
Somehow abstracting the testing functions so that they can be used in a non-test context (not sure how I would do this)
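As a sketch of the second idea, RSpec's built-in JSON formatter already produces easily parsed output (commands and paths assumed):

# Run the suite out-of-process and report the summary.
require "json"

output = `bundle exec rspec --format json`
summary = JSON.parse(output)["summary"]

puts "#{summary['example_count']} examples, #{summary['failure_count']} failures"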
For Ruby/Rails testing I use RSpec, and I have well over 1000 tests already written in my application. Once the app goes live I want the tests to run once a day on the production server (since all the code is there) and to see if anything fails. The tests will run against test data, not production data. The goal is to ensure that any new code added in the future will not unknowingly break anything.
I have found a few solutions:
Travis CI (open-source only; not suitable for closed-source projects)
Jenkins CI (not sure if it works with, or works well with, Ruby/RSpec/Rails)
Watir (not sure if Ruby/RSpec works with it, but the tool itself is written in Ruby)
Preferably something that checks the codebase daily and then emails me when something isn't working.
I also plan on integrating JavaScript testing with a testing library of my choice (I just need the automation platform for testing it).
Can someone provide me some insight as to which tool to use? Or does anyone have any other tools to recommend?
Jenkins CI works great with RSpec, and it can run your Jasmine JavaScript tests and your Cucumber tests as well.
The only thing I'd recommend is not testing on the production server itself. When you push changes to your source-control repository, Jenkins will download the new code and run your tests there. When you're green (tests pass), push the code to production.
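A typical Jenkins build step for a Rails project might look like this (the JUnit-style report Jenkins can chart assumes the rspec_junit_formatter gem):

$ bundle install
$ RAILS_ENV=test bundle exec rake db:setup
$ bundle exec rspec --format RspecJunitFormatter --out results/rspec.xml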
I posted this very same question on Server Fault but got no reply, so here goes:
I'm currently finishing up a Rails application. I am using Warbler to package it as a ".war" file and GlassFish to deploy it. I do this because the application is to be distributed to companies for in-house use. Arguably I could/should have used another framework to develop an application of this nature; however, I chose ease and speed of development over deployment hassle.
That said, I've got the setup working reasonably well on my development machine. However, I'm curious how to go about automating environment initialization. In other words, I need to figure out how to ensure that all databases, files, etc. are configured upon deployment.
All of the examples I've seen thus far assume you're running your IDE on the system you wish to deploy to, and they have you run your rake tasks manually before deployment. I need to simply hand the end user the ".war" and have all rake tasks run upon application deployment/launch.
Can someone point me in the right direction here? FWIW there is nothing in the GlassFish manual about environment initialization; then again, I don't suppose I should expect it to cover every single aspect of deployment.
Best.
Depending on your database requirements, you can embed Derby within the GlassFish environment. You can easily create a blank/default database and then put that clean version in each GlassFish environment you have to set up.
I'm not sure what else you need to configure and initialize, but I'd say that if you can, script it up, for example with rake tasks. Embedding Derby takes care of database startup and initialization. Remember that a war file is just a zip file, so adding config files via a script shouldn't be hard. You can use Rails initializers (config/initializers/) to load up YAML files for configuration, or whatever else you need to do as the app starts up.
You won't be able to have the initializers create the schema in the database, but you could have them check for default seed data and insert it if it isn't there.
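A rough sketch of such an initializer (the file name, settings keys, and User model are hypothetical):

# config/initializers/seed_defaults.rb -- hypothetical sketch
SETTINGS = YAML.load_file(Rails.root.join("config", "settings.yml"))

Rails.application.config.after_initialize do
  # Insert default seed data only if the table exists and is empty.
  if defined?(User) && User.table_exists? && User.count.zero?
    User.create!(email: SETTINGS["admin_email"], password: SETTINGS["admin_password"])
  end
end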
You should be able to access any part of the file system that GlassFish and the JVM can access. I don't know much about GlassFish, but the only problems I've had with JRuby Rails apps on Tomcat were related to relative paths being relative to where the startup script was called from, and not always relative to the installation root. This could probably be solved with the right startup scripts in Tomcat or by setting the appropriate start-in folder; I just haven't had a need to dive into that very much.