Rails test across multiple environments

Is there some way to change Rails environments mid-way through a test? Or, alternately, what would be the right way to set up a test suite that can start up Rails in one environment, run the first half of my test in it, then restart Rails in another environment to finish the test? The two environments have separate databases.
Some necessary context: I'm writing a Rails plugin that allows multiple installations of a Rails app to communicate with each other with user assistance, so that a user without Internet access can still use the app. They'll run a local version of an app, and upload their work to the online app by saving a file to a thumbdrive and taking it to an Internet cafe.
The plugin adds two special environments to Rails: "offline-production" and "offline-test". I want to write functional tests that involve both the "test" and "offline-test" environments, to represent the main online version of the app and the local offline version of the app respectively.
Edit: I've been reading up on Rack::Test, and it seems like it might be the way to go, because it places the testing framework outside of Rails itself. But I still have no idea how I could use it to run a test that involves more than one environment. Any ideas, anyone?

Maybe think about the issue from the perspective of having more than one db connection, instead of having more than one environment? You can't really switch environments partway through, at least not without a lot of hacking and screwing things up. :)
http://anandmuranal.wordpress.com/2007/08/23/multiple-database-connection-in-rails/
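If you go the multiple-connection route, a minimal sketch could look like the following; the class name and the offline_test entry (a second connection defined in config/database.yml) are only illustrative, not part of the plugin:

    # Abstract base class whose subclasses read from the offline database,
    # while the rest of the app keeps using the normal test connection.
    class OfflineBase < ActiveRecord::Base
      self.abstract_class = true
      establish_connection :offline_test
    end

    # Any model that should live in the offline database inherits from it.
    class WorkUnit < OfflineBase
    end

A single test process can then touch both databases by mixing ordinary models with OfflineBase subclasses, which sidesteps switching Rails environments entirely.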

Related

Fixing (or adding features to) a Rails server in production

I would like to know the best practice for taking a Ruby on Rails application that is in production and adding a feature to it, or debugging a broken feature.
What I mean is, say you have a working application and you have lots of people using it. You want to add a new feature to this app. You clone your application to your local machine. Create a new feature (or w/e) branch.
Now what do you change/do so you don't destroy your database and so you are able to test and debug this application on your local machine?
Also, let's say this is an older rails application with an older ruby version.
I would also like to note that I am having trouble finding any information on this, and I am willing to read books and lots of text to learn if it is a very involved task.
Although the complexity of this type of operation varies quite a bit, usually based on the complexity of the application itself, I think a few generalizations can be made.
Tests
Obviously, do not break any existing tests. Write tests for your new functionality, even if they are the first tests in the application.
Data
Ideally, you will have data to work with that very closely mirrors your production data. In some cases (CMS) this may be an actual dump of the production database and assets, restored locally. In other cases (billing portal for a hospital), you will probably need to rely on well-constructed seed data. Once your automated tests pass, you can perform manual QA against the (possibly simulated) production data.
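If you do restore a production dump locally, a small Rake task keeps it repeatable. This is only a sketch, assuming PostgreSQL and placeholder host and database names; adapt the commands to your own database and scrub sensitive data if your policies require it:

    # lib/tasks/db_sync.rake (hypothetical)
    namespace :db do
      desc "Restore a dump of the production database into development"
      task :restore_production_dump do
        dump = "tmp/production.dump"
        # Dump production, then load it over the local development database.
        sh "pg_dump --format=custom --no-owner --host=prod-db.example.com myapp_production --file=#{dump}"
        sh "pg_restore --clean --no-owner --dbname=myapp_development #{dump}"
      end
    end

Credentials are assumed to come from ~/.pgpass or PG* environment variables rather than being hard-coded in the task.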
Staging
If you do not have a staging environment that 100% mirrors your production environment, set one up now. It should match production as closely as you possibly can, using the database guidelines above. Merge your feature branch into staging prior to merging it into production. This allows you to do a final QA pass in a near-production environment, and it can be used to test not only new application features, but new server versions, Ruby versions, etc.
CI/CD
It is becoming very common to use CI/CD to automate the testing and deployment of feature branches. This can help enforce code quality guidelines. It can also allow you to run the tests in an environment that matches production, for extra peace of mind.
Backups
Obviously, even with all of this, things can still go wrong. Keeping up-to-date backups is vital, for worst case scenarios.

Is it possible to have two different apps with thinking sphinx pointing to the same database?

I inherited a legacy app running Thinking Sphinx v3. I've been working on a large update for it, upgrading rails, etc.
My updated app now has a different Thinking Sphinx index, but it shares the same schema. It also uses delta indexing with Delayed Job.
I have a beta environment fully up and running, but I now want to point the beta app at the production database so my colleagues can test the update, safe in the knowledge that if anything goes awry they can always fall back to the live app.
Is it possible for these two environments to co-exist? How should I be configuring my app or the database server?
It's generally possible to have two apps pointing to the same database, yes. Of course, there may be behaviours in one that impact the other, so you'd want to consider that complication!
With regards to Thinking Sphinx: each app's daemon and index data will be separate from the other, so that won't be a problem either. If you're running both apps on the same machine, though, you'll want to make sure the daemons are using different ports through the mysql41 setting.

Multiple redmine instances best practices

I'm studying the best way to run multiple Redmine instances on the same server (basically I need a database for each Redmine group).
So far I have two options:
Deploy a Redmine instance for each group
Deploy one Redmine instance with multiple databases
I really don't know what the best practice is in this situation; I've seen people doing it both ways.
I've tested deploying multiple Redmines (3 instances) with nginx and Passenger. It worked well, but I think it may not be feasible with a lot of instances. Each app needs around 100 MB of RAM, and as requests increase, more processes get allocated to each app. This scenario seems bad with many instances.
Option 2 seems reasonable, and I think I could implement it with Rails environments. But I suspect there are security problems related to sessions (a user authenticated on site A might be able to perform actions on site B).
Is there a good practice for this situation? What's the best approach to take?
One other requirement: we must be able to create or shut down a Redmine instance without interrupting the others (e.g. we should avoid server restarts).
Thanks for any advice, and sorry for my English!
Edit:
My solution:
I used a Redmine instance for each group, with nginx + Unicorn to manage each instance independently (Passenger didn't allow me to manage each instance independently).
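For reference, the per-instance Unicorn config is tiny. Something along these lines (paths and worker counts are only illustrative) gives each Redmine copy its own socket and pid file, so nginx can proxy to each one and each can be started or stopped without touching the others:

    # /srv/redmine-groupA/config/unicorn.rb
    working_directory "/srv/redmine-groupA"
    listen "/srv/redmine-groupA/tmp/sockets/unicorn.sock", :backlog => 64
    pid "/srv/redmine-groupA/tmp/pids/unicorn.pid"
    stderr_path "/srv/redmine-groupA/log/unicorn.stderr.log"
    stdout_path "/srv/redmine-groupA/log/unicorn.stdout.log"
    worker_processes 2
    timeout 30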
The two options are not so different after all. The only difference is that in option 2, you only have one copy of the code on your disk.
In any case, you still need to run separate worker processes for each instance, as Redmine (and most Rails apps in general) doesn't support switching databases per request, and some environment-specific data is cached in-process.
Given that, there is not much incentive to share even the codebase, as it would require certain monkey patches and symlink magic to allow proper initialization despite the intentional configuration differences (database and email configuration, paths to uploaded files, ...). The Debian package does this, but it's (in my eyes) rather brittle and leads to a rather non-standard system.
But to stress again: even if you share the same code on the disk between instances, you can't share the running worker processes.
Running multiple instances from the same codebase is not officially supported by Redmine. However, the Debian/Ubuntu packages seem to support such an approach. See:
Multiple instances of redmine on Debian squeeze
So, generally:
If you use Debian/Ubuntu go with option #2
Otherwise go with #1
Rolling forward a couple of years, you might now want to consider a third option: using Docker containers for each of your Redmine instances.
I've been using https://github.com/sameersbn/docker-redmine.git , and have been quite happy with it except that it doesn't yet support handling of incoming mail for creating and commenting on tickets.

How to architect Rails site that can be edited while running?

I am writing a Rails app that "scrapes/navigates" some other websites and webservices for content. I am using Mechanize and Savon to do the heavylifting.
But given the dynamic nature of the web, I'd like to make my calls to these editable by the admin users of the site - rather than requiring me to release a new version of the site.
The actual scraping thread happens async to the website, using the daemons gem.
My requirements are:
Thinking that the scraping/webservice calling code is quite simple, the easiest route is to make the whole class editable by the admins.
Keep a history of the scraping code - so that we can fairly easily revert if we introduce a problem.
Initially use the code from the file system, but as soon as that's been edited and stored somewhere, use that code instead.
I am thinking my options are:
Store the code in the db (with a history table for the old versions)
Store the code in a private git repo somewhere and access that for the history/latest versions.
I am thinking the git route might be easiest, given its raison d'etre is to track file history...
But perhaps there is a gem/plugin that does all this for me, out of the box?
Thanks in advance for any tips/advice.
~chris
I really hope you aren't doing something like what's talked about here...
Assuming you are doing a proper mixin, there used to be a gem called "acts_as_versioned" which would do something like what you want. It's been a while, so I don't know if it's been turned into a plugin or abandoned. Essentially, it provides a composite key for your versioned table.
Your database would have a structure like this:
Key column (id for the record)
Version column (id for the record's version)
All the record attributes
Let's say you had a table for your scripts, and the script you wanted has three versions. Your table would have the following records:
123, 3, '#Be good now'
123, 2, 'puts "Hi"'
123, 1, '#Do not be bad'
Getting the most recent version would be as simple as
Scripts.find :first, :conditions=>{:id=>123}, :order=>"version desc"
Rolling back would be as simple as removing the most recent version, or having another table with a pointer to the active version. It's up to you.
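For example, rolling back by dropping the newest version could be as little as the following, using the same illustrative schema and era-appropriate finder syntax as above:

    newest = Scripts.find :first, :conditions => {:id => 123}, :order => "version desc"
    newest.destroy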
You are correct in that git, subversion, mercurial and company are going to be much better at this. To provide support, you just follow these steps:
Check out the script on the server (using a tag so you can manage what goes there at any time)
Set up a cron job to check out the new script periodically (like every six hours or whatever you feel comfortable with)
The daemon you have for running the script should run the new version automatically.
IF your site is already under source control, and IF you're running under mod_rails/passenger, you could follow this procedure:
edit scraping code
commit change locally
touch yourapp/tmp/restart.txt
that should give you history of the change and you shouldn't have to re-deploy.
A bit safer, though I'm not sure it's possible in your case: make the change on a test/development server, commit locally, and test it; then, on the production server, git pull and touch tmp/restart.txt.
I've written some big spiders and page analyzers in the past, and one of the things to keep in mind is what code is providing what service to the entire application.
Rails is providing the presentation of the data being gathered by your spidering engine. The presentation is one side of the coin, and spidering is the other, and they should be two separate code bases, tied together by some data-sharing mechanism, which, in your case, is the database. The database gives you some huge advantages as does having Rails available, when your spidering code is separate. It sounds like you have some separation already, but I'd recommend creating a wider gap. With that in mind, here's how I've done it before, and what I'd do now.
Previously, I had a separate app for my spidering that was spawning multiple spider tasks. Each task would look at a bunch of different URLs, throw their results in the database, then quit. Each time one quit the main app would spawn another spider to process more URLs. Each loop, the main app checked a YAML configuration file for run-time parameters, like how many sub-tasks it should have running, how many URLs they'd get, how long they'd wait for connections, etc. It stored the last modification date of the config file each time it loaded it so, if I made a change to the file, the app would sense it in a reasonably short time, reread the file, and adjust its behavior.
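A stripped-down sketch of that reload-on-change idea (the file name and keys are invented for illustration) looks roughly like this:

    require "yaml"

    class SpiderConfig
      def initialize(path = "config/spider.yml")
        @path = path
        @mtime = nil
        @settings = {}
      end

      # Reread the YAML file only when its modification time changes, so edits
      # take effect on the next loop of the main app without a restart.
      def settings
        current = File.mtime(@path)
        if current != @mtime
          @settings = YAML.load_file(@path)
          @mtime = current
        end
        @settings
      end
    end

Each pass of the main loop would consult something like config.settings["max_spiders"] before spawning more sub-tasks.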
All state information about the URLs/pages/sites being scraped/spidered, was kept in the database so I could check on its progress. I could see how many had been processed or remained in the queue, the various result codes, and the content being returned. If I didn't like something I could even tweak the filters to skip junk pages, knowing the spidering tasks would be updated in a few minutes.
That system worked extremely well, spidered a major customer's series of websites without a glitch, running for several weeks as I added new sites to the list. (We were helping one of the Fortune 50 companies improve their sites, and every site had been designed and implemented by a different team, making every site completely different. My code had to be flexible and robust; I was really happy with how it worked out.)
To change it, these days I'd use a database table to hold all the configuration info. That way I could easily build an admin form, and let someone else inherit the task of adjusting the app's runtime configuration. The spider tasks would also be written so they'd pull their configuration from the database, rather than inherit it from the main app. I originally had the main app do all the administration and pass the config info to the spidering apps because I wanted to keep the number of connections to the database as low as possible. I was using Postgres and now know it could have easily handled the load, so by letting the individual tasks handle their configuration I could have made it more responsive.
By making the spidering engine separate from the presentation engine it was possible to temporarily stop one or the other without affecting the progress of the spidering job. Once I had the auto-reload of the prefs in place I don't think I had to stop the spidering engine, I just adjusted its prefs. It literally ran for weeks without stopping and we eventually pulled the plug because we had enough data for our needs.
So, I'd recommend tweaking your code so your spidering engine doesn't rely on Rails, instead it will be fired off by cron or a separate scheduling app. If you have to temporarily stop Rails your engine will run anyway. If you have to temporarily stop the engine then Rails can continue serving pages. The database sits between the two acting as the glue.
Of course, if the database goes down you're hosed all the way around, but what else is new? :-)
EDIT: Chris said:
"I see your point about the splitting the code out, though my Ruby-fu is low - not sure how far I can separate things without having to have copies of the ActiveModel/migrations stuff, plus some shared model classes."
If we look at your application as spider engine <--> | <-- database --> | <--> Rails/MVC/presentation, where the engine and Rails separately read and write to the database, and look at what each does well, that helps figure out how to break them into separate code bases.
Rails is designed to handle migrations, so let it. There's no reason to reinvent that wheel. But how often do you do migrations, and what is affected when you do? You do them seldom once the application is stable, and at that point you'd do them in a maintenance cycle to tweak the database. You can shut down the spidering engine and the web interface for a few minutes, migrate the database, then bring things up, and you're off and running. Migrations are a necessary evil, but are hardly show-stoppers once in production. Most enterprises have "Software Sunday", or some pre-announced maintenance window, so do the same.
ActiveRecord, modeling and associations are pretty easy to deal with too. The models are in files that Rails already requires internally, so the spidering engine can inherit the database know-how that way too; multiple apps/scripts can use the same model files. You don't see the Rails books talk about it much, but ActiveRecord is actually pretty easy to use outside of Rails. Search the googles for "activerecord without rails" for more info.
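A minimal standalone script along those lines might look like this; the model name, file paths and environment key are just placeholders:

    # spider_engine.rb, using ActiveRecord without booting the whole Rails stack
    require "active_record"
    require "yaml"

    db_config = YAML.load_file("config/database.yml")["production"]
    ActiveRecord::Base.establish_connection(db_config)

    # Reuse the very same model file the Rails app uses.
    require_relative "app/models/page"

    Page.where(:status => "queued").find_each do |page|
      # fetch, parse and update each page here
    end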
You can pull in ActiveSupport also if you want some of its extensions to classes by doing a regular require, but the Rails "view" and "controller" logic, which normally applies to presenting the web interface, shouldn't be needed at all in the engine.
Business logic, which goes in the controllers in Rails, could even be refactored into separate methods that get required by the Rails side of things and by the spidering engine. It's a different way of looking at Rails, but it falls in line with the "DRY" mantra: don't repeat yourself, so make things modular and require (or require_relative) the bits and pieces that are the building blocks of the entire system.
If you don't want a totally separate codebase, you can take advantage of Rails' script runner, which gives a script access to ActiveRecord::Base, ActiveRecord::Associations and ActiveSupport. Do a rails runner -h from your app's main directory, or search for "rails runner" for more info. runner is not good for a job that starts and runs many times an hour, because Rails' startup cost is high. But if you have a long-running task, say one that runs in parallel with your Rails app, then it's a great choice. I'd give it serious consideration for the spidering side of your application. Eventually you might want to break the spidering engine out to a separate host so the presentation side has a dedicated machine, so runner will help you buy time and get there in small steps.
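For example, a long-running engine could be launched with something like the following (SpiderEngine is a hypothetical class):

    rails runner "SpiderEngine.run"   # boots the full Rails environment, then evaluates the string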

how to "test" an externally managed db in rails

Background: We have a dependency on an externally managed database. This is a company-wide resource. We have a read-only account into it and have no control over or input into the schema or contents.
Issue: We're using ActiveRecord as our ORM into said resource; we manage its connection information separately from our central db connection information. That has worked out fine. We have some characterization tests that verify that our ActiveRecord models retrieve the data for a few known data points. However, we have no test/dev replacement strategy for this database. Right now all of our environments are configured to use the production database connection:
That sucks
We don't want the production password on the build server, so our build is broken
The queries to the production database server are slow, and because caching is off in test/dev, our homepage loads REALLY slowly locally
So we need something else in test/dev mode.
Q) Why not just have another sqlite database locally that mimics the schema of the production database?
A) Because we've tried that for another connection and it's lousy for at least a couple reasons.
It's fairly complex managing the separate schema (sqlite db file) in the rake process just for testing/dev.
Testing ActiveRecords outside of a schema that's managed by some process that ensures schema consistency between environments is largely meaningless.
The database configuration doesn't feel like the right seam. The database connection aspect of this, and thus the AR, is not part of what we're developing, it's just a connection library in this case. As long as we can ensure our test/dev replacement for it acts the same as the production AR, then it doesn't matter if we use AR for this in test/dev. I hope that made sense, it's an important point.
Q) You could use SchemaDumper to grab the schema of the production database and use it to generate the test database. That way all the SQLy details would be automated and it would look more like typical rails stuff.
A) Yeah, that would be pretty hot, but SchemaDumper doesn't seem to play nicely with the production database connection. It just hangs after a while and we don't get the whole schema. Bummer. It also doesn't avoid having to manage that whole other database file and work it into our rake tasks.
What I really want is to have production use the ARs that are tested in the characterization tests, and then have another object, a PORO that reads stuff out of a YAML file (like a fixture), that replaces them in the test/development/build environments.
Q) But Najati, isn't putting that stuff in a yaml file the same as defining the schema?
A) Well, yeah, sorta. It's just a lot more direct and easier to manage if it's some PORO that loads data out of a YAML file than if I also have to work some half-baked schema management into our build tasks; we do this currently, it's pretty lousy, and frankly it doesn't seem to be buying us much. Also, the test schema and the test data fixture duplicate the same information ("this is what we want the test version of this data to look like"), so why do we need both? I claim "so that you can use the same AR in both environments" is not a sufficient argument to justify the complexity of managing the extra sqlite db file.
Q) I feel like there's something you're not telling me.
A) I've been cheating on my Weight Watchers. Also,
In the past when I've had something like this my solution looked like this:
Characterization tests that capture the important aspects of the external service's behavior, run not with the unit test suite, but as a separate process on the build server, maybe once every 4 hours or every night or whatever.
A fake implementation that uses the same set of tests to exercise its behavior, ensuring that it provides similar functionality in the test/dev environment.
Spring (and probably dependency injection containers in general) makes this easy. You just swap out beans in your environment-specific bean config, and the test env goes on its merry way.
Given my understanding/knowledge, Rails doesn't seem to lend itself to this very well. I suppose I could redefine the class in my test/dev environment scripts, but that seems really shady. For one thing, I don't know if that would keep the model from being loaded at application start-up, and for another, it would add yet another strange wrinkle to our Rails project, another bit of magic that makes the project harder to come up to speed on. I want something that feels like the "service replacement" strategy used in Spring, without requiring hard-to-find/understand RoR magic.
Uhh. I'll stop there and see if that much prompts anything. Thanks for taking the time to read!
You don't actually test the database. You're testing your models that interact with the app, or other 'original' code that might touch the database. If there is magic in the prod database, take it out and put it in fixtures or factories.
The fixtures and factories load the test data into a test instance, for example db_test. When the test has passed or failed, the database is rolled back with transactions, so your tests can (and should) run atomically. If you are trying to build an app that tests a database, that's a different story. For everyone else, use the testing design that Rails provides: fixtures or factories and a "test" Rails database defined in config/database.yml.
The YAML file takes the place of the dependency injection functionality. It's just a hash of variables; you don't need any POJO/Spring tricks to swap out environments. :) When Rails runs your tests with fixtures or factories, it loads only the test environment as defined in database.yml. This also integrates nicely with RSpec, Guard and other tools. When I save one of my models, it creates some data in my test db, runs my tests and cleans up the database, all just by hitting the save button on my source file.
Integration tests should still use this same mechanism. The only thing that makes this process annoying is legacy databases and I've worked some magic there with metaprogramming to minimize the hassle.
Take a look at factory_girl for factories, and at RailsCasts episode #275: http://railscasts.com/episodes/275-how-i-test
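As a taste of the factory approach (the model and attribute names here are invented), a factory_girl definition and a test using it look like:

    # test/factories/external_records.rb
    FactoryGirl.define do
      factory :external_record do
        code  "ABC123"
        label "Example row"
      end
    end

    # in a test
    record = FactoryGirl.create(:external_record)
    assert_equal "ABC123", record.code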
I think you have only two options: either you duplicate the database in some way, or you separate the ORM into a thin (as thin as possible) layer and mock it out in your tests.
Besides, you may have an AR schema ready in your db/schema.rb.
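If you do go the thin-layer route, the swap can be as small as choosing the implementation per environment. Everything below (class names, the YAML path, the lookup method) is invented for illustration:

    # The app talks to whatever this returns, never to the AR class directly.
    class ExternalDirectory
      def self.client
        if Rails.env.production?
          ExternalDirectoryRecord   # the real read-only ActiveRecord model
        else
          FakeExternalDirectory.new(Rails.root.join("test/fixtures/external_directory.yml"))
        end
      end
    end

    # A PORO that answers the same questions from a YAML file.
    class FakeExternalDirectory
      def initialize(path)
        @rows = YAML.load_file(path)
      end

      def find_by_code(code)
        @rows.find { |row| row["code"] == code }
      end
    end

The characterization tests described above can then keep running against the real model on a schedule, and pointing the same expectations at the fake keeps the two implementations honest.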
