Torquebox Backgroundable Method calls - neo4j

I am using Torquebox to build a Rails application with an embedded Neo4j instance as the datastore. I've read multiple blogs claiming that Torquebox is a great fit for this because Backgroundable method calls run in the same process (replacing delayed_job, which doesn't work under JRuby anyway).
Unfortunately, after playing around with it, this clearly isn't the case: the background runtime keeps trying to start Neo4j and fails.
After looking at the documentation, I did find this which confirms it:
The message processors run in a separate ruby runtime from the application, which may be on a different machine if you have a cluster.
I'm new to Torquebox, so I'm not sure whether people are simply wrong about this, or whether there is another way with Torquebox to make an asynchronous call that runs in the same process, so it can interact with an embedded Neo4j datastore?
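A minimal same-process workaround I'm considering, sketched with a plain Ruby thread instead of Backgroundable (whose message processors live in a separate runtime). expensive_graph_update is a made-up stand-in for whatever touches the embedded Neo4j instance; the point is only that the work happens inside the current runtime, next to the already-open embedded store.

```ruby
# Hypothetical stand-in for work against the embedded Neo4j graph.
def expensive_graph_update
  sleep 0.05 # pretend: traversals against the embedded graph
  :updated
end

# Run the slow call on a plain thread in the SAME process/runtime,
# so it can reuse the one embedded Neo4j connection.
results = Queue.new
worker  = Thread.new { results << expensive_graph_update }

# ...the web request could return here while the work continues in-process...
worker.join
status = results.pop
puts status
```

This loses Backgroundable's queueing and retry semantics, but avoids the second runtime trying to start Neo4j again.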

I'm unfamiliar with Rails/Torquebox, but are you creating a new Neo4j graph in each thread? If so, note that in an embedded environment only one connection can be made to the Neo4j graph database. If you host a Neo4j server and use a RESTful client to call the DB, you can have multiple clients.

Related

Using Java to change ruby views

I have a program written in JRuby that is deployed on a Tomcat server. Everything about the program works fine, except that I cannot figure out a way to notify the user when the Java processes have completed.
Java does most of the heavy lifting of the program, and I want a Ruby view to pop up saying that the processing has finished.
The closest I got was using an HTTP GET request to try to render the "show" view within Ruby, but it appears not to work; I'm assuming because it is running in a different program runtime.
Is there any way, upon completion of the server-side Java code on Tomcat, to invoke a Ruby view change on a client machine?
Edit:
The Ruby code runs in parallel with the Java code. The Java code converges into one output, but the Ruby code doesn't converge; it mainly just drives the Java code and deals with the front end.
JRuby has Java integration, and depending on how your Java classes are "shared" (or not) on Tomcat, you can access such classes from within the Rails app. The question is mostly about designing such a solution to fit, which is hard to tell without knowing the exact requirements.
You would obviously need to share some (global) state between the two apps (assuming different contexts), be it a semaphore or an atomic flag; check out the Java concurrency utilities. Although if you do not want polling on Ruby's end, it might end up a bit tightly coupled.
Maybe try gaining the servlet context from one app in the other and exporting shared state (e.g. a queue object), or simply send a request using a dispatcher from the Java context to the Rails one.
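A sketch of that shared-state idea: a queue object published in one place, consumed in another. In a real Tomcat deployment the object would live in the servlet context (setAttribute/getAttribute) so both the Java side and the JRuby/Rails side can reach it; here Ruby's thread-safe Queue stands in for that shared attribute, a thread simulates the Java side, and the attribute name and payload are made up for the example.

```ruby
# Stand-in for a queue published in the servlet context, e.g.
#   servlet_context.get_attribute("job.done.queue")
SHARED_QUEUE = Queue.new

# "Java side" (simulated): signal completion when the heavy work finishes.
java_side = Thread.new do
  sleep 0.1 # heavy lifting...
  SHARED_QUEUE << { status: "done", output: "report-42" } # hypothetical payload
end

# "Ruby side": block until the Java work reports completion,
# then trigger the view change. Blocking pop, so no polling loop.
event = SHARED_QUEUE.pop
puts "Java finished: #{event[:output]}"
java_side.join
```

The blocking pop is what avoids the tight polling coupling mentioned above; the Java side only ever pushes.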

Code to be run once and only once at each rails server restart

My application is twofold. First, it runs a thread that reads the Arduino output to get the data and store it in my model; it does that forever, as the Arduino senses data forever. Second, the main web application analyses the data and draws fancy graphs.
When I start the rails server, I would like the reading thread to run once and only once.
My current implementation is with a thread in the main controller of my application (as I need the model to store the data). Each time I refresh the page, a new thread is created. This is not what I want as concurrent access to the Arduino creates false readings.
I guess it is a pretty classic problem, but I do not see how to get that behaviour with Rails. I have been looking and googling for a week now, but I am still stuck.
Thanks in advance.
Pierre
Agreed with @rovermicrover: you should consider separating the task of interacting with the Arduino from the web app.
To that end, I'd recommend that you create a rake task for that piece. Then you might consider managing the start/stop/restart of that rake task via foreman. It's a nice clean way to go and you can also do capistrano integration pretty easily.
Wrong tool for the job. Kind of. You're not going to want the Rails app to monitor the Arduino output; Rails isn't really meant for something like that. You're best off having a separate, dedicated app read the Arduino output and then save the information to a database.
Arduino Output ---> Application Parsing Output ---> DB ---> Rails App
This way your web application can focus on web, and not be torn between jobs.
An interesting way to do this would be to have the parsing application be a Ruby app, and use ActiveRecord outside of Rails in this instance. While I have never done it, people have used ActiveRecord in similar setups in pure Ruby apps. Here is an old example.
http://blog.aizatto.com/2007/05/21/activerecord-without-rails/
This way, data will still be collected even while you redeploy your Rails app. It also creates a firewall: if your Rails app explodes or goes down, data collection will be unaffected.
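A sketch of what the standalone collector might look like. The parsing step is plain Ruby; the ActiveRecord part (establish_connection plus a shared model, as in the linked article) appears only as comments, since the Reading model name, the line format "temp=23.5;hum=40", and the connection settings are all assumptions, not from the question.

```ruby
# Parse one line of hypothetical Arduino serial output,
# e.g. "temp=23.5;hum=40", into a hash of floats.
def parse_reading(line)
  line.strip.split(";").each_with_object({}) do |pair, h|
    key, value = pair.split("=")
    h[key] = Float(value)
  end
end

sample = parse_reading("temp=23.5;hum=40\n")

# In the collector process you would then persist it, roughly:
#   require "active_record"
#   ActiveRecord::Base.establish_connection(
#     adapter: "sqlite3", database: "sensors.db")  # assumed settings
#   Reading.create!(sample)  # hypothetical model shared with the Rails app
p sample
```

The collector loop (open serial port, read line, parse, save) then lives in a rake task managed by foreman, as suggested above.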

Correct way to implement standalone Grails batch processes?

I want to implement the following:
Web application in Grails going to MongoDB database
Long-running batch processes populating and updating that database in the background
I would like for both of them to reuse the same Grails services and same GORM domain classes (using mongodb plugin for Grails).
For the Web application everything should work fine, including the dynamic GORM finder methods.
But I cannot figure out how to implement the batch processes.
a. If I implement them as Grails service methods, their long-running nature will be a problem. Even wrapping them in some async executors will unnecessarily complicate everything, as I'd like them each to be a separate Java process so they can be monitored and stopped easily and separately.
b. If I implement them as src/groovy scripts and try to launch from command line, I cannot inject the Grails services properly (ApplicationHolder method throws NPE) or get the GORM finder methods to work. The standalone GORM guides all have Hibernate in mind and overall it seems not the right route to pursue.
c. I considered the 'batch-launcher' Grails plugin but it failed to install and seems a bit abandoned.
d. I considered the 'run-script' Grails command to run the scripts from src/groovy and it seems it might actually work in development, but seems not the right thing to do in production.
I cannot be the only person with such a problem - so how is it generally solved?
How do people run standalone scripts sharing the code base and DB with their Grails applications?
Since you want the jobs processing to be in a separate JVM from your front-end application, the easiest way to do that is to have two instances of Grails running, one for the front-end that serves web requests, and the other to deal with job processing.
Thankfully, the rich ecosystem of plugins for Grails makes this sort of thing quite easy, though perhaps not the most efficient, since running an entire Grails application just for processing is a bit overkill.
The way I tend to go about it is to write my application as one app, with services that take care of the job processing. These services are tied to the RabbitMQ plugin, so the general flow is that the web requests (or quartz scheduled jobs) put jobs into a work queue, and then the worker services take care of processing them.
The advantage with this is that, since it's one application, I have full access to all of the domain objects, etc., and I can leverage the disconnected nature of a message queue to scale out my front- and back-ends separately without needing more than one application. Instead, I can just install the same application multiple times and configure the number of threads dedicated to processing jobs and/or the queues that the job processors are looking at.
So, with this setup, for development I will usually just set the number of job-processing threads to whatever makes sense for the work I'm doing, run a simple grails run-app, and I have a fully functional system (assuming I have a RabbitMQ server running as well).
Then, when I go to deploy into production, I deploy 2 instances of the application, one for the front-end work and the other for the back-end work. I just configure the front-end instance to have 1 or 0 threads for processing jobs, and the back-end instance I give many more threads. This lets me update either portion as needed or spin up more instances if I need to scale one part or the other.
I'm sure there are other ways to do this, but I've found this to be both really easy to develop (since it's all one application), and also really easy to deploy, scale, and maintain.

Create multiple Rails servers sharing same database

I have a Rails app hosted on Heroku. I have to do long backend calculations and queries against a MySQL database.
My understanding is that using the DelayedJob or Whenever gems to invoke backend processes will still impact the Rails (front-end) server's performance. Therefore, I would like to set up two different Rails servers.
The first server is for front-end (responding to users' requests) as in a regular Rails app.
The second server (also a Rails server) is for back-end queries and calculation only. It will only read from MySQL, do the calculations, then write the results into another Redis server.
My sense is that not a lot of Rails developers do this; they prefer running background jobs on a Rails server and adding more workers as needed. Is my server structure a good design, or is it overkill? Are there any pitfalls I should be aware of?
Thank you.
I don't see any reason why a background job like DelayedJob would cause any more overhead on your main application than another server would. The DelayedJob worker runs in its own process, so the dynos for your main app aren't affected. The only impact could be on the database queries, but that will be the same whether it comes from a background job or from another app altogether accessing the same database.
I would recommend using DelayedJob and workers on your primary app. It keeps things simple and shouldn't be any worse performance wise.
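The process boundary is the whole point here. A sketch of why a worker in its own process doesn't block the web process, using a bare fork as a stand-in for DelayedJob's worker model (the gem itself isn't shown; the sum is a made-up placeholder for a long backend calculation):

```ruby
reader, writer = IO.pipe

# Child process: stands in for the DelayedJob worker doing heavy work.
pid = fork do
  reader.close
  result = (1..1_000).reduce(:+) # pretend: long calculation / heavy SQL
  writer.write(Marshal.dump(result))
  writer.close
end

writer.close
# The parent ("web") process is free to serve requests here;
# only the database, not the dyno, is shared with the worker.
result = Marshal.load(reader.read)
Process.wait(pid)
puts result # => 500500
```

With DelayedJob the handoff happens through a jobs table instead of a pipe, but the isolation between the two processes is the same.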
One other thing to consider, if you are really worried about performance, is a database "follower": effectively a second database that keeps itself up to date with your primary database but can only be used for reads (not writes). There may be better documentation about it, but you can get the idea here: https://devcenter.heroku.com/articles/fast-database-changeovers#create_a_follower. You could then have these lengthy background jobs read from the follower, leaving your main database completely unaffected.

RabbitMQ with EventMachine and Rails

We are currently planning a Rails 3.2.2 application in which we use RabbitMQ. We would like to run several kinds of workers (and several instances of each worker) to process messages from different queues. The workers are written in Ruby and live in the lib directory of the Rails app.
Some of the workers need the Rails framework (Active Record, Active Model...) and some of them don't. The first worker should be called every minute to check whether updates are available. The other workers should process the messages from their queues when messages (which are sent by the first worker) are present, and do some (time-consuming) work with them.
So far, so good. My problem is, that I only have little experiences with messaging systems like RabbitMQ and no experiences with the rails interaction between them. So I'm wondering what the best practices are to get the two playing with each other. Here are my requirements again:
Rails 3.2.2 app
RabbitMQ
Several kinds of workers
Several instances of one worker
Control the number of workers from Rails
Workers are doing time-consuming tasks, so they have to be async
Only a few workers needs the rails framework. The others are just ruby files with some dependencies like Net or File
I was looking for some solution and came up with two possibilities:
Using amqp with EventMachine in a new thread
Of course, I don't want my rails app to be blocked when a new worker is created. The worker should run in another thread and do its work asynchronously. And furthermore, it should not start a new instance of my rails application. It should only require the things the worker needs.
But some articles say there are issues with Passenger. Another thing I don't like is that we are using WEBrick for development, so we would have to include workarounds for that too. It would be possible to switch to another web server like Thin, but I don't have any experience with that either.
Using some kind of daemonizing
Maybe it's possible to run the workers as daemons, but I don't know how much overhead this would add, or how I could control the number of workers.
Hope someone can advise a good solution for that (and I hope I made myself clear ;)
It seems to me that AMQP is a very big gun to kill your problem with. Have you tried Resque? The backing Redis database has some neat features (like publish/subscribe and blocking list pop) which make it very interesting as a message queue, and Resque is very easy to use in any Rails app.
The workers are daemonized, and you decide which worker of your pool listens to which queue, so you can scale each type of job as needed.
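Resque's model in miniature: the web app enqueues jobs into a named queue and daemonized workers pop and perform them. Redis and the Resque gem are not shown here; Ruby's stdlib Queue stands in for Redis's blocking list pop (BLPOP), a thread stands in for the worker process, and the ImageResizeJob name and payload are made up for the example.

```ruby
jobs    = Queue.new
results = Queue.new

# "Worker": in Resque this would be a separate daemonized process,
# started with something like `QUEUE=images rake resque:work`.
worker = Thread.new do
  while (job = jobs.pop) # blocking pop, like Redis BLPOP
    break if job == :shutdown
    results << "resized #{job[:file]}" # perform the job
  end
end

# "Web app" side: enqueue work instead of doing it in the request cycle.
jobs << { class: "ImageResizeJob", file: "photo.png" }
jobs << :shutdown
worker.join
resized = results.pop
puts resized
```

Scaling a job type then just means starting more worker processes listening on its queue, which matches the "several instances of one worker" requirement above.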
Using the EM reactor inside a request/response cycle is not recommended, because it may conflict with an existing event loop (for instance if your app is served by Thin); in any case you have to configure it specifically for your web server. OTOH, it may be interesting to have an evented queue consumer if your jobs have blocking IO and are not processor-bound.
If you still want to do it with AMQP, see "Starting the event loop and connecting in Web applications" and configure for your web server accordingly. Or use Bunny to push synchronously into the queue (and whichever job consumer you deem useful, Workling for instance).
We are running a slightly different, but similar, technology stack.
Daemon-kit is used for the EventMachine side of the system: no Rails, but shared models (MongoMapper & MongoDB). EM pulls messages off the queues and does whatever logic is required (we have Ruleby in the mix, but if-then-else works too).
MuleSoft's Mule ESB is our outward-facing message receiver and sender that helps us deal with the HL7/MLLP world. (In v1 of the app, we used some Java code in ActiveMQ to manage HL7 messages.)
The Rails app then just serves up stuff for the user to see, again using the shared models.
