Is it a good idea to use MQ to store data in DB? - scalability

I'm going to use RabbitMQ as a message broker and switch most of my scripts to sending data to a queue instead of performing direct writes/reads. Consumers will get those messages and perform the corresponding operations. In my dreams this will give me more flexibility in choosing a DB engine, app-level sharding, and so on. But is it a good idea in general? Or am I missing something? Current write load is ~15k inserts/deletes for MySQL and 30-50k sets for the Redis instances. Read load is about the same: ~15-20k selects, and 50-70k gets for Redis.
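To make the architecture concrete, here is a minimal sketch of the producer/consumer split described above, assuming the bunny and mysql2 gems; the queue name, the JSON payload shape, and the naive SQL building are all illustrative:

```ruby
require "bunny"
require "json"
require "mysql2"

conn = Bunny.new
conn.start
ch = conn.create_channel
q  = ch.queue("db_writes", durable: true)

# Producer side: publish the write instead of hitting MySQL directly.
payload = { table: "events", values: { user_id: 42, name: "signup" } }
ch.default_exchange.publish(payload.to_json, routing_key: q.name, persistent: true)

# Consumer side (a separate process): apply each message to the database.
mysql = Mysql2::Client.new(host: "localhost", username: "app", database: "app")
q.subscribe(manual_ack: true, block: true) do |delivery_info, _props, body|
  msg  = JSON.parse(body)
  cols = msg["values"].keys.join(", ")
  vals = msg["values"].values.map { |v| "'#{mysql.escape(v.to_s)}'" }.join(", ")
  # Naive SQL building for illustration only; real code would whitelist
  # table/column names and use prepared statements.
  mysql.query("INSERT INTO #{msg['table']} (#{cols}) VALUES (#{vals})")
  ch.ack(delivery_info.delivery_tag) # ack only once the row is actually stored
end
```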

The biggest issue you'll face is that your DB writes will be processed asynchronously. If a client writes data to the DB and then instantly reads it back, the value might not be what it originally wrote, because the Rabbit queue might have been busy or slow, delaying the update. Or an admin might accidentally purge your queue, and then all those clients will think their transactions were committed when nothing was actually stored.
This sounds like a classic case of premature optimization. It's a solution in search of a problem, and you should probably avoid doing it.

With AMQP you can also run operations synchronously using an RPC pattern; with that kind of architecture you can sidestep the problems that come with asynchronous operations.
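For example, a minimal RPC round-trip over RabbitMQ with the bunny gem might look like the sketch below; the queue name, the echo-style handler, and the busy-wait are illustrative only, not a production pattern:

```ruby
require "bunny"
require "securerandom"

conn = Bunny.new
conn.start
ch = conn.create_channel

# Server side: consume requests, reply to the queue named in reply_to.
ch.queue("rpc_queue").subscribe do |_delivery, props, body|
  result = body.upcase # stand-in for the real DB operation
  ch.default_exchange.publish(result,
                              routing_key:    props.reply_to,
                              correlation_id: props.correlation_id)
end

# Client side: publish a request, then block until the matching reply arrives.
reply_q        = ch.queue("", exclusive: true)
correlation_id = SecureRandom.uuid
response       = nil

reply_q.subscribe do |_delivery, props, body|
  response = body if props.correlation_id == correlation_id
end

ch.default_exchange.publish("hello",
                            routing_key:    "rpc_queue",
                            reply_to:       reply_q.name,
                            correlation_id: correlation_id)

sleep 0.05 while response.nil? # crude wait; a real client would use a mutex/condvar
puts response # => "HELLO"
```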

Related

Rails - passing data from background job to main thread

I am using a background job system (Sidekiq) in my app to manage some heavy jobs that should not block the UI.
I would like to transmit data from the background job to the main thread when the job is finished, e.g. the status of the job or the data produced by the job.
At the moment I use Redis as middleware between the main thread and the background jobs. It stores the data, status, etc. of the background jobs so the main thread can read what is happening behind the scenes.
My question is: is this good practice for managing data between scheduled jobs and the main thread (using Redis or a key-value cache)? Are there other approaches? Which is best, and why?
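Concretely, the setup described might look something like this (a sketch assuming Sidekiq and the redis gem; the key names and do_heavy_work are hypothetical):

```ruby
require "sidekiq"
require "redis"

class HeavyJob
  include Sidekiq::Worker

  def perform(job_id)
    redis = Redis.new
    redis.set("job:#{job_id}:status", "running")
    result = do_heavy_work # hypothetical method doing the actual work
    redis.set("job:#{job_id}:result", result)
    redis.set("job:#{job_id}:status", "done")
  end
end

# Main thread: enqueue the job, then read the status key whenever needed.
HeavyJob.perform_async(123)
Redis.new.get("job:123:status") # => "running", later "done"
```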
Redis pub/sub is the thing you are looking for.
You just subscribe the main thread to a channel with the SUBSCRIBE command, and the worker announces job status on that channel with the PUBLISH command.
As you already have Redis inside your environment, you don't need anything else to start.
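A minimal sketch of that with the redis gem (the channel name is illustrative; note that SUBSCRIBE blocks, so it belongs in the listening process or a dedicated thread):

```ruby
require "redis"

# Main thread (or a dedicated listener thread): subscribe first, then
# block and react to announcements as they arrive.
Redis.new.subscribe("job_status") do |on|
  on.message do |_channel, message|
    puts "received: #{message}" # e.g. mark job 123 as finished in the UI
  end
end

# Worker process (runs elsewhere): announce status changes on the channel.
# Pub/sub is fire-and-forget, so the subscriber must already be listening.
Redis.new.publish("job_status", "job:123:done")
```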
Here are two other options that I have used in the past:
Unix sockets. This was extremely fiddly: creating and closing connections was a nuisance, but it does work. Dealing with cleaning up socket files and interacting with the file system is also a bit involved. Would not recommend.
Standard RDBMS. This is very easy to implement, and it made sense for my use case, since the heavy job was associated with a specific model, so the status of the process could be stored in columns on that table (see the sketch below). It also means that you only have one store to worry about in terms of consistency.
I have used memcached as well, which does the same job as Redis; here's a discussion comparing their features if you're interested. I found it to work well.
If Redis is working for you then I would stick with it. As far as I can see it is a reasonable solution to this problem. The only things that might cause issues are generating unique keys (probably not that hard), and also making sure that unused cache entries are cleaned up.
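For reference, the standard-RDBMS option above can be as small as this (a sketch assuming a reasonably recent Rails; the model, the export_status column, and do_heavy_export are hypothetical):

```ruby
# Hypothetical model with an export_status string column on its own table.
class Report < ActiveRecord::Base; end

# Background job: progress is persisted in the row itself.
report = Report.find(report_id)
report.update!(export_status: "running")
do_heavy_export(report) # hypothetical heavy work
report.update!(export_status: "done")

# Main thread / controller simply reads the column back.
Report.find(report_id).export_status # => "running", later "done"
```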

Speeding up web service by writing to redis first, disk after?

I have a web service that runs multiple DB queries and takes roughly 500ms-1,000ms (depending on how much I/O EC2 decides to give me at the given juncture of invocation). Users want things faster than 1,000ms, and understandably so. What I'm thinking of doing is taking the request parameters, stuffing them into a Redis queue without writing to disk, and then running a job in an asynchronous queue which does the disk writes. Does something like this happen normally in practice? Am I insane for suggesting it?
So long as your Redis is persisting to disk on regular intervals, this should work. You want to limit the number of scenarios where you might lose data. A sufficiently aggressive persistence schedule for Redis should work for most cases.
Try to give the user immediate feedback that their action has been received and is being processed. Nothing is more confusing than a slight delay before it appears; that might prompt people to attempt the upload again.
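One common shape for this, sketched with the redis gem (the list name and write_to_database are illustrative): the request path does an LPUSH and returns immediately, and a worker drains the list with BRPOP.

```ruby
require "redis"
require "json"

redis = Redis.new

# Request path: record the parameters in Redis and respond immediately.
params = { user_id: 42, action: "update_profile" }
redis.lpush("pending_writes", params.to_json)
# ...render "received, processing" to the user here...

# Background worker (separate process): do the slow disk writes off the request path.
loop do
  _list, raw = redis.brpop("pending_writes") # blocks until a job is available
  job = JSON.parse(raw)
  write_to_database(job) # hypothetical slow DB write
end
```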

How to guarantee data integrity for concurrent Rails/Active Record operations

I need to implement a feature for a rails site that will involve reading and exporting most of my database.
I know this operation is going to take a while. That's fine; I've got Delayed Job for that.
What I'm worried about is the data changing during the running of the job, and the resulting export being corrupted because of that.
My initial thought was to do all of the reads within a transaction. However, I would also like to be running the reads concurrently, if possible. ActiveRecord docs say that Transactions cannot be shared between Connections, and Connections cannot be shared between Threads. So it looks as though I am restricted to a single thread with this approach.
Any suggestions for a workaround? Is there another way to give the job a consistent view of the data that doesn't involve transactions? Or is there some alternative to ActiveRecord/Mysql out there that can distribute transactions across threads?
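For what it's worth, the single-threaded version of that consistent view is straightforward; a sketch, assuming Rails 4+ (which added the isolation option) and MySQL/InnoDB, where REPEATABLE READ gives every read in the transaction the same snapshot:

```ruby
# One connection, one transaction, one thread: all reads see one snapshot.
ActiveRecord::Base.transaction(isolation: :repeatable_read) do
  User.find_each  { |user|  export_user(user)   } # export_user/export_order
  Order.find_each { |order| export_order(order) } # are hypothetical helpers
end
```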

Executing large numbers of asynchronous IO-bound operations in Rails

I'm working on a Rails application that periodically needs to perform large numbers of IO-bound operations. These operations can be performed asynchronously. For example, once per day, for each user, the system needs to query Salesforce.com to fetch the user's current list of accounts (companies) that he's tracking. This results in huge numbers (potentially > 100k) of small queries.
Our current approach is to use ActiveMQ with ActiveMessaging. Each of our users is pushed onto a queue as a different message. Then, the consumer pulls the user off the queue, queries Salesforce.com, and processes the results. But this approach gives us horrible performance. Within a single poller process, we can only process a single user at a time, so the Salesforce.com queries become serialized. Unless we run literally hundreds of poller processes, we can't come anywhere close to saturating the server running the pollers.
We're looking at EventMachine as an alternative. It has the advantage of allowing us to kick off large numbers of Salesforce.com queries concurrently within a single EventMachine process. So, we get great parallelism and utilization of our server.
But there are two problems with EventMachine. 1) We lose the reliable message delivery we had with ActiveMQ/ActiveMessaging. 2) We can't easily restart our EventMachine processes periodically to lessen the impact of memory growth. For example, with ActiveMessaging, we have a cron job that restarts the poller once per day, and this can be done without worrying about losing any messages. But with EventMachine, if we restart the process, we could literally lose hundreds of messages that were in progress. The only way I can see around this is to build a persistence/reliable-delivery layer on top of EventMachine.
Does anyone have a better approach? What's the best way to reliably execute large numbers of asynchronous IO-bound operations?
I maintain ActiveMessaging, and have been thinking about the issues of a multi-threaded poller as well, though perhaps not at the same scale you guys are. I'll give you my thoughts here, but I'm also happy to discuss further on the ActiveMessaging list, or via email if you like.
One trick is that the poller is not the only serialized part of this. STOMP subscriptions, if you use client ack to prevent losing messages on interrupt, will only be sent a new message on a given connection once the prior message has been ack'd. Basically, you can only have one message being worked on at a time per connection.
So to keep using a broker, the trick will be to have many broker connections/subscriptions open at once. The current poller is pretty heavy for this, as it loads up a whole Rails env per poller, and one poller is one connection. But there is nothing magical about the current poller; I could imagine writing a poller as an EventMachine client that is implemented to create new connections to the broker and get many messages at once.
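Roughly, such a poller might look like the sketch below, assuming the Ruby stomp gem (exact signatures vary by gem version, and process_user is hypothetical); each connection gets one in-flight message, so N connections give N concurrent messages even in client-ack mode:

```ruby
require "stomp"

CONNECTIONS = 10

threads = CONNECTIONS.times.map do
  Thread.new do
    client = Stomp::Client.new("admin", "password", "localhost", 61613)
    client.subscribe("/queue/users", ack: "client") do |msg|
      process_user(msg.body)  # hypothetical Salesforce fetch
      client.acknowledge(msg) # only now is the message removed from the broker
    end
    client.join # park this thread on the connection's listener
  end
end

threads.each(&:join)
```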
In my own experiments lately, I have been thinking about using Ruby Enterprise Edition and having a master thread that forks many poller worker threads so as to get the benefit of the reduced memory footprint (much like passenger does), but I think the EM trick could work as well.
I am also an admirer of the Resque project, though I do not know that it would be any better at scaling to many workers - I think the workers might be lighter weight.
http://github.com/defunkt/resque
I've used AMQP with RabbitMQ in a way that would work for you. Since ActiveMQ implements AMQP, I imagine you can use it in a similar way. I have not used ActiveMessaging, which, although it seems like an awesome package, I suspect may not be appropriate for this use case.
Here's how you could do it, using AMQP (a sketch of the ack handling follows these steps):
Have the Rails process send a message saying "get info for user i".
The consumer pulls this off the message queue, making sure to specify that the message requires an 'ack' to be permanently removed from the queue. This means that if the message is not acknowledged as processed, it is returned to the queue for another worker eventually.
The worker then spins off the message into the thousands of small requests to SalesForce.
When all of these requests have successfully returned, another callback should be fired to ack the original message and return a "summary message" that has all the info germane to the original request. The key is using a message queue that lets you acknowledge successful processing of a given message, and making sure to do so only when relevant processing is complete.
Another worker pulls that message off the queue and performs whatever synchronous work is appropriate. Since all the latency-inducing bits have already been performed, I imagine this should be fine.
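The ack handling in steps 2-4 might look like this with the bunny gem (the queue names and fetch_accounts are illustrative):

```ruby
require "bunny"
require "json"

conn = Bunny.new
conn.start
ch = conn.create_channel
ch.prefetch(1) # at most one unacked message per consumer
q = ch.queue("user_sync", durable: true)

q.subscribe(manual_ack: true, block: true) do |delivery_info, _props, user_id|
  begin
    accounts = fetch_accounts(user_id) # hypothetical Salesforce calls
    ch.default_exchange.publish(accounts.to_json, routing_key: "user_sync_results")
    ch.ack(delivery_info.delivery_tag) # acknowledge only after success
  rescue StandardError
    # Not acked: requeue so another worker eventually retries it.
    ch.nack(delivery_info.delivery_tag, false, true)
  end
end
```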
If you're using (C)Ruby, try never to combine synchronous and asynchronous work in a single process. A process should either do everything via EventMachine, with no blocking code, or only talk to an EventMachine process via a message queue.
Also, asynchronous code is incredibly useful, but it is difficult to write, difficult to test, and bug-prone. Be careful. Investigate using another language or tool if appropriate.
Also check out "cramp" and "beanstalk".
Someone sent me the following link: http://github.com/mperham/evented/tree/master/qanat/. This is a system that's somewhat similar to ActiveMessaging except that it is built on top of EventMachine. It's almost exactly what we need. The only problem is that it seems to only work with Amazon's queue, not ActiveMQ.

Mnesia asynchronous transaction

I would like to have a master-slave setup of Erlang nodes, where read and write operations happen on the master node only. Slave nodes are only kept as hot-standbys.
As I understand it, the default behavior of Mnesia is to acquire the lock synchronously on all nodes before executing the write operation. This would result in high latency, especially for geographically distributed nodes.
My question is: does Mnesia support asynchronous transactions, where locks are only acquired on the master node, and write operations are propagated afterwards towards slave nodes?
I think you will be happier if you build this off-site replication on top of a message queue system (RabbitMQ, perhaps), updating the replicated DB yourself from the message queue feed. WAN links are more likely to become congested or go down, and message queue protocols have ways to handle that; Erlang distribution just gives up, and you have to spill the updates into a file until the replica comes back up and can consume them.
For best symmetry, make posting to the message queue the primary way to update the DB, so even the master is updated by consuming from the message queue. If a response is needed, the current master can send a message back to the issuer.
Mnesia does have a few different kinds of transaction contexts, but nothing that fits exactly with what you want.
Maybe your application can benefit from using sticky locks. I guess that is quite close to your needs, but... not exactly what you wanted: http://www.erlang.org/documentation/doc-5.8.3/lib/mnesia-4.4.17/doc/html/Mnesia_chap4.html#id70700
Interesting Q and equally interesting A!
Basically, what you are suggesting, Christian, is to have e.g. a gen_server serializing access to the DB.
The first time I did that I then realized: hang on! Mnesia is transactional, so it seems a bit odd to first serialize access and then sort of do it again by updating the DB via a transaction.
I am still a bit puzzled. However, given that Mnesia enforces transactional semantics, I tend to take that as a hint that you should not have to serialize access yourself, especially since the implementors of Mnesia probably know the system better than I do ;)
I understand that this is not quite a direct answer to your question; however, I'd say use Mnesia + memory nodes + disk nodes: the memory nodes for quick takeover and the disk nodes for recovery after a crash/backup.
HTH,
haavee
