HTTP requests in transactions?

I have a model which sends an HTTP request to an external web service on creation, in order to find out some information to add before it is saved.
Currently I'm doing this in a before_create callback. I recently learned that before/after callbacks happen within database transactions.
Am I opening myself up to any issues, such as limiting DB throughput, by doing this? Would it be better to commit the record before sending the HTTP request and then update the record when it returns?

As long as you keep a transaction open, all the locks it acquired remain active. If you have a call to an external source that may stall you for a long period of time, be sure not to hold any unrelated locks in the same transaction.
In other words: don't put anything else into the same transaction.
If you don't mind the new row being visible before you look up the additional information, you might just commit and later update the row.
Or you fetch the information from the external web service before you even start the transaction. That would be the cleanest / fastest solution for the database.
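The principle is stack-agnostic; here is a minimal sketch in C#, with HttpClient and ADO.NET standing in for the Rails callback (the table, URL, and connection string are placeholders):

    using System.Data.SqlClient;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Do the slow external call *before* opening the transaction, so that
    // locks are only held for the duration of the fast INSERT.
    static async Task CreateRecordAsync(string connectionString)
    {
        using var http = new HttpClient();
        string info = await http.GetStringAsync("https://example.com/lookup"); // no locks held here

        using var conn = new SqlConnection(connectionString);
        await conn.OpenAsync();
        using var tx = conn.BeginTransaction();
        var insert = new SqlCommand("INSERT INTO records (info) VALUES (@info)", conn, tx);
        insert.Parameters.AddWithValue("@info", info);
        await insert.ExecuteNonQueryAsync();
        tx.Commit(); // the transaction was open only for the insert itself
    }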
Further reading: PostgreSQL lock types, and how to view locks.

What is clientChangeTokenData in CKModifyRecordsOperation?

I am working on CloudKit sync in my app ("Tiny data, all devices" model, with a custom zone in the private database).
CKModifyRecordsOperation has a clientChangeTokenData property of type NSData, which is described in the docs as follows:
When you modify records from a fetch operation, specify a client-generated data token using this property to indicate which version of the record you last modified. Compare the data token you supplied to the data token in the next record fetch to confirm the server has successfully received the device’s last modify request.
I don't get why I should bother, given that with each request I get a completion block which tells me whether the server has successfully received my request. Why do I need to manually compare this client token?
Is specifying clientChangeTokenData required to handle my use case correctly? I track local data changes and push everything in the queue on each data change. Remote changes are tracked via zone subscription.
If it is required, how do I generate this token correctly, given that I have all kinds of record changes in my CKModifyRecordsOperation (my API usage aims for batch operations)? What is the general workflow here?
Thank you.
It's unclear from the docs, so I'd guess the clientChangeTokenData is useful in the case of sending up a large modify-records operation, e.g. deleting 100 records. Then say your app sends a fetch request in another operation, with a query (or fetch-changes) result set that would be affected by the modifications, which either:
is started and runs concurrently with the existing modify operation, which hasn't finished yet, or
is started before the server has finished processing the results of the previous modifications (the docs tend to allude to a processing delay).
If the fetch completion contains a different clientChangeTokenData from the one sent with the modify, then you know the server hasn't received (or finished processing?) the changes yet. In this situation you could either raise an error, with an alert to say the server needs more time, or automatically retry the fetch after some delay.
By the way in my tests, this token is per-device.
You would only have a reason to check the token if you had local changes that you want to write to CloudKit and you want to make sure that your changes are based on the latest version of the data in CloudKit.
You could also just ignore the token and save the data anyway. If the data has changed in the meantime, you will get a CloudKit error and you can handle it then.

Is it ever necessary to flush in a service?

I know that if a Grails service is transactional, a call to save(flush: true) can be rolled back. My question is whether there is ever a need to call flush while in a service.
It depends on the scenario. Ideally, it won't be necessary to flush every time you save something in a service class, because the session is flushed when the call returns from the service.
But think of a scenario where two different Hibernate sessions are working separately, but data in one depends on the other; then you would need to flush.
For example, if Session 2 needs data from the database that Session 1 is frequently updating at the same time, that information has to be flushed to the underlying persistence layer to make it available to Session 2.
You can get finer-grained control over how transactions are handled by explicitly annotating the service class with @Transactional and specifying the propagation/isolation strategy, if required.
If you are doing bulk inserts using Hibernate, then you will want to flush the Session periodically in order to prevent an OutOfMemoryError, as the Session will keep growing until it is flushed (and cleared). Flushing writes the objects queued in the Hibernate Session cache to the database (in other words, it issues the SQL inserts), but the inserts are still within the scope of a DB transaction, so they can be rolled back. A sketch follows below.
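A minimal sketch of that periodic flush-and-clear pattern, shown here with NHibernate's C# API since the pattern is identical to Hibernate's (the Customer entity and the batch size of 50 are illustrative):

    using NHibernate;

    // Assumes an ISessionFactory configured elsewhere; Customer is a mapped entity.
    static void BulkInsert(ISessionFactory factory)
    {
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            for (int i = 0; i < 100000; i++)
            {
                session.Save(new Customer { Name = "Customer " + i });
                if (i % 50 == 0)
                {
                    session.Flush(); // issue the queued SQL inserts
                    session.Clear(); // evict them from the session cache so memory stays flat
                }
            }
            tx.Commit(); // everything above still rolls back if this is never reached
        }
    }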
The Hibernate docs have further discussion on this topic.

How to check if user has bought consumable item?

First, thanks everyone.
Prerequisite: I am providing consumable items in my application.
The user purchases the item via IAP.
Before my application receives the updatedTransactions callback, the network is disconnected.
So my server doesn't have the data to verify the receipt, and the user also cannot get the virtual currency.
Would anyone tell me how to solve this problem, or give me some tips? Thanks very much.
It's the standard client-server problem. In case the connection between client and server is severed (due to a timeout or other reasons), the common way to handle it is to retry the request. But if your API calls are not idempotent, and calling an API multiple times can affect the state of your system that many times, then we have to resort to something more clever. Some options you have:
Have a local database. When a purchase happens, first update the state in your local DB, then lazily sync the DB from client to server (Core Data or SQLite are excellent for this). The user is not aware of this, and since the DB is local, the UI will be extra snappy for the user.
The second approach: in the case of a failed HTTP call, keep retrying until the call succeeds.
In case the API is non-idempotent, you need the concept of a token: an API call with the same token, made multiple times, is first checked on the server side to see whether the initial call succeeded, and is executed again only if it failed (see the sketch below). This is very important in banking solutions, for example: imagine multiple debits from your bank account due to timeouts and a client programmed to keep retrying!
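A minimal server-side sketch of that token idea, in C# for illustration (PurchaseService, GrantCurrency, and the in-memory token store are hypothetical; a real system would persist tokens durably):

    using System;
    using System.Collections.Concurrent;

    public class PurchaseService
    {
        // Tokens of purchases we have already fulfilled.
        private readonly ConcurrentDictionary<Guid, bool> processedTokens =
            new ConcurrentDictionary<Guid, bool>();

        // The client generates one token per purchase and reuses it on every retry.
        public bool GrantCurrency(Guid clientToken, string receipt)
        {
            // TryAdd succeeds only for the first request carrying this token,
            // so retries of the same purchase never credit the user twice.
            if (!processedTokens.TryAdd(clientToken, true))
                return true; // already granted; safe to acknowledge again

            // ... verify the receipt and credit the virtual currency exactly once ...
            return true;
        }
    }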
This is all I am able to think of right now. Give it a spin and tell us what worked for you...

Design for long running ASP.net MVC web request

I'm aware of the model that involves a scheduled task running in the background which processes jobs registered by a web request, but how about this for an idea that keeps everything within ASP.NET...
User uploads a CSV file with, perhaps, several thousand rows. The rows are persisted to the database. I think this would take maybe a minute or so which would be an acceptable wait.
The request returns to the browser, and then an automatic Ajax request goes back to the server to fetch, say, ten rows at a time and process them. (Each row requires a number of web service requests.)
Ajax call returns, display is updated and then another automatic Ajax request goes back for more rows. This repeats until all rows are completed.
If user leaves the web page, then they could return and restart the job.
Any thoughts?
Cheers, Ian.
If I understand you right, you don't actually need any "interaction" between background jobs and the long-running request; you just want to launch background jobs from incoming requests? Not such a good idea. Take a look at the Quartz.NET project: it is a scheduler embeddable into an ASP.NET application, and it will handle this stuff for you without the need for requests (a sketch of the setup follows below). Of course, if there is an app pool shutdown, your scheduler goes down too, but you can't guarantee that won't happen even with your long-running-request solution, which depends on a browser waiting on the other side.
Also take a look at this interesting article from Phil Haack on this topic, with his own little scheduler library specific to ASP.NET:
http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
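A minimal sketch of wiring up Quartz.NET for this, using the current 3.x API (RowProcessingJob, its batch logic, and the 30-second interval are illustrative assumptions):

    using System.Threading.Tasks;
    using Quartz;
    using Quartz.Impl;

    // Runs periodically inside the ASP.NET process; no incoming request required.
    public class RowProcessingJob : IJob
    {
        public Task Execute(IJobExecutionContext context)
        {
            // Fetch a small batch of unprocessed CSV rows, call the web
            // services for each, and mark them done in the database.
            return Task.CompletedTask;
        }
    }

    public static class SchedulerBootstrap
    {
        // Call from Application_Start.
        public static async Task StartAsync()
        {
            IScheduler scheduler = await new StdSchedulerFactory().GetScheduler();
            await scheduler.Start();

            IJobDetail job = JobBuilder.Create<RowProcessingJob>().Build();
            ITrigger trigger = TriggerBuilder.Create()
                .StartNow()
                .WithSimpleSchedule(s => s.WithIntervalInSeconds(30).RepeatForever())
                .Build();

            await scheduler.ScheduleJob(job, trigger);
        }
    }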
A server-side program (or ideally a service) could still be quick and dirty, and it would be more reliable. You could still do step 1 as you have proposed: upload the file and insert the data (don't forget to increase the maxRequestLength and executionTimeout values in web.config). Then have a program running on the server that checks for new records and processes them.
If the user needs status, you could store an entry in the database for each file and update that record when the import is complete.
Maybe I'm reading the question and interpreting it in a weird way, but why couldn't you read the file into a database and store, in a table, the current line of the file that you've processed up to? You could then track progress via the db and just send small JSON objects telling the user how far along you are. That way, if their connection drops, you can keep processing their request, and if they return later you can notify them of how far along the job is. Also, if multiple clients are connecting, you can use the db to queue and throttle (by serializing) the workload. And if the user connects mid-job with another file, their new request will be queued up after their current job.
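A minimal sketch of that polling endpoint in ASP.NET MVC (the ImportJobs lookup and the CurrentRow/TotalRows columns are made up for illustration):

    using System.Web.Mvc;

    public class ImportStatusController : Controller
    {
        // Polled by the page via Ajax; reports how far the background
        // processing has gotten for this upload.
        public ActionResult Progress(int jobId)
        {
            var job = ImportJobs.Find(jobId); // hypothetical data access
            return Json(new { current = job.CurrentRow, total = job.TotalRows },
                        JsonRequestBehavior.AllowGet);
        }
    }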

How to send many emails via ASP.NET without delaying response

Following a specific action the user takes on my website, a number of messages must be sent to different emails. Is it possible to have a separate thread or worker take care of sending multiple emails so as to avoid having the response from the server take a while to return if there are a lot of emails to send?
I would like to avoid using system processes, scheduled tasks, or email queues.
You can definitely spawn off a background thread in your controller to handle the emails asynchronously.
I know you want to avoid queues, but another thing I have done in the past is write a Windows service that pulls email from a DB queue and processes it at certain intervals. This way you can separate the two applications if there is a lot of email to be sent.
This can be done in many different ways, depending on how large your application is and what kind of reliability you want. Any of these should help you achieve what you want (in ascending order of complexity):
If you're using IIS SMTP Server or another mail server that supports a pickup-directory option, you can go with that. With this option, instead of sending the emails directly, they are first saved to the pickup directory. Your call returns immediately after the email is saved there, so the user won't have to wait until the email is sent. On the other hand, the server tries to send the email as soon as it is saved in the pickup directory, so delivery is almost immediate, just without blocking the call (see the sketch after this list).
You can use a background thread as described in other answers. Be careful with this option, as the thread can end unexpectedly before it finishes its job, so you'll need some extra code to make it work reliably (personally, I'd prefer not to use this option).
Using a message queue server like MSMQ. This is more work, and you should probably only look into it if you have a large-scale application or good reasons not to use the first option with the pickup directory.
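A minimal sketch of the pickup-directory option, assuming the default IIS SMTP mailroot path (adjust PickupDirectoryLocation for your server; the addresses are placeholders):

    using System.Net.Mail;

    var client = new SmtpClient
    {
        DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory,
        PickupDirectoryLocation = @"C:\inetpub\mailroot\Pickup"
    };

    // Send() returns as soon as the .eml file is written to disk; the SMTP
    // service picks it up and delivers it in the background.
    client.Send(new MailMessage("noreply@example.com", "user@example.com",
                                "Your order", "Thanks for your order!"));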
There are a few ways you could do this.
You could store enough details about the message in the database and write a Windows service to loop through them and send the emails. When the user submits the form, it just inserts the required data about the message and trusts that the service will pick it up. That's almost the email queue you said you didn't want, but you're going to end up in a queue situation with almost any solution.
Another option would be to drop in NServiceBus. Use that for these kinds of tasks.
I typically compile the message body and store that in a table in the db along with the from and to addresses, a subject, and a timestamp indicating when the email was sent. Then I have a background task check the table periodically and pull any that haven't been sent. This task attempts to send each email and updates the timestamp accordingly. One advantage of storing the compiled message body up front is that the background task doesn't have to do any processing of context-specific data, and therefore can be pretty darn simple.
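A sketch of that background task (the EmailQueue table, its columns, and the connection string are illustrative assumptions):

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Net.Mail;

    // Run periodically; sends any rows whose SentAt timestamp is still NULL.
    static void SendPendingEmails(string connectionString)
    {
        var pending = new List<(int Id, MailMessage Message)>();

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var select = new SqlCommand(
                "SELECT Id, FromAddr, ToAddr, Subject, Body FROM EmailQueue WHERE SentAt IS NULL",
                conn);
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    pending.Add((reader.GetInt32(0), new MailMessage(
                        reader.GetString(1), reader.GetString(2),
                        reader.GetString(3), reader.GetString(4))));
                }
            }

            using (var smtp = new SmtpClient())
            {
                foreach (var (id, message) in pending)
                {
                    smtp.Send(message);
                    // Stamp the row so it isn't picked up again.
                    var update = new SqlCommand(
                        "UPDATE EmailQueue SET SentAt = GETUTCDATE() WHERE Id = @id", conn);
                    update.Parameters.AddWithValue("@id", id);
                    update.ExecuteNonQuery();
                }
            }
        }
    }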
Whenever an operation like this is contingent upon an event, there is always the possibility that something will go wrong.
In ASP.NET you can spawn multiple threads and have those threads do the work. Make sure you tell the thread it's a background thread; otherwise ASP.NET might wait for the thread to finish before rendering your page:
myThread.IsBackground = true;
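In context, that might look like the following (SendAllEmails and messages are hypothetical stand-ins for your sending logic):

    using System.Threading;

    var myThread = new Thread(() => SendAllEmails(messages)); // hypothetical sender
    myThread.IsBackground = true; // mark as background so the runtime doesn't wait on it
    myThread.Start();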
I know you said you didn't want to use system processes or scheduled tasks, but a Windows service would be a viable approach to this as well. The approach would be to use MSMQ, or to save the pending actions in a database table, and then have a Windows service check every minute or so and perform those actions.
This way, if something fails (e.g. the email server is down), those emails / actions can still be carried out later.
They will also be recorded for audits (which is very nice to have).
This method allows your web site to function as a website while offloading these tasks to another service. The last thing you need is for multiple ASP.NET processes to be tied up waiting for emails to send. Let something else handle that.
