tl;dr: In Rails, what is the recommended way of taking one form of input (in this case, a URL), and converting it to something else (in this case, a Facebook ID using their graph API) before saving it?
I'm working on creating a (very simple) site to track some Facebook and Twitter accounts. In both cases, I want a user to enter a URL into a form, but I'd like to then convert that URL into a (Facebook or Twitter) ID before saving to the database, for the sake of consistency and future-proofing.
My experience with Rails is very limited (just finished Michael Hartl's RailsTutorial), and I'm not sure how to set up my form to perform some action on the input before saving it. Any suggestions?
The common pattern for this is to do it in a before_save callback (see http://api.rubyonrails.org/classes/ActiveRecord/Callbacks.html). However, as these operations can take a noticeable amount of time, it's better to do the work in the background using DelayedJob, Resque, Sidekiq, or at least girl_friday. The enqueuing of the job can still go into the before_save callback. You'll probably also want to check there whether the ID is already known, so it doesn't get fetched again.
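For instance, here's a minimal sketch of that pattern, assuming a FacebookAccount model with url and facebook_id columns and a Sidekiq worker; all the names are illustrative, and FacebookGraph.lookup_id stands in for whatever Graph API call you end up using:

    class FacebookAccount < ActiveRecord::Base
      before_save :queue_facebook_id_lookup

      private

      # Skip the lookup if the ID was already resolved and the URL hasn't changed.
      def queue_facebook_id_lookup
        return if facebook_id.present? && !url_changed?
        ResolveFacebookIdJob.perform_async(url)
      end
    end

    class ResolveFacebookIdJob
      include Sidekiq::Worker

      def perform(url)
        account = FacebookAccount.where(url: url).first
        return unless account
        # FacebookGraph.lookup_id is a placeholder for your Graph API request.
        account.update_column(:facebook_id, FacebookGraph.lookup_id(url))
      end
    end

Note that on create the row isn't committed yet when before_save runs, so in practice enqueuing from an after_commit callback is the safer variant of the same idea.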
I've built a Rails app, basically a CRUD app for memos/notes.
A note's title must be unique. If a user enters a title that's already taken, a warning message is shown prompting them to choose another.
My question is how to make the latency of this feedback as close to zero as possible. When creating a note, little UX speed bumps like this will quickly get annoying for users.
Of course the main bottleneck is the network. Inspired by Meteor (and mini-mongo) I was thinking some kind of local storage could be a solution?
I.e. when the app first loads, send the client a JSON payload with ALL note titles. The app (the front end is AngularJS) could check LocalStorage (or App Cache, Web SQL?) instead of incurring a network round trip. The feedback would be instant.
I've used LocalStorage in the past to augment an app, but in this scenario I'd really be depending on it, and I'm not sure how confident I'd be building on something the user might not have. Also, as the number of user Notes/Memos grows, I have doubts about how feasible it is to send a JSON object down the wire with ALL the note titles. That might get pretty big. On the other hand, MeteorJS seems to do this with no problems.
Has anyone done something similar or have any pointers? Thanks!
I don't know how Meteor works here, but you're right that storing all note titles in localStorage is not a good idea. Actually, you don't need localStorage here; you can just put the titles in a JS array, because you only need this data once (when checking a new note's title).
I think there are two possible solutions:
You can change your business requirements and allow non-unique titles. Is it really necessary for titles to be unique?
You can verify the note title when the user submits the form. In that case you can provide suggestions, so users don't spend time guessing which titles are still free.
Or, if titles only need to be unique per user (two users can have the same title for their notes), you really can load all of that user's note titles into a JS array and check uniqueness while the user types a title.
Or you can send an AJAX request to check title uniqueness as soon as the user finishes typing the title. That saves a few seconds.
Or you can send an AJAX request as soon as the user has typed 3 characters. The request returns all titles that begin with those 3 characters, so you don't need to load every title.
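As a rough sketch of the AJAX variants, assuming a Note model with a title column (route, action, and parameter names are made up for illustration):

    class NotesController < ApplicationController
      # GET /notes/check_title?title=Shopping+list
      # Scope the query to current_user if titles only need to be unique per user.
      def check_title
        taken = Note.where('lower(title) = ?', params[:title].to_s.downcase).exists?
        render json: { available: !taken }
      end

      # GET /notes/titles?prefix=sho
      # Returns only the titles starting with the prefix, so the client never
      # needs the full list.
      def titles
        prefix = params[:prefix].to_s.downcase
        render json: Note.where('lower(title) LIKE ?', "#{prefix}%").pluck(:title)
      end
    end

The Angular front end would then debounce keystrokes and call one of these endpoints instead of keeping every title in LocalStorage.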
I have an application that makes API requests to salesforce using restforce.
Specifically, the application finds a contact object, returns IDs for all related objects, and then pulls the full record for every related object based on its ID.
This takes a long time for two reasons:
There are a lot of requests to an external API; each usually takes a fraction of a second to reply, and for some contacts there can be 500+ individual requests.
There is often a large amount of data being pulled back via each request.
All requests currently fall within the Salesforce REST API limits, but I'm getting timeout errors from my development server as it can take 5+ minutes for some of these requests to process.
Rails 4.2 - How best to handle this?
My question is how do I best get rails to handle this?
I can fire the API requests either from the controller (which definitely violates the skinny-controllers principle) or from the view (via helper methods, which seems like a dodgy hack).
Ideally I'd like to get it running in a background job, but I'm unsure whether I can include all the authentication and other methods in a job the same way I can include helper methods?
Even if I could get it to work in a background job, I'm unsure what best practice might be for the user experience. Ideally I'd like to route them to a page telling them to "hang tight, go get a coffee" with a progress bar, and then auto route them to the final page once the request is complete...
But I'm unsure how to generate a temporary display until a job has been completed?
Could anyone recommend any gems or strategies that might help me digest this problem?
You should definitely use a background job for this.
Give a database object to the job, which it will update to signal that it has finished, and maybe from time to time to indicate progress.
On the user side, simply tell them that the background job is working, possibly with a progress indicator, and display the result once the database object given to the job tells you it's ready.
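As a rough sketch with Rails 4.2's ActiveJob, assuming a SalesforceSync model with status and progress columns (the model, the SOQL, and the 'Task' object type are illustrative; Restforce picks up its credentials from your existing configuration):

    class SalesforceSyncJob < ActiveJob::Base
      queue_as :default

      def perform(sync_id, contact_id)
        sync   = SalesforceSync.find(sync_id)
        client = Restforce.new

        # Illustrative query: fetch IDs of related records, then pull each one.
        ids = client.query("SELECT Id FROM Task WHERE WhoId = '#{contact_id}'").map(&:Id)
        ids.each_with_index do |id, i|
          client.find('Task', id)
          # Update progress occasionally so a status page can render a bar.
          sync.update(progress: 100 * (i + 1) / ids.size) if (i % 25).zero?
        end

        sync.update(status: 'finished')
      end
    end

The controller then just calls SalesforceSyncJob.perform_later(sync.id, contact_id) and redirects to a "hang tight" page that polls the SalesforceSync record (via a tiny JSON action) until status is 'finished', then forwards the user to the results.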
I am currently using WePay with Rails. Don't worry, this post isn't really about WePay itself.
So when a customer wants to buy something from my site, he/she will be redirected to wepay.
Then after paying on wepay, wepay will redirect the user to /purchases/received
After X amount of time, Wepay will also do a post call to /purchases/callback to tell me that the payment has been captured (credit card processing is slow)
So my original plan is as follows:
For the Purchase model, have two fields: wepay_id and wepay_confirmed.
When the user places an order on WePay, the redirect to /purchases/received will create a purchase instance and save it in my DB.
When the callback is called, look up the purchase by wepay_id and then set wepay_confirmed to true.
However, I discovered that the X amount of time can be so short that /purchases/callback is called before /purchases/received has created the object.
So now I have two options:
Allow /purchases/callback to create an empty Purchase instance with just the id and confirmed = true. As I was doing this, I realized that I can no longer validate my model in the traditional manner. This really bugs me.
Create a separate table called Wepay_Confirmed. Whenever the callback is called, create an entry in wepay_confirmed. Map the presence of a checkout_id in this table to the Purchase.confirmed attribute.
I am thinking of doing option 2. How can I do this? Do I have to generate a scaffold for a specific model that maps to Wepay_Confirmed?
If you have any other suggestions, please reply
I would try to keep your application the way it is, because it does make sense. However, you should look into returning an error code to WePay and having them resubmit the request later, after the record has been created.
Just emailed the developers over at WePay and got this response:
Hi Devin,

We do have automatic IPN retries. Retries happen 5 minutes after the initial try, if the retry doesn't work, we try 15 minutes later, and then an hour later. However, right now they are only on empty 404 responses.

The best solution is to actually just ignore the IPN if he does not have the record in his database. Our IPNs only tell an application to look up the checkout details with the /checkout call. They do not have any details of the checkout. Since he should be looking up the /checkout status anyway when he creates the checkout object on his end, he doesn't need the IPN to tell him to look up the status in this case.

If that doesn't work for him he can also email me at api#wepay.com and we may be able to work out a solution.

Andrew
So it looks like you can modify the flow of your application to ignore IPNs without a matching record and check manually, or you can respond with a 404 and they will retry at the above intervals.
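A hedged sketch of what that callback could look like; the wepay_id and wepay_confirmed columns come from the question, while wepay_gateway.checkout stands in for however you already make the /checkout call:

    class PurchasesController < ApplicationController
      # POST /purchases/callback
      def callback
        purchase = Purchase.find_by_wepay_id(params[:checkout_id])

        if purchase.nil?
          # Either silently ignore the IPN, or return a 404 so WePay retries
          # after 5 minutes, 15 minutes and an hour (per the email above).
          head :not_found
          return
        end

        # The IPN carries no details, so look the checkout up ourselves.
        # Adjust the state check to whatever states you treat as paid.
        checkout = wepay_gateway.checkout(purchase.wepay_id)
        purchase.update_attribute(:wepay_confirmed, true) if checkout['state'] == 'captured'
        head :ok
      end
    end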
As I mentioned in my comment, I would personally prefer to create the purchase record upon purchase, then send the user to the WePay site, then handle the return trip and callback as actions to be completed against that original purchase record.
For one, that matches the reality of the transaction more accurately. When a user makes a purchase from your site, it makes sense to me that it's something you should persist at that point.
The two elements of the WePay transaction (the return trip to your site and the charge confirmation callback) would both act on that original purchase record. This will also allow you to see how many people abandon the purchase process when they hit WePay, which could reveal issues in your user experience and help maximize conversions.
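A compressed sketch of that flow; the helper that creates the WePay checkout (and stores its checkout_id on the purchase as wepay_id) is a stand-in for your existing API call, and it assumes the checkout_id is available again on the return trip:

    class PurchasesController < ApplicationController
      def create
        # Persist the purchase *before* sending the user off to WePay, so both
        # the return trip and the IPN callback have a record to act on.
        purchase = current_user.purchases.create!(item_id: params[:item_id],
                                                  wepay_confirmed: false)
        redirect_to create_wepay_checkout_for(purchase)   # stand-in helper
      end

      # GET /purchases/received -- the user coming back from WePay
      def received
        @purchase = Purchase.find_by_wepay_id(params[:checkout_id])
      end
    end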
I created a gem called wepay-rails which handles all of this for you. Under the hood it creates the entry (WepayCheckoutRecord) before sending the payer off to wepay. It has an IPN listener built in that handles the updating of that record. In my personal rails app, I am using state machine on the WepayCheckoutRecord model to track the changes to the state and doing 'things' as the state changes on that record.
I hope that helps.
Adam -
If you take the 2nd approach, you don't need to scaffold it. You can just create a migration and use it inside one of your other 'scaffolds'. Scaffolds are really just a way to get started with a resource, and I don't think your intent here is to have a fully-fledged resource. If it is, then you can use a scaffold.
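For example, a minimal sketch of that migration plus a convenience method on Purchase (the table is named wepay_confirmations here to follow Rails conventions; the columns come from the question):

    class CreateWepayConfirmations < ActiveRecord::Migration
      def change
        create_table :wepay_confirmations do |t|
          t.string :checkout_id, null: false
          t.timestamps
        end
        add_index :wepay_confirmations, :checkout_id, unique: true
      end
    end

    class Purchase < ActiveRecord::Base
      # "Confirmed" simply means a matching row exists in wepay_confirmations.
      def confirmed?
        WepayConfirmation.exists?(checkout_id: wepay_id)
      end
    end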
What I would like to do is have my admin user be able to see - in real time (via some AJAX/jQuery niceness) - what my users are doing.
How do I go about doing that?
I assume it has something to do with session activity - and I have started saving the session to the db, rather than the cookie.
But generally speaking, how do I take that info and parse it in real time?
I looked at my session table and aside from the ids (id and session_id), I see a 'data' field. That data field stores a hash - which I can't make any sense of (looks like an md5 hash).
How would I use that to see that User A just clicked on Link B, and right after that User B clicked on Link A, etc.?
Is there a gem - aside from rackamole - that might be able to help me?
You might want to check out Mixpanel. They are easy to set up and have some of what you are asking for.
The session data only contains the values stored in the session[]-hash from the user. It doesn't store which action/controller was called, so you don't know which "link was clicked".
Get the activity of your users:
Besides rackamole you have two options IMHO.
Use a before_filter in your ApplicationController to store the relevant info you are interested in (name of controller, action or URI, additional parameters, and the id of the logged-in user, for example); see the sketch after these two options.
Use an AJAX call at the bottom of each page which posts the info you are interested in (URI, id of the logged-in user, etc.) back to your server. This allows faster response times, as the info is stored after the page has already been delivered. Plus, you don't have to use a Rails request to store it; the AJAX call could also hit a simple PHP script that writes the data to disk, which is much faster.
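A rough sketch of the first option, assuming an Activity model with these columns (note that before_filter was later renamed before_action):

    class ApplicationController < ActionController::Base
      before_filter :track_activity

      private

      # Record who did what; current_user may be nil for anonymous visitors.
      def track_activity
        Activity.create(
          user_id:    current_user.try(:id),
          controller: controller_name,
          action:     action_name,
          path:       request.fullpath
        )
      end
    end

An admin page that lists the most recent Activity rows (and refreshes itself every few seconds) then covers the "oldschool" approach described below.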
Storing this activity:
Store this data/info either in the database or in a logfile. The database will give you more flexibility, like showing all actions from one user, all visitors for one page, etc. The logfile solution will give you better performance.
Realtime vs. Oldschool:
As for pulling out your collected data in realtime, you have to build your own solution. To do this elegantly (without querying your server once a second to see whether new data has arrived) you'll need another server process. Search for AJAX Push for more info.
Depending on your application I'd ask myself if realtime notifications for this are really necessary (because of all the hassles of setting this up).
To monitor the activity on your site, it should be enough to have a page listing the latest actions and manually refresh it (or refresh it automatically every ten seconds).
Maybe you can test https://github.com/raid5/acts_as_scribe#readme
It works with Rails 3 too.
Warning: some of this may be very wrong-headed, so please let me know if my assumptions are incorrect.
Here's what I'm trying to accomplish:
I'm using restful-authentication for login. However, as I am using flex/ruby_amf for my UI, I have to separately authenticate each connection from flex.
The way I decided to do that was by having the log-in screen redirect to the embedded flash page, inserting the session-id as a flashvar. The flash app sends the session-id with every request, and a before filter on all of the relevant controllers checks to see if the user associated with the session identified by the session-id is logged on.
The way I associate a user with a session is by adding a 'user_id' column to the sessions table and running an SQL "update sessions set user_id..." type query from the login function.
However, the user_id only gets updated the 2nd time the user logs in. A little investigating showed that the record in the sessions table does not yet exist during execution of the login function.
So, if everything up to this point makes sense, and conforms to best-practices, etc., then my question is:
At what point in time is the record in the sessions table created? Is there a way to update the session object in the login function and have rails write the user_id to the database for me?
The behavior of sessions in rails is a real mystery to me. I'd appreciate any help.
Thank you.
In Rails 2.3, the session is saved after the Rack application has finished its processing. In traditional Rails applications, this will be after the request is fully processed: before filters, controller action, view rendering, and after filters. Look in actionpack/lib/action_dispatch/vendor/rack-1.1.pre/rack/session/abstract/id.rb.
If you think about it, this makes perfect sense. Writing the session to its store every time you place something in the session would incur a lot of extra overhead.
It's Rails, so if you want to mess with it enough, sure, you can monkeypatch yourself a way to write the session to store anytime you wish. I don't recommend it. You'll end up having to rework the code constantly as Rails evolves.
You are right that for ActiveRecord::SessionStore, one row does map to one session. The data column is an encoded form of every object you put in the session. Each time a request comes in, Rails has to reconstitute the session as it existed by creating new instances of all the objects you previously stored in it.
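If you want to see what's actually inside that data column, here's a hedged console sketch for the Rails 2.3 ActiveRecord session store, assuming the default Base64 + Marshal encoding:

    sid = 'the-session-id-from-your-flashvar'
    row = ActiveRecord::SessionStore::Session.find_by_session_id(sid)
    Marshal.load(ActiveSupport::Base64.decode64(row.read_attribute(:data)))
    # => the hash of everything stored in session[], e.g. your user_id and the flash

Calling row.data should give you the same unmarshaled hash, since the session model does this decoding for you.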