How would I organize the flow of this Rails code?

I'm using Shippinglogic to gather tracking information for submitted tracking numbers.
I'm handling a number of things behind the scenes of the UI, but I'm not sure how to properly organize this.
So here's the flow:
1) The user submits a tracking number, either via form input or via URL (example.com/track/1234567890). If the number doesn't already exist in the database, the next step happens...
2) After a number is submitted, I run it through some logic to determine who the carrier is (UPS, FedEx, USPS, DHL, etc). The user never specifies the carrier; it's all determined automatically.
3) Once the carrier is determined, I make the actual call to the carrier API (via Shippinglogic) to get tracking information.
4) After I get the tracking details, I save them to the database.
5) Finally, the tracking details are returned to the user.
Since users can submit either via the form or via a URL (without any sort of POST action), I'm trying to run it all through the show method in my controller, where I check whether the number exists and, if not, create it via Number.create(:tracking_number => '1234567890'). But once I get into the model, I just get lost on what to do next.

I would direct users to the new or create actions, where you can handle creation and detect whether the record already exists. Once that's handled, send them on to the show page, where you can display the tracking information from your data source along with anything you have saved in your database. This way you preserve the conventional RESTful structure of the application, and other developers will be able to work with it if they need to.
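As a rough sketch of that flow (the Number model, its tracking_number column and the controller name are assumptions taken from the question, not a definitive implementation):

# app/controllers/numbers_controller.rb -- minimal sketch; model, column and
# route names are assumptions for illustration.
class NumbersController < ApplicationController
  # Handles both the form POST and the bare URL (example.com/track/1234567890).
  def create
    number = Number.find_by_tracking_number(params[:tracking_number]) ||
             Number.create(:tracking_number => params[:tracking_number])
    redirect_to number  # show renders the saved tracking details
  end

  def show
    @number = Number.find(params[:id])
  end
end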
Edit:
I had a project like this, and I moved my detection code out into a separate method on the model so I could change it without touching any particular caller. I also performed the API requests from the model, in the background, so I could cache data in the database and refresh the records deemed active once an hour.
Basically, if something needed to read data from the record or save data back onto it, it became a method on the model. That let me pull a lot of logic out of the controller actions and keep it in one place.
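Something along these lines, purely as a sketch: the carrier-detection rules are placeholders, and the Shippinglogic call is commented out because the credentials and exact options depend on your account.

# app/models/number.rb -- illustrative only; the detection patterns and the
# Shippinglogic call are placeholders, not working rules.
class Number < ActiveRecord::Base
  before_create :detect_carrier
  after_create  :fetch_tracking_details

  private

  # Guess the carrier from the number's format (placeholder rules).
  def detect_carrier
    self.carrier = case tracking_number
                   when /\A1Z/       then 'UPS'
                   when /\A\d{12}\z/ then 'FedEx'
                   else 'Unknown'
                   end
  end

  # Call the carrier API and cache the response on the record.
  # Ideally run this in a background job so the request cycle stays fast.
  def fetch_tracking_details
    # fedex   = Shippinglogic::FedEx.new(KEY, PASSWORD, ACCOUNT, METER)
    # details = fedex.track(:tracking_number => tracking_number)
    # update_attribute(:details, details.to_yaml)
  end
end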

Related

Rails: working on temporary instance between requests and then commit changes to database

I have already read "Rails - How do I temporarily store a rails model instance?" and similar questions, but I cannot find a satisfactory answer.
Imagine I have the model Customer, which may contain a huge amount of information attached (simple attributes, data in other tables through has_many relation, etc...). I want the application's user to access all data in a single page with a single Save button on it. As the user makes changes in the data (i.e. he changes simple attributes, adds or deletes has_many items,...) I want the application to update the model, but without committing changes to the database. Only when the user clicks on Save, the model must be committed.
For achieving this I need the model to be kept by Rails between HTTP requests. Furthermore, two different users may be changing the model's data at the same time, so these temporary instances should be bound to the Rails session.
Is there any way to achieve this? Is it actually a good idea? And, if not, how can one design a web application in which changes in a model cannot be retained in the browser but in the server until the user wants to commit them?
EDIT
Based on user smallbutton.com's proposal, I wonder if serializing the model instance to a temporary file (whose path would be stored in the session hash), and then reloading it each time a new request arrives, would do the trick. Would it work in all cases? Is there any piece of information that would be lost during serialization/deserialization?
As HTTP requests are stateless, you need some kind of storage between requests. The session is the easiest place to store data between requests, but it won't be enough for you, because the data needs to be accessible to multiple users.
I see two ways to achieve your goal:
1) Get some fast external data store like a key-value server (Redis, or anything you prefer from http://nosql-database.org/) where you put your objects by serializing/deserializing them (e.g. as JSON).
This can be fast, depending on your design choices and data model, but it is the harder approach.
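A rough sketch of option 1, assuming the redis gem and a Customer model whose attributes round-trip through JSON (associations and validation are glossed over here; key names are made up):

# Sketch of option 1 -- stash the in-progress Customer in Redis, keyed by the
# session id; the key and draft format are illustrative.
require 'redis'
require 'json'

redis = Redis.new

# On each request, store the user's unsaved edits:
redis.set("draft_customer:#{session[:session_id]}", customer.attributes.to_json)

# On the next request, rebuild an unsaved instance from the draft:
if (draft = redis.get("draft_customer:#{session[:session_id]}"))
  customer = Customer.new(JSON.parse(draft).except('id'))
end

# Only when the user finally clicks Save:
customer.save!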
2) Store your objects in the DB as you normally would and version them (https://github.com/airblade/paper_trail). Then just record a timestamp when people hit the Save button, and you can always go back to that state. This is the easier approach, I think, though it may be a bit slower depending on the size of your data model changes (but I think it'll do).
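With option 2, the paper_trail side looks roughly like this (the Customer model is just an example, and rollback of has_many children needs extra care):

# Option 2 sketch -- changes are saved immediately but versioned, so an
# abandoned edit can be rolled back. Model names are illustrative.
class Customer < ActiveRecord::Base
  has_paper_trail
end

customer.update_attributes(params[:customer])  # persisted, but versioned

# If the user never confirms, reify the state from before their change:
previous = customer.versions.last.reify
previous.save! if previous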
EDIT: If you need real-time collaboration between users you should probably have a look at something like Firebase
EDIT 2: To answer your second question, whether you can put the data into a file:
Sure, you can do that. But you would need some kind of locking to prevent data loss if more than one person is editing. You will need that as well if you go for 1), though tools like Redis already include locks for this (e.g. redis-semaphore). Depending on your data, you may also need some logic for merging the changes of different users.
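If you do go the Redis route, the locking could look something like this (a sketch assuming the redis-semaphore gem; the lock name is made up):

# Sketch of locking around draft edits with the redis-semaphore gem;
# the semaphore name and what happens inside the block are illustrative.
require 'redis'
require 'redis-semaphore'

semaphore = Redis::Semaphore.new(:customer_draft_lock, :redis => Redis.new)

semaphore.lock do
  # read, merge and rewrite the serialized draft while no one else can touch it
end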
3) Another approach that came to my mind would be to do all the editing with JavaScript and save it in one DB transaction. This would go well with synchronization tools like Firebase (or your own synchronization via the Rails streaming API).

How can I persist objects between requests with ASP.NET MVC?

I'm just starting to learn ASP.NET MVC and I'd like to know how I can retain model objects between subsequent requests to controller action methods?
For example, say I'm creating a contact list web app. Users can create, update, rename, and delete contacts in their list. However, I also want users to be able to upload a contact list exported from other programs. I don't want to just automatically add all the contacts in the uploaded file; I want to give the user a secondary form where they can pick which uploaded contacts should actually be added to their list.
So first I have a ContactController.Upload() method which shows an upload form. This submits to ContactController.Upload(HttpPostedFileBase file) which reads the file that was posted into a set of Contact model objects. Then I want to display a list of all the names of the contacts in the list and allow the user to select those that should be added to their contact list. This might be a long list that needs to be split up into multiple pages, and I might also want to allow the user to edit the details of the contacts before they are actually added to their contact list.
Where should I save the model objects between when a user uploads a file and when they finally submit the specific contacts they want? I'd rather not immediately load all the uploaded contacts into the back end database, as the user may end up only selecting a handful to actually add. Then the rest would need to be deleted. Also I would have to account for the case when a user uploads a file, but never actually completes the upload.
From what I understand, an instance of a controller only lasts for one request. So should I create a static property on my Contact controller that contains all the latest uploaded contact model object collections? And then have some process that periodically checks the age of these collections and clears out any that are older than some specified expiration time?
A static property on the controller is trouble. First off, it won't work in a web farm, and second, you'd have to deal with multiple requests from different users. If you really don't want to use your database, you could use the ASP.NET Session.
No, you don't want a static property, as that would be static to all instances of the controller, even for other users.
Instead, you should create a table used to upload the data to. This table would be used as an intermediary between when the user uploads the data, and completes the process. Upon completion, you copy the contacts you want to keep into your permanent table, then delete the temporary data. You can then run a process every so often that purges incomplete data that is older than a specified time limit.
You could also use the HttpContext.Cache, which supports expiration (and sliding expiration) out-of-the box.
Alternatively, and perhaps even better (but more work) you could use cookies and have the user modify the data using javascript in her browser before finally posting it to you.
However, I'd strongly recommend to store the uploaded information in the database instead.
As you pointed out, it might be a lot of data and the user might want to edit it before clicking 'confirm'. What happens if the user's machine (or browser) crashes or she has to leave urgently?
Depending on how you store the data, it will probably be lost in this scenario. Even if you used the user id as a cache key, a server restart, cache expiration or cache overflow would cause data loss.
The best solution is probably a combination of database and cookie storage where the DB keeps the information in a temporary collection. Every n minutes, or upon pagination, the modified data is sent to the server and updated in the DB.
The problem with storing the data in session or memory is what happens if the user uploads 50k contacts or more. You then have a very large data set in memory to deal with, which, depending on your platform, may affect application performance.
If this is never going to be an issue and the size of the imported contacts list is manageable, you can use either the session or the cache to store the dataset for further modifications. Just remember to clear it when the user has committed the changes; you don't want a few heavy datasets hanging around in session.
If you store the dataset in session using your application controller then it will be available to all controllers while it is needed.

How do I see real-time activity of my users in Rails 3?

What I would like to do is have my admin user be able to see - in real time (via some AJAX/jQuery niceness) - what my users are doing.
How do I go about doing that ?
I assume it has something to do with session activity - and I have started saving the session to the db, rather than the cookie.
But generally speaking, how do I take that info and parse it in real time ?
I looked at my session table and aside from the ids (id and session_id), I see a 'data' field. That data field stores a hash - which I can't make any sense of (looks like an md5 hash).
How would I use that to see that User A just clicked on Link B, and right after that User B clicked on link A, etc. ?
Is there a gem - aside from rackamole - that might be able to help me?
You might want to check out Mixpanel. It is easy to set up and has some of what you are asking for.
The session data only contains the values stored in the session[]-hash from the user. It doesn't store which action/controller was called, so you don't know which "link was clicked".
Get the activity of your users:
Besides rackamole you have two options IMHO.
Use a before_filter in your ApplicationController to store the relevant info you are interested in (name of the controller, action or URI, additional parameters, and the id of the logged-in user, for example); a sketch of this follows after these two options.
Use an AJAX call at the bottom of each page which posts the info you are interested in (URI, id of the logged-in user, etc.) back to your server. This allows faster response times, because the info is stored after the page has already been delivered, and you don't have to go through a full Rails request to store it; the AJAX request could even hit a simple PHP script that writes the data to disk, which is much faster.
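A minimal sketch of the first option (Rails 3 syntax; the Activity model and its columns are assumptions for illustration):

# app/controllers/application_controller.rb -- sketch only; Activity and its
# columns (user_id, controller, action, path) are assumed for illustration.
class ApplicationController < ActionController::Base
  before_filter :track_activity

  private

  def track_activity
    Activity.create(
      :user_id    => current_user.try(:id),
      :controller => params[:controller],
      :action     => params[:action],
      :path       => request.fullpath
    )
  end
end

The admin page can then poll that table (or tail the logfile) for the latest rows.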
Storing this activity:
Store this data either in the database or in a logfile. The database will give you more flexibility, like showing all actions from one user or all visitors for one page; the logfile solution will give you better performance.
Realtime vs. Oldschool:
As for pulling out your collected data in real time, you have to build your own solution. To do this elegantly (without querying your server once a second to see whether new data has arrived) you'll need another server process. Search for AJAX Push for more info.
Depending on your application I'd ask myself if realtime notifications for this are really necessary (because of all the hassles of setting this up).
To monitor the activity on your site, it should be enough to have a page listing the latest actions and manually refresh it (or refresh it automatically every ten seconds).
You could also try acts_as_scribe: https://github.com/raid5/acts_as_scribe#readme
It works with Rails 3 too.

iPhone Data Best Practices - caching vs remote

I'm developing an iPhone app that uses a user account and a web API to get results (json) from a website. The results are a list of user's events.
Just looking for some advice or strategies - when to cache and when to make an API call... and whether the iPhone SDK has anything built in to handle these scenarios.
When I get the results from the server, they populate an array in a controller. In the UI, you can go from a table listing view, to a view of an individual event result - so two controllers share a reference to the same event object.
What gets tricky is that a user can change the details of an event. In this case I make a copy of the local Event object for the user's changes, in case they make an error. If the API call successfully goes through and updates that event on the server, I take the local changes from the Event copy and set the original Event object to match, using setters.
I have the original controller observing if any change is made to the local Event object so that it can reflect it in the UI.
Is this the right way of doing things? I don't want to make too many API calls to reload data from the server, but after a user makes an update, should I pull the list down again with another API call?
...I want to be careful that my local objects don't become out of sync with the remote.
Any advice is appreciated.
I took a similar approach with an app I built. I simply made a duplicate version of the remote data model with Core Data, and I use etags on the backend to prevent sync issues (in my case, it's okay to create duplicate records).
It sounds like you're taking a good approach to this.
Some time back, I developed an iOS app with almost the same requirement: store data on the server as well as locally, to avoid repeated network calls and so the user can see their information without any delay.
In that app, users could store photos, nodes, check-ins and social media posts, and from all this data the app built a timeline. So we kept everything locally, and whenever the user's phone came into a WiFi zone we started uploading that data to the server and syncing the local and remote databases.
Note that this method works well only when a single user can access the data.

rails: how to detect other clients browsing the same url

What I want to accomplish is:
My site has a "wait URL"; when a browser gets there it is kept waiting until another browser also arrives, and then both are presented with a quiz-like game.
Right now the wait URL polls Rails every second to check whether another player has joined the game. How can I detect, within the Rails framework, that a different client has connected to the same URL?
Since the controller is recreated per request, it doesn't look like the right place; the view certainly isn't; and storing this in the model looks really clumsy.
Also, after the pairing I need to check and compare every answer of the paired users, so that information must somehow be retained as well.
What you're trying to do is share information between users, so the database or memcached are the most sensible places.
Simplest: I'd create an ActiveRecord model, perhaps called Quiz, instances of which people join by virtue of going to a URL, e.g. using default routes:
http://yoursite.com/quizes/join/3434
You'd need an AJAX poller to notify the others; use periodically_call_remote for this -- you could use render :nothing => true by default and render something else only when there is something to report, to keep it efficient. You can also use the polling frequency as a basis for detecting when people leave the quiz (e.g. if the frequency is 1s, assume someone has left if they haven't pinged for 5-10s).
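A rough sketch of the join-and-poll side (the model, actions, Quiz#full? and the players association are assumptions; periodically_call_remote is the old prototype helper, so on newer Rails a small setInterval poll does the same job):

# app/controllers/quizes_controller.rb -- sketch only; Quiz#full?, the players
# association and the routes are assumptions for illustration.
class QuizesController < ApplicationController
  # GET /quizes/join/3434
  def join
    @quiz = Quiz.find(params[:id])
    @quiz.players.create(:session_id => request.session_options[:id]) unless @quiz.full?
  end

  # Polled every second or so by the waiting page.
  def status
    @quiz = Quiz.find(params[:id])
    if @quiz.full?
      render :json => { :ready => true }
    else
      render :nothing => true
    end
  end
end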
Assuming these users are not registered with the site, and so don't have a user id you could store, I would suggest using the session. It is a per-user data store. By default the session is stored in a signed cookie on the user's machine, but you can use ActiveRecord as the session store and could then maybe query that table directly.
Store the URL in the session and search for it later. You can normally only access the current user's session through Rails' session hash, but maybe (untested) if you created a model called Session (or something more specific like WaitingGamer) backed by the sessions table, you could look up the information you need.
I would guess that when using ActiveRecord as the session store, the session data is stored as a serialized hash. Use Marshal to turn it back into a regular hash and find the data you stored in there.
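A sketch of that idea, assuming the classic ActiveRecord session store whose data column is a Base64-encoded Marshal dump (verify the encoding for your Rails version; the model and the 'waiting_url' key are made up):

# Sketch: reading other users' sessions out of the ActiveRecord session store.
# 'waiting_url' is an illustrative key; decoding details vary by Rails version.
require 'base64'

class WaitingGamer < ActiveRecord::Base
  set_table_name 'sessions'  # self.table_name = 'sessions' on newer Rails

  def decoded_data
    Marshal.load(Base64.decode64(read_attribute(:data)))
  end
end

waiting = WaitingGamer.all.select do |s|
  s.decoded_data['waiting_url'] == request.fullpath
end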
I'm not a rails expert, but since all the state resides in your database that would be the place to keep this information.
You could keep a "waiting users" table, and in the "wait URL" view check if the user is already in the table. If not, add him to the table. Then, check if there is another user waiting (maybe there's more than one?) and if so, match them up and delete them from the table.
Another improvement would be to keep a timestamp for each user in the "waiting users" table, which gets updated in the view - this would serve as a keep-alive that will enable you to detect users that left the "wait URL" page or closed the browser.
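A sketch of that waiting-users approach (the WaitingUser model, its columns and the 10-second staleness window are all assumptions for illustration):

# Sketch of the waiting-users table; model name, columns (session_id,
# updated_at) and the staleness window are illustrative.
class WaitingUser < ActiveRecord::Base
  STALE_AFTER = 10.seconds

  # Called from the "wait URL" action on every poll; returns a partner
  # once one is available, nil otherwise.
  def self.pair_or_wait(session_id)
    me = find_or_create_by_session_id(session_id)
    me.touch  # keep-alive timestamp

    partner = where("session_id != ? AND updated_at > ?",
                    session_id, STALE_AFTER.ago).first
    if partner
      delete_all(:session_id => [session_id, partner.session_id])
      partner
    end
  end
end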
