Local storage on Rails - ruby-on-rails

I've built a Rails app, basically a CRUD app for memos/notes.
A note's title must be unique. If a user enters a name that's already taken, a warning message is shown prompting them to choose another.
My question is how to make the latency of this feedback as close to zero as possible. When creating a note, little UX speed bumps like this will get annoying for users quickly.
Of course the main bottleneck is the network. Inspired by Meteor (and minimongo), I was thinking some kind of local storage could be a solution?
I.e. when the app first loads, send JSON to the client with ALL note titles. The app (the front end is AngularJS) could check LocalStorage (or App Cache, Web SQL?) instead of incurring a network round trip. The feedback would be instant.
I've used LocalStorage in the past to augment an app, but in this scenario I'd be seriously depending on it, and I'm not sure how confident I'd be building on something the user might not have. Also, as the number of user notes/memos grows, I have doubts about how feasible it is to send a JSON object down the wire with ALL the note titles. That might get pretty big. On the other hand, MeteorJS seems to do this with no problems.
Has anyone done something similar or have any pointers? Thanks!

I don't know how Meteor handles this, but you're right that storing all note titles in localStorage is not a good idea. Actually, you don't need localStorage here; you can just put them in a JS array, because you only need this data once (when checking a new note's title).
I think there are a couple of possible solutions:
You can change your business requirements and allow non-unique titles. Is there really a necessity for titles to be unique?
You can verify the note title when the user submits the form. In this case you can provide suggestions, so users don't spend time guessing a vacant title.
Or, if titles only need to be unique per user (two users can have the same title for their notes), you really can load all of that user's note titles into a JS array and check uniqueness as the user types.
Or you can send an AJAX request checking title uniqueness as soon as the user finishes typing the title. In this case you can save a few seconds.
Or you can send an AJAX request as soon as the user has typed 3 characters. The request returns all titles that begin with those 3 characters, so you don't need to load all the titles. (A sketch of these last two ideas follows below.)
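For illustration, here is a minimal client-side sketch of those last two ideas combined, in plain JavaScript (AngularJS's $http would work the same way). The /notes/check_title endpoint, the element IDs, and the 300 ms debounce are assumptions, not anything from the original app:

// Debounced title check: look in a locally cached array first,
// then fall back to an AJAX prefix lookup (hypothetical endpoint).
var cachedTitles = [];             // e.g. filled once when the form loads
var debounceTimer = null;

function onTitleInput(title, showWarning) {
  clearTimeout(debounceTimer);
  debounceTimer = setTimeout(function () {
    // Instant feedback from the local cache, if we already have the title
    if (cachedTitles.indexOf(title) !== -1) {
      showWarning(true);
      return;
    }
    if (title.length < 3) return;  // wait for at least 3 characters
    // Fall back to the server: fetch all titles starting with this prefix
    fetch('/notes/check_title?prefix=' + encodeURIComponent(title.slice(0, 3)))
      .then(function (res) { return res.json(); })
      .then(function (titles) {
        cachedTitles = cachedTitles.concat(titles);   // grow the local cache
        showWarning(titles.indexOf(title) !== -1);
      });
  }, 300);  // run 300 ms after the user stops typing
}

// Usage: wire it to the title field (assumed element IDs)
document.querySelector('#note_title').addEventListener('input', function (e) {
  onTitleInput(e.target.value, function (taken) {
    document.querySelector('#title-warning').hidden = !taken;
  });
});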

Related

Hide feature flag when viewed/visited in Ruby

I have a new feature in my Rails project. I need to insert a "New!" flag in its menu so users will notice that a new feature is available. Once the new feature's page is visited, this flag must disappear.
How can I do this with Ruby on Rails?
The absolute simplest way is to look for a sawFeatureX cookie and set it when the page is rendered or the user dismisses the notification.
A more robust solution would be to store the info on the user model in the db, but that ends up giving you a lot of one-off boolean fields which may or may not be what you want.
There are MANY variations. You could use something like HelloBar to point out the new content without inlining it into the menu. So. Many. UX. Variations.
But for a one-time thing, a cookie or db-backed solution seems simple and easy.
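As a rough sketch of the cookie approach, here is one way it could look client-side in JavaScript (the sawFeatureX cookie name, the element ID, and the expiry are assumptions; the same check can just as easily live in a Rails controller or view):

// Hide the "New!" badge if the sawFeatureX cookie is already set,
// and set the cookie once the feature has been seen.
function hasSeenFeature() {
  return document.cookie.split('; ').indexOf('sawFeatureX=1') !== -1;
}

function markFeatureSeen() {
  // Persist for roughly a year
  document.cookie = 'sawFeatureX=1; path=/; max-age=' + 60 * 60 * 24 * 365;
}

if (hasSeenFeature()) {
  var badge = document.querySelector('#new-feature-badge');  // assumed ID
  if (badge) badge.hidden = true;
}

// Call markFeatureSeen() on the new feature's page, or when the badge is clicked.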
I hate this problem.
A cookie is easy, but gross and doesn't scale. You really don't want to pay the price of sending this data back and forth on every request until the end of time.
Saving on the user record seems like a sin against database design.
A separate DB table with all these "I saw feature X" rows seems like overkill, and I hate having something that is just going to grow without bound sitting in my main DB.
You can put it in Redis or memcached, but do you really need to store it in RAM? That's the most expensive place to do this.
I think the ideal solution is something like https://www.prefab.cloud/documentation/once_and_only_once which is a service (I wrote it) that stores this little "Bob saw X" flag off in a database I don't need to manage or care about. It handles caching etc. so that it's as fast as having it in Redis but durable and doesn't get expired.

Persisting data in MVC for the duration of a user's session

Apologies in advance as I'm sure this topic has no doubt been asked before but I couldn't find any post that answers my specific query.
Bearing in mind that I'm new to MVC, this is where I've got to. I've got a project developed under VS 2010 using the MVC 3 framework. It has a search page which consists of 6 fields and a nested model which itself holds around 3 fields.
I can successfully post all this data back to itself, and the data is passed as a model and back again, so the fields keep the data the user supplied.
Before I move on to actually using this search criteria on another view, a thought hit me: I want to keep this search criteria, and possibly even the search results, in memory for the duration of the user's session.
The reasoning behind this is simply to save my users time by:
a) negating the need to keep re-inputting their search criteria regardless of how they enter or leave the search page
b) speed up the user experience by presenting the search results more quickly
The latter isn't as important as the first requirement.
I've done some Google searches and had a look through this site on similar topics. From what I've read, using sessions (which I would typically use if I were developing a PHP site) is a no-no. The reasons I've read for why you shouldn't use sessions seem valid, and I'm happy to go along with that.
But now I'm left scratching my head, wondering what exactly best practice is for achieving this simple goal, something that could be applied to similar situations later down the line in the project.
I also looked at the OutputCache attribute, and it didn't behave as I expected. In a test I set the timeout to 30 seconds. After submitting a search, I clicked the link to my search page to see if the fields would auto-populate; they didn't. But after clicking the search button, the values in the cache were retrieved. I thought I was making progress, but when I tried to submit a new value the old value from the cache came back, i.e. I couldn't actually change my search criteria with the cache enforced. So I've discounted this as an avenue to explore.
The last option seems to be cookies as the most likely candidate, but rightly or wrongly I feel this isn't the best solution. I would have thought the MVC 3 design pattern would have an easier, recommended method of persisting values. I'm sure there is one, but I've just not discovered it yet.
I have started to use jQuery, and again this has been mentioned, but I'm not sure it's the right direction to take either.
So in summary, my question really comes down to what the wider community considers best practice for persisting data in my situation. Efficiency, scalability and resiliency are paramount, as I'll have a large global user base using this web app.
Thanks in advance!
Pete
I'd just use cookies. They're simple to use, you can persist them for as long as you want or have them expire when the user closes their browser, and it doesn't sound like you're storing anything sensitive in them.
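If you go the cookie route, here is a minimal client-side sketch of remembering the search criteria for the browser session (the field names and form ID are assumptions; the cookie could equally be read and written server-side in MVC):

// Save the search form's fields into a session cookie and restore
// them when the page loads (field names are hypothetical).
var FIELDS = ['artist', 'dateFrom', 'dateTo'];   // whatever your 6 fields are

function saveSearchCriteria() {
  var data = {};
  FIELDS.forEach(function (name) {
    var el = document.querySelector('[name="' + name + '"]');
    if (el) data[name] = el.value;
  });
  // No max-age/expires: the cookie lasts only for the browser session
  document.cookie = 'searchCriteria=' + encodeURIComponent(JSON.stringify(data)) + '; path=/';
}

function restoreSearchCriteria() {
  var match = document.cookie.match(/(?:^|; )searchCriteria=([^;]*)/);
  if (!match) return;
  var data = JSON.parse(decodeURIComponent(match[1]));
  FIELDS.forEach(function (name) {
    var el = document.querySelector('[name="' + name + '"]');
    if (el && data[name] !== undefined) el.value = data[name];
  });
}

document.addEventListener('DOMContentLoaded', restoreSearchCriteria);
var searchForm = document.querySelector('#search-form');   // assumed form ID
if (searchForm) searchForm.addEventListener('submit', saveSearchCriteria);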

Searching for a song while using multiple APIs

I'm going to attempt to create an open project which compares the most common MP3 download providers.
This will require a user to enter a track/album/artist name, e.g. Deadmau5; the app will then pull the relevant prices from the APIs.
I have a few questions that some of you may have encountered before:
Should I have one server-side page that requests all the data so it's all loaded simultaneously? If so, how would you deal with timeouts or any other problems that may arise? Or should the page load first and then each price be pulled in one by one (AJAX)? What are your experiences with running a comparison check?
The main feature will be to compare prices, but how can I be sure that the products are the same? I was thinking running time and track numbers, but I would still have to set one source as my primary.
I'm making this a wiki, please add and edit any issues that you can think of.
Thanks for your help. Look out for a future blog!
I would check Amazon first. They will give you a SKU (the barcode on the back of the album; I think Amazon calls it an EAN). If the other providers use this, you can make sure they are looking at the right item.
I would cache all results in a database and expire them after a reasonable time. This way, when you get 100 requests for Britney Spears, you don't have to hammer the other sites and slow down your application.
You should also make sure you are running whatever requests you do server side in parallel. cURL, for instance, allows you to pull multiple URLs at once and assign a user-defined callback. I'd have the callback send back some data so you can update your page as the results come back: the callback returns some data for each URL while the connection is open, and you parse it on the client side.
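As a rough sketch of that "query all providers in parallel, tolerate timeouts" idea, here is one way it could look in client-side JavaScript (the provider URLs and response shapes are invented placeholders):

// Query several price providers in parallel; any provider that times out
// or fails is simply skipped instead of blocking the whole comparison.
var PROVIDERS = [
  { name: 'providerA', url: 'https://api.provider-a.example/prices?q=' },
  { name: 'providerB', url: 'https://api.provider-b.example/prices?q=' }
];

function fetchWithTimeout(url, ms) {
  var controller = new AbortController();
  var timer = setTimeout(function () { controller.abort(); }, ms);
  return fetch(url, { signal: controller.signal })
    .finally(function () { clearTimeout(timer); });
}

function comparePrices(query) {
  var requests = PROVIDERS.map(function (p) {
    return fetchWithTimeout(p.url + encodeURIComponent(query), 5000)
      .then(function (res) { return res.json(); })
      .then(function (prices) { return { name: p.name, prices: prices }; });
  });
  // allSettled: one slow or broken provider doesn't sink the others
  return Promise.allSettled(requests).then(function (results) {
    return results
      .filter(function (r) { return r.status === 'fulfilled'; })
      .map(function (r) { return r.value; });
  });
}

comparePrices('Deadmau5').then(function (results) {
  console.log(results);  // render the comparison table from these
});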

Why would Google Search use client-side URL parameters?

Yesterday morning I noticed Google Search was using hash parameters:
http://www.google.com/#q=Client-side+URL+parameters
which seems to be the same as the more usual search (with search?q=Client-side+URL+parameters). (It seems they are no longer using it by default when doing a search using their form.)
Why would they do that?
More generally, I see hash parameters cropping up on a lot of web sites. Is it a good thing? Is it a hack? Is it a departure from REST principles? I'm wondering if I should use this technique in web applications, and when.
There's a discussion by the W3C of different use cases, but I don't see which one would apply to the example above. They also seem undecided about recommendations.
Google has many live experimental features that are turned on/off based on your preferences, location and other factors (probably random selection as well.) I'm pretty sure the one you mention is one of those as well.
What happens in the background when a hash is used instead of a query string parameter is that the page queries the "real" URL (http://www.google.com/search?q=hello) using JavaScript and then modifies the existing page with the content. This appears much more responsive to the user, since the page does not have to reload entirely. The reason for the hash is so that browser history and state are maintained. If you go to http://www.google.com/#q=hello you'll find that you actually get the search results for "hello" (even though your browser is really only requesting http://www.google.com/). With JavaScript turned off it wouldn't work, however, and you'd just get the Google front page.
Hashes are appearing more and more as dynamic web sites are becoming the norm. Hashes are maintained entirely on the client and therefore do not incur a server request when changed. This makes them excellent candidates for maintaining unique addresses to different states of the web application, while still being on the exact same page.
I have been using them myself more and more lately, and you can find one example here: http://blixt.org/js -- If you have a look at the "Hash" library on that page, you'll see my implementation of supporting hashes across browsers.
Here's a little guide for using hashes for storing state:
How?
Maintaining state in hashes implies that your application (I'll call it application since you generally only use hashes for state in more advanced web solutions) relies on JavaScript. Without JavaScript, the only function of hashes would be to tell the browser to find content somewhere on the page.
Once you have implemented some JavaScript to detect changes to the hash, the next step would be to parse the hash into meaningful data (just as you would with query string parameters.)
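For illustration, here is a minimal sketch of that pattern in plain JavaScript (not the library linked above; the /search endpoint and element ID are assumptions): listen for hash changes, parse the hash like a query string, and update part of the page.

// Parse the hash ("#q=hello&tab=images") into an object, much like
// query string parameters, and react whenever it changes.
function parseHash() {
  var params = {};
  window.location.hash.replace(/^#\??/, '').split('&').forEach(function (pair) {
    if (!pair) return;
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  });
  return params;
}

function onHashChange() {
  var state = parseHash();
  if (state.q) {
    // Load results without a full page reload (endpoint is an assumption)
    fetch('/search?q=' + encodeURIComponent(state.q))
      .then(function (res) { return res.text(); })
      .then(function (html) {
        document.querySelector('#results').innerHTML = html;
      });
  }
}

window.addEventListener('hashchange', onHashChange);
onHashChange();  // also handle the hash present on the initial page load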
Why?
Once you've got the state in the hash, it can be modified by your code (or your user) to represent the current state in your application. There are many reasons for why you would want to do this.
One common case is when only a small part of a page changes based on a variable, and it would be inefficient to reload the entire page to reflect that change (Example: You've got a box with tabs. The active tab can be identified in the hash.)
Other cases are when you load content dynamically in JavaScript, and you want to tell the client what content to load (Example: http://beta.multifarce.com/#?state=7001, will take you to a specific point in the text adventure.)
When?
If you have a look at my "JavaScript realm" you'll see a borderline-overkill case. I did it simply because I wanted to cram as much JavaScript dynamics into that page as possible. In a normal project I would be conservative about when to do this, and only do it when you will see positive changes in one or more of the following areas:
User interactivity
Usually the user won't see much difference, but the URLs can be confusing
Remember loading indicators! Loading content dynamically can be frustrating to the user if it takes time.
Responsiveness (time from one state to another)
Performance (bandwidth, server CPU)
No JavaScript?
Here comes a big deterrent. While you can safely rely on 99% of your users having a browser capable of using your page with hashes for state, there are still many cases where you simply can't rely on this. Search engine crawlers, for example. While Google is constantly working to make their crawler work with the latest web technologies (did you know that they index Flash applications?), it still isn't a person and can't make sense of some things.
Basically, you're at a crossroads between compatibility and user experience.
But you can always build a road in between, which of course requires more work. In less metaphorical terms: implement both solutions, so that there is a server-side URL for every client-side URL that outputs relevant content. For compatible clients it would redirect to the hash URL. This way, Google can index the "hard" URLs, and when users click them, they get the dynamic state stuff!
Recently, Google also stopped serving direct links in search results, offering redirects instead.
I believe both have to do with gathering usage statistics: what searches were performed by the same user, in what sequence, which of the search results the user followed, etc.
P.S. Now that's interesting: the direct links are back. I absolutely remember seeing only redirects there in the last couple of weeks. They are definitely experimenting with something.

Preventing double HTTP POST

I have made a little app for signing up for an event. User input their data and click "sign me in".
Now sometimes people appear twice in the database: the exact same data got inserted two times, very quickly one after the other. This can only mean someone clicked the button twice, which caused two POSTs to happen.
This is a common web problem; credit card apps and forum apps often say: "Clicking once is enough!"
I guess you could solve it by checking whether the exact same data is already there to see if the post is unique, but I wonder if there are other methods.
This of course doesn't apply to ASP.NET WebForms, because the POST doesn't matter as much there.
While JavaScript solutions can disable the submit button after it has been clicked, this will have no effect on those people who have JavaScript disabled. You should always make things work correctly without JavaScript before adding it in, otherwise there's no point as users will still be able to bypass the checks by just disabling JavaScript.
If the page where the form appears is dynamically generated, you can add a hidden field which contains some sort of sequence number, a hash, or anything unique. Then you have some server-side validation that will check if a request with that unique value has already come in. When the user submits the form, the unique value is checked against a list of "used" values. If it exists in the list, it's a dupe request and can be discarded. If it doesn't exist, then add it to the list and process as normal. As long as you make sure the value is unique, this guarantees the same form cannot be submitted twice.
Of course, if the page the form is on is not dynamically generated, then you'll need to do it the hard way on the server-side to check that the same information has not already been submitted.
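As a hedged illustration of that hidden-token approach, here is a small Node/Express sketch with an in-memory set of used tokens (the framework, routes, and field names are assumptions rather than the asker's stack; a real app would persist the tokens):

// Generate a one-time token when the form is rendered, embed it as a
// hidden field, and reject any POST whose token has already been used.
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.urlencoded({ extended: false }));

const usedTokens = new Set();   // in-memory for the sketch only

app.get('/signup', (req, res) => {
  const token = crypto.randomUUID();
  res.send(
    '<form method="post" action="/signup">' +
    '<input type="hidden" name="formToken" value="' + token + '">' +
    '<input type="text" name="name">' +
    '<button type="submit">Sign me in</button>' +
    '</form>'
  );
});

app.post('/signup', (req, res) => {
  const token = req.body.formToken;
  if (!token || usedTokens.has(token)) {
    // Duplicate or missing token: discard the request
    return res.status(409).send('This form was already submitted.');
  }
  usedTokens.add(token);
  // ...insert the signup into the database here...
  res.send('Thanks for signing up!');
});

app.listen(3000);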
Most of the answers so far have been client-side. On the server-side, you can generate a hidden field with a GUID when you first produce the form, and then record that GUID as a submitted form when the post is received. Check it before doing any more processing.
Whenever a page is requested from the server, generate a unique requestToken, save it on the server side with its status marked as NOT processed, and pass it along with the requested page. Then, whenever a form submit happens, get the requestToken from the POSTed data, check its status, and either save the data or take an alternate action.
Most banking applications use this technique to prevent double POSTing, so it's a time-proven and reliable way of preventing double submissions.
A user-side solution is to disable the submit button via JavaScript after the first click.
It has drawbacks, but I see it used often on e-commerce websites.
But it will never replace real server-side validation.
Client side techniques are useful, but you may want to couple it with some server side techniques.
One way to do this is to include a unique token in the form (e.g. a GUID or similar), so that when you come to process the form you can check to see whether the token has already been used, preventing a double submission.
In your case, if you have a table with event visitors, you might include this token as a column.
A client-only solution won't be enough, as stated in many of the answers here. You need to go with a server-side fail-safe.
An often overlooked reason why disabling the submit button doesn't work is that the user can simply refresh the submit target (and click OK on the "are you sure you want to resubmit the POST data?" dialog). Or some browsers may implicitly reload the submitted page when you try to save the page to disk (for example, when saving a hard copy of an order confirmation).
Almost no one has JS disabled.
Think about coding your e-commerce website for the 70-year-old woman who double-clicks every link and button.
All you want to do is add a bit of JavaScript to prevent her from clicking "Order Now" twice.
Yes, check this on the server side too ("be defensive"), but don't code only for that case. For the sake of a better UI, do it on the client side as well.
Here are some scripts that I found:
//
// prevent double-click on submit
//
jQuery('input[type=submit]').click(function () {
    if (jQuery.data(this, 'clicked')) {
        return false;
    } else {
        jQuery.data(this, 'clicked', true);
        return true;
    }
});
and
// Find ALL <form> tags on your page
$('form').submit(function () {
    // On submit disable its submit button
    $('input[type=submit]', this).attr('disabled', 'disabled');
});
None of the solutions so far address a load-balanced setup.
If you have a load balancer, sending a UUID (or any unique value) to one server to store and read back later will not work well if the servers are not aware of each other, because each request can be processed by a different server in a stateless environment. The servers need to read and write to the same place.
If you have multiple servers, you will need a shared cache (like Redis) among them so they read and write the unique value in the same place (which may be an over-engineered solution, but it works).
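For illustration, here is a minimal Node.js sketch of that shared-cache check using the node-redis client (the key prefix, expiry, and formToken field are assumptions):

// Use Redis as the single shared place all app servers check the
// form token against. SET with NX succeeds only for the first server
// that sees a given token, so duplicates are rejected everywhere.
const { createClient } = require('redis');

const redis = createClient();      // defaults to localhost:6379
const ready = redis.connect();     // v4 clients must connect before use

async function isFirstSubmission(formToken) {
  await ready;
  // NX: only set if the key does not exist yet; EX: expire after 10 minutes
  const result = await redis.set('form-token:' + formToken, '1', {
    NX: true,
    EX: 600
  });
  return result === 'OK';          // null means another server got there first
}

// Usage inside a request handler (sketch):
// if (await isFirstSubmission(req.body.formToken)) { /* process */ }
// else { /* duplicate: discard */ }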
Client side alteration is a common technique:
Disable submit button
Change the screen to a "please wait" screen
If the form was modal, changing the screen back to their usual process (this has the benefit of making things look really slick)
But it's not perfect. It all relies on JS being available, and if that's not the case, you'll still get duplicates without back-end duplicate detection.
So my advice is to develop some sort of detection behind the scenes, and then improve your form to stop people with JS from being able to double-submit.
You can track the number of times the form's been submitted and compare it to the number of unique visits to the page with the form on it in the session.
Besides the many good techniques already mentioned, another simple server-side method, which has the drawback of requiring a session, is to have a session variable that is switched off on the first submit.
