Rails app too many parameters - ruby-on-rails

I have a page with a list of items, and above the list I have multiple links that act as filters. Clicking the links fires an Ajax request with a whole host of URL parameters. An example set of params after clicking a few filters:
?letters=a-e&page=1&sort=alphabetically&type=steel
It is all working fine, but I feel the params on the URL are very messy, and the code behind has to do a lot of checking to see which params exist, merge new ones, overwrite existing ones, etc.
Is there a nicer way to accomplish this without URL parameters?
I guess the downside to that would be that a user would not be able to link to a specific filtered view. Or is there a way this could be accomplished too?

You have several options when working with long query strings. If this isn't really causing a problem (like requests dying) then you should ask yourself if it's really worth the effort to switch it to something else.
Use POST Requests
If the length of the query string is causing problems, you can switch to using POST requests instead of GET requests for your filter links. That will keep the filter parameters out of the URL, but your controller can still handle them in the same way.
The link_to helper can be setup to use a different HTTP verb as follows:
link_to("My Filter", filter_path, method: :post)
Make sure you update your routes appropriately if you use this technique.
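For example, a hypothetical Rails 3 route matching the link above (the path and controller names are made-up assumptions):

```ruby
# config/routes.rb -- declare the filter endpoint with the POST verb
# so link_to(..., method: :post) has a matching route.
post "filter" => "items#filter", :as => :filter
```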
Use an Ajax Request to Refresh the Page
If you configure your filters to all be remote (Ajax) links, you can update the filters and refresh the contents of the page without ever changing the URL. This is the basic pattern of the solution:
Send a remote request to the server with the current filter options
Update the page contents based on those filters
Make sure the filters (and remote request) will submit all of the current parameters again
Store Filters in the User's Session
If you store the current filters in the session, whenever the user visits the base page, you can retrieve the stored filters and only display the appropriate information. Your filter links could still be GET requests (including the lengthy query strings), but instead of rendering the page after the filter request, you would redirect back to the main list with no extra query parameters. That would make it appear to the user that the URL never changed, and would even allow you to remember their last filter if they navigate away.
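As a minimal sketch of this approach (plain Ruby; the `:filters` session key, the `apply_filters` helper, and the redirect target are all assumptions, not from your app), the merge-and-overwrite logic can be centralized in one place:

```ruby
# Merge newly submitted filter params into whatever is already stored
# in the session, overwriting any keys the user just changed.
# (:filters is a made-up session key for illustration.)
def apply_filters(session, new_params)
  session[:filters] = (session[:filters] || {}).merge(new_params)
end

# In a Rails controller the filter action would then redirect back to
# the bare list URL, e.g.:
#   apply_filters(session, params.slice(:letters, :page, :sort, :type))
#   redirect_to items_path
```

After the redirect, the index action reads `session[:filters]` instead of the query string, so the URL stays clean.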
Sharing Links
Like you mentioned, sharing links becomes a problem with all of these solutions. You can provide a "share this filter" section on the page to help mitigate that. In that section you would put a URL the user could copy, containing the information needed to recreate the filter. The links could contain the full query string, or perhaps an encoded version of the filter.
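One way to build that "encoded version of the filter" (a sketch only; the `f` parameter name and helper names are assumptions) is to serialize the filter hash and Base64-encode it into a single URL-safe token:

```ruby
require "json"
require "base64"

# Pack a filter hash into one URL-safe token, and unpack it again.
def encode_filters(filters)
  Base64.urlsafe_encode64(JSON.generate(filters))
end

def decode_filters(token)
  JSON.parse(Base64.urlsafe_decode64(token))
end

token     = encode_filters("letters" => "a-e", "type" => "steel")
share_url = "http://example.com/items?f=#{token}"
```

The share link then carries a single opaque parameter instead of the whole query string.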

Related

Visitor using URL that doesn't exist

I had someone visit my site today from a link like this:
www.example.com/pagename.php?_sm_byp=iVVVMsFFLsqWsDL4
Can someone explain how that works, since my actual URL ends with pagename.php, I never allowed a user to append any query, and I don't use session IDs or anything similar?
This is not unusual. Many sites/servers allow (or rather, ignore) arbitrary query components.
For example, you can append ?foo=bar to those URLs and still get an HTTP status 200:
https://stackoverflow.com/?foo=bar
http://en.wikipedia.org/wiki/Stack_Overflow?foo=bar
Now, as they are linked here, users might visit them, so these URLs would appear in your logs. Apart from someone manually appending such a query component, they might also be added by various scripts, e.g. for tracking purposes, or by third-party services that link to your pages (… and sometimes their origin is unknown).
If you don’t want your URLs to work with arbitrary query components, you can configure your backend/server in such a way that it redirects to the URLs without the query components, or respond with 404, or whatever.
If you keep allowing this, but want to prevent bots from indexing your URLs with these unnecessary query components, you can specify the canonical variants of your URLs with the canonical link relation.
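The redirect option can be sketched in plain Ruby (the allowlist of "real" parameters here is a made-up example): rebuild the URL keeping only the query keys your page actually uses, and redirect whenever the result differs from what was requested.

```ruby
require "uri"

# Keys the page actually understands -- everything else gets stripped.
# (This allowlist is an example, not from the original site.)
ALLOWED_PARAMS = %w[page sort].freeze

def canonical_url(url)
  uri  = URI.parse(url)
  kept = URI.decode_www_form(uri.query.to_s)
            .select { |key, _| ALLOWED_PARAMS.include?(key) }
  uri.query = kept.empty? ? nil : URI.encode_www_form(kept)
  uri.to_s
end
```

Your server would then issue a 301 redirect to `canonical_url(request_url)` whenever it differs from the requested URL.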

Play Framework - GET vs POST

New to web development, my understanding is that GET is used to get user input and POST to give them output. If I have a hybrid page, eg. on StackOverflow, if I write a question, it POSTs a page with my question, but also has a text box to GET my answer. In my routes file, what method would the URL associated with my postQgetA() method specify - GET or POST?
From a technical point of view you could use GET alone to perform almost every operation, but...
GET is the most common method; it's used when you e.g. click a link to get data (without modifying it on the server), optionally sending the id of the resource to fetch (if you need the data of a single user).
POST is most often used for sending new data to the server, e.g. from a form, to store it in your database (or process it in any other way).
There are also other request methods (e.g. DELETE, PUT) you can use with Play; however, some of them need to be 'emulated', e.g. via Ajax, as it is not possible to set the method of an ordinary link to, say, DELETE. How to use non-GET/POST methods in Play! is described in that discussion. (Note that Julien suggests there using GET for a delete action; although that is possible, it breaks the semantics.)
There are also other discussions on StackOverflow where you can find examples and suggestions for choosing correct method for your routes.
BTW, if you send a request, let's say a POST, you don't need to perform a separate GET, as sending a request generates a response. In other words, after sending a new question with POST, you first try to save it to the DB and, if there are no errors, render the page and send it back in the response.
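As a hypothetical Play routes sketch (the paths, controller, and action names are assumptions): rendering the question page is a GET, while submitting an answer is a separate POST to its own URL, so no single route needs to be both.

```
# conf/routes (Play 1.x style; names are made-up examples)
GET     /questions/{id}             Questions.show
POST    /questions/{id}/answers     Questions.postAnswer
```

The show action renders the page (including the empty answer form); the form's own method/action attributes point it at the POST route.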

Using GET request instead of POST one

The issue is that there is a form I want to fill. It's submitted via a POST request. But technically I can use only GET requests (pass an URL with GET parameters). And I don't have access to a server where the site (with a form to fill) is located.
I've tried to use POST params in a GET request, but it didn't work. The other thing that came to my mind was to send a GET request to my own server, which would perform the desired POST request. But I need the request to come from my IP, not from the server's...
Can anybody give some piece of advice concerning solving this problem?
Is this homework? Seems odd that you wouldn't be able to use a POST.
Regardless,
The best way to do this would be to override the onclick event of your submit button; the JS function could collect the fields you are looking to submit, then use encodeURIComponent() on the values so they are sent to the web server correctly.
Here you would be able to load the new page with the get?element=value&.... request.
http://www.w3schools.com/jsref/jsref_encodeURIComponent.asp
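If you end up assembling the GET URL server-side instead, Ruby's `URI.encode_www_form` does the same job as `encodeURIComponent` (the field names and values below are made-up examples):

```ruby
require "uri"

# Percent-encode form fields into a query string, the same job
# encodeURIComponent does in the browser.
fields = { "name" => "Ann Smith", "note" => "a&b" }
query  = URI.encode_www_form(fields)
url    = "http://example.com/submit?#{query}"
```

Note that spaces are encoded as `+` and reserved characters like `&` are percent-escaped, so the values survive the trip through the query string.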
Firefox allows you to turn an HTML form (either GET or POST) with at least one regular text input field into a search keyword bookmark, although I'm not sure how easy it is to import and export an individual bookmark from one copy of Firefox to another.

#rails_folks | ''http://twitter.com/#!/user/followers" | Please explain

How would you achieve this route in Rails 3 (or the last stable 2.x version, 2.3.9 or so)?
Explained
I don't really care about the followers action. What I'm really after is how to create the '#!' in the routing.
Also, what's the point of this? Is it syntax or semantics?
Rails doesn't directly receive anything after the #. Instead, the index page reads that value with JavaScript and makes an AJAX request to the server based on the URL after the #. What routes they use internally to handle that AJAX request, I am not sure.
The point is to have a JavaScript-powered interface, where everyone is on the same "page", but the data in the hash allows it to load any custom data on the fly, without loading a whole new page if you decide to view a different user, for instance.
The hash part is never sent to the server, but it is common practice to manipulate the hash to maintain history and bookmarking in AJAX applications. The only problem is that by using a hash to avoid page reloads, search engines are left behind.
If you had a site with some links,
http://example.com/#home
http://example.com/#movies
http://example.com/#songs
Your AJAXy JavaScript application sees the #home, #movies, and #songs, and knows what kind of data it must load from the server and everything works fine.
However, when a search engine tries to open the same URL, the hash is discarded, and it always lands on http://example.com/. As a result the inner pages of your site (home, movies, and songs) never get indexed, because there was no way to reach them.
Google has created an AJAX crawling specification, or more like a contract, that allows sites to take full advantage of AJAX while still reaping the benefits of indexing by search engines. You can read the spec if you want, but the gist of it is a translation process that takes everything appearing after #! and adds it as a query string parameter.
So if your AJAX links were using #!, then a search engine would translate a URL like,
http://example.com/#!movies
to
http://example.com/?_escaped_fragment_=movies
Your server is supposed to look at this _escaped_fragment_ parameter and respond the same way that your AJAX does.
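The translation the spec describes can be sketched as a small helper (plain Ruby, for illustration only; a real implementation would also percent-encode the fragment):

```ruby
# Map a #! URL to the _escaped_fragment_ form a crawler would request,
# per Google's AJAX crawling scheme.
def escaped_fragment_url(url)
  base, bang, fragment = url.partition("#!")
  return url if bang.empty?   # no hash-bang: nothing to translate
  "#{base}?_escaped_fragment_=#{fragment}"
end
```

Your server-side route then treats `params[:_escaped_fragment_]` the way your JavaScript treats the fragment.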
Note that HTML5's History interface now provides methods to change the address bar path without needing to rely upon the hash fragment to avoid page reloads.
Using the pushState method (and listening for the popstate event)
history.pushState(null, "Movies page", "/movies");
you could directly change the URL to http://example.com/movies without causing a page refresh. Search engines can continue to use the same URL that you would be using in that case.
The part after the # in a URI is called the fragment identifier, and it is interpreted by the client, not the server. You cannot route this, because it will never leave the browser.

GET or POST, which method to use for submitting a form?

I'm writing a web form for my Ruby on Rails app. The form has a text field, some checkboxes, a set of radio buttons and two text boxes.
What are the pluses and minuses of using GET over POST and vice versa. I always thought you should use GET for retrieving the form and POST for submitting, but I've just learnt you can do both. Does it really make a difference? Cheers.
<% form_tag({ :action => "create" }, :method => "get") do %>
GET data is always appended to the URL, whereas POST data is submitted in the body of the request. As you note, both can be used to retrieve and send data, but there are some distinctions:
As GET data is sent in the URL, you are limited to the maximum length of the query string. This varies from browser to browser, but is usually at least around 2000 characters (on modern browsers). This usually makes it inappropriate for sending large text fields (e.g. an email body).
As the GET data is exposed in the query string, it can be easily modified by the user.
As the GET data is in the query string, it is easier for users to bookmark a specific page, assuming your page will work with the state variables stored there.
POST is usually more appropriate for sending data, as it suits the nature of the request, mostly because of the limitations above.
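The URL-vs-body distinction can be seen directly with Ruby's Net::HTTP (example.com and the field names are just placeholders):

```ruby
require "uri"
require "net/http"

# GET: the form data rides in the URL's query string.
get_uri = URI("http://example.com/search")
get_uri.query = URI.encode_www_form("q" => "rails forms")

# POST: the same kind of data goes into the request body instead,
# leaving the URL itself clean.
post = Net::HTTP::Post.new(URI("http://example.com/questions"))
post.set_form_data("title" => "GET or POST?")
```

Nothing here is sent over the network; it just shows where each method places the encoded form data.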
The HTML specifications technically define the difference between both as "GET" means that form data is to be encoded (by a browser) into a URL while the "POST" means that the form data is to appear within a message body.
But the usage recommendation is that the "GET" method should be used when, and only when, the form processing is "idempotent". As a simplification, we might say that "GET" is basically for just getting (retrieving) data, whereas "POST" may involve anything, like storing or updating data, or ordering a product, or sending e-mail.
Depends on if you're being semantic or not. Both GET and POST hold intrinsic meaning if you're making an HTML-based API. But in general, GET is used for fetching data, POST is used for submitting data.
The biggest difference is that GET puts all the data in the URL (which can be limited in size), while POST sends it as a data part of the HTTP request. If you allow data entry with GET requests, you're also making a lot of web exploits a lot easier, like CSRF. Someone can just make a pre-filled link to the vulnerable form's action (say, a password change form?), send it to unsuspecting users who click it and unknowingly change their password.
In addition, no browser will warn the user if they refresh a GET page that does data entry (which would make a duplicate entry, if you're not careful), but on a POST, most browsers will show a warning.
I think the other answers covered the main stuff, but I want to add this bit: using GET for critical data such as passwords will expose them more than POST, because the values will be stored in the browser's history, proxy caches, server logs, etc.
