How would you achieve this route in Rails 3 and the last stable 2.x version (2.3.9 or so)?
Explained
I don't really care about the followers action. What I'm really after is how to create '#!' in the routing.
Also, what's the point of this? Is it syntax or semantics?
Rails doesn't directly get anything after the #. Instead, the index page checks that value with JavaScript and makes an AJAX request to the server based on the part of the URL after the #. I'm not sure what routes they use internally to handle that AJAX request.
The point is to have a JavaScript-powered interface, where everyone is on the same "page" but the data in the hash allows it to load any custom data on the fly, without loading a whole new page if you decide to view a different user, for instance.
The hash part is never sent to the server, but it is common practice to manipulate the hash to support history and bookmarking in AJAX applications. The only problem is that by using a hash to avoid page reloads, search engines are left behind.
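As a rough sketch of that client-side mechanism (the /users endpoint and the renderUser function here are assumptions for illustration, not anything Twitter or Rails actually uses): the page reads the fragment, asks the server for the matching data, and redraws in place.

function loadFromHash() {
  // "#!/username" -> "username"; works for a plain "#username" too
  var name = window.location.hash.replace(/^#!?\/?/, "");
  if (!name) return;
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/users/" + encodeURIComponent(name) + ".json"); // hypothetical endpoint
  xhr.onload = function () {
    renderUser(JSON.parse(xhr.responseText)); // hypothetical render function
  };
  xhr.send();
}
window.onhashchange = loadFromHash; // run again whenever the fragment changes
loadFromHash();                     // and once on the initial page load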
If you had a site with some links,
http://example.com/#home
http://example.com/#movies
http://example.com/#songs
Your AJAXy JavaScript application sees the #home, #movies, and #songs, knows what kind of data it must load from the server, and everything works fine.
However, when a search engine tries to open the same URL, the hash is discarded, and it always lands on http://example.com/. As a result, the inner pages of your site (home, movies, and songs) never get indexed, because there was no way to reach them until now.
Google has created an AJAX crawling specification, or more like a contract, that allows sites to take full advantage of AJAX while still reaping the benefits of indexing by search engines. You can read the spec if you want, but the gist of it is a translation process: take everything that appears after the #! and add it as a query string parameter.
So if your AJAX links were using #!, then a search engine would translate a URL like,
http://example.com/#!movies
to
http://example.com/?_escaped_fragment_=movies
Your server is supposed to look at this _escaped_fragment_ parameter and respond the same way that your AJAX does.
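The question is about Rails, but the contract itself is framework-neutral. Purely as an illustration (renderSnapshot and renderShell are made-up names), a server could branch on the parameter like this, here sketched with Node's built-in http module:

var http = require("http");
var url = require("url");

http.createServer(function (req, res) {
  var fragment = url.parse(req.url, true).query._escaped_fragment_;
  if (fragment !== undefined) {
    // Crawler: serve a static HTML snapshot of the state named by the fragment
    res.end(renderSnapshot(fragment)); // hypothetical
  } else {
    // Normal visitor: serve the JS-driven page that reads location.hash itself
    res.end(renderShell()); // hypothetical
  }
}).listen(3000);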
Note that HTML5's History interface now provides methods to change the address bar path without needing to rely upon the hash fragment to avoid page reloads.
Using the pushState method (and listening for the corresponding popstate event),
history.pushState(null, "Movies page", "/movies");
you could directly change the URL to http://example.com/movies without causing a page refresh. Search engines can then use that same URL, since it is a real path on your server.
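A hedged sketch of that pattern (the element id and showMovies are placeholders): push a real path when the user navigates, and listen for popstate so Back/Forward restore the right view.

document.getElementById("movies-link").onclick = function (e) {
  e.preventDefault();                              // skip the full page load
  history.pushState({ page: "movies" }, "", "/movies");
  showMovies();                                    // placeholder: load and render the view
};

window.onpopstate = function (e) {                 // fires on Back/Forward
  if (e.state && e.state.page === "movies") showMovies();
};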
The part after the # in a URI is called the fragment identifier, and it is interpreted by the client, not the server. You cannot route this, because it will never leave the browser.
Related
I have a page with a list of items, and above the list I have multiple links that act as filters. Clicking the links fires an AJAX request with a whole host of URL parameters. An example set of params after clicking a few filters:
?letters=a-e&page=1&sort=alphabetically&type=steel
It is all working fine, but I feel like the params on the URL are very messy, and the code behind has to do a lot of checking to see which params exist, merge new ones, overwrite existing ones, etc.
Is there a nicer way to accomplish this without URL parameters?
I guess the downside would be that a user could not link to a specific filtered view, or is there a way this could be accomplished too?
You have several options when working with long query strings. If this isn't really causing a problem (like requests dying), then you should ask yourself whether it's really worth the effort to switch to something else.
Use POST Requests
If the length of the query string is causing problems, you can switch to using POST requests instead of GET requests for your filter links. That will keep the filter parameters out of the URL, but your controller can still deal with the parameters in the same way.
The link_to helper can be set up to use a different HTTP verb as follows:
link_to("My Filter", filter_path, method: :post)
Make sure you update your routes appropriately if you use this technique.
Use an Ajax Request to Refresh the Page
If you configure your filters to all be remote (AJAX) links, you can update the filters and refresh the contents of the page without ever changing the URL. This is the basic pattern of the solution (a sketch follows the list):
Send a remote request to the server with the current filter options
Update the page contents based on those filters
Make sure the filters (and remote request) will submit all of the current parameters again
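A minimal framework-neutral sketch of that pattern (the /items endpoint and the "items" container are assumptions):

var filters = {}; // current filter state, e.g. { letters: "a-e", page: "1" }

function applyFilter(name, value) {
  filters[name] = value;                                  // merge or overwrite one filter
  var query = new URLSearchParams(filters).toString();    // re-submit all current params
  fetch("/items?" + query)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.getElementById("items").innerHTML = html;  // refresh only the list
    });
}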
Store Filters in the User's Session
If you store the current filters in the session, whenever the user visits the base page, you can retrieve the stored filters and only display the appropriate information. Your filter links could still be GET requests (including the lengthy query strings), but instead of rendering the page after the filter request, you would redirect back to the main list with no extra query parameters. That would make it appear to the user that the URL never changed, and would even allow you to remember their last filter if they navigate away.
Sharing Links
As you mentioned, sharing links becomes a problem with all of these solutions. You can provide a "share this filter" section on the page to help mitigate that: in it, put a URL the user can copy that includes the information needed to recreate the filter. The links could contain the full query string, or perhaps an encoded version of the filter.
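For instance (assuming the same filters object as in the earlier sketch, and an assumed "share-box" input field), the share section could be filled in like this:

function shareableUrl(filters) {
  var query = new URLSearchParams(filters).toString();
  return location.origin + "/items?" + query;          // full query-string version
  // or, if you prefer something shorter and opaque:
  // return location.origin + "/items?f=" + btoa(query);
}

document.getElementById("share-box").value = shareableUrl(filters);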
We have a system that was created using MVC 3 and has a LOT of AJAX calls from our views.
There are a number of performance issues (not linked to the ajax) so we are looking at potentially starting from scratch.
Primarily the screens are setup screens so we get some data back, edit and save.
I'm having a difficult time finding any worthwhile material on when to use AJAX and when to stick with good old full-page posts.
Does anyone have any input on a good rule of thumb or links as to when to use what...?
If we did go down the rewrite route, it would be using MVC 4.
For a fast and slick UI response, use AJAX, as it does not reload the page each time it performs an operation.
Use GET requests for viewing information, and POST requests for editing/saving.
Now, AJAX requests can use either GET or POST. GET requests are for viewing something without editing it; POST requests are for when you wish to edit something, or when you don't want to expose sensitive data. With POST, the data of the request goes in the body; with GET, the data is appended to the URL.
E.g. a GET request:
GET /blog/?name1=value1&name2=value2 HTTP/1.1
Host: example.com
And a POST request:
POST /blog/ HTTP/1.1
Host: example.com

name1=value1&name2=value2
Moreover, a user login page, which carries sensitive information, will be authenticated using a POST request, whereas queries on Google are GET requests, and we can verify this by seeing our search terms appended to the google.com URL.
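The same distinction carries over to AJAX calls; a quick sketch with fetch (the /blog/ path is just the example above):

fetch("/blog/?name1=value1&name2=value2");          // GET: data rides in the URL

fetch("/blog/", {                                   // POST: data rides in the body
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: "name1=value1&name2=value2"
});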
Use AJAX when your boss says the screen flickers.
This is largely a question of usability and behavior, so it's subjective. You have to ask yourself (or your users): how do you want the page (or its elements) to behave? If you don't care about a round trip, then a standard post/redirect/get may be in order. If you want to keep the current page state after an operation, then an AJAX call may be a better choice.
They both do the same thing; they just do it in different ways. You have to decide which way you want it to behave.
I'd say partial posts (AJAX) make sense when the result is that your page does not change significantly (if you're staying on the same page and only posting a small thing and maybe rebuilding a small segment of the page).
If you're rebuilding the entire page with new data, or obviously if you're redirecting elsewhere, a full post makes sense.
AJAX calls are significantly smaller and faster, can still use the same server-side features (session, authentication, etc.), and can still return partial views based on a model, so you don't even have to lose your MVC pattern. It's a little more JavaScript, but if all you're doing is making a small post and expecting a small change to your page, AJAX can dramatically improve the user experience while reducing bandwidth.
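A hedged sketch of that "small post, small change" case (the form, endpoint, and container ids are assumptions): the server returns a rendered partial, and only the affected segment is swapped.

var form = document.getElementById("comment-form");          // assumed form
form.onsubmit = function (e) {
  e.preventDefault();                                        // no full-page post
  fetch("/comments", { method: "POST", body: new FormData(form) })
    .then(function (res) { return res.text(); })             // server returns an HTML partial
    .then(function (html) {
      document.getElementById("comments").innerHTML = html;  // swap just this segment
    });
};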
I had someone visit my site today from a link like this:
www.example.com/pagename.php?_sm_byp=iVVVMsFFLsqWsDL4
Can someone explain to me how that works, since my actual URL ends with pagename.php and I never allow any query parameters, session IDs, or anything similar?
This is not unusual. Many sites/servers allow (or rather, ignore) arbitrary query components.
For example, you can append ?foo=bar to those URLs and still get an HTTP status 200:
https://stackoverflow.com/?foo=bar
http://en.wikipedia.org/wiki/Stack_Overflow?foo=bar
Now as they are linked here, users might visit them, so these URLs would appear in their logs. Apart from manually appending such a query component, they might also be added by various scripts, e.g. for tracking purposes, or third-party services that link to your pages (… and sometimes their origin is unknown).
If you don’t want your URLs to work with arbitrary query components, you can configure your backend/server in such a way that it redirects to the URLs without the query components, or respond with 404, or whatever.
If you keep allowing this but want to prevent bots from indexing your URLs with these unnecessary query components, you can specify the canonical variants of your URLs with the canonical link relation.
So I am working in a .tpl file, meaning I am open to JS, HTML, and PHP answers. What I want: whenever a person refreshes the page, experiences a change in the URL, or exits the browser, my site should take an action based on this change of state. Basically, when they leave that specific page of mine in any way, I would call a function. The reason I want this is that I am saving an editable image on my site, and whenever they leave the page, I want the image they created to be autosaved.
This task splits into client-side and server-side parts. On the client side you should bind to the interesting browser events, triggering background HTTP requests to some service URLs of your website; this is probably JS. On the server side, you should provide the corresponding reaction to these requests, which is probably PHP.
Since these service URLs will be called intermittently by various visitors, be sure to keep an eye on which request came from which client's window; PHP sessions should help you.
I'd propose working on this in stages: first get the saving machinery working, binding everything to explicit big buttons on the page (page close, URL change, etc.), then replace each button with a binding to the exact JS event. Keep in mind the differences among browsers.
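On the client side, a minimal sketch of that event binding (/autosave.php and getImageData are assumptions): navigator.sendBeacon suits this case, since it queues the request even while the page is unloading.

function autosave() {
  navigator.sendBeacon("/autosave.php", getImageData()); // hypothetical endpoint and data getter
}

window.addEventListener("beforeunload", autosave); // refresh, close, navigate away
window.addEventListener("hashchange", autosave);   // a "change in the url" within the page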
Yesterday morning I noticed Google Search was using hash parameters:
http://www.google.com/#q=Client-side+URL+parameters
which seems to be the same as the more usual search (with search?q=Client-side+URL+parameters). (It seems they are no longer using it by default when doing a search using their form.)
Why would they do that?
More generally, I see hash parameters cropping up on a lot of web sites. Is it a good thing? Is it a hack? Is it a departure from REST principles? I'm wondering if I should use this technique in web applications, and when.
There's a discussion by the W3C of different use cases, but I don't see which one would apply to the example above. They also seem undecided about recommendations.
Google has many live experimental features that are turned on and off based on your preferences, location, and other factors (probably random selection as well). I'm pretty sure the one you mention is one of those.
What happens in the background when a hash is used instead of a query string parameter is that the page queries the "real" URL (http://www.google.com/search?q=hello) using JavaScript, then modifies the existing page with the content. This feels much more responsive to the user, since the page does not have to reload entirely. The reason for the hash is so that browser history and state are maintained. If you go to http://www.google.com/#q=hello, you'll find that you actually get the search results for "hello" (even though your browser is really only requesting http://www.google.com/). With JavaScript turned off, however, it wouldn't work, and you'd just get the Google front page.
Hashes are appearing more and more as dynamic web sites are becoming the norm. Hashes are maintained entirely on the client and therefore do not incur a server request when changed. This makes them excellent candidates for maintaining unique addresses to different states of the web application, while still being on the exact same page.
I have been using them myself more and more lately, and you can find one example here: http://blixt.org/js -- If you have a look at the "Hash" library on that page, you'll see my implementation of supporting hashes across browsers.
Here's a little guide for using hashes for storing state:
How?
Maintaining state in hashes implies that your application (I'll call it an application, since you generally only use hashes for state in more advanced web solutions) relies on JavaScript. Without JavaScript, the only function of hashes would be to tell the browser to find content somewhere on the page.
Once you have implemented some JavaScript to detect changes to the hash, the next step is to parse the hash into meaningful data (just as you would with query string parameters).
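A small sketch of that parsing step, turning a hash like "#?state=7001&tab=map" into an object (the "#?" prefix follows the multifarce example later in this answer):

function parseHash() {
  var raw = window.location.hash.replace(/^#\??/, ""); // strip "#" or "#?"
  var data = {};
  raw.split("&").forEach(function (pair) {
    if (!pair) return;
    var parts = pair.split("=");
    data[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || "");
  });
  return data;
}

window.onhashchange = function () {
  var state = parseHash(); // e.g. { state: "7001", tab: "map" }
  // ...update the application to match `state`...
};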
Why?
Once you've got the state in the hash, it can be modified by your code (or your user) to represent the current state in your application. There are many reasons for why you would want to do this.
One common case is when only a small part of a page changes based on a variable, and it would be inefficient to reload the entire page to reflect that change (Example: You've got a box with tabs. The active tab can be identified in the hash.)
Other cases are when you load content dynamically in JavaScript, and you want to tell the client what content to load (Example: http://beta.multifarce.com/#?state=7001, will take you to a specific point in the text adventure.)
When?
If you have a look at my "JavaScript realm" you'll see a borderline-overkill case. I did it simply because I wanted to cram as much JavaScript dynamics into that page as possible. In a normal project I would be conservative about when to do this, and only do it when you will see positive changes in one or more of the following areas:
User interactivity
Usually the user won't see much difference, but the URLs can be confusing
Remember loading indicators! Loading content dynamically can be frustrating to the user if it takes time.
Responsiveness (time from one state to another)
Performance (bandwidth, server CPU)
No JavaScript?
Here comes a big deterrent. While you can safely rely on 99% of your users to have a browser capable of using your page with hashes for state, there are still many cases where you simply can't rely on this. Search engine crawlers, for example. While Google is constantly working to make their crawler work with the latest web technologies (did you know that they index Flash applications?), it still isn't a person and can't make sense of some things.
Basically, you're at a crossroads between compatibility and user experience.
But you can always build a road in between, which of course requires more work. In less metaphorical terms: implement both solutions, so that there is a server-side URL for every client-side URL that outputs relevant content. For compatible clients, it would redirect them to the hash URL. This way, Google can index "hard" URLs, and when users click them, they get the dynamic state stuff!
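One hedged way to build that road (assuming the server renders real content at paths like /movies): let the server serve crawlers directly, and bounce script-capable browsers over to the hash URL.

// Runs on the server-rendered page, e.g. at /movies. Crawlers never execute
// this, so they index the real content; browsers get sent to the dynamic URL.
var path = window.location.pathname;              // e.g. "/movies"
if (path !== "/") {
  window.location.replace("/#!" + path.slice(1)); // -> "/#!movies", no extra history entry
}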
Recently Google also stopped serving direct links in search results, offering redirects instead.
I believe both have to do with gathering usage statistics: what searches were performed by the same user, in what sequence, which of the search results the user followed, etc.
P.S. Now, that's interesting: direct links are back. I distinctly remember seeing only redirects there for the last couple of weeks. They are definitely experimenting with something.