Ember search engine - ruby-on-rails

I am creating an Ember app that has a search engine built into it, say for houses. My results change a lot as houses are found/added or removed/sold, so my search results change all the time.
I also have a page for each house with a "similar houses" section that shows ever-changing houses similar to that one.
I am trying to find the best way to make this app crawlable by search engines.
I could, like Discourse, use noscript tags for each page, but as my house pages can all hold different information and structure depending on the agent/seller, this would be a lot more work, basically duplicating what the client is doing!
I could go down the PhantomJS route, cache all my pages, and serve them via the _escaped_fragment_ method, but I am thinking this would be a resource-intensive approach with content changing so much. Also, with my house pages having similar-houses sections that can change depending on the user/location etc., I am not sure how to cache those sections.
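For what it's worth, the PhantomJS side of that approach is small. Something like this sketch would capture a rendered page for caching (the URL is a placeholder, and the render delay is a guess):

```javascript
// snapshot.js - run with: phantomjs snapshot.js
// A sketch only; the URL is a placeholder and the 2s delay is a guess
// at how long the Ember app needs to finish rendering.
var page = require('webpage').create();

page.open('http://example.com/houses/42', function (status) {
  if (status !== 'success') {
    console.log('Failed to load page');
    phantom.exit(1);
  } else {
    setTimeout(function () {
      console.log(page.content); // the rendered HTML, ready to cache
      phantom.exit();
    }, 2000);
  }
});
```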
Another method I am toying with is to convert my page/section templates into server-side templates so I can render them on the server. For example, when a customer creates a house page via my Ember app in the format they require, they click publish and I convert the rendered HTML into a server-side template with placeholders etc. for the data.
Can anyone help with this? Any ideas/suggestions/advice would be great!

I think you've kind of answered your own question. This is all about trade-offs and finding the solution that is best for your particular case; there is no silver bullet. Personally I go with something close to the noscript route, but instead of putting things inside noscript tags, I put them in regular divs with a class of no-ember, which are visible by default. Then, when the document is ready, I test whether the client supports push state. If so, I initialize my Ember app and hide the no-ember divs. If not, all of the no-ember divs stay visible so that the client can see and use the content as normal.
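A minimal sketch of that bootstrapping logic, assuming jQuery and an Ember app that was created with deferReadiness() (the App name is illustrative):

```javascript
// On DOM ready, decide whether to boot the Ember app or leave the
// static fallback content visible. A sketch; assumes the app called
// App.deferReadiness() when it was defined.
$(function () {
  var supportsPushState = !!(window.history && window.history.pushState);

  if (supportsPushState) {
    $('.no-ember').hide();   // hide the static fallback markup
    App.advanceReadiness();  // let the Ember app take over
  }
  // Otherwise do nothing: the .no-ember divs stay visible, so crawlers
  // and old browsers get plain, indexable HTML.
});
```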

Related

Persisting data in MVC for the duration of a user's session

Apologies in advance, as I'm sure this topic has been asked about before, but I couldn't find any post that answers my specific query.
Bearing in mind that I'm new to MVC, this is where I have got to. I've got a project developed under VS 2010 using the MVC 3 framework, with a search page which consists of 6 fields and a nested model which itself holds around 3 fields.
I can successfully post all this data back to itself, and the data is passed as a model and back again, so the fields keep the data the user has supplied.
Before I move on to actually using this search criteria on another view, a thought hit me: I want to keep this search criteria, and possibly even the search results, in memory for the duration of the user's session.
The reasoning behind this is simply to save my users time by:
a) negating the need to keep re-entering their search criteria, regardless of how they enter or leave the search page
b) speeding up the user experience by presenting the search results more quickly
The latter isn't as important as the first requirement.
I've done some Google searches and had a look through this site for similar topics. From what I've read, using sessions (which I would typically use if developing a PHP site) is a no-no. The reasons I've read for why you shouldn't use sessions seem valid, and I'm happy to go along with them.
But now I'm left scratching my head, wondering what exactly is best practice for achieving this simple goal, something that could be applied to similar situations later down the line in the project.
I also looked at the OutputCache method, and it didn't behave as I expected. In a test I set the timeout to 30 seconds. After submitting a search, I clicked the link to my search page to see if the fields would auto-populate; they didn't. But on clicking the search button, the values in the cache were retrieved. I thought I was making progress, but when I tried to submit a new value, the old value from the cache came back, i.e. I couldn't actually change my search criteria with the cache enforced. So I've discounted this as an avenue to explore.
The last option seems to suggest cookies as the most likely candidate, but, rightly or wrongly, I feel this isn't the best solution. I would have thought the MVC 3 design pattern would have an easier, recommended method of persisting values. I'm sure there is one, but I've just not discovered it yet.
I have started to use jQuery, and again this has been mentioned, but I'm not sure it's the right direction to take either.
So, in summary, my question really comes down to what the wider community considers best practice for persisting data in my situation. Efficiency, scalability, and resiliency are paramount, as I'll have a large global user base using this web app.
Thanks in advance!
Pete
I'd just use cookies. They're simple to use, you can persist them for as long as you want or have them expire when the user closes their browser, and it doesn't sound like you're storing anything sensitive in them.
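Even a purely client-side version of this is only a few lines. A sketch, assuming the criteria are serialized to JSON in a session cookie (the cookie name and functions are illustrative):

```javascript
// Save the search criteria when the form is submitted. No expires
// attribute, so the cookie dies when the browser closes. A sketch;
// the cookie name is illustrative.
function saveCriteria(criteria) {
  document.cookie = 'searchCriteria=' +
    encodeURIComponent(JSON.stringify(criteria)) + '; path=/';
}

// Read the criteria back when the search page loads, so the fields
// can be re-populated without a round trip to the server.
function loadCriteria() {
  var match = document.cookie.match(/(?:^|;\s*)searchCriteria=([^;]*)/);
  return match ? JSON.parse(decodeURIComponent(match[1])) : null;
}
```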

Filtering Database results in Ruby on Rails

I have created a rails application where users can create and apply for jobs.
As you can imagine many of these jobs come from various countries/cities and have different salaries and industries etc. I would like to create a system that will allow my users to filter through all the options to find what they're most interested in.
I would like to use a combination of radio buttons and a salary slider bar (probably jQuery) in my view to select the results that show. I would then like the page to update without a full refresh (via AJAX) when the user hits a "filter results" button.
A good example of the kind of filtering system I would like to achieve can be seen at WIWT.com: just click their top "filters" button to see an excellent filtering system.
It would be great to know where to get started on this, and whether there are any easy-to-use gems already out there. Also, if anyone could point me in the direction of a thorough tutorial, that would be great, as much of what I have found has been fairly incomplete and based around has_scope.
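To make the idea concrete, the client side I have in mind is roughly this (a sketch; the endpoint and element ids are placeholders):

```javascript
// When the user hits "Filter results", send the current form values to
// the server and swap in the returned partial, with no full page reload.
// A sketch; the /jobs/filter endpoint and element ids are placeholders.
$('#filter-button').on('click', function (e) {
  e.preventDefault();
  $.get('/jobs/filter', $('#filter-form').serialize(), function (html) {
    $('#results').html(html); // server renders the filtered jobs partial
  });
});
```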
Thanks!

How can I support user-edited content in a complex web application?

I'm building a web site (using ASP.NET, MVC 3, Razor) and I'm not using an off-the-shelf CMS. This is because I evaluated a lot of existing CMSs and found them all to have a massive learning curve and tons of features I didn't need, and they force you into a page-oriented model. By "page-oriented model", I mean that you can specify a general page layout and stylesheets, but the object the user can edit is a whole page, which displays, for example, in a central panel, and maybe you can customize the sidebars as well.
But this site is very design-centric, and needs to be much more fluid and granular than that. By "design-centric", I mean that the site was built in Photoshop by a graphic designer, and there is heavy use of images and complex styling to map the design to HTML/CSS/JS. Also, every page on the site is totally different. There are also UI elements such as accordion panels, in which we need the user to be able to edit the content of each panel, but certainly not the jQuery+HTML that powers the accordion. The users are subject-matter experts but very non-technical.
So I'll have a page with lots of complex layout and styling, which I don't want the user to access, but within this there will be, say, a div containing text that I would like the user to be able to edit.
How can I best accomplish this?
So far, I'm implementing this by having snippets: little units of HTML, stored in external files, that the user can edit. In run mode, these are loaded and displayed inline (with a little "Edit This Content" button if you're logged in and have permissions). If you click the Edit button, you get a little WYSIWYG editing screen where you can edit and save changes. This way I control all the messy stuff and put in little placeholders for user-editable content. But this isn't entirely simple for me, and I'm wondering if there's a better way.
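Concretely, the run-mode side looks roughly like the following sketch; the data-snippet attribute, the /snippets routes, and openWysiwygEditor() are all hypothetical names:

```javascript
// Load each snippet into its placeholder div, then wire up editing for
// users with permission. A sketch; data-snippet, the /snippets routes,
// and openWysiwygEditor() are hypothetical names.
$('[data-snippet]').each(function () {
  var holder = $(this);
  var name = holder.data('snippet');

  $.get('/snippets/' + name, function (html) {
    holder.html(html);

    if (window.userCanEdit) { // flag emitted server-side for editors
      $('<button>Edit This Content</button>')
        .insertAfter(holder)
        .on('click', function () {
          // openWysiwygEditor stands in for whatever editor widget is
          // used; it calls back with the edited HTML.
          openWysiwygEditor(holder.html(), function (newHtml) {
            $.post('/snippets/' + name, { html: newHtml }, function () {
              holder.html(newHtml); // show the saved version in place
            });
          });
        });
    }
  });
});
```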
Don't mean to necro this, but it seems to be the most relevant question to what I'm currently researching. I recently built something similar to what you described above, but I'm pulling data from a database instead of static files. For each page (like /about or /contact), the controller pulls that page's data from the DB as a JSON string of key/value pairs, where the key is the placeholder tag and the value is the text. After deserializing, I simply populate a list and assign it to a ViewBag, then in the CSHTML I use ViewBag.List.Keyname to grab the text.
I have a small admin control panel which allows me to modify the text in the database. Having little hover-overs like you do is a great idea though!
Well, I stuck with my original plan:
So far, I'm implementing this by having snippets: little units of HTML, stored in external files, that the user can edit. In run mode, these are loaded and displayed inline (with a little "Edit This Content" button if you're logged in and have permissions). If you click the Edit button, you get a little WYSIWYG editing screen where you can edit and save changes. This way I control all the messy stuff and put in little placeholders for user-editable content. But this isn't entirely simple for me, and I'm wondering if there's a better way.
It works reasonably well for now.

Loading data from database by Ajax - Ruby on Rails app

On some websites, all the comments or other data from the DB are hidden by default. When the user clicks a link like "Display comments", all the comments are dynamically selected from the database and placed under the content. This must be great for MySQL performance, because the content is generated only when the user actually needs it. I would like to implement this in my app.
The one idea I've got so far is a remote action with @comments = Content.comments and then page.insert_html in an RJS template. Is this a good idea, or should I choose a different way?
The decision depends purely on the application you are developing. In the case of Stack Overflow, for example, it would not make sense to show only the question with a "show answers" link, but for a blog post it may be fine.
In the situation above, I don't think removing the comments from the content's show page will bring much of a performance improvement. You can achieve the same functionality with JavaScript: hide the content on page load and show it on client request.
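The lazy-loading variant is a few lines with jQuery. A sketch, assuming a hypothetical route that returns the comments as rendered HTML:

```javascript
// Comments are not rendered on the initial page load. When the user
// clicks "Display comments", fetch them once and insert them under the
// content. A sketch; the route and element ids are hypothetical.
$('#show-comments').on('click', function (e) {
  e.preventDefault();
  var link = $(this);
  $.get('/contents/' + link.data('content-id') + '/comments', function (html) {
    $('#comments').html(html).show();
    link.hide(); // no need to offer the link twice
  });
});
```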

Why would Google Search use client-side URL parameters?

Yesterday morning I noticed Google Search was using hash parameters:
http://www.google.com/#q=Client-side+URL+parameters
which seems to be the same as the more usual search (with search?q=Client-side+URL+parameters). (It seems they are no longer using it by default when doing a search using their form.)
Why would they do that?
More generally, I see hash parameters cropping up on a lot of web sites. Is it a good thing? Is it a hack? Is it a departure from REST principles? I'm wondering if I should use this technique in web applications, and when.
There's a discussion by the W3C of different use cases, but I don't see which one would apply to the example above. They also seem undecided about recommendations.
Google has many live experimental features that are turned on/off based on your preferences, location and other factors (probably random selection as well.) I'm pretty sure the one you mention is one of those as well.
What happens in the background when a hash is used instead of a query string parameter is that the page queries the "real" URL (http://www.google.com/search?q=hello) using JavaScript, then modifies the existing page with the content. This appears much more responsive to the user, since the page does not have to reload entirely. The reason for the hash is so that browser history and state are maintained. If you go to http://www.google.com/#q=hello you'll find that you actually get the search results for "hello" (even though your browser is really only requesting http://www.google.com/). With JavaScript turned off, however, it wouldn't work, and you'd just get the Google front page.
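The mechanics are easy to sketch. Assuming jQuery for brevity (the endpoint and container are placeholders, not Google's actual code):

```javascript
// Whenever the hash changes, fetch the "real" URL in the background and
// patch the page instead of doing a full reload. A sketch; the /search
// endpoint and #results container are placeholders.
function onHashChange() {
  var match = window.location.hash.match(/^#q=(.*)$/);
  if (!match) return;

  $.get('/search?q=' + match[1], function (html) {
    $('#results').html(html); // swap only the results area
  });
}

$(window).on('hashchange', onHashChange);
onHashChange(); // handle a hash already present on load, e.g. /#q=hello
```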
Hashes are appearing more and more as dynamic web sites are becoming the norm. Hashes are maintained entirely on the client and therefore do not incur a server request when changed. This makes them excellent candidates for maintaining unique addresses to different states of the web application, while still being on the exact same page.
I have been using them myself more and more lately, and you can find one example here: http://blixt.org/js -- If you have a look at the "Hash" library on that page, you'll see my implementation of supporting hashes across browsers.
Here's a little guide for using hashes for storing state:
How?
Maintaining state in hashes implies that your application (I'll call it an application, since you generally only use hashes for state in more advanced web solutions) relies on JavaScript. Without JavaScript, the only function of hashes would be to tell the browser to find content somewhere on the page.
Once you have implemented some JavaScript to detect changes to the hash, the next step would be to parse the hash into meaningful data (just as you would with query string parameters.)
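Parsing is the same exercise as parsing a query string. A minimal sketch:

```javascript
// Turn "#?state=7001&tab=info" (or plain "#a=1&b=2") into an object,
// just as you would with query string parameters. A sketch.
function parseHash(hash) {
  var params = {};
  hash.replace(/^#\??/, '').split('&').forEach(function (pair) {
    if (!pair) return;
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] =
      decodeURIComponent(parts[1] || '');
  });
  return params;
}

// parseHash(window.location.hash) => e.g. { state: "7001", tab: "info" }
```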
Why?
Once you've got the state in the hash, it can be modified by your code (or your user) to represent the current state in your application. There are many reasons for why you would want to do this.
One common case is when only a small part of a page changes based on a variable, and it would be inefficient to reload the entire page to reflect that change (Example: You've got a box with tabs. The active tab can be identified in the hash.)
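The tab case, as a sketch building on the parseHash helper above (the element ids and default tab name are illustrative):

```javascript
// Activate the tab named in the hash, so a URL like /page#tab=pricing
// restores the right tab on load and on back/forward navigation.
// A sketch; the element ids and default tab name are illustrative.
function showTabFromHash() {
  var name = parseHash(window.location.hash).tab || 'overview';
  $('.tab-panel').hide();
  $('#tab-' + name).show();
}

$(window).on('hashchange', showTabFromHash);
showTabFromHash();
```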
Other cases are when you load content dynamically in JavaScript and you want to tell the client what content to load (Example: http://beta.multifarce.com/#?state=7001 will take you to a specific point in the text adventure.)
When?
If you had a look at my "JavaScript realm", you'll see a borderline overkill case. I did it simply because I wanted to cram as much JavaScript dynamics into that page as possible. In a normal project I would be conservative about when to do this, and do it only where it brings positive changes in one or more of the following areas:
- User interactivity (usually the user won't see much difference, but the URLs can be confusing; remember loading indicators, since loading content dynamically can be frustrating to the user if it takes time)
- Responsiveness (time from one state to another)
- Performance (bandwidth, server CPU)
No JavaScript?
Here comes a big deterrent. While you can safely rely on 99% of your users having a browser capable of using your page with hashes for state, there are still many cases where you simply can't rely on this. Search engine crawlers, for example. While Google is constantly working to make their crawler work with the latest web technologies (did you know that they index Flash applications?), it still isn't a person and can't make sense of some things.
Basically, you're at a crossroads between compatibility and user experience.
But you can always build a road in between, which of course requires more work. In less metaphorical terms: implement both solutions, so that there is a server-side URL for every client-side URL that outputs relevant content. Compatible clients get redirected to the hash URL, so Google can index the "hard" URLs, and when users click them they get the dynamic state stuff!
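One way to sketch the client half of that: serve the fully rendered page at the "hard" URL, and let capable browsers upgrade themselves to the hash version (the paths and the capability test are placeholders):

```javascript
// Served with the "hard" URL, e.g. /search?q=hello. Crawlers and old
// browsers keep the fully rendered page; capable clients are bounced to
// the dynamic hash URL. A sketch; the paths and the crude capability
// test are placeholders.
(function () {
  var match = window.location.search.match(/[?&]q=([^&]*)/);
  var capable = 'onhashchange' in window;

  if (match && capable) {
    window.location.replace('/#q=' + match[1]); // no extra history entry
  }
})();
```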
Recently Google also stopped serving direct links in search results, offering redirects instead.
I believe both have to do with gathering usage statistics: which searches were performed by the same user, in what sequence, which of the search results the user followed, etc.
P.S. Now that's interesting: direct links are back. I'm certain I was seeing only redirects there over the last couple of weeks. They are definitely experimenting with something.
