Specifying a meta description for each mobile page? - jquery-mobile

In jQuery Mobile, we can use the "data-title" attribute to specify a page title for each individual page within a multi-page HTML document, like...
<section id="home" data-role="page" data-title="Home Page Title">
Which of course makes a lot of sense, because there may be several individual mobile pages contained within a single HTML document, so it's good to be able to specify a unique page title for each one, from an SEO perspective.
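For reference, a minimal multi-page document might look like this (the ids, titles, and content are just illustrative):
<body>
<!-- each data-role="page" section carries its own data-title -->
<section id="home" data-role="page" data-title="Home Page Title">
<div data-role="content">Home content</div>
</section>
<section id="about" data-role="page" data-title="About Page Title">
<div data-role="content">About content</div>
</section>
</body>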
But what if we also want to specify a META description for each individual mobile page in a multi-page document? Is this possible? Or are we stuck using the single META description in the head section of the document? (That would obviously be less than ideal from an SEO perspective.)
I've been researching this topic online for the past hour, but surprisingly have yet to find any info on it.
Anyone have the scoop on this?

Related

jquery mobile multi-page internal hyperlinking

This appears to be pretty basic but I can't figure it out.
Using a jQM multi-page template, I'm trying to allow users to jump from a link on one page (id='page1') directly to an image on another page (id='page2').
It appears I am constrained, by HTML hyperlinking rules and jQM, to this:
<a href='#page2'>go to image on p2</a>
... which of course jumps the user to the top of page2.
But that's not what I want. I want the user to jump directly to the IMAGE, which is close to the bottom of page2, tagged like so:
<img id='image-id'>
But tagging the link with the image's id (not the page's id), i.e. tagging it like this
<a href='#image-id'>go to image on p2</a>
doesn't work.
I get the feeling I'm missing something very obvious, but can't figure it out.
Any suggestions? Or is this not possible?
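For what it's worth, one common workaround (not from this thread; just a sketch assuming jQuery Mobile's pageshow event and $.mobile.silentScroll) is to keep the link pointing at the page, then scroll to the image once the page is shown:
<a href='#page2'>go to image on p2</a>
<script type="text/javascript">
// After #page2 is shown, jump down to the image without firing scroll events
$(document).on('pageshow', '#page2', function () {
  var img = $('#image-id');
  if (img.length) {
    $.mobile.silentScroll(img.offset().top);
  }
});
</script>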
I've got a different problem but found this question in my travels... thought I would add an extract from the jQuery Mobile docs:
http://demos.jquerymobile.com/1.4.5/navigation-linking-pages/
Note: You cannot link to a multipage document with Ajax navigation active because the framework will only load the first page it finds, not the full set of internal pages. In these cases, you must link without Ajax (see next section) for a full page refresh to prevent potential hash collisions. There is currently a subpage plugin that makes it possible to load in multi-page documents.
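In practice, linking "without Ajax" means adding rel="external" or data-ajax="false" to the link, e.g. (multipage.html is a placeholder):
<!-- either attribute forces a full page load instead of Ajax navigation -->
<a href="multipage.html" rel="external">full refresh via rel</a>
<a href="multipage.html" data-ajax="false">full refresh via data-ajax</a>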

Search engines ignoring meta description content and showing footer

I have a site that is very simple and has mostly images, a login form, and a link to sign up. No actual text exists in the body except for the footer, which shows the link to the usage terms and the copyright notice.
My site is currently showing up on search engine results with the footer content showing instead of what I put in the <meta name="description"...> tag. Why is this?
How can I stop search engines from indexing my site with the footer content showing? Or at least show the meta description first? Do I need to put some text in the form of a title attribute or alt attribute somewhere?
As +Filburt pointed out, you could add your site to Webmaster Tools, which will offer you valuable information about your site's presence on the web and in Google Search results. It may also provide hints about what we think of your meta descriptions :)
Generally, you will want to
write the meta description to be as natural as possible and not stuff keywords into it,
describe the page's content accurately in this tag,
and have a unique meta description for each page.
While we can't guarantee that the meta description you provide will be used as the search result snippet, following the above tips will greatly increase the chances.
Here is some more information about the meta description tag: http://www.google.com/support/webmasters/bin/answer.py?answer=35264
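Putting those tips together, a per-page description is just a single head tag with page-specific wording (the content here is made up):
<meta name="description" content="Sign in or create an account on Example.com, a photo-sharing site for hobbyist photographers.">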
It works to some extent to use <meta name="description" /> but Google will complain (and choose to ignore it) when every page has the same description.
If you are interested in how Google deals with your site you could sign up for their Webmaster Tools - they offer a good starting point for SEO-ing your site.
You could add content that is invisible to your visitors, but Google checks for this and considers hidden content cheating for PageRank, because it used to be a common SEO technique.
Meta tags were a failure and have been broadly ignored ever since the Google era began.
The problem was, humans would put stale, inaccurate, or irrelevant information in the meta tags. This was fifteen years ago when cataloging the Internet still seemed feasible.
Google came along and decided that what a web page actually says was more useful. Everybody else followed suit shortly after.
Now people are trying human-authored metadata again; they're calling it the "semantic web". My hopes are not high.

SharePoint 2007 newbie question - webpage markup

A web developer created our new intranet in SharePoint 2007. None of the pages have meta titles or descriptions (or any meta tags at all), and the pages have no semantic structure: divs are used instead of H1, H2, H3, etc.
He said this was a quick out-of-the-box creation, but I'm really concerned about the quality of the page structure and the impact it will have on the effectiveness of the internal search engine.
Does a SharePoint 2007 website come out of the box in this condition, and am I right or wrong to be concerned about this?
Any advice would be appreciated, as I would not accept an external site built like this, but he said "SharePoint is just like this" and I have no idea.
Many thanks
John
Edit the default master page and add your meta tags there if the whole site will share the same meta keywords and description. If you want to customize the meta tags for each page, find the ContentPlaceHolder that holds the head section of the page and put your meta tags there in each individual page.
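A rough sketch of both pieces (PlaceHolderAdditionalPageHead is the standard SharePoint head placeholder, but verify the name against your own master page; the descriptions are illustrative):
<!-- in default.master: site-wide tags plus a hook for individual pages -->
<head runat="server">
<meta name="description" content="Our company intranet" />
<asp:ContentPlaceHolder id="PlaceHolderAdditionalPageHead" runat="server" />
</head>
<!-- in a page layout: per-page tags pushed into that placeholder -->
<asp:Content ContentPlaceHolderID="PlaceHolderAdditionalPageHead" runat="server">
<meta name="description" content="HR policies and forms" />
</asp:Content>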

rails 3 + ajax: what is the simplest way to embed variable-length report html from my rails app onto another webpage

My app collects data reports that our users want to embed onto their own webpages.
The report is a short daily activity report... but the report length varies based upon the particular data that day, so a fixed-size iframe will not work... one day the report might have 10 lines, the next day the same report might have 100 lines.
Currently, they have to put a huge iframe on their webpage that attempts to be larger than the biggest report... but that's hit or miss.
I need to offer a better way that
a) works independently of the size of the html data that is returned, and
b) still allows us to control the display of the report via CSS on our server (e.g., does not require them to add our stylesheet to their website)
The info I've seen on using Ajax with Rails -- including an earlier question I asked -- seems to assume the destination page has an iframe...
But an iframe will not resize to fit the data, right?
Not having done ajax before, I'm looking for the simplest solution to embed this kind of variable-length content from my rails3 app to remote webpages.
Suppose, for example, my app has a URL/route/view:
http://example.com/dailyreport?customer_key=ABCDEFGHIJ
that currently returns a very simple CSS-styled HTML page with that customer's daily report:
<head>
<link rel="stylesheet" type="text/css" href="/stylesheets/reports.css">
</head>
<body>
REPORT IS DISPLAYED HERE
</body>
I'd appreciate any info on how to transition from our current fixed-iframe non-ajax approach to an approach (presumably ajax? without iframe?) that handles arbitrary-length data coming from our app.
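One iframe-free pattern (a sketch only; the /dailyreport.js endpoint is hypothetical) is to have customers embed a placeholder div plus a script tag, and have the Rails app serve JavaScript that injects the rendered report. The injected block is ordinary flow content, so it grows or shrinks with the day's data, and styles can be inlined server-side so the host page never needs our stylesheet:
<!-- on the customer's page -->
<div id="daily-report"></div>
<script src="http://example.com/dailyreport.js?customer_key=ABCDEFGHIJ"></script>
// what the Rails app might return from /dailyreport.js:
// the report HTML is rendered server-side into a string, with styles inlined
document.getElementById('daily-report').innerHTML =
  '<div style="font-family: sans-serif">' +
  '<h3 style="margin: 0">Daily Report</h3>' +
  '<p>report rows rendered server-side go here</p>' +
  '</div>';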

robots.txt to restrict search engines indexing specified keywords for privacy

I have a large directory of individual names, along with generic, publicly available, category-specific information, that I want indexed as much as possible in search engines. Listing these names on the site itself is not a concern to people, but some don't want to show up in search results when they "Google" themselves.
We want to continue listing these names within a page AND still index the page BUT not index specified names or keywords in search engines.
Can this be done page by page, or would setting up two pages be a better workaround?
Options available:
PHP can censor keywords if user-agent=robot/search engine
htaccess to restrict robots to non-censored content, but allowing to a second censored version
meta tags defining words not to index?
JavaScript could hide keywords from robots but otherwise viewable
I will go through the options and tell you some problems I can see:
PHP: If you don't mind trusting the user agent, this will work well. I am unsure how some search engines will react to different content being displayed for their bots.
htaccess: You would probably need to redirect the bot to a different page. You could use URL parameters, but this would be no different than using a pure PHP solution. The bot would index the page it is redirected to, not the page you want visitors to see. You may be able to use the rewrite engine to overcome this.
meta tags: Even if you could use meta tags to get the bot to ignore certain words, there is no guarantee search engines would honor them, since there is no set "standard" for meta tags. But that doesn't matter, since I don't know of any way to get a bot to ignore certain words or phrases using meta tags.
JavaScript: No bot I have ever heard of executes (or even reads) JavaScript when looking at a page, so I don't see this working. You could use JavaScript to write out the content you want hidden from bots; bots won't be able to see it, but neither will users who have JavaScript disabled.
I would go the PHP route.
You can tell robots to skip indexing a particular page by adding a ROBOTS meta tag:
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
UPDATE: The ways to restrict indexing of particular words I can think of are:
Use JS to add those to the page (see below).
Add a module to the server that would strip those words from the rendered page.
JavaScript could be something like this:
<p>
<span id="secretWord">
<script type="text/javascript">
document.write('you can protect the word by concatenating strings/using HEX codes etc.')
</script>
</span>
</p>
The server module is probably the best option. In ASP.NET it should be fairly easy to do; I'm not sure about PHP, though.
What's not clear from your post is whether you want to protect your names and keywords against Google, or against all search engines. Google is generally well-behaved: you can use the ROBOTS meta tag to prevent a page from being indexed, but that won't stop search engines that ignore the ROBOTS tag from indexing your site.
Other approaches you did not suggest:
Having the content of the page fetched with client-side JavaScript.
Force the user to solve a CAPTCHA before displaying the text. I recommend the reCAPTCHA package, which is easy to use.
Of all these, the reCAPTCHA approach is probably the best, as it will also protect against ill-behaved spiders. But it is the most onerous for your users.
