One of our websites has a URL like this: example.oursite.com. We decided to move the site to a URL like this: www.oursite.com/example. To do this, we wrote a rewrite rule on our Apache server that redirects to the new URL with a 301 code.
Many websites link to us with URLs of the form example.oursite.com/#id=23. The problem is that the redirection erases the hash part of the URL in IE. As far as I know, the hash part is never sent to the server.
I wanted to implement the redirection with JavaScript to keep the hash part, but then search engines would not be aware that our URL changed (no 301 code returned).
I want search engines to be notified of our new URL (301) because we need to transfer the page rank to the new URL.
Is there a way to redirect with a 301 code and keep the hash part (#id=23) of the URL?
Search engines do in fact care about hash tags; they frequently use them to highlight specific content on a page.
As to the question, however: anchor locations are unfortunately not sent to the server as part of the HTTP request. If you want to redirect a user and keep the fragment, you will need to do it in JavaScript on the client side.
Good article: http://web.archive.org/web/20090508005814/http://www.mikeduncan.com/named-anchors-are-not-sent/
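For example, here is a minimal client-side sketch for the original question. It is hypothetical: it assumes the old pages on example.oursite.com can still serve a small HTML stub, and that the new location is the same path under www.oursite.com/example/. The fragment survives because it never leaves the browser:
(function() {
  // Hypothetical target: the same path moved under /example/ on the new host.
  var target = 'http://www.oursite.com/example' + window.location.pathname;
  // location.search and location.hash are empty strings when absent, so this is safe to append.
  window.location.replace(target + window.location.search + window.location.hash);
})();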
Seeing as the server will never see the # (ruling out 301 Redirects) and Google has deprecated their AJAX Crawling scheme, it seems that a front-end solution is the only way!
How I did it:
(function() {
  // Map legacy hash-bang fragments to their new, crawlable paths.
  var redirects = [
    ['#!/about', '/about'],
    ['#!/contact', '/contact'],
    ['#!/page-x', '/pageX']
  ];
  for (var i = 0; i < redirects.length; i++) {
    if (window.location.hash === redirects[i][0]) {
      // replace() keeps the old hash URL out of the browser history.
      window.location.replace(redirects[i][1]);
    }
  }
})();
I'm assuming that because Google's crawlers do indeed execute JavaScript, the new pages will be indexed properly.
I've put it in a <script> tag directly underneath the <title> tag, so that it gets executed before any other JS/CSS. Note that this script should only be required for your index file.
I am fairly certain that the hash/page anchor/bookmark part of a URL is not indexed by search engines, and therefore has no effect on your page ranking. Doing a Google search for "inurl:#" returns zero documents, so that backs up my assumption. Links from external sites will be indexed without the hash.
You are right in that the hash part isn't sent to the server, so as far as I am aware, there isn't a good way to create a redirection URL with the hash in it.
Because of this, it's up to the browser to correctly manage the hash during a redirect. Firefox 3.5 appears to do this successfully. If you append a hash to a URL that has a known redirect, you will see the URL change in the address bar to the new location, but the hash stays on there successfully.
Edit: In response to the comment below, if there isn't a hash sign in the external URL for the part you need, then it is entirely possible to rewrite the URL. An Apache rewrite rule would take care of it:
# Redirect everything on exemple.oursite.com to the same path under www.oursite.com/exemple/
RewriteCond %{HTTP_HOST} ^exemple\.oursite\.com$ [NC]
RewriteRule ^/(.*) http://www.oursite.com/exemple/$1 [L,R=301]
If you're not using Apache, then you'll have to look into the server docs for something similar.
Google has a special syntax for AJAX applications that is based on hash URLs: http://code.google.com/web/ajaxcrawling/docs/getting-started.html
You could create a page on the old address that catches all requests and redirects to the new site with the correct address and code.
I did something like that, but it was in ASP.NET, which I guess isn't the language you use. Anyway, there should be a way to do this in any language.
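For illustration only, here is a minimal sketch of that catch-all idea in Node.js rather than ASP.NET. The host name and the /example prefix are assumptions; note that the hash still never reaches the server, so it cannot be carried over here:
// Catch every request on the old host and answer with a 301 to the new location.
const http = require('http');

http.createServer(function (req, res) {
  // req.url is the path plus query string of the incoming request.
  res.writeHead(301, { Location: 'http://www.oursite.com/example' + req.url });
  res.end();
}).listen(80);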
When returning status 301, your server is supposed to return a 'Location:' header which points to the new location. In practice, the way this is implemented varies; some servers provide the full URL (netloc and path), some just provide the new path and expect the browser to look for that path on the original netloc. It sounds like your rewrite rule is stripping the path.
An easy way to see what the returned Location header is, from the Python 2 shell (httplib was renamed http.client in Python 3):
>>> import httplib
>>> conn = httplib.HTTPConnection('exemple.oursite.com')
>>> conn.request('HEAD', '/')
>>> res = conn.getresponse()
>>> print res.getheader('location')
I'm afraid I don't know enough about mod_rewrite to tell you how to do the rewrite rule correctly, but this should give you an idea of what your server is actually telling clients to do.
The search bots don't care about hash tags. And if you are using them for some kind of Flash or AJAX calls, you have more serious problems than your 301 redirects not working: unless you have the content in an alternate form, the search engines are not indexing your site, and you are definitely suffering as far as SEO goes.
I registered my account, so I can't edit.
zombat: I'm sorry, I made a mistake in my comment. The link to our video is exemple.oursite.com/#video_id=233. In this case, my rewrite rule in Apache doesn't work.
Nick Berardi: We changed the way our links work. We don't use # anymore, except for backward compatibility.
Related
I'm on the learning curve for 301 redirects and have done lots of research, including looking at answers on this forum. I haven't found the answer to my specific query, which requires removing elements from the middle of the URL request.
Namely, I am building a new site with dynamic links (WordPress, but the question applies to any CMS).
I need to redirect from links (also dynamic) structured as:
sitename.com/issue/february-2016/post/dynamic-post-name
(february-2016 is an example - could be 'march-2014' or any of a range of terms)
to:
sitename.com/post/dynamic-post-name
Another way to say this: any request URL containing /article/ needs to grab that last string (which I think would be the wildcard?) and redirect it as: sitename.com/post/$
Is this possible?
Update: With more research, I found a possible answer that worked in a testing tool, although I've not tested it live on my site.
Does this look correct?
RewriteRule ^([^/]+)/([^/]+)/article/([^.]+)$ article/$3 [QSA,L]
RewriteRule ^article/.*/(.*)$ post/$1 [QSA,L,R=301]
Something like this should work.
The characters captured within the parentheses (.*) will be $1.
Feel free to change article and post to fit your needs.
In this case, it will redirect
http://example.com/article/february-2016/post/dynamic-post-name
to
http://example.com/post/dynamic-post-name
Friends, this is a complex problem for me. I have researched this many times and at last have come to you (with the hope that I will get a solution). We had product URLs like:
/product_info.php/products_id/75
For SEO, I wanted a keyword-rich URL, so we added a slug in the products.php file and modified the URL to:
/product_info.php/products_id/75/product-title
But that's also not an ideal URL. I want it to be:
domainname.com/products/product-title/75
The changes I made in the .htaccess file are as follows:
RewriteRule ^products/([A-Za-z0-9-]+)/([0-9]{2})/?$ product_info.php?products_id=$2=$1 [L]
RedirectMatch 301 ^/product_info.php/products_id/([0-9]{2})/([A-Za-z0-9-]+)$ http://www.livevaastu.com/products/$2/$1
Now the problem is: our old URLs (which have no slugs) are indexed by Google, and I have no idea how to redirect those old ones to the new ones. There are also many product pages, so I can't redirect them one by one. Can you help me somehow? (I'm not a developer.)
You can't produce product_info.php?products_id=$2=$1 from your old URLs of /product_info.php/products_id/75 because they don't have the product title ("slug").
For one thing, product_info.php?products_id=$2=$1 doesn't make any sense. Is that a typo? What are the key/value pairs in that query string? Given your rewrite pattern, this should look something like products_id=$2&product_title=$1, where each derived "value" from the mod_rewrite match gets assigned to a known "key", something you can use in $_GET or $_REQUEST to find the value.
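For clarity, a hedged sketch of the corrected rewrite, reusing the character classes from your own rule (so $1 is the slug and $2 is the id):
# Pretty URL /products/product-title/75 -> internal product_info.php with proper key/value pairs
RewriteRule ^products/([A-Za-z0-9-]+)/([0-9]{2})/?$ product_info.php?products_id=$2&product_title=$1 [L]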
Edit to help with what I think you are trying to achieve here, based on discussion:
If you want your old URLs to lead to the new "pretty" URLs, you will need to use PHP to do this. As mentioned, there simply is not adequate information in the URL to invent the product names. But you could pretty easily have something at the top of each page (i.e. in a header file) which looked to see if the "title" $_GET parameter is present or not (once you clean up the double-equal sign and replace it with proper key/value pairs). This might look something like:
<?php
if ( !isset( $_GET['product_title'] ) ) {
    // The old URLs only carry the id, so grab it from the query string...
    $product_id = (int) $_GET['products_id'];
    // ...then look up $product_title from $product_id, presumably in a DB.
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: /products/$product_title/$product_id");
    exit();
}
I am changing the way links appear on my website. I changed from allowing spaces in the URL to a new format where the URL has dashes where spaces used to be.
This affects only ONE string in the middle of the URL.
Google has indexed many of my pages with the old spaces in the URL, but now they show up as 404s. Is it possible for me to put some code in place (temporarily) that can redirect those URLs with spaces to the ones with dashes? I think it's a 301 redirect, i.e. a permanent redirect.
Thanks,
We went through the same thing recently. We ended up creating a LegacyController, which basically called into RedirectToActionPermanent or RedirectToRoutePermanent (HTTP 301, Moved Permanently).
Ideally, you should let IIS7 do the redirects, but we couldn't, because we needed to call our DB in order to figure out where to go.
If your redirect is as simple as you say it is (e.g. no "dynamic" info in the URL), then you should use IIS.
Why don't you try to configure your routing to support both legacy and new routes?
Basically, /a b c/page and /a-b-c/page should be mapped to the same controller action.
I've inherited a site with hundreds of scattered HTML and non-framework PHP files, which I am porting to Ruby on Rails 3.0.
As functionality is added in the Rails app, the corresponding pages are deleted from the document root; but, because there are often links to these in Google or from external sites, simply returning a 404 is not acceptable.
A URL like '/contact.php' should redirect to '/app/contact/', for example.
For the first few cases of this, I created simple stub HTML files at the old locations, with meta tags inside to perform the redirect. This doesn't scale well, particularly once I start replacing product pages, of which there are thousands.
My preference is to delete the old pages, then have the 404 handler dispatch these to the new Rails app, which will examine the URL using regexes and database lookup to try to figure out what the replacement page is, then issue a 301 redirect to that new page.
In httpd.conf, I placed the directive:
ErrorDocument 404 /app/error/handle404
# /app/error is a rails url.
When I hit "http://localhost/does-not-exist", this causes my ErrorController to be invoked, as expected.
However, within the controller, I cannot find the original path ("/does-not-exist") anywhere in request, request.headers, or ENV - I've been calling likely methods like request.request_uri (which contains /app/error/handle404), and examining request.headers and ENV without finding the expected original path.
The Apache access_log shows only the request for /does-not-exist, indicating that it transparently invoked /app/error/handle404 (without doing a redirect or causing a second request to be made).
How can I get access to the original URL?
Edit: to clarify, here is the sequence of events:
User hits legacy path like http://mysite/foo.php, probably coming from some ancient link from a blog.
...but foo.php no longer exists!
this is a 404, thus Apache invokes ErrorDocument
directive is "ErrorDocument 404 /railsapp/error/handle404"
Rails routes this to ErrorController action "handle404" - this is working correctly
problem: in ErrorController, request.request_uri and request.headers do not provide any clue as to which URL the user was actually trying to get to, like "/foo.php"; I need to know the original URL to serve up an appropriate replacement page.
As I couldn't find the original, non-rewritten URL in the Rails request, I ended up doing it in PHP - plain, old-fashioned, non-framework PHP with explicit mysqli_*() calls.
The PHP error handler receives the necessary information in the $_SERVER hash; $_SERVER['REQUEST_URI'] contains the original URI that I needed.
I look this up in a database, and if I find a corresponding entry, issue a 301 redirect to the new location; if there's no entry, I simply display a 404 page to the user.
Simplified (PHP):
$url = $_SERVER['REQUEST_URI'];
$redir = lookupRedirect($url); # database stuff here
if (!$redir) {
    include('404.phtml');
} else {
    header("Status: 301");
    header("Location: " . $redir['new_url']);
}
It's an ugly kluge, but I just couldn't find a way to make the Rails app aware of the error URL.
Can I create clean URLs for WebBroker web pages/applications?
A typical WebBroker URL normally looks like:
hxxp://www.mywebsite.com/myapp.dll?name=fred
or
hxxp://www.mywebsite.com/myapp.dll/names/fred
What I would prefer is:
hxxp://www.mywebsite.com/names/fred
Any idea how I can achieve this with Delphi/WebBroker ? (ISAPI/Apache)
The typical way of doing this is to use Apache's mod_rewrite to map the clean URL onto the real URL with parameters. Many, many applications do this to create 'human readable' and more search-engine-friendly URLs.
For example, you might add this rule to make action=sales&year=2009 look like sales-2009.htm:
RewriteRule ^sales-2009\.htm$ index.php?action=sales&y=2009 [L]
When the user goes to 'sales-2009.htm', the request is actually rewritten internally to the PHP page with the appropriate parameters. To the end user, though, the browser's URL bar still shows sales-2009.htm.
You can, of course, use regular expressions with mod_rewrite, so the mappings become much more flexible. You could, for example, use a single expression in the above example that maps any year to the correct parameter, as sketched below.
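A hedged sketch of such a rule, assuming the same index.php script and parameter names as above:
# Hypothetical: capture any four-digit year and pass it through as the y parameter
RewriteRule ^sales-([0-9]{4})\.htm$ index.php?action=sales&y=$1 [L]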