Website redevelopment and preserving Google Analytics - asp.net-mvc

I'm currently working on a project to redevelop a public sector website. The site has used Google Analytics since April 2007, so a lot of data has been captured.
The new site will be developed using ASP.NET MVC, and as part of the redevelopment we want to make the URLs more SEO-friendly by replacing the likes of ?id=123 with /news/2010/01/this-is-the-title.aspx. Other pages will have different routes, though they may serve the same content.
I've already developed a legacy route engine which can issue a 301 and redirect the user to the new page; however, I am uncertain what will happen to the Google Analytics data.
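For illustration, the legacy route handling does roughly this (a simplified sketch; the urlMap dictionary is a hypothetical stand-in for our real database lookup):

using System.Collections.Generic;
using System.Web.Mvc;

public class LegacyNewsController : Controller
{
    // Hypothetical mapping from old ?id=123 identifiers to the new paths;
    // in the real site this comes from the database.
    private static readonly IDictionary<int, string> urlMap =
        new Dictionary<int, string> { { 123, "/news/2010/01/this-is-the-title.aspx" } };

    public ActionResult Item(int id)
    {
        string newUrl;
        if (!urlMap.TryGetValue(id, out newUrl))
            newUrl = "/"; // unknown IDs fall back to the home page

        return new PermanentRedirectResult(newUrl);
    }
}

// MVC 1/2 have no built-in 301 result, so set the status code directly.
public class PermanentRedirectResult : ActionResult
{
    private readonly string url;
    public PermanentRedirectResult(string url) { this.url = url; }

    public override void ExecuteResult(ControllerContext context)
    {
        context.HttpContext.Response.StatusCode = 301;
        context.HttpContext.Response.RedirectLocation = url;
    }
}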
I really don't want to start over with the analytics. Is there a way to join up the new URLs to the old URLs in Google Analytics?

I guess you could make use of the _trackPageview function to track a different page name; see "How do I track AJAX applications?" in the Google Analytics documentation.
For that, you would obviously need to know programmatically the name of the old URL.
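For example, with the asynchronous ga.js snippet you can report the old path instead of letting it default to the new one (the account ID and old path below are placeholders):

<script type="text/javascript">
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
  // Report this page under its old URL so it lines up with the historical data.
  _gaq.push(['_trackPageview', '/news.aspx?id=123']);

  (function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
  })();
</script>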
It might be less hassle to make a clean cut and start a new profile from scratch with the new URLs.

Related

Get rid of old links to a retired website in Google search

I have a website that has been replaced by another website with a different domain name.
In Google search I can still find links to the pages on the old site, and I hope they will not show up in future Google searches.
Here is what I did, but I am not sure whether it is correct or enough.
Access to any page on the old website is immediately redirected to the homepage of the new website. There is no one-to-one page mapping between the two sites. Here is the code for the redirect on the old website:
<meta http-equiv="refresh" content="0;url=http://example.com" >
I went to the Google Webmasters site. For the old website, I went to Fetch as Google and clicked "Fetch and Render" and "Reindex".
Really appreciate any input.
A few things you'll want to do here:
You need to use permanent server-side redirects (HTTP 301), not a meta refresh. I also suggest you provide one-to-one page mapping: it's a better user experience, and large numbers of redirects to the root are often interpreted as soft 404s. Consult Google's guide to site migrations for more details.
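For example, on an Apache server the redirects could be issued with mod_alias in the old site's configuration or .htaccess (domain and paths are placeholders):

# One-to-one mappings where you have them:
Redirect permanent /about.html http://example.com/about
# Catch-all for pages that keep the same path on the new domain:
RedirectMatch permanent ^/(.*)$ http://example.com/$1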
Rather than Fetch & Render, use the Change of Address tool in Google Search Console (formerly Webmaster Tools). Bing has a similar tool.
A common mistake is blocking crawler access to a retired site. That has the opposite of the intended effect: the old URLs need to remain accessible to search engines for the redirects to be "seen".

How to delete old Google Urls with parameters

A month ago I relaunched a website on the TYPO3 CMS. Before that, the site was hosted on the Joomla CMS.
In the Joomla configuration, SEO links were disabled, so Google indexed the page URLs like this:
www.domain.de/index.php?com_component&itemid=123....
for example.
Now, a month later (after the TYPO3 relaunch), these links are still visible in Google because the URLs don't return a 404 error. That's because index.php also exists in TYPO3, and TYPO3 doesn't care about the additional query string/variables - it returns a 200 status code and shows the front page.
In Google Webmaster Tools it's possible to delete single URLs from the Google index, but that way I would have to delete about 10,000 URLs manually...
My question is: is there a way to remove these old URLs from the Google index?
Greetings
With this number of URLs there is only one sensible solution: implement proper 404 handling in your TYPO3, or better yet, redirect to the equivalent content in TYPO3.
You can use TYPO3's built-in handler, pageNotFound_handling (search for it in Install Tool > All configuration). It supports options like REDIRECT, for redirecting to some page, or USER_FUNCTION, which lets you run your own PHP script; check the description in the Install Tool.
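A minimal sketch of the REDIRECT variant in typo3conf/localconf.php (the target URL is a placeholder; see the Install Tool description for the exact USER_FUNCTION reference syntax):

// Send unresolvable URLs somewhere sensible instead of serving the front page with a 200.
$TYPO3_CONF_VARS['FE']['pageNotFound_handling'] = 'REDIRECT:http://www.domain.de/';
// Optionally control the status header sent with the handling:
$TYPO3_CONF_VARS['FE']['pageNotFound_handling_statheader'] = 'HTTP/1.0 404 Not Found';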
You can also write a simple condition in TypoScript that checks whether typical Joomla parameters exist in the URL; that way you can easily return a custom 404 page. If you need a more sophisticated condition (for example, you want to redirect links that previously pointed to some gallery in Joomla to the new gallery in TYPO3), you can make use of a userFunc condition, and that would probably be the best option for SEO.
If these URLs share a manageable number of common indicators, you could catch these links with a rule in your virtual host or .htaccess, so that Google runs into the correct error message, as sketched below.
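A sketch of such an .htaccess rule, keyed off the query string the old Joomla links share (adjust the parameter names to whatever your old URLs actually use):

RewriteEngine On
# Old Joomla URLs look like index.php?com_component&itemid=123
RewriteCond %{QUERY_STRING} (^|&)itemid= [NC]
RewriteRule ^index\.php$ - [G]
# [G] answers 410 Gone; swap in a target URL with [R=301,L] to redirect instead.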
I wrote a Google Chrome extension to remove URLs in bulk in Google Webmaster Tools. Check it out here: https://github.com/noitcudni/google-webmaster-tools-bulk-url-removal.
Basically, it's a glorified for loop. You put all the URLs in a text file. For example:
http://your-domain/link-1
http://your-domain/link-2
Having installed the extension as described in the README, you'll find a new "choose a file" button.
Select the file you just created. The extension reads it in, loops through all the URLs, and submits them for removal.

Tracking Page Popularity in a Time Frame in Rails

I'm very new to web development and this seems like a basic question, so perhaps I just lack the correct terminology to search it on Google.
On my site I plan to have many dynamically generated pages, based off data in a MySQL server, and I would like to know which ones people have been visiting the most in, say, the last 24 hours, so that I can place the most popular pages on the front page of the site. How would I be able to accomplish this in a Rails application?
What you're looking for is a web analytics solution to analyze your traffic, and possibly your marketing effectiveness. Here are some of the most prominent services you could use with your website:
Google Analytics
Chartbeat
Reinvigorate
HaveAMint
GetClicky
Piwik
Woopra
Personally, I use Google Analytics, as its setup is darn simple: configure your account, add a JavaScript snippet on each of the pages you want to track, and you're done.
You could also look at self-hosted web analytics software. All in all, take a look at this Wikipedia page for more information.
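If you'd rather roll it yourself inside Rails, here is a minimal sketch (the PageView model and its columns are illustrative, not a library API):

# A bare-bones hit counter: one row per request, ranked over the last day.
class PageView < ActiveRecord::Base
  # columns: path (string), created_at (datetime)
  def self.most_popular(limit = 10)
    where("created_at >= ?", 24.hours.ago).
      group(:path).
      order("count_all DESC"). # count_all is the alias Rails gives the grouped count
      limit(limit).
      count                    # => { "/some/page" => 42, ... }
  end
end

class ApplicationController < ActionController::Base
  after_filter :record_page_view

  private

  def record_page_view
    PageView.create(:path => request.path) if request.get?
  end
end

PageView.most_popular then gives you the list for the front page; at real traffic volumes you would want to sample or batch the writes.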

How do search engines see dynamic profiles?

Recently, search engines have been able to index dynamic content on social networking sites. I would like to understand how this is done. Are there static pages created by a site like Facebook that update semi-frequently? Does Google attempt to store every possible user name?
As I understand it, a page like www.facebook.com/username is not an actual file stored on disk but shorthand for a query like select username from users, with the result displayed on the page. How does Google know about every user? This gets even more complicated when things like tweets are involved.
EDIT: I guess I didn't really ask what I wanted to know. Do I need to be as big as Twitter or Facebook for Google to build special ways to crawl my site? Will Google automatically find my users' profiles if I allow anyone to view them? If not, what do I have to do to make that work?
In the case of tweets in particular, Google isn't 'crawling' for them in the traditional sense; they've integrated with Twitter to provide the search results in real-time.
In the more general case of your question, dynamic content is not new to Facebook or Twitter, though it may seem to be. Google crawls a URL; the URL provides HTML data; Google indexes it. Whether it's a dynamic query that's rendering the page, or whether it's a cache of static HTML, makes little difference to the indexing process in theory. In practice, there's a lot more to it (see Michael B's comment below.)
And see Vartec's succinct post on how Google might find all those public Facebook profiles without actually logging in and poking around FB.
OK, that was vastly oversimplified, but let's see what else people have to say.
As far as I know Google isn't able to read and store the actual contents of profiles, because the Google bot doesn't have a Facebook account, and it would be a huge privacy breach.
The bot works by hitting facebook.com and then following every link it can find. Whatever content it sees on the page it hits, it stores. So even if it follows a dynamic URL like www.facebook.com/username, it will just remember whatever it saw when it went there. Hopefully, in that particular case, it isn't all the private data of said user.
Additionally, Facebook can and does provide special instructions that search bots can follow, so that Google results don't include a bunch of login pages.
In short: profiles can be linked from outside, and the site may provide a sitemap.
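For example, a site can point crawlers at its public profiles with a sitemap referenced from robots.txt (the domain and profile path are placeholders):

robots.txt:
Sitemap: http://example.com/sitemap.xml

sitemap.xml:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/username</loc>
    <changefreq>daily</changefreq>
  </url>
</urlset>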

Removing /dotnetnuke/ in all page urls

We want to remove the /dotnetnuke/ from all 300 pages of our website that has been running since Feb 09.
Google isn't indexing all of our pages, just 98. I'm thinking that the /dotnetnuke/ is pushing our content too deep into the site for Google to find(?)
We also don't have any PageRank, although our site appears on page one for most search queries. Obviously we don't want to lose our position in Google.
Would you advise that we remove the /dotnetnuke/ from our URLs, and if so, should we create a new site and use 301 redirects, or is there a way of removing the /dotnetnuke/ from our existing URLs while still keeping our Google history?
Many thanks
DotNetNuke uses its own URL rewriting which is built in to the framework. DotNetNuke uses the provider model, so you can also plug in your own URL rewriter or secure one from a third party. If that is what you need, I'd suggest taking a look at Bruce Chapman's iFinity URL Rewriter as a quality free third party extension to DotNetNuke. He also offers a fancier commercial version called URL Master, which I haven't needed to use as of yet.
However, I believe the /dotnetnuke/ you're referring to may not actually be part of your "pages" but the alias of your DotNetNuke portal (i.e. www.yoursite.com/dotnetnuke). This would mean that /dotnetnuke/ is part of the base path for all pages, because DotNetNuke uses the base path to determine which portal to load. If this is the case, you could potentially just change your portal alias to www.yoursite.com (depending on the level of access you have to the site/server).
Lastly, sometimes virtual pages do not get included in DotNetNuke's site map. If you are using a third-party module for your dynamic content, it may in fact not be represented in your site map. I'd look into which pages are currently represented in your site map as well.
In IIS7 you can use URL rewrite functionality to hide /dotnetnuke/.
A 301 redirect will also work fine (just make sure you are not using a 302 - Google doesn't like those).
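For the IIS7 approach, a sketch of a web.config rule for the URL Rewrite module (rule name and pattern are illustrative; adjust to your portal alias):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="Strip dotnetnuke prefix" stopProcessing="true">
        <match url="^dotnetnuke/(.*)$" />
        <!-- Permanent = a 301, so Google carries the old URLs' history over -->
        <action type="Redirect" url="{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>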
In addition to the first two answers: if you are running DNN on GoDaddy hosting, GoDaddy has a strange way of setting up sites. Here is how you can remove that problem:
Set up a second (non-primary) domain. Under domain management, you can assign the second domain to point to a subdirectory. Make sure that the subdirectory is whatever directory you set DNN up in.
I might have this slightly wrong, as I got it from GoDaddy's site, but I have done it twice and got it to work correctly.
