I have a blog on Blogspot with a URL like myblog.blogspot.com. It's now getting around 30,000 page views a month. I want to change the blog URL to myblog.com.
But I worry that the traffic I have gained so far will drop to nothing because of the new URL, and that my Google PageRank and Alexa rank will go to nil.
So, should I change the domain of my blog or not?
Maybe this link will help you: How do I use a custom domain
It's a simple forward, so your rankings will not go to nil.
Your original Blogspot address will automatically forward to your new domain. That way, any existing links or bookmarks to your site will still work.
When you migrate from a Blogspot sub-domain to your own domain, you must set up proper redirects. The redirects should be the permanent (301) type, not the temporary (302) variety. Permalinks should redirect directly to the corresponding permalinks:
http://myblog.blogspot.com/ -> http://myblog.com/
http://myblog.blogspot.com/this-is-a-blog-post -> http://myblog.com/this-is-a-blog-post
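Blogger handles this forwarding for you once the custom domain is attached, but it is worth verifying that the old permalinks really answer with a permanent (301) status and point at the matching new permalink. A minimal sketch using Python's requests library and the placeholder URLs above:

    import requests

    # Old blogspot permalinks that should 301 to the matching permalinks
    # on the new domain (placeholder URLs from the example above).
    old_urls = [
        "http://myblog.blogspot.com/",
        "http://myblog.blogspot.com/this-is-a-blog-post",
    ]

    for old in old_urls:
        # Don't follow the redirect; we want to inspect the first response.
        resp = requests.get(old, allow_redirects=False)
        print(old, resp.status_code, resp.headers.get("Location"))
        # Expect status 301 and a Location header on the myblog.com domain.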
You should also change all your internal links so they don't mention your old sub-domain. If you control any external links, change those too. You might even consider asking some webmasters to update the external links that point to your blog.
Even if you do the redirects correctly, there is a good chance that you will lose Google traffic for some time. The last time I tried a move from a sub-domain to a full domain (several years ago), I lost about 75% of my Google referrals for about 8 months. After 8 months, Google seemed to trust my new domain again and my traffic came right back.
Google has a change of address tool as part of webmaster tools. It is limited to use on "full-domains" and it won't work in your case because you are starting out on a sub-domain. Google has a help document that goes along with it which you may still find useful.
I'm aware of the differences between GET and POST (security and caching, in particular). Additionally, when I search this question using Google, I'm only greeted by results telling me how to hack site search in Google Analytics for POST-based engines. I already know how to do that.
What I'm wondering is why employ a POST-based search engine in the first place? What are the salient advantages? I can't imagine why site search queries would need to be secure. So maybe it has something to do with caching?
Thanks so much in advance to anyone who can shed light on this.
No real "answer" to this one - it's entirely up to the site owners choice and/or the options the software they use on their website.
I would, however, say that there are very valid reasons for search terms to be secure: if you are searching for private medical conditions, for example, or for sexual preferences that you'd prefer not to be widely known. And then there are search terms used in countries more restrictive than the one you're used to, where having a history of those searches on your computer could get you into very serious trouble.
Google has long stopped search terms from being passed on to the next website in the referrer field for just these reasons.
Advantages of a GET-based search page:
Easy to copy and paste the link for someone else.
Adds to your web history.
Allows the search to be implemented client-side (e.g. Google Custom Search Engine uses a JavaScript call to Google's main search engine rather than a complicated server-side implementation).
Advantages of a POST-based search page are mostly to do with security:
Cannot be accidentally shared by copying and pasting the URL.
Does not add search terms to your web history.
Cannot leak search terms in the referrer field of sites you click on. Preventing that leak takes extra effort with a GET request (as Google has done) but is the default with POST requests.
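To make the difference concrete, here is a minimal sketch in Python using the requests library against a hypothetical example.com/search endpoint: with GET the terms become part of the URL (and therefore of history, logs and referrers), while with POST they travel in the request body.

    import requests

    query = {"q": "private medical condition"}

    # GET: the terms end up in the URL itself, e.g.
    # https://example.com/search?q=private+medical+condition
    # so they land in browser history, bookmarks, server logs and referrers.
    get_resp = requests.get("https://example.com/search", params=query)
    print(get_resp.url)

    # POST: the same terms are sent in the request body; the URL stays
    # https://example.com/search, so nothing sensitive is in the address bar.
    post_resp = requests.post("https://example.com/search", data=query)
    print(post_resp.url)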
Is there any way to protect Google Analytics from receiving fake information from spammers? The problem is that anybody who knows the tracking ID can send data to it.
I found the following solution, a domain filter, but that's for the web. How can it be implemented for a mobile app?
https://blog.kissmetrics.com/protect-analytics-from-hacking/
Thanks
Visible Tracking ID
On a website you can right-click and view the source, but you can't on a mobile application, so your tracking ID can't be found from your client's application. Nonetheless, if your code is pushed to a public repository (GitHub, for example), robots may find it.
Random spammers
Even if your tracking ID is kept secret, you'll still get bots spamming your account at random (they try every possible tracking ID). Google added a tool to prevent this: go to Admin > View > View Settings and turn on Exclude all hits from known bots and spiders. Google will then automatically filter out hits from known bots and spiders.
Hostname security hack
Even with Google's automatic filter you may still see some spam. This can be fixed with what they explain in the link you provided.
In the article you linked, they create a custom dimension and use it to separate real data from spam data. You can use the same trick with your mobile application; the value does not need to be a hostname, it just needs to be a string known only to you, a sort of second secret on top of your tracking ID (which should already be secret).
This works because bots can brute-force every tracking ID, but they can't try every custom dimension with every possible value; it's too much work for them.
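To illustrate the idea, here is a minimal sketch of an app hit sent through the (Universal Analytics) Measurement Protocol with a secret value in custom dimension 1. The dimension index and the secret string are assumptions; a view filter that only includes hits whose cd1 matches the secret would then drop the spam.

    import uuid
    import requests

    TRACKING_ID = "UA-XXXXXXX-Y"        # your tracking ID (keep it out of public repos)
    SHARED_SECRET = "my-private-token"  # assumed value, known only to you
    CLIENT_ID = str(uuid.uuid4())       # anonymous client identifier

    payload = {
        "v": "1",              # Measurement Protocol version
        "tid": TRACKING_ID,
        "cid": CLIENT_ID,
        "t": "event",          # hit type
        "ec": "app",           # event category
        "ea": "launch",        # event action
        "cd1": SHARED_SECRET,  # custom dimension 1 carries the secret
    }

    # Bots that brute-force tracking IDs won't know cd1, so a view filter
    # requiring cd1 == SHARED_SECRET silently drops their hits.
    requests.post("https://www.google-analytics.com/collect", data=payload)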
I have an ASP.NET Web Forms application that has been live for a number of years and, as such, has quite a lot of content indexed on Google.
Ideally, I'd prefer that all URLs for the website were lowercase, but I understand that having two versions of the same content indexed in search engines (MixedCase.aspx and mixedcase.aspx) will be bad for SEO.
I was wondering:
a) Should I just leave everything in its current Mixed Case form and never change it?
OR
b) I can change the code so everything is lowercase from here on in, BUT is there a way of doing this so that the search engines are aware of the change and don't penalise me?
Having two versions of the same URL will cause duplicate content issues, although the search engines are generally smart enough to know that the two pages are the same.
There are two good solutions. The first is to use the canonical meta tag to specify your preferred version of the URL. With this solution, both MixedCase.aspx and mixedcase.aspx would show the same page, but the search engines know for certain which is the "correct" URL to show. Make sure you update all your links to the lowercase version.
The second solution is to use 301 Redirects. Usually this is preferred because users will always wind up at the correct page. If they decide to link to it, they're using the correct version. As Rocky says, the redirects will need to stay in place permanently if you already have links from other sites. However, technical (or time) limitations may mean you need to use the canonical method.
You are wise to be wary of having two URLs serving the same content, as you will experience duplicate content issues from the search engines.
You can transfer your URLs, and their PageRank, from mixed case to lowercase without too much of an issue by returning a 301 response from the old mixed-case URLs to the new lowercase URLs.
So you would essentially have two URLs for every page:
Old mixed case URL which 301 redirects to the lower case URL
New lower case URL which serves the content
You will need to keep the old URLs in effect for a long time, possibly permanently (especially if there are third-party links to them). Having done this myself, I can say that search engines will continue to request the old URLs for years, even when they know that they redirect to the new URLs (Yahoo, in particular, was guilty of this).
Force lowercase: redirect all mixed-case URLs with an HTTP 301 to the lowercase version.
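The question is about ASP.NET Web Forms, where the IIS URL Rewrite module is the usual place to do this, but the pattern is the same on any stack. A minimal sketch of the idea in Python/Flask (illustrative only, not ASP.NET code):

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def force_lowercase():
        # Lowercase only the path; leave the query string untouched.
        path = request.path
        if path != path.lower():
            target = path.lower()
            if request.query_string:
                target += "?" + request.query_string.decode("utf-8")
            # 301 tells search engines the lowercase URL is the permanent home.
            return redirect(target, code=301)

    @app.route("/mixedcase.aspx")
    def mixed_case_page():
        # /MixedCase.aspx now 301s here; only the lowercase URL serves content.
        return "page content"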
I have a problem with Google caching my old content URLs after I created a new website.
I have an old website whose webpages are now dead, and I have created a new website with new webpages.
When people search on Google for the old content, the old URLs appear in the search results (as they were cached) instead of the new ones, because the old content is already indexed by Google and the new pages are not indexed yet.
When people search for new content, the new URLs appear, so for the new content there is no problem; the problem is with the old content.
For that reason, I have now created pages with the old URL names that redirect to the new pages with the new URLs when people arrive looking for old content.
My question is: will what I did help the old URLs disappear from Google's cache and get the old content indexed under the new URLs instead, or should I stick with a "page not found" response?
Here's an example of the case I have:
When I search for old content, this URL appears in the search results --
www.example.com/Sectionnewsdetail.aspx?id=10132
-- which is deleted and lands on a "page not found".
So I created a webpage with the old name Sectionnewsdetail.aspx that redirects to the new content page --
http://www.example.com/Content/SectionNews.aspx?NewsID=13855
-- so whenever anyone clicks the old URL on Google, my solution redirects them to the new page.
So which option will help Google's cache forget the old URLs and index the new URLs:
keeping "page not found", or the redirect solution I explained above?
Try submitting your site again, but it could still take a week or two.
The easiest way could be adding the cross-domain rel="canonical" link element to your old website's pages. Google Tutorial
There are situations where it's not easily possible to set up redirects. This could be the case when you need to move your website from a server that does not feature server-side redirects. In a situation like this, you can use the rel="canonical" link element across domains to specify the exact URL of whichever domain is preferred for indexing. While the rel="canonical" link element is seen as a hint and not an absolute directive, we do try to follow it where possible.
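If server-side redirects are available on the new site, a permanent (301) redirect from each old URL to its new counterpart is the more direct fix for the example in the question. A minimal sketch in Python/Flask, assuming a hypothetical lookup table from old article ids to the new NewsID values:

    from flask import Flask, abort, redirect, request

    app = Flask(__name__)

    # Hypothetical mapping from old article ids to new NewsID values;
    # in practice this would come from the CMS database.
    OLD_TO_NEW_ID = {
        "10132": "13855",
    }

    @app.route("/Sectionnewsdetail.aspx")
    def old_article():
        new_id = OLD_TO_NEW_ID.get(request.args.get("id", ""))
        if new_id is None:
            # Unknown old id: a 404 (or 410) tells Google the page is gone for good.
            abort(404)
        # 301 signals that the content has moved permanently, so Google can
        # drop the old URL from its index in favour of the new one.
        return redirect(
            "http://www.example.com/Content/SectionNews.aspx?NewsID=" + new_id,
            code=301,
        )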
Our in-house-built CMS has the ability to serve descriptive URLs (Descriptive URLs vs. Basic URLs) versus basic URLs (http://test.com/index.php?id=34234). We want to know whether, other than giving a little more feedback to the crawlers out there, this means anything else.
Does having these descriptive URLs bring us other benefits?
Should we limit the size of the URL to a certain number of words?
Thanks for your time.
There are several benefits to descriptive URIs:
They can help with search engine optimization if they include relevant keywords.
URIs without query parameters are easier to cache for GET requests (query parameters can prevent caching by some caches and proxies).
They are descriptive to the user, so their location within the site is clearer to them. This is helpful if they save the link too, or give it to a friend. The web benefits from semantic content, and this is just another way to provide it.
Users may also be able to modify the URI directly, though this is a potential downside too.
It is generally good to keep the length under 256 characters due to legacy constraints, but today, the actual limit in practice is not well defined.
Descriptive URLs have real SEO benefits, as search engines give weight to keywords that appear in the URL.
There are many benefits. Not only do descriptive URLs work better for SEO, but they are oftentimes "hackable" by your end users.
https://stackoverflow.com/questions/tagged/php
That tells me pretty plainly that I'm going to find questions tagged "php". Without knowing any special rules, I could guess how to find the jQuery questions.
You will run into a limit on how much you can squeeze into a URL, but keep the URLs to core terms (like the title of an article) and you'll be fine.
One suggestion is to use these types of URLs, but have a fallback plan. For instance, the URL to this question is:
Is having a descriptive URL needed to be a web 2.0 website?
The first part of the path is 1347835, which is the question ID; the second part is the question title. The title is completely optional: it's not needed to access this page, but when you use it in links it improves the SEO for this page.
If you were to require the title to be exact, that could cause more problems than you want. Make SEO content like this optional for loading the content itself; Stack Overflow only requires the question ID, as I stated before.
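A minimal sketch of that "id is required, title is optional" routing pattern in Python/Flask (hypothetical routes, not Stack Overflow's actual code): the numeric id is what loads the content, and the descriptive slug exists only for readers and search engines.

    from flask import Flask

    app = Flask(__name__)

    # Both routes resolve by numeric id; the slug is purely descriptive.
    @app.route("/questions/<int:question_id>")
    @app.route("/questions/<int:question_id>/<slug>")
    def show_question(question_id, slug=None):
        # Look up the question by id only, so an outdated or missing slug
        # still reaches the right page. (The lookup is a placeholder here.)
        return f"question {question_id}"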