Google Plus Authorship not working

I know there is already a topic on this, but mine is a little different.
I have added all the markup required for Google+ authorship verification. It has been a month now and my picture still doesn't show up next to my site in Google search results.
my website: http://www.ajinkyaxjs.com
I have added the following to my HTML page:
<link rel="author" href="https://plus.google.com/+AjinkyaBorade?rel=author"/>
<link rel="publisher" href="https://plus.google.com/+AjinkyaBorade?rel=publisher"/>

The two-way connection is not validating with Google. Use Google's Structured Data Testing Tool to verify that the markup and the links in both directions are correct.
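For reference, a minimal sketch of the two halves of that connection (the profile URL is copied from the question; the byline text is only an example):

<!-- In the page <head>; the ?rel=author query string is only needed on visible <a> links -->
<link rel="author" href="https://plus.google.com/+AjinkyaBorade"/>
<!-- Or as a visible byline link in the page body -->
<a href="https://plus.google.com/+AjinkyaBorade?rel=author">Ajinkya Borade on Google+</a>

The other half lives on the Google+ profile itself: its "Contributor to" section has to link back to www.ajinkyaxjs.com, otherwise the testing tool will keep reporting the authorship as unverified.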

Related

Google Adwords conversion is not tracking

I have a problem with Google AdWords conversion tracking.
I've set up the tags, and I can see with Google Tag Assistant that the global site tag is working. When I check out, the order shows up correctly in Google Tag Assistant with the amount paid etc., but nothing shows up in Google AdWords.
I've set up the global site tag in my header, between <head> and </head>, and the conversion code on my order-confirmation.tpl page (in PrestaShop 1.7.2.1).
See the picture from Google Tag Assistant here.
What can the problem be? I've searched the entire web but I can't seem to find the answer.
Hope you can help me out! :-)
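For comparison, a sketch of what the two pieces usually look like; the AW- ID, conversion label, amount and currency below are placeholders, not the asker's real values:

<!-- Global site tag (gtag.js), placed between <head> and </head> on every page -->
<script async src="https://www.googletagmanager.com/gtag/js?id=AW-123456789"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'AW-123456789');
</script>

<!-- Conversion event, placed only on the order confirmation template -->
<script>
  gtag('event', 'conversion', {
    'send_to': 'AW-123456789/AbC-D_efG-h12_34-567',  // conversion ID/label from the AdWords UI
    'value': 99.0,
    'currency': 'EUR',
    'transaction_id': '12345'
  });
</script>

If the event fires in Tag Assistant but nothing appears in AdWords, also keep in mind that conversions can take several hours, sometimes up to a day, to appear in the reports.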

Why do sites still use <meta name="keywords">? [duplicate]

This question already has answers at: Keywords meta tag: Useful or time waster? (closed as a duplicate)
The keywords meta tag seems like a staggering dinosaur from the early days of trying (and failing) to trick Google. Search engines naturally prioritize the words users can actually read, since users don't want information they can't see, right?
So why do Tumblr and YouTube automatically insert keyword meta tags?
A YouTube watch page:
<meta name="keywords" content="sonic the hedgehog, final fantasy, mega man x, i swear i am not abusing tags, newgrounds, flash">
Tumblr's official staff blog:
<meta name="keywords" content="features,HQ Update,Tumblr Tuesday,Esther Day" />
In both cases, the keywords are taken from tags the user entered explicitly. YouTube takes them from whatever tags the uploader specified, and Tumblr takes the first 5 post tags on the page. (Tumblr even inserts these tags automatically on every blog page, with no way to opt out.)
There must be some reason they go through this trouble, right? Is it for older or smaller search engines? Internal analytics? I can't imagine it's an enormous strain on their servers, but its existence shows they value something highly enough to accept the small additional load.
Firstly, it's not much trouble: the tags are already defined. Secondly, just because Google won't rely on the metadata exclusively doesn't mean that Google or other sites can't use it at all. It's provided in an easy-to-read place for programs that need it. Parsing HTML can be hard, especially when a site is constantly changing, so providing a consistent place for the tags at little to no effort is simply something they do.
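As an illustration (a hypothetical template fragment, not Tumblr's or YouTube's actual code), emitting the tag costs roughly one line once the tags are already loaded for the page:

<?php
// Hypothetical: $postTags holds the tags already fetched for this page,
// e.g. the first five post tags Tumblr would show.
$postTags = array('features', 'HQ Update', 'Tumblr Tuesday', 'Esther Day');
?>
<meta name="keywords" content="<?php echo htmlspecialchars(implode(',', $postTags)); ?>" />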

Strange google result listing, invalid URL created

It would be great if you guys could shed some light on this, as it has baffled me.
I was asked by a client if I could make his website rank top of Google for the search term for his comedy night, "sketchercise". I simply changed the site-wide title tag in the header from "Allnutt and Simpson" to "Allnutt and Simpson - Sketchercise # Ginglik - Sketch Duo". It did the trick, and the site now comes up at the top of the Google listings when you type in "sketchercise". However, the result links to this very strange URL:
http://www.allnuttandsimpson.com/index.php/videos/
This is the link to the google search result too:
http://www.google.co.uk/search?sourceid=chrome&ie=UTF-8&q=sketchercise
This link is invalid; it doesn't make any sense. I guess it has something to do with the use of hash fragments on the AJAX-driven site, but before I changed the title tag, the result linked to the site fine using the # fragments. What is the deal with this slash?
The strangest part is that the valid URL for the videos page on that site is /index.php#vidspics; I have never used the word "videos" in a URL!
If anyone can explain the cause of this, or just help me stop it from happening, I'd be very grateful. I realise this is an SEO question and I hate that stuff generally, but I hope you can see this is a bit of a strange case!
Just to compare, if you google "allnutt and simpson", it works just fine: it links to the site and all of its pages as .php pages (and then my JS converts them to hash fragments to keep things clean).
It's because there must be a folder called 'videos' among your hosted files; use an FTP client and check.
Google crawls every folder and file unless you tell it not to; look into robots.txt to learn how to keep things out of the index.
Also ask Google to remove that result once you have solved this.
Finally, this behaviour is not related to hash fragments; those are just references your JavaScript uses to display the appropriate content on the page.
Not sure why it's listed like this, but the only way to stop that page from appearing is to use a Google Webmaster Tools account for this website and make sure the crawlers can no longer find this link. The alternative is to have the site admin put this tag, <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">, in the header when isset($_REQUEST['videos']) is true.
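A rough sketch of that conditional noindex; with a URL like /index.php/videos/ the "videos" part arrives as path info rather than a query parameter, so checking the request URI is the safer test (this is an assumption about the site's setup):

<?php
// Emit noindex/nofollow only when the stray /index.php/videos URL is requested (sketch).
if (strpos($_SERVER['REQUEST_URI'], '/index.php/videos') === 0) {
    echo '<meta name="robots" content="noindex, nofollow" />';
}
?>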
The slashes in the address are the parsed form of www.allnuttandsimpson.com/index.php?=videos. You can have the web server turn the PHP query parameters into slashes to make the links look pretty.
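A minimal .htaccess sketch of that kind of rewrite, assuming Apache with mod_rewrite enabled and a hypothetical pg parameter:

RewriteEngine On
# Serve the pretty /videos/ path from index.php internally (the parameter name is made up)
RewriteRule ^videos/?$ index.php?pg=videos [L,QSA]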
The best option for correct results is to create a sitemap and submit it to Google Webmaster Tools (https://www.google.com/webmasters/tools/) for that site. You will need verified access.
Oh, I forgot: the sitemap will let Google see all the pages you want it to list, so use it for the major pages, like those in the main menu. Removing links you don't want indexed requires a robots.txt file in the root directory of the site.
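A minimal sitemap sketch, listing only the pages you actually want indexed (the URLs below are examples):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.allnuttandsimpson.com/</loc></url>
  <url><loc>http://www.allnuttandsimpson.com/index.php</loc></url>
</urlset>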

Search engines ignoring meta description content and showing footer

I have a very simple site that consists mostly of images, a login form and a link to sign up. No actual text exists in the body except for the footer, which shows a link to the usage terms and the copyright notice.
My site currently shows up in search engine results with the footer content as the snippet instead of what I put in the <meta name="description"...> tag. Why is this?
How can I stop search engines from showing my site with the footer content, or at least get them to show the meta description first? Do I need to put some text in a title attribute or alt attribute somewhere?
As +Filburt pointed out, you could add your site to Webmaster Tools, which will offer you valuable information about your site's presence on the web and in the Google search results. It may also give you hints about what we think of your meta descriptions :)
Generally, you will want to
write the meta description to be as natural as possible and don't stuff keywords into it,
describe the page's content accurately in this tag,
and have a unique meta description for each page.
While we can't guarantee that the meta description you provide will be used as the search result snippet, following the above tips will greatly increase the chance.
Here is some more information about the meta description tag: http://www.google.com/support/webmasters/bin/answer.py?answer=35264
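For example (the wording is only illustrative), a page-specific description might look like:

<meta name="description" content="Log in to manage your photo galleries, or sign up for a free account in under a minute." />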
Using <meta name="description" /> works to some extent, but Google will complain (and choose to ignore it) when every page has the same description.
If you are interested in how Google treats your site, you could sign up for their Webmaster Tools; they offer a good starting point for SEO-ing your site.
You could add content that is invisible to your visitors, but Google checks for this and treats hidden content as cheating for PageRank, because it used to be a common SEO technique.
Meta tags were a failure and have been broadly ignored ever since the Google era began.
The problem was, humans would put stale, inaccurate, or irrelevant information in the meta tags. This was fifteen years ago when cataloging the Internet still seemed feasible.
Google came along and decided that what a web page actually says was more useful. Everybody else followed suit shortly after.
Now people are trying human-authored metadata again; they're calling it the "semantic web". My hopes are not high.

Google sees something that it shouldn't see. Why?

For some mysterious reason, Google has indexed both of these addresses, which lead to the same page:
/something/some-text-1055.html
and
/index.php?pg=something&id=1055
(A short note: the site has had friendly URLs since its launch. I have no idea how Google found the "index.php?" URL; the unfriendly URLs exist only in the content management system, which is password-restricted.)
What can I do to resolve the situation? (I have around 1,000 pages that are double-indexed.) Somebody told me to use "Disallow: /index.php?" in the robots.txt file.
Right or wrong? Any other suggestions?
You'd be surprised at how pervasive and quick the Google bots are at indexing site content. That, combined with the tendency of many CMSs to create unintended pages and links, makes it likely that those links were exposed at some point, which is the most likely culprit. It's also possible that your administration area isn't as secure as you think and the Google bot got in that way.
The well-behaved, and Google-recommended, things to do here are:
If possible, create 301 redirects from your query-string-style URLs to your canonical-style URLs. That's you saying "hey there, web bot/browser, the content that used to be at this URL is now at this other URL".
Block the query-string content in your robots.txt (see the sketch after this answer). That's like asking the spiders and other automated programs, "Hey, please don't look at this stuff. These aren't the URLs you're looking for."
Google apparently now lets you specify a canonical URL via a <link /> tag at the top of your page. Consider adding these in.
As to whether doing the well-behaved things is the "right" thing to do re: Google rankings... who knows. Only "Google" knows how their algorithms work now and will work in the future, and by "Google" I mean a bunch of engineers and executives with conflicting goals on how search should work.
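A sketch of the robots.txt entry mentioned above, matching the query-string URLs from this question:

User-agent: *
# Block the query-string duplicates; the friendly /something/some-text-1055.html URLs stay crawlable
Disallow: /index.php?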
Google now offers a way to specify a page's canonical URL. You can use the following code in your HTML to tell Google your canonical URL:
<link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish" />
You can read more about canonical URLs in Google's blog post on the subject: http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
According to the blog post, Ask.com, Microsoft Live Search and Yahoo! all support the canonical tag.
If you use sitemap generators to submit to search engines, you'll want to exclude those URLs there as well. The sitemaps are likely where Google got your links, along with crawling your folders and checking your logs.
Better to check which URI was requested ($_SERVER['REQUEST_URI']) and redirect if it was /index.php.
Changing robots.txt will not help, since the page is already indexed.
The best option is a permanent redirect (301).
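A minimal PHP sketch of that 301, assuming index.php receives pg and id and that a hypothetical helper friendly_url() can rebuild the /something/some-text-1055.html form from them:

<?php
// Early in index.php: send the old query-string form permanently to the friendly URL (sketch).
if (isset($_GET['pg'], $_GET['id']) && strpos($_SERVER['REQUEST_URI'], '/index.php') === 0) {
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: ' . friendly_url($_GET['pg'], (int) $_GET['id'])); // friendly_url() is hypothetical
    exit;
}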
If you want to remove a page once it has been indexed by Google, the only way, more or less, is to make it return a 404 Not Found status.
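For instance, a bare-bones sketch:

<?php
// Answer the retired URL with a 404 so Google eventually drops it from the index.
header('HTTP/1.0 404 Not Found');
exit;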
Is it possible you're posting a form to a similar URL and Google is simply picking it up from the source?
