Disallow a group of folders using robots.txt

I need to disallow access to some folders without revealing in robots.txt which folders they are.
In my case I have three folders: /_private1, /_private2 and /_private3.
If I use one of the rules below, will my folders be protected against Google and other search engines? How should I do that?
Disallow: /_*
Disallow: /_

To disallow any directory or file whose name begins with an underscore:
User-agent: *
Disallow: /_
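For illustration, this prefix rule matches any path that begins with /_ (the example paths below are hypothetical):
# Blocked for compliant crawlers:
#   /_private1/index.html
#   /_private2/
# Still crawlable:
#   /public/page.html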
You should be aware that this does not completely guarantee that these directories will never show up on any search engine. It prevents robots.txt-compliant robots (which include all major search engines) from crawling them, but they could still theoretically show up in search results if someone decides to link to them. If you really need to keep this information private, best practice is to put it behind a password.
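If you're on Apache, a minimal password-protection sketch using HTTP Basic authentication could go in an .htaccess file inside each private folder (the AuthUserFile path here is an assumption; create that file with the htpasswd tool):
AuthType Basic
AuthName "Private area"
# Password file created beforehand with: htpasswd -c /etc/apache2/.htpasswd someuser
AuthUserFile /etc/apache2/.htpasswd
Require valid-user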

Drupal 8 - How to block search engines from reading or indexing a specific content type (Resource)

I don't want resource content type nodes to show up in Google search results.
I know I can add a URL to robots.txt if I need to restrict a particular piece of content.
But how can I restrict all the nodes of one particular content type in Drupal 8 from being crawled by search engines like Google?
The only way that I know of to block search engines is through robots.txt.
With Drupal, in order to block an entire type of content, you must make sure there are no ad-hoc aliases. Check /admin/config/search/path. Any abnormal aliases to instances of your content will allow the search engines to bypass general rules that you set.
Then, add a rule to robots.txt to disallow the pattern for that content.
Example:
Disallow: /node/
If you have some specific subset of content that you would like to block, consider using Pathauto to create path patterns that will allow you to easily target those subsets with robots.txt rules.
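As a hedged sketch: if you used Pathauto to alias all Resource nodes under a common prefix such as /resource/ (that prefix is an assumption, not a Drupal default), a single rule would then cover the whole content type:
User-agent: *
# Matches every node aliased under /resource/ by the Pathauto pattern
Disallow: /resource/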
If you're able to install the Metatag module, add a metatag field to the content type that you're trying to exclude from Google. Then, while editing each page, make sure the following option is checked in the metatag section:
Prevents search engines from indexing this page.
If you have too many existing pages to edit by hand, you would need to write a hook_update implementation to apply the change to all of them.
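For reference, when the "Prevents search engines from indexing this page" option is checked, the module emits a standard robots meta tag in the page head, along these lines (the exact markup may vary by module version):
<meta name="robots" content="noindex" />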

Remove multiple indexed URLs (duplicates) with redirect

I am managing a website that has only about 20-50 pages (articles, links, etc.). Somehow, Google indexed over 1000 links (duplicates: the same page with a different string in the URL). I found that those links contain ?date= in the URL. I already blocked them by writing Disallow: *date* in robots.txt, made an XML sitemap (which I did not have before), placed it in the root folder and submitted it to Google Webmaster Tools. But the problem remains: the links are (and probably will stay) in search results. I would happily remove the URLs in GWT, but it only removes one link at a time, and removing more than 1000 one by one is not an option.
The question: Is it possible to make dynamic 301 redirects from every page that contains ?date= in the URL to the original one, and how? I am thinking that Google will re-crawl those pages, follow the redirect to the original ones, and delete those numerous pages from search results.
Example:
bad page: www.website.com/article?date=1961-11-1, plus n similar pages with a different "date"
good page: www.website.com/article
automatically redirect all bad pages to good ones.
I have spent a whole work day trying to solve this problem; it would be nice to get some support. Thank you!
P.S. As far as I can tell this is a coding question and the right one to ask on Stack Overflow, but if I am wrong (forgive me), redirect me to the right place where I can ask it.
You're looking for the canonical link element; that's the way Google suggests solving this problem (here's the Webmasters help page about it), and it's used by most if not all search engines. When you place an element like
<link rel='canonical' href='http://www.website.com/article'>
in the <head> of the page, the URI in the href attribute will be considered the 'canonical' version of the page: the one to be indexed, and so on.
For the record: if the duplicate content is not an HTML page (say, it's a dynamically generated image), and supposing you're using Apache, you can use .htaccess to redirect to the canonical version. Unfortunately the Redirect and RedirectMatch directives don't handle query strings (they match only the URL path), but you can use mod_rewrite to strip parts of the query string. See, for example, this answer for a way to do it.
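As a rough sketch of that mod_rewrite approach (assumes Apache with mod_rewrite enabled; the date parameter name is taken from the question):
RewriteEngine On
# If the query string contains a "date" parameter...
RewriteCond %{QUERY_STRING} (^|&)date= [NC]
# ...301-redirect to the same path; the trailing "?" in the substitution
# strips the query string from the target URL.
RewriteRule ^(.*)$ /$1? [R=301,L]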

Will this robots.txt code disallow every search engine?

I would like to know if this code prevents every search engine from scanning my directory.
User-agent: *
Disallow: /
Also, is this code still up to date with the new HTML5 standard?
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
Is it really useful, or is it not needed anymore?
No, it'll only stop those that obey the robots.txt protocol (there exist search engines that intentionally disobey robots.txt to find "hidden" stuff).
It will, however, stop the vast majority of search engines that your average consumer would use from seeing it.

When should I use a trailing slash in my URL?

When should a trailing slash be used in a URL? For example - should my URL look like /about-us/ or like /about-us?
I am fully aware of the SEO-related issues (duplicate content and the canonical thing); here I'm only trying to figure out which one I should use in the context of serving pages correctly.
For example, my colleague thinks that a trailing slash at the end means it's a "folder", a "directory", so this is not the correct style. But I think that without a slash at the end it's not quite correct either, because it almost looks like a folder, yet it isn't one, and it's not a normal file either, but a filename without an extension.
Is there a proper way of knowing which to use?
It is not a question of preference. /base and /base/ have different semantics. In many cases, the difference is unimportant. But it is important when there are relative URLs.
child relative to /base/ is /base/child.
child relative to /base is (perhaps surprisingly) /child.
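A quick illustration in HTML (the paths are hypothetical):
<!-- in a page served at /base/ -->
<a href="child">a link</a>  <!-- resolves to /base/child -->
<!-- in a page served at /base -->
<a href="child">a link</a>  <!-- resolves to /child -->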
In my personal opinion, trailing slashes are misused.
Basically, the URL format came from the UNIX format of files and folders; it was later used on DOS systems and finally adapted for the web.
A typical URL for this book on a Unix-like operating system would be a file path such as file:///home/username/RomeoAndJuliet.pdf, identifying the electronic book saved in a file on a local hard disk.
Source: Wikipedia: Uniform Resource Identifier
Another good source to read: Wikipedia: URI Scheme
According to RFC 1738, which defined URLs in 1994, when resources contain references to other resources, they can use relative links to define the location of the second resource as if to say, "in the same place as this one except with the following relative path". It went on to say that such relative URLs are dependent on the original URL containing a hierarchical structure against which the relative link is based, and that the ftp, http, and file URL schemes are examples of some that can be considered hierarchical, with the components of the hierarchy being separated by "/".
Source: Wikipedia Uniform Resource Locator (URL)
Also:
That is the question we hear often. Onward to the answers! Historically, it's common for URLs with a trailing slash to indicate a directory, and those without a trailing slash to denote a file:
http://example.com/foo/ (with trailing slash, conventionally a directory)
http://example.com/foo (without trailing slash, conventionally a file)
Source: Google WebMaster Central Blog - To slash or not to slash
Finally:
A slash at the end of the URL makes the address look "pretty".
A URL without a slash at the end and without an extension looks somewhat "weird".
You would never name your CSS file (for example) http://www.sample.com/stylesheet/, would you?
But I'm a proponent of web best practices regardless of the environment.
A URL without a trailing slash and without an extension can be wonky and unclear, just as you said.
I'm always surprised by the extensive use of trailing slashes on non-directory URLs (WordPress among others). This really shouldn't be an either-or debate because putting a slash after a resource is semantically wrong. The web was designed to deliver addressable resources, and those addresses - URLs - were designed to emulate a *nix-style file-system hierarchy. In that context:
Slashes always denote directories, never files.
Files may be named anything (with or without extensions), but cannot contain or end with slashes.
Using these guidelines, it's wrong to put a slash after a non-directory resource.
That's not really a question of aesthetics, but indeed a technical difference. Thinking of it in terms of directories is totally correct and pretty much explains everything. Let's work it out:
You are back in the stone age now or only serve static pages
You have a fixed directory structure on your web server and only static files like images, HTML and so on; no server-side scripts whatsoever.
A browser requests /index.htm; it exists and is delivered to the client. Later you have - let's say - lots of DVD movies reviewed and an HTML page for each of them in the /dvd/ directory. Now someone requests /dvd/adams_apples.htm and it is delivered, because it is there.
One day, someone requests just /dvd/, which is a directory, and the server tries to figure out what to deliver. Leaving aside access restrictions and the like, there are two possibilities: show the user the directory contents (I bet you have already seen this somewhere) or show a default file (in Apache, DirectoryIndex sets the file that Apache will serve if a directory is requested).
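For example, the relevant directive is a one-liner in the Apache configuration (these file names are conventional defaults, not something from the question):
# Serve index.htm or index.html when a directory itself is requested
DirectoryIndex index.htm index.html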
So far so good, this is the expected case. It already shows the difference in handling, so let's get into it:
At 5:34am you made a mistake uploading your files
(Which is, by the way, completely understandable.) So, you did something entirely wrong and, instead of uploading /dvd/the_big_lebowski.htm, you uploaded that file as dvd (with no extension) to /.
Someone bookmarked your /dvd/ directory listing (of course you didn't want to create and constantly update that nifty index.htm) and is visiting your website. The directory content is delivered - all fine.
Someone heard of your list and types /dvd. And now it's screwed up. Instead of your DVD directory listing, the server finds a file with that name and delivers your Big Lebowski file.
So, you delete that file and tell the guy to reload the page. Your server looks for the /dvd file, but it is gone. Most servers will then notice that there is a directory with that name and tell the client that what it was looking for is indeed somewhere else. The response will most likely be:
Status Code: 301 Moved Permanently with Location: http://[...]/dvd/
So, totally ignoring what you think about directories or files, the server can only handle such requests and - unless told otherwise - decides for you what "slash or no slash" means.
Finally after receiving this response, the client loads /dvd/ and everything is fine.
Is it fine? No.
"Just fine" is not good enough for you
You have some dynamic site where everything is passed to /index.php and processed there. Everything worked quite well until now, but the whole thing starts to feel slower, and you investigate.
Soon, you'll notice that /dvd/list is doing exactly the same thing: redirecting to /dvd/list/, which is then internally translated into index.php?controller=dvd&action=list. One additional request - but it gets even worse! customer/login redirects to customer/login/, which in turn redirects to the HTTPS URL of customer/login/. You end up with tons of unnecessary HTTP redirects (= additional requests) that make the user experience slower.
Most likely you have a default directory index here, too: index.php?controller=dvd with no action simply internally loads index.php?controller=dvd&action=list.
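A minimal sketch of such an internal mapping with Apache mod_rewrite (the controller/action scheme here is hypothetical):
RewriteEngine On
# Internally rewrite /dvd/list/ to the front controller; no external redirect
RewriteRule ^([a-z]+)/([a-z]+)/$ /index.php?controller=$1&action=$2 [L,QSA]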
Summary:
If it ends with / it can never be a file. No server guessing.
Slash and no slash have entirely different meanings. There is a technical/resource difference between the two, and you should be aware of it and use it accordingly. Yes, the server most likely loads /dvd/index.htm - or the correct script - when you say /dvd, but it does so despite your request, not because it was the right one. The right request would have been /dvd/.
Omitting the slash when you actually mean the slashed version costs you an additional HTTP request as a penalty, which is always bad (think of mobile latency) and carries more weight than a "pretty URL" - especially since crawlers are not as dumb as SEOs believe or want you to believe ;)
When you make your URL /about-us/ (with the trailing slash), it's easy to start with a single file index.html and later expand it, adding more files (e.g. our-CEO-john-doe.jpg) or even building a hierarchy under it (e.g. /about-us/company/, /about-us/products/, etc.) as needed, without changing the published URL. This gives you great flexibility.
Other answers here seem to favor omitting the trailing slash. However, there is one case in which a trailing slash helps with search engine optimization (SEO): when your document has what appears to be a file extension that is not .html. This becomes an issue for sites that rate websites, which might have to choose between these two URLs:
http://mysite.example.com/rated.example.com
http://mysite.example.com/rated.example.com/
In such a case, I would choose the one with the trailing slash. That is because the .com extension is an extension for Windows executable command files. Search engines and virus checkers often dislike URLs that appear that they may contain malware distributed through such mechanisms. The trailing slash seems to mitigate any concerns, allowing the page to rank in search engines and get by virus checkers.
If your URLs have no . in the file portion, then I would recommend omitting the trailing slash for simplicity.
Who says a file name needs an extension? Take a look at a *nix machine sometime...
I agree with your colleague: no trailing slash.
From an SEO perspective, choosing whether or not to include a trailing slash at the end of a URL is irrelevant. These days, it is common to see examples of both on the web. A site will not be penalized either way, nor will this choice affect your website's search engine ranking or other SEO considerations.
Just choose a URL naming convention you prefer, and include a canonical meta tag in the <head> section of each webpage.
Search engines may treat a single webpage as two separate, duplicate URLs when they encounter it both with and without the trailing slash, i.e. example.com/about-us/ and example.com/about-us.
It is best practice to include a canonical meta tag on each page because you cannot control how other sites link to your URLs.
The canonical tag looks like this: <link rel="canonical" href="https://example.com/about-us" />. Using a canonical meta tag ensures that search engines only count each of your URLs once, regardless of whether other websites include a trailing slash when they link to your site.
The trailing slash does not matter for your root domain or subdomain. Google sees the two as equivalent.
But trailing slashes do matter for everything else because Google sees the two versions (one with a trailing slash and one without) as being different URLs.
Conventionally, a trailing slash (/) at the end of a URL meant that the URL was a folder or directory.
A URL without a trailing slash at the end used to mean that the URL was a file.

How to block my website from the entire web

I have a website and it should not be accessible to anyone without the URL, nor to any search engines.
No search engine should be aware of my website; only people with the link should be able to access it. Can someone suggest the best approach, since I'm going to share my office data on it?
You can prevent most search engines from indexing your site with a robots.txt file. More details here: http://www.robotstxt.org/
However, this is not very secure. Some robots ignore robots.txt. The best way to restrict access is either to require a user to log in before entering the site, or use a firewall to allow only that user's IP address.
You need to add a robots.txt file to the root folder of your website indicating that search engine spiders should not index your website.
http://www.robotstxt.org/robotstxt.html
But it's left to the search engines to read the file; most popular search engines do honor it. Another method is to not have an index.htm or default.htm in your website. Even if one exists, remove any links to internal pages. This way spiders will never learn the site structure of your website.
Wow. OK:
1) robots.txt
http://www.robotstxt.org/robotstxt.html
2) Authentication. If you're using Apache, password-protect the site
3) Ensure no one ever links to it from anywhere.
4) Consider a different alternative, like dropbox.
http://www.dropbox.com/
