Are .cshtml (Razor) indexable by Google? [closed] - asp.net-mvc

I created a new website with MVC5. The pages (.cshtml) in my website are dynamically generated. I want to know whether pages built from .cshtml files are indexable by the Google crawler.

First, .cshtml is never served directly, so in one sense, no they will never be indexed by Google or any other search engine, because they cannot be seen by Google or any other search engine.
However, those .cshtml files are utilized by controller actions to return an HTML response. As a result, any route indexed by Google that leads to an action which uses one of your .cshtml files will allow Google to index the rendered contents of that file. This is not the same thing as Google directly indexing the physical file, though.

MVC does not serve .cshtml files. It processes them and converts them into plain HTML output. This makes it possible for that output to be completely compliant with web standards and cross-platform (or not, depending on whether you use the built-in HTML helpers or go out of bounds and out of compliance). This includes, but is not limited to, being able to be indexed by search engines such as Google.
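To make the distinction concrete, here is a minimal MVC5 sketch (the controller name, route, and sample data are hypothetical): the crawler never requests Index.cshtml itself; it requests a routed URL such as /products, the action runs, and the server returns only the rendered HTML.

    using System.Web.Mvc;

    // Hypothetical MVC5 controller; Googlebot sees only the HTML this action returns.
    public class ProductsController : Controller
    {
        // Requested as GET /products via routing, never as a physical .cshtml file.
        public ActionResult Index()
        {
            var products = new[] { "Widget", "Gadget" };  // normally loaded from a database
            return View(products);  // renders Views/Products/Index.cshtml into plain HTML
        }
    }

Whether that response gets indexed then depends on your routes, links, and robots rules, not on the .cshtml extension.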

Related

Custom CMS features in ASP.NET MVC site [closed]

We have a requirement for our partners to build a CMS in which some of the pages on the site should be configurable by them using the admin panel. So we are planning to create a static website with a pre-defined layout and pages, and to replace that content when partner users make changes in the admin panel.
Now the problem is that this application will be used by multiple partners, and if one partner edits information through the admin panel, it should not affect the other partners' custom pages.
The solution we are considering is to replicate the same code on multiple domains for each set of partners and replace the content. Can somebody suggest how we can do this on one domain without replicating the code again and again?
We resolved it by creating a sub-domain for each partner, creating a standard HTML page with a fixed layout, and saving the customizable information for each partner (text, image paths, etc.) in the database. Then, depending on the partner, we get the dynamic content from the database and replace it on the page using jQuery. So there is no need to duplicate the code and no manual effort.
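On the server side, a minimal sketch of that approach in ASP.NET MVC might look like the following (the controller, route, and PartnerContentRepository are hypothetical): one shared code base resolves the partner from the sub-domain and returns that partner's stored content as JSON for the page's jQuery to inject.

    using System.Web.Mvc;

    // Sketch only; PartnerContentRepository is a hypothetical data-access class.
    public class ContentController : Controller
    {
        // e.g. GET /content/home, requested by the page's jQuery after load
        public ActionResult Home()
        {
            // "acme" from acme.example.com - same code, different sub-domain per partner
            var partner = Request.Url.Host.Split('.')[0];

            // text, image paths, etc. saved per partner in the database
            var content = PartnerContentRepository.GetHomeContent(partner);

            return Json(content, JsonRequestBehavior.AllowGet);
        }
    }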

Why do some large websites use .html extension? [closed]

I'm just curious... why do some large sites (high traffic, a lot of data) use the .html extension on their pages even though it is clear that they are interpreted by PHP on the server side?
For example metrolyrics.com/top100.html
It's pretty clear that it uses PHP on the back-end, but the pages still have a .html suffix.
Is it better for SEO? Or am I wrong about the back-end, and these pages really are static HTML as their extension says?
Every opinion is welcomed. Thanks! :)
Metrolyrics might not necessarily be using PHP for its back-end. It could be using other server side languages such as Ruby or Python.
I'd say one of the main reasons for not exposing PHP in a website's URLs is protection: it is more difficult for people to hack a website if they don't know what language is being used on the back-end.
Secondly, websites tend to look more professional if they don't have an extension, and it raises fewer questions for end users. People are more used to seeing .html at the end of a URL, so they may be more confused if they see .php instead.
It was a well-known convention back in the day, when static HTML pages were highly regarded for SEO. Basically, what they did was keep thousands of these generated HTML pages on the server, making the website look like a content monster and thus elevating its PageRank and the number of Google crawls.
It's also a good way to cache pages and decrease server calls.

Is Polymer SEO friendly? [closed]

EDIT:
This is a very old question, from when escaped_fragment was necessary for search engines. Nowadays search engines understand JavaScript very well, so this question has become largely irrelevant.
===========
I was wondering how SEO friendly Polymer could be.
Since all the code is fully dynamic, like Angular, how can search engines pick up the information on the page? When doing things in Angular, I also had a really hard time making it SEO friendly.
Will there be a tool to generate the escaped_fragment automatically to feed the search engines?
I guess Google may have thought of the solution, but I wasn't able to find it (even on Google).
According to the Polymer FAQ all we have is
Crawlers understand custom elements? How does SEO work?
They don’t. However, search engines have been dealing with heavily AJAX-based applications for some time now. Moving away from JS and being more declarative is a good thing and will generally make things better.
http://www.polymer-project.org/faq.html#seo
Not very helpful
This question has bothered me also. The Polymer team has this to say about it; it looks promising!
UPDATE
I also figure it's worth adding some context from the conversation on the Polymer list, with some helpful information from Eric Bidelman as to the current status.
Initial examination of the structure of the Polymer site suggests that it serves up static content with shadow-DOM content already inlined in the page. Each HTML file can be loaded from the server directly via HTTP GET, and subsequent navigation uses pushState (documentation) to inject pages into the current DOM if pushState and JavaScript are supported.
It's recommended to use pushState over _escaped_fragment_, since it's slightly less messy, but you'll still need to do regular templating on the server. See The Moz Blog for more information on this.
DISCLAIMER
I may have missed or misinterpreted some things here, and this is just a quick peek at the guts of the page, but hopefully this helps.

Should I include file extensions in my webpage URL paths? [closed]

I've been trying to research whether or not I should include file extensions in the URL paths on my website (and whether or not it is detrimental to use relative URLs).
Some of the sites I have visited for this research are listed below:
http://css-tricks.com/snippets/htaccess/remove-file-extention-from-urls/
http://www.seo-theory.com/2011/11/30/how-do-pretty-urls-help-search-engine-optimization/
However, none of them have really answered my questions.
As for whether or not to include file extensions:
Assuming that all of the links are NOT broken and I have constructed them properly, are there any downsides to including the file extension when linking to other pages within my site? Originally, I thought I should include them just for specificity's sake, but now I know it doesn't make as pretty a URL.
Does this affect SEO greatly?
Should I go back and erase all .cshtml, .jpg, etc. from my URL paths? Should I also remove the extensions from the tags that link to my external .js and .css files?
If it matters, the context of this question is coming from a C#.net WebMatrix environment.
You can erase the .cshtml part of your URLs if you like (I generally do), but you should not erase the extensions of images, style sheets, JavaScript files, etc. The .cshtml files will be found by the Web Pages framework through its rudimentary routing system, but that only applies to .cshtml and .vbhtml files. If you remove the extensions from other types of file, they will not be found. And in any event, it would be pointless: it's not as if you want Google to index your .css file (which it doesn't).
As to whether removing the .cshtml extension will affect SEO - no, it will make no difference. If it did, you would easily be able to find a lot of advice to that effect.
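For illustration, here is a rough sketch of what that looks like in a Web Pages (WebMatrix) layout; the file and folder names are hypothetical. Links to pages drop the .cshtml extension and let routing resolve them, while static assets keep their extensions so the web server can find them.

    @* Sketch only; paths are hypothetical. Razor resolves ~/ in these attributes. *@
    <head>
        <link rel="stylesheet" href="~/Content/site.css" />   @* keep .css *@
        <script src="~/Scripts/site.js"></script>             @* keep .js  *@
    </head>
    <body>
        <a href="~/About">About</a>                 @* routing finds About.cshtml *@
        <img src="~/Images/logo.jpg" alt="Logo" />  @* keep .jpg *@
    </body>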

What's the effect of this kind of redirect on SEO? [closed]

If I register a domain abc.com but point it via a redirect to another domain's subfolder like def.com/abc, what's the effect on SEO? Will Google index both abc.com and def.com/abc and display them for the keyword "abc"?
Is there any way to avoid this, or to promote abc.com so it displays more prominently than def.com/abc?
What's the best practice in this scenario?
Thanks for the help.
I'll refer to Google's official SEO guide. Here's a link to the full guide.
Provide one version of a URL to reach a document
To prevent users from linking to one version of a URL and others linking to a different version (this could split the reputation of that content between the URLs), focus on using and referring to one URL in the structure and internal linking of your pages. If you do find that people are accessing the same content through multiple URLs, setting up a 301 redirect from non-preferred URLs to the dominant URL is a good solution for this. You may also use canonical URLs or use the rel="canonical" link element if you cannot redirect.
Since Google relies on links to your pages, you're going to have an issue if people link to one URL versus another. So in short, yes, that will probably have an effect on your optimization.
Edit: Google's algorithm may be smart enough to recognize your redirect and follow it when indexing your pages. Probably your best outcome will be to not do the redirect and instead point the domain directly at your content. The second best will be to use the HTTP 301 status code.
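As a rough illustration of that second option in ASP.NET MVC (the controller name and target URL are hypothetical), the non-preferred address issues an HTTP 301 to the single URL you want treated as dominant:

    using System.Web.Mvc;

    // Sketch only; names and the target URL are hypothetical.
    public class RedirectController : Controller
    {
        // Answers requests arriving on the non-preferred address with a
        // 301 Moved Permanently, so links to either URL consolidate on one.
        public ActionResult Abc()
        {
            return RedirectPermanent("https://def.com/abc");
        }
    }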
