Why do some large websites use .html extension? [closed]

I'm just curious... why do some large sites (high traffic, lots of data) use the .html extension on their pages even though it's clear that the pages are interpreted by PHP on the server side?
For example metrolyrics.com/top100.html
It's pretty clear that it uses PHP on the back-end, but the pages still have a .html suffix.
Is it better for SEO? Or am I wrong about the back-end, and these pages really are static HTML files, as their extension says?
Every opinion is welcome. Thanks! :)

Metrolyrics might not necessarily be using PHP for its back-end; it could be using another server-side language such as Ruby or Python.
I'd say one of the main reasons for not having .php in a website's URL is protection: it is harder to attack a website if you don't know which language is running on the back-end.
Secondly, websites tend to look more professional without a tell-tale extension, and it raises fewer questions for end users. People are more used to seeing .html at the end of a URL and may be confused by .php.

It was a well-known convention back in the day, when static HTML pages were highly regarded for SEO. Basically, what sites did was keep thousands of these generated HTML pages on the server, making the website look like a content monster and thus elevating its PageRank and the frequency of Google's crawls.
It's also a good way to cache pages and decrease server calls.
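In practice, those .html URLs are usually just rewrites. Here is a minimal .htaccess sketch, assuming Apache with mod_rewrite (the mapping is illustrative, not taken from any particular site):

# Serve /top100.html from top100.php without ever exposing the .php extension.
RewriteEngine On

# Only rewrite when a matching .php file actually exists.
RewriteCond %{DOCUMENT_ROOT}/$1.php -f
RewriteRule ^(.+)\.html$ $1.php [L]

With a rule like this, the page is still generated by PHP (or served from a cache) on every request, but crawlers and visitors only ever see the .html URL.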


How to publish and run a Ruby file on my website [closed]

I'm a beginner with Ruby, and I want to build my website with it.
Until now I've uploaded HTML files to my web-host server to update my website, but I have no idea what to do with a Ruby file.
Thank you!
If you are using the Rails framework, you may want to try Cloud9 instead. Brackets, from what I understand, is "local", whereas Cloud9 is a "real" cloud IDE. Additionally, you can easily push your code to Heroku, where it will be hosted.
To get things started, just head over to Cloud9 and create an account to set up your IDE. If you are unfamiliar with it, there are lots of guides out there that can help you get going.
However, if you lack the fundamentals of the Rails framework, guides.rubyonrails.org may be a good place to start too.
Update:
There is no single best language for developing a website. The fundamentals of a webpage are HTML/CSS.
HTML gives you the bones, or structure, such as your titles and paragraphs.
CSS gives you the styling, such as button appearance or font color.
These two languages form the core of your website, at the very least on the front end (meaning what people see when they visit your website).
JavaScript is not a must, but it is definitely a plus: it can improve the UI/UX (user interface/user experience) of your website.
Lastly there is your back-end language, which handles the processes that go on behind the scenes. If you choose Ruby (and, by extension, Rails), that is fine as well. Basically, your back-end language will talk to your database and support CRUD (create, read, update, delete) actions.
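To make the jump from static HTML concrete, here is a minimal sketch of a dynamic page in Ruby using Sinatra (my choice for brevity; the answer above recommends Rails, which works the same way in principle). Instead of uploading an HTML file, you run this program on a server that supports Ruby:

require "sinatra"   # gem install sinatra

# Each route replaces one of the static HTML files you used to upload;
# the response is generated by Ruby code at request time.
get "/" do
  "<h1>Hello from Ruby!</h1><p>The time is #{Time.now}.</p>"
end

Run it locally with ruby app.rb and visit http://localhost:4567. A host like Heroku runs the same program for you, which is why you deploy code rather than upload files.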

Difference between two types of Amazon affiliate links?

I keep seeing different kinds of Amazon affiliate links and I don’t understand why.
What’s the difference between the following? Why make the longer URL?
(From the Amazon link guide):
http://www.amazon.com/exec/obidos/ASIN/B00005JG32/davetaylor
http://www.amazon.com/gp/product/B00005JG32/ref=cm_lm_fullview_prod_3/102-2173641-6432913?_encoding=UTF8&v=glance
The bottom URL actually isn't an affiliate link. The /ref= part of the URL is used by Amazon internally to track how users reach products on their site; the ?tag= query string is what identifies the affiliate. For comparison, here are both links with the affiliate tag in place:
https://www.amazon.com/exec/obidos/ASIN/B00005JG32/davetaylor
https://www.amazon.com/dp/B00005JG32?tag=davetaylor
The second option is actually shorter and cleaner once the unnecessary parts of the URL are removed.
Amazon's canonical URL format is:
https://www.amazon.com/{product-title-keywords}/dp/{$ASIN}
In this case, it would be:
https://www.amazon.com/Hasbro-40224-Disney-Monopoly/dp/B00005JG32
If you view the page's source code, you will see it in a line that reads:
<link rel="canonical" href="https://www.amazon.com/Hasbro-40224-Disney-Monopoly/dp/B00005JG32" />
Any other link is an addition to or an abstraction of the canonical URL.
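To illustrate, here is a small Ruby sketch (a hypothetical helper, not Amazon's own code) that builds the canonical URL and a tagged affiliate link from an ASIN:

# Compose Amazon URLs from their parts (values taken from the examples above).
asin = "B00005JG32"
slug = "Hasbro-40224-Disney-Monopoly"   # optional keyword slug
tag  = "davetaylor"                     # your Associates tracking ID

canonical = "https://www.amazon.com/#{slug}/dp/#{asin}"
affiliate = "#{canonical}?tag=#{tag}"   # the ?tag= part credits the affiliate

puts canonical   # https://www.amazon.com/Hasbro-40224-Disney-Monopoly/dp/B00005JG32
puts affiliate   # the same URL with ?tag=davetaylor appended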
Now, you have more power to choose a URL format that serves you best.
Depending on your use cases, you might also want to consider using an Amazon link shortener or link obfuscator before publishing your Amazon URLs.
Beyond the technical structure, there doesn't seem to be any practical difference: the less cluttered link still starts the shopping session as usual.

Is Polymer SEO friendly? [closed]

EDIT:
This is a very old question, from when escaped_fragment was necessary for search engines; nowadays search engines understand JavaScript very well, so this question has become irrelevant.
===========
I was wondering how SEO-friendly Polymer could be.
Since all the code is fully dynamic, as in Angular, how can search engines pick up the information on the page? Even in Angular I had a really hard time making things SEO-friendly.
Will there be a tool to generate the escaped_fragment automatically to feed the search engines?
I guess Google may have thought of a solution, but I wasn't able to find it (even on Google).
According to the Polymer FAQ, all we have is:
Crawlers understand custom elements? How does SEO work?
They don’t. However, search engines have been dealing with heavy AJAX based application for some time now. Moving away from JS and being more declarative is a good thing and will generally make things better.
http://www.polymer-project.org/faq.html#seo
Not very helpful.
This question has bothered me as well. The Polymer team has this to say about it; it looks promising!
UPDATE
It's also worth adding some context from the conversation on the Polymer list, with some helpful information on the current status from Eric Bidelman.
Initial examination of the structure of the Polymer site suggests that it serves up static content with the shadow-DOM content already inlined in the page. Each HTML file can be loaded from the server directly via HTTP GET, and subsequent navigation uses pushState (documentation) to inject pages into the current DOM when pushState and JavaScript are supported.
Using pushState is recommended over _escaped_fragment_, since it's slightly less messy, but you'll still need to do regular templating on the server. See the Moz Blog for more information on this.
DISCLAIMER
I may have missed or misinterpreted some things here, and this is just a quick peek at the guts of the page, but hopefully this helps.

Google optimization for wildcard * subdomains [closed]

Good morning all,
I've created a tiny CMS where users can create their own websites. Every user gets a subdomain like mywebsite.mycmsystem.com, and that works really well. The websites are generated dynamically by a PHP script but get static .html URLs through mod_rewrite, so a URL looks like mywebsite.mycmsystem.com/home_1234.html instead of mywebsite.mycmsystem.com/page.php?id=1234.
I thought that would be better for search engines. The problem is that Google won't really crawl through all of the users' websites. Is there a way to tell Google where to find all the websites, or something like that? I've searched the web for hours but couldn't find anything really useful.
Best regards,
Lukas
The numbers in the URL are not causing your indexing issues. URLs with numbers are indexed and crawled just fine.
The best way to tell the search engines about your pages is an XML sitemap.
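For example, each subdomain can serve its own sitemap (a minimal sketch using the asker's illustrative URLs; the date is a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://mywebsite.mycmsystem.com/home_1234.html</loc>
    <lastmod>2013-01-01</lastmod>
  </url>
  <!-- one <url> entry per generated page -->
</urlset>

Reference it from each subdomain's robots.txt with a line like "Sitemap: http://mywebsite.mycmsystem.com/sitemap.xml". You can also submit each subdomain separately in Google Webmaster Tools so it gets crawled without waiting for inbound links.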

Apostrophes in the URL. Good idea or bad idea? And why? [closed]

I've searched, but I'm having trouble finding a conclusive answer. I would also be interested in whether this has any impact on SEO.
I would suggest not using them.
Reasons:
Google requests the server with the non-encoded URL, even if the link on the page contains the encoded (%27) version. This behavior may not be the same across other browsers and search engines. Also, Google displays the non-encoded version in its search results.
You can read the link posted by Rahul Tripathi (http://productforums.google.com/forum/#!topic/webmasters/aKVMfwL6WgE) about the impact on search ranking with and without an apostrophe.
If you still want to use apostrophes:
Ensure that your web server handles both encoded and non-encoded URLs well (see the sketch after this list).
Keep track of your web server logs for 404 errors caused by robots mishandling the apostrophe.
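To see the two forms concretely, here is a small Ruby sketch (the slug is illustrative) showing the encoded and non-encoded versions of a URL with an apostrophe, which your server would need to treat as the same page:

require "cgi"

slug    = "don't-stop-believin"
encoded = CGI.escape(slug)   # => "don%27t-stop-believin"

puts "Encoded path:     /songs/#{encoded}"
puts "Non-encoded path: /songs/#{slug}"   # what Googlebot may actually request

# Both must resolve to the same resource on your server:
puts CGI.unescape(encoded) == slug        # => true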
By the way, we are currently running an experiment to record the behaviour of various search engines while they crawl pages with unsafe characters. You can read about it at http://app.searchenabler.com/experiments/.
One example test we performed:
http://app.searchenabler.com/experiments/unsafe/%20!$&'()*+,-.:;%3C=%3E#[/]%5E_%60%7B%7C%7D~
(You can try opening the above URL in different browsers and compare the behaviour.)
You can also see how Google cached one such URL at:
http://webcache.googleusercontent.com/search?q=cache:jkWRWOTPZXwJ:app.searchenabler.com/experiments/unsafe/%2520!%24%26'()*%2B,-.:%3B%253C%3D%253E%40%5B%255C%5D%255E_%2560%257B%257C%257D~+&cd=1&hl=en&ct=clnk
