Is there any consensus on best practice for path-style vs. page-style URL structures (for simple, relatively static sites), in terms of usability and SEO?
e.g.
http://mysite.com/about.html
vs
http://mysite.com/about/
where the about folder contains an index.html file.
It would seem that in terms of usability, especially sharing and linking, the path approach is much better (assuming you only share the URL up to the trailing slash and don't include index.html), albeit more complex in terms of organization, and that the page approach is better for SEO.
Also I've never quite understood the difference between
http://mysite.com/about/
and
http://mysite.com/about/index.html
Will the first always redirect to the second, thereby slowing things down? And when sharing the first kind of URL, should/must one always include the trailing slash?
Thanks
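For what it's worth, the typical behavior of static file servers is that /about (no trailing slash) gets a 301 redirect to /about/, while /about/ is served the directory's index.html directly through an internal lookup, with no second redirect. Here is a minimal sketch to observe this using Python's built-in http.server; the about/index.html content is just a placeholder, and Apache or nginx behave analogously via their DirectoryIndex / index directives (which are configurable):

    import http.client
    import http.server
    import os
    import threading

    # Assumption: a local directory about/ containing index.html.
    os.makedirs("about", exist_ok=True)
    with open("about/index.html", "w") as f:
        f.write("<h1>About</h1>")

    # Serve the current directory on an ephemeral port.
    server = http.server.HTTPServer(("127.0.0.1", 0),
                                    http.server.SimpleHTTPRequestHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()

    for path in ("/about", "/about/", "/about/index.html"):
        conn = http.client.HTTPConnection("127.0.0.1", port)
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()
        # Typical output:
        #   /about            -> 301, Location: /about/  (extra round trip)
        #   /about/           -> 200  (index.html served, no redirect)
        #   /about/index.html -> 200  (same content at a second URL)
        print(path, resp.status, resp.getheader("Location"))
        conn.close()

    server.shutdown()

So, under the usual defaults, the slash form costs no redirect, while omitting the slash usually costs one extra round trip. And since /about/ and /about/index.html serve the same content at two URLs, the common advice is to pick one form (usually the slash form) and link to it consistently.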
Good morning all,
I've created a tiny CMS system where users can create their own websites. Every user gets a subdomain like mywebsite.mycmsystem.com, and that works really well. The websites are generated dynamically through a PHP script but get static HTML URLs through mod_rewrite, so a URL is mywebsite.mycmsystem.com/home_1234.html instead of mywebsite.mycmsystem.com/page.php?id=1234.
I thought that would be better for search engines. The problem now is that Google won't really crawl through all of the users' websites. Is there a way to tell Google where to find all the websites, or something like that? I've searched the web for hours but couldn't find anything really useful.
Best regards,
Lukas
The numbers in the URL are not causing your indexing issues. URLs with numbers are indexed and crawled just fine.
The best way to tell the search engines about your pages is an XML sitemap.
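For illustration, a sitemap is just an XML file listing your URLs. A minimal sketch of generating one per user subdomain might look like this; the URL list is a hypothetical stand-in for a query over a user's pages (only home_1234.html comes from the question):

    # A minimal sketch of writing an XML sitemap.
    from xml.sax.saxutils import escape

    def write_sitemap(urls, path="sitemap.xml"):
        lines = ['<?xml version="1.0" encoding="UTF-8"?>',
                 '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
        for url in urls:
            lines.append("  <url><loc>%s</loc></url>" % escape(url))
        lines.append("</urlset>")
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(lines))

    # One sitemap per subdomain, e.g. for mywebsite.mycmsystem.com:
    write_sitemap([
        "http://mywebsite.mycmsystem.com/home_1234.html",
        # ...one <url> entry per rewritten page...
    ])

You can then point crawlers at it with a Sitemap: http://mywebsite.mycmsystem.com/sitemap.xml line in each subdomain's robots.txt, or submit it through Google's webmaster tools.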
We provide a link (for example, http://indiapriceinfo.in/getbeststore?mobile_id=76340) that redirects the user to whichever online store is currently selling the item at the best price, so the same link may lead to a different store on different days.
We use these kinds of links on multiple domains with 302 redirects, but they all redirect through indiapriceinfo.in, like the one above.
Is this bad for SEO? If it is, what's the best practice for doing it?
A 302 redirect is exactly what you should be doing, since the redirects are temporary. The fact that the links all point to pages on the same domain is irrelevant, as links are judged on a per-page basis, not per-site. So indiapriceinfo.in won't gain anything as a domain, SEO-wise, from these links.
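As a sketch of the mechanics, the endpoint only needs to answer with status 302 and a Location header; the idea is the same in whatever stack you actually use. Here it is using just Python's standard library (look_up_best_store is hypothetical; it stands in for whatever logic picks today's best-price store):

    # A minimal sketch of a 302 redirect endpoint, standard library only.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    def look_up_best_store(mobile_id):
        # Placeholder: in reality, query current prices and pick a store.
        return "http://example-store.example/item/%s" % mobile_id

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            mobile_id = query.get("mobile_id", [""])[0]
            # 302 Found marks the target as temporary, so crawlers keep
            # the original URL rather than replacing it with the target.
            self.send_response(302)
            self.send_header("Location", look_up_best_store(mobile_id))
            self.end_headers()

    HTTPServer(("127.0.0.1", 8080), RedirectHandler).serve_forever()

Because the 302 marks the target as temporary, search engines keep indexing the original getbeststore URL rather than whichever store happened to win that day.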
I'm considering using a URL pattern like the one below:
example.com/item/r6B0PmUmx07O/just-one-item
example.com/item/r6B0PGgwPJWl/yet-another-item
The part before the slug is a unique, unpredictable ID for the item.
Compare this with URLs like:
example.com/item/1001/just-one-item
example.com/item/1002/yet-another-item
Is this bad for SEO? Or will it be bad for crawling by search engines, since the crawler cannot "guess" the next item's ID?
I'm not aware of any popular crawlers that try to increment numeric values in URLs to hit pages; they generally traverse by following links.
But do consider hiding some information from malicious users. If any information about your users is reachable through URLs like example.com/user/1001, sequential IDs are generally a bad idea. This isn't meant to be part of your security, but sometimes it's good to make access to your data harder; for example, the competition will have some difficulty guessing how many products you have in stock :)
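For example, unpredictable IDs in the style of the question can be minted from a cryptographically secure random source. A minimal sketch (the 12-character base-62 alphabet is just chosen to match the question's examples):

    # A minimal sketch of minting unpredictable, URL-safe item IDs.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits  # base-62

    def new_item_id(length=12):
        # secrets (not random) so IDs are not guessable or enumerable.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(new_item_id())  # e.g. something like 'r6B0PmUmx07O'

In a real system you would still enforce uniqueness, e.g. with a unique database constraint plus a retry on collision.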
Also consider supplying a dynamic sitemap with links to all your products. That way you can be sure every crawler will hit all your items no matter what kind of key each one has.
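A dynamic sitemap can simply be a route that renders the item table as XML on each request. A minimal sketch, using Flask purely for brevity (all_item_urls is a hypothetical stand-in for your database query):

    # A minimal sketch of a dynamic sitemap endpoint.
    from flask import Flask, Response
    from xml.sax.saxutils import escape

    app = Flask(__name__)

    def all_item_urls():
        # Placeholder data; in reality, iterate over your items table.
        yield "http://example.com/item/r6B0PmUmx07O/just-one-item"
        yield "http://example.com/item/r6B0PGgwPJWl/yet-another-item"

    @app.route("/sitemap.xml")
    def sitemap():
        body = "".join("<url><loc>%s</loc></url>" % escape(u)
                       for u in all_item_urls())
        xml = ('<?xml version="1.0" encoding="UTF-8"?>'
               '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
               "%s</urlset>" % body)
        return Response(xml, mimetype="application/xml")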
I'm developing a web application (and also a native iOS app) that uses images from different websites. I can't avoid using these images, so I need to know more about copyright and authorship.
So the question is: how can I use images from other websites legally? (These images are, of course, not from photo stocks or other paid sites.) I'm interested in the fashion industry, so I need to use images of clothes by famous designers. If I cite a source link for each picture, will that be OK? Or maybe use a "User Agreement" that gives a full list of the sources used?
For a better understanding of my question, some examples: websites such as news aggregators, blogs, and so on.
This is a topic that entire books have been written about and law school courses are taught on. It's not something you're going to find a definitive answer to here.
I've done some Googling to try to find the origin of the word "slug" as used in URLs, but I can't seem to find any information on it. Does anyone know where this term came from?
http://en.wikipedia.org/wiki/Slug_(web_publishing)
This is what I've heard (from a somewhat reliable source):
Slugs are slow-moving gastropods. When you call someone a slug, you're calling them lazy - it's not a compliment. When you use human-readable terms in a URL instead of a database number or some other form, it's usually only for convenience; you can name URLs virtually anything you want, and so naming them using English words is mostly for readability. It supposedly originated when programmers became too "lazy" to look up a proper code or ID for a website, and began naming them using words. Those "lazy URLs" became slugs.
Again, I'm not sure if this is 100% correct, but it's what I've heard!
Hope this helps!
N.S.