First of all, excuse my English; I know it isn't the best. The situation: I have a Rails application for which I have already generated the sitemap, and it contains the doctors' URLs, because my app serves as a medical directory. Say I search Google for "Michael Green" and there is indeed a doctor with that name in the app; I generated the sitemap with that doctor's URL and set all the meta tags (keywords, title and description) with that name. Still, it's impossible to find that doctor through Google, even using Google custom searches and other tools. The application is an API, but I serve HTML so that Google sees the important data first and the application is crawlable.
I know I'm not an SEO magician, but I think I've put in all the necessary data to make that doctor visible to Google. My static pages (Contact, Information, the main page) do appear in Google results, but the doctor pages don't. The URL for a doctor is something like app.com/doctors/michael-green, and still nothing. Once again, excuse my English, and I'll appreciate any help. Thank you.
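To make that concrete, here is a rough sketch of the kind of tags being described for the doctor page; the wording is illustrative, not the app's actual markup:

<!-- Hypothetical <head> contents for app.com/doctors/michael-green -->
<title>Michael Green | Medical Directory</title>
<meta name="description" content="Profile and contact information for doctor Michael Green.">
<meta name="keywords" content="Michael Green, doctor, medical directory">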
I am having a similar problem on my other pages and it's driving me crazy. I have modified a "manufacturer" module to "series", and changed the link as well.
For example, I was able to change this link:
bishounenboutique.com/manufacturer
to this:
bishounenboutique.com/series
Click on the breadcrumbs and they are fine on that page. However,
if you click on a top image, "Psycho Pass" for example, on that page, it redirects to:
bishounenboutique.com/index.php?route=product/manufacturer/product&manufacturer_id=13
(which is not what I want; ideally I want the link to be bishounenboutique.com/psycho-pass).
But setting that aside: on that page, if you click the breadcrumb link "series", it gives:
bishounenboutique.com/index.php?route=series
when it is supposed to be:
bishounenboutique.com/series
Can anyone please give me an idea of why this is happening?
I have already enabled SEO URLs and renamed the .htaccess file, but I don't know why it works on some links and not others.
Thank you.
Products, Manufacturers, Information pages and Categories all have SEO keyword fields, which you can find in their DATA tabs (except for Manufacturers, where the field is in the GENERAL tab). These need to be set for all of the above for the rewrites to work. The reason the second link you've shown doesn't work is that its route is product/manufacturer/product, not just product/manufacturer.
EDIT
product/manufacturer/product is actually a manufacturer link; it's just poorly named by OpenCart. OpenCart still knows it is a manufacturer link, and you can see that by the ID that's passed along with it (manufacturer_id, not product_id).
Also, OpenCart isn't meant to rewrite its standard URLs, only product, category, manufacturer (individual manufacturer pages, not the list of manufacturers) and information pages. This works on all versions that I'm aware of, so if it doesn't work with yours, the vQmod is likely at fault and you'll need to get the developer of that mod to fix it.
I've written a route editor mod myself that doesn't require going in and hacking up a vQmod to get it to work, so I understand the complications involved.
I am making a simple CMS to use on my own blog, and I have been using the following code to display articles.
// Assumes xmlhttp is an XMLHttpRequest that was created and opened earlier in the script.
xmlhttp.onreadystatechange = function () {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        document.getElementById("maincontent").innerHTML = xmlhttp.responseText;
    }
};
What it does is send a request to a server-side script that looks up the content associated with the article that was clicked, then write the response into the main viewing area with ".innerHTML".
Thus I don't have actual links to other articles. I know that I can use PHP to output HTML so that it forms a link like:
<a href="getcontent.php?q=article+title">Article Title</a>
But being slightly OCD, I wanted my output to be as neat as possible. Although search engine visibility is not a concern for my personal blog, I intend to adapt this for a few other sites where search engine optimization is a priority.
From what I understand, search engine robots basically follow links to index web sites.
My question is:
Does this practice have any negative implications for search engine visibility? Also, are there other reasons to prefer one approach over the other? I see that almost every site uses the 'link' method.
The link you've written will cause a full page reload. In order to leverage the standard AJAX code you've got at the top, you need to write the links as something along the lines of:
<a href="javascript:ajaxGet('article-title');">Article Title</a>
This assumes you have a javascript function called ajaxGet that takes an argument of the identifier for the article you're searching for.
If you were to write your entire site that way, search engines wouldn't be able to crawl it at all, since they don't execute JavaScript, so they couldn't get to anything beyond the front page. Also, even if they could follow the links, they'd have no way of referencing the page they reached, since it doesn't have a unique URL. This is also annoying for users, since they can't get a link to an exact story to bookmark, send to a friend, etc.
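A common middle ground, sketched here rather than taken from the answer above, is progressive enhancement: give every article a real URL in its href so crawlers and bookmarks work, and intercept the click with JavaScript for regular visitors. The getcontent.php endpoint comes from the question; the article-link class name is an assumption:

<a class="article-link" href="getcontent.php?q=article+title">Article Title</a>

<script>
// Intercept clicks on article links: users get the in-place AJAX load,
// while crawlers and bookmarks still see a real, unique URL.
var links = document.querySelectorAll("a.article-link");
for (var i = 0; i < links.length; i++) {
    links[i].onclick = function (e) {
        e.preventDefault();
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
            if (xhr.readyState == 4 && xhr.status == 200) {
                document.getElementById("maincontent").innerHTML = xhr.responseText;
            }
        };
        xhr.open("GET", this.getAttribute("href"), true);
        xhr.send();
    };
}
</script>

A nicer variant would also update the address bar (a hash fragment or history.pushState) so every article keeps a shareable URL.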
We have a site that serves up the same content on country-specific domains - so there is a potential duplicate-content issue.
After doing some research, we went with Google's recommendation of using country-specific domains instead of www.domain.com/country-directory/.
However, when you search from another country, the correct domain does not appear. We have a person in Australia, and every time they search Google, the .com.au domain doesn't show up.
We have both country domains set up in Google Webmaster Tools, and both have country-specific sitemap.xml files that Webmaster Tools has no issue seeing - in fact, there are no errors of any kind (crawl errors, etc.) as far as Webmaster Tools is concerned.
Does anyone know what we might be doing wrong?
Make sure your Australian friend is using http://google.com.au. You can conduct the search there yourself to check whether it is working; you do not need someone in another country to do so.
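As a hedged aside beyond that answer: Google also supports rel="alternate" hreflang link elements in each page's <head> to spell out which domain targets which audience. A minimal sketch, with example.com hostnames standing in for the real domains:

<link rel="alternate" hreflang="en-au" href="http://www.example.com.au/" />
<link rel="alternate" hreflang="en-us" href="http://www.example.com/" />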
Recently, search engines have become able to index dynamic content on social networking sites. I would like to understand how this is done. Are there static pages created by a site like Facebook that update semi-frequently? Does Google attempt to store every possible user name?
As I understand it, a page like www.facebook.com/username is not an actual file stored on disk but shorthand for a database query (something like: select the record for that username from the users table) whose result is displayed on the page. How does Google know about every user? This gets even more complicated when things like tweets are involved.
EDIT: I guess I didn't really ask what I wanted to know. Do I need to be as big as Twitter or Facebook for Google to build special ways to crawl my site? Will Google automatically find my users' profiles if I allow anyone to view them? If not, what do I have to do to make that work?
In the case of tweets in particular, Google isn't 'crawling' for them in the traditional sense; they've integrated with Twitter to provide the search results in real-time.
In the more general case of your question, dynamic content is not new to Facebook or Twitter, though it may seem to be. Google crawls a URL; the URL provides HTML data; Google indexes it. Whether it's a dynamic query that's rendering the page, or whether it's a cache of static HTML, makes little difference to the indexing process in theory. In practice, there's a lot more to it (see Michael B's comment below.)
And see Vartec's succinct post on how Google might find all those public Facebook profiles without actually logging in and poking around FB.
OK, that was vastly oversimplified, but let's see what else people have to say...
As far as I know, Google isn't able to read and store the actual contents of profiles, because the Google bot doesn't have a Facebook account, and it would be a huge privacy breach if it could.
The bot works by hitting facebook.com and then following every link it can find. Whatever content it sees on the pages it hits, it stores. So even if it follows a dynamic URL like www.facebook.com/username, it will just remember whatever it saw when it went there. Hopefully, in that particular case, that isn't all the private data of said user.
Additionally, Facebook can and does provide special instructions that search bots can follow, so that Google results don't include a bunch of login pages.
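Those special instructions usually take the form of a robots.txt file. A hypothetical sketch of that kind of rule (the paths and hostname are made up, not Facebook's actual file):

User-agent: *
Disallow: /login
Disallow: /private/
Sitemap: http://www.example.com/sitemap.xml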
profiles can be linked from outside;
the site may provide a sitemap (see the sketch below).
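A minimal sketch of that second point: a sitemap listing public profile URLs, which the site can reference from robots.txt or submit directly (the example.com URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
        <loc>http://www.example.com/username</loc>
        <changefreq>weekly</changefreq>
    </url>
    <!-- ...one <url> entry per public profile... -->
</urlset>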
Basically, I want to know how many people have tweeted a link to a URL, but since there are dozens of link shorteners out there, I don't see any way to do this without having access to all of their URL maps. I found a previous question here, but it was over a year old and didn't have any new answers.
So #1, does anyone know of a service/API for doing this?
And #2, can anyone think of a way to accomplish this task other than submitting the long url in question to all the popular link shortening sites?
ps- I'm also open to comments about why this is impossible or impractical.
You could perform a Google search (or the equivalent via API) for any pages that link to your page. This is done with the link: keyword. So if you're trying to figure out how many people link to www.example.com (regardless of whether it's through a link-shortener URL), then you would just do a Google search for link:www.example.com.
e.g.: http://www.google.com/search?q=link:www.example.com
Note that this will only find pages that have been indexed, so pages that haven't been crawled, or pages that get crawled infrequently, will not show up in the results until a later date (if at all).
Since the shortening sites all use different algorithms, and they are separate services that most likely do not share their data with each other, how can you hope to find all of the short URLs in a single query, or even a small number of queries?
All you can do is brute-force it, and even then it might not do any good if a site is happy to create a new short value for the same long-form URL (especially if you submit a slightly different long-form URL that maps to the same place, like http://www.stackoverflow.com/ rather than http://stackoverflow.com/).
For this to really work, there would have to be a service that already collects all of this information automatically, one that every URL-shortening site voluntarily reports to. And even if you built such a service, that doesn't account for the URL-shortening sites already out there that already have data!
In short, I do not see how this is remotely possible, unless I'm wrong about there being such a database somewhere out there.
So, months after asking this question, I came across a solution to a similar question: how to tell how many times a link has been shared on Facebook. The solution, via a simple new API call:
http://graph.facebook.com/http://stackoverflow.com
returns the following JSON data:
{
    "id": "http://stackoverflow.com",
    "shares": 1627
}
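For completeness, a small sketch of reading that count from client-side JavaScript against the same endpoint. The getShareCount helper is an assumption, not part of the original answer, and it presumes the endpoint allows cross-origin requests and still returns a shares field:

// Hypothetical helper built around the Graph API call shown above.
function getShareCount(url, callback) {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            var data = JSON.parse(xhr.responseText);
            callback(data.shares || 0);  // "shares" matches the JSON above
        }
    };
    xhr.open("GET", "https://graph.facebook.com/?id=" + encodeURIComponent(url), true);
    xhr.send();
}

// Example: getShareCount("http://stackoverflow.com", function (n) { console.log(n); });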