How to Determine Where Your Visitors Come From, Other Than the Referer - google-ads-api

I just stumbled across this AdWords click URL:
http://www.google.com/aclk?sa=L&ai=XXX&sig=XXX&ved=XXX&adurl=http://example.com
It is supposed to redirect to example.com.
However, if you click on that link, it redirects you to another site that is not http://example.com.
I think there is something fishy going on at example.com.
That site can tell where its visitors come from and displays different content to different visitors.
I thought it was the Referer, so I disabled the referer in my browser, but that site still works.
Does anyone know how a site can determine where its visitors come from, other than the Referer?

Keep in mind that those clicks "bounce" (i.e. redirect) through Google before heading to example.com.
Depending on the circumstances, Google can add additional query string values to the landing page URL during the redirect, meaning that clickthroughs won't actually land on "http://example.com", but instead on something like "http://example.com?gclid=1234567".
In particular, the gclid value is appended to the landing page URL as a way for Google to pass data between Google AdWords and Google Analytics.
So example.com could be looking for a gclid value in order to identify traffic arriving from AdWords. This is independent of the referrer.
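A minimal Python sketch of that check, assuming the server sees the full landing URL, might look like this:

from urllib.parse import urlparse, parse_qs

def arrived_via_adwords(url):
    # Google's auto-tagging appends a gclid parameter to ad clickthroughs,
    # so its presence identifies AdWords traffic without any referrer.
    params = parse_qs(urlparse(url).query)
    return "gclid" in params

print(arrived_via_adwords("http://example.com?gclid=1234567"))  # True
print(arrived_via_adwords("http://example.com"))                # False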

The Referrer is the only thing that will do this, unless by "that site can tell where its visitors come from" you mean geolocation.
Note: referrers are not the most reliable thing in the world; they can be spoofed.
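To illustrate how trivially a referrer can be forged, here is a short Python sketch that sends an arbitrary Referer header (the target URL is just the example.com placeholder from the question):

import urllib.request

# Any client can send whatever Referer it likes; the server has no way
# to verify it. Here a plain script claims to be arriving from Google.
req = urllib.request.Request(
    "http://example.com/",
    headers={"Referer": "http://www.google.com/"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)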

Related

F5 redirect to different domain along with useragent

Our production application sits behind an F5 load balancer.
If a request comes from a mobile device to http://xxx.abc.com/show.aspx?show=homePage,
I want to create an F5 rule that redirects it to a different domain, like
http://xxx.xyz.com/show.aspx?show=homePage
My doubt is: since users are browsing from mobile, the initial request contains a User-Agent header; after the rule is created, is the User-Agent also passed along by default?
The request to the URL we redirect to should also contain the User-Agent, since my application renders mobile pages based on it.
Thanks
The answer to your question is on DevCentral.f5.com:
https://devcentral.f5.com/questions/simple-url-redirect-irule
There are several ways to achieve what you're looking for, including or excluding user-agent data; it will depend on exactly what the redirect target needs. Just search DevCentral for "URL Redirect" and you'll get more answers than you'll need. Here's an overview of URL redirections:
https://devcentral.f5.com/articles/2-minute-tech-tip-url-redirects
As for the User-Agent: an HTTP redirect makes the browser issue a brand-new request to the target URL, and the browser sends its own User-Agent header on every request, so the header reaches the new domain automatically.
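You can see this with a minimal Python sketch (a stand-in for the F5, not an actual iRule): it redirects the first request and logs the User-Agent header, which shows up again on the follow-up request after the redirect. Hit it with a browser or curl -L.

from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser sends its User-Agent on every request it makes,
        # including the new request that follows the redirect.
        print(self.path, "User-Agent:", self.headers.get("User-Agent"))
        if self.path != "/landing":
            self.send_response(302)
            self.send_header("Location", "/landing")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"landed\n")

HTTPServer(("", 8080), RedirectHandler).serve_forever()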

Avoid URL redirects

When I checked my site on GTmetrix, YSlow showed a few URL redirects that I need to avoid in order to increase my site's speed.
But I have URL redirects from:
http://googleads.g.doubleclick.net/pagead/viewthroughconversion/997898667/?... redirects to http://www.google.com/ads/user-lists/997898667/?...
http://www.youtube.com/embed/XeyKZ4CVsWs redirects to https://www.youtube.com/embed/XeyKZ4CVsWs
http://www.youtube.com/embed/SjoNhZhuaGc redirects to https://www.youtube.com/embed/SjoNhZhuaGc
Is there any possibility of avoiding these YouTube and Google Ads redirects?
For the ads, the general answer is no. Redirects are very common there and there isn't much you can do.
For the embedded youtube.com content, you can reference the videos explicitly with secure "https://" links instead of "http://" ones. That should save you those redirect penalties.

Make the display url of a web site different than the actual url for bookmarking purposes

Is it possible to display a different URL than the actual URL, for bookmarking purposes?
Here's why: web site a.com is live and being used for administrative purposes that have not been added to the new site. So when a user visits a.com, they are redirected to a_new.com. But a_new.com is temporary and will eventually become a.com, so I need users to be able to bookmark a.com even though they are at a_new.com.
Makes sense?
Thanks,
W
No.
And this is a good thing (though it won't help you).
For example, suppose a user visits www.goodsite.com, and goodsite is a good site but is vulnerable to script injection. An evil hacker changes the bookmark destination of goodsite.com to evilsite.com, and the next user who bookmarks the site is in for a surprise.
The best thing to do, I think, is to set up a redirect on the pages of the temporary domain once the new domain comes up.
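As a rough Python sketch of that eventual redirect (using the placeholder domains from the question), a blanket 301 from the temporary domain to the same path on the permanent one would look like this:

from wsgiref.simple_server import make_server

def redirect_app(environ, start_response):
    # Permanently redirect every request hitting the temporary domain
    # (a_new.com) to the same path on the permanent one (a.com).
    location = "http://a.com" + environ.get("PATH_INFO", "/")
    start_response("301 Moved Permanently", [("Location", location)])
    return [b""]

make_server("", 8080, redirect_app).serve_forever()

Using a 301 rather than a 302 also tells search engines and browsers that the move is permanent.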

How to keep Google from indexing the Session ID in the URL?

One of my sites is for old mobile phones that don't accept cookies, so it uses a URL-based session ID.
However, Google is indexing the session ID, so when my site is searched on Google, all the results come up with a specific session ID.
On most occasions, that session ID is no longer valid by the time a guest clicks on it, but I've had at least one case where a guest clicked on a link from Google and was actually logged into someone else's account, which is obviously a huge security flaw.
So how can I keep Google from indexing the session ID in my URLs? In case it helps, the session ID has always been set to "Representative URL" in Google's Webmaster Tools.
You can do this by placing a robots.txt file in your root web directory that tells Googlebot and all other crawlers not to crawl URLs carrying that parameter.
Here is an example:
Let's say the URL you want to block is of the form:
http://www.mywebsite.com/page.html?id=1234
The robots.txt rules to block URLs with an id parameter are:
User-agent: *
Disallow: /*?id=
(The * wildcard is an extension to the original robots.txt standard, but Googlebot and the other major crawlers support it.)
You can find out more about robots.txt at http://www.robotstxt.org
Read more about this at http://www.seochat.com/c/a/Search-Engine-Optimization-Help/Preventing-Duplicate-Content-on-an-ECommerce-Site-from-Session-IDs/1/
Also check https://developers.google.com/search/docs/advanced/crawling/consolidate-duplicate-urls: you can set a canonical URL with a <link rel="canonical" href="..."> tag in each page's head, and Googlebot will index that URL instead of the session-specific one. This also solves duplicate-URL issues for the same page.
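A Python sketch of computing that canonical URL by stripping the session parameter (reusing the hypothetical id parameter from the robots.txt example above):

from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def canonical_url(url, session_param="id"):
    # Drop the session-ID parameter so every session variant of a page
    # collapses to a single canonical URL for the rel="canonical" tag.
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != session_param]
    return urlunparse(parts._replace(query=urlencode(query)))

print(canonical_url("http://www.mywebsite.com/page.html?id=1234"))
# -> http://www.mywebsite.com/page.html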

The best way for an app to recognize bots (Googlebot/Yahoo Slurp)

I have a (Rails) site and I want the search engines to crawl and index it. However, I also have some actions that I want to log as having happened, and these actions can be triggered by logged-in users as well as users who are not logged in. To ensure that the count for non-logged-in (i.e. anonymous) users doesn't include bot traffic, I am considering a few options and am looking for guidance on which way to go:
Set a cookie for all users. Since bots usually don't accept or send back cookies, if the cookie doesn't come back I can distinguish bots from anonymous humans.
Check the header and see if the agent is a bot (against some whitelist): How to recognize bots with php?
Make that action a POST rather than a GET. Bots issue GETs, so they wouldn't get counted.
Any other approaches?
I am sure folks have had to do this before, so what's the 'canonical' way to solve this?
If you don't want the spiders to follow the links, you can use rel="nofollow" on them. However, since there might be other links pointing to those pages, you will probably also want to look at the User-Agent header. In my experience, the most common User-Agent headers are:
Google: Googlebot/2.1 (+http://www.googlebot.com/bot.html)
Google Image: Googlebot-Image/1.0 (+http://www.googlebot.com/bot.html)
MSN Live: msnbot-Products/1.0 (+http://search.msn.com/msnbot.htm)
Yahoo: Mozilla/5.0 (compatible; Yahoo! Slurp;)
Just checking the User-Agent header might be enough for your purposes. Note that any client can pose as Googlebot, so if you want to be sure, more checking (such as verifying the IP via DNS) is needed; but I don't think you'd need to bother further than that.
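If you do want the stronger check, Google's documented advice is a reverse-DNS lookup on the claimed IP followed by a forward lookup. A Python sketch of both passes (the signature list is illustrative, not exhaustive):

import socket

# Substrings that identify the major crawlers' User-Agent headers.
BOT_SIGNATURES = ("googlebot", "msnbot", "bingbot", "slurp")

def looks_like_bot(user_agent):
    # Cheap first pass: match known crawler names in the User-Agent.
    ua = (user_agent or "").lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

def is_real_googlebot(ip):
    # Stricter check: reverse-resolve the IP, confirm the hostname is
    # under googlebot.com or google.com, then forward-resolve it back
    # and make sure it maps to the same IP.
    try:
        hostname = socket.gethostbyaddr(ip)[0]
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        addresses = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except socket.error:
        return False
    return ip in addresses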
