I'm getting traffic to my website containing the URL in the structure:
//domain/pagename/?fireglass_rsn=true
Anyone know what the ?fireglass_rsn parameter is doing?
See the screenshot from Google Analytics. These 2 pages are actually the same page. The fireglass_rsn visits are all from Singapore, which is weird as well.
I want to get the full URL in my Rails app when someone hits my app. For example:
Someone hits this URL in the browser:
http://localhost:3000/
I can get this in the Rails app with the help of request.original_url, and it gives me the correct result.
But when someone hits a URL like http://localhost:3000/#log-25, request.original_url does not give me the exact URL that was hit; it only returns http://localhost:3000/, and I need the #log-25 part as well.
So how can I get the whole URL exactly as it was entered in the browser?
The # part is not actually sent to the server; it's a fragment that's used for in-page navigation, among other things, and is only relevant to the browser.
If you need the whole thing, fragment included, you'll need some JavaScript to read the fragment and supply it through another channel, since the browser never includes it in the HTTP request.
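A minimal sketch of that JavaScript, assuming you pass the fragment along as an extra query parameter (the parameter name fragment is made up here, not anything Rails provides):

    // Read the fragment in the browser and hand it to the server ourselves,
    // since the browser never includes "#log-25" in the request it sends.
    if (window.location.hash) {
      const u = new URL(window.location.href);
      u.searchParams.set('fragment', window.location.hash.slice(1)); // "log-25"
      // either navigate/reload with it, or report it in the background:
      fetch(u.pathname + u.search);
    }

On the Rails side the value then arrives as an ordinary parameter (params[:fragment] in this sketch), which you can combine with request.original_url.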
Here is a website that is relevant to the topic I am researching: getting an iframe's current URL from another domain.
Here it is: http://hidemyipaddress.org/ (to use it simply go to the bottom, enter a website address and click "go").
You can surf any website through their website - and the amazing thing is that they can keep track of your current location, and even show it to you. (Here is a picture to illustrate: http://img199.imageshack.us/img199/6343/image2eb.jpg)
The reason I am asking is because I am trying to do the same thing.
How is this possible, isn't that XSS or something? Thanks for taking your time on this.
This is a web-based proxy. When you enter an address into the proxy's address input and hit search, you are asking the proxy server to retrieve the website for you. The proxy server requests the page you asked for, parses the HTML so that all URIs become "proxied URIs", adds any additional HTML such as banners, and then returns the page in the HTTP response.
If there were an iframe, the current URL of the iframe would actually be on the same domain. It's a proxy, so the server at hidemyipaddress.org is actually returning the HTML to your client. Furthermore, the address of an iframe would be irrelevant: the URI in that address box just displays the address that you requested. It does not reflect the src of an iframe or the current location of that frame.
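A rough sketch of the idea, assuming Node 18+ and a made-up ?url= route (hidemyipaddress.org's actual implementation is unknown):

    // Fetch the requested page server-side, rewrite its links so they point
    // back through this proxy, and return the modified HTML to the visitor.
    const http = require('http');

    http.createServer(async (req, res) => {
      const target = new URL(req.url, 'http://localhost').searchParams.get('url');
      if (!target) { res.writeHead(400); return res.end('missing ?url='); }

      const html = await (await fetch(target)).text();

      // Naive rewrite: every absolute link now routes back through the proxy,
      // which is why everything you browse stays on the proxy's own domain.
      const proxied = html.replace(/href="(https?:\/\/[^"]+)"/g,
        (_, u) => `href="/?url=${encodeURIComponent(u)}"`);

      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(proxied);
    }).listen(8080);

The "current location" it shows you is simply whatever address was last requested through it, not anything read out of a cross-domain iframe.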
I'm having some issues with Google Analytics URL parameters. Previously I've built URLs with the Google Analytics URL Builder. These have enabled me to track where visitors to my site have been coming from, how successful various marketing campaigns have been, etc.
Recently, I've started using another tag in the URL, one which has nothing to do with Google Analytics, but acts to alter the telephone number on my site when the visitor arrives on it. For example, I'll add &ctcc=adwords onto the end of my tracking URL, and a specified phone number will appear on my site when the user comes through so I can track how many calls my adwords spend has generated.
However, when I've been using this ctcc code, Google Analytics no longer seems to be tracking the traffic numbers to my site :(
Any idea how I can incorporate the two parameters into the URL and ensure that they both work as expected?
Thanks in advance
It looks like this is a problem with how your server is redirecting traffic that carries a ctcc query parameter. If you look at such a request and its response headers, you can see that the ctcc parameter is used in some server-side tracking (as best I can tell), and the server is set up to redirect and strip ctcc whenever it gets a request with ctcc. Not being familiar with the system in use, I can't provide details, but you need to reconfigure the redirects to stop changing & into ;. It's the replacement of ampersands with semicolons that is messing up your GA data.
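To see why that matters, compare how a standards-based query parser splits the two forms (the URLs and utm values below are made up for illustration):

    // Query strings are split on '&'; once the '&'s become ';'s, the campaign
    // parameters collapse into a single unreadable value and GA loses them.
    const good = new URL('http://example.com/?utm_source=adwords&utm_campaign=spring&ctcc=adwords');
    console.log([...good.searchParams.keys()]); // [ 'utm_source', 'utm_campaign', 'ctcc' ]

    const bad = new URL('http://example.com/?utm_source=adwords;utm_campaign=spring;ctcc=adwords');
    console.log([...bad.searchParams.keys()]);  // [ 'utm_source' ] -- the rest is stuck inside its value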
I have a little problem with Googlebot. I have a server running Windows Server 2009; the system is called Workcube and it runs on ColdFusion. There is a built-in error reporter, so I receive every error message, especially those concerning Googlebot trying to reach a false link that doesn't exist. The links look like this:
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=282&HIERARCHY=215.005&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=145&HIERARCHY=200.003&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=123&HIERARCHY=110.006&brand_id=xxblpflyevlitojg
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=1&HIERARCHY=100&brand_id=xxblpflyevlitojg
Of course, a definition like brand_id=hoyrrolmwdgldah or brand_id=xxblpflyevlitojg is false. I don't have any idea what the problem could be. I need advice! Thank you all for your help! ;)
You might want to verify your site with Google Webmaster Tools which will provide URLs that it finds that error out.
Your logs are also valid, but you need to verify that it really is Googlebot hitting your site and not someone spoofing their User Agent.
Here are instructions to do just that: http://googlewebmastercentral.blogspot.com/2006/09/how-to-verify-googlebot.html
Essentially you need to do a reverse DNS lookup and then a forward DNS lookup after you receive the host from the reverse lookup.
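A minimal sketch of that check, in Node.js for illustration (your stack is ColdFusion, so treat this purely as pseudocode for the two lookups):

    // 1. reverse-DNS the requesting IP, 2. confirm the host is under
    // googlebot.com or google.com, 3. forward-resolve that host and make sure
    // it maps back to the original IP.
    const dns = require('dns').promises;

    async function isRealGooglebot(ip) {
      try {
        const hosts = await dns.reverse(ip);          // e.g. crawl-66-249-66-1.googlebot.com
        const host = hosts.find(h => /\.(googlebot|google)\.com$/.test(h));
        if (!host) return false;
        const addrs = await dns.resolve4(host);       // forward lookup of that host
        return addrs.includes(ip);                    // must resolve back to the caller's IP
      } catch {
        return false;                                 // failed lookups count as "not verified"
      }
    }

    // isRealGooglebot('66.249.66.1').then(console.log);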
Once you've verified it's the real Googlebot you can start troubleshooting. You see, Googlebot won't request URLs that it hasn't naturally seen before, meaning it shouldn't be making direct object reference requests. I suspect it's a rogue bot with a User-Agent of Googlebot, but if it's not, you might want to look through your site to see if you're accidentally linking to those pages.
Unfortunately you posted the full URLs, so even if you clean up your site, Googlebot will see the links from Stack Overflow and continue to crawl them because they'll be in its crawl queue.
I'd suggest 301 redirecting these URLs to someplace that makes sense to your users. Otherwise I would 404 or 410 these pages so Google knows to remove them from its index.
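As a sketch of that idea (in Node rather than ColdFusion, and assuming valid brand_id values are numeric, which is only a guess):

    // Answer requests whose brand_id looks bogus with 410 Gone (or a 301 to a
    // sensible page) so Google drops them from the index.
    const http = require('http');

    http.createServer((req, res) => {
      const brandId = new URL(req.url, 'http://localhost').searchParams.get('brand_id');
      if (brandId && !/^\d+$/.test(brandId)) {
        res.writeHead(410);   // or: res.writeHead(301, { Location: '/index.cfm' });
        return res.end('Gone');
      }
      res.writeHead(200);
      res.end('normal handling here');
    }).listen(8080);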
In addition, if these are pages you don't want indexed, I would suggest adding the path to your robots.txt file so Googlebot can't continue to request more of these pages.
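For example, a robots.txt entry along these lines (the wildcard pattern is an assumption about which URLs you want blocked; Googlebot understands * in Disallow rules) would stop it requesting any URL that carries a brand_id parameter:

    User-agent: Googlebot
    Disallow: /*brand_id=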
Unfortunately there's no really good way of telling Googlebot never to crawl these URLs again. You can always go into Google Webmaster Tools and request that the URLs be removed from their index, which may stop Googlebot from crawling them again, but that doesn't guarantee it.
I faced a problem some time back on a particular website. It had many hyperlinks on it to other sites. An example of one such URL is:
http://http//example.com/a9noaa.asp
It is clearly an incorrect URL (http comes twice), so when one clicks on it there is a page error like "Address not found".
But when one copies the link location and pastes it into the browser's location bar, it loads the page correctly. So it's a problem of an incorrect URL being specified in the hyperlink.
Would it be possible to make the browser check the basic sanity of the URL being accessed, e.g. checking that:
the word http is present only once,
the colon is typed correctly,
there is no unusual character at the beginning of the URL,
the double slashes are correctly placed, etc.
Or check the URL being typed in the address bar and automatically correct the errors in it?
Can any client-side code make an internet browser achieve this functionality? Is it possible?
Or are there any plugins for popular browsers (Firefox, IE) already available to achieve this?
Thank you.
-AD.
First of all, http://http//example.com/a9noaa.asp is a valid URI, with http as the scheme, the second http as the host name and //example.com/a9noaa.asp as the path. So since it's not invalid, the browser has no need to correct it.
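You can see that parse with the WHATWG URL API (just an illustration; the URL is the one from the question):

    // The "double http" URL is well-formed -- the second "http" is the host.
    const u = new URL('http://http//example.com/a9noaa.asp');
    console.log(u.protocol);  // 'http:'
    console.log(u.hostname);  // 'http'
    console.log(u.pathname);  // '//example.com/a9noaa.asp'

The click fails only because there is no host actually named http to resolve, not because the URL is malformed.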
Now let's look at the location bar. Most user-friendly browsers do some error correction if the location that has been entered is invalid. One of those correction measures is to prepend the string with http:// if it's not present. So you just have to type example.com to request http://example.com.
Another correction measure is to complete unknown host names with www. and .com before and after the entered string. So you just have to type example, hit enter, and you request http://www.example.com.
But any error correction outside the location bar, especially in hyperlinks, can be critical. Take this for example: a guest enters his/her website URI in a guestbook entry but omits the http://. Now that value is used in a hyperlink, but the missing http:// is not prefixed. So the link might look like this:
<a href="example.com">Website</a>
If you click on such a link, the relative URI of that link would be resolved to an absolute URI using the current document’s URI as the base. So the link might be expanded to http://some.example/guestbook/example.com. Who hasn’t experienced that?
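A small illustration of that resolution (the URLs are the hypothetical ones from above):

    // A scheme-less href is resolved as a relative path against the page's
    // own URL, not treated as a host name.
    const base = 'http://some.example/guestbook/';
    console.log(new URL('example.com', base).href);
    // -> http://some.example/guestbook/example.com   (probably not what the guest meant)
    console.log(new URL('http://example.com', base).href);
    // -> http://example.com/                          (absolute, resolved as intended)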
But correcting that missing http:// in the browser would be fatal, because the author might actually have intended to reference http://some.example/guestbook/example.com instead of the http://example.com the browser would guess.
So to sum it up: correcting the user's location bar input is suitable when something is missing (e.g. the http://); doing that on every link is not.
The URL you posted is not "incorrect"; it is valid. Host names can take many forms, such as http://localhost/ or http://http/ as well as the more common http://example.com.
If you don't include http:// or another protocol in a web link, then the browser assumes you are using a relative link. For example...
<a href="www.example.com">link</a>
...will link to http://yoursite.com/www.example.com, because this is a perfectly valid URL - you can name a file www.example.com.
I would recommend contacting the website in question to fix their error. No browsers will correct this automatically.
It really shouldn't be up to the browser to correct malformed URLs. A URL is supposed to be a unique identifier for some page. The one doing the linking to the page should take care to link to the correct page. There must be no guesswork involved in opening a URL.
That said, some browsers are better than others. Off the top of my head, I think IE won't understand "localhost:8888/test" (no protocol given and not the standard port 80), but Firefox will at least try to access it via "http://localhost:8888/test". This kind of best-guess filling-in-the-blanks is fine, I think; any further auto-correction would be doing too much.
Safari for example will try to auto-guess domain names for you. If "apple/safari" yields a DNS error, it'll automatically try to complete the address to "apple.com/safari". With your URL it might try to complete it to "http://http.com//example.com/a9noaa.asp", which might yield a page if http.com exists. There's just no one way of doing it, therefore it shouldn't be done at all.