All the errors that occur in our web application are logged to a database, and I've found a 404 error that has occurred hundreds of times in the last month. The page users are attempting to access is "https://companysite.com/applicationsite/:/0"
The application is a classic ASP site with some ASP.NET MVC 3 included through iframes, although this error appears to be occurring on the classic ASP side, judging by the URL.
I've done a search through the entire code base (classic and .NET) for the string ":/0", but I'm not seeing anything. I'm at a loss as to how this error is occurring. It is happening too often and for too many users to be intentional.
Would anyone happen to know why users are getting this error? Unfortunately I only have the database logs, so I'm not really sure how to reproduce this error, nor do I know how users are coming across it.
I would suspect that someone (outside of your site) is hitting that URL, which does not exist.
It could simply be that a spider has that URL indexed and is trying to crawl it. Or maybe that is a path to some application that has a vulnerability and someone is testing to see if you are running that application.
Try logging the IP address the requests are coming from, along with the User-Agent. If it is a web crawler, you should be able to tell from the User-Agent.
You could also block the IP address from accessing your site.
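If the failing requests reach the ASP.NET side, a minimal sketch of that kind of logging in Global.asax.cs might look like this (Log.Write is a hypothetical stand-in for whatever database logging you already have):

    // Record who is requesting the missing page whenever a 404 occurs
    protected void Application_Error(object sender, EventArgs e)
    {
        HttpContext context = HttpContext.Current;
        HttpException httpException = context.Server.GetLastError() as HttpException;
        if (httpException != null && httpException.GetHttpCode() == 404)
        {
            // Log.Write is a placeholder for your existing logging mechanism
            Log.Write(string.Format("404 {0} from IP {1}, User-Agent: {2}",
                context.Request.RawUrl,
                context.Request.UserHostAddress,
                context.Request.UserAgent));
        }
    }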
I'm using a domain name with this general structure: http://mydomainname.com/
However, when I click it, I get a 404 message.
And when I look in the URL, it's not http://mydomainname.com/ but surprisingly http://mydomainname.com/YkPWZ/.
How did YkPWZ/ appear automatically, and what can I do to eliminate this issue? Sometimes accessing http://mydomainname.com/ works fine, but most of the time the browser automatically tacks some random characters onto the end of the URL, throwing the 404 message. This is not browser-specific, and a few colleagues have replicated the issue on different operating systems (both desktop and iOS).
P.S. If it matters at all, I generated my website using GitHub Pages (Markdown files, not HTML).
I'm quite certain this is an issue on the GoDaddy side of things, though I'm unable to find any official documentation on the subject. As noted in comments above, the redirect isn't coming from GitHub Pages.
I found an old thread discussing the issue. Here is a brief summary:
- GoDaddy may use redirects like this to handle load balancing on their shared hosting servers.
- In several cases, users contacted GoDaddy to ask about the problem and had the issue resolved, but were never told the technical specifics of what was happening.
If you wish to stay with GoDaddy, I recommend contacting them and sending them the link above. They may be able to resolve the issue for you, though I wouldn't expect an explanation.
Alternatively, you can switch to another web host; GoDaddy isn't rated very highly in many circles, and luckily there are plenty of hosts to choose from. You can also point a custom domain directly at GitHub Pages, bypassing a third-party host entirely.
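If you go the GitHub Pages route, the setup is roughly: commit a file named CNAME containing your bare domain to the repository root, then point your DNS at GitHub. A sketch (the IP addresses are GitHub's documented Pages addresses at the time of writing; check their documentation for current values):

    # Contents of the CNAME file (a single line):
    mydomainname.com

    # A records at your DNS provider for the apex domain:
    mydomainname.com.  A  185.199.108.153
    mydomainname.com.  A  185.199.109.153
    mydomainname.com.  A  185.199.110.153
    mydomainname.com.  A  185.199.111.153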
We are hosted on Heroku and have the New Relic add-on. Every day I check the errors, and almost every day this error comes up.
Action: Middleware/Rack/Rack::MethodOverride#call
Type: EOFError
Message: bad content body
This is a Rails application, so I figure it's not doing anything in particular other than returning a 404 response status, because there is nothing at the URL they are trying to access.
URL: /wp-admin/admin-ajax.php
Through some google-fu I found an article describing this as a brute-force attack on WordPress sites.
My specific question is:
Should I worry about this?
I inherited the site, and I'm not sure if this is just something that happens, or something that Rails applications don't have to worry about. It seems fairly targeted towards WordPress, but I can't find any documentation on whether I should be doing more to stop it.
Other frequently pinged URLs that don't exist in my application:
/sites/all/libraries/elfinder/php/connector.minimal.php
/license.php
/tiny_mce/plugins/tinybrowser/upload_file.php
Any enlightenment on the subject would be great. Stack trace available if needed. Thanks in advance, overflowers.
As long as you don't have a route configured to handle those requests, the only cost is the network resources wasted on the spam. The requesters will receive a 404 Not Found response, so there is nothing they can really do except slow your site down if they flood it with requests. If a particular client does it often, you can ban its IP address.
I have a little problem with Googlebot. I have a server running Windows Server 2009; the system is called Workcube and it runs on ColdFusion. There is a built-in error reporter, so I receive a message for every error, and many of them concern Googlebot trying to follow false links that don't exist. The links look like this:
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=282&HIERARCHY=215.005&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=145&HIERARCHY=200.003&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=123&HIERARCHY=110.006&brand_id=xxblpflyevlitojg
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=1&HIERARCHY=100&brand_id=xxblpflyevlitojg
Of course, brand_id values like hoyrrolmwdgldah or xxblpflyevlitojg are invalid, and I have no idea what the problem could be. Any advice would be appreciated. Thank you all for your help! ;)
You might want to verify your site with Google Webmaster Tools, which will report the URLs it finds that error out.
Your logs are also valid, but you need to verify that it really is Googlebot hitting your site and not someone spoofing their User Agent.
Here are instructions to do just that: http://googlewebmastercentral.blogspot.com/2006/09/how-to-verify-googlebot.html
Essentially you need to do a reverse DNS lookup and then a forward DNS lookup after you receive the host from the reverse lookup.
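As a rough illustration, that two-step check could look something like this in C# (a sketch of my own; the hostname suffixes follow Google's published guidance):

    using System;
    using System.Linq;
    using System.Net;

    // True if the IP reverse-resolves to a Google hostname and that
    // hostname forward-resolves back to the same IP address.
    static bool IsRealGooglebot(string ip)
    {
        string host = Dns.GetHostEntry(ip).HostName;   // reverse DNS lookup
        if (!host.EndsWith(".googlebot.com") && !host.EndsWith(".google.com"))
            return false;
        return Dns.GetHostEntry(host).AddressList      // forward DNS lookup
                  .Any(address => address.ToString() == ip);
    }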
Once you've verified it's the real Googlebot, you can start troubleshooting. You see, Googlebot won't request URLs it hasn't naturally seen before, meaning it shouldn't be making direct object reference requests. I suspect it's a rogue bot with a Googlebot User-Agent, but if it's not, you might want to look through your site to see if you're accidentally linking to those pages.
Unfortunately you posted the full URLs, so even if you clean up your site, Googlebot will see the links from Stack Overflow and continue to crawl them, because they'll be in its crawl queue.
I'd suggest 301 redirecting these URLs to someplace that makes sense to your users. Otherwise, I would 404 or 410 these pages so Google knows to remove them from its index.
In addition, if these are pages you don't want indexed, I would suggest adding the path to your robots.txt file so Googlebot can't continue to request more of these pages.
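For example, a robots.txt entry along these lines would keep compliant crawlers away from the product-list URLs (the pattern is an assumption; adjust it to whatever you actually want blocked):

    User-agent: *
    Disallow: /index.cfm?fuseaction=objects2.view_product_list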
Unfortunately, there's no really good way of telling Googlebot to never crawl these URLs again. You can always go into Google Webmaster Tools and request that the URLs be removed from the index, which may stop Googlebot from crawling them again, but that doesn't guarantee it.
I'm having a problem with an MVC (1.0) app that I can't figure out at all. There are two versions of the site (live and UAT) hosted on the same server. For each version of the site, the same code is shared by multiple organisations, each of which has its own database (MSSQL 2005) and its own IIS (7.5) web site pointed at that shared code.
The UAT site has an update to the code and the database that is waiting to be deployed to the live site.
One of the customers ("customer A") is getting an error "104: Connection reset by peer" when they try to log in to the UAT site. They can see the login page but when they submit their login details the connection seems to be timing out (the requests seem to take ~130s to complete).
Customer A can log in fine to the live site. The other customers don't have a problem logging into the UAT site or the live site. If I try to log in as customer A, using their login details, it all works fine from within our network, and also from outside our network.
Customer A seems to be using squid as a proxy.
I can't think what the problem could be, and I've run out of ideas of things to test. The fact that I can log in as the customer fine and other customers don't have any issues seems to eliminate the code and database as problems.
What other things could I do to try and isolate the problem?
By dumping out the request data I was able to work out that something (I'm guessing the proxy) was removing the form values from the request. This obviously meant the app didn't work properly.
However, it seems whatever was removing the form data was leaving the Content-Length header unchanged, which would explain the timeouts: the client had finished sending, but the server was still waiting for the rest of a body that never arrived.
By using https instead of http (which we were going to do anyway), the request tampering seems to have stopped.
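For anyone debugging something similar, here is a minimal sketch of the kind of request dump that surfaced the problem (assuming the ASP.NET side, in Global.asax.cs; adapt it to your own pipeline):

    // Log what the server actually received for each POST. If Content-Length
    // is non-zero but the form collection is empty, or reading the form
    // blocks until the timeout, something between the client and the server
    // is tampering with the request body.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpRequest request = HttpContext.Current.Request;
        if (request.HttpMethod == "POST")
        {
            System.Diagnostics.Trace.WriteLine(string.Format(
                "POST {0}: Content-Length={1}, form fields={2}",
                request.RawUrl, request.ContentLength, request.Form.Count));
        }
    }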
I am trying to access a SharePoint site using the SharePoint object model from a console application.
I am trying to do something like this:

    // SPSite is IDisposable, so it should be wrapped in a using block
    using (SPSite site = new SPSite(sitePath))
    {
        // Operations go here
    }

This works fine when the SharePoint site and the console app are on the same machine.
However when the console app and the site are on different machines, I get an error "The Web application at "http://server/url" could not be found. Verify that you have typed the URL correctly. If the URL should be serving existing content, the system administrator may need to add a new request URL mapping to the intended application"
Here are the things that I have already done:
1) I have tried accessing the site via both the IP address and the machine name, in case it was a DNS resolution issue.
2) Initially I impersonated a farm admin account, but still could not access the site. Then I added myself as a farm admin; still no joy.
3) The site is accessible via IE, so I guess it is not a permissions issue.
4) I have tried almost all the solutions suggested by the various links that googling the error message turns up.
I am trying this on SharePoint 2010; a similar issue occurs on 2007 as well. Sometimes it's kind of frustrating to do SharePoint development, since I feel like I'm stumbling from one error to the next with no clue as to what could be wrong, and the error messages aren't helpful in the least :(
That is expected, because you can't run the server object model from a machine that isn't part of the SharePoint farm. You can use the client object model instead.
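For example, with the SharePoint 2010 client object model (referencing Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll), a remote console app can talk to the site over HTTP. A minimal sketch, assuming the site URL from your error message and default Windows credentials:

    using System;
    using Microsoft.SharePoint.Client;

    class Program
    {
        static void Main(string[] args)
        {
            // Works from any machine that can reach the site over HTTP;
            // uses the current Windows credentials by default.
            using (ClientContext context = new ClientContext("http://server/url"))
            {
                Web web = context.Web;
                context.Load(web, w => w.Title);   // queue what to retrieve
                context.ExecuteQuery();            // single round trip to the server
                Console.WriteLine(web.Title);
            }
        }
    }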