401 IIS error for SearchAdmin.asmx (SharePoint 2007)

I have a three-server SharePoint 2007 MOSS environment where my IIS logs keep getting pounded with 401.1 and 401.2 entries. The logs fill up so fast that they consume my HDD. I can tell from the IP that these entries come from POST requests from one of my front-end web servers. Here is the sequence of log lines that repeats forever (the xxx IPs are all the same):
2011-02-10 23:25:42 W3SVC951338967 xxx.xx.xxx.xx POST /SharedServices1/Search/SearchAdmin.asmx - 56738 MyDomain\SQLSAServiceAccount xxx.xx.xxx.xx Mozilla/4.0+(compatible;+MSIE+6.0;+MS+Web+Services+Client+Protocol+2.0.50727.42) 200 0 0
2011-02-10 23:25:42 W3SVC951338967 xxx.xx.xxx.xx POST /SharedServices1/Search/SearchAdmin.asmx - 56738 - xxx.xx.xxx.xx Mozilla/4.0+(compatible;+MSIE+6.0;+MS+Web+Services+Client+Protocol+2.0.50727.42) 401 2 2148074254
2011-02-10 23:25:42 W3SVC951338967 xxx.xx.xxx.xx POST /SharedServices1/Search/SearchAdmin.asmx - 56738 - xxx.xx.xxx.xx Mozilla/4.0+(compatible;+MSIE+6.0;+MS+Web+Services+Client+Protocol+2.0.50727.42) 401 1 0
I really need some help trying to understand the source of this.
Thanks for your thoughts and ideas.

401.1 and 401.2 are classic errors with SharePoint.
First, if you have set up SharePoint to use NTLM, either move to Kerberos (which requires work on the domain) or force NTLM as the only authentication provider:
cscript c:\inetpub\adminscripts\adsutil.vbs set w3svc/xxxxx/root/NTAuthenticationProviders "NTLM"
The process is described here.
If you are on IIS 7, you have to use the appcmd command as described here.
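For example, a sketch of the appcmd equivalent, assuming the default site (adjust the site name to yours); this removes Negotiate from the provider list so that only NTLM remains:
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication /-"providers.[value='Negotiate']" /commit:apphost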
Then, in some cases, a loopback check is performed. This occurs when a web request is made from and to the same box with a custom host header. The check can either be disabled, or the host name can be whitelisted in the registry using the procedure described here. Please make the effort to whitelist your hostname instead of disabling the check, as disabling it can open a security hole in your infrastructure.
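For reference, the usual whitelist from that procedure (Microsoft KB896861) is a BackConnectionHostNames multi-string value; a sketch, with the host name as a placeholder, after which the IISAdmin service must be restarted:
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 /v BackConnectionHostNames /t REG_MULTI_SZ /d your.custom.hostname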
[Edit] Following James's comment, added the advice to whitelist the hostname instead of disabling the security check.

These are not really errors. Your front-end web server (the client) is connecting to the server, and the server responds with a 401 to indicate that the client needs to authenticate.
The client then retries with authentication tokens if available...
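You can watch the same handshake outside the browser with curl; told to use NTLM, its verbose output shows the 401 challenge from the server followed by the authenticated retry (the URL and credentials below are placeholders):
curl -v --ntlm -u "MyDomain\someuser:password" http://server/SharedServices1/Search/SearchAdmin.asmx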

Related

Ngrok not passing my POST request on to localhost

I'm trying to set up a webhook for Stripe. I've created a controller, following the Stripe docs, in ASP.NET MVC running in a virtual machine (maybe that changes things?). I've been testing the action in the controller to see if I can receive POSTs, so I've been using Postman to send POST requests to my localhost, which works. But now I need to use ngrok to give my localhost a URL so that Stripe can reach it. I'm running ngrok with these parameters:
ngrok http -host-header="localhost:44368" 44368
Everything looks OK in the ngrok console. But when I try to use the forwarding URL in Postman, e.g.
https://11d1ba97.ngrok.io/StripeWebHook/Index
I get a 502 Bad Gateway message and the action method never gets hit.
I get the same problem when I try and send a test webhook from Stripe.
FYI - the request times shown in ngrok's inspection interface at localhost:4040 are all 0 ms.
Update - I was emailed by ngrok
"The trouble is the HTTPS. ngrok terminates HTTPS traffic and then forwards the unencrypted http traffic through to your local application. You want to do one of two things:
1) make your application expose an HTTP port as well and forward traffic to that
2) use ngrok's TLS tunnels (which hand of TLS traffic to you for termination). with this option you have all the complexities of doing cert management, cert mismatches, etc, just fyi. i'd recommend #1 if possible"
Question - does anyone know how to open up an HTTP port in an ASP.NET MVC app that uses HTTPS?
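One way to do that under IIS Express is to add a second, plain-HTTP binding next to the HTTPS one in the solution's .vs\config\applicationhost.config; treat this as a sketch (the HTTP port 8080 here is an arbitrary choice):
<binding protocol="https" bindingInformation="*:44368:localhost" />
<binding protocol="http" bindingInformation="*:8080:localhost" />
ngrok can then be pointed at the HTTP port instead of the HTTPS one.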
My problem was that the breakpoint in my application wasn't getting hit.
I was using
ngrok http 58533
but changing it to the following allowed my breakpoint to get hit.
ngrok http -host-header=rewrite localhost:58533
Bit late to the party :)
I could get HTTP working by unchecking the Enable SSL flag in Properties.
Step 1: Right-click the Web API project and select Properties
Step 2: Download and install the extension
https://marketplace.visualstudio.com/items?itemName=DavidProthero.NgrokExtensions
Step 3: Start ngrok Tunnel from Visual Studio
(image from https://raw.githubusercontent.com/dprothero/NgrokExtensions/master/docs/img/menu-item.png)
Step 4: Copy Forwarding http url
Step 5: Paste in Postman, and append the controller/action
you get 200! (upvotes? :))

Is it possible to increase CloudFlare time-out?

Is it possible to increase CloudFlare's time-out? If yes, how?
My code takes a while to execute and I wasn't planning on Ajaxifying it in the coming days.
No, CloudFlare only offers that kind of customisation on Enterprise plans.
CloudFlare will time out if it fails to establish an HTTP handshake within 15 seconds.
CloudFlare will also wait 100 seconds for an HTTP response from your server before showing you a 524 timeout error.
Beyond that, there can be timeouts on your origin web server itself.
It sounds like you need inter-process communication. HTTP should not be used as a mechanism for performing blocking tasks without sending responses; these kinds of activities should instead be abstracted away to a non-HTTP service on the server. By using RabbitMQ (or any other MQ) you can pass messages from the HTTP element of your server over to the processing service on your web server.
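A minimal sketch of that pattern in Node.js with amqplib (the queue name, payload, and broker URL are all made up for illustration): the HTTP handler enqueues the job and returns immediately, while a separate worker process consumes the queue and does the slow work.
// Called from the HTTP handler: enqueue the job and return right away.
// A separate worker process consumes 'report-jobs' and does the heavy lifting.
const amqp = require('amqplib');

async function enqueueReport(params) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('report-jobs', { durable: true });
  // Persistent message so the job survives a broker restart
  ch.sendToQueue('report-jobs', Buffer.from(JSON.stringify(params)), { persistent: true });
  await ch.close();
  await conn.close();
}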
I was in communication with Cloudflare about the same issue, and also with the technical support of RabbitMQ.
RabbitMQ suggested using Web STOMP, which relies on WebSockets. However, Cloudflare suggested...
Websockets would create a persistent connection through Cloudflare and there's no timeout as such, but the best way of resolving this would be to process the request in the background and respond asynchronously, serving a 'Loading...' page or similar rather than having the user wait for 100 seconds. That would also give a better experience to the user.
UPDATE:
For completeness, I will also record here that I asked CloudFlare about running the report via a subdomain and "grey-clouding" it, and they replied as follows:
I would suggest verifying why it takes more than 100 seconds to produce the reports. Disabling Cloudflare on the sub-domain allows attackers to learn your origin IP, and attackers will then attack it directly, bypassing Cloudflare.
FURTHER UPDATE
I finally solved this problem by running the report in a thread and using AJAX to "poll" whether the report had been created. See Bypassing CloudFlare's time-out of 100 seconds
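For anyone after the shape of that approach, here is a sketch in plain browser JavaScript (the /report/start and /report/status endpoints are hypothetical names): each poll is a short request that finishes well inside Cloudflare's 100-second window.
// Kick off the long-running report on the server, then poll for completion
fetch('/report/start', { method: 'POST' })
  .then(function () {
    var timer = setInterval(function () {
      fetch('/report/status')
        .then(function (r) { return r.json(); })
        .then(function (status) {
          if (status.done) {
            clearInterval(timer);                 // stop polling
            window.location = '/report/result';   // fetch the finished report
          }
        });
    }, 5000); // poll every 5 seconds
  });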
Cloudflare doesn't trigger 504 errors on timeout
504 is a timeout triggered by your server - nothing to do with Cloudflare.
524 is a timeout triggered by Cloudflare.
See: https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors#502504error
524 error? There is a workaround:
As @mjsa mentioned, Cloudflare only offers timeout settings to Enterprise clients, which is not an option for most people.
However, you can disable Cloudflare proxying for that specific (sub)domain by turning the orange cloud icon grey in the DNS settings (screenshots omitted: the record's cloud toggles from orange to grey).
Note: this disables extra functionality for that specific (sub)domain, including IP masking and SSL certificates.
As Cloudflare state in their documentation:
If you regularly run HTTP requests that take over 100 seconds to complete (for example large data exports), consider moving those long-running processes to a subdomain that is not proxied by Cloudflare. That subdomain would have the orange cloud icon toggled to grey in the Cloudflare DNS settings. Note that you cannot use a Page Rule to circumvent Error 524.
I know this cannot be treated as a solution, but there are two ways of avoiding it:
1) Since this timeout is usually related to something taking a long time to generate, this kind of work can be done through crontab, or, if you have SSH access, you can run the PHP script directly from the command line with /usr/bin/php /direct/path/to/file.php. In that case the connection is not served through Cloudflare, so the script runs as long as your server configuration allows (see the crontab sketch after this list).
2) You can create a subdomain that is not added to Cloudflare, move your script there, and run it directly through a URL, an Ajax call, or whatever.
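For option 1, a crontab entry might look like this (the every-15-minutes schedule is just an example; the script path is the placeholder from above):
*/15 * * * * /usr/bin/php /direct/path/to/file.php
Because the script runs directly on the server, Cloudflare's 100-second proxy limit never comes into play.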
There is a good answer on Cloudflare community forums about this:
If you need to have scripts that run for longer than around 100 seconds without returning any data to the browser, you can’t run these through Cloudflare. There are a couple of options: Run the scripts via a grey-clouded subdomain or change the script so that it kicks off a long-running background process and quickly returns a status which the browser can poll until the background process has completed, at which point the full response can be returned. This is the way most people do this type of action as keeping HTTP connections open for a long time is unreliable and can be very taxing also.
This topic on Stack Overflow is high in the SERPs, so I decided to write down this answer for those who find it useful.
https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors#502504error
A Cloudflare 524 error results from a web page taking more than 100 seconds to completely respond.
This can be overridden to (up to) 600 seconds ... if you change to an "Enterprise" Cloudflare account. The cost of Enterprise is roughly $40k per year (annual contract required).
If you are fetching your results with curl, you could use the --resolve option to hit your origin IP directly rather than going through the Cloudflare proxy.
For example:
curl --max-time 120 -s -k --resolve lifeboat.com:443:127.0.0.1 -L https://lifeboat.com/blog/feed
The simplest way to do this is to increase the proxy read timeout on your origin server.
If you are using Nginx, for instance, you can simply add this to your site config in /etc/nginx/sites-available/your_domain:
location / {
    ...
    proxy_read_timeout 600s; # 600s = 10 minutes; adjust to your needs
    ...
}
If the issue persists, make sure you use Let's Encrypt to secure your server alongside Nginx, and then disable the orange cloud on that specific subdomain in Cloudflare.
Here are some resources you can check to help with that (and a certbot sketch after them):
installing-nginx-on-ubuntu-server
secure-nginx-with-let's-encrypt
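If you go the Let's Encrypt route, the usual certbot one-liner with the Nginx plugin looks like this (your_domain is a placeholder, and this assumes certbot and its Nginx plugin are installed):
sudo certbot --nginx -d your_domain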

sap.ui.model.odata.ODataModel returns 501 error on services.odata.org

I am trying to create an OData model in SAP UI5 this way:
new sap.ui.model.odata.ODataModel("http://services.odata.org/Northwind/Northwind.svc/");
but I am getting a 501 Not Implemented error!
Could you please check what's wrong?
Thanks
As far as I can see, the service is not really CORS-enabled. I have the same problem with my own examples; as soon as I am not using some kind of proxy, I get this error.
The reason is that when you send a complex request to the service, your browser automatically sends a so-called preflight request (before the actual GET), which is not a GET request but an HTTP OPTIONS request.
All the odata.org sample services currently return a 501 error for such requests.
You can e.g. use the simpleProxyServlet which is shipped with UI5, or of course any other proxy which would solve this.
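With such a proxy in place, the only change is the service URL; a sketch, assuming the proxy is mapped under proxy/ as in the answer further down:
// The request now goes to your own origin, so the browser sends no cross-origin preflight
var oModel = new sap.ui.model.odata.ODataModel("proxy/http/services.odata.org/Northwind/Northwind.svc/");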
You are getting this error because your browser refuses the request due to the same-origin policy. Here is what you should do:
Deploy the app on the same server or domain as the service that you want to call, so that both resources are in the same origin (if possible).
Disable the same-origin policy in the browser for local testing. Run Chrome with the following command: [your-path-to-chrome-installation-dir]\chrome.exe --disable-web-security --user-data-dir. Make sure that all instances of Chrome are closed before you run the command. This allows all web sites to break out of the same-origin policy and connect to the remote service directly.
-> Don't do this in your production app, as it poses a security risk.
Use a proxy.
The following documentation should help you understand this better and implement it:
Connecting with an OData Service
Request failing due to the Same-Origin Policy / Cross-Origin Resource Sharing (CORS)
Please use "proxy/http/services.odata.org/Northwind/Northwind.svc", I think it's solve your problem!

ADFS 2.0 - Proxy / Server 503 Service Unavailable

For the past several days I've been working tirelessly to set up a test environment for development with WIF and ADFS 2.0. One of the problems I am up against is that my home environment has only one IP address, and I wasn't about to stick ADFS on my main server. Therefore, I've created a dedicated virtual machine for the federation server (idp.yyy.local).
For the sake of not having direct links back to my site, 'yyy' refers to 'dgdev'. (image below)
The strange thing is, it's partially working. Here is an image detailing my infrastructure.
What's odd is that I can browse 'idp.yyy.net' over both plain HTTP and HTTPS just fine. I can also view the WS-Federation metadata perfectly. Now, I'm quite new to ADFS, but I expected that going to http://idp.yyy.net/adfs/services/trust would redirect me to a Windows login over SSL. Instead, all I'm receiving is:
Service Unavailable
HTTP Error 503. The service is unavailable.
I am using the same SSL certificate on the FS proxy and the FS. Its subject is my main domain name, yyy.net. It has several Subject Alternative Names so that I can host multiple IIS web sites over SSL on my single IP:port.
CN = yyy.net
DNS Name=www.yyy.net
DNS Name=idp.yyy.net
DNS Name=idp.yyy.local
...
IP Address=192.168.1.2
IP Address=192.168.1.3
IP Address=192.168.1.4
...
Does anyone have any idea why I'm seeing 503 Service Unavailable errors? Nothing is showing up in Event Viewer as an error (except annoying things with AppFabric, but that's another issue I've yet to touch).
Thanks in advance! Actually, many many thanks. I've exhausted every avenue and idea I could come up with as to why this might be "broken".
If anyone has an idea how I can debug this issue, I'd certainly accept that as a solution. I've tried IIS Failed Request Logging, but nothing is being generated. Where/what is hosting the ADFS services?
Things I've already looked at:
All AppPools are running.
The old ADFS 1.0 web service (asmx) is accessible just fine.
I can access issuer endpoints directly ... or at least 'windowstransport'
Well, it turns out everything had been working all along!
I spent a couple of hours ensuring the certificate was created properly. Then, after still seeing 503 and 403 errors, I realized that my proxy server's AppPool for the Default Web Site was running under "ApplicationPoolIdentity" - which is really the user:
IIS AppPool\DefaultAppPool.
I had never given that user read privileges on the ADFS certificate's private key, hence the 403 Forbidden instead of a 503. After switching the AppPool over to Network Service... voilà, 503 Service Unavailable!
So now I was sure my proxy server and ADFS server were talking just fine. Why, then, was I still seeing 503 Service Unavailable?!
I told myself to create a test application anyway. In Visual Studio, I set up a new MVC 3 web app, added my existing STS reference, set up a dummy claim, and updated the application's FederationMetadata. Then I added the new relying party to ADFS.
Opened my browser to the web app and instant success!
> GET https://mywebapp/
> Response: redirect - the Location header kicks me to my IdP (ADFS): https://idp.yyy.net/adfs/ls/?wa=wsignin1.0&wtrealm.........
> I sign in with proper credentials
> POST https://mywebapp/login << AWESOME!

How do I stop 401 responses from TFS 2008

Whenever a web request is made by Visual Studio to TFS, Fiddler will show a 401 Unauthorized error. Visual Studio will then try again with a proper Authorization Negotiate header in place with which TFS will respond with the proper data and a 200 status code.
How can I get the correct headers sent the first time, to stop the 401?
This is how Windows Integrated Authentication (NTLM) works. NTLM is a connection-based authentication mechanism and actually involves three calls to establish the authenticated session.
The TFS API then goes to extraordinary lengths to make sure that this handshake is done in the most efficient way possible. It will keep the authenticated connection open for a period of time to avoid the handshake where possible. It will also perform the initial authentication using an HTTP payload with minimal content, and only then send the real message if the message you were going to send is over a certain length. It does a bunch of other tricks as well to optimise the connection to TFS.
Basically, I would just leave it alone as it works well.
You will see that a web browser also does this when communicating with a web site. It will always try to give away the minimum amount of detail with the first call. If this fails, it will reveal a little more about you.
This is by design and for a very good reason.
This is how it's always done - request, get the 401 back, then send the authorization. It's part of the authentication protocol for HTTP.
