I have two Delphi clients of one ASMX service. One client is a normal desktop application and the other is an Outlook add-in.
Everything works fine (SOAP calls to the ASMX service) on my PC. But one of my customers has problems with SOAP calls from the Outlook add-in, while the desktop application works as expected on the same machine.
The problem with the SOAP call within the Outlook add-in is the error "XML document must have a top level element". The reason for this error is an empty response to the call. Take a look at the logs:
8/7/2013-1:12:29 PM Response:
8/7/2013-1:12:29 PM XML document must have a top level element.
Line: 0
XMLDoc.TXMLDocument.LoadData + $2AA
XMLDoc.TXMLDocument.SetActive + $A8
XMLDoc.TXMLDocument.LoadFromStream + $29
Rio.TRIO.Generic + $70F
Response stream is retrieved in HttpRio AfterExecute method using call
fResponse.LoadFromStream(Response);
The question is: what is the reason for this error? How can one client work fine on the same machine while the other does not? What can I do to reproduce and diagnose this situation?
P.S. I know it would be great to have sniffed HTTP packets, but I don't have access to the customer's PC to run an HTTP sniffer.
Your log doesn't show the timestamp of the original request. If the time difference happens to fall on a 30- or 60-second boundary, then this is almost certainly a timeout situation, i.e. no response was ever received. If it happens right away, then your request is likely not getting out and is being blocked by a firewall.
Related
We have a single-page web application. One of the functions of the application is to supervise the connection path from the client back to the server. This is implemented with a periodic Ajax HTTP request in JavaScript to the server every 60 seconds. This request acts as a heartbeat.
After a session is started, the server looks for that heartbeat. If it fails to receive a heartbeat request after a reasonable amount of time, it takes specific action.
The client also looks for a response to that heartbeat request. If it fails to receive a response after a reasonable amount of time, it displays a message on the screen via javascript.
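The mechanism above can be sketched roughly as follows. This is a minimal illustration, not the application's actual code; the `/heartbeat` endpoint, the `status` element id, and the timeout values are all assumptions:

```javascript
// Heartbeat sketch (endpoint name, element id, and intervals are assumptions,
// not taken from the application described above).
const HEARTBEAT_INTERVAL_MS = 60000; // one request per minute
const RESPONSE_TIMEOUT_MS = 10000;   // the "reasonable amount of time"

let lastResponseAt = Date.now();

// Pure helper: true when no successful response was seen inside the window.
function heartbeatFailed(now, lastOk, timeoutMs) {
  return now - lastOk > timeoutMs;
}

function startHeartbeat() {
  setInterval(async () => {
    try {
      const res = await fetch('/heartbeat', { cache: 'no-store' });
      if (res.ok) lastResponseAt = Date.now();
    } catch (e) {
      // Network error: leave lastResponseAt unchanged so the check below fires.
    }
    if (heartbeatFailed(Date.now(), lastResponseAt,
                        HEARTBEAT_INTERVAL_MS + RESPONSE_TIMEOUT_MS)) {
      document.getElementById('status').textContent = 'Connection to server lost';
    }
  }, HEARTBEAT_INTERVAL_MS);
}
```

Note that a sketch like this only displays the failure message if the `setInterval` callback keeps running; if the browser suspends or kills the timer (which is what the symptoms below suggest), no code on the client side gets a chance to react.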
We are getting reports from the field where a Chromium-based version of Edge is failing. Communication between the client and server is apparently failing. The server is seeing those heartbeat requests cease and is taking that specific action. However, the client is not taking the expected action on its side: it is not displaying the message indicating a failed heartbeat request. It almost appears as though the JavaScript stopped running altogether.
The thing is, though: the customer has reported that if they disable automatic updates to Microsoft Edge, the application runs fine. If the update check is allowed to occur, the application eventually fails as described above. Note that this apparently happens when Edge is merely checking for updates - it is already up to date.
Updates are being turned off using several GUID-named registry keys under [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\EdgeUpdate].
Any thoughts?
I have a Delphi XE5 web service and a Win32 client exchanging SOAP calls via IIS. It has been heavily used for many years. One function sends a PDF file as a SOAP attachment to the client.
This suddenly stopped working on my development machine (Win 7) about mid-July this summer, although no direct code changes were made. The same code continues to work in the production setting with no issues. This must be something on my local machine, but I cannot find it. The server creates the file and sends it without issue. But then the client waits about 2 minutes and gets an exception:
The connection with the server was reset
This is the case for any call which has a TSoapAttachment parameter, even if it is left null.
I have a SOAP server/client application written in Delphi XE that worked fine for some time, until a user ran it on Windows 7 x64 behind a corporate proxy/firewall. The application sends and receives a TSOAPAttachment object in the request.
The Problem:
Once the first request from this user is received and processed, the server cannot successfully process any request (from any user) that comes after it.
The server still responds to requests, but the SOAPAttachment of each request seems corrupted after the first one from this user, which is why the requests can't be processed successfully.
After adding many debug logs to the server, I noticed that TSOAPAttachment.SourceStream in the request's parameter becomes inaccessible (or empty), and TSOAPAttachment.CacheFile is also empty. Therefore, whenever the SourceStream is used, it raises an Access Violation error.
Further investigation found that the BorlandSoapAttachment(n) file generated in the temp folder by the first request still exists and is locked (it should be deleted when a request completes normally), and the BorlandSoapAttachment(n+1) files of the following requests are piling up.
The SOAP server will work again after restarting IIS or recycle the application pool.
It is quite certain that this is caused by the proxy or the user's network, because when the same machine runs outside this network, it works fine.
To add more mystery to the problem, running the application on WinXP behind the same proxy has no problem AT ALL!
Any help or recommendation is very much appreciated, as we have been stuck in this situation for some time.
Big thanks in advance.
If you are really sure that you have debugged all your server logic that handles the attachments, trying to discover any piece of code that could fail specifically on Windows 7, I would suggest:
1) Use a network sniffer (Wireshark is good for this task): make two subsequent requests with the same data/parameter values and compare the HTTP contents. This analysis should be done both on the client (to see whether the data always leaves the client machine with the same content) and on the server, to analyze the incoming data;
2) I faced a similar situation in the past, and my attempts to really understand the problem were not successful. I worked around the problem by sending files as Base64-encoded string parameters instead of SOAP attachments. The side effect of using Base64 is an increase of roughly a third in the size of the data to be sent, which can be significant if you are transferring large files.
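The workaround amounts to encoding the file bytes as text and passing them through an ordinary string parameter. A tiny sketch of the encoding and its size overhead (shown in Node only for illustration; the original application is Delphi, where `TNetEncoding.Base64` plays the same role):

```javascript
// Base64 emits 4 output characters per 3 input bytes, so the
// encoded payload is ~33% larger than the raw file.
function encodeFileForSoap(bytes) {
  return Buffer.from(bytes).toString('base64');
}

function decodeFileFromSoap(base64Text) {
  return Buffer.from(base64Text, 'base64');
}

const payload = Buffer.alloc(3000, 0xab);      // stand-in for file contents
const encoded = encodeFileForSoap(payload);
console.log(encoded.length / payload.length);  // ~1.33 size ratio
```

Because the file travels inside the SOAP body as plain text, it bypasses the BorlandSoapAttachment temp-file machinery entirely, which is what made the workaround attractive in this situation.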
Remember that SOAP attachments create temp files on the server, and Windows 7 has different file access rules than Windows XP. I don't know if this could explain the first call being processed and the others not, but maybe there is something related to file access.
Maybe it is a UAC (User Account Control) problem under Win 7. Try running the client in Win 7 "As Administrator" and see if it works properly.
We have a web application which does some computing and returns a file to the client. When the computing takes less than 5 minutes, everything works fine on IE and Chrome and we get the file. But if the computing takes more than 5 minutes, IE times out with an "Internet Explorer cannot display the webpage" message, whereas Chrome keeps running and eventually gets the file from the server.
I've tried changing WinInet registry settings like KeepAliveTimeout, ReceiveTimeout, and ServerInfoTimeout, but it didn't help. Clicking the Diagnose Connection Problems button shows the message "Windows received an HTTP error message: 403 (forbidden) from ", which I think is because it tries to access the site again without credentials and fails. In Fiddler the request terminates with a 504 status and the message "ReadResponse() failed: The server did not return a response for this request." Interestingly, I once observed that even Chrome times out if Fiddler is running (though I haven't verified this by re-running).
This is an ASP.Net web application using MVC framework.
I've spent a considerable amount of time on this but haven't been able to find a solution. Any useful pointers would really be appreciated.
From KB181050
You can usually break down long processes into smaller pieces. Or, the server can return status data to update users about the process. In addition, you can create a long server process that has a messages-based or asynchronous approach so that it returns immediately to the user after the job is submitted, and then notifies the user after the long process is finished.
In other words, create a <div> and fill that <div> immediately with a "processing" value once the request has been accepted by the server. Then use Ajax or JavaScript to update that same <div> with the result when processing is finished.
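The submit-then-poll pattern could look roughly like this. The `/jobs/...` endpoint, the `status` element id, and the status values are all hypothetical names chosen for the sketch, not part of the application above:

```javascript
// Pure helper mapping a server-reported job status to the <div> text.
function renderStatus(status) {
  if (status === 'done') return 'Your file is ready.';
  if (status === 'failed') return 'Processing failed.';
  return 'Processing…';
}

// The server returns immediately after accepting the job; the page then
// polls a status endpoint instead of holding one long request open past
// IE's 5-minute limit.
async function pollJob(jobId) {
  const div = document.getElementById('status');
  div.textContent = renderStatus('pending');
  const timer = setInterval(async () => {
    const res = await fetch(`/jobs/${jobId}/status`); // hypothetical endpoint
    const { status } = await res.json();
    div.textContent = renderStatus(status);
    if (status === 'done' || status === 'failed') clearInterval(timer);
  }, 5000);
}
```

Each individual poll completes in seconds, so no single HTTP request ever lives long enough to hit the browser or proxy timeout, no matter how long the server-side computation takes.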
Whenever a web request is made by Visual Studio to TFS, Fiddler will show a 401 Unauthorized error. Visual Studio will then try again with a proper Authorization Negotiate header in place with which TFS will respond with the proper data and a 200 status code.
How can I get the correct headers to be sent the first time to stop the 401?
This is how Windows Integrated Authentication (NTLM) works. NTLM is a connection-based authentication mechanism and actually involves 3 calls to establish the authenticated session.
The TFS API then goes to extraordinary lengths to make sure that this handshake is done in the most efficient way possible. It will keep the authenticated connection open for a period of time to avoid this handshake where possible. It will also do the initial authentication using an HTTP payload with minimal content and then send the real message if the message you were going to send is over a certain length. It does a bunch of other tricks as well to optimise the connection to TFS.
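The three-leg exchange can be modelled as a small state machine; this is only an illustration of the flow, and the token strings are placeholders (real Type 1/2/3 messages are binary NTLM structures, not these strings):

```javascript
// Given the last response, decide what the client sends next.
// Leg 1: anonymous request -> 401. Leg 2: negotiate -> 401 + challenge.
// Leg 3: authenticate -> 200.
function nextRequest(responseStatus, wwwAuthenticate, state) {
  if (responseStatus !== 401) return { done: true, state };
  if (state === 'anonymous') {
    // First 401: answer with a Type 1 (negotiate) message.
    return { authorization: 'Negotiate <type1-token>', state: 'negotiated' };
  }
  if (state === 'negotiated' && wwwAuthenticate.startsWith('Negotiate')) {
    // Second 401 carries the server challenge; answer with Type 3.
    return { authorization: 'Negotiate <type3-token>', state: 'authenticated' };
  }
  return { failed: true, state };
}
```

This is why the initial 401 in Fiddler is not an error to fix: it is the first leg of the protocol, and the client cannot compute the final Authorization header until it has seen the server's challenge.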
Basically, I would just leave it alone as it works well.
You will see that a web browser does this too when communicating with a web site. It will always try to give away the minimum amount of detail with the first call; if this fails, it reveals a little more about you.
This is by design and for a very good reason.
This is how it's always done: make the request, get the 401 back, then send the authorization. It's part of the authentication protocol for HTTP.