DCOM Hardening and Delphi Clients: Why does it work?

We have the following setup: two machines at the latest patch level of Windows updates, with no registry patches applied to bypass DCOM hardening.
That means a client which tries to connect to the server needs at least the authentication level RPC_C_AUTHN_LEVEL_PKT_INTEGRITY.
On the server side we've got a tiny DCOM service provided by MS (C++) for tests (it takes two numbers and returns the sum). On the client side there is a caller, also from MS (C++), which receives the sum.
The client tool offers the possibility to set the authentication level.
So we've built the same client, modeled on the test tool, but in Delphi, connecting to the DCOM service provided by MS (C++).
But unfortunately it seems that the Delphi client ignores the setting entirely.
The tests with the MS tool behave as expected:
Doing a test with the MS tool at auth level RPC_C_AUTHN_LEVEL_CONNECT (= 2), the connection is refused. The server's event log shows the expected error: '10036 - The server-side authentication level policy does not allow the user ...'
Doing it with the correct level, RPC_C_AUTHN_LEVEL_PKT_INTEGRITY, it works.
But now look what happens when using the Delphi tool:
Doing a test with the Delphi test tool at auth level RPC_C_AUTHN_LEVEL_CONNECT (= 2): it works!
Doing a test with the Delphi test tool at auth level RPC_C_AUTHN_LEVEL_PKT_INTEGRITY (= 5): it works!
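For reference, here is a minimal sketch of how a Delphi client can explicitly apply an authentication level to a DCOM interface proxy, assuming the RPC_C_* constants from Winapi.ActiveX and an interface already obtained from the server; the procedure name is ours. If a client never actually makes such a call (or makes it on the wrong interface), COM negotiates its own level, and patched Windows clients are said to raise activation requests below packet integrity automatically, which could explain results like the ones above:

uses
  Winapi.ActiveX, System.Win.ComObj;

procedure ApplyAuthLevel(const Intf: IUnknown; AuthLevel: LongWord);
begin
  // Apply the requested authentication level to this specific proxy.
  // Hardened servers require at least RPC_C_AUTHN_LEVEL_PKT_INTEGRITY (= 5).
  OleCheck(CoSetProxyBlanket(Intf,
    RPC_C_AUTHN_WINNT,            // authentication service
    RPC_C_AUTHZ_NONE,             // authorization service
    nil,                          // server principal name
    AuthLevel,                    // e.g. RPC_C_AUTHN_LEVEL_PKT_INTEGRITY
    RPC_C_IMP_LEVEL_IMPERSONATE,  // impersonation level
    nil,                          // use current credentials
    EOAC_NONE));                  // no extra capabilities
end;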
It's uncommon to ask a question because something works, but I'm not happy with this, because I don't understand what's happening.
Any ideas?
Thanks in advance.
Vesan.

Related

SignalR Issue when Load Balanced on Netscalers

We are attempting to deploy a SignalR site on a Citrix NetScaler, as opposed to the current deployment on a single server. There are three servers in the farm. If you navigate to any single server, SignalR comes up fine. If you go to the NetScaler address, you get this:
WebSocket connection to
'wss://mysite.com/myapp/signalr/connect?transport=webSockets&clientProtocol=1.5&connectionToken=(token_displayed_here)'
failed: Error during WebSocket handshake: net::ERR_CONNECTION_RESET
After this error, there is about a 10-15 second delay, then it starts working. If I attempt to disable WebSockets, as I have read that NetScalers still have issues with them, the error goes away but the delay remains. I believe the delay is caused by it trying to connect with ServerSentEvents and failing that as well. It appears that only long polling may be working over the NetScaler.
We have checked the NetScaler WebSocket settings, made sure the servers have the correct machine keys, set up a backplane (tried Redis and an Oracle NuGet package, as that's our typical DB), and checked the OWIN versions and web.config settings: all of the stuff that Google told me to do that I could find, but we still get this error and delay. One thing that I did find is that NetScalers have issues with wss, but I haven't been able to find anything about how to account for this. Most of the information found was for people using other load-balancing technology.
Is using SignalR (or more specifically, WebSockets or ServerSentEvents) with a NetScaler even doable, and if so what could be causing this problem?

SQL Server 2012 mirror in azure VM - on second failover app loses connectivity

We've got a mirrored SQL Server 2012 database setup on Azure VMs: two servers plus a witness, all using client certificates, with SQL logins that have the same SIDs set.
When testing our app from a different VM, everything works as expected when we manually failover the database, there's a one second wait and then it continues to operate quite happily.
If we then do another manual failover, i.e. moving the principal back to the original server, the app errors and throws a 'No such host is known' error. Recycling the app pool fixes the issue, but this clearly isn't workable in production when one of the servers is updated, followed by the other at some later point (both are in an availability set).
The host not known error is somewhat baffling as it was communicating with it happily before the initial failover, and will again after the app pool recycle.
Here's the connection string as it is right now, after a lot of faffing around:
"Data Source=server1,1433;Failover Partner=server2,1433;Initial
Catalog=;MultipleActiveResultSets=True;User Id=user;
Password=password; Network=dbmssocn;Connect Timeout=60; async = true;"
providerName="System.Data.SqlClient"
The app is running on .NET 4.5.2, so it should be up to date with hotfixes, and we're out of ideas after much Googling with Bing.
I've just solved a problem that I had that looks very similar to your problem. I'd get the host not known error whenever the database switched from the first one listed in the web.config file to the failover one. It was fine switching from the failover to the primary.
The problem that I had was that I set up the database mirroring using server names but my web server did not know the database servers by name. Once I fixed this, I was able to get the failover working smoothly both ways.
This is what I think was happening:
I set up the mirroring using the names SQL1 and SQL2 as the principal and mirror servers
I have their ip addresses in my connection string: 10.1.1.5 and 10.1.1.6
The application tries to get to the first server 10.1.1.5 and succeeds and is then told that the mirror server is SQL2
SQL1 goes down and the database is successfully switched to the mirror server.
The web application attempts to connect, fails and determines that it should try the second server.
It tries to connect to SQL2, which it doesn't know, and fails with the message that the host is unknown.
This answer would only apply to your situation if you actually put IP addresses in your web.config, and server1,1433 and server2,1433 are placeholders masking the IP addresses that you actually used.
I haven't really solved the naming issue though. I just added the two database server names to the HOSTS file, which isn't an acceptable long-term solution but does prove my theory about what my problem was.
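Concretely, that HOSTS workaround amounts to entries like the following, using the example names and addresses from the list above:

10.1.1.5    SQL1
10.1.1.6    SQL2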
I am researching a setup just like yours, and upon reading this and the response by Steve Kaye, I'm wondering if you have the SQL Browser service running. Take a look at this article for how SQL Browser comes into play:
http://blogs.msdn.com/b/spike/archive/2010/12/15/running-a-database-mirror-setup-with-the-sqlbrowser-service-off-may-produce-unexpected-results.aspx

SOAP server couldn't work correctly behind some proxy/firewall

I have a SOAP server/client application written in Delphi XE that had been working fine for some time, until a user ran it on Windows 7 x64 behind a corporate proxy/firewall. The application sends and receives a TSOAPAttachment object in the request.
The Problem:
Once the first request from this user is received and processed, the server cannot successfully process any request (from any user) that comes after it.
The server still responds to requests, but the SOAPAttachment of each request seems corrupted after the first one from this user, which is why the requests can't be processed successfully.
After adding many debug logs to the server, I noticed that the TSOAPAttachment.SourceStream in the request's parameters becomes inaccessible (or empty), and TSOAPAttachment.CacheFile is also empty. Therefore, whenever the server tries to use the SourceStream, it raises an Access Violation.
Further investigation found that the BorlandSoapAttachment(n) file generated in the temp folder by the first request still exists and is locked (it should be deleted when a request completes normally), and the BorlandSoapAttachment(n+1) files from the following requests pile up.
The SOAP server will work again after restarting IIS or recycling the application pool.
It is quite certain that this is caused by the proxy or the user's network, because when the same machine runs outside this network, it works fine.
To add more mystery to the problem, running the application on Windows XP behind the same proxy has no problem AT ALL!
Any help or recommendation is very much appreciated, as we have been stuck in this situation for some time.
Big thanks in advance.
If you are really sure that you have debugged all the server logic that handles the attachments, trying to discover any piece of code that could fail specifically on Windows 7, I would suggest:
1) Use a network sniffer (Wireshark is good for this task), make two subsequent requests with the same data/parameter values, and compare the HTTP contents. This analysis should be done both on the client (to see if the data always leaves the client machine with the same content) and on the server, to analyze the incoming data;
2) I faced a similar situation in the past, and my attempts to really understand the problem were unsuccessful. I worked around the problem by sending files as Base64-encoded string parameters instead of using SOAP attachments. The side effect of using Base64 is an increase of ~30% in the size of the data to be sent, which can be significant if you are transferring large files.
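As an illustration of that workaround, here is a minimal sketch of the client-side encoding, assuming the EncdDecd unit that ships with Delphi XE; the function name is ours, and passing the result as a plain string parameter (with DecodeStream reversing it on the server) is the idea, not code from the original application:

uses
  Classes, SysUtils, EncdDecd;

function FileToBase64(const FileName: string): string;
var
  Input: TFileStream;
  Output: TStringStream;
begin
  Input := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    Output := TStringStream.Create('');
    try
      EncodeStream(Input, Output); // Base64-encode the file contents
      Result := Output.DataString; // send this as an ordinary SOAP string parameter
    finally
      Output.Free;
    end;
  finally
    Input.Free;
  end;
end;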
Remember that SOAP attachments create temp files on the server, and Windows 7 has different file access rules than Windows XP. I don't know if this could explain the first call being processed and the others not, but maybe there is something related to file access.
Maybe it is a UAC (User Account Control) problem under Windows 7. Try running the client on Windows 7 "As Administrator" and see if it works properly.

Delphi XE – Datasnap Filter problems

I have a TCP/IP DataSnap XE server that uses a PC1 and a ZLib filter.
On the client, both of these filters are defined in the DataSnap TSQLConnection.
When the client connects to the server I get a "Connection Closed Gracefully" error message.
If I only use the PC1 filter on its own, there is no problem.
If I only use the ZLib filter on its own, there is no problem.
Any ideas on how I can get both filters working at the same time?
You need to deploy the libeay32.dll and ssleay32.dll with your client application as well.
A quote from my Delphi XE DataSnap Development courseware manual:
"If you deploy the DataSnap standalone server, using TCP/IP and the RSA and PC1 filters, then you must also deploy two Indy specific SSL DLLs: libeay32.dll and ssleay32.dll – or make sure they already exist at the server machine. These DLLs are needed for the RSA filter (which encrypts the password used by the PC1 filter). Without these two DLLs, any client who wants to connect to the server will get an “Connection Closed Gracefully” message, because the server was unable to load the two DLLs to start the RSA filter to encrypt the PC1 keys, etc.
By the way, the same two DLLs will be required for any DataSnap client, whether connected to the TCP/IP server using the RSA and PC1 filters, or whether connected to the ISAPI filter using HTTPS."
Groetjes, Bob Swart
It is probably a bug in DataSnap. I have exactly the same problem and here is the QC report.
http://qc.embarcadero.com/wc/qcmain.aspx?d=91180
Vote on QC report to be fixed and wait for an update of Delphi-XE.
Edit 1
A crazy idea, don't specify filters on the client.
Here is a paper from Pawel Glowacki on Transport Filters.
http://edn.embarcadero.com/article/41293
He specifically mentions that you should add ZLibCompression to the Filters property of the DataSnap driver on the client.
I have tested not doing so, and it works just fine. You do have to add DBXCompressionFilter to the uses clause, otherwise you get a "ZLibCompression is not registered" error.
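For reference, that means having the unit in the client's uses clause; in Delphi XE it is simply (later, namespaced versions call it Data.DBXCompressionFilter):

uses
  DBXCompressionFilter; // registers the ZLibCompression filter on the client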
With PC1 and ZLibCompression on the server and no filter on the client, everything seems to work as expected. I have checked the traffic, and it is encrypted and compressed.
Until someone from Embarcadero confirms that this is the way it should be I would think twice before I used it.
Edit 2 Here is a post on Embarcadero Discussion Forums by Bob Swart saying that it is enough to add the filters on the server. Not Embarcadero directly but pretty close :)
https://forums.embarcadero.com/thread.jspa?threadID=48875&tstart=0
"Until someone from Embarcadero confirms that this is the way it should be I would think twice before I used it."
This is true. If you don't specify filters on the client, it is told in the initial handshake protocol during connection what the server's filters are, and it adds them automatically. This is a perfectly reasonable and safe way to use filters.
Note, however, that this isn't true in the reverse. Servers do not adopt filters from a connecting client. If you have an RSA filter on the client but not a matching one on the server, then you will get an exception on connection, saying the server has no matching RSA filter. Any other filter on the client but not on the server will be ignored.
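In code, this means the client-side connection can simply omit the Filters parameter; a minimal sketch for a DataSnap TSQLConnection, where the driver name, host, and port values are illustrative:

// No 'Filters' entry in Params: during the connection handshake the
// client learns which filters the server uses (e.g. ZLibCompression
// and PC1) and adds matching ones automatically.
SQLConnection1.DriverName := 'DataSnap';
SQLConnection1.Params.Values['HostName'] := 'localhost';
SQLConnection1.Params.Values['Port'] := '211';
SQLConnection1.Open;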
Try reversing the order of the filters, so that the client's order is always the opposite of the server's.
e.g.
Server:
Filters = <
  item
    FilterId = 'ZLibCompression'
    Properties.Strings = (
      'CompressMoreThan = 1024')
  end
  item
    FilterId = 'PC1'
    Properties.Strings = (
      'Key = test')
  end>
Client:
Params.Add('Filters = {"PC1": {"Key": "test"}, "ZLibCompression": {"CompressMoreThan": "1024"}}');

How can I update a DataSnap server while clients are still connected?

We use stateful DataSnap servers for some business logic tasks and also to provide clientdataset data.
If we have to update the server to modify a business rule, we copy the new version into a new, empty folder and register it (depending on the Delphi version, just by launching it or by running the TRegSvr utility).
We can do this even while the old server instance is running. However, after registering the new version, all new client connections will still use the currently running (old) server instance. All clients have to disconnect first; only then will the new server be used for the next clients.
Is there a way to direct all new client connections to the new server, immediately after registering?
(I know that new or changed method signatures will also require a change and restart of the clients but this question is about internal modifications which do not affect the interface)
We are using socket connections, and all clients share the same server application (only one application window is open). In the early days we used a different configuration of the remote data module, which resulted in one app window per client. Maybe this could be a solution? (because every new client would launch the currently registered executable)
Update: does Delphi XE offer some support for 'hot deployment' (of updated servers)? We use Delphi 2009 at the moment but would upgrade to XE if it offered an easier implementation of 'hot deployment'.
You could separate your app server into two new servers: one being a simple proxy object that redirects all methods (and optionally contains state info, if any) to the second one, which actually implements your business logic. You would also need to implement a "silent reconnect" feature within your proxy server in order not to disturb connected clients whenever you decide to replace the business app server. I never did such a design myself before, but I hope the idea is clear.
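Never having built one, here is only a rough sketch of what such a proxy method could look like in a classic COM-based DataSnap remote data module; the IBusinessServer interface, the ReconnectBackend helper, and the AddNumbers method are all hypothetical:

uses
  Classes, DataBkr;

type
  TProxyServer = class(TRemoteDataModule)
  private
    FBackend: IBusinessServer;   // hypothetical reference to the real app server
    procedure ReconnectBackend;  // hypothetical: attach to the currently registered backend
  public
    function AddNumbers(A, B: Integer): Integer;
  end;

function TProxyServer.AddNumbers(A, B: Integer): Integer;
begin
  try
    Result := FBackend.AddNumbers(A, B); // plain pass-through
  except
    // Backend was replaced: reconnect silently and retry once,
    // so the connected client never notices the swap.
    ReconnectBackend;
    Result := FBackend.AddNumbers(A, B);
  end;
end;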
Have you tried renaming the current server and placing the new one in the same location with the correct name (versus changing the registry location)? I have done this for COM libraries before with success. I am not sure if it would apply to remote launch rules, though, as it may look for an existing instance to attach to instead of launching a completely fresh server.
It may be a bit hackish, but you could have the client call a method on the server indicating that a newer version is available. This would allow it to perform any necessary cleanup so it doesn't end up talking to both the existing server instance and the new server instance at the same time.
There is probably not a simple answer to this question, and I suspect that you will have to modify the client. The simplest solution I can think of is to have a flag on the server (a property, or an out parameter on some commonly called method) that the client checks periodically and that tells the client to disconnect and reconnect (called something like ImBeingRetired).
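For illustration, the flag could be as simple as this; TServerMethods, FRetiring, and the client-side names are hypothetical:

// Server side: a commonly called method exposes the flag.
function TServerMethods.ImBeingRetired: Boolean;
begin
  Result := FRetiring; // set to True once the new server version is registered
end;

// Client side: poll periodically and bounce the connection when told to.
if Server.ImBeingRetired then
begin
  Connection.Close; // let go of the old instance
  Connection.Open;  // reconnect; COM launches the newly registered server
end;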
It's also possible to write callbacks under certain circumstances for DataSnap (although I've never done this). This would allow the server to inform the client that it should restart or reconnect.
The last option I can think of (that hasn't already been mentioned) would be to make the client/server stateless, so that every time the client wants something it connects, gets what it wants then disconnects.
Unfortunately none of these options are the answer you want to your question, but might give you some ideas.
1. (Optional) Set up VMware vSphere or ESX, or find a hosting service that already has one.
2. Store the session variables in a DB.
3. Prepare two web boxes with two distinct IP addresses and deploy your stuff.
4. Set up DNS, a firewall, a load balancer, or a BSD VM so the name "example.com" resolves to web box 1.
5. Deploy the new version to web box 2.
6. Switch over to web box 2 using whatever routing method you chose.
7. Deploy the new version to web box 1 if things look OK.
Using DNS is probably easiest, but it takes time for the mapping to propagate to the client (if the client is outside your LAN), and two clients may see different results. Some firewalls have an IP address mapping feature that lets you map a public IP address to an internal IP address. The ideal way is to use a load balancer, configure it to 50:50, and change it to 100:0 when you want to do the upgrade, but that costs money. A cheaper alternative is to run a software load balancer on a BSD VM, but that probably requires some work.
Edit: What I meant to say is session variables, not sessions. You said the server is stateful. If it contains business logic that uses session variables, they need to be stored externally to be preserved across the reconnection during switch-over. The actual DataSnap session will be lost, so when you shut down web box 1 during the upgrade, the client will get a "Session {some-uuid} is not found" error from web box 1, and it will reconnect to web box 2.
You could also use three IP addresses (one public and two private) so the client always sees one address, which is a better method.
I have done something similar by having a specific table which held my "data version". Each time I updated the server or changed a system-wide global setting, I would increment this field. When a client starts, it always checks this value, and it checks again before any transactions/queries. If the value is ever different from when the client first started, it needs to go through my re-initialization logic, which could easily include a re-login to an updated server.
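A sketch of that startup/pre-transaction check on the client; the table and field names (DATA_VERSION, VERSION_NO), the query component, and the re-init helper are illustrative:

// Compare the server's data version against the one seen at startup.
qryVersion.SQL.Text := 'SELECT VERSION_NO FROM DATA_VERSION';
qryVersion.Open;
try
  if qryVersion.Fields[0].AsInteger <> FVersionAtStartup then
    ReinitializeAndRelogin; // hypothetical: includes re-login to the updated server
finally
  qryVersion.Close;
end;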
I was using IIS to publish my app servers, so the data that would change was the path to the app server. I kept the old ones available to respond to any existing transactions that were in play. Eventually these would be removed once I knew there were no more client connections to that version.
You could easily handle knowing which versions to keep around if you log which server each client last connected to (and therefore would know about).
For newer versions (Delphi 2010 and up), there is an interesting solution
for systems using the HTTP transport:
Implementing Failover and Load Balancing in DataSnap 2010 by Andreano Lanusse
and a related question for the TCP/IP transport:
How to direct DataSnap client connections to various DS Servers?
