How can I update a DataSnap server while clients are still connected? - delphi

We use stateful DataSnap servers for some business logic tasks and also to provide TClientDataSet data.
If we have to update the server to modify a business rule, we copy the new version into a new empty folder and register it (depending on the Delphi version, just by launching it or by running the TRegSvr utility).
We can do this even while the old server instance is running. However, after registering the new version, all new client connections will still use the currently running (old) server instance. All clients have to disconnect first; only then will the new server be used for subsequent clients.
Is there a way to direct all new client connections to the new server, immediately after registering?
(I know that new or changed method signatures will also require a change and restart of the clients, but this question is about internal modifications which do not affect the interface.)
We are using socket connections, and all clients share the same server application (only one application window is open). In the early days we used a different configuration of the remote data module which resulted in one application window per client. Maybe this could be a solution, because every new client would launch the currently registered executable?
Update: does Delphi XE offer some support for 'hot deployment' (of updated servers)? We use Delphi 2009 at the moment but would upgrade to XE if it offers easier implementation of 'hot deployment'.

You could separate your app server into two new servers: one a simple proxy object that redirects all methods (and optionally holds any state) to the second, which actually implements your business logic. You would also need to implement a "silent reconnect" feature in the proxy server so that connected clients are not disturbed when you decide to replace the business app server. I have never done such a design myself, but I hope the idea is clear.

Have you tried renaming the current server and placing the new one in the same location with the correct name (versus changing the registry location)? I have done this for COM libraries before with success. I am not sure if it would apply to remote launch rules though, as the system may look for an existing instance to attach to instead of launching a completely fresh server.
It may be a bit hackish, but you could have the client call a method on the server indicating that a newer version is available. This would allow it to perform any necessary cleanup so it doesn't end up talking to both the existing server instance and the new server instance at the same time.

There is probably not a simple answer to this question, and I suspect that you will have to modify the client. The simplest solution I can think of is to have a flag (a property or an out parameter on some commonly called method) on the server that the client checks periodically and that tells the client to disconnect and reconnect (called something like ImBeingRetired); see the sketch below.
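A minimal sketch of the client side of that idea, assuming a custom method named ImBeingRetired has been added to the remote data module's interface and that the client uses a TSocketConnection (the form, timer, and component names here are hypothetical):

procedure TClientForm.CheckServerTimer(Sender: TObject);
begin
  // AppServer exposes the remote data module's interface as a Variant,
  // so the custom method can be called late-bound without new stubs.
  if SocketConnection1.AppServer.ImBeingRetired then
  begin
    SocketConnection1.Connected := False; // let go of the old server instance
    SocketConnection1.Connected := True;  // reconnect; this launches the newly registered server
  end;
end;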
It's also possible to write callbacks under certain circumstances in DataSnap (although I've never done this). This would allow the server to inform the client that it should restart or reconnect.
The last option I can think of (that hasn't already been mentioned) would be to make the client/server stateless, so that every time the client wants something it connects, gets what it wants then disconnects.
Unfortunately none of these options are the answer you want to your question, but might give you some ideas.

(Optional) Set up VMware vSphere or ESX, or find a hosting service that already has one.
Store the session variables in a database.
Prepare two web boxes with two distinct IP addresses and deploy your stuff.
Set up DNS, a firewall, a load balancer, or a BSD VM so that the name "example.com" resolves to web box 1.
Deploy the new version to web box 2.
Switch over to web box 2 using whatever routing method you chose.
Deploy the new version to web box 1 if things look OK.
Using DNS is probably easiest, but it takes time for the mapping to propagate to the client (if the client is outside your LAN), and two clients may see different results. Some firewalls have an IP address mapping feature that lets you map a public IP address to an internal IP address. The ideal way is to use a load balancer, configure it to 50:50, and change it to 100:0 when you want to do the upgrade, but that costs money. A cheaper alternative is to run a software load balancer on a BSD VM, but that probably requires some work.
Edit: What I meant to say is session variables, not sessions. You said the server is stateful. If it contains business logic that uses session variables, they need to be stored externally to be preserved across the reconnection during switchover. The actual DataSnap session will be lost, so when you shut down web box 1 during the upgrade, the client will get a "Session {some-uuid} is not found" error and will reconnect to web box 2.
You could also use three IP addresses (one public and two private) so that the client always sees one address, which is the better method.

I have done something similar by having a specific table which held my "data version". Each time I updated the server or changed a system-wide global setting, I would increment this field. When a client starts it always checks this value, and it checks again before any transactions/queries. If the value ever differs from the one read at startup, the client goes through its re-initialization logic, which could easily include a re-login to an updated server (see the sketch below).
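A minimal sketch of that version check, assuming a hypothetical SystemSettings table with a DataVersion field and a TClientDataSet wired to a provider that allows command text (all names are placeholders):

function TClientSession.DataVersionChanged: Boolean;
begin
  cdsVersion.Close;
  // Re-read the global version stamp; the provider needs
  // poAllowCommandText set for CommandText to be accepted.
  cdsVersion.CommandText := 'SELECT DataVersion FROM SystemSettings';
  cdsVersion.Open;
  Result := cdsVersion.FieldByName('DataVersion').AsInteger <> FStartupVersion;
end;

If it returns True, the client runs its re-initialization (and possibly re-login) logic before issuing the actual transaction or query.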
I was using IIS to publish my app servers, so the data that would change would be the path to the app server. I kept the old ones available, to respond to any existing transactions that were in play. Eventually these would be removed once I knew there were no more client connections to that version.
You could easily handle knowing which versions to keep around if you log which server each client last connected to (and therefore would know about).

For newer versions (Delphi 2010 and up), there is an interesting solution for systems using the HTTP transport:
Implementing Failover and Load Balancing in DataSnap 2010 by Andreano Lanusse
and a related question for the TCP/IP transport:
How to direct DataSnap client connections to various DS Servers?

Related

Getting "ECONNREFUSED" error when trying to upload to Wolkenkit Blob Server

I'm currently developing a wolkenkit application which runs on my local machine.
I want to upload a file from the wolkenkit app to the blob server (as documented here).
When sending a POST request from the server to https://local.wolkenkit.io:3001/, Node.js gives me the error ECONNREFUSED.
I've tested the POST request with another program and it works there. Any idea why it doesn't work from the wolkenkit application itself?
Thanks!
The Storing files sample you linked to shows code that is to be run in the browser, not in the backend itself. Of course, both should work, but there are a few minor differences you need to watch out for.
Fixing the host name
First, I suppose that local.wolkenkit.io in your case maps to 127.0.0.1, which is the default for wolkenkit. That means that when you try to connect to this domain from within a Docker container, the container does not reach out to the blob storage container, but stays within itself. So, the first thing that needs to be fixed is the host name.
Basically, there are two options for this: you can either set up local.wolkenkit.io so that it resolves to the external IP address of your machine. This would work, but is pretty cumbersome. The other option is to directly address the appropriate container that is responsible for blob storage, by its internal name. The internal name is <name-of-your-app>-depot-file. So you need to replace https://local.wolkenkit.io:3001/ with https://<...>-depot-file.wolkenkit.io:3001/.
Fixing the port
Second, the port is wrong. This is because the blob storage service is internally running on port 3000, externally on 3001. So instead of https://<...>-depot-file.wolkenkit.io:3001/ you need to use https://<...>-depot-file.wolkenkit.io:3000/.
Once you have done this you should not get any more errors like ECONNREFUSED, since now the service can be found.
Fixing SSL issues
Third, since you are now connecting to the blob storage service using a different domain name, the SSL certificate doesn't match any more, because it was issued for local.wolkenkit.io. As a result, you will get SSL errors when trying to connect.
The simplest way to get around this is to disable any SSL checks (albeit this is also the most insecure way to handle it!). How to do this depends on the HTTP client module you are using. E.g., in request there is an option called strictSSL that you can set to false.
Of course, what you actually should do is either use a custom certificate that includes this domain name as well, or write a function that handles the certificate check and accepts the presented certificate, especially in this case.
If you do all of this, things should work :-)
PS: I am one of the authors of wolkenkit. Thanks a lot for bringing up this issue, and we will take care of this in the future, to make storing blobs easier.

How to update a web page from requests made by another client (in rails)?

Here is my need:
I have to display some information on a web page.
The web browser is actually on the same machine (localhost).
I want the data to be updated dynamically on the server's initiative.
Since HTTP is a request/response protocol, I know that to get this functionality, the connection between the server and the client (which is local here) should be kept open in some way (WebSocket, Server-Sent Events, etc.).
Yes, "realtime" is really a fashion trend nowadays and there are many frameworks out there to do this (Meteor, etc.).
And indeed, it seems that Rails supports this functionality too (Server-Sent Events in Rails 4 and ActionCable, built on WebSockets, in Rails 5).
So achieving this functionality would not be a big deal, I guess...
Nevertheless, what I really want is to trigger an update of the web page (displayed here locally) from a request made by another client.
This picture will explain that better:
At the beginning, the browser connects to the (local) server (green arrows).
I guess that a thread is executed where all the session data (instance variables) are stored.
In order to use some "realtime" mechanism, the connection remains open and therefore thread Y is not terminated. (I guess this is how it works.)
A second user connects (blue arrows) to the server (it may or may not be the same web page) and performs some action (e.g. posting a form).
Here the response to that external client does not matter. Just an HTTP OK response is fine, but a confirmation web page could also be returned.
In any case, thread X (and/or the connection) has no particular reason to be kept alive.
Ok, here is my question (BTW thank you for reading me thus far).
How can I echo this new data to the local web browser?
I see two different ways to do this:
Path A: Before terminating, thread X passes the data (its instance variables) to thread Y, which still has its connection open. Thus the server is able to update the web browser.
Path B: Before terminating, thread X sends a request (I mean a response, since it is the server) directly to the web browser using a particular socket.
Which mechanisms should I use in either path to achieve this functionality?
For path A, how can I exchange data between threads?
For path B, how can I use an already opened socket?
And which of these two paths (or another one) is actually the best way to do it?
Again, thank you for reading this far, and sorry for my bad English.
I hope I've been clear enough in explaining my need.
You are overthinking this. There is no need to think of such low-level mechanisms as threads and sockets. Most (all?) pub-sub live-update tools (ActionCable, faye, etc.) operate in terms of "channels" and "events".
So, your flow will look like this:
Client A (web browser) makes a request to your server and subscribes to events from channel "client-a-events" (or something).
Client B (the other browser) makes a request to your server with instructions to post an event to channel "client-a-events".
The pub-sub library does its magic.
Client A gets an update and updates the UI accordingly.
Check out this intro guide: Action Cable Overview.

SQL Server 2012 mirror in azure VM - on second failover app loses connectivity

We've got a mirrored SQL Server 2012 database set up on Azure VMs: two servers plus a witness, all using client certificates, with SQL logins that have the same SID set.
When testing our app from a different VM, everything works as expected when we manually fail over the database; there's a one-second wait and then it continues to operate quite happily.
If we then do another manual failover, i.e. moving the principal back to the original server, the app errors and throws a 'no such host is known' error. Recycling the app pool fixes the issue, but this clearly isn't workable in production when one of the servers is updated followed by the other at some later point (both are in an availability set).
The 'host not known' error is somewhat baffling, as the app was communicating with that host happily before the initial failover, and will again after the app pool recycle.
Here's the connection string as it is right now, after a lot of faffing around:
"Data Source=server1,1433;Failover Partner=server2,1433;Initial
Catalog=;MultipleActiveResultSets=True;User Id=user;
Password=password; Network=dbmssocn;Connect Timeout=60; async = true;"
providerName="System.Data.SqlClient"
The app is running on .net 4.5.2, so should be up to date with hotfixes, and we're out of ideas after much Googling with Bing.
I've just solved a problem that looks very similar to yours. I'd get the 'host not known' error whenever the database switched from the first server listed in the web.config file to the failover one. It was fine switching from the failover back to the primary.
The problem that I had was that I set up the database mirroring using server names but my web server did not know the database servers by name. Once I fixed this, I was able to get the failover working smoothly both ways.
This is what I think was happening:
I set up the mirroring using the names SQL1 and SQL2 as the principal and mirror servers
I have their ip addresses in my connection string: 10.1.1.5 and 10.1.1.6
The application tries to reach the first server, 10.1.1.5, succeeds, and is then told that the mirror server is SQL2.
SQL1 goes down and the database is successfully switched to the mirror server.
The web application attempts to connect, fails and determines that it should try the second server.
It tries to connect to SQL2, which it doesn't know, and fails with the message that the host is unknown.
This answer would only apply to your situation if you actually put IP addresses in your web.config, and server1,1433 and server2,1433 are placeholders masking the IP addresses that you really used.
I haven't really solved the naming issue though. I just added the two database server names to the HOSTS file (see the example below), which isn't an acceptable long-term solution but does prove my theory about what my problem was.
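For illustration, with the server names and IP addresses from the list above, the HOSTS entries (in C:\Windows\System32\drivers\etc\hosts on the web server) would simply be:

10.1.1.5  SQL1
10.1.1.6  SQL2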
I am researching a setup just like yours, and upon reading this and the response by Steve Kaye, I'm wondering if you have the SQL Browser service running. Take a look at this article for how SQL Browser comes into play:
http://blogs.msdn.com/b/spike/archive/2010/12/15/running-a-database-mirror-setup-with-the-sqlbrowser-service-off-may-produce-unexpected-results.aspx

User Disconnection Detection (i.e. "Online Status") Daemon

Summary: is there a daemon that will do postbacks when a user connects/disconnects via TCP, or is it a good idea to write one?
Details:
There are a number of questions based around this already, but I believe this is a different "twist" on it. We're writing a Ruby on Rails web application, and we would like to be able to tell if a user is "online" or "offline", where the following definitions apply:
"online" - the user's browser is open and maintaining a TCP connection to one of our servers.
"offline" - the user's browser is no longer connected to one of our servers.
We think a convenient way of doing this is to run a completely separate "online state" server that each of our users will connect to (exactly once):
when a connection is made to the "online state" server, it will postback to our actual RoR site and let it know "this user just logged on".
when a connection is lost from the "online state" server, it will postback to our actual RoR site and let it know "this user just logged off".
This methodology seems reasonable and keeps things quite modularized (the online state server, for instance, will be quite simple, which is nice). We're able to write this online state server, but have the following questions:
Any specific problems with the above architecture that we haven't taken into account?
Is there a daemon or application out there that does this already? Why reinvent the wheel, if it has already been written?
Is there a push server out there that offers this functionality (i.e. it maintains connections to the users, but will postback or send notifications upstream to the web servers when a user connects or disconnects?)
Is this something you envisage users would install on their systems?
If you are looking for a browser-based system, WebSockets are probably your only option, using something like Socket.IO: http://socket.io/.
The node.js socket server provided as part of this project can be found on github: http://github.com/LearnBoost/Socket.IO-node
Node.js is a great platform designed for exactly this problem domain and there are a number of WebSocket servers for node.
Unless your app is entirely Ajax-based and uses a single parent page, you would need to create a persistent parent frame containing the socket that wraps your application, as each time the user clicks a link the page unloads and reloads, resulting in disconnection from and reconnection to the state server.

How to reuse a (Delphi) OLE server with a second client?

I wrote an OLE automation server (using Delphi). I usually start the OLE server manually and use it as a normal application. From time to time I start a client, which automatically connects to the existing OLE server.
When I terminate the client, the server does not terminate (at least when it was started manually before the client), but it won't accept any other OLE connection. Starting another client will launch a new server instead of reusing the first one.
How can I reuse the same server with the second client?
(Question edited to reformulate it correctly. In the original version I was asking how to prevent the server from terminating, which wasn't a good formulation)
There is an "Instancing" setting in the COM Object Wizard in Delphi. Allowed values are "Internal", "Single Instance", and "Multiple Instance".
I wanted to reuse the same COM server with multiple clients. That is why I chose "Single Instance", thinking that I would get a single instance of my server application for all the clients. But I was wrong: "Single Instance" means that there will be only one COM connection per instance of my server. I should have chosen "Multiple Instance" to allow multiple COM connections (one after the other, not simultaneous) in the same server.
I think the words used in the COM Object Wizard in Delphi are not really clear. Instead of "Multiple Instance" and "Single Instance", it would be better to have "MultiUse" and "SingleUse", as in this article about OLE servers and VB.
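For reference, the wizard's choice simply becomes the instancing argument of the class factory created in the unit's initialization section, so it can also be changed there later. A sketch (TServerApp and Class_ServerApp are placeholder names that would come from your own project):

initialization
  // ciMultiInstance lets one server process accept multiple COM connections;
  // ciSingleInstance launches a separate server process for each connection.
  TAutoObjectFactory.Create(ComServer, TServerApp, Class_ServerApp,
    ciMultiInstance, tmApartment);
end.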
In the client, use
ConnectKind := ckRunningOrNew
and an existing server should be used instead of starting a new one.
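For example, using the server wrapper component that Delphi's type library import generates (TServerApp here is a placeholder for your generated wrapper class):

var
  Server: TServerApp;
begin
  Server := TServerApp.Create(nil);
  Server.ConnectKind := ckRunningOrNew; // attach to a running instance if one exists
  Server.Connect;                       // otherwise a new server process is started
  // ... call automation methods through Server.DefaultInterface ...
end;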
You should be able to increment the reference counter of the automation server when you start the server as a normal application. What you want to achieve is two-fold: Let the server not terminate when the client exits, and also let the server not terminate when you close your main form while the client is still running.
Create the COM object as a singleton. Also, to keep the object running even after the client goes away, add an extra reference count; to do this, call QueryInterface once inside the COM object, as sketched below.
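A minimal sketch of that unbalanced extra reference (TMyServerObject is a placeholder; ObjAddRef is the TComObject method behind IUnknown._AddRef, which is what the extra QueryInterface call would bump):

procedure TMyServerObject.Initialize;
begin
  inherited Initialize;
  // Deliberately unbalanced reference: the count never drops to zero,
  // so the object (and the EXE server with it) outlives its clients.
  ObjAddRef;
end;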
A note about the previous post ('There is an "Instancing" setting in the COM Object Wizard in Delphi.'): at least in C++Builder, this option can simply be changed afterwards in the project settings, under the "ATL" item. This item only appears there for an EXE OLE server after you have added the first automation object to it.
(I have also asked the author of this fine page to mention this in item 18.)
You can also try changing the identity of the user that launches the OLE server (if it is an EXE and not a DLL) by running dcomcnfg, choosing Component Services/Computers/My Computer/DCOM Config, and selecting your server.
You might have to play around with it; I can't remember the differences between all the options, but I think "Interactive User" should do it.
