How to access the only instance of my TRemoteDataModule in my application server - delphi

How do I access my Remote Data Module (RDM) instance from another unit at runtime? (The RDM is single instance.) When I create a normal Data Module descendant, Delphi creates a variable for it in the same unit (e.g. MyDM: TMyDM), but when I create an RDM descendant there is no such variable.
I'm trying to set the provider of a TClientDataSet created at runtime in another unit to a TDataSetProvider in my RDM, but I can't find a reference to my RDM's instance.
I also tried to do it at design time, but while I have no problem setting the connection property of a TSQLQuery from the same unit to a TSQLConnection in that RDM, I wasn't able to set the TClientDataSet's provider, because no providers from the RDM appear in the TClientDataSet's provider list.

First you need to set the RemoteServer property of your client dataset: assign it an instance of the TLocalConnection component (which should be placed on your remote data module, since you are not using it remotely). The remote data module's unit has to be in the uses clause of the unit with the client dataset, of course.
Then you can assign the ProviderName property of your client dataset.
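A minimal sketch of that wiring at runtime, assuming the remote data module unit is called uServerModule, that it declares LocalConnection1: TLocalConnection and DataSetProvider1: TDataSetProvider, and that ServerModule is whatever reference you hold to the module instance (all of these names are placeholders):
uses
  uServerModule; // hypothetical unit declaring the remote data module

procedure TClientForm.WireClientDataSet;
begin
  // point the client dataset at the TLocalConnection sitting on the data module,
  // then pick the provider by name
  ClientDataSet1.RemoteServer := ServerModule.LocalConnection1;
  ClientDataSet1.ProviderName := 'DataSetProvider1';
  ClientDataSet1.Open;
end;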

I did some study on TRemoteDataModule and learned that it is dedicated to supporting COM application servers.
The fact that you don't have a variable for your RDM is because you are not supposed to access it like a regular DM. The application server will instantiate the RDM in response to a remote call, just like any COM application. It will be destroyed when no more references to that RDM exist.
Since the life-cycle of that object depends on the client, not the server, holding a reference to it in the server is highly dangerous: you never know when it is valid and when it is not. Besides, more than one instance may exist, one for each client that is accessing that object at a given moment.
Considering that, I believe it is very reasonable to tell you that it is impossible to access the RDM after it is created in order to perform the connection you intend to make.
If you really need to put the TDataSetProvider in a different unit, then my best suggestion is to make the RDM look for that provider in some kind of provider pool service. Doing it like this will enable you to find the provider you need every time a new RDM is instantiated, and only when it is instantiated.
In your place I would add a handler to the OnCreate event of the RDM and in that handler I would call a method like TProviderPool.GetProvider. That method would give me a provider and I would assign its name to the ProviderName property of the CDS.
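A rough sketch of that approach; TProviderPool and its GetProvider method are hypothetical (some registry object that knows about the providers created in other units), and MyClientDataSet stands in for however you reach the client dataset created elsewhere:
procedure TMyRDM.RemoteDataModuleCreate(Sender: TObject);
var
  Provider: TDataSetProvider;
begin
  // look the provider up in the (hypothetical) pool every time a new RDM instance is created
  Provider := TProviderPool.Instance.GetProvider('Customers');
  // hand its name to the client dataset, as described above
  MyClientDataSet.ProviderName := Provider.Name;
end;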

Related

Is it possible to have an Orleans host without a client?

I have a grain that sets up a reminder in its OnActivate method.
The reminder then periodically does some action and no further communication is needed from outside the silo.
Is it possible to get a GrainProvider during host start and activate the grain within the Host itself?
Or do I need a client to initiate the first activation?
You can call the grain from Application Bootstrapping within a Silo, which is invoked on silo startup. Calling the grain gets its OnActivate called. Some more documentation is at
Bootstrap provider.
It would be possible to insert the data straight into the persistent storage (via some side channel) too, but that's somewhat of an unsupported scenario (at the moment).

How to peek at a message while dependencies are being built?

I am building multitenancy into the unit of work for a set of services. I want to keep the tenancy question out of the way of day-to-day business domain work, and I do not want to touch every existing consumer in the system (I am retrofitting multitenancy onto a system without any prior concept of a tenant).
Most messages in the system will be contexted by a tenant. However, there will be some infrastructure messages which will not be, particularly those used to automate tenant creation. I need a way of determining whether to use a tenant-contexted unit of work or an infrastructure unit of work with no tenant context, because the way I interact with the database differs depending on whether I have tenant context. The unit of work is built in the process of spinning up the dependencies of the consumer.
As such, I need a way of peeking at the message or its metadata before consuming it, and specifically I need to be able to peek at it during the dependency building. I intended to have a tag interface to mark tenant management messages out from normal business domain messages, but any form of identifying the difference could work. If I am in a unit of work resulting from an HTTP request, I can look at WebApi's HttpContext.Current and see the headers of the current request, etc. How do I do something analogous to this if I am in a unit of work resulting from messaging?
I see there is a way to intercept messages with BeforeConsumingMessage() but I need a way of correlating it to the current unit of work I am spinning up and I'm not seeing how that would work for me. Pseudocode for what I am trying to do:
if MessageContext.Message is ITenantInfrastructureMessage:
    database = new Database(...)
else:
    tenantId = MessageContext.Headers.TenantId
    database = new TenantDatabase(..., tenantId)
I am working in C#/.NET using MassTransit with RabbitMQ and Autofac with MassTransit's built-in support for both.
Your best option is to override at the IConsumerFactory<T> extension point, extract the tenant from the message (either via a message header or some message property), and register that in the container's child lifetime scope so that subsequent resolutions for the actual consumer class (and its dependencies) are properly matched to the tenant in the message.
In our systems, we have a TenantContext that is registered in a newly created LifetimeScope (we're using Autofac), after which we resolve the consumer from the child scope; the dependencies that use the tenant context get the proper value, since it's registered as part of building the child container for the message scope.
It works extremely well; we even built extension methods to make it easy for developers registering consumers to specify "tenant context providers" that map a message type to the proper tenant id, which is used to build the TenantContext.
You can do similar things with activity factories in Courier routing slips (which are a specialization of a consumer).

Runtime object reference

If CORBA doesn't know about an object at compile time, how does CORBA identify an object passed to it at runtime?
How does it access that object at runtime?
CORBA uses object references. For inter-ORB communication (the ORB is the middleware framework code running on your machine), Interoperable Object References (IORs) are used. These are string-based and contain the host, port, policies, and other details.
You need an object's reference to interact with it the CORBA way (location-transparent, remote). This reference is then "narrowed", i.e. the middleware connects to the remote site. After that, every call to the object is a remote call, but you won't notice that in the application, as you can handle the object as if it were local.

Nested type provider

I have one type provider that connects to the network to retrieve data.
And it produces (the fiction we call) a 'static type' through the type provider mechanism.
Of course, I might not always be connected. I might be raging in a private jet with the satellite connection down.
Has anyone experience building an "offline type provider" which takes (somehow) a type (from a type provider) as input, stores its definition on disk, and later provides you with said type definition for easy access while on your way to Koh Phangan?
Since types are not allowed as parameters to a TP, I was thinking of providing an assembly name + type name to be offlined.
You can enhance your original type provider to work in both online and offline modes. That is, the provider tries to connect to the data source and fetch the schema; if successful, the schema is cached on disk (in some format that the provider can understand). After that, the provider exposes types using the schema information on disk. If for some reason the connection to the data source is not available, the provider checks whether a cached schema exists and, if so, uses it. For example, the standard type providers (LINQ to SQL or EF) allow you to specify a schema file that can be used if a direct connection to the database is not possible.
This is a tricky aspect of writing F# type providers. But I think the main problem is that when you're developing in a private jet and you're using type providers to access some external data source, then you won't be able to access the data.
Schema caching - If the type provider supports some form of schema caching (i.e. by storing the schema in an XML file, like the LINQ to SQL provider mentioned by @desco), then you'll be able to write some code and compile it, but you still won't be able to test the code. I think this makes schema caching less useful for the private-jet scenario. However, it is useful in a scenario where you build code on a build server that does not have access to the schema.
Local data - For the private-jet scenario, you probably need some sort of local data (or a subset) to actually be able to test the code you write, and then you can often point the type provider at your local copy (database, CSV or XML file, etc.).
Meta-provider - I think the idea of having a meta-provider is pretty cool - it should work to some extent - you would be able to cache the schema, but you probably wouldn't be able to cache the data (perhaps the values of properties, but I guess methods would not work). I think it should be possible to just pass the name of the provider to mock as an argument to your meta-provider. Something like:
type CachedDB =
    SchemaCachingProvider<"FSharp.Data.TypeProviders.dll", "SqlDataConnection", "..">
I'm not aware of any plans for doing something like this, but if you started, I'm sure the FSharpX people would be interested in looking at it :-).

How to reuse a (Delphi) OLE server with a second client?

I wrote an OLE automation server (using Delphi). I usually start the OLE server manually and use it as a normal application. From time to time I start a client, which automatically connects to the existing OLE server.
When I terminate the client, the server does not terminate (at least when it was started manually before the client) but it won't accept any other OLE connection. Starting another client will trigger a new server instead of reusing the first one.
How can I reuse the same server with the second client?
(Question edited to reformulate it correctly. In the original version I was asking how to prevent the server from terminating, which wasn't a good formulation)
There is an "Instancing" setting in the COM Object Wizard in Delphi. The allowed values are "Internal", "Single Instance", and "Multiple Instance".
I wanted to reuse the same COM server with multiple clients. That is why I chose "Single Instance" and thought that I would have a single instance of my server application for all the clients. But I was wrong. "Single Instance" means that there will be only one instance of a COM connection in my server. I should have chosen "Multiple Instance" to allow multiple COM connections (but one after the other, not simultaneously) in the same server.
I think that the words used in the COM Object Wizard in Delphi are not really clear. Instead of "Multiple Instance" and "Single Instance", it would be better to have "multi-use" and "single use", as in this article about OLE servers and VB.
In the client, use
ConnectKind := ckRunningOrNew
and an existing server should be used instead of starting a new one.
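For example, if the client talks to the server through a wrapper component imported from the server's type library (TMyAutoServer and MyAutoServer are placeholder names):
// attach to a running server if there is one, otherwise start a new instance
MyAutoServer := TMyAutoServer.Create(Self);
MyAutoServer.ConnectKind := ckRunningOrNew;
MyAutoServer.Connect;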
You should be able to increment the reference counter of the automation server when you start the server as a normal application. What you want to achieve is twofold: the server should not terminate when the client exits, and it should also not terminate when you close your main form while a client is still running.
Create the COM object as a singleton. Also, to keep the object running even after the client goes away, add an extra reference count. To do this, call QueryInterface (QI) once inside the COM object.
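A sketch of that extra-reference idea, assuming FSelfReference: IUnknown is a field you add to your automation object and AllowShutdown is a hypothetical method of your own; releasing the field is what eventually lets the server shut down:
procedure TMyAutoObject.Initialize;
begin
  inherited;
  // asking ourselves for an interface (a QI under the hood) bumps the COM reference count,
  // so the object - and the server - stay alive after the last client disconnects
  FSelfReference := Self as IUnknown;
end;

procedure TMyAutoObject.AllowShutdown;
begin
  // drop the extra reference when the server should be allowed to terminate
  FSelfReference := nil;
end;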
A note about the previous post ('There is a setting "Instancing" in the COM Object Wizard in Delphi.'): at least in C++Builder, this option can simply be changed afterwards in the project settings, under the "ATL" item. This item only appears there for an EXE OLE server after you have added the first automation object to it.
(I have also asked the author of This fine page to mention this in item 18.)
You can also try changing the identity of the user that launches the OLE server, if it is an EXE and not a DLL, by running dcomcnfg and choosing Component Services/Computers/My Computer/DCOM Config and selecting your server.
You might have to play around with it; I can't remember the differences between all the identity options, but I think "Interactive User" should do it.
