What is the standard method for a website to communicate with a win32 executable? - delphi

I have some delphi code which, given a list of items, calculates the total price taking into account any special deals that might apply.
This code is non-trivial to rewrite in another language.
How should I set it up to communicate with a website running on the same server? The website will need to ask it for a price every time the user updates their shopping cart. It's possible that there will be multiple concurrent requests.
The Delphi code needs to maintain an in-memory list of special deals, periodically refreshed from a database, so it cannot simply be executed afresh for every request or anything as simple as that.
I don't know what the website is written in, or even which http server it runs under, so I'm just looking for ideas or standard methods.

It sounds like the win32 app is already running as a Windows Service on the box. So, if you can't modify that service, you are going to have to deal with whatever way it wants to accept and respond to requests. This could be through sockets or some higher level communication protocol like web services.
You could do a couple of things. Write an assembly that knows how to communicate with the service and have your web site use that assembly. Or build a shim service that knows how to communicate with the legacy service, but exposes communication over higher-level protocols such as web services. Either way has the benefit of hiding the concurrency, threading, and communication issues behind an easy-to-call interface, but the latter will make communicating with the service easier for everyone going forward.

If you can modify the delphi app to take an XML request and respond with an XML answer over a TCP socket (ideally using the HTTP protocol), you will be able to make it interoperate with most web server frameworks relatively easily. But the exact details of how to make that integration happen will depend on the language/framework it was written in.
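To make that contract concrete, here is a minimal sketch of the XML-over-HTTP exchange, written in Python purely to illustrate the wire format (the real endpoint would be the Delphi app itself; the /price path and the <cart>/<price> XML shapes are invented for the example):

    # Sketch only: Python stands in for the Delphi side to show the
    # request/response contract. Paths and XML shapes are hypothetical.
    import xml.etree.ElementTree as ET
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    # Stand-in for the in-memory special-deals list that the real
    # service would periodically refresh from the database.
    DEALS = {"SKU1": 0.9}  # hypothetical: 10% off SKU1

    class PriceHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            # Expected request body, e.g.:
            # <cart><item sku="SKU1" qty="2" price="5.00"/></cart>
            cart = ET.fromstring(body)
            total = 0.0
            for item in cart.findall("item"):
                line = float(item.get("price")) * int(item.get("qty"))
                total += line * DEALS.get(item.get("sku"), 1.0)
            reply = f'<price total="{total:.2f}"/>'.encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            self.send_header("Content-Length", str(len(reply)))
            self.end_headers()
            self.wfile.write(reply)

    if __name__ == "__main__":
        # ThreadingHTTPServer handles the concurrent requests the
        # question mentions; a Delphi equivalent would use a threaded
        # TCP/HTTP server component.
        ThreadingHTTPServer(("127.0.0.1", 8081), PriceHandler).serve_forever()
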
If the web server is on Windows, you can compile your Delphi app as a DLL that returns XML or HTML, taking parameters as part of the URL or a POST operation. Some details on making a Delphi DLL for web servers are here.

It doesn't matter what web server or OS the existing system is running under. What matters is what you want YOUR code to run under. If it is Windows, then the easiest solution would be to use WebBroker and write a custom ISAPI application, or use SOAP to expose web services. The first method could be used if you wanted to write a REST-like API, for instance; the second if your web application has the ability to consume web services.
Another option, if you are running both on the same box under IIS, is to create a COM/Automation object which you then invoke via server-side scripting (ASP). If the application is an ASP.NET application, then I would use PRISM to port your code into an assembly.

I have done this with a quite complicated workers' compensation calculator. I created a Windows service using the RemObjects SDK. The calculations are exposed as a SOAP method so they can be accessed by nearly anything.
It's not necessary to use RemObjects in the service, but it makes things much easier, as it handles a lot of the underlying plumbing. The clients don't need RemObjects; they just need to be able to call SOAP methods, and nearly any programming language can do that.
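For instance, consuming such a SOAP method from Python might look like the following sketch (the WSDL URL and the CalculatePrice operation name are hypothetical; the equivalent few lines exist in virtually every language):

    # Hypothetical SOAP client call; the WSDL URL and operation name
    # are invented to show how little the client side needs.
    from zeep import Client  # pip install zeep

    client = Client("http://calcserver:8099/calc?wsdl")  # hypothetical WSDL
    result = client.service.CalculatePrice(classCode="8810", payroll=250000.0)
    print(result)
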
You could also create an ISAPI DLL for IIS that exposes a SOAP interface. This would be useful if other websites on different servers needed access to the methods. In my case, however, I have handled this by opening a port in the firewall to access my Windows service.
There are a lot of examples on the web. A couple of places to start reading are About.Com and Dr Bob.

Turn this app into a Windows Service. Write a Web Service that communicates with your Windows Service. You should spend some time designing the Web Service, because it is going to be your consistent interface, shielding the old Delphi app. In the future, whenever you want to write a web app, a mobile app, or whatever else you imagine, you will have one consistent interface: an XML Web Service.

A popular way to integrate a web application with background services is a message broker.
The message flow would be as follows (a code sketch follows the link below):
the web application sends a "calculation request" message to a message destination on the message broker, which contains all needed parameters and also a correlation id to match the calculation request with the response from the Delphi service
one Delphi service (or, in a high-availability / load-balanced environment, several) handles the messages: it pulls the next incoming message, processes it by feeding the parameters to the calculation engine, and sends a "calculation result" message back to the web server
the web server can either synchronously wait for the response (discarding responses which have no matching correlation id) and build the result HTML document, or continue with other tasks and asynchronously receive the calculation result in a separate thread, for example in an Ajax-based web application
For an introduction, see this slideshow about the Dopplr image service:
http://de.slideshare.net/carsonified/dopplr-its-made-of-messages-matt-biddulph-presentation
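As a rough sketch of the correlation-id pattern described above, here is the flow using Redis lists as a stand-in broker (queue names and the message shape are invented; a production setup would more likely use RabbitMQ, MSMQ or similar):

    # Sketch of the correlation-id request/reply pattern over a broker.
    # Redis lists stand in for real message queues; names are made up.
    import json
    import uuid

    import redis  # pip install redis

    r = redis.Redis()

    def request_calculation(params, timeout=5):
        corr_id = str(uuid.uuid4())
        # Web app side: enqueue a calculation request for the Delphi service.
        r.lpush("calc.requests", json.dumps({"id": corr_id, "params": params}))
        # Synchronous variant: block on a per-request reply queue, so a
        # response can never be matched to the wrong caller.
        reply = r.brpop(f"calc.replies.{corr_id}", timeout=timeout)
        if reply is None:
            raise TimeoutError("calculation service did not answer")
        return json.loads(reply[1])

    def serve_one_request():
        # Delphi-service side (shown in Python only for illustration):
        # pull the next request, run the calculation engine, reply.
        _, raw = r.brpop("calc.requests")
        msg = json.loads(raw)
        result = {"id": msg["id"], "total": 42.0}  # calculation engine goes here
        r.lpush(f"calc.replies.{msg['id']}", json.dumps(result))
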

If you can make it a service (rather than a library), you have to do inter-process communication somehow - there are a few ways to do this on Windows:
Sockets directly, which is hardest since you have to do marshalling/auth yourself (see the sketch after this list)
Shared Memory (yuck!)
RPC which works great but isn't trivial
DCOM which is easier but a pain to configure
WCF - but can you call it from your Windows Service written in Delphi?
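To illustrate the first point: with raw sockets, even message boundaries are your problem. A minimal length-prefix framing sketch (in Python for brevity) shows the kind of plumbing you end up writing yourself:

    # What "sockets directly" means in practice: you invent your own
    # framing. A common minimal scheme is a 4-byte length prefix; all
    # of this (and authentication) is yours to design and debug.
    import socket
    import struct

    def send_msg(sock: socket.socket, payload: bytes) -> None:
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_msg(sock: socket.socket) -> bytes:
        header = _recv_exact(sock, 4)
        (length,) = struct.unpack("!I", header)
        return _recv_exact(sock, length)

    def _recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf
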

IMAP Server Facade - how to make one?

I have implemented a custom email server and web client. The server is just a REST API (similar to Google's Gmail API) that uses a 3rd party (SendGrid) for sending and receiving. The emails are stored in a database. The web client just talks to the REST API for sending and receiving.
The problem with this approach is that it doesn't implement IMAP anywhere, which makes it impossible for standard clients (Outlook, iPhone, etc.) to connect to and use our email API. This limits customers to using only our client for email.
What I need is some sort of IMAP Server "facade" that will manage the connections to clients and make calls to my REST API for actually handling the requests (get email, send email, etc.).
How can an IMAP facade be implemented? Is there maybe a way to take an existing mail server and gut it, pointing all its "events" at calls to my API?
tl;dr: write your gateway in Perl; use Net::IMAP::Server; override Net::IMAP::Server::Mailbox; and use one of the many Perl REST clients to talk to your server.
Your best bet for doing this quickly, while maintaining a reasonable amount of code security, is with Perl. You'll need two Perl modules. The first is Net::IMAP::Server, and here is the Github repository for that module. This is a standards-compliant RFC 3501 server that was purposely designed to have a configurable mail store. You will override the default Net::IMAP::Server::Mailbox implementation with your own code that talks to your custom email backend.
For your second module, choose your favorite Perl module(s) to use to speak to your REST server. Your choice depends on how much fine grained control you want to have over the construction and delivery of the REST messages.
Fortunately, here you have tons of choices. One possibility is Eixo::REST, which has a Github repository here. Eixo::REST seems to deal well with asynchronous vs. synchronous REST API calls, but it doesn't provide a lot of control over X509 key management. Depending on how googley your API is, there's also the REST::Google module. Interestingly, this family also has a REST::Google::Apps::EmailSettings module, specifically for setting Gmail-specific funkiness like labels and languages. Lastly, the REST::Consumer module seems to encapsulate a lot of https-specific things like timeout and authentication as parameters to Perl object instantiation.
If you use these existing frameworks, then about 90% of the necessary code should already be done for you.
Don't do this by hacking Dovecot or any other mail server written in C or C++. If you hack together a mail server quickly using a compiled language, your server will sooner or later experience all the joy of buffer overflows and stack smashing and everything else that the Internet does to fuck over mail servers. Get it working safely first, then optimize later.
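Whatever language you end up choosing, the heart of the facade is the same: a loop that parses tagged IMAP commands and answers them by calling your REST API. A compressed Python sketch of that loop follows (the API base URL and /login endpoint are hypothetical, and a real server must implement far more of RFC 3501, including quoting and literals):

    # Skeleton of the facade's command loop: parse tagged IMAP
    # commands, answer them from the REST API. Endpoints are
    # hypothetical; this covers only a sliver of RFC 3501.
    import socketserver

    import requests  # pip install requests

    API = "https://mail.example.com/api"  # hypothetical REST backend

    class ImapFacade(socketserver.StreamRequestHandler):
        def handle(self):
            self.wfile.write(b"* OK IMAP facade ready\r\n")
            while True:
                line = self.rfile.readline()
                if not line:
                    return
                tag, _, rest = line.decode().strip().partition(" ")
                cmd, _, args = rest.partition(" ")
                cmd = cmd.upper()
                if cmd == "CAPABILITY":
                    self.wfile.write(b"* CAPABILITY IMAP4rev1\r\n")
                    self.reply(tag, "OK CAPABILITY completed")
                elif cmd == "LOGIN":
                    # Real IMAP allows quoted strings and literals here;
                    # the sketch assumes bare user/password tokens.
                    user, _, password = args.partition(" ")
                    resp = requests.post(f"{API}/login",
                                         json={"user": user, "password": password})
                    self.reply(tag, "OK LOGIN completed" if resp.ok
                               else "NO LOGIN failed")
                elif cmd == "LOGOUT":
                    self.wfile.write(b"* BYE\r\n")
                    self.reply(tag, "OK LOGOUT completed")
                    return
                else:
                    self.reply(tag, "BAD command not implemented in sketch")

        def reply(self, tag, text):
            self.wfile.write(f"{tag} {text}\r\n".encode())

    if __name__ == "__main__":
        socketserver.ThreadingTCPServer(("0.0.0.0", 1430), ImapFacade).serve_forever()
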
(This is basically my comment again, but elaborated quite a bit more.)
Some IMAP servers, most notably Dovecot, are structured such that the file access is in a separate module with a defined interface. Dovecot isn't the only one, but it's by far the most popular and its backend interface is known to be appropriate, so I'd take that absent specific concerns.
There already exist non-file modules such as imapc, which proves that it can be done. When a client opens a mailbox backed by imapc, Dovecot parses IMAP commands, calls message access functions in imapc, imapc issues new IMAP commands, parses the server responses, returns C structs to Dovecot, Dovecot fashions new IMAP responses and returns them to the client.
I suggest that you take the dovecot source, look at src/lib-storage/inbox/index/imapc and the other backends in that directory, and implement one that speaks your REST API as a client.
Since you're familiar with .NET, I would suggest hacking either of the following implementations of IMAPv4 servers to your liking:
Lumisoft Mail Server - a very old project indeed (let's call it "mature", huh?). Don't be too turned off by the decade-old website and the lack of a github link - the source is provided under "other downloads".
McNNTP - also an older project and with a major focus on NNTP (as the name says) but very close to what you're trying to achieve in terms of the IMAP component. Take a look, you'll probably find this a good starting point.

Best choice for robust self hosting server: WCF vs. ASP.NET Web Api

We currently have a .NET 4 application that consists of a Windows Service running in the background and local or remote clients (normally only 1-3).
The clients have a WPF GUI and need some data from the windows service. Therefore, we use WCF with NamedPipe binding for a local client and NetTcp binding for remote clients. This works, but we often have problems with endpoints that are not reachable (channel faulted or not found etc.). We already try to rebuild faulted connections but it seems to be pretty fragile...
Now enter Web API: it looks like an HTTP-based stack might be more robust (no channels, no endpoints, can be self-hosted in a Windows service as well). There seem to be no problems with broken channels, because each request is handled individually. So if something fails, you just repeat the request. (And we have experience with ASP.NET MVC from other apps, so this is not new to us.)
Now we are wondering what our best bet might be. Is it better to "harden" our existing WCF service (one service interface with about 15 operations) or to move the interface to Web API and run it as HTTP requests (with JSON data)? Performance is not our main issue here...
Any ideas?
Hartmut
I recommend you stick with WCF (SOAP) services for your WPF application rather than moving to the Web API. There are a number of reasons for this. First, consider what the new Web API is trying to address - namely, to provide a framework for supporting RESTful/HTTP/hypermedia services. This is likely to be a good fit for applications that make heavy use of HTTP, such as web, mobile and JavaScript applications, where you want to maximise the "reach" or interoperability of your services (irrespective of platform). This is not to say that you can't use it for WPF clients, but in your case, where all traffic is local to your domain, it makes more sense to stick with your current implementation.
The binding choices you have made for your services/clients sound fine to me. I would focus on why your channels are faulting and address those issues. You may also want to consider hosting your services in IIS and using WAS to expose your non-HTTP endpoints. I have had much success with this in the past, and for the most part it has been pretty stable. It also takes away a few of the headaches of managing your own host. If you are concerned about the TCP binding faults, then just create a new HTTP or wsHTTP endpoint and use that instead. This will give you exactly the same transport the Web API uses, without having to change your programming model.

EHR intercommunication / client

So, I'm researching methods for building a client interface for existing EMRs. I've read tons of info on HL7, as well as the various coding schemes, but I'm still really clueless.
For anyone who's worked with an EMR before: is it possible to build a web interface that can use HTTP POST and HTTP GET requests to pull/push data to the server database? Or would you have a separate database for the client, say a web application, then use some interface engine like Mirth to communicate between the EMR database and the web application?
A web service API is definitely a way to go. One benefit of this is that you get HTTPS almost out of the box for encryption of data in transit.
The way we have configured our EMR is that we have a TCP server, accepting incoming HL7 messages from certain IPs, which connects directly to our EMR database. This is beneficial because it separates the EMR and interface processes (you don't have to restart your whole EMR if the interface goes down, for instance).
Another good feature would be to have a token system for pseudo-authentication. This only works if you are going over a secure connection though.
If you aren't into writing your own TCP server (not that hard), an API-based server is probably just as good.
EDIT: What language(s) do you think you'll be using?
Other things that you might run into:
Some applications prefer file drops to direct calls (either URL or TCP)
Some vendors will have their own software that sits on a server of yours
Don't forget the ACK.
I don't see why you couldn't do this. You would need to build the web service to handle requests with a specific URI. When this URI is called, the web service uses the data sent with the request to make changes in the database.
Once you had the web service built, you could build some sort of front-end that displays your information to the user and makes HTTP GET and HTTP POST calls.
There is a lot of flexibility in what you are trying to do, so go in with a plan for sure.
In general, though, you should be able to accomplish what you need by building your own web service and a front-end application that is able to manipulate an EMR database.
It really depends on your architecture and requirements.
Architecture 1
If you want your client to be web based, but your client is a separate app from your backend, then the web app sends the info over HTTP to your client app's server side, and that in turn sends info to your EHR backend (another app). That second communication might use a standard, which will help you integrate more systems with your backend in the future. So that interface can be HL7-based; if HL7 v2.x is used, take a look at the MLLP protocol: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=55
This is the most performant way of communicating HL7 data. If you don't want to deal with TCP, there is a proposal for HL7 v2.x over HTTP. HAPI implemented that: http://hl7api.sourceforge.net/hapi-hl7overhttp/
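If you do stay with MLLP over TCP, the framing is simple enough to show in a few lines. A minimal sender sketch in Python (the host, port and sample message are invented; the receiver is expected to answer with an HL7 ACK, which the sketch reads back):

    # Minimal MLLP client sketch: an HL7 v2.x message is framed as
    # <VT>(0x0b) message <FS>(0x1c) <CR>(0x0d) on the wire.
    import socket

    VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"

    def send_hl7(host: str, port: int, message: str) -> bytes:
        with socket.create_connection((host, port)) as sock:
            sock.sendall(VT + message.encode() + FS + CR)
            # Read the ACK back until the end-of-block marker arrives.
            buf = b""
            while not buf.endswith(FS + CR):
                chunk = sock.recv(4096)
                if not chunk:
                    break
                buf += chunk
            return buf.strip(VT + FS + CR)

    msg = "MSH|^~\\&|WEBAPP|CLINIC|EMR|CLINIC|20240101120000||ADT^A01|MSG00001|P|2.3"
    # ack = send_hl7("emr.internal", 2575, msg)  # hypothetical endpoint
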
If you don't want to use HL7 v2.x but HL7 v3 (a different standard, not really a version of 2.x) or CDA, you can use HTTP or SOAP.
Architecture 2
But if you want your client just to be a UI on the user side (browser), HTTP POST will suffice to send info from the browser to the server. That means your EHR is a centralized EHR with a web UI.
In the first architectural case, you'll probably have multiple client apps (full EMR apps) and a backend EHR server (centralized backend). In my own development I follow this architecture.
Mirth might also help there, to manage all the communication between client apps and backend apps. In the second case, using Mirth makes no sense: it is just a web application, and the client communicates directly with the web server. Of course, you can use Mirth as a web server, but that's not its role; it is an ESB, not a web server.
Hope that helps!

Communicating between Node.Js and ASP.NET MVC Application

I have an existing complex website built using ASP.NET MVC, including a database backend, data layer, as well as the Web UI layer. Rebuilding this website in another language is not a feasible option.
There are some UI elements on some views (client side) which would benefit from live interactivity, involving both push and pull, so rather than implement some kind of custom long polling or websocket server in asp.net, I am looking to leverage node.js for Windows, and Socket.io.
My problem is that I need two way communication between both applications. Each user should only be able to receive data once they are authorised on the ASP.NET website, so I first need communication for this. Secondly, once certain events occur on the ASP.NET website I want to immediately push this data to the Node server, to be broadcast to specific users or groups of users. Thirdly, I would like any data sent to the node.js server to be pushed to the ASP.NET website for processing, as this is where all our business logic lies. The sole reason for adding Node.js is to have the possibility to push data directly to the client, I do not want to build any business logic into it (or as little as possible).
I would like to know what the fastest method of two-way push communication is between Node.js and ASP.NET. The only good option I'm aware of so far is to create a special listener on a specific port on the node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method? I also know that you could use a database in between, but surely this would need to be polled and would be less efficient? Both applications will be running on the same machine under a Visual Studio project.
Many thanks for any help you can provide.
I'm not an ASP.NET expert, but I think there are multiple ways you can achieve this:
1) As you said, you could make Node listen on a specific port for data and then react based on the data received (TCP)
2) You can make POST requests to Node.js (HTTP) and also send an auth key in the process to be extra secure. As in 1), Node would react to the data you send.
3) Use something like Redis for pub-sub: send messages from ASP.NET (pub) and get them on the Node.js side (sub). This is even better if you want to scale your app across multiple machines etc. (a sketch follows below)
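Option 3 looks roughly like this, sketched in Python only to keep it short (the ASP.NET side would use a .NET Redis client such as StackExchange.Redis and the Node side something like the node-redis client; the channel name and payload are invented):

    # Sketch of Redis pub/sub between the two apps; shown in Python
    # for brevity, channel name and payload are hypothetical.
    import json

    import redis  # pip install redis

    r = redis.Redis()

    # ASP.NET side: publish an event for Node to push to browsers.
    r.publish("to-node", json.dumps({"user": 42, "event": "order-shipped"}))

    # Node side (pseudo-equivalent): subscribe and react to messages.
    p = r.pubsub()
    p.subscribe("to-node")
    for message in p.listen():
        if message["type"] == "message":
            payload = json.loads(message["data"])
            # ...broadcast to the right socket.io room here...
            break
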
"The only good option I'm aware of so far is to create a special listener on a specific port on the node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method?"
You can try to look at the Redis pub/sub model, where the ASP.NET MVC application and node.js communicate through separate channels in order to achieve full-duplex communication. Or you can try CouchDB change notifications.
"I also know that you could use a database in between but surely this would need to be polled and would be less efficient?"
The techniques above do not require you to poll for changes; instead, they notify you when a change happens or a channel message arrives.

Call a method/ handle event in Windows Service from hosted WCF Service

Apologies in advance if this question is dumb or previously covered. I have researched far and wide but have not found any resources on WCF/Windows Services that cover this question.
I have a managed Windows Service which is working nicely. Every n (>5) seconds it checks on the status (e.g. memory consumption) of some processes and other Windows services and also does some database logging and raises events where necessary.
I intend to make an ASP.NET website that would allow users to query the status of the processes that the Windows Service is monitoring. Having researched the options, it looks like the up-to-date method would be to use a WCF service, hosted in the Windows Service, to act as intermediary between the ASP.NET website and the Windows Service. That way, a user could request through the browser a snapshot of the current status of whatever set of processes the Windows Service was monitoring, and have this request and subsequent response relayed through the WCF service (using named pipes, I think).
So, my difficulty is that there is a set of methods and events in the Windows Service for which a single root object exists (let's say MonitorObject). I don't see how the ServiceHost can be instantiated with a reference to MonitorObject so that the WCF service can call the methods in the Windows Service. I am thinking that perhaps I need to make the monitor object a shared (I am VB'ing) member of the Windows Service class (the one that contains OnStart and OnStop) and make all the events shared, so that the WCF service can just access WindowsService.SharedMonitorObject without needing to be passed the object...
However, I am lost in the subject and am seeking any advice on how best to proceed.
Thanks in advance.
I think you're going down the right track. I wouldn't necessarily make the entire MonitorObject shared, but you might put a shared method in that object that will return the single root object to the caller.
There is a design pattern called the Singleton Pattern that will help you with this. Jon Skeet has written an excellent article on some of the things to be aware of when using this pattern in .NET. His article uses C# for the examples, but here's a SO question referencing this pattern using VB.
While it's unclear from your description, my guess is that your Windows Service is essentially single-threaded right now. Just keep in mind that once you add the WCF service, you'll need to make the methods that it references thread-safe.
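For illustration only, here is the shape of that shared-root-object pattern sketched in Python rather than VB (the MonitorObject name is reused from the question; for .NET, follow the idioms in Jon Skeet's article):

    # Thread-safe singleton sketch: one shared MonitorObject, created
    # once, handed to every caller, with a lock guarding the state
    # that both the service loop and the WCF threads touch.
    import threading

    class MonitorObject:
        _instance = None
        _instance_lock = threading.Lock()

        @classmethod
        def instance(cls) -> "MonitorObject":
            # Double-checked locking; .NET versions often prefer a
            # static initializer instead (see the linked article).
            if cls._instance is None:
                with cls._instance_lock:
                    if cls._instance is None:
                        cls._instance = cls()
            return cls._instance

        def __init__(self):
            self._lock = threading.Lock()
            self._status = {}

        def update_status(self, name, info):   # called by the service loop
            with self._lock:
                self._status[name] = info

        def snapshot(self) -> dict:            # called by the WCF service
            with self._lock:
                return dict(self._status)
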
