Integration between two systems using TIBCO

I have two different applications that need to share data between them. By data, I mean only certain pieces of data, not the entire data set of either system.
The databases of the applications are owned by us, but the applications themselves are third party.
The two applications currently use the database as the integration mechanism: there is a transaction table into which the data to be integrated is written, and a third-party application then picks the data up from there.
We are evaluating how to use TIBCO to perform the integration instead (trying to move away from the third-party integration pieces), such that TIBCO picks the data up from the transaction table, applies any business logic, and syncs up the other systems as the result of the integration.
In the past, I have written a Windows service that polls a similar transaction table and writes the data to be synced/integrated to a TIBCO queue, and had a Windows service read from the queue, apply the business logic, and perform the integration or data sync in the other system's database.
I used the TIBCO EMS module to work with the message queues, but if there are other options within or outside of TIBCO, please provide some pointers.
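For illustration, the core of that service looked roughly like the following (a from-memory C# sketch, not the real code: the server, table, and queue names are made up, and the TIBCO.EMS .NET calls should be checked against your EMS client version's documentation):

    using System;
    using System.Data.SqlClient;
    using System.Threading;
    using TIBCO.EMS; // EMS .NET client (TIBCO.EMS.dll)

    class TransactionTablePoller
    {
        static void Main()
        {
            // All names below are hypothetical.
            var factory = new ConnectionFactory("tcp://emshost:7222");
            Connection connection = factory.CreateConnection("emsuser", "emspass");
            Session session = connection.CreateSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.CreateProducer(session.CreateQueue("integration.outbound"));

            while (true) // the service's polling loop
            {
                using (var db = new SqlConnection("Server=.;Database=AppA;Integrated Security=true"))
                {
                    db.Open();
                    // Read unprocessed rows from the transaction table...
                    var cmd = new SqlCommand(
                        "SELECT TOP 100 Id, Payload FROM IntegrationTx WHERE Processed = 0", db);
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // ...and push each one onto the EMS queue.
                            TextMessage msg = session.CreateTextMessage(reader.GetString(1));
                            msg.SetIntProperty("txId", reader.GetInt32(0));
                            producer.Send(msg);
                            // (Mark the row Processed = 1 here, ideally in the same DB transaction.)
                        }
                    }
                }
                Thread.Sleep(5000); // poll interval
            }
        }
    }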
We are particular about TIBCO because the organization is moving in the direction of making TIBCO the standard means of integration between systems.
TIA

TIBCO BusinessWorks is the right product for you here: I believe it can connect to both the database and EMS. It allows you to integrate systems, define your process, and orchestrate whatever logic you need.
It has a palette to connect to the database, fetch the required data with a query, and hand that data on for further processing.
You can find more details in the documentation.

Related

Impact of Fiori Apps on Server

We are following the embedded architecture for our S/4HANA 1610 system.
Please let me know what the impact on the server will be if we implement 200+ standard Fiori apps in our system.
Regards,
Sayed
When you say “server”, are you referring to the ABAP backend, consisting of one or more SAP application servers and usually one database server?
In this case, you can get a first impression using transaction ST03.
Here, you get a detailed analysis of resource consumption on the SAP application server.
You also get information about database access times, as seen from the application server.
This can give you a good hint about resource consumption on the database server.
Usually, the ABAP backend is accessed from Fiori via OData calls.
Not every user interaction causes an OData call; some interactions are handled locally on the frontend.
In general, implemented apps only require some space on the hard disk, as long as nobody is using them.
So the important questions for defining the expected workload are:
How many users are working with these apps, and at what frequency (avg. think time)?
How many OData calls are sent from these apps to the backend, and how many dialog steps are handled by the frontend itself?
How expensive are these OData calls (see ST03)?
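To make this concrete with entirely made-up numbers: 500 users doing 20 dialog steps per hour, of which half trigger an OData call, generate 500 × 20 × 0.5 = 5,000 OData calls per hour; at an average of 300 ms server processing time per call (take your real numbers from ST03), that is roughly 1,500 seconds, i.e. 25 minutes of backend processing time per hour, to be spread across your application servers.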
Every app reflects one or more typical business processes, which need to be defined.
Your specific Customizing also plays an important role, because it controls different internal functionality.
It's also mandatory to optimize database access, because in productive use tables grow over time, which can slow down database access.
Usually, this kind of sizing is done by SAP Hardware and Technology partners.

How do I call Web API from MVC without latency?

I'm thinking about moving my DAL, which uses DocumentDB and Azure Table Storage, to a separate Web API and hosting it as a cloud service on Azure.
The primary purpose of doing this is to keep a high-performance DAL that can scale up easily and independently of my front-end application - currently ASP.NET MVC 5 running as a cloud service on Azure, though I'll definitely add mobile apps as well. With DocumentDB and Azure Table Storage, I find myself doing a lot of data handling in my C# code, so I think it would be a good idea to keep that separate from my front-end application.
However, I'm very concerned about latency issues introduced by HTTP calls from one cloud service to another which would defeat the purpose of separating DAL into its own application/cloud service.
What is the best way to separate my DAL from my front-end application without introducing any latency issues?
I think the trade-off between scaling out / partitioning resources and network latency is unavoidable. That being said, you may find the trade-off well worth it for many reasons (e.g. enabling parallel execution of application tasks, increased reliability) when working w/ large-scale systems.
Here are some general tips to help you minimize the hit on network latency:
Use caching to avoid cross-service calls whenever possible.
Batch cross-service calls and re-use connections whenever possible to minimize the cost associated w/ traversing the NAT out of one cloud service and through the load balancer into another. Note - your application must also be able to handle dropped connections (inevitable in cloud architecture).
Monitor performance metrics as much as possible to take measurements and identify bottlenecks.
Co-locate your application layers within the same datacenter to keep cross-service latency to a minimum.
You may also find the following literature useful: http://azure.microsoft.com/en-us/documentation/articles/best-practices-performance/
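To illustrate the first two tips in code (a minimal sketch; the DAL endpoint, route, and cache TTL are hypothetical):

    using System;
    using System.Net.Http;
    using System.Runtime.Caching;
    using System.Threading.Tasks;

    static class DalClient
    {
        // One shared HttpClient: connections to the DAL service are pooled and
        // re-used instead of paying connection setup on every call.
        static readonly HttpClient Http = new HttpClient
        {
            BaseAddress = new Uri("http://my-dal-service/") // hypothetical endpoint
        };

        static readonly MemoryCache Cache = MemoryCache.Default;

        // Batch several ids into a single round trip, and cache the result
        // briefly so repeated requests skip the network entirely.
        public static async Task<string> GetCustomersJsonAsync(int[] ids)
        {
            string key = "customers:" + string.Join(",", ids);
            var cached = Cache.Get(key) as string;
            if (cached != null)
                return cached; // cache hit: no cross-service call at all

            string json = await Http.GetStringAsync("api/customers?ids=" + string.Join(",", ids));
            Cache.Set(key, json, DateTimeOffset.UtcNow.AddSeconds(30)); // TTL: tune per data volatility
            return json;
        }
    }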
I recently split out my DAL to a WebAPI that serves data from DocumentDB for both the MVC website and mobile applications for the same reasons stated by the questioner.
The statements from aliuy are valid performance considerations generally accepted as good practice.
But more specifically - in order to call Web API from MVC without latency using Azure cloud services, one should specify the same affinity group for each resource (websites, cloud services, etc.).
Affinity groups are a way you can group your cloud services by proximity to each other in the Azure datacenter in order to achieve optimal performance. When you create an affinity group, it lets Azure know to keep all of the services that belong to your affinity group as physically close to each other as possible.
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-migrate-to-regional-vnet/

How to fire events in a Delphi application from another Delphi application?

Please read before tagging as duplicate.
I'm creating a set of applications which rely on smart cards for authentication. Up to now, each application has controlled the smart card reader individually. In a few weeks, some of my customers will be using more than one application at the same time, so I thought it might be more practical to create a service application which controls the authentication process. I'd like my desktop applications to tell the service application that they are interested in the authentication process, and the service application would then provide them with information about the current user. This part is easy, using named pipes.
The hard part is: how can the service tell the desktop applications that an event has occurred (UserLogIn, UserLogOut, PermissionsChanged, ... to name a few)? So far I have two methods in mind: callback functions, and messages. Does anyone have a better idea? I'm sure someone has.
You want to do IPC (Inter-Process Communication) with Delphi.
There are many links that can help you, Cromis IPC is just one to give you an idea what you are after.
A similar SO question to yours is here.
If you want to go pure Windows API, then take a look at how OutputDebugString communication is implemented.
Several tools can listen to the mechanism and many apps can send information to it.
Search for DBWIN_DATA_READY and DbWin32 for more information on how the protocol for OutputDebugString works.
This and this are good reading.
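To make the DbWin32 protocol concrete, here is a minimal listener sketch. It is shown in C# for brevity; the named events and shared section map one-to-one onto Delphi's CreateEvent / CreateFileMapping calls:

    using System;
    using System.IO.MemoryMappedFiles;
    using System.Text;
    using System.Threading;

    // Minimal OutputDebugString (DbWin32) listener: a 4 KB shared section named
    // "DBWIN_BUFFER" (first 4 bytes = sender PID, rest = null-terminated ANSI message),
    // guarded by the DBWIN_BUFFER_READY / DBWIN_DATA_READY event pair.
    class DbWinListener
    {
        static void Main()
        {
            using (var bufferReady = new EventWaitHandle(false, EventResetMode.AutoReset, "DBWIN_BUFFER_READY"))
            using (var dataReady = new EventWaitHandle(false, EventResetMode.AutoReset, "DBWIN_DATA_READY"))
            using (var section = MemoryMappedFile.CreateOrOpen("DBWIN_BUFFER", 4096))
            using (var view = section.CreateViewAccessor(0, 4096))
            {
                var raw = new byte[4096 - 4];
                while (true)
                {
                    bufferReady.Set();   // signal writers that the buffer is free
                    dataReady.WaitOne(); // block until some process calls OutputDebugString

                    int pid = view.ReadInt32(0);
                    view.ReadArray(4, raw, 0, raw.Length);
                    string msg = Encoding.Default.GetString(raw);
                    int nul = msg.IndexOf('\0');
                    if (nul >= 0) msg = msg.Substring(0, nul);

                    Console.WriteLine("[{0}] {1}", pid, msg);
                }
            }
        }
    }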
When it gets into IPC, some tips:
Do not be tied to one protocol: for instance, if you implement named pipe communication, you may later need to run it over a network, or even over HTTP;
Do not reinvent the wheel, nor use proprietary messages, but standard formats (like XML / JSON / BSON);
Callback events are somewhat difficult to implement, since the common pattern requires implementing a server in each desktop client in order to receive notifications from the main server.
My recommendation is not to use callbacks, but polling on a stateless architecture from the desktop applications. You open a communication channel with the server, then every second / half second (use a TTimer in your UI) you make a small request asking what has changed (you can pass a revision number or a time stamp of your last retrieval). This way you synchronize your desktop data with pending events. Asking for updates on an existing connection is very fast, and will send just one IP packet over the network back and forth if nothing changed. It is a very small task, and won't slow down either the client or the server (if you use some in-memory cache).
In practice, with a real application, such a stateless architecture is very responsive from the end-user point of view, and much easier to deploy. You do not need to create a server on each desktop application, so you don't have to open firewall ports or such. Since HTTP is stateless, it is even Internet-friendly.
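Reduced to its essence, the polling side looks like this (a C# sketch of the pattern; the endpoint and payload are hypothetical, and a Delphi TTimer handler would follow the same shape):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class EventPoller
    {
        static readonly HttpClient Http =
            new HttpClient { BaseAddress = new Uri("http://authservice/") }; // hypothetical
        static long lastRevision = 0; // highest server-side change counter we have seen

        // Call this from a timer every 500-1000 ms.
        public static async Task PollAsync()
        {
            // Ask only for what changed since the last poll; if nothing changed,
            // the response is tiny and the round trip costs almost nothing.
            string json = await Http.GetStringAsync("events?since=" + lastRevision);
            Console.WriteLine(json); // parse, dispatch UserLogIn/UserLogOut/PermissionsChanged
                                     // locally, and advance lastRevision to the highest
                                     // revision number contained in the response
        }
    }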
If you want to develop services, you can use DataSnap, something like RemObjects, or you can try our Open Source mORMot framework, which is able to create interface-based services with light JSON messages over REST, either in-process, using GDI messages, named pipes, or TCP/HTTP - for free, with unbeatable performance, built-in security, and support from Delphi 6 up to XE2. For your event-based task, just using the Client-Server ORM available in mORMot could be enough: create a table/class storing the events (you can even define a round-robin in-memory storage - no need for the SQLite3 engine nor a DB here), then ask for all pending events since the last refresh. And the server can safely be a background service or a normal application - some mORMot users even have the same executable able to act as a stand-alone application, a server service, an application server, or a UI client, just by changing the configuration.
Edit / announcement:
On the mORMot roadmap, we have added a new upcoming feature to easily implement one-way callbacks from the server.
That is, to add a transparent "push" mode to our Service Oriented Architecture framework.
The aim is to implement notification events triggered from the server side, very easily from Delphi code, via some interface definitions, even over a single HTTP connection. WCF, for instance, does not allow this: it needs dual binding, so you have to open a firewall port and such.
It will be used for easy Event Collaboration, via a publish / subscribe pattern, and will allow Event Sourcing. I will try to make it implement both modes: polling and lock-and-wait. A direct answer to your question.
You can use a simple bidirectional TCP socket connection to allow asynchronous server-to-client messages on the same socket.
An example is the Indy TIdTelnetClient class; it uses a thread for incoming messages from the server.
You can build a similar text-based protocol, and you only need an Indy TCP server instance in the service and one Indy client instance in each application.
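For a non-Indy illustration of the same idea, here is a minimal C# client that keeps one socket open and handles server-pushed lines on a background thread (host, port, and message format are made up):

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Threading;

    class PushClient
    {
        static void Main()
        {
            var client = new TcpClient("authservice-host", 9000); // hypothetical endpoint
            var reader = new StreamReader(client.GetStream());

            // Background thread: blocks on ReadLine until the server pushes a message,
            // so events arrive asynchronously on the same connection used for requests.
            new Thread(() =>
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    Console.WriteLine("server event: " + line); // e.g. "UserLogOut"
            }) { IsBackground = true }.Start();

            Console.ReadLine(); // keep the demo alive
        }
    }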

Cloud computing: Learn to scale server up/down automatically

I'm really impressed with the power of cloud computing when it comes to scaling your capacity up and down depending on load.
How can I shift my paradigm and learn to write my applications that way? Writing them once and forgetting about them (regardless of future load) would be the ideal.
How can I practice my skills in that area?
Should I set up a virtualization environment where I can add VMs to a private cloud (via the command line?), driven by some smart algorithm that forecasts the load over some period of time?
Ideally I want to practice this without buying actual cloud computing services, just on my own hardware.
The only thing I want to practice here is scaling app/web roles and/or message-queue workers when the current workers have too many jobs in the queue. So let's rule database scaling out of the question's scope as too big a topic.
One option I will throw out is to use a native cloud execution framework. You might look at the CloudIQ Platform. One component is the CloudIQ Engine. It allows you to develop cloud-native apps in C/C++, Java, and .NET. You get scale-up simply by adding workers to your cloud. The framework automatically distributes your applications to the new machine(s) and, once they are installed, begins sending work to them as requests come in. So in effect the cloud handles your queueing issue for you.
Check out the Download and Community links for more information.
You should try AWS - Amazon offers a free tier that gives you storage, messaging, and micro instances (Linux only). You can start developing small try-outs without paying. Writing an application that scales isn't that hard: try to break your flow into small, concurrent tasks. Client-server applications are even easier: use a load balancer to start/terminate servers on demand.
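Under the hood, queue-driven scaling is just a control loop like this (a sketch; GetQueueDepth, StartWorker, and StopWorker stand in for whatever your queue and provisioning APIs actually expose):

    using System;
    using System.Threading;

    class AutoScaler
    {
        const int JobsPerWorker = 50; // target backlog per worker
        static int workers = 1;

        static void Main()
        {
            while (true)
            {
                int depth = GetQueueDepth(); // e.g. queue length from your MQ
                int wanted = Math.Max(1, (depth + JobsPerWorker - 1) / JobsPerWorker);
                while (workers < wanted) { StartWorker(); workers++; }
                while (workers > wanted) { StopWorker(); workers--; }
                Thread.Sleep(TimeSpan.FromSeconds(30)); // re-evaluate periodically
            }
        }

        // Stand-ins for the real queue/provisioning calls (hypothetical).
        static int GetQueueDepth() { return 0; }
        static void StartWorker() { Console.WriteLine("scale up"); }
        static void StopWorker() { Console.WriteLine("scale down"); }
    }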

A way to synchronize data between an external device and a database?

In the application I am designing, I have to communicate with a device and store a history of data readings in a database. The device is essentially a sensor that spits out numbers via the serial port. The user end of the application is a RubyOnRails interface that allows the user to view this data and configure the device.
I am wondering what kind of connection between the device and the database you would recommend for this kind of setup.
Up to this point, I had a custom application running on a host computer (a computer with the device connected directly through a serial port) that would serve as a bridge to a MySQL database. The application would connect directly to the MySQL database and execute queries. It works fairly well, but I am not sure if this is the best solution.
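For reference, the core of that bridge is essentially the following (a simplified C# sketch, not the real code; the port, schema, and column names are made up):

    using System;
    using System.IO.Ports;
    using MySql.Data.MySqlClient; // MySQL Connector/NET

    class SensorBridge
    {
        static void Main()
        {
            using (var port = new SerialPort("COM1", 9600))
            using (var db = new MySqlConnection("Server=localhost;Database=sensors;Uid=app;Pwd=secret"))
            {
                port.Open();
                db.Open();
                while (true)
                {
                    // One reading per line from the sensor, e.g. "23.4"
                    string reading = port.ReadLine();
                    var cmd = new MySqlCommand(
                        "INSERT INTO readings (value, taken_at) VALUES (@v, NOW())", db);
                    cmd.Parameters.AddWithValue("@v", reading.Trim());
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }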
The only other alternative I see is to have an intermediate application that my custom application could connect to, instead of directly going to the database. This could be a part of the main application, or something separate. Would this be a better solution?
Would you recommend another approach?
Thank you,
I have a similar structure, although I fetch my data from a web service. The way I organize it is:
Create classes in lib/imports, e.g. DailyDataImport, DailyDataSummarize (you can organize the hierarchy and names as you wish).
Create a rake task under a new namespace, say import, and add it to your cron jobs at whatever frequency you need. Take a look at Cron in Ruby; it's helpful.
This allows me to have better control over what goes into my database.
Some questions to consider:
What schedule does the device follow to populate the data?
Do you need the data as-is, or do you want some control over it, or do you need to process it (summarizing, aggregating, etc.)?
MS SQL Server 2008 has great data synchronisation support.
SQL Server 2008 Express is free and can act as a replication subscriber (but not publisher) for clients.
Microsoft Sync Framework
