UI5 OData Service for two (or more) different backends

At the moment I only have about two months of experience in UI5. I developed a little sample app and used the SAP Gateway Service Builder to pass my requests to the SAP backend.
Now my employer has asked me to research the possibility of accessing two different backends (one SAP, one non-SAP) via OData from the same app. After a little reading and thinking I came to the conclusion that it would be best to access both backends through a single gateway.
Since I've already worked with SAP Gateway, I wonder: is there a way to access non-SAP backends with SAP Gateway? Are there better options?
Or is my current approach completely wrong, and should I think about a whole other way?

It depends on your approach and on the non-SAP system:
Is the non-SAP system accessible via web services? Then use a second data model (e.g. JSON/OData) within SAPUI5 by loading data via web services after the initial load of your application.
Is the non-SAP system connected to SAP, e.g. via RFC or another technology? Then you can read data from the other system while your initial Gateway service is called, simply by calling the RFC function module in your method.
In my opinion you will not find an 'easy' way to read both via one single SAP NetWeaver Gateway.

Not sure why you would want to access a non-SAP OData service via SAP Gateway. On the other hand, you may want a router of some sort, so that all services are exposed at the same network location and incoming requests are routed to the appropriate backend for action.
You may also want to "mash up" the SAP and non-SAP services into some sort of new service. In that case, maybe look at some of the API management tools, like Apigee, to help you achieve that.

Related

Microservice architecture structure with docker-compose: .NET 6

An ex-employee planned a microservice architecture which is being implemented now. I have a few questions regarding the design, and I'd highly appreciate your feedback.
Explanation
The Dematerialized UI has a matching Dematerialized API.
The Dematerialized API validates the user and generates a token via the SSO library.
The Flight API does the I/O validation and validates the request via the Validate Request microservice.
The Flight API calls the Booking API to get some bookings based on the UserId.
The Flight API calls the Print Booking API to generate messages using the Generate Message microservice.
The Print Booking API must call the Data Access API to get data and then call the Generate PDF microservice.
The Data Access API calls the database for data.
My Project Structure
FlightBookingsMicroserice.V1 //solution
  ApiGateways //folder
    DMZ.API/DMZ.API.csproj //folder/project
  BuildingBlocks
    EventBus/EventBus.csproj
    EventBus/EventBusRabbitMQ
  Services
    SSO
      SSO.API/SSO.csproj
      SSO.UnitTests
    Flight
      Flight.API/Flight.API.csproj
      Flight.UnitTests
    //similar for all
    ValidationRequest
    Booking
    PrintBooking
      PrintBooking.API.csproj
    DataAccess
      DataAccess.API.csproj
    GeneratePDF
    GenerateMessage
  UI
    UI
  Docker-compose
Questions
1) Should I be using Ocelot in DMZ.API.csproj, Flight API and Print Booking API?
2) Is my project structure a microservice way of development?
3) Should I continue to use ASP.NET Core Web API with .NET 6 for the Dematerialized API (in orange), the Function APIs (in blue) and the microservice (in purple) projects?
4) For validation, since the SSO token is passed from the Dematerialized UI, what happens if the token expires while CRUD operations have already been performed for some stages (rolling back changes is a hassle)? Should each API access an identity server, validate the user it was given, and generate its own token for its services (in purple)?
Thank you in advance.
The core question is whether you really need all those services, or whether you are perhaps making things too complicated. I think the important thing is to really consider, and make sure you can justify, why you want to go down this route.
If you do synchronous API calls between the services, that creates coupling and, in the long run, a distributed monolith.
For question #4, you typically use one access token for the user to access the public service, and then a different set of internal tokens (machine-to-machine, also called client credentials in OpenID Connect parlance) between services, with a totally different lifetime.
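As a minimal sketch of that machine-to-machine flow, the snippet below requests a client-credentials token from an identity provider; the endpoint URL, client id, secret and scope are illustrative assumptions, not details from the question:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenClient
{
    public static async Task Main()
    {
        using var http = new HttpClient();
        // OAuth 2.0 client credentials grant: the calling service authenticates
        // as itself, not as the end user; all values below are placeholders.
        var response = await http.PostAsync(
            "https://identity.example.com/connect/token",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = "flight-api",
                ["client_secret"] = "<secret>",
                ["scope"] = "booking-api"
            }));
        response.EnsureSuccessStatusCode();
        // The JSON payload carries access_token and expires_in; cache the token
        // and refresh it shortly before it expires.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}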
q1: Ocelot is an API gateway, which is the entry point for your requests. It should be the first layer/service met by a user request, in front of your services, and it forwards the request to the right service according to its configuration. So it sits in front of all the services you have. Some architectures provide more than one API gateway for different reasons, e.g. a specific API gateway for mobile requests.
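A minimal sketch of such a gateway host in .NET 6 using the Ocelot package; the routes live in an ocelot.json file, and the file name and example routes are assumptions for illustration:

// Program.cs of the gateway project; requires the Ocelot NuGet package.
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);
// ocelot.json maps upstream paths to downstream services,
// e.g. /flight/{everything} -> Flight.API, /booking/{everything} -> Booking.API.
builder.Configuration.AddJsonFile("ocelot.json");
builder.Services.AddOcelot(builder.Configuration);

var app = builder.Build();
await app.UseOcelot();   // every incoming request is forwarded per the route table
app.Run();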
q2: Looking at the separate services (I can't understand "Function API", but I assume those are services as well), yes. But microservice development is not just about separating things; it is about design and about identifying the services from the business context (Domain-Driven Design). It is very challenging to identify the services, their size, and the way they communicate with each other (asynchronous versus synchronous communication).
q3: Microservices are not about languages and frameworks. One of the benefits of a microservices architecture is that it is not language- or framework-dependent; multiple languages may be used across the microservices. Choosing a language depends on organizational policy or your own reasons. If you are a .NET developer, then go for .NET.
q4: All the services are registered with the identity server and validate a given token against it. The identity server generates tokens (there may be multiple tokens) with scopes. A request from an identified user always carries the token in its headers, and the services validate the incoming token by referring to the identity server. These tokens have a lifetime, and the identity server also issues refresh tokens for when the current token expires. Please look at the OAuth docs and RFCs. This series may also help (you can skip the basic topics; I learned a lot from it): https://www.youtube.com/watch?v=Fhfvbl_KbWo&list=PLOeFnOV9YBa7dnrjpOG6lMpcyd7Wn7E8V
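For the per-service validation step, here is a minimal sketch of JWT bearer validation in an ASP.NET Core service; the authority URL and audience are placeholder assumptions:

// Requires the Microsoft.AspNetCore.Authentication.JwtBearer package.
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://sso.example.com"; // the identity server (placeholder)
        options.Audience = "flight-api";               // audience this service accepts (placeholder)
    });
builder.Services.AddAuthorization();

var app = builder.Build();
app.UseAuthentication();   // validates the bearer token on every request
app.UseAuthorization();
app.MapGet("/ping", () => "pong").RequireAuthorization();
app.Run();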

Discover WCF Rest Service from iOS5

I am building an iOS app that should communicate with a WCF REST service. They will both be on the same local network. While testing I have hardcoded the IP of the service, but that won't work once it is deployed.
How can I get the service address, or connect to it in any other way?
I was reading about WCF Discovery, but I don't know how I would implement this in iOS.
If it is of any help, I'm using the WCF REST Service Template 40 (CS).
Any help would be appreciated.
EDIT: How about using Bonjour? Any thoughts?
WCF Discovery is an implementation of the WS-Discovery specification, which is an open standard. As such, there are a few implementations of it: for example, one in Java called java-ws-discovery, one in Python called python-ws-discovery and one in C# called WS Discovery proxy.
I haven't found an implementation in Objective-C, but given that those three are all open source, you may be able to port one, or at least the part you need (depending on whether you are able to understand one of those languages).
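For context, the WCF side has to opt in to answering WS-Discovery probes before any client (ported library or otherwise) can find it. A minimal self-hosted sketch, with placeholder contract and address names:

using System;
using System.ServiceModel;
using System.ServiceModel.Discovery;

[ServiceContract]
public interface IEchoService          // placeholder contract name
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) => text;
}

public static class Program
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(EchoService),
            new Uri("http://localhost:8000/echo"));            // placeholder address
        host.AddServiceEndpoint(typeof(IEchoService), new BasicHttpBinding(), "");
        // Answer WS-Discovery probe messages on the standard UDP multicast endpoint.
        host.Description.Behaviors.Add(new ServiceDiscoveryBehavior());
        host.AddServiceEndpoint(new UdpDiscoveryEndpoint());
        host.Open();
        Console.WriteLine("Service discoverable; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}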

Best choice for robust self hosting server: WCF vs. ASP.NET Web Api

We currently have a .NET 4 application that consists of a Windows service running in the background and local or remote clients (normally only 1-3).
The clients have a WPF GUI and need some data from the Windows service. Therefore, we use WCF with NamedPipe binding for a local client and NetTcp binding for remote clients. This works, but we often have problems with endpoints that are not reachable (channel faulted or not found, etc.). We already try to rebuild faulted connections, but it all seems pretty fragile...
Now enter Web API: an HTTP-based stack looks like it might be more robust (no channels, no endpoints, and it can be self-hosted in a Windows service as well). There seem to be no problems with broken channels, because each request is handled individually. So if something fails, you just repeat the request. (And we have experience with ASP.NET MVC from other apps, so this is not new to us.)
Now we are wondering what our best bet might be. Is it better to "harden" our existing WCF service (one service interface with about 15 operations) or to move the interface to Web API and run it as HTTP requests (with JSON data)? Performance is not our main issue here...
Any ideas?
Hartmut
I recommend you stick with WCF (SOAP) services for your WPF application rather than moving to the Web API. There are a number of reasons for this. First, I think we need to consider what the new Web API is trying to address - namely, to provide a framework for supporting RESTful/HTTP/hypermedia services. This is likely to be a good fit for building applications that make heavy use of HTTP, such as web, mobile and JavaScript applications, where you want to maximise the "reach" or interoperability of your services (irrespective of platform). This is not to say that you can't use it for WPF clients, but in your case, where all traffic is local to your domain, it makes more sense to stick with your current implementation.
The binding choices you have made for your services/clients sound fine to me. I would focus on why your channels are faulting and address those issues. You may also want to consider hosting your services in IIS and using WAS to expose your non-HTTP endpoints. I have had much success with this in the past, and for the most part it has been pretty stable. It also takes away a few of the headaches of managing your own host. If you are concerned about the TCP binding faults, then just create a new HTTP or wsHttp endpoint and use that instead. This will give you exactly the same transport the Web API uses, without having to change your programming model.
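As a sketch of that last suggestion - exposing an extra HTTP endpoint next to the existing named-pipe and TCP endpoints on the same self-hosted service - with a placeholder contract standing in for the real 15-operation interface:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IDataService   // placeholder for the real ~15-operation contract
{
    [OperationContract]
    string GetStatus();
}

public class DataService : IDataService
{
    public string GetStatus() => "ok";
}

public static class HostProgram
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(DataService),
            new Uri("net.tcp://localhost:9000/data"),
            new Uri("http://localhost:9001/data"));
        host.AddServiceEndpoint(typeof(IDataService), new NetNamedPipeBinding(),
            "net.pipe://localhost/data");                                   // local client
        host.AddServiceEndpoint(typeof(IDataService), new NetTcpBinding(), "");  // remote clients
        host.AddServiceEndpoint(typeof(IDataService), new WSHttpBinding(), "");  // HTTP fallback
        host.Open();
        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}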

EHR intercommunication / client

So, I'm researching methods for building a client interface for existing EMRs. I've read tons of info on HL7, as well as the various coding schemes, but I'm still really clueless.
For anyone who's worked with an EMR before: is it possible to build a web interface that can use HTTP POST and HTTP GET requests to pull/push data to the server database? Or would you have a separate database for the client, say a web application, and then use some interface engine like Mirth to communicate between the EMR database and the web application?
A web service API is definitely a way to go. One benefit of this is that you get HTTPS almost out of the box for encryption of data in transit.
The way we have configured our EMR, we have a TCP server accepting incoming HL7 messages from certain IPs, which connects directly to our EMR database. This can be beneficial because it separates the EMR and interface processes (you don't have to restart your whole EMR if the interface goes down, for instance).
Another good feature would be a token system for pseudo-authentication. This only works if you are going over a secure connection, though.
If you aren't into writing your own TCP server (not that hard), an API-based server is probably just as good.
EDIT: What language(s) do you think you'll be using?
Other things that you might run into:
Some applications prefer file drops to direct calls (either URL or TCP).
Some vendors will have their own software that sits on a server of yours.
Don't forget the ACK (see the sketch after this list).
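Since the answer mentions both a hand-rolled TCP server and the ACK, here is a minimal sketch of an HL7 v2 listener using MLLP framing (<VT> message <FS><CR>); the port and the simplified ACK are illustrative assumptions:

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

public static class MllpListener
{
    const byte VT = 0x0B, FS = 0x1C, CR = 0x0D;   // MLLP framing bytes

    public static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 2575);   // 2575 is the registered HL7/MLLP port
        listener.Start();
        while (true)
        {
            using var client = listener.AcceptTcpClient();
            using var stream = client.GetStream();
            var buffer = new MemoryStream();
            int b;
            while ((b = stream.ReadByte()) != -1 && b != FS)
                if (b != VT) buffer.WriteByte((byte)b);        // strip the start-of-block byte
            stream.ReadByte();                                 // consume the trailing <CR>
            string message = Encoding.ASCII.GetString(buffer.ToArray());
            Console.WriteLine("Received:\n" + message);
            // Simplified ACK; a real one must echo MSH fields and the message control id.
            string ack = "MSH|^~\\&|RECV|FAC|SEND|FAC|20240101000000||ACK|1|P|2.3\rMSA|AA|1";
            var framed = new byte[ack.Length + 3];
            framed[0] = VT;
            Encoding.ASCII.GetBytes(ack, 0, ack.Length, framed, 1);
            framed[^2] = FS;
            framed[^1] = CR;
            stream.Write(framed, 0, framed.Length);
        }
    }
}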
I don't see why you couldn't do this. You would need to build the web service to handle requests at a specific URI. When this URI is called, the web service uses the data sent with the request to make changes in the database.
Once you had the web service built, you could build some sort of front-end that displays your information to the user and makes HTTP GET and HTTP POST calls.
There is a lot of flexibility in what you are trying to do... so go in with a plan for sure.
In general, though, you should be able to accomplish what you need by building your own web service and a front-end application that is able to manipulate an EMR database.
It really depends on your architecture and requirements.
Architecture 1
If you want your client to be web based, but the client is a separate app from your backend, then the browser sends the info over HTTP to your client app's server side, and that in turn sends info to your EHR backend (another app). That second communication might be implemented using a standard, which will help you integrate more systems with your backend in the future. So that interface can be HL7 based; if HL7 v2.x is used, take a look at the MLLP protocol: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=55
This is the most performant way of communicating HL7 data. If you don't want to deal with TCP, there is a proposal for HL7 v2.x over HTTP, which HAPI has implemented: http://hl7api.sourceforge.net/hapi-hl7overhttp/
If you don't want to use HL7 v2.x but HL7 v3 (a different standard, not really a version of 2.x) or CDA, you can use HTTP or SOAP.
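To make the MLLP option concrete, here is a minimal sending-side sketch (the counterpart of the listener shown earlier in this section); the host, port and message content are placeholders:

using System;
using System.Net.Sockets;
using System.Text;

public static class MllpSender
{
    public static void Main()
    {
        // Placeholder host, port and message; MLLP frames the payload as <VT> msg <FS><CR>.
        string hl7 = "MSH|^~\\&|SEND|FAC|RECV|FAC|20240101000000||ADT^A01|42|P|2.3\rPID|1||12345||DOE^JOHN";
        using var client = new TcpClient("mirth.example.local", 2575);
        using var stream = client.GetStream();
        stream.WriteByte(0x0B);                                // <VT> start of block
        byte[] body = Encoding.ASCII.GetBytes(hl7);
        stream.Write(body, 0, body.Length);
        stream.Write(new byte[] { 0x1C, 0x0D }, 0, 2);         // <FS><CR> end of block
        // Read the ACK back (framed the same way) before closing the connection.
        var ack = new StringBuilder();
        int b;
        while ((b = stream.ReadByte()) != -1 && b != 0x1C)
            if (b != 0x0B) ack.Append((char)b);
        Console.WriteLine("ACK:\n" + ack);
    }
}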
Architecture 2
But if you want your client to be just a UI on the user side (browser), an HTTP POST will suffice to send info from the browser to the server. That means your EHR is a centralized EHR with a web UI.
In the first architectural case you'll probably have multiple client apps (full EMR apps) and a backend EHR server (centralized backend). In my developments I follow this architecture.
There, Mirth might also help to manage all the communications between client apps and backend apps. In the second case, using Mirth makes no sense: it is just a web application, and the client communicates directly with the web server. Of course, you could use Mirth as a web server, but that's not its role; it is an ESB, not a web server.
Hope that helps!

Communicating between Node.js and an ASP.NET MVC Application

I have an existing complex website built using ASP.NET MVC, including a database backend, a data layer, and the web UI layer. Rebuilding this website in another language is not a feasible option.
There are some UI elements on some views (client side) that would benefit from live interactivity, involving both push and pull, so rather than implement some kind of custom long-polling or WebSocket server in ASP.NET, I am looking to leverage Node.js for Windows and Socket.IO.
My problem is that I need two-way communication between both applications. Each user should only be able to receive data once they are authorised on the ASP.NET website, so I first need communication for this. Secondly, once certain events occur on the ASP.NET website, I want to push this data immediately to the Node server, to be broadcast to specific users or groups of users. Thirdly, I would like any data sent to the Node.js server to be pushed to the ASP.NET website for processing, as this is where all our business logic lies. The sole reason for adding Node.js is to have the possibility of pushing data directly to the client; I do not want to build any business logic into it (or as little as possible).
I would like to know the fastest method of two-way push communication between Node.js and ASP.NET. The only good option I'm aware of so far is to create a special listener on a specific port on the Node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method? I also know that you could use a database in between, but surely this would need to be polled and would be less efficient? Both servers will be running on the same machine under a Visual Studio project.
Many thanks for any help you can provide.
I'm not an ASP.NET expert, but I think there are multiple ways you can achieve this:
1) As you said, you could make Node listen on a specific port for data and then react based on the data received (TCP).
2) You can make POST requests to Node.js (HTTP) and also send an auth key in the process to be extra secure. As in 1), Node would react to the data you send.
3) Use something like Redis for pub/sub: send messages from ASP.NET (pub) and receive them on the Node.js side (sub). This is even better if you want to scale your app across multiple machines, etc. (see the sketch below).
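For option 3), here is a minimal sketch of the ASP.NET (publisher) side, assuming the StackExchange.Redis package and a local Redis instance; the channel name and payload are illustrative. The Node.js side would simply subscribe to the same channel with any Redis client:

using System;
using StackExchange.Redis;

public static class Publisher
{
    public static void Main()
    {
        // One multiplexer per application; it is designed to be shared and reused.
        var redis = ConnectionMultiplexer.Connect("localhost:6379");
        ISubscriber pub = redis.GetSubscriber();
        // Every subscriber on "user-events" (e.g. the Node.js process) gets this message.
        pub.Publish(RedisChannel.Literal("user-events"),
                    "{\"userId\":42,\"event\":\"orderUpdated\"}");
        Console.WriteLine("published");
    }
}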
The only good option I'm aware of so far is to create a special listener on a specific port on the node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method?
You can take a look at the Redis pub/sub model, where the ASP.NET MVC application and Node.js communicate through separate channels in order to achieve full-duplex communication. You can also try CouchDB change notifications.
I also know that you could use a database in between, but surely this would need to be polled and would be less efficient?
The former techniques do not require you to poll for changes; instead, they notify you when a change happens or a channel message arrives.
