Are 5G control plane reference point interfaces for presentation only, or are they real interfaces separate from the service-based interfaces?

Are the 5G control plane reference point interfaces (e.g. N11) just used for display purposes, or are they actually separate interfaces used in signaling? Stated another way, when the AMF is communicating with the SMF, do some messages go through the SBI and other messages through N11?
I was under the impression that 3GPP requires all 5G control plane interactions to use only the service-based interfaces. If this is true, why does N11 even exist, other than as a reference point of confusion :-).

The interaction between NFs (network functions) is represented in two ways, because the 5G architecture is defined both as service-based and as reference-point based. The N11 reference point is just a representation of the interface between the AMF and the SMF: it provides a name for the interface between the NF services of the respective network functions, for messages that are actually exchanged over the corresponding SBI.
The NFs within the 5GC Control Plane shall only use SBI for their interactions.
A typical control plane NF can provide one or more NF services. An NF service consists of operations based on either a request-response or a subscribe-notify model, and it uses a common control protocol, an HTTP-based API, replacing protocols like Diameter.
The AMF and SMF are NFs that may run in virtual machines or on bare metal, per MNO preference. Within the 5GC, the AMF offers services to the SMF: it exposes the Namf service-based interface (SBI) and offers services to the SMF, other AMFs, the PCF, SMSF, LMF, GMLC, CBCF, PWS-IWF and NEF.
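To make the relationship concrete, here is a minimal sketch of what a message "on N11" actually is on the wire: an HTTP call from the AMF to the SMF's Nsmf_PDUSession service. This is illustrative only; the URL, payload fields, and use of Python's requests library are assumptions, not the exact TS 29.502 schema.

```python
import requests

# Hypothetical SMF address; in a real 5GC this would be discovered via the NRF.
SMF_SBI_BASE = "http://smf.example.com/nsmf-pdusession/v1"

def create_sm_context(supi: str, pdu_session_id: int) -> dict:
    """AMF-side sketch of an N11 interaction: the N11 'reference point' is
    just a name for this HTTP request to the SMF's service-based interface."""
    payload = {
        # Field names are simplified and illustrative, not the full 3GPP schema.
        "supi": supi,
        "pduSessionId": pdu_session_id,
        "dnn": "internet",
    }
    resp = requests.post(f"{SMF_SBI_BASE}/sm-contexts", json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()

# Example (would require a reachable SMF):
# ctx = create_sm_context("imsi-001010000000001", 1)
```

So there is only one set of messages, carried over the SBI; N11 is the architectural label for that AMF-SMF interaction.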

Inter-Application Communication in OpenDaylight controller

I am new to OpenDaylight and I seek help regarding the following:
How can I make two different applications communicate with each other?
Can I have something like a communication bus for my selected set of applications, so that they can transfer data to each other? Or do I need a single application with submodules (with different features) to achieve the same task, i.e. feature communication in this case?
The main feature that sets OpenDaylight (ODL) apart from other SDN controllers is the Model-Driven Service Abstraction Layer (MD-SAL), which provides a framework to share structured/modeled data and send notifications between ODL applications, among other things.
If you want to transfer data between ODL applications, you first need to model it using YANG, and include the YANG model in one of the applications.
To take advantage of the features offered by MD-SAL, take a look at the official documentation. Once you understand the architecture, look at the source code of existing applications for examples of how to harness the power of MD-SAL.
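MD-SAL itself is Java- and YANG-based, so the following is deliberately not ODL code. It is a toy sketch (in Python) of the pattern MD-SAL gives you: a shared data store that applications write to and read from, plus change notifications delivered to subscribers. All names are made up.

```python
from collections import defaultdict
from typing import Any, Callable

class MiniDataStore:
    """Toy analogue of MD-SAL's shared data tree with change notifications.
    Real MD-SAL additionally enforces YANG-modeled schemas and transactions."""

    def __init__(self) -> None:
        self._data: dict[str, Any] = {}
        self._listeners: dict[str, list[Callable[[str, Any], None]]] = defaultdict(list)

    def register_listener(self, path: str, callback: Callable[[str, Any], None]) -> None:
        self._listeners[path].append(callback)

    def write(self, path: str, value: Any) -> None:
        self._data[path] = value
        for cb in self._listeners[path]:
            cb(path, value)  # notify subscribed "applications"

    def read(self, path: str) -> Any:
        return self._data.get(path)

# Two "applications" communicating through the store:
store = MiniDataStore()
store.register_listener("/topology/nodes", lambda p, v: print(f"app B saw {p} = {v}"))
store.write("/topology/nodes", ["node-1", "node-2"])  # app A publishes
```

In ODL, the YANG model plays the role of the agreed-upon `path`/value structure, which is why you must model your data first.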

When is it justified to use OPC UA, and OPC UA architectures over MQTT?

I am new to using OPC UA, and I would like to clarify some doubts I have, which are as follows:
– In what situations is OPC UA used?
– What do OPC UA architectures over MQTT look like?
If there is any document that explains these two doubts, I would be grateful.
OPC UA is probably the de-facto standard for industrial M2M communication, and it is very important in the context of Industrie 4.0.
Let's say you have a piece of industrial machinery (like a PLC) that manages others, like sensors. With OPC UA you can model some data into the PLC (which becomes an OPC UA server) using an information model (object-structured and hierarchical, with concepts similar to UML) built according to the rules defined by the OPC UA standard (https://opcfoundation.org/developer-tools/specifications-unified-architecture/part-3-address-space-model/). So the PLC first gathers data from these sensors using a specific industrial protocol, then models the data considered relevant in its address space.
You can also build an OPC UA server on the sensors themselves: imagine a temperature or humidity sensor in which you model not only the temperature value but also the manufacturer and the engineering unit (Fahrenheit or Celsius, for instance). You can also insert methods within a server and associate specific actions with them, for example turning a specific functionality on or off if some conditions occur. For the full specifications you can look at https://opcfoundation.org/developer-tools/specifications-unified-architecture, where, after signing up, you can download the detailed specifications. Another good piece of documentation I found is http://documentation.unified-automation.com/uasdkcpp/1.6.1/html/index.html, where the main concepts are explained.
Once you have defined your OPC UA servers with an information model in their address space, you can start interacting with other industrial machinery in a standardized way. That machinery could be MES or HMI applications, and they have to be OPC UA clients. They can query the OPC UA servers mentioned above: browsing their address space, reading values, calling methods, and monitoring interesting variables or events (after subscribing to them, the server sends a notification when a change occurs). The main advantage is that all these operations are performed via standardized messages: to write data the client sends a WriteRequest, to read data it sends a ReadRequest, and so on. Since everything is standardized (from data types to message serialization), any client can understand the structure of any OPC UA server (even servers from different manufacturers). Without that, every manufacturer could define services or variables in its own way, and you would have to build your application (say, an HMI) to fit that particular vendor's APIs or conventions.
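As a concrete illustration of that server/client interaction, here is a minimal sketch using the community FreeOpcUa `opcua` Python package; the endpoint URL, namespace URI, and node names are made-up examples, not part of the standard.

```python
from opcua import Client, Server

# --- Server side (e.g. running on or for the sensor) ---
server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/sensor/")          # illustrative endpoint
idx = server.register_namespace("http://example.com/sensor")   # made-up namespace URI
dev = server.get_objects_node().add_object(idx, "TemperatureSensor")
temp = dev.add_variable(idx, "Temperature", 21.5)   # value plus modeled metadata
unit = dev.add_variable(idx, "EngineeringUnit", "Celsius")
server.start()

# --- Client side (e.g. an HMI or MES application) ---
client = Client("opc.tcp://localhost:4840/sensor/")
client.connect()
node = client.get_node(temp.nodeid)        # in practice: browse the address space
print("Temperature:", node.get_value())   # a standardized ReadRequest under the hood
client.disconnect()
server.stop()
```

Any OPC UA client from any vendor could browse this server and discover the same structure, which is the interoperability point made above.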
Regarding OPC UA over MQTT, you can find some useful information in OPC UA protocol vs MQTT protocol. As I said before, OPC UA has the advantage of defining a structured, standard information model accessible via standard services, so MQTT covers only one part of the whole.
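For contrast, publishing the same sensor reading over plain MQTT looks like the sketch below (paho-mqtt 1.x style API; broker address and topic are made up). MQTT moves opaque bytes between publishers and subscribers; the payload structure is an ad-hoc convention between the peers, which is exactly the gap the OPC UA information model fills.

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # made-up broker address

# The payload layout here is a private convention between the peers;
# unlike OPC UA, MQTT itself imposes no information model on it.
payload = json.dumps({"temperature": 21.5, "unit": "Celsius"})
client.publish("factory/sensor1/temperature", payload)
client.disconnect()
```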
Another good reference for understanding information models in OPC UA servers could be OPC Unified Architecture.

Servant and objects - relation

I have read a lot about servants and objects used in technologies such as Ice or CORBA. There are a lot of resources where I can read something like this:
One servant can handle multiple objects (for resource saving).
One object can be handled by multiple servants (for reliability).
Could somebody give me a real-life example for these two statements?
If I am not mistaken, this term was coined by Douglas Schmidt in his paper describing the Common Object Request Broker Architecture.
Here is a direct quote of a few definitions:
Object -- This is a CORBA programming entity that consists of an identity, an interface, and an implementation, which is known as a Servant.
Servant -- This is an implementation programming language entity that defines the operations that support a CORBA IDL interface. Servants can be written in a variety of languages, including C, C++, Java, Smalltalk, and Ada.
CORBA IDL stubs and skeletons -- CORBA IDL stubs and skeletons serve as the "glue" between the client and server applications, respectively, and the ORB.
ORB Interface -- An ORB is a logical entity that may be implemented in various ways (such as one or more processes or a set of libraries). To decouple applications from implementation details, the CORBA specification defines an abstract interface for an ORB. This interface provides various helper functions such as converting object references to strings and vice versa, and creating argument lists for requests made through the dynamic invocation interface described below.
CORBA
The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems running on different operating systems, programming languages, and computing hardware.
So, there are clients, server, client and server proxies, and ORB core. Client and server use proxies to communicate via ORB core, which provides a mechanism for transparently communicating client requests to target object implementations. From client perspective, this makes calls on remote objects look like the objects are in local address space and therefore simplifies design of clients in distributed environment.
Given all the above, a Servant is an implementation that is the invocation target for remote client calls; it abstracts the remote objects, which are the actual targets.
As for your question, one servant can handle calls to multiple distributed objects that are encapsulated by that servant. Note that the client doesn't access these objects directly but goes via the servant.
One servant for multiple objects: for example, in a bank, each bank account is an object, but you don't want a servant in memory for every bank account, so you have one servant for all bank accounts.
One object handled by multiple servants is used for things like load balancing and fault tolerance; the client doesn't know which servant the call is actually executed on.
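To make the bank example concrete, here is a minimal sketch of the "one servant, many objects" pattern in plain Python (not a specific CORBA or Ice API): a single servant instance dispatches calls for any account, using the object identity that arrives with each invocation, roughly like a POA default servant.

```python
class AccountServant:
    """One servant handling every bank-account object.
    The object identity (account id) arrives with each call,
    so no per-account servant has to live in memory."""

    def __init__(self, database: dict[str, float]) -> None:
        self._db = database  # account id -> balance

    def balance(self, object_id: str) -> float:
        return self._db[object_id]

    def deposit(self, object_id: str, amount: float) -> None:
        self._db[object_id] += amount

# The "ORB" side: every incoming request carries the target object id
# and is routed to the same servant instance.
accounts = {"acct-001": 100.0, "acct-002": 250.0}
servant = AccountServant(accounts)
servant.deposit("acct-001", 50.0)
print(servant.balance("acct-001"))  # 150.0
```

For the reverse case (one object, many servants), picture several replicas of this servant behind a dispatcher: a call on object "acct-001" may land on any replica, invisibly to the client.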

How to implement OData federation for Application integration

I have to integrate various legacy applications with some newly introduced parts; they are silos of information, built at different times with varying architectures. At times these applications may need to get data from another system, if it exists, and display it to the user within their own screens based on business needs.
I was looking to see if it's possible to implement a generic federation engine that abstracts the aggregation of data from the various OData endpoints and provides a single version of truth.
I am not really looking to do ETL here, as that may introduce data-related side effects, e.g. staleness.
Can someone share some ideas on how this can be achieved, or point me to an article that shows such a concept?
Officially, the answer is to use either the reflection provider or a custom provider:
Support for multiple data sources (odata)
Allow me to expose entities from multiple sources
To decide between the two approaches, take a look at this article.
If you decide that you need to build a custom provider, the referenced article also contains links to a series of other articles that will help you through the learning process.
Your project seems non-trivial, so in addition I recommend looking at other resources, like the WCF Data Services Toolkit, to help you along.
By the way, from an architectural standpoint, I believe your idea is sound. Yes, you may have some domain logic behind OData endpoints, but I've always believed this logic should be thin, since OData is primarily used in data access layers, much like SQL (as opposed to service layers, which encapsulate more behavior in the traditional sense). Even if that thin logic requires your aggregator to get a little smart, it's likely that you'll always be able to get away with a custom provider.
That being said, if the aggregator itself encapsulates a lot of behavior (as opposed to simply aggregating and re-exposing raw data), you should consider a protocol that is less data-oriented (while keeping the OData backends behind that service). Since domain logic is normally heavily specific, there is very rarely a one-size-fits-all protocol, so you would naturally have to design it yourself.
However, if the aggregated data is exposed mostly as-is or with essentially structural changes (little to no behavior besides assembling the raw data), I think using OData again for that central component is very appropriate.
Obviously, and as you can see in the comments to your question, not everybody would agree with all of this -- so as always, take it with a grain of salt.
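Whichever provider model you pick, the core federation step itself is small. Below is a minimal sketch of the idea, assuming two hypothetical OData endpoints and Python's requests library: read both feeds live (no ETL, so no staleness) and merge the entities into one in-memory view keyed by a shared id.

```python
import requests

# Hypothetical OData endpoints exposed by two legacy silos.
ENDPOINTS = [
    "http://legacy-a.example.com/odata/Customers",
    "http://legacy-b.example.com/odata/Customers",
]

def fetch_entities(url: str) -> list[dict]:
    # OData JSON responses wrap the result set in a "value" array.
    resp = requests.get(url, params={"$format": "json"}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("value", [])

def federate(key: str = "CustomerId") -> dict:
    """Merge entities from all endpoints into a single view; later
    sources fill in fields missing from earlier ones. Data is always
    fetched live, so there is no staleness problem to manage."""
    merged: dict = {}
    for url in ENDPOINTS:
        for entity in fetch_entities(url):
            merged.setdefault(entity[key], {}).update(entity)
    return merged

# merged_view = federate()  # would require the endpoints to be reachable
```

A custom provider would essentially re-expose `merged_view` as a new OData feed, which is the "single version of truth" from the question.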

Is data representation typically part of a "distributed application middleware"?

I am currently building a lightweight application layer that provides distributed services to applications of a specific type. This layer provides synchronization and data transmission services to applications that use it via an API, so I classify the software as "middleware", since it bridges communication among heterogeneous distributed applications of a specific type. However, my software does not cover data representation: it "only" delivers messages to other applications in a synchronized manner, but does not specify what messages look like or how they can be parsed/read/interpreted. Instead, the developer decides which message format to use, e.g. JSON, XML, Protobuf, etc. The applications are most of the time governed by one developer party. Now, my question is whether this is a severe lack of features for the software to be classified as "distributed application middleware". The aim of the software is to glue together heterogeneous software applications whose type cannot be compared to conventional software and which therefore need specific services (which prevents the user from "simply" using CORBA, etc.).
Thanks a lot!
Even though you leave the concrete message format open, you have still specified which formats (JSON, XML, ...) can be used (whether hardcoded or by other means). Therefore, in my opinion, you have specified data representation.
If your software is modular in adding new formats, then that modularity itself is a feature (and not a lack of a feature).
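One way to make that modularity explicit is to keep the middleware core payload-agnostic and let applications register codecs. A minimal sketch in Python; all names are illustrative:

```python
import json
from typing import Any, Callable

# Registry of pluggable codecs: the middleware core moves bytes and
# synchronizes delivery, but never interprets payload structure.
Encoder = Callable[[Any], bytes]
Decoder = Callable[[bytes], Any]
_codecs: dict[str, tuple[Encoder, Decoder]] = {}

def register_codec(name: str, encode: Encoder, decode: Decoder) -> None:
    _codecs[name] = (encode, decode)

def encode_message(message: Any, codec: str = "json") -> bytes:
    encoder, _ = _codecs[codec]
    return encoder(message)

# Applications (not the middleware) decide on representation:
register_codec("json",
               lambda m: json.dumps(m).encode("utf-8"),
               lambda b: json.loads(b.decode("utf-8")))

payload = encode_message({"event": "sync", "seq": 42})
print(payload)  # b'{"event": "sync", "seq": 42}'
```

An application party could register an XML or Protobuf codec the same way, which is the modularity-as-a-feature point above.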
