I have read a lot about servants and objects used in technologies such as ICE or CORBA. There are a lot of resources where I can read something like this:
One servant can handle multiple objects (for resource saving).
One object can be handled by multiple servants (for reliability).
Could somebody give me a real-life example for these two statements?
If I am not mistaken, this term was coined by Douglas Schmidt in his paper describing the Common Object Request Broker Architecture.
Here is a direct quote of a few definitions:
Object -- This is a CORBA programming entity that consists of an identity, an interface, and an implementation, which is known as a Servant.
Servant -- This is an implementation programming language entity that defines the operations that support a CORBA IDL interface. Servants can be written in a variety of languages, including C, C++, Java, Smalltalk, and Ada.
CORBA IDL stubs and skeletons -- CORBA IDL stubs and skeletons serve as the "glue" between the client and server applications, respectively, and the ORB.
ORB Interface -- An ORB is a logical entity that may be implemented in various ways (such as one or more processes or a set of libraries). To decouple applications from implementation details, the CORBA specification defines an abstract interface for an ORB. This interface provides various helper functions such as converting object references to strings and vice versa, and creating argument lists for requests made through the dynamic invocation interface described below.
CORBA
The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware.
So, there are clients, server, client and server proxies, and ORB core. Client and server use proxies to communicate via ORB core, which provides a mechanism for transparently communicating client requests to target object implementations. From client perspective, this makes calls on remote objects look like the objects are in local address space and therefore simplifies design of clients in distributed environment.
Given all the above, a servant is the implementation that is the invocation target for remote client calls; it abstracts the remote objects which are the actual targets.
As for your question, one servant can handle calls to multiple distributed objects, which are encapsulated by the servant. Note that the client doesn't access these objects directly but goes via the servant.
An example of one servant for multiple objects is a bank: each bank account is an object, but you don't want to keep a servant in memory for every bank account, so you have one servant for all bank accounts.
One object handled by multiple servants is used for things like load balancing and fault tolerance. The client doesn't know which servant the call actually executes on.
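To make the bank example concrete, here is a minimal, hypothetical C# sketch of the "default servant" idea: a single servant instance services calls for many account objects by looking up the per-account state on each call. All names are made up for illustration; in real CORBA you would register such a servant with the POA as a default servant (ICE has an analogous default-servant facility), and the ORB would extract the object id from the object reference instead of it being passed explicitly.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical illustration only: one servant instance services calls for many
// logical "account" objects. The object id that an ORB would normally carry in
// the object reference is passed in explicitly here.
public class AccountServant
{
    // Per-object state lives outside the servant (here a simple in-memory store,
    // in real life typically a database).
    private readonly IDictionary<string, decimal> _balances;

    public AccountServant(IDictionary<string, decimal> balances)
    {
        _balances = balances;
    }

    public decimal GetBalance(string accountId)
    {
        return _balances[accountId];
    }

    public void Deposit(string accountId, decimal amount)
    {
        _balances[accountId] += amount;
    }
}

public static class BankDemo
{
    public static void Main()
    {
        var store = new Dictionary<string, decimal> { { "acct-1", 100m }, { "acct-2", 250m } };
        var servant = new AccountServant(store);          // one servant in memory...

        servant.Deposit("acct-1", 50m);                   // ...serving many account objects
        Console.WriteLine(servant.GetBalance("acct-1"));  // 150
        Console.WriteLine(servant.GetBalance("acct-2"));  // 250
    }
}
```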
I am new to OPC UA and would like to clarify some doubts I have about it, which are as follows:
– In what situations is OPC UA used?
– OPC UA architectures over MQTT.
If there is any document that explains these two points, I would be grateful.
OPC UA is probably the de facto standard for industrial M2M communication, and it is very important in the context of Industrie 4.0.
Let's say you have a piece of industrial machinery (like a PLC) that manages some other devices, like sensors. With OPC UA you can model some data in the PLC (which becomes an OPC UA server), using an information model (object-structured and hierarchical, with concepts similar to UML) built according to the rules defined by the OPC UA standard (https://opcfoundation.org/developer-tools/specifications-unified-architecture/part-3-address-space-model/). So the PLC first gathers data from these sensors using a specific industrial protocol, then models the data it considers relevant in its address space.
You can also build an OPC UA server on the sensors themselves: imagine a temperature or humidity sensor in which you model not only the temperature value but also the manufacturer and the engineering unit (Fahrenheit or Celsius, for instance). You can also add methods to a server and associate specific actions with them, for example turning a specific functionality on or off when certain conditions occur. For all the specifications you can look at https://opcfoundation.org/developer-tools/specifications-unified-architecture, where, after signing up, you can download the detailed specifications. Another good piece of documentation I found is http://documentation.unified-automation.com/uasdkcpp/1.6.1/html/index.html, which explains the main concepts.
Once you have defined your OPC UA servers, each with an information model in its address space, you can start interacting with other industrial machinery in a standardized way. Such machinery could be MES or HMI applications, and they have to be OPC UA clients. They can query the OPC UA server mentioned above: browsing its address space, reading values, calling methods, and monitoring interesting variables or events (by subscribing to them, so that the server sends a notification when a change occurs). The main advantage is that all these operations are performed via standardized messages: if you want to write data you send a WriteRequest message, if you want to read the client sends a ReadRequest, and so on. Since everything is standardized (from data types to the serialization of messages), all clients can understand the structure of OPC UA servers (even if they come from different manufacturers). Without that, every manufacturer could use its own way of defining services or variables, and you would have to build your application (let's say an HMI) to fit that particular vendor's APIs or conventions.
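Purely to illustrate the idea of standardized services against a modeled address space, here is a toy C# sketch. The types below are invented and are not a real OPC UA SDK; in practice you would use an actual stack such as the OPC Foundation's UA-.NETStandard, which implements the real Read/Write/Browse/Subscribe services.

```csharp
using System;
using System.Collections.Generic;

// Conceptual sketch only: made-up types that mimic an OPC UA address space
// (nodes with attributes) accessed via standardized service messages.
public class ReadRequest
{
    public string NodeId { get; set; }
    public string Attribute { get; set; }
}

public class ReadResponse
{
    public object Value { get; set; }
    public DateTime SourceTimestamp { get; set; }
}

public class ToyServer
{
    // NodeId -> (attribute name -> value). The temperature node carries both its
    // current value and its engineering unit, as modeled on the server.
    private readonly Dictionary<string, Dictionary<string, object>> _addressSpace =
        new Dictionary<string, Dictionary<string, object>>
        {
            { "ns=2;s=Temperature", new Dictionary<string, object>
                { { "Value", 21.5 }, { "EngineeringUnit", "Celsius" } } },
            { "ns=2;s=Manufacturer", new Dictionary<string, object>
                { { "Value", "Acme Sensors" } } }
        };

    // Every client, whatever the vendor, uses the same service messages
    // (Read, Write, Browse, ...). Only Read is sketched here.
    public ReadResponse Read(ReadRequest request)
    {
        return new ReadResponse
        {
            Value = _addressSpace[request.NodeId][request.Attribute],
            SourceTimestamp = DateTime.UtcNow
        };
    }
}

public static class OpcUaToyDemo
{
    public static void Main()
    {
        var server = new ToyServer();
        var response = server.Read(new ReadRequest { NodeId = "ns=2;s=Temperature", Attribute = "Value" });
        Console.WriteLine(response.Value); // 21.5
    }
}
```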
Regarding OPC UA over MQTT, you can find some useful information in OPC UA protocol vs MQTT protocol. As I said before, OPC UA has the advantage of defining a structured, standard information model, accessible via standard services, so using MQTT covers only one part of the whole.
Another good reference for understanding information models in an OPC UA server could be OPC Unified Architecture.
I have to integrate various legacy applications with some newly introduced parts that are silos of information and have been built at different times with varying architectures. At times these applications may need to get data from another system, if it exists, and display it to the user within their own screens, based on the business needs.
I was looking to see if it's possible to implement a generic federation engine that abstracts the aggregation of data from various OData endpoints and provides a single version of the truth.
I am not really looking to do ETL here, as that may introduce data-related side effects in terms of staleness, etc.
Can someone share some ideas on how this can be achieved, or point me to an article that shows such a concept?
Regards
Kiran
Officially, the answer is to use either the reflection provider or a custom provider.
Support for multiple data sources (odata)
Allow me to expose entities from multiple sources
To decide between the two approaches, take a look at this article.
If you decide that you need to build a custom provider, the referenced article also contains links to a series of other articles that will help you through the learning process.
Your project seems non-trivial, so in addition I recommend looking at other resources like the WCF Data Services Toolkit to help you along.
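To give a feel for the reflection-provider route, here is a minimal sketch (the entity and class names are mine, not from your domain): a context class exposes IQueryable properties, one per entity set, and WCF Data Services reflects over it to produce a single OData endpoint. How you feed those IQueryables from the underlying endpoints (live calls, caching, and so on) is the part you would design yourself.

```csharp
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;

// Hypothetical entity exposed by the federated endpoint.
[DataServiceKey("Id")]
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string SourceSystem { get; set; }
}

// Reflection-provider context: each public IQueryable property becomes an entity set.
// In a real aggregator these queries would be fed from the underlying OData
// endpoints (or a cache of them) rather than hard-coded data.
public class FederatedContext
{
    public IQueryable<Customer> Customers
    {
        get
        {
            return new[]
            {
                new Customer { Id = 1, Name = "Contoso",  SourceSystem = "LegacyCrm" },
                new Customer { Id = 2, Name = "Fabrikam", SourceSystem = "NewErp" },
            }.AsQueryable();
        }
    }
}

// The actual WCF Data Services endpoint (e.g. hosted as Federated.svc).
public class FederatedDataService : DataService<FederatedContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
```

A custom provider replaces the reflection step with your own metadata and query execution, which buys you more flexibility at the cost of considerably more code.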
By the way, from an architecture standpoint, I believe your idea is sound. Yes, you may have some domain logic behind OData endpoints, but I've always believed this logic should be thin as OData is primarily used as part of data access layers, much like SQL (as opposed to service layers which encapsulate more behavior in the traditional sense). Even if that thin logic requires your aggregator to get a little smart, it's likely that you'll always be able to get away with it using a custom provider.
That being said, if the aggregator itself encapsulates a lot of behavior (as opposed to simply aggregating and re-exposing raw data), you should consider using another protocol that is less data-oriented (but keep using the OData backends in that service). Since domain logic is normally heavily specific, there's very rarely a one-size-fits-all type of protocol, so you'd naturally have to design it yourself.
However, if the aggregated data is exposed mostly as-is or with essentially structural changes (little to no behavior besides assembling the raw data), I think using OData again for that central component is very appropriate.
Obviously, and as you can see in the comments to your question, not everybody would agree with all of this -- so as always, take it with a grain of salt.
I am currently building a lightweight application layer which provides distributed services to applications of a specific type. This layer provides synchronization and data transmission services to applications which use it via an API. Therefore I classify this software as "middleware", since it bridges communication among heterogeneous distributed applications of a specific type. However, my software does not cover data representation. It therefore "only" delivers messages to other applications in a synchronized manner, but does not specify what messages look like or how they should be parsed/read/interpreted. Instead, the developer decides which message format to use, e.g. JSON, XML, Protobuf, etc. The applications are most of the time governed by one developer party. Now, my question is whether this is a severe feature lack for something classified as a "distributed application middleware". The aim of the software is to glue together heterogeneous software applications whose type cannot be compared to conventional software and therefore needs a specific kind of services (which prevents the user from "simply" using CORBA, etc.).
Thanks a lot!
Even though you leave the concrete message format open, you still have specified what formats (JSON, XML) can be used (whether hardcoded or by other means). Therefore in my opinion you have specified data representation.
If your software is modular in adding new formats, then that modularity itself is a feature (and not a lack of a feature).
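As a rough illustration of that modularity (the interface below is invented, not taken from any particular middleware), the layer can stay format-agnostic by shipping opaque byte payloads and letting the application plug in whatever serializer it likes, e.g. JSON via Json.NET:

```csharp
using System.Text;
using Newtonsoft.Json;

// The middleware only ever sees byte[]; the application decides on the format.
public interface IMessageSerializer
{
    byte[] Serialize<T>(T message);
    T Deserialize<T>(byte[] payload);
}

// One possible plug-in: JSON via Json.NET. XML, Protobuf, etc. would simply be
// separate implementations of the same interface.
public class JsonMessageSerializer : IMessageSerializer
{
    public byte[] Serialize<T>(T message)
    {
        return Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(message));
    }

    public T Deserialize<T>(byte[] payload)
    {
        return JsonConvert.DeserializeObject<T>(Encoding.UTF8.GetString(payload));
    }
}
```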
I'm not too big of a fan of direct entity mappers, because I still think that SQL queries are fastest and most optimized when written by hand directly on and for the database (using correct joins, groupings, indexes etc).
On my current project I decided to give BLToolkit a try, and I'm very pleased with its wrapper around ADO.NET and its speed, so I can query the database and get strongly typed C# objects back. I've also written a T4 template that generates stored procedure helpers so I don't have to use magic strings when calling stored procedures, and all my calls use strongly typed parameters.
Basically all my CRUD calls are done via stored procedures, because many of my queries are not simple select statements, and my creates and updates in particular also return results, which is easily done with a stored procedure in a single call. Anyway...
Downside
The biggest drawback of BLToolkit (I'd like everyone evaluating BLToolkit to know this) isn't its capabilities or speed but its very scarce documentation and support, or lack thereof. So the biggest problem with this library is the trial and error needed to get it working. That's why I also don't want to use too many different parts of it, because the more I use, the more problems I have to solve on my own.
Question
What alternatives do I have to BLToolkit that:
support stored procedures that return whatever entities I provide, which are not necessarily the same as DB tables
provide a nice object mapper from data reader to objects
support relations (all of them)
optionally (but desirably) support multiple result sets
don't need any special configuration (I only use a data connection string and nothing else)
Basically it should be very lightweight: essentially just a simple ADO.NET wrapper and object mapper.
And the most important requirement: it is easy to use, well supported, and widely used by the community.
Alternatives (May 2011)
I can see that big guns have converted their access strategies to micro ORM tools. I was playing with the same idea when I evaluated BLToolkit, because it felt bulky (1.5MB) for the functionality I'd use. In the end I decided to write the aforementioned T4 (link in question) to make my life easier when calling stored procedures. But there are still many possibilities inside BLToolkit that I don't use at all or even understand (reasons also pointed out in the question).
The best alternatives are micro ORM tools. Maybe it would be better to call them micro object mappers. They all share the same goals: simplicity and extreme speed. They don't follow the no-SQL-writing paradigm of their big fellow ORMs, so most of the time you have to write (almost) everyday T-SQL to power their requests. They fetch data and map it to objects (and sometimes provide something more - check below).
I would like to point out 3 of them. They're all provided in a single code file and not as a compiled DLL:
Dapper - used by Stack Overflow itself; all it actually does is provide generic extension methods on IDbConnection, which means it supports any backing data store as long as there's a connection class that implements the IDbConnection interface (see the sketch after this list);
uses parametrised SQL
maps to static types as well as dynamic (.net 4+)
supports mapping to multiple objects per result record (as in 1-1 relationships, i.e. Person + Address)
supports multi-resultset object mapping
supports stored procedures
mappings are generated, compiled (to MSIL) and cached - this can also be a downside if you use a huge number of types
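For reference, a minimal Dapper sketch covering the points above (the table, column and procedure names are made up; Dapper itself only needs an IDbConnection):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class Person  { public int Id { get; set; } public string Name { get; set; } public Address Address { get; set; } }
public class Address { public int Id { get; set; } public string City { get; set; } }

public static class DapperDemo
{
    public static IEnumerable<Person> Load(string connectionString)
    {
        using (IDbConnection db = new SqlConnection(connectionString))
        {
            // Plain parametrised SQL mapped to a static type.
            var adults = db.Query<Person>(
                "SELECT Id, Name FROM People WHERE Age >= @MinAge", new { MinAge = 18 });

            // Multi mapping: one result record split into Person + Address (1-1).
            var withAddresses = db.Query<Person, Address, Person>(
                @"SELECT p.Id, p.Name, a.Id, a.City
                  FROM People p JOIN Addresses a ON a.PersonId = p.Id",
                (person, address) => { person.Address = address; return person; },
                splitOn: "Id");

            // Stored procedure call.
            var byProc = db.Query<Person>(
                "GetPeopleByCity", new { City = "London" },
                commandType: CommandType.StoredProcedure);

            return adults.Concat(withAddresses).Concat(byProc).ToList();
        }
    }
}
```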
Massive - written by Rob Connery;
only supports dynamic type mapping (so no .NET 3.5 or older, baby)
is extremely small (few hundreds of lines of code)
provides a DynamicModel class that your entities inherit from, which gives you CRUD functionality or maps from arbitrary bare-metal T-SQL
implicit paging support
supports column name mapping (but you have to do it every time you access data as opposed to declarative attributes)
supports stored procedures by writing direct parametrised TSQL
PetaPoco - inspired by Massive but with a requirement to support older framework versions (see the sketch after this list)
supports strong types as well as dynamic
provides a T4 template to generate your POCOs - you'll end up with classes similar to those of the big fellow ORMs (which means code-first is not supported), but you don't have to use them; you can of course still write your own POCO classes to keep your model lightweight and leave out DB-only information (i.e. timestamps, etc.)
similar to Dapper it also compiles mappings for speed and reuse
supports CRUD operations + IsNew
implicit paging support that returns a special type with page-full of data + all metadata (current page, number of all pages/records)
has extensibility point for various scenarios (logging, type converters etc)
supports declarative metadata (column/table mappings etc)
supports multi object mapping per result record with some automatic relation setting (unlike Dapper where you have to manually connect related objects)
supports stored procedures
has a helper SqlBuilder class for building T-SQL statements more easily
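And a comparable PetaPoco sketch (the connection-string name, table and class names are made up):

```csharp
using System;
using PetaPoco;

public class Article
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public static class PetaPocoDemo
{
    public static void Run()
    {
        // "myConnection" is assumed to be a connection string name in app.config.
        var db = new Database("myConnection");

        // Plain T-SQL mapped to strong types (positional @0 parameter).
        var published = db.Fetch<Article>("SELECT Id, Title FROM Articles WHERE Published = @0", true);

        // Built-in paging: returns the page of items plus page metadata.
        var page = db.Page<Article>(1, 20, "SELECT Id, Title FROM Articles ORDER BY Id");
        Console.WriteLine("pages: " + page.TotalPages + ", items on this page: " + page.Items.Count);

        // CRUD helpers.
        var article = new Article { Title = "Hello" };
        db.Insert("Articles", "Id", article);
        article.Title = "Hello, world";
        db.Update("Articles", "Id", article);
    }
}
```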
Of the three, PetaPoco seems to be the liveliest in terms of development and supports most of these features by taking the best from the other two (and some others).
Of the three, Dapper has the best real-world usage reference, because it's used by one of the highest-traffic sites in the world: Stack Overflow.
They all suffer from the magic-string problem, because you write SQL queries directly into them most of the time. But some of this can be mitigated with T4, so you can have strongly typed calls that provide IntelliSense, compile-time checking and on-the-fly regeneration within Visual Studio.
Downside of dynamic type
I think the biggest downside of dynamic types is maintenance. Imagine your application using dynamic types. Looking at your own code after a while becomes rather problematic, because you don't have any concrete classes to observe or hang on to. As much as dynamic types are a blessing, they're also a curse in the long run.
Does anyone have advice or tips on using a web service as the model in an ASP.NET MVC application? I haven't seen anyone writing about doing this. I'd like to build an MVC app, but not tie it to a specific database, nor limit the database to the single MVC app. I feel a web service (RESTful, most likely ADO.NET Data Services) is the way to go.
How likely, or useful, is it for your MVC app to be decoupled from your database? How often have you seen, in an application's lifetime, a change from SQL Server to Oracle? In the last 10 years of projects I've delivered, it's never happened.
Architectures are like onions, they have layers of abstractions above things they depend on. And if you're going to use an RDBMS for storage, that's at the core of your architecture. Abstracting yourself from the DB so you can swap it around is very much a fallacy.
Now you can decouple your database access from your domain, and the repository pattern is one of the ways to do that. Most mature solutions use an ORM these days, so you may want to have a look at NHibernate if you want a mature technology, or ActiveRecord / linq2sql for a simpler active record pattern on top of your data.
Now that you have your data strategy in place, you have a domain of some sort. When you expose data to your client, you can choose to do so through an MVC pattern, where you'll usually send DTOs generated from your domain for rendering, or you can decide to leverage an architecture style like REST to provide more loosely coupled systems, by providing links and custom representations.
You go from tight coupling to looser coupling as you go towards the external layers of your solution.
If your question however was to build an MVC app on top of a REST architecture or web services, and use that as a model... Why bother? If you're going to have a domain model, why not reuse it in your system and your services where it makes sense?
Generating a UI from an MVC app and generating documents needed for a RESTful architecture are two completely different contexts; basing one on top of the other is just going to cause much more pain than needed. And you're sacrificing performance.
It depends on your exact scenario, but a remote XML-based service as the model in MVC is, from experience, not a good idea; it's probably over-engineering and disregards the need for a domain to start with.
Edit 2010-11-27; clarified my thoughts, which was really needed.
Most often, a web service exposes functionality across different types of applications, not as an abstraction within one single application. You are probably thinking more of a way of encapsulating commands and reads so that they don't interfere with your controller/view programming.
Use a service from a service bus if you're after decoupling, and use an async pattern in your async pages. Have a look at Rhino.ServiceBus, NServiceBus and MassTransit for native .NET implementations, and RabbitMQ for something different: http://blogs.digitar.com/jjww/2009/01/rabbits-and-warrens/.
Edit: I've had some time to try RabbitMQ out in a setup that pushed messages to my service, which in turn pushed updates to the bookkeeping app. RabbitMQ is a message broker, a.k.a. a MOM (message-oriented middleware), and you could use it to send messages to your application server.
You can also simply provide service interfaces. Read Eric Evans's Domain-Driven Design for a more detailed description.
RESTful service interfaces deal a lot with data, and more specifically with addressable resources. They can greatly simplify your programming model and allow great control over output through the HTTP protocol. WCF's upcoming programming model uses true REST as defined in the original thesis, where each document should to some extent provide URIs for continued navigation. Have a look at this.
(In my first version of this post, I lamented REST for being 'slow', whatever that means.) REST-based APIs are also pretty much what CouchDB and Riak use.
ADO.NET is rather crap (!) [N+1 problems with lazy collections because of coding to implementation, data-access leakage - you always need your DB context where your query code is, etc.] in comparison to, for example, LightSpeed (commercial) or NHibernate. Spring.NET also allows you to wrap service interfaces in its container with a web service facade, but (not having browsed it for a while) I think it's a bit too XML-heavy in its configuration.
Edit 1: By ADO.NET here I mean the default "best practice" of DataSets, DataAdapters and iterating over lots of rows from a DataReader; it breeds rather ugly and hard-to-debug code. The N+1 stuff, yes, that is about the Entity Framework.
(Edit 2: EntityFramework doesn't impress me either!)
Edit 1: Create your domain layer in a separate assembly [a.k.a. Core] and provide all domain and application services there, then reference this assembly from your specific MVC application. Wrap data access in a DAO/repository behind an interface in your core assembly, which your data assembly then references and implements. Wire up the interface and implementation with IoC. You can even program something for dynamic service discovery with the above-mentioned service buses to resolve the interfaces. WCF uses interfaces like this, and so do most of the above service buses; you can provide a sub-component resolver in your IoC container to do this automatically.
Edit 2:
A great combo for the above would be CQRS + Event Sourcing + Reactive Extensions. Your write model would take commands, your domain model would decide whether to accept them, and it would push events into the Reactive Extensions pipeline, perhaps also over RabbitMQ, which your read model would consume.
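A very rough Reactive Extensions sketch of that pipeline (the event type and the projection are invented; a real system would also persist the events and possibly route them over RabbitMQ):

```csharp
using System;
using System.Reactive.Subjects;

// Hypothetical domain event produced by the write model.
public class AccountCredited
{
    public Guid AccountId { get; set; }
    public decimal Amount { get; set; }
}

public static class CqrsSketch
{
    public static void Main()
    {
        // The write model pushes accepted events into this stream.
        var events = new Subject<AccountCredited>();

        // The read model subscribes and updates its own denormalised view.
        events.Subscribe(e =>
            Console.WriteLine("Projecting: account " + e.AccountId + " credited " + e.Amount));

        // A command the domain model accepted results in an event being published.
        events.OnNext(new AccountCredited { AccountId = Guid.NewGuid(), Amount = 100m });
    }
}
```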
Update 2010-01-02 (edit 1)
The gist of my idea has been codified by something called MindTouch Dream. They have made a screencast in which they treat almost all parts of a web application as (web) services, which are also exposed via REST.
They have created a highly parallel framework using co-routines to handle this, including their own elastic thread pool.
To all the nay-sayers in this question, in ur face :p! Listen to this screen-cast, especially at 12 minutes.
The actual framework is here.
If you are into this sort of programming, have a look at how monads work and their implementations in C#. You can also read up on CoRoutines.
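If you want to play with the coroutine idea in plain C#, a minimal sketch is an iterator whose steps a small driver pumps one at a time; this is the same basic trick the frameworks mentioned above build on, although their APIs are richer and asynchronous:

```csharp
using System;
using System.Collections.Generic;

public static class CoroutineSketch
{
    // A "coroutine": each yield suspends the method and hands control back to
    // whoever is pumping the iterator, which decides when to resume it.
    private static IEnumerable<string> FetchAndRender()
    {
        yield return "fetch customer from service";
        yield return "fetch orders from service";
        yield return "render view";
    }

    public static void Main()
    {
        // A trivial driver: a real one would resume each step only when its
        // asynchronous operation (web request, DB call, ...) completes.
        IEnumerator<string> coroutine = FetchAndRender().GetEnumerator();
        while (coroutine.MoveNext())
            Console.WriteLine("completed step: " + coroutine.Current);
    }
}
```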
Happy new year!
Update 2010-11-27 (edit 2)
It turned out coroutines got productized with the Task Parallel Library from Microsoft. Tasks now implement the same features, since Task implements IAsyncResult. Caliburn is a cool framework that uses them.
Reactive Extensions took monad comprehensions to the next level of asynchronicity.
The ALT.Net world seems to be moving in the direction I talked about when I wrote this answer the first time, albeit with new types of architectures I knew little of.
You should define your models in a data-access-agnostic way, e.g. using the Repository pattern. Then you can create concrete implementations backed by specific data access technologies (web services, SQL, etc.).
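A minimal sketch of that idea (the interface and entity are invented for illustration): the MVC controller depends only on the repository interface, and concrete implementations can be backed by a web service, SQL, or anything else.

```csharp
using System.Collections.Generic;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The model/domain layer only knows about this abstraction.
public interface IProductRepository
{
    Product GetById(int id);
    IEnumerable<Product> GetAll();
}

// One concrete implementation could call an OData / ADO.NET Data Services endpoint...
public class ODataProductRepository : IProductRepository
{
    public Product GetById(int id) { /* call the remote OData service here */ return null; }
    public IEnumerable<Product> GetAll() { /* query the OData feed here */ return new List<Product>(); }
}

// ...while another hits SQL Server directly; the MVC controllers don't care which.
public class SqlProductRepository : IProductRepository
{
    public Product GetById(int id) { /* ADO.NET / ORM query here */ return null; }
    public IEnumerable<Product> GetAll() { return new List<Product>(); }
}
```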
It really depends on the size of this MVC project. I would say keep the UI and domain in the same running environment if the website is going to be used by a small number of users (< 5,000).
On the other hand, if you are planning a site that is going to be accessed by millions, you have to think distributed, and that means you need to build your website in a way that can scale up/out. You might need extra servers (web, application and database).
For this to work nicely, you need to decouple your MVC UI site from the application. The application layer would usually contain your domain model and might be exposed through WCF or a service bus. I would prefer a service bus because it is more reliable and can use persistent queues like MSMQ.
I hope this helps