I am new to OpenDaylight and I am seeking help with the following:
How can I make two different applications communicate with each other?
Can I have something like a communication bus for my selected set of applications so that they can transfer data to each other? Or do I need to have a single application with submodules (with different features) to achieve the same task, i.e., feature-to-feature communication?
The main feature that sets OpenDaylight (ODL) apart from other SDN controllers is the Model-Driven Service Abstraction Layer (MD-SAL), which provides a framework to share structured/modeled data and send notifications between ODL applications, among other things.
If you want to transfer data between ODL applications, you first need to model it using YANG, and include the YANG model in one of the applications.
To take advantage of the features offered by MD-SAL, please take a look at the official documentation. Once you understand the architecture, you should look at the source code of existing applications to see examples of how to take advantage of the power of MD-SAL.
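To give a concrete flavor of what this looks like, here is a minimal, hedged sketch of one application writing modeled data into the MD-SAL datastore, where another application can read it or listen for changes. The GreetingRegistry binding classes are hypothetical stand-ins for whatever your YANG model generates; the DataBroker usage follows the controller's binding-aware API.

```java
// Minimal sketch: application A writes modeled data into the MD-SAL datastore
// so that application B can read it or listen for changes.
// GreetingRegistry / GreetingRegistryBuilder are HYPOTHETICAL classes that the
// YANG tooling would generate from your model; all names are illustrative.
import org.opendaylight.controller.md.sal.binding.api.DataBroker;
import org.opendaylight.controller.md.sal.binding.api.WriteTransaction;
import org.opendaylight.controller.md.sal.common.api.data.LogicalDatastoreType;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;

public class GreetingWriter {
    private final DataBroker dataBroker; // injected, e.g. via Blueprint

    public GreetingWriter(DataBroker dataBroker) {
        this.dataBroker = dataBroker;
    }

    public void publishGreeting(String text) {
        InstanceIdentifier<GreetingRegistry> path =
                InstanceIdentifier.create(GreetingRegistry.class);
        GreetingRegistry data =
                new GreetingRegistryBuilder().setGreeting(text).build();

        WriteTransaction tx = dataBroker.newWriteOnlyTransaction();
        tx.put(LogicalDatastoreType.OPERATIONAL, path, data);
        tx.submit(); // the other application sees the update via the shared datastore
    }
}
```

On the consuming side, the second application registers a data change listener (or issues a read transaction) against the same path; MD-SAL's NotificationPublishService covers the fire-and-forget notification case.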
Migrating from a monolith to a microservice architecture with a single API gateway to access multiple services, i.e., cars, shoes, mobiles, etc. I'm using .NET 6, Docker, Docker Compose, and Ocelot for my project. I'd highly appreciate your feedback on my question below, based on two scenarios.
Scenario 1
Number of solutions [ApiGateway.sln, Cars.sln, Shoes.sln, Mobiles.sln, ...]
Docker Container [ApiGateway, Cars, Shoes, Mobiles, ...]
Docker Sub Containers for Cars [Hyundai, Honda, ...]
Ocelot used for [ApiGateway, Cars, Shoes, Mobiles]
Sub-ApiGateways: used for all services. The MasterApiGateway will interact with the SubApiGateway of each service.
Details: For instance, a call for getting all Hyundai cars is made, so the MasterApiGateway calls the Cars service. The Cars service then uses its own API gateway, configured using Ocelot, to call the required project, i.e., the Hyundai.csproj methods.
Yes, this can be simplified by removing Ocelot from Cars and converting the projects into methods.
Scenario 2
Number of solutions [ApiGateway.sln, Services.sln]
Docker Container [ApiGateway, Services]
Docker Sub Containers for Services [Cars, Mobiles, Shoes, ...]
Ocelot used for [ApiGateway]
Details: This is the more mainstream approach, but what if each service, e.g. Cars, is a big project in itself? For that reason I've tried to separate the services, i.e., cars.service and mobiles.service, hosted on different ports. Again, what if a service has a huge module, i.e., cars.services.honda with over 1000 methods? For that reason I've created sub-projects within Cars, again hosted on different ports. However, I am trying to encapsulate these sub-projects as a single service, i.e., for Cars only port 5000 will be exposed to the MasterApiGateway.
Please suggest the best way to achieve this. Again, each service and each sub-project within it is a huge project, so having all of these in one solution is something I'm trying to avoid. Thank you for your feedback.
This is a design problem; it is highly abstract and depends on business requirements, so there is no absolute solution.
The scenario where you have a car service with an API per car type may look like the proper one, BUT as you said, each one of them is huge. THIS IS MY OPINION AND NOT A SOLUTION:
If it is just HUGE in data, don't bother yourself; it's better to go for one car service.
If all types of cars share the same sort of functionality (methods, processes, etc.), then one service is OK.
If each car type has its own methods and processes (not just getting data), then you have complexity in the business logic; go for a service per car type, or for one main car service with the shared functionality, supported by car-type-specific services containing the functionality specific to each type. Here the car service may play the role of an aggregator service.
If the car service becomes so huge in code size that maintenance requires more than 5 colleagues (the number may vary depending on organization size, etc.), then it should be broken into pieces.
Also look at the ubiquitous language in Domain-Driven Design. At the least, it helps make your architecture more appropriate through intensive communication with domain experts.
Your problem is the most challenging part of microservices (true microservices), and it is beyond my experience (I am still studying microservice architecture, and I keep finding huge mistakes in my previous work). So please discuss and study further, and don't rely on just what I said.
These two articles are very useful:
decompose-by-subdomain
decompose-by-business-capability
The first question you should ask yourself is why do you need microservices, and usually it's better to start with a modular monolith and then break out a service at a time when needed...
You really need to have a clear understanding of the reason why you do it and not just for the fun of creating a solution like this.
I agree with what Rouzbeh says about Domain-Driven Design: start there, and find your true bounded contexts using the ubiquitous language as a guide.
I have to integrate various legacy applications, which are silos of information and have been built at different times with varying architectures, with some newly introduced parts. At times these applications may need to get data from another system, if it exists, and display it to the user within their own screens based on the business needs.
I was looking to see if it's possible to implement a generic federation engine that abstracts the aggregation of data from various OData endpoints and provides a single version of the truth.
A simplistic example could be as below.
I am not really looking to do ETL here, as that may introduce data-related side effects in terms of staleness, etc.
Can someone share some ideas on how this can be achieved, or point me to an article that demonstrates such a concept?
Officially, the answer is to use either the reflection provider or a custom provider.
Support for multiple data sources (odata)
Allow me to expose entities from multiple sources
To decide between the two approaches, take a look at this article.
If you decide that you need to build a custom provider, the referenced article also contains links to a series of other articles that will help you through the learning process.
Your project seems non-trivial, so in addition I recommend looking at other resources like the WCF Data Services Toolkit to help you along.
By the way, from an architecture standpoint, I believe your idea is sound. Yes, you may have some domain logic behind OData endpoints, but I've always believed this logic should be thin as OData is primarily used as part of data access layers, much like SQL (as opposed to service layers which encapsulate more behavior in the traditional sense). Even if that thin logic requires your aggregator to get a little smart, it's likely that you'll always be able to get away with it using a custom provider.
That being said, if the aggregator itself encapsulates a lot of behavior (as opposed to simply aggregating and re-exposing raw data), you should consider using another protocol that is less data-oriented (but keep using the OData backends in that service). Since domain logic is normally heavily specific, there's very rarely a one-size-fits-all type of protocol, so you'd naturally have to design it yourself.
However, if the aggregated data is exposed mostly as-is or with essentially structural changes (little to no behavior besides assembling the raw data), I think using OData again for that central component is very appropriate.
Obviously, and as you can see in the comments to your question, not everybody would agree with all of this -- so as always, take it with a grain of salt.
We have an ERP system written in Java that we will adapt to a 3-tier architecture, and we want to add transaction controls (JTA).
We read that the best way to analyze where to place the controls is to create a graph of the system scenarios using BPM and then add the controls to the graph.
The web gives us two ways to make the graph:
1. By way of use (scenarios) of each module: adding to the graph the different routes that can be taken through a module. For example, in the invoice module, the different ways to complete an invoice (with detail, without detail, etc.).
2. By relation between the modules: adding to the graph how control passes from module to module. For example, how the invoice passes to the client account.
Our questions are:
Which is the best way?
Is there another way to do that?
Definitely, using a BPM solution like jBPM will help you define your business scenarios and discover the interactions between the different departments and modules in your company. If you want to use BPM, there will be some things to learn; I would suggest you take a look at BPM solutions and see if they can help with your specific implementation.
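As a rough illustration, starting one of the modeled scenarios from Java with the jBPM 6 / KIE API looks like the sketch below; the process id and parameter names are made up for the example:

```java
import java.util.HashMap;
import java.util.Map;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;

public class InvoiceProcessStarter {
    public static void main(String[] args) {
        // Load the process definitions (BPMN files) packaged on the classpath.
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession();

        // "withDetail" models one of the invoice scenario variants (hypothetical).
        Map<String, Object> params = new HashMap<>();
        params.put("withDetail", Boolean.TRUE);

        // "com.example.invoice" is an illustrative process id, not a real one.
        ProcessInstance instance = session.startProcess("com.example.invoice", params);
        System.out.println("Started process, state = " + instance.getState());

        session.dispose();
    }
}
```

Once the scenarios are explicit in the process graph, the boundaries between nodes become natural candidates for attaching your transaction controls.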
This is a general question about design. What's the best way to communicate between your business layer and presentation layer? We currently have an object that gets passed into our business layer; the services read the info from the object and set the result back into it. When the services are finished, we have an object populated with the results from the business layer, and the UI can display according to those results.
Is this the best approach? What other approaches are out there?
The Domain Driven Design books (the quick version is freely available here) can give you insights into this.
In a nutshell, they suggest the following approach: the model objects traverse from the model tier to the view tier seamlessly (this can be tricky if you are using statically typed languages or different languages on client/server, but it is trivial in dynamic ones). Also, services should only be used to perform actions that do not belong to the model objects themselves (or when you have an action that involves lots of model objects).
Also, business logic should be put in the model tier (entities, services, value objects), in order to prevent the famous anemic domain model anti-pattern.
This is another approach. Whether it suits you depends a lot on the team, how much code has already been written, how much test coverage you have, how long the project will run, whether your team is agile, and so on. Domain Driven Design Quickly discusses this even further, and any decision will be far less risky if you at least skim it first (getting the original book by Eric Evans will help if you choose to delve deeper).
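To make the "logic on the model, services only for cross-object actions" point concrete, here is a small hedged sketch in Java; all names are illustrative:

```java
// Behavior lives on the model object, not in a service.
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

class Invoice {
    private final List<LineItem> items = new ArrayList<>();
    private boolean closed;

    void addItem(LineItem item) {
        if (closed) throw new IllegalStateException("invoice already closed");
        items.add(item);
    }

    BigDecimal total() { // domain logic on the entity, not in a service
        return items.stream().map(LineItem::price)
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    void close() { closed = true; }
}

record LineItem(String description, BigDecimal price) {}

// A service is reserved for operations spanning several model objects,
// e.g. moving an item between two invoices.
class InvoiceTransferService {
    void transfer(Invoice from, Invoice to, LineItem item) {
        // coordination logic that belongs to neither invoice alone
        to.addItem(item);
    }
}
```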
We use a listener pattern, and have events in the business layer send information to the presentation layer.
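A hedged sketch of that listener idea in Java; the event shape and names are illustrative:

```java
// The presentation layer subscribes; the business layer fires events.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface OrderListener {
    void orderProcessed(String orderId, boolean success);
}

class OrderService { // business layer
    private final List<OrderListener> listeners = new CopyOnWriteArrayList<>();

    void addListener(OrderListener l) { listeners.add(l); }

    void process(String orderId) {
        boolean success = true; // ... real business logic here ...
        for (OrderListener l : listeners) {
            l.orderProcessed(orderId, success); // push the result to the UI
        }
    }
}

// Presentation layer registers:
//   service.addListener((id, ok) -> view.show(id, ok));
```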
It depends on your architecture.
Some people structure their code all in the same exe or dll and follow a standard n-tier architecture.
Others might split it out so that their services are all web services instead of just standard classes. The benefit of this is reusable business logic installed in one place within your physical infrastructure, so a single change applies across all applications.
Software as a service and cloud computing are becoming the platforms things are moving towards. Amazon's Elastic Compute Cloud, Microsoft's Azure, and other cloud providers all offer numerous services which may affect your architectural decisions.
One I'm about to use is
Silverlight UI
WCF Services - business logic here
NHibernate data access
Sql Server Database
We're only going to allow the layers of the application to talk via interfaces, so that we can move up to Azure cloud services once they become more mature.
Does anyone have advice or tips on using a web service as the model in an ASP.Net MVC application? I haven't seen anyone writing about doing this. I'd like to build an MVC app, but not tie it to using a specific database, nor limit the database to the single MVC app. I feel a web service (RESTful, most likely ADO.Net Data Services) is the way to go.
How likely, or useful, is it for your MVC app to be decoupled from your database? How often have you seen, in your application lifetime, a change from SQL Server to Oracle? From the last 10 years of projects I've delivered, it's never happened.
Architectures are like onions, they have layers of abstractions above things they depend on. And if you're going to use an RDBMS for storage, that's at the core of your architecture. Abstracting yourself from the DB so you can swap it around is very much a fallacy.
Now you can decouple your database access from your domain, and the repository pattern is one of the ways to do that. Most mature solutions use an ORM these days, so you may want to have a look at NHibernate if you want a mature technology, or ActiveRecord / linq2sql for a simpler active record pattern on top of your data.
Now that you have your data strategy in place, you have a domain of some sort. When you expose data to your client, you can choose to do so through an MVC pattern, where you'll usually send DTOs generated from your domain for rendering, or you can decide to leverage an architecture style like REST to provide more loosely coupled systems, by providing links and custom representations.
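For the MVC path, the DTO step can be as simple as the following Java sketch (names are illustrative): the controller maps the domain object to a flat structure that exposes only what the view needs.

```java
import java.util.List;

// Domain object (illustrative).
record Customer(String firstName, String lastName, List<String> openOrderIds) {}

// Flat DTO handed to the view; it carries no domain behavior.
record CustomerDto(String displayName, int openOrders) {
    static CustomerDto from(Customer c) {
        return new CustomerDto(c.firstName() + " " + c.lastName(),
                c.openOrderIds().size());
    }
}
```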
You go from tight coupling to looser coupling as you go towards the external layers of your solution.
If your question however was to build an MVC app on top of a REST architecture or web services, and use that as a model... Why bother? If you're going to have a domain model, why not reuse it in your system and your services where it makes sense?
Generating a UI from an MVC app and generating documents needed for a RESTful architecture are two completely different contexts; basing one on top of the other is just going to cause much more pain than needed. And you're sacrificing performance.
It depends on your exact scenario, but a remote XML-based service as the model in MVC is, from experience, not a good idea; it's probably over-engineering, and it disregards the need for a domain to start with.
Edit 2010-11-27; clarified my thoughts, which was really needed.
Most often, a web service exposes functionality across different types of applications, not as an abstraction within one single application. You are probably thinking more of a way of encapsulating commands and reads so that they don't interfere with your controller/view programming.
Use a service from a service bus if you're after the decoupling, and use an async pattern in your async pages. See Rhino.ServiceBus, NServiceBus, and MassTransit for .NET-native implementations, and RabbitMQ for something different: http://blogs.digitar.com/jjww/2009/01/rabbits-and-warrens/.
Edit: I've had some time to try RabbitMQ out, pushing messages to my service, which in turn pushed updates to the bookkeeping app. RabbitMQ is a message broker, a.k.a. a MOM (message-oriented middleware), and you could use it to send messages to your application server.
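For reference, publishing a message with the official RabbitMQ Java client (com.rabbitmq:amqp-client) is as small as the following sketch; the queue name and payload are illustrative:

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BookkeepingPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a broker running locally

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // durable = true, exclusive = false, autoDelete = false
            channel.queueDeclare("bookkeeping.updates", true, false, false, null);
            channel.basicPublish("", "bookkeeping.updates", null,
                    "order-42 processed".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

A consumer on the application-server side subscribes to the same queue and applies the updates at its own pace, which is exactly the decoupling the async pattern is after.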
You can also simply provide service interfaces. Read Eric Evans's Domain-Driven Design for a more detailed description.
REST-ful service interfaces deal a lot with data, and more specifically with addressable resources. REST can greatly simplify your programming model and allows great control over output through the HTTP protocol. WCF's upcoming programming model uses true REST as defined in the original thesis, where each document should, to some extent, provide URIs for continued navigation. Have a look at this.
(In my first version of this post, I lamented REST for being 'slow', whatever that means.) REST-based APIs are also pretty much what CouchDB and Riak use.
ADO.NET is rather crap (!) [N+1 problems with lazy collections because of coding to implementations, data-access leakage - you always need your DB context where your query code is, etc.] in comparison to, for example, LightSpeed (commercial) or NHibernate. Spring.NET also allows you to wrap service interfaces in their container with a web-service facade, but (without having browsed it for a while) I think it's a bit too XML-heavy in its configuration.
Edit 1: By ADO.NET here I mean the default "best practice" with DataSets, DataAdapter, and iterating lots of rows from a DataReader; it breeds rather ugly and hard-to-debug code. The N+1 stuff, yes, that is about Entity Framework.
(Edit 2: Entity Framework doesn't impress me either!)
Edit 1: Create your domain layer in a separate assembly [a.k.a. Core] and provide all domain and application services there, then import this assembly from your specific MVC application. Wrap data access in a DAO/repository behind an interface in your core assembly, which your data assembly then references and implements. Wire up the interface and implementation with IoC. You can even program something for dynamic service discovery with the above-mentioned service buses to resolve the interfaces. WCF uses interfaces like this, and so do most of the above service buses; you can provide a sub-component resolver in your IoC container to do this automatically.
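A hedged sketch of that wiring in Java, using Google Guice as a stand-in for whatever IoC container you pick; all names are illustrative:

```java
// The repository interface lives in the core module, the JDBC implementation
// in the data module, and the IoC container binds them together.
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

interface CustomerRepository {          // core module
    String findNameById(long id);
}

class JdbcCustomerRepository implements CustomerRepository { // data module
    @Override
    public String findNameById(long id) {
        return "customer-" + id;        // a real version would query the database
    }
}

public class Wiring {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                bind(CustomerRepository.class).to(JdbcCustomerRepository.class);
            }
        });
        CustomerRepository repo = injector.getInstance(CustomerRepository.class);
        System.out.println(repo.findNameById(42));
    }
}
```

Swapping the data technology then means binding a different implementation; the core assembly and the MVC application never change.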
Edit 2:
A great combo for the above would be CQRS + Event Sourcing + Reactive Extensions. Your write model would take commands, your domain model would decide whether to accept them, and it would push events to the Reactive Extensions pipeline, perhaps also over RabbitMQ, which your read model would consume.
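Stripped of any framework, the shape of that combo is roughly the following Java sketch (names illustrative): a command comes in, the domain decides whether to accept it, and an event flows to whoever maintains the read side.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

record RenameCustomer(long id, String newName) {}   // command
record CustomerRenamed(long id, String newName) {}  // event

class WriteModel {
    private final List<Consumer<CustomerRenamed>> subscribers = new ArrayList<>();

    void subscribe(Consumer<CustomerRenamed> s) { subscribers.add(s); }

    void handle(RenameCustomer cmd) {
        if (cmd.newName().isBlank()) return;      // the domain decides whether to accept
        CustomerRenamed evt = new CustomerRenamed(cmd.id(), cmd.newName());
        subscribers.forEach(s -> s.accept(evt));  // push to the read side (or a broker)
    }
}

// The read model subscribes and updates its own query-optimized store:
//   writeModel.subscribe(evt -> readDb.put(evt.id(), evt.newName()));
```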
Update 2010-01-02 (edit 1)
The gist of my idea has been codified by something called MindTouch Dream. They have made a screencast in which they treat almost all parts of a web application as (web) services, which are also exposed with REST.
They have created a highly parallel framework using co-routines to handle this, including their own elastic thread pool.
To all the naysayers on this question: in your face :p! Listen to this screencast, especially at 12 minutes.
The actual framework is here.
If you are into this sort of programming, have a look at how monads work and their implementations in C#. You can also read up on coroutines.
Happy new year!
Update 2010-11-27 (edit 2)
It turned out coroutines got productized in the Task Parallel Library from Microsoft. Task now implements the same features, as it implements IAsyncResult. Caliburn is a cool framework that uses them.
Reactive Extensions took monad comprehensions to the next level of asynchronicity.
The ALT.Net world seems to be moving in the direction I talked about when I wrote this answer the first time, albeit with new types of architectures I knew little of.
You should define your models in a data-access-agnostic way, e.g. using the Repository pattern. Then you can create concrete implementations backed by specific data-access technologies (web services, SQL, etc.).
It really depends on the size of this MVC project. I would say keep the UI and domain in the same running environment if the website is going to be used by a small number of users (< 5000).
On the other hand, if you are planning a site that is going to be accessed by millions, you have to think distributed, and that means you need to build your website in a way that can scale up/out. That means you might need extra servers (web, application, and database).
For this to work nicely, you need to decouple your MVC UI site from the application. The application layer would usually contain your domain model and might be exposed through WCF or a service bus. I would prefer a service bus because it is more reliable and can use persistent queues like MSMQ.
I hope this helps.