What is a real-life example of CORBA?

What is an example of a situation where CORBA would be used? Is it just a matter of using an interface language (e.g. Java) to 'talk' to all applications?

CORBA might be used to build a language-independent, OS-independent distributed system. For example, developers writing C++ on Linux and developers writing Java on Windows could build a common distributed system together. IDL describes the interfaces that bind the two implementations over a common substrate (CORBA).
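To make that concrete, here's a minimal sketch - all the names are made up, and on Java 11+ you'd need a standalone ORB such as JacORB, since org.omg.CORBA left the JDK. The same IDL, fed through a C++ ORB's compiler, yields the C++ side:

    // Hypothetical IDL shared by both sides:
    //   interface Greeter {
    //       string greet(in string name);
    //   };
    // Running idlj (Java) or tao_idl (C++) on it generates the stubs/skeletons.
    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NamingContextExt;
    import org.omg.CosNaming.NamingContextExtHelper;

    public class GreeterClient {
        public static void main(String[] args) throws Exception {
            // Bootstrap the ORB and find the remote object via the Naming Service.
            ORB orb = ORB.init(args, null);
            NamingContextExt naming = NamingContextExtHelper.narrow(
                    orb.resolve_initial_references("NameService"));
            // Greeter and GreeterHelper are generated from the IDL; narrow()
            // downcasts the untyped CORBA reference to the typed stub.
            Greeter greeter = GreeterHelper.narrow(naming.resolve_str("Greeter"));
            System.out.println(greeter.greet("world")); // remote call, perhaps to a C++ server
        }
    }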
CORBA is also useful when building a plain old distributed object system - it has a rich set of services defined and is generally very well thought out. These days, however - depending on the language - many folks have opted for either simpler RPC mechanisms (e.g., RMI, protocol buffers) or message-based protocols (e.g., HTTP) for building distributed systems, so it's not as common. CORBA also suffered from design-by-committee (especially on things like security).
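For comparison, here's a minimal Java RMI sketch (made-up names) of the "simpler" end of the spectrum - no IDL, much less ceremony, but it only works JVM-to-JVM:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // The remote interface plays the role the IDL file plays in CORBA,
    // but it is plain Java and only other JVMs can call it.
    interface Hello extends Remote {
        String sayHello(String name) throws RemoteException;
    }

    class HelloServer {
        static class HelloImpl implements Hello {
            public String sayHello(String name) { return "hello " + name; }
        }

        public static void main(String[] args) throws Exception {
            // Export the object and register it so clients can look it up by name.
            Hello stub = (Hello) UnicastRemoteObject.exportObject(new HelloImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("Hello", stub);
            System.out.println("Hello server ready");
        }
    }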
More info:
http://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture

You will find a list of real-life CORBA projects on the website below.
http://www.cs.wustl.edu/~schmidt/TAO-users.html
TAO is one of the most popular C++ CORBA implementations available today. The project is pretty active.

CORBA technology vendors killed each other through incompatible and bureaucratic implementations. Today, you can safely consider CORBA a legacy technology; that is, use it if you have to deal with components that already expose themselves through CORBA. Otherwise, stick to modern RPC/distribution standards like SOAP or, better yet, REST/JSON.
Sorry. To answer your question: CORBA was intended to be what SOAP, REST, and others are today. Real-life examples of applications of the latter are examples of things attempted with the former.

Is there any plan to replace the DI implementation in Guice with dagger2?

I like dagger2 a lot and want to use it in my new project. The only gotcha is that with dagger2 we still have to write some boilerplate code, and it's missing support for CDI.
Since Google is developing and maintaining dagger2 and also using it for their Android development, I am wondering if they are thinking of replacing the DI implementation in Guice with dagger2, which is my first question.
If they are, then I can start using Guice, expecting that with some future update I will get the goodness of Dagger.
But if they are not, is there a way that I can use both in the same project, where Guice can be limited to CDI?
I'm not a Dagger expert, but I'll try to answer your question... (and I hope to do it well)
I am wondering if they are thinking of replacing the DI implementation in Guice with dagger2,
Nope. There is no good reason to do it. Dagger and Guice take totally different approaches to the dependency injection concept: the former uses code generation, the latter runtime reflection.
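A tiny Dagger sketch (illustrative names) shows what that means in practice: the component implementation is generated by an annotation processor, so wiring mistakes surface as compile errors instead of runtime failures:

    import dagger.Component;
    import dagger.Module;
    import dagger.Provides;
    import javax.inject.Singleton;

    @Module
    class ClockModule {
        // Tells Dagger how to provide a java.time.Clock.
        @Provides @Singleton
        static java.time.Clock provideClock() {
            return java.time.Clock.systemUTC();
        }
    }

    @Singleton
    @Component(modules = ClockModule.class)
    interface AppComponent {
        java.time.Clock clock();
    }

    class Main {
        public static void main(String[] args) {
            // DaggerAppComponent is generated by Dagger's annotation processor
            // at compile time; no classpath scanning or reflection happens here.
            AppComponent component = DaggerAppComponent.create();
            System.out.println(component.clock().instant());
        }
    }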
(...) is there a way that I can use both in the same project where guice can be limited to CDI?
I don't think it's a good idea to mix CDI, Dagger, and Guice in the same project. Besides, CDI is only a specification, not an actual implementation like Weld or OpenWebBeans - so I guess you meant to say "DI"?
Anyway: there is a dagger-adapter extension which allows using Dagger 2 modules with Guice (via DaggerAdapter), if you really want to mix Dagger with Guice 4.
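Roughly like this (a sketch with made-up names; assumes the dagger-adapter extension is on the classpath):

    import com.google.inject.Guice;
    import com.google.inject.Injector;
    import com.google.inject.daggeradapter.DaggerAdapter;
    import dagger.Module;
    import dagger.Provides;

    // An ordinary Dagger module...
    @Module
    class GreetingModule {
        @Provides
        static String provideGreeting() {
            return "hello from a Dagger module";
        }
    }

    class MixedBootstrap {
        public static void main(String[] args) {
            // ...consumed by a Guice injector: DaggerAdapter wraps the Dagger
            // module so its @Provides methods become Guice bindings, and it can
            // sit alongside ordinary Guice modules in the same injector.
            Injector injector = Guice.createInjector(DaggerAdapter.from(new GreetingModule()));
            System.out.println(injector.getInstance(String.class));
        }
    }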
By the way, I would like to give you an idea of what Dagger is and what it never will be. Below is a quote from Christian Gruber (who worked on Dagger) on this subject:
Guice will always have a superset of features compared to Dagger, though we do have projects using Dagger on the server and in stand-alone java apps. But Dagger is not as evolved in terms of the surrounding "scaffolding" code (servlet support, etc.) as Guice, and won’t be for quite some time. Additionally, some teams will need or want some advanced Guice features that will never make it in to Dagger.
You may ask what these "advanced features" are. One example is AOP support, such as intercepting methods, which can be crucial for many developers.
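For example, a Guice method interceptor looks roughly like this (a sketch; @Timed is a made-up marker annotation):

    import com.google.inject.AbstractModule;
    import com.google.inject.matcher.Matchers;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import org.aopalliance.intercept.MethodInterceptor;
    import org.aopalliance.intercept.MethodInvocation;

    // Made-up marker annotation; Guice matchers need RUNTIME retention.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Timed {}

    class TimingModule extends AbstractModule {
        @Override
        protected void configure() {
            // Intercept every method annotated with @Timed on any class.
            bindInterceptor(Matchers.any(), Matchers.annotatedWith(Timed.class),
                    new MethodInterceptor() {
                        @Override
                        public Object invoke(MethodInvocation invocation) throws Throwable {
                            long start = System.nanoTime();
                            try {
                                return invocation.proceed(); // run the real method
                            } finally {
                                System.out.println(invocation.getMethod().getName()
                                        + " took " + (System.nanoTime() - start) + "ns");
                            }
                        }
                    });
        }
    }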
You can read the whole discussion (February 2014) which is available here.
As someone working on Java application framework development at Google, I can assure you that Google has large important projects built both with Guice and with Dagger, and both DI systems will continue to be used and developed for the foreseeable future.
My personal idea (which is not an official Google plan or statement) is that over time we will build both more powerful abstractions on top of Dagger (likely in add-on frameworks and/or libraries) so that Dagger continues to become suitable for a larger and larger set of applications, and also more powerful tooling around Guice to make the Guice developer experience become more and more comparable to the Dagger developer experience, at least for a subset of Guice applications that are doing "normal" things.
Both Dagger and Guice are useful tools, each with a different set of trade-offs and a different target audience. Using both in the same project is something that should be possible, although that isn't really the ideal solution because then you can't fully take advantage of the strengths of either of them. But better interoperability is a goal, and the Guice and Dagger teams regularly communicate about how to standardize and coordinate efforts.
Stumbled upon this after having issues with Guice and Java 11. As we barely use Guice anyway, I intend to rip it out in favor of Dagger for now. Guice is giving me a hugely complicated ASM-based exception that is buried, hard to root-cause, and apparently not addressed by the framework. Having stepped through the Guice code trying to figure this out over a week or so, I also find the "scaffolding" to be too much for at least my use case, and it has me questioning the merits of DI frameworks generally. Dagger2 at least operates at compile time.

Why hasn't Erlang's Open Telecom Platform (OTP) been ported to other languages?

I'm starting to dive into Erlang for the first time, and OTP is held up by lovers and critics alike as the gold standard for highly available, distributed processing.
Given that OTP has been around for decades and is openly documented, why is it that other languages supporting lightweight threads/processes haven't adopted versions of their own? Are there technical/political challenges? Or does everyone just shrug and learn Erlang?
Thanks!
The largest issue is that most language runtimes don't have built-in lightweight concurrency and error isolation with exit signal propagation. Without those things you would have a really hard time properly porting OTP.
For the languages that do have the right kind of runtime, I am seeing some effort, or at least plans, to build OTP-inspired frameworks. Cloud Haskell is the first that comes to mind. I also expect that Go and Rust will eventually have something like OTP if they don't already.
There are technical challenges, as Erlang itself is designed for the same features OTP is known for. Case in point, Basho Riak is a distributed fault-tolerant key/value store written in Erlang. One might be able to port it to Haskell or some similar functional language, but it would probably be a lot of work. Just for fun, you might look into OTP stuff written in the Elixir language.
Actually, it has been (tried).
Akka is a library which takes some OTP features and implements them in Scala for the JVM.
Given that the principles underlying the JVM and BEAM (the Erlang VM) are very different (mainly, GC, scheduling, and message passing are radically different), I can't say how successful that implementation is and how many benefits of the original OTP it preserves. There's a lot of (heated) debate on that on the internet.
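For a flavour of what the port looks like, here is a rough sketch of an OTP-style one-for-one supervisor using Akka's classic (untyped) Java API - Worker is a made-up child actor, and the exact constructors vary a bit between Akka versions:

    import akka.actor.AbstractActor;
    import akka.actor.ActorRef;
    import akka.actor.OneForOneStrategy;
    import akka.actor.Props;
    import akka.actor.SupervisorStrategy;
    import akka.japi.pf.DeciderBuilder;
    import java.time.Duration;

    // Mirrors an OTP one_for_one supervisor: when a child crashes,
    // only that child is restarted.
    class Supervisor extends AbstractActor {
        private final ActorRef worker =
                getContext().actorOf(Props.create(Worker.class), "worker");

        @Override
        public SupervisorStrategy supervisorStrategy() {
            return new OneForOneStrategy(
                    10,                      // max restarts...
                    Duration.ofMinutes(1),   // ...within this window
                    DeciderBuilder
                            .match(RuntimeException.class, e -> SupervisorStrategy.restart())
                            .matchAny(e -> SupervisorStrategy.escalate())
                            .build());
        }

        @Override
        public Receive createReceive() {
            // Forward everything to the supervised child.
            return receiveBuilder()
                    .matchAny(msg -> worker.forward(msg, getContext()))
                    .build();
        }
    }

    class Worker extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .matchEquals("boom", m -> { throw new RuntimeException("crash"); })
                    .match(String.class, m -> System.out.println("worker got: " + m))
                    .build();
        }
    }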

Corba Trading Service inspecting tool

Is there any tool for viewing registered types in CORBA Trading Service, and maybe, for making some simple queries for objects?
I am using TAO, if it matters.
Not that I know of. Maybe you can write your own and contribute it back to TAO. Consider a scripting language for the client, like Ruby with the R2CORBA implementation, which is interoperable with TAO.

Is there a more modern implementation of CORBA?

I'm figuring that CORBA is considered a legacy technology that just refuses to die. That being said, I'm curious whether there are any known standards out there that are preferred (and are also as platform-independent).
Thoughts? TIA!
Many organizations are moving to web services and the open standards relating to them (HTTP, WS-*) as alternatives to CORBA.
This article provides a comparison of the two technologies and offers some recommendations on when to use which.
If you really care about platform independence and protocol standardization - then the WS-* standards are something to look into.
There is now a state-of-the-art, modern CORBA implementation using C++11: TAOX11. It uses the new IDL-to-C++11 language mapping; see the TAOX11 website for details. TAOX11 is supported on a wide range of platforms and compilers.
I have recently tried Google Protocol Buffers; they seem rather similar to CORBA by design (a kind of IDL with a compiler, compact binary messages, etc.). It is probably one of the many possible successors.
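To illustrate the similarity, here is a sketch (hypothetical .proto and generated Greeting class) of the round trip; the .proto file plays roughly the role IDL plays in CORBA:

    // Hypothetical greeting.proto, compiled with protoc --java_out=...:
    //   syntax = "proto3";
    //   message Greeting {
    //       string name = 1;
    //       int32 times = 2;
    //   }
    import com.google.protobuf.InvalidProtocolBufferException;

    class ProtoRoundTrip {
        public static void main(String[] args) throws InvalidProtocolBufferException {
            // Build a message with the generated builder...
            Greeting greeting = Greeting.newBuilder()
                    .setName("world")
                    .setTimes(3)
                    .build();
            // ...serialize it to a compact binary form...
            byte[] wire = greeting.toByteArray();
            // ...and parse it back (possibly in another language or process).
            Greeting parsed = Greeting.parseFrom(wire);
            System.out.println(parsed.getName() + " x" + parsed.getTimes());
        }
    }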
Web services are good for the right tasks, but creating and parsing messages takes more time, and text-based messages are bulkier than binary ones. A REST API with JSON looks like a good solution where binary protocols do not fit well.
ICE from ZeroC aims to be a "better CORBA".
Unfortunately their licensing terms are crap (at least the last time I checked with them), as they do not sell developer licenses, only (roughly) per-installation terms.
It is offered under the GPL too, if you can live with that.

Should I make and implement a network protocol by hand or use a middleware (if so which)?

I have some data that I need to share between multiple services on multiple machines. Stuffing the data into a database or shuffling it over HTTP won't work in this situation, and ideally the different pieces of software will need to communicate with each other directly (or through one central coordinator that can send and receive).
Is it recommended to create and implement a network protocol or use some tool to do the communication?
If I did go the route of creating a protocol myself, it wouldn't have to be very complex: under 10 different message types, but it would have to be re-implemented in a few different languages for this project and support Unicode. I have read plenty (and done some work) with handling sockets, but don't have much knowledge of designing a protocol of my own. Are there any good resources on this?
There are also things like ICE and RPC that look interesting. The limit of my experience is using ICE and XML-RPC for a few days each. Is this the better route to go? If so, what tools are out there?
Recently I've been using Google Protocol Buffers for encoding and shipping data between different machines running software written in different languages. It is quite easy to do, and takes away a lot of the hassle of designing a custom protocol.
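For instance, here is a sketch of shipping messages over a raw socket, assuming a protoc-generated Greeting class from a hypothetical greeting.proto. The delimited read/write calls handle message framing, which is exactly the fiddly part of a hand-rolled protocol:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    class ProtoOverSocket {
        // Greeting is a hypothetical protoc-generated message class.
        static void send(Socket socket, Greeting msg) throws Exception {
            OutputStream out = socket.getOutputStream();
            // writeDelimitedTo prefixes the message with its varint-encoded
            // length, so the reader knows where one message ends.
            msg.writeDelimitedTo(out);
            out.flush();
        }

        static Greeting receive(Socket socket) throws Exception {
            InputStream in = socket.getInputStream();
            // Returns null at end of stream.
            return Greeting.parseDelimitedFrom(in);
        }
    }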
Without knowing what technologies and platforms you are dealing with, it's difficult to give you a very specific answer - so I'll try to give you some general feedback.
If the system(s) you are wishing to connect span more than a single platform and/or technology, you are probably better off using an existing transport mechanism and protocol, to maximize the chance your base platform will already have a library (or several) to interact over it. Also, integrating security and other features in a stack with known behaviors is more likely to be documented (with examples floating around). RPC (and ICE, though I have less familiarity with it) has some useful capabilities, but it also requires a lot of control over the environment, and security can be convoluted (particularly if you are passing objects between different languages).
With regards to avoiding polling, this is a performance-related issue; there are design patterns which can help you handle such things if you understand how you need the system to work (e.g. the observer pattern - kind of a don't-call-us-we'll-call-you approach, sketched below). The network environment you are playing in will dictate which options are actually viable (e.g. a local LAN will have different considerations from something which runs over a WAN or the internet). Factors like firewall tunneling, VPN traversal, etc. should play a part in your final selected technology profile.
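A bare-bones sketch of that observer idea in Java (made-up names):

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Interested parties register once and are called back on each update,
    // instead of repeatedly polling the subject for changes.
    interface DataListener {
        void onUpdate(String newValue);
    }

    class DataSubject {
        private final List<DataListener> listeners = new CopyOnWriteArrayList<>();

        void subscribe(DataListener listener) {
            listeners.add(listener);
        }

        void publish(String newValue) {
            for (DataListener l : listeners) {
                l.onUpdate(newValue);  // push, not poll
            }
        }
    }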
The only other major consideration (that I can think of just now... ;-)) would be to consider the type of data you need to pass about. Is it just text, or do you need to stream binary objects? Would an encoding format (like XML or JSON or bJSON) do the trick? You mention "less than ten message types" as part of the question, but is that the only information which would ever need to be communicated by the system?
Either way, unless the overhead of existing protocols is unacceptable, you're better off leveraging established work 99% of the time. Creativity is great - but commercial projects usually benefit from well-known behaviors, even if they're not the coolest or slickest (kind of the "as long as it works..." approach).
hth!
