Compatibility of CORBA implementations

As far as I know there were problems with incompatible CORBA implementations in the past resulting from different interpretations of the specification. How is the situation today? Can I expect two different implementations to interact without problems?

I have worked with CORBA for the last eight years, on a standard (the ASAM ODS OO API) that uses CORBA to implement the API between servers and clients.
We haven't found any incompatibilities for a long time (the built-in Java ORB, JacORB, MICO, omniORB, across C++ and Java).
When we started, the problems were mainly with the naming services (different ports, ...), but since then we haven't run into any real incompatibility.
I think the difference between the various ORB implementations lies mainly in which optional features each one supports.
What does remain is that the speed of the different implementations varies.

When I was testing the compatibility of Sun's (now Oracle's) CORBA implementation (Java SE 1.4.0) against my own, I discovered that Sun's CORBA did not properly switch from big to little endian (a CORBA implementation must be able to process both big- and little-endian messages, determining the byte order from the message header). A C-based implementation that sent its first message with a little-endian default was unable to talk to it.
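For reference, the check involved is tiny. Below is a minimal sketch (hypothetical code, not taken from any particular ORB) of how a receiver is supposed to pick the byte order from the fixed 12-byte GIOP message header; bit 0 of the flags octet says whether the sender encoded the message little-endian:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <stdexcept>

    // Result of parsing the fixed 12-byte GIOP message header.
    struct GiopHeader {
        bool     little_endian;   // true if the sender wrote little-endian CDR
        uint8_t  message_type;    // Request, Reply, LocateRequest, ...
        uint32_t message_size;    // body size, encoded in the sender's byte order
    };

    GiopHeader parse_giop_header(const uint8_t* buf, std::size_t len) {
        if (len < 12 || std::memcmp(buf, "GIOP", 4) != 0)
            throw std::runtime_error("not a GIOP message");

        GiopHeader h;
        h.little_endian = (buf[6] & 0x01) != 0;  // flags octet, bit 0 = byte order
        h.message_type  = buf[7];

        // The receiver must honour the *sender's* byte order here; an ORB that
        // always assumes its own native order shows exactly the kind of
        // interoperability bug described above.
        if (h.little_endian)
            h.message_size =  uint32_t(buf[8])         | (uint32_t(buf[9])  << 8)
                           | (uint32_t(buf[10]) << 16) | (uint32_t(buf[11]) << 24);
        else
            h.message_size = (uint32_t(buf[8])  << 24) | (uint32_t(buf[9])  << 16)
                           | (uint32_t(buf[10]) <<  8) |  uint32_t(buf[11]);
        return h;
    }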
The issue was reported as bug 4119129 and seems to be fixed now. I am posting this because somebody may have tried interoperating back then and remember that it did not work.
At that time, this was one of the reasons to use JacORB instead.

Related

HTTP libraries in Elixir

This question is more about getting information on the different HTTP libraries in Elixir.
I have been using HTTPoison, which is a wrapper around hackney, for a long time in all my Elixir projects. I read the following about hackney at http://big-elephants.com/2019-05/gun/:
Occasionally hackney could get stuck, so all the calls to HTTPoison
would be hanging and blocking caller processes.
Let me ask with an example: I was using Poison in my project and considered it the best JSON parser; then I saw Jason.
The primary reason is performance. Jason is about twice as fast and
uses half the memory while being almost 100% functionally compatible
with Poison (it covers all the features used directly in Phoenix).
So I know the reason why I want to switch from Poison to Jason.
For HTTP, I can see there are Tesla, Mint, and Gun. I just want to hear perspectives on switching from the famous option to what might be, shall I say, the more accurate one.

Is there a more modern implementation of CORBA?

I'm figuring that CORBA is considered a legacy technology that just refuses to die. That being said, I'm curious whether there are any known standards out there that are preferred (and are also as platform-independent).
Thoughts? TIA!
Many organizations are moving to web services and the open standards relating to them (HTTP, WS-*) as alternatives to CORBA.
This article provides a comparison of the two technologies and offers some recommendations on when to use which.
If you really care about platform independence and protocol standardization - then the WS-* standards are something to look into.
There is now a modern, state-of-the-art CORBA implementation using C++11: TAOX11. It uses the new IDL-to-C++11 language mapping; see the TAOX11 website for details. TAOX11 is supported on a wide range of platforms and compilers.
I have recently tried Google Protocol Buffers; they seem rather similar to CORBA by design (a kind of IDL with a compiler, compact binary messages, etc.). It is probably one of the many possible successors.
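To make the comparison concrete, here is a small hypothetical example (the Quote message and file names are made up; the generated method names follow protobuf's usual conventions). The .proto file plays roughly the role of CORBA IDL and protoc plays the role of the IDL compiler, although protobuf only covers marshalling, not remote invocation:

    // Hypothetical quote.proto, compiled with protoc:
    //   message Quote { string symbol = 1; double price = 2; }

    #include <string>
    #include "quote.pb.h"   // header generated by protoc

    std::string encode_quote() {
        Quote q;                     // generated message class
        q.set_symbol("ACME");
        q.set_price(42.0);

        std::string wire;            // compact binary encoding
        q.SerializeToString(&wire);
        return wire;
    }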
Web services are good for the right tasks, but creating and parsing the messages takes more time, and text-based messages are bulkier than binary ones. A REST API with JSON looks like a good solution where binary protocols do not fit well.
ICE from ZeroC aims to be a "better CORBA".
Unfortunately their licensing terms are crap (at least last time I checked with them), as they do not sell developer licenses but only (roughly) per-installation terms.
It is also offered under a GPL license, if you can live with that.

What is a real life example of CORBA?

What is an example of a situation where CORBA would be used? Is it just a matter of using an interface language (e.g. Java) to 'talk' to all applications?
CORBA might be used to build a language-independent, OS-independent distributed system. For example, developers working in C++ on Linux could build a common distributed system with developers working in Java on Windows. IDL describes the interfaces that bind the two implementations over a common substrate (CORBA).
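As a rough sketch of what that looks like in practice (the Quote interface and the generated header name are hypothetical; this uses the classic IDL-to-C++ mapping), a C++/Linux client could call a Java/Windows server like this:

    // Hypothetical IDL, run through the vendor's IDL compiler (tao_idl, omniidl, ...):
    //   interface Quote { double price(in string symbol); };

    #include "QuoteC.h"   // generated stub header; the name varies by ORB

    int main(int argc, char* argv[]) {
        CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

        // Obtain a reference, e.g. from a stringified IOR passed on the command
        // line; a naming service lookup would work just as well.
        CORBA::Object_var obj = orb->string_to_object(argv[1]);
        Quote_var quote = Quote::_narrow(obj.in());   // type-safe downcast

        CORBA::Double p = quote->price("ACME");       // remote call over GIOP/IIOP
        (void)p;

        orb->destroy();
        return 0;
    }

The Java side implements the same Quote IDL with its own generated skeletons; neither end needs to know the other's language or operating system.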
CORBA is also useful when building a plain old distributed object system - it has a rich set of services defined and is generally very well thought out. However, these days - depending on the language - many folks have opted either for simpler approaches (e.g., RMI, protocol buffers) or for message-based protocols (e.g., HTTP) when building distributed systems, so it's not as common. CORBA suffered from design-by-committee (especially on things like security).
More info:
http://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture
You can see a list of real-life examples of CORBA projects on the website below.
http://www.cs.wustl.edu/~schmidt/TAO-users.html
TAO is one of the most popular C++ CORBA implementations available today. The project is pretty active.
CORBA technology vendors killed each other through incompatible and bureaucratic implementations. Today, you can safely consider CORBA a legacy technology; that is, use it if you have to deal with components that already expose themselves through CORBA. Otherwise, stick to modern RPC/distribution standards like SOAP or, better yet, REST/JSON.
Sorry. To answer your question: CORBA was intended to be what SOAP, REST, and others are today. Real-life examples of applications of the latter are examples of things attempted with the former.

Why is creating a 64bit Delphi so hard?

The Internet is full of developers requesting a 64bit Delphi, and users of Delphi software requesting 64 versions.
delphi 32bit : 1.470.000 pages
delphi 64bit : 2.540.000 pages :-)
That's why I've been wondering why Embarcadero still doesn't offer such a version.
If it were easy to do, I'm sure it would have been done a long time ago. So what exactly are the technical difficulties that Embarcadero needs to overcome?
Is it the compiler, the RTL/VCL, or the IDE/Debugger?
Why is the switch from 32bit to 64bit more complicated than it was for Borland to switch from 16bit to 32bit?
Did the FPC team face similar problems when they added 64bit support?
Am I overlooking something important when I think that creating a 64-bit Delphi should be easier than Kylix or Delphi.NET?
From what I have read in forums, I think the main delay was that the 32-bit compiler could not be adapted to 64 bits easily at all, so they had to write a new compiler with a structure that allows it to be ported to new platforms more easily. That delay is pretty easy to understand in this field.
And the first thing the new compiler had to do was support the current 32-bit Windows before targeting 64-bit, so that extra delay is also easy to understand.
Now, on the road to 64-bit support, Embarcadero decided to target 32-bit Mac OS X, and that decision is something some people don't understand at all. Personally I think it's a good marketing decision from Embarcadero's business point of view (wait, I'm not saying 64-bit support is less important; read carefully, I'm talking not about importance but about marketability). That is a very controversial extra delay on the road to 64 bits (and although Embarcadero says they have teams working in parallel, in practice there is a delay, at least for versioning reasons - marketing again).
It is also good to keep in mind that FPC is not a commercial product, so they can add and remove features more easily than Delphi can.
If it weren't for the limitation on shell extensions (I have one that loads into Windows Explorer), I'd probably never care about 64-bit. But because of this limitation, I need it, and I need it now. So I'll probably have to develop that part in Free Pascal. It's a pity, as aside from this there are few apps that would actually benefit from 64-bit. IMO, most users are either drinking the Kool-Aid or are angry about having been duped into buying something that sounded great but turned into a headache. I know a guy who is happy to run Win7/64 so he has enough RAM to run a full copy of XP in a VM, which he wouldn't need if he'd gotten Win7/32 like I told him to. :-<
I think everyone has been duped by the HW manufacturers, particularly the RAM dealers who would otherwise have a very soft market.
Anyway, back to the question at hand... I'm caught between a rock and a hard place. My customers are placing demands on me, due to an architecture decision from M$ (not allowing 32-bit DLLs in Windows Explorer) and perception issues (64-bit must be twice as good as 32, or maybe 32 has to run on the "penalty core" or something). So I'm being driven by a largely "artificial" motivation, and therefore I must project that onto Embarcadero. But in the end, the need for 64-bit support in Delphi is, IMO, mostly based on BS. But they're going to have to respond to it, as will I.
I guess the closest I've seen to an "answer" to your question from Embarcadero's point of view is summarised in this article on the future of the Delphi compiler by Nick Hodges.
The real issues are not technical. First Borland/CodeGear, and then Embarcadero, have shown that they do not like to maintain more than one Windows version of Delphi. They delayed the Unicode switch until they could drop ANSI OS support entirely. In practice they would need to support both a Win32 compiler/library and a 64-bit compiler/library, because there is a mix of 32- and 64-bit Windows systems in use. I believe they are trying to delay that as much as possible to avoid having to maintain the 32-bit ones any longer than they must.
The Delphi compiler became pretty old and difficult to maintain, but they decided to rewrite it aiming at non-Windows OSes, and I am sure the driver was to port some Embarcadero database tools to non-Windows 32-bit platforms, ignoring Delphi customers' actual needs and once again delaying the 64-bit compiler and library, in a cross-platform attempt that again tries to cut corners to deliver quickly and is thereby doomed to fail once more.
Stubbornly, they do not want to switch to a two-year release cycle, because they want fresh cash each year, even though it becomes increasingly difficult to release really well-done improvements in such a short cycle - and it took almost four years to fix the issues introduced with the Delphi 2005 changes. In turn, they have to put developers to work on "minor" improvements to justify upgrades, instead of working fully on the 64-bit effort.
I have heard that they wanted a complete rewrite of the compiler without losing backward compatibility. Consider that there is no full syntax description of the language (I have asked, but they don't have one, so I made my own available to the general public). I can imagine that the documentation is not as complete as they would like, so they are probably trying to reverse-engineer their own code.
I'm a strong supporter of Delphi, and I think 2009 and 2010 are great releases (comparable with the rock-solid Delphi 7). But the lack of 64-bit support is going to kill them in the end.
Back to the question: the biggest problem is probably the compiler. The switch from 16 to 32 bit was easier because there was less legacy (Delphi 2 was already 32-bit, and the Object Pascal language was a lot simpler than it is now).
I'm not sure about Free Pascal. Maybe their code was easy to port; maybe they lost some backward compatibility. To be sure, you could ask them.
I know you are asking about the technical issues, but I guess the marketing department might be the biggest one... I am sure they get far more excited about the prospect of new markets that bring new customers, and thus manage to shift priorities. A problem (in my opinion) is the bad track record: we have seen Kylix and Delphi.NET in the past, and both were, ehm, kylixed. I can imagine that new customers will wait and see whether it's around to stay, and that in turn might lead Embarcadero to abandon it prematurely.
That said, there are certainly some issues and design considerations for x64, and I just hope that the Embarcadero team will share its thoughts about them and discuss them with the community (to prevent rants like the ones we had about the Unicode change).
There already is a 64-bit Delphi (Object Pascal). It's called Free Pascal. So while I have no doubt it's hard, it's not "so hard" that it's not feasible. Of course, I can't speculate about Embarcadero.
Allen Bauer from Embarcadero also said recently that they had to implement exception support completely differently for 64-bit "due to differences in the exception ABI on Win64".

Datamining models in FORTRAN or C (or managed code)?

We are planning to develop a datamining package for Windows. The program core / calculation engine will be developed in F#, with the GUI stuff / DB bindings etc. done in C# and F#.
However, we have not yet decided on the model implementations. Since we need high performance, we probably can't use managed code here (any objections?). The question is: is it reasonable to develop the models in FORTRAN, or should we stick to C (or maybe C++)? We are looking into using OpenCL at some point for suitable models - it feels funny having to go from managed code -> FORTRAN -> C -> OpenCL invocation in these situations.
Any recommendations?
F# compiles to the CLR, which has a just-in-time compiler. It's a dialect of ML, which is strongly typed, allowing all of the nice optimisations that go with that type of architecture; this means you will probably get reasonable performance from F#. For comparison, you could also try porting your code to OCaml (IIRC this compiles to native code) and see if that makes a material difference.
If it really is too slow, then see how far scaling the hardware will get you. With the performance available from a modern PC or server, it seems unlikely that you would need to go to anything exotic unless you are working with truly brobdingnagian data sets. Users with smaller data sets may well be fine on an ordinary PC.
Workstations give you perhaps an order of magnitude more capacity than a standard desktop PC. A high-end workstation like an HP Z800 or XW9400 (similar kit is available from several other manufacturers) can take two 4- or 6-core CPUs, tens of gigabytes of RAM (up to 192 GB in some cases), and has various options for high-speed I/O like SAS disks, external disk arrays or SSDs. This type of hardware is expensive but may be cheaper than a large amount of programmer time. Your existing desktop support infrastructure should be able to handle this sort of kit. The most likely problem is compatibility issues when running 32-bit software on a 64-bit O/S; in that case you have various options, like VMs or KVM switches, to work around the compatibility issues.
The next step up is a 4- or 8-socket server. Fairly ordinary Wintel servers go up to 8 sockets (32-48 cores) and perhaps 512 GB of RAM, without having to move off the Wintel platform. This gives you a fairly wide range of options within your platform of choice before you have to go to anything exotic.
Finally, if you can't make it run quickly in F#, validate the F# prototype and build a C implementation using the F# prototype as a control. If that's still not fast enough you've got problems.
If your application can be structured in a way that suits the platform, then you could look at a more exotic platform. Depending on what will work with your application, you might be able to host it on a cluster or a cloud provider, or build the core engine on a GPU, Cell processor or FPGA. However, in doing this you take on (quite substantial) additional costs and exotic dependencies that might cause support issues. You will probably also have to bring in a third-party consultant who knows how to program the platform.
After all that, the best advice is: suck it and see. If you're comfortable with F# you should be able to prototype your application fairly quickly. See how fast it runs and don't worry too much about performance until you have some clear indication that it really will be an issue. Remember, Knuth said that premature optimisation is the root of all evil about 97% of the time. Keep a weather eye out for issues and re-evaluate your strategy if you think performance really will cause trouble.
Edit: If you want to make a packaged application then you will probably be more performance-sensitive than otherwise. In this case performance will probably become an issue sooner than it would with a bespoke system. However, this doesn't affect the basic 'suck it and see' principle.
For example, at the risk of starting a game of buzzword bingo, if your application can be parallelized and made to work on a shared-nothing architecture you might see if one of the cloud server providers [ducks] could be induced to host it. An appropriate front-end could be built to run locally or through a browser. However, on this type of architecture the internet connection to the data source becomes a bottleneck. If you have large data sets then uploading these to the service provider becomes a problem. It may be quicker to process a large dataset locally than to upload it through an internet connection.
I would advise not to bother with optimizations yet. First try to get a working prototype, then find out where computation time is spent. You can probably move the biggest bottlenecks out into C or Fortran when and if needed -- then see how much difference it makes.
As they say, often 90% of the computation is spent in 10% of the code.
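To illustrate the "move the bottlenecks into native code" suggestion, here is a minimal hypothetical sketch (the function, file and DLL names are made up) of a hot loop compiled into a plain C-callable DLL that the F#/C# layer can reach via P/Invoke:

    // model_core.cpp -- build as a DLL, e.g.:  cl /O2 /LD model_core.cpp
    //
    // Hypothetical F# declaration for calling it via P/Invoke:
    //   [<DllImport("model_core.dll", CallingConvention = CallingConvention.Cdecl)>]
    //   extern double dot_product(double[] a, double[] b, int n)

    extern "C" __declspec(dllexport)
    double dot_product(const double* a, const double* b, int n) {
        // Plain loop over contiguous arrays; an optimizing compiler will
        // typically vectorize this.
        double acc = 0.0;
        for (int i = 0; i < n; ++i)
            acc += a[i] * b[i];
        return acc;
    }

The same extern "C" entry point could later be reimplemented in Fortran or backed by an OpenCL kernel without changing the managed side.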
