Why has CORBA lost popularity? [closed]

I've never heard anyone speak of CORBA in anything but derisive terms, which is odd considering that 10+ years ago it was the bee's knees. Why did CORBA fall from grace? Was it purely that the implementations were bad, or was there something more fundamental?

It's not just CORBA, it's RPC in general. This includes stuff like DCOM, Java-RMI, .NET Remoting and all the others as well.
The problem is basically that distributed computing is fundamentally different than local computing. RPC tries to pretend these differences don't exist, and makes remote calls look just like local calls. But, in order to build a good distributed system, you need to deal with those differences.
Bill Joy, Tom Lyon, L. Peter Deutsch and James Gosling identified 8 Fallacies of Distributed Computing, i.e. things that newcomers to distributed programming believe to be true, but that are actually false, which usually results in the failure of the project or a significant increase in cost and effort. RPC is the perfect embodiment of those fallacies, because it is built on those same wrong assumptions:
The network is reliable.
Latency is zero.
Bandwidth is infinite.
The network is secure.
Topology doesn't change.
There is one administrator.
Transport cost is zero.
The network is homogeneous.
All of this is true for local computing.
Take reliability, for example: if you call a method locally, then the call itself always succeeds. Sure, the called method itself may have an error, but the actual calling of the method will always succeed. And the result will always be returned, or, if the method fails, an error will be signaled.
In a distributed system, this is not true: the act of calling the method itself may fail. That is, from your end it looks like you called the method, but the call actually got lost on the network and never reached the method. Or the method successfully received the call and performed the operation, but the result got lost on the way back to you. Or the method failed, but the error got lost.
It's similar with latency: locally, calling a method is essentially free. The method itself may take an arbitrary amount of time to compute the answer, but the call is free. Over a network, a call may take hundreds of milliseconds.
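To make that concrete, here is a minimal Java sketch; addRemote is just a stand-in for whatever transport you would actually use. The point is only that the remote variant forces the caller to deal with timeouts and lost requests or replies, while the local variant never does:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

public class CallDemo {
    // A local call: the invocation itself cannot fail or get lost.
    static int addLocal(int a, int b) { return a + b; }

    // A stand-in for a remote call: the request or the reply can be lost,
    // and the round trip takes real time, so the caller must handle both.
    static CompletableFuture<Integer> addRemote(int a, int b) {
        return CompletableFuture.supplyAsync(() -> a + b);   // network I/O would go here
    }

    public static void main(String[] args) {
        int local = addLocal(1, 2);   // always returns or throws; never silently disappears

        try {
            int remote = addRemote(1, 2)
                    .orTimeout(200, TimeUnit.MILLISECONDS)   // latency is not zero
                    .get();                                  // may fail for purely network reasons
            System.out.println(local + " " + remote);
        } catch (ExecutionException | InterruptedException e) {
            // the call itself failed: the request or the reply may have been lost
        }
    }
}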
That's why pretty much all RPC projects, including CORBA, failed.
Note that the other way around works just fine: if you just pretend that all calls are remote calls, then the worst thing that can happen is that you lose a little bit of performance and your app contains some error handling code that will never be used. That's how Erlang works, for example.
In Erlang, processes always have separate heaps and separate garbage collectors, just as if they were running on different machines on different continents, even when those processes run on the same VM on the same CPU in the same address space. If you pass data from one process to another, that data is always copied, just as it would have to be if the processes were on different machines. Calls are always made as asynchronous message sends.
So, making local and remote calls look the same is not the problem. Making them look like local calls is.
In CORBA, the problem is actually a bit more convoluted than that. They did make local calls look like remote calls, but since CORBA was designed by committee, remote calls were extremely complex, because they had to be able to handle some absurd requirements. And that complexity was forced upon everybody, even for local calls.
Compare that to Erlang again: there, the complexity is much lower. In Erlang, sending a message to a process is no more complex than calling a method in Java. The interface is basically the same; only the expectations are different: method calls in Java are expected to be instantaneous and synchronous, while message sends in Erlang are expected to be asynchronous and to have visible latency. But actually using them is no more involved than a simple local procedure call.
Another difference is that Erlang distinguishes between function calls, which can only happen inside a process and thus are always local, and message sends, which happen between processes and are assumed to be always remote, even if they aren't. In CORBA, all method calls are assumed to be remote.
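For illustration, here is a rough Java sketch of that model (the Mailbox class and all names are mine, not anything from Erlang or CORBA): every interaction is an asynchronous send into a queue, the receiver runs its own loop, and the payload is immutable so nothing is shared:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MailboxDemo {
    record Message(String payload) {}   // immutable payload, so nothing is shared mutably

    static class Mailbox implements Runnable {
        private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>();

        // "send" is asynchronous: it never waits for the receiver to process the message
        void send(Message m) { queue.offer(m); }

        @Override
        public void run() {
            try {
                while (true) {
                    Message m = queue.take();   // the receive loop, analogous to Erlang's receive
                    System.out.println("handled: " + m.payload());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Mailbox box = new Mailbox();
        Thread receiver = new Thread(box);
        receiver.setDaemon(true);
        receiver.start();

        box.send(new Message("hello"));   // fire and forget: no return value, no implicit waiting
        Thread.sleep(100);                // give the receiver a moment before the demo exits
    }
}

Whether this is a nice API is beside the point; the shape is what matters: a caller can never mistake this for a cheap, always-successful local call.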

Distributed object technologies like CORBA and DCOM had problems with granularity: implementations tended to be too 'chatty' to perform well over a network, and they generally leaked implementation details, which made solutions fragile.
Service Orientation gained prominence as a reaction to those concerns.
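As a hedged illustration of the granularity point, using purely hypothetical Java interfaces (nothing here is CORBA or DCOM API):

// Distributed-object style: every getter is a separate remote round trip,
// so rendering one customer costs three network calls ("chatty").
interface CustomerRemote {
    String getName(long id);
    String getAddress(long id);
    String getPhone(long id);
}

// Service-oriented style: one coarse-grained call returns a self-contained
// document, so the same screen costs a single round trip and the service
// can change its internals without breaking callers.
record CustomerDto(String name, String address, String phone) {}

interface CustomerService {
    CustomerDto getCustomer(long id);
}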

Related

FluxSink seems to be queuing things

For testing purposes, I want to send only one thing at a time, but the thing(s) that FluxSink is sending to the other side do not match the thing that I literally just called the FluxSink.next method with. The thing(s) that it is sending over are things that were "nexted" a while ago. Is there any way to prevent FluxSink from doing any kind of queuing/batching, or to set the queue/batch size to 1, just like I'm setting my batch size to one for my test?
I don't fully understand what you're trying to achieve, so this is my guess based on the title:
You can use FluxSink.OverflowStrategy, but there is no strategy that blocks FluxSink.next, because reactive programming is not about blocking: if the producer is faster than the consumer, the framework will take care of it and BUFFER, DROP, etc. So you either have to speed up your consumer or choose an appropriate FluxSink.OverflowStrategy.
Think about how you would implement this without reactive programming: if your producer is faster than your consumer, you would probably queue the producer's data or throw an error because the data is too old.
Anyway, the best choice in your case is probably Flux.create or Flux.generate (see: Difference Between Flux.create and Flux.generate).
Remember: Flux.generate is designed to calculate and emit values on demand, so you could put a BlockingQueue.poll() inside it. That's not what I'd recommend, but it's probably what you're looking for.
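To make that concrete, here is a minimal Reactor sketch (names are illustrative) that uses Flux.create with an explicit OverflowStrategy plus a subscriber that requests exactly one element at a time, which is about as close to "batch size 1" as the model allows:

import org.reactivestreams.Subscription;

import reactor.core.publisher.BaseSubscriber;
import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

public class OneAtATimeDemo {
    public static void main(String[] args) {
        // DROP discards anything emitted while the subscriber has no outstanding demand,
        // so nothing piles up in an internal queue behind your back.
        Flux<Integer> flux = Flux.create(sink -> {
            for (int i = 0; i < 10; i++) {
                sink.next(i);
            }
            sink.complete();
        }, FluxSink.OverflowStrategy.DROP);

        // A subscriber that never has more than one element outstanding.
        flux.subscribe(new BaseSubscriber<Integer>() {
            @Override
            protected void hookOnSubscribe(Subscription subscription) {
                request(1);                     // ask for a single element to start
            }

            @Override
            protected void hookOnNext(Integer value) {
                System.out.println("got " + value);
                request(1);                     // only now ask for the next one
            }
        });
    }
}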

Track and trace geolocation & temperature with blockchains

For my internship, I need to implement a blockchain-based solution to manage a drug supply chain. Managing this supply chain involves tracking and tracing (geolocating) a drug along the chain, but also monitoring the storage temperature to see whether the cold chain is respected. For that I also intend to use IoT, where a device will feed information into the blockchain solution. However, I have a few questions that I can't easily find answers to.
The first one is that I don't know whether I should use Ethereum or not, since each time a new block is added (each block representing a "real-time" update of the information about the product) I will have to spend money. Is there any solution for that? Or do I need to create my own blockchain with JavaScript?
The second question is that I don't know where to begin in order to connect IoT to the blockchain. I searched research sites, but they only talk about it without presenting any examples...
The third one is more a request for confirmation than a question: I want to know whether my idea of using IoT to track and manage products in a supply chain can work at a wide scale, since the bigger a blockchain gets, the slower blocks are added because of the consensus mechanism. That means my "real-time" tracking would not truly be on time, since there would be a waiting period before each block is added to the blockchain. If that delay is only a few seconds to minutes, there is no problem, but because the number of blocks will increase rapidly with real-time tracking (I was planning on one block per minute for each storage site or transport vehicle), I worry that this scalability problem makes it impractical.
Thanks in advance to anybody who can help me with these questions.
#1) Whether you use Ethereum or another blockchain that may be better suited for this purpose is completely up to you. I expect you will get a lot of opinionated answers on this. Ethereum is certainly the most popular blockchain for a use like this, but that does not mean it is the best for your app. Over the last several years we have seen many new blockchains with lower/no fees, faster block times, and increased scalability. I would suggest doing some research into various "Supply Chain", "Enterprise", and "Business" blockchains, as these are likely the type you are looking for and will cost very little in blockchain fees due to them not being as widely used as Ethereum is.
#2) You will have to settle on a blockchain before you can start prototyping or looking for examples, as each one will be different. For storing "log" data for your application there are generally 2 options: Storing data in a smart contract on Ethereum (or an Ethereum-like blockchain), or storing data in the OP_RETURN field of a transaction on Bitcoin or a bitcoin-based blockchain. The latter is likely easier to get started with and is simpler to understand, you just put the data in a transaction and send it (to yourself, even).
#3) Yes, there are special-purpose blockchains created exactly for this purpose that are meant to ingest large amounts of data and that can scale to meet the needs of an application like you describe. Some blockchains have block times of 1 minute or less, meaning that on average you could update the data every minute if you were willing and able to pay the blockchain fee to include the data in each new block (personally, I would suggest a longer interval, such as every 5-10 minutes).
You can use Emercoin NVS technology and the Emercoin (testnet) blockchain to upload your data into it. By creating a "name label" with the name_new command, you can thereafter upload a chain of modifications to your name's value. The blockchain has the commands name_show (shows the most recently uploaded value) and name_history (shows the whole chain of uploaded values). You can view/debug your uploaded values within the Emercoin Testnet Explorer, tab NVS.
Regarding "using money":
I can give you (or anyone else) 100 test EMCs for free. Just post your tEMC address in the comments here. 100 tEMCs will be enough for ~100,000 records on the testnet, which I think is more than enough for your test tasks.
If you need to run your service in production, you need to use the real blockchain (with high trust), not the testnet. Anyway, EMC is very cheap right now, and you can buy 100 EMC for only ~$5. I don't think that is a big deal for your organization.
Ask more questions, and I will be happy to assist you here with this technology.
Because you have a permissioned / private environment that does not transfer value or store value, a blockchain specialised in value transfer, like Ethereum, is not a good choice.
Choosing a blockchain that is specialised in untrusted value writes simplifies creating your product a lot. Some good choices include:
BigchainDB
Hyperledger Sawtooth - comes with a stock supply chain example

Precise definition of "MPI_THREAD_SERIALIZED"

Since MPI_THREAD_MULTIPLE is not as performant and stable as MPI_THREAD_SERIALIZED, I am trying to convert my MPI application to serialized. However, the exact definition of MPI_THREAD_SERIALIZED is not clear to me. Does it mean:
1. The start and the end of any MPI call should not overlap with any other MPI call (i.e. no two calls may be in progress at the same time)?
or 2. Only the start of an MPI call should not overlap with other calls (so two calls can still be overlapped)?
If the former is the case, how can one implement multi-threaded applications that use long-lasting blocking APIs such as MPI_Comm_accept or MPI_Win_fence?
An example would be a DPM application waiting for clients to serve.

Distributed message passing in D?

I really like the message passing primitives that D implements. I have only seen examples of message passing within a program though. Is there support for distributing the messages over e.g. a network?
The message passing functions are in std.concurrency, which only deals with threads. So the kind of message passing it provides is for threads only. There is no RMI or anything like that in Phobos. That's not to say that we'll never get something like that in Phobos (stuff is being added to Phobos all the time), but it doesn't exist right now.
There is, however, the std.socket module, which deals with talking to sockets and is obviously network-related. I haven't used it myself, but it looks like it sends and receives void[]. So it's not as nice as sending immutable objects around like you do with std.concurrency, but it does allow you to do network communication via sockets, and presumably in a much nicer manner than if you were using C calls directly.
It seems this has been considered. From the Phobos documentation (found through Jonathan M Davis's answer):
This is a low-level messaging API upon which more structured or restrictive APIs may be built. The general idea is that every messageable entity is represented by a common handle type (called a Cid in this implementation), which allows messages to be sent to in-process threads, on-host processes, and foreign-host processes using the same interface. This is an important aspect of scalability because it allows the components of a program to be spread across available resources with few to no changes to the actual implementation.
Right now, only in-process threads are supported and referenced by a more specialized handle called a Tid. It is effectively a subclass of Cid, with additional features specific to in-process messaging.

Clone a lua state

Recently I have run into many difficulties while developing with C++ and Lua. My situation is: for various reasons, there can be thousands of Lua states in my C++ program, but these states should all be identical just after initialization. Of course, I can call luaL_openlibs() and luaL_loadfile() for each state, but that is pretty heavy (in fact, it takes a rather long time just to initialize one state). So I am wondering about the following scheme: what about keeping one separate Lua state (the only state that has to be initialized) which is then cloned to produce the other Lua states? Is that possible?
When I started with Lua, like you I once wrote a program with thousands of states, had the same problem and thoughts, until I realized I was doing it totally wrong :)
Lua has coroutines and threads; you need to use these features to do what you want. They can be a bit tricky at first, but you should be able to understand them in a few days, and it will be well worth your time.
Take a look at the following Lua API call; I think it is exactly what you need.
lua_State *lua_newthread (lua_State *L);
This creates a new thread, pushes it on the stack, and returns a pointer to a lua_State that represents this new thread. The new thread returned by this function shares with the original thread its global environment, but has an independent execution stack.
There is no explicit function to close or to destroy a thread. Threads are subject to garbage collection, like any Lua object.
Unfortunately, no.
You could try Pluto to serialize the whole state. It does work pretty well, but in most cases it costs roughly the same time as normal initialization.
I think it will be hard to do exactly what you're requesting here given that just copying the state would have internal references as well as potentially pointers to external data. One would need to reconstruct those internal references in order to not just have multiple states pointing to the clone source.
You could serialize out the state after one starts up and then load that into subsequent states. If initialization is really expensive, this might be worth it.
I think the closest thing to doing what you want that would be relatively easy would be to put the states in different processes by initializing one state and then forking, however your operating system supports it:
http://en.wikipedia.org/wiki/Fork_(operating_system)
If you want something available from within Lua, you could try something like this:
How do you construct a read-write pipe with lua?
