Why does SCDF not support JMS? - spring-cloud-dataflow

I'm evaluating the use of Spring Cloud Data Flow. I'm wondering why it supports Kafka and RabbitMQ but does not support JMS. Is there a technical reason, or is it just a matter of contributing and adding JMS support?

There is a variety of JMS-spec implementations from different vendors. In fact, we have implementations for IBM MQ, Solace, and ActiveMQ.
As for support: since JMS is a spec and there is a variety of vendor-specific investments in the enterprise, we (Spring) didn't want to ship binaries that involve vendor-specific licensing terms, so we have opened it up to the partners to support them instead. Example: Solace built a supported version of the Solace PubSub+ binder implementation, and that's hosted in their GitHub, too.
Google Pub/Sub and Azure Event Hubs are the other binder implementations, and they are supported and maintained by those vendors directly. More details here.
Lastly, from SCDF's point of view, if the Spring Cloud Stream applications are bundled with a particular binder implementation, there's nothing extra required from SCDF. The SCDF server orchestrates the deployment of the Spring Cloud Stream applications on the targeted platform.
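In practice, choosing a binder comes down to which binder dependency is on the Spring Cloud Stream application's classpath. A minimal sketch of the pattern (the Solace coordinates shown are illustrative; check the vendor's documentation for the exact group/artifact IDs and versions):

```xml
<!-- In the Spring Cloud Stream application's pom.xml: swap the default
     Kafka binder for a vendor-provided binder, e.g. Solace's. -->
<dependency>
    <groupId>com.solace.spring.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-solace</artifactId>
</dependency>
```

SCDF itself is unchanged: it deploys whatever binder the application was built with.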

Related

difference between spring-cloud-starter-dataflow-server (Data Flow Server Starter) and spring-cloud-starter-dataflow-server-local (Local Data Flow Server Starter)

I've recently started learning Spring Cloud Data Flow, also known as SCDF. I've just started looking at https://codenotfound.com/spring-batch-admin-example.html, which seems like a very nice example; I would also need more examples to really understand the use of Spring Cloud Data Flow with Spring Batch, as I have good experience with Spring Batch.
What's the difference between spring-cloud-starter-dataflow-server (Data Flow Server Starter) and spring-cloud-starter-dataflow-server-local (Local Data Flow Server Starter) ?
We used to ship spring-cloud-starter-dataflow-server-local as a standalone uber-jar for local deployments a few years ago. Similarly, we used to have spring-cloud-starter-dataflow-server-kubernetes, spring-cloud-starter-dataflow-server-cloudfoundry, and others.
However, we have consolidated all the supported platform implementations of SCDF into a single uber-jar, and that is spring-cloud-starter-dataflow-server. Please only use this artifact for any development/deployment, even if it is only used locally.
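A minimal sketch of the dependency to use today (version omitted; pick it up from your Spring Cloud BOM or the release notes):

```xml
<!-- The single, consolidated SCDF server artifact; the platform-specific
     *-local / *-kubernetes / *-cloudfoundry starters are deprecated. -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-dataflow-server</artifactId>
</dependency>
```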
As for feature capabilities, we have a dedicated page that lists them. Once you dig into the relevant sections, ranging from developer guides [example: batch developer guide] to recipes, hopefully you will have a good idea.
Likewise, you might find the architecture and concepts pages useful for your research; they cover the broad set of capabilities that SCDF supports, including a first-class orchestration experience for Spring Batch workloads.

Key differences between Hyperledger Aries and Hyperledger Indy?

Both Hyperledger Aries and Hyperledger Indy are platforms for building distributed ledger applications for identities.
What are the main differences between them? When should you choose one over the other to implement a blockchain solution?
Aries is the agent (client) part of a decentralized identity (ledger, DIDs, verifiable credentials) application that is intended to be agnostic to the underlying ledger/DIDs/verifiable credentials layer.
Indy is a decentralized identity implementation including support for a ledger, DIDs and verifiable credentials.
Initial Aries work was to move the agent work in Indy to Aries, and so the first working versions of Aries use Indy underneath for the decentralized identity components. Over time, those components will become pluggable, and additional decentralized identity components will be supported. Thus, major parts of the indy-sdk will be deprecated, as they are implemented in Aries.
For building solutions, you should always be looking at Aries to start. You will need to know what Indy does, but not the details. The only time you would need to dive into Indy is if you want to extend its capabilities to support your use case.
The question is clear: what choice to make between Aries and Indy? We could also rephrase it as follows: where to start from?
Indy certainly represents a base layer; Aries, at the moment, is the better choice for building identity apps. Indy, in fact, provides the blockchain network, the DID implementation, and all features related to verifiable credentials. Aries, on top of it, helps to build apps which can communicate over a peer-to-peer network through a secured communication channel: DIDComm. From the user interaction point of view, Aries is integrated with identity apps and internally talks to Indy. The aim of Aries, obviously, is to build an interoperable communication layer that can be connected not only to Indy but also to other blockchain frameworks.
Indy, as the first project in the Hyperledger family to build a decentralized identity, offers a real-time view of the transactions, and its architecture is based on self-sovereign identity, which enables users to have complete control over their identity. At the very beginning, Indy was good for building an identity solution, but it clearly lacked peer-to-peer communication, which is at the heart of an identity solution. Aries has filled this gap. For building solutions, there are many things that need to be considered, but you should look at Aries to start.
The Hyperledger Indy project was started for building decentralized identity solutions. Evernym donated the codebase to the Hyperledger community, and thus Indy was born.
In the initial architecture, Indy was supposed to provide governance (consensus), verifiable credentials, DIDs, and DID communication between different entities. Hyperledger Indy has provided all of the above except DID communication, which later gave birth to Hyperledger Aries.
Now the question is: why doesn't Indy itself provide the DID communication feature rather than relying on Aries?
1: Hyperledger projects support a plug-and-play architecture, and detaching DID communication was a good decision, letting Indy focus more on the core identity part. Hyperledger Ursa was created for the same reason: to detach all cryptographic features from Indy.
2: Identity is a fundamental right of citizens, so over time there will be thousands of service providers offering identity solutions, and interoperability will be a key factor. Keeping that in mind, building ledger-agnostic clients is a good idea; Aries not only supports Indy but will support other blockchain ledgers too.
So, putting all the pieces together: Indy provides the core identity features, while Aries is just one of the clients consuming those services. It is the same as the relation between Ethereum (Indy) and Web3 (Aries). From a development perspective, we need to be more focused on Aries to develop client apps.
In layman's terms:
Aries is for communication between agents
Indy is for cryptographic transactions (issuance/proofing/etc.)
Aries primarily covers the agent part of Hyperledger Indy, which was initially covered by the Indy SDK. It supports connections to other blockchains (for now, only the Indy ledger).
Indy, meanwhile, covers the blockchain part as of now.

Solace Integration with .NET/REST

I am totally new to Solace; any ideas will be appreciated. I am building a system that will integrate with a Solace messaging bus. The system will have a service layer which will communicate with the Solace messaging bus to pull messages from external systems, and in the future it will also integrate with internet-based messages.
So I have three options in front of me: 1) JMS, 2) .NET, 3) REST.
Could you please let me know which of the above is the best option when the service layer has to connect with the .NET business layer, considering extensibility, performance, message transformation, scalability, etc.?
Thanks
Messages are generally inter-operable between the different APIs.
For example, a JMS message can be consumed by a .NET consumer.
You can pick whichever API is the most convenient for your use case.
Since your service layer is communicating directly with your .NET Business layer, perhaps it might make sense to have the service layer use the .NET API.
Alternatively, you might feel that it makes sense to make use of the REST API, which is an open protocol, without the need for Solace provided libraries.
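To make the REST option concrete, here is a hedged sketch of how a publish request is shaped. Solace's REST messaging facade maps an HTTP POST to a path like `/TOPIC/<topic-name>` (or `/QUEUE/<queue-name>`) onto a message published to that destination; the broker URL, port, and topic below are illustrative placeholders, not values from the question.

```python
import urllib.request

def build_publish_request(broker_url, topic, payload: bytes):
    """Build (but do not send) a POST that publishes `payload` to `topic`
    via a Solace broker's REST messaging endpoint."""
    return urllib.request.Request(
        url=f"{broker_url}/TOPIC/{topic}",  # use /QUEUE/<name> to target a queue
        data=payload,
        method="POST",
        headers={"Content-Type": "text/plain"},
    )

# Hypothetical broker address and topic, for illustration only.
req = build_publish_request("http://localhost:9000", "orders/created", b"hello")
```

Because it is plain HTTP, any .NET HTTP client can send the equivalent request without Solace-provided libraries, which is the "open protocol" advantage mentioned above.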

How do AppDynamics (and similar programs) retrieve information

How do AppDynamics and similar programs retrieve data from apps? I read somewhere here on SO that it is based on bytecode injection, but is there an official or reliable source for this information?
Data retrieval by APM tools is done in several ways, each with its own pros and cons:
Bytecode injection (for both Java and .NET) is one technique; it is somewhat intrusive but allows you to get data from places the application owner (or even 3rd-party frameworks) did not intend to expose.
Native function interception is similar to bytecode injection, but allows you to intercept unmanaged code.
Application plugins - some applications (e.g. Apache, IIS) give access to monitoring and application information via well-documented APIs and a plugin architecture.
Network sniffing allows you to see all the communication to/from the monitored machine.
OS-specific un/documented APIs - just like application plugins, but for the OS (Windows/*nix).
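As an illustrative sketch (not AppDynamics code), the core idea behind bytecode injection can be shown in Python by wrapping a function at runtime: the "agent" records timing without the application code changing.

```python
import functools
import time

def instrument(fn, metrics):
    """Wrap fn so each call's name and duration are appended to metrics."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            metrics.append((fn.__name__, time.perf_counter() - start))
    return wrapper

metrics = []

def handle_request(n):
    return sum(range(n))

# The "injection" point: the original name now resolves to the wrapper.
handle_request = instrument(handle_request, metrics)
result = handle_request(1000)
```

A real APM agent does the analogous rewrite at the bytecode or native-code level, which is why it can instrument third-party code it has no source for.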
Disclaimer : I work for Correlsense, provider of APM software SharePath, which uses all of the above methods to give you complete end-to-end transaction visibility.

Erlang - Riak clients

I am having trouble finding the API for the "local Erlang client" for Riak.
Here is what Riak wiki says:
The local Erlang client is a tightly-integrated part of Riak and the Riak REST interface uses the Erlang client internally. You can find more information about the Erlang-native driver in the edoc API.
The link redirects to the main wiki page. There is plenty of information on the PBC client, though.
How do both clients compare and what are the pros and cons in using one or another?
The API for the native Erlang client (the edoc) is found here.
But I would second what Dan says. However, note that the PBC client is still very much at the alpha stage of development and, as far as I know, does not yet have map-reduce capabilities.
I would recommend using the PBC client. The performance is comparable to the native Erlang client, and it is also easier to decouple your application code from Riak: the native Erlang client requires the entire Riak code base as a dependency.
From Riak 2.0 on, it is highly recommended to use the PB (Protocol Buffers) API over the HTTP API. It has become the primary API, has more functionality, and is also faster than the HTTP API.
Getting Started with Erlang client
GitHub repo for official Riak Erlang client
