Conditional Flow - Spring Cloud Dataflow

In Spring Cloud Data Flow (Stream), what is the right generic way to implement conditional flow based on a processor's output?
For example, in the case below, how can execution flow through different paths based on the product price emitted by a transformer?
Example Process Diagram

You can use the router-sink to dynamically decide where to send the data based on the upstream event-data.
There is simple SpEL support as well as more comprehensive support via a Groovy script, either of which can help with decision making and conditions. See the README.
If you foresee more complex conditions and process workflows, you could alternatively build a custom processor that does the dynamic routing to various downstream named-channel destinations. This approach also helps with unit/IT testing the business logic standalone.
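As a minimal sketch of the custom-processor approach (the destination names and the assumption that the event payload exposes a price field are purely illustrative, not part of the original answer), the routing decision can be kept as a plain function so it is trivially unit-testable, mirroring what a router SpEL expression such as payload.price > 100 ? 'expensive-products' : 'cheap-products' would do:

```java
// Hypothetical routing logic a custom processor might apply before sending
// an event to a named destination; names here are illustrative only.
public class PriceRouter {

    // Returns the named destination a product event should be routed to,
    // based on its price.
    static String route(double price) {
        if (price > 100.0) {
            return "expensive-products";
        }
        return "cheap-products";
    }

    public static void main(String[] args) {
        System.out.println(route(250.0)); // expensive-products
        System.out.println(route(19.99)); // cheap-products
    }
}
```

Keeping the decision in its own function means the branching logic can be tested without bringing up any messaging middleware.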

Related

Spring Cloud Data Flow : Sample application to have multiple inputs and outputs

Does anyone have a sample application demonstrating Spring Cloud Data Flow with multiple inputs and outputs support?
These sample apps use multiple inputs and outputs.
The code from the acceptance tests provides an illustration of usage.
In Data Flow you start by adding only one app to the DSL with one of the required properties; then, on deployment, you remove any extra properties and provide all the properties needed to connect the various streams.
The properties refer to each app by its registered name.
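As a hedged illustration of what those deployment properties could look like (the app names and destination name below are made up, not taken from the samples), Spring Cloud Stream binding destinations are used to connect the apps by their registered names:

```properties
# Point one app's output binding and another app's input binding at the
# same named destination so the streams connect (illustrative names only):
app.router.spring.cloud.stream.bindings.output.destination=orders
app.enricher.spring.cloud.stream.bindings.input.destination=orders
```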

Sample application for using flows with wolkenkit

Can we get a sample application describing the flow behaviour mentioned in its documentation?
https://docs.wolkenkit.io/2.0.0/reference/creating-stateful-flows/overview/
A stateful flow is a state machine whose transitions are caused by events. Whenever a stateful flow transitions, it is able to run a reaction, such as sending commands or running tasks. Using their state they have knowledge of their past. This way you can use them to create complex workflows that include conditions and loops, e.g. to notify a user once an invoice has been rejected for the third time in a row.
I think a good place to start is the wolkenkit-boards sample application (see a list of all available sample applications here). It basically is an application for collaboratively organizing notes, similar to tools such as Trello.
It contains a tip of the day workflow, which actually uses a stateful flow under the hood.
I hope this helps 😊

Conditional iterations in Google cloud dataflow

I am looking at the opportunities for implementing a data analysis algorithm using Google Cloud Dataflow. Mind you, I have no experience with dataflow yet. I am just doing some research on whether it can fulfill my needs.
Part of my algorithm contains some conditional iterations, that is, continue until some condition is met:
PCollection data = ...
while (needsMoreWork(data)) {
    data = doAStep(data);
}
I have looked around in the documentation and as far as I can see I am only able to do "iterations" if I know the exact number of iterations before the pipeline starts. In this case my pipeline construction code can just create a sequential pipeline with fixed number of steps.
The only "solution" I can think of is to run each iteration in separate pipelines, store the intermediate data in some database, and then decide in my pipeline construction whether or not to launch a new pipeline for the next iteration. This seems to be an extremely inefficient solution!
Are there any good ways to perform this kind of additional iterations in Google cloud dataflow?
Thanks!
For the time being, the two options you've mentioned are both reasonable. You could even combine the two approaches. Create a pipeline which does a few iterations (becoming a no-op if needsMoreWork is false), and then have a main Java program that submits that pipeline multiple times until needsMoreWork is false.
We've seen this use case a few times and hope to address it natively in the future. Native support is being tracked in https://github.com/GoogleCloudPlatform/DataflowJavaSDK/issues/50.
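The "main program resubmits the pipeline" approach can be sketched as a plain driver loop. To keep the control flow visible (and testable), the pipeline submission is stubbed here as a function over an integer "dataset"; in a real setup, runOnce would build and run a Dataflow pipeline and needsMoreWork would inspect its persisted output. All names are illustrative assumptions:

```java
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Sketch of the outer-driver pattern: keep submitting the (stubbed)
// pipeline until needsMoreWork reports false.
public class IterativeDriver {

    static int iterate(int data,
                       UnaryOperator<Integer> runOnce,
                       Predicate<Integer> needsMoreWork) {
        while (needsMoreWork.test(data)) {
            data = runOnce.apply(data); // one full pipeline submission
        }
        return data;
    }

    public static void main(String[] args) {
        // Toy stand-in: "work" doubles the value until it reaches 100.
        int result = iterate(3, x -> x * 2, x -> x < 100);
        System.out.println(result); // 192
    }
}
```

Each loop iteration corresponds to one pipeline run, so the per-iteration overhead (job submission, reading and writing intermediate storage) is exactly the inefficiency the question points out; the loop merely automates it.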

How to implement OData federation for Application integration

I have to integrate various legacy applications with some newly introduced parts that are silos of information, built at different times with varying architectures. At times these applications may need to get data from another system, if it exists, and display it to the user within their own screens based on business needs.
I was looking to see if it's possible to implement a generic federation engine that abstracts the aggregation of data from various other OData endpoints and provides a single version of truth.
A simplistic example could be as below.
I am not really looking to do an ETL here as that may introduce some data related side effects in terms of staleness etc.
Can someone share some ideas as to how this can be achieved, or point me to any article on the net that shows such a concept?
Regards
Kiran
Officially, the answer is to use either the reflection provider or a custom provider.
Support for multiple data sources (odata)
Allow me to expose entities from multiple sources
To decide between the two approaches, take a look at this article.
If you decide that you need to build a custom provider, the referenced article also contains links to a series of other articles that will help you through the learning process.
Your project seems non-trivial, so in addition I recommend looking at other resources like the WCF Data Services Toolkit to help you along.
By the way, from an architecture standpoint, I believe your idea is sound. Yes, you may have some domain logic behind OData endpoints, but I've always believed this logic should be thin as OData is primarily used as part of data access layers, much like SQL (as opposed to service layers which encapsulate more behavior in the traditional sense). Even if that thin logic requires your aggregator to get a little smart, it's likely that you'll always be able to get away with it using a custom provider.
That being said, if the aggregator itself encapsulates a lot of behavior (as opposed to simply aggregating and re-exposing raw data), you should consider using another protocol that is less data-oriented (but keep using the OData backends in that service). Since domain logic is normally heavily specific, there's very rarely a one-size-fits-all type of protocol, so you'd naturally have to design it yourself.
However, if the aggregated data is exposed mostly as-is or with essentially structural changes (little to no behavior besides assembling the raw data), I think using OData again for that central component is very appropriate.
Obviously, and as you can see in the comments to your question, not everybody would agree with all of this -- so as always, take it with a grain of salt.

is there an easy free way to create test case management in Jira

I think the only part I don't get is how you handle the run results. If I set up a new project in Jira for test cases, how would I make it so I can mark a test case as pass or fail but not close out the Jira issue?
So basically, I want the original Jira issue to always stay open, and then be able to mark it passed or failed against a specific release. The original issue should stay unchanged and just somehow log a result set.
I do not have Bamboo.
Does that make any sense?
We have setup a simple custom workflow in Jira without using Confluence.
We added one new issue type - Test Case. And we have a new sub-task - Test Run.
Test Case has only three workflow actions: Pass, Fail and Invalid (the last one is to make Test Case redundant). And two statuses - Open and Invalid.
Test Run is automatically created when Test Case passes or fails. Users do not manually create test runs. We use one of the plugins to create a subtask on transition.
Test Run can be in a Passed or Failed state and has version info, user who passed or failed and a comment.
This is it.
Here are some links that I used to setup Jira for Test Case Management:
Test Case Management in Jira
Using Jira for Test Case Management
Create On Transition Plugin
The approach we are following is as follows
We use Confluence for implementing our test cases.
Each test case has its own page describing the setup, the scenario to run and all possible outcomes.
We have a test library page which is the parent of all these test cases.
When we want to start a validation cycle on a particular release, we use a script which generates, for each test case in Confluence, a corresponding 'test run' issue.
(#DennisG - JIRA allows you to define different issue types, each with its own workflow)
The summary is the summary of the test case.
The description is the scenario and outcome of the test case.
We have a specific Confluence link referring to the test case.
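As an illustration only (this is not the author's script; the project key, field values, and issue type name below are assumptions), such a generator could build the payload that Jira's REST API expects for issue creation (POST /rest/api/2/issue):

```java
// Hypothetical sketch of the JSON body a test-run generator might send to
// Jira's issue-creation endpoint. Field values are illustrative; a real
// script would also need authentication and an HTTP client.
public class TestRunIssuePayload {

    static String payload(String projectKey, String summary, String description) {
        return "{\"fields\":{"
                + "\"project\":{\"key\":\"" + projectKey + "\"},"
                + "\"summary\":\"" + summary + "\","
                + "\"description\":\"" + description + "\","
                + "\"issuetype\":{\"name\":\"Test Run\"}}}";
    }

    public static void main(String[] args) {
        System.out.println(payload("QA", "Login form validation",
                "Scenario and expected outcome copied from the Confluence test case"));
    }
}
```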
The testrun issue workflow contains 4 stages
Open
In Progress
Blocked
Closed
And 3 resolutions
Success
Failure
Review testcase
We then start validating all 'test run' issues.
Using dashboard gadgets it is easy to see how many testcases still need to be run, how many are blocked, how many have been done, and how many have failed ...
In case the resolution is 'review testcase' we have the ability to adapt the testcase itself.
Conclusion - JIRA is certainly usable as a test execution management environment. Confluence, as a wiki, provides an environment to build the necessary hierarchies (technical, functional).
Last point.
We start to extensively use Bonfire (a plugin for JIRA)
http://www.atlassian.com/en/software/bonfire
This shortens the 'manual' testing cycle considerably.
For us it had an ROI of a couple of weeks.
Hope this helps,
Francis
PS. If you're interested to use the script send me a note.
We are using a test case management tool called informup.
It integrates with Jira.
In addition, it is fully integrated into the system, so if you want to use it as both a test case management and a bug tracking system, you can do that as well.
You can use PractiTest, a test management tool that integrates with JIRA. PractiTest covers your entire QA process, so you can use it to create Requirements, Tests and Test Sets, and use the integration option to report issues in JIRA. You can also link between the different entities.
Read more about PractiTest's integration with JIRA.
To be honest, I'm not sure that using JIRA (or any other bug/issue tracking tool) as a test management tool is a good idea. The problem with this is that issue trackers usually have a single main entity (the issue), whereas test management tools usually distinguish between test cases and actual tests/results. This way you can easily reuse the same test case for different releases and also store a history of test results. Additional entities such as test runs and test suites also usually make it a lot easier to manage and track your data. So instead of using Jira for test management, you might want to consider using a dedicated test management software that integrates with Jira. There are many test management tools out there, including open source projects:
http://www.opensourcetestmanagement.com/
You could also take a look at our tool TestRail, which also comes with Jira integration:
http://www.gurock.com/testrail/
http://www.gurock.com/testrail/jira-test-management.i.html
Have you tried looking in Jira's plugin directory at https://plugins.atlassian.com to see what's available to extend the core functionality? There may be something there that could be installed.
There are tools out there that combine both issue tracking and test management (e.g. elementool.com), however if you are after a more feature rich issue tracking experience, you may need to start looking at dedicated tools.
If after looking around you find that there are no suitable solutions to enable you to have things in one place, you may want to take a look at TestLodge test case management, which is a tool I have developed that integrates easily with Jira.
Why not just integrate JIRA with a test management tool? So, for example, we use Kualitee. It integrates with JIRA and provides traceability from the defect reported to the underlying test case and requirements. So, you can run your entire QA cycle on Kualitee and sync and assign defects to your team in JIRA.
You can also generate your test case execution reports and bug reports and export them so you are all set on that front as well.
