How does Spring Cloud Data Flow take care of distributed processing? If the server is deployed on PCF and say there are 2 instances, how will the input data be distributed between these 2 instances?
Also, how are failures handled when deployed on PCF? PCF will spawn a new instance for the failed one, but will it also take care of redeploying the stream, or is manual intervention required there?
You should make the distinction between what the Spring Cloud Dataflow documentation calls "the server" and the apps that make up a managed stream.
"The server" is only here to receive deployment requests and honor them, spawning apps that make up your stream(s). If you deploy multiple instances of "the server", then there is nothing special about it. PCF will front it with a LB and either instance will handle your REST requests. When deploying on PCF, state is maintained in a bound service, so there is nothing special here.
If you're rather referring to "the apps", i.e. deploying a stream with some or all of its parts using more than one instance, e.g.
stream create foo --definition "time | log"
stream deploy foo --properties "app.log.count=3"
then by default, it's up to the binder implementation to choose how to distribute data. This often means round robin balancing.
If you want data pertaining to the same conceptual domain object to always end up on the same app instance, you should tell Dataflow how to partition it. Something like
stream deploy bar --properties "app.x.producer.partitionKeyExpression=<someDomainConcept>"
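If a SpEL expression against the payload isn't expressive enough (say the key has to be derived from several fields), Spring Cloud Stream also lets the producer app register a PartitionKeyExtractorStrategy bean and reference it from the binding's producer properties (the exact property name depends on the Spring Cloud Stream version). A minimal sketch, assuming a hypothetical Order payload with a customerId field:

import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PartitioningConfig {

    // Hypothetical domain object carried as the message payload.
    public static class Order {
        private String customerId;
        public String getCustomerId() { return customerId; }
        public void setCustomerId(String customerId) { this.customerId = customerId; }
    }

    @Bean
    public PartitionKeyExtractorStrategy orderKeyExtractor() {
        // Messages sharing the same customerId hash to the same partition,
        // and therefore land on the same consumer app instance.
        return message -> ((Order) message.getPayload()).getCustomerId();
    }
}

Either way, the consumer side of the binding needs to be marked as partitioned; when Dataflow deploys the stream, it takes care of setting the instance index/count for each app instance.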
As for handling failures, I'm not sure what you're asking. The deployed apps are the stream. Once a request to have that many instances of the stream components has been sent to and received by PCF, it will take care of honouring that request. It's out of the hands of Dataflow at that point, and this is exactly why the boundary for the Spring Cloud Deployer contract has been set there (same for other runtimes).
Suppose you have a micro-service architecture with two services, A and B, each running 3 instances.
A is a web service receiving web requests, and B is a CLI-based application listening for events from a queue.
Now you want to deploy a new version of B, but the instances of B may be processing information at that moment.
How can it be deployed, replacing old instances with new ones, without breaking in-flight work?
Is there any tool, pattern, or strategy that handles this scenario?
You need a simple strategy where the instance of B that is about to be deployed stops serving new requests.
If it consumes events over REST, you can work at the load balancer: with Consul and consul-template you can detach that instance from the load balancer. Allow some drain time, say 5 minutes (which you need to evaluate for your workload), and then start the deployment.
This approach is necessary if you are not sure how to find out whether the current instance has finished processing the events it has already received.
If the events are consumed from a message queue, you can expose an endpoint which, when called, disables new event consumption, and then apply the same wait-and-deploy strategy.
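A minimal sketch of such an endpoint, assuming the events arrive over RabbitMQ through Spring AMQP's @RabbitListener infrastructure (the /drain and /resume paths are made up for illustration):

import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DrainController {

    private final RabbitListenerEndpointRegistry registry;

    public DrainController(RabbitListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @PostMapping("/drain")
    public String drain() {
        // Stops all @RabbitListener containers; queued messages stay on the broker
        // and will be picked up again once a consumer starts.
        registry.stop();
        return "consumers stopped";
    }

    @PostMapping("/resume")
    public String resume() {
        registry.start();
        return "consumers started";
    }
}

Call /drain on the instance you are about to replace, wait for in-flight work to finish, deploy, then let the new instance start consuming again.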
We have been using Spring Batch for the use cases below:
Read data from a file, process and write to the target database (the batch kicks off when a file arrives).
Read data from a remote database, process and write to the target database (runs on a scheduled interval, triggered by Autosys).
With the plan to move all online apps to Spring Boot microservices and PCF, we are looking at doing a similar exercise on the batch side if it adds value.
In the new world, the Spring Cloud batch job task will be reading the file from S3 storage (ECSS3).
I am looking for a good design here (staying away from too many pipes/filters and orchestration if possible); the input data ranges from 1MM to 20MM records.
ECSS3 will notify on file arrival by sending an HTTP request; the workflow would be: cloud stream http source -> launch a cloud batch job task that reads from the object store, processes and saves records to the target database.
A Spring Cloud job task triggered by the PCF Scheduler reads from the remote database, processes and saves to the target database.
With the above design, I don't see the value of wrapping the Spring Batch job into a cloud task and running it on PCF with Spring Cloud Data Flow.
Am I missing something here? Is PCF/Spring Cloud Data Flow overkill in this case?
Orchestrating batch jobs in a cloud setting can bring new benefits to the solution. For instance, the resiliency model that PCF supports could be useful: Spring Cloud Task (SCT) applications typically run in a short-lived container, and if one goes down, PCF will bring it back up and rerun it.
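For reference, wrapping an existing Spring Batch job as a Spring Cloud Task is mostly a matter of adding @EnableTask to the Boot app, so SCDF or the PCF Scheduler can launch it as a short-lived app. A minimal sketch (job, step, and class names are illustrative):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@EnableTask
@EnableBatchProcessing
@SpringBootApplication
public class FileLoadTaskApplication {

    @Bean
    public Step loadStep(StepBuilderFactory steps) {
        // Placeholder tasklet; in practice this would be a chunk-oriented step with
        // a reader over the object store, a processor, and a JDBC writer.
        return steps.get("loadStep")
                .tasklet((contribution, chunkContext) -> RepeatStatus.FINISHED)
                .build();
    }

    @Bean
    public Job loadJob(JobBuilderFactory jobs, Step loadStep) {
        return jobs.get("loadJob").start(loadStep).build();
    }

    public static void main(String[] args) {
        SpringApplication.run(FileLoadTaskApplication.class, args);
    }
}

The app runs to completion and exits; its task and job executions are recorded in the bound database, which is what the SCDF Dashboard (and Apps Manager) monitoring builds on.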
Both of the options listed above are feasible, and it comes down to the use case and the frequency at which you're processing the incoming data. Whether it is really real-time, or whether it can happily run on a schedule, is something you'd have to determine to make the decision.
As for the applicability of Spring Cloud Data Flow (SCDF) + PCF, again, it comes down to your business requirements. You may not be using it now, but Spring Batch Admin is EOL in favor of SCDF's Dashboard. The following questions might help realize the SCDF + SCT value proposition.
Do you have to monitor the overall batch jobs' status, progress, and health? Maybe you have requirements to assemble multiple batch jobs as a DAG? How about visually composing a series of Tasks and orchestrating them entirely from the Dashboard?
Also, when the batch jobs are used together with SCT, SCDF, and the PCF Scheduler, you get the benefit of monitoring all of this from PCF Apps Manager.
Normally we run the Spring Cloud Data Flow server jar on one machine, but what if, over time, we create many flows and the server gets overloaded and becomes a single point of failure? Is there a way to run the Spring Cloud Data Flow server jar on another machine and shift the flows onto it, so that we can avoid such failures and make our complete system more resilient and robust? Or does this expansion happen automatically when we deploy the complete system on PCF/Cloud Foundry?
SCDF is a simple Boot application. It doesn't retain any state about the stream/task applications itself, but it does keep track of the DSL definitions in the database.
It is common to provision multiple instances of SCDF-server and a load balancer in front for resiliency.
In PCF specifically, if you scale the SCDF-server to more than one instance, PCF will automatically load-balance the incoming traffic (from the SCDF Shell/GUI). It is also important to note that PCF will automatically restart an application instance if it goes down for any reason. You will be set up for multiple levels of resiliency this way.
I am quite new to Quartz.NET, but was able to create a running solution for my problem.
There are remote server instances, which run as Windows services. The job store for these instances is an AdoJobStore with an SQLite backend.
The client application is able to run jobs remotely through remote scheduler proxies.
Now I have to combine remote execution with clustering. Here I am struggling with instantiating the scheduler proxies for the remote servers. When a scheduler is created on the client side, addresses and ports are configured explicitly via the properties of the scheduler factory.
In an architecture with a cluster consisting of several remote services and one client, which has to start jobs on these servers using Quartz.NET's load-balancing feature, explicitly directing each job at a specific server address makes no sense to me.
So, how should the client app hand the jobs to the cluster, and how does the cluster have to be configured (for example, a list of server IP addresses and ports to be used)?
In addition: how do the Quartz.NET server instances have to share the database, and how will this work with serverless SQLite?
Thanks for any tips useful for the further reading I have to do,
Mario
Meanwhile I was able to get my system to work. The answer to my question “Combination of remoting & clustering” is: Do not combine these features, as it is not necessary.
For the implementation of a distributed cluster, don’t use remoting at all (hard to figure out when your first development step was creating a client with a single remote server).
Distribution of jobs and therefore all “connecting” of instances is done by using the same database, which has to be centralized for that reason (using SQL Express now).
Don’t start your local (client) scheduler instance.
Don’t worry about all the local worker threads that appear even though all the work should be carried out by the remote servers in the cluster. My expectation would have been to use a scheduler with 0 threads in the local application, as you do not want to start any job within this app.
Unsolved problem: there seems to be no way to register a listener which will be called when a job is executed in the cluster. So I have to build my own feedback channel (job --> starting app) in order to track the status of jobs (start time, finish time, node where execution has taken place, ..).
Unsolved problem: when the local (WPF) application is closed by the user, an endless loop in SimpleThreadPool
while (runnable == null && run)
{
Monitor.Wait(lockObject, 500);
}
prevents the process from exiting.
I have a platform (based on Rails 4/Postgres) running on an auto-scaling Elastic Beanstalk web environment. I'm planning on offloading long-running tasks (syncing with 3rd parties, delivering email, etc.) to a Worker tier, which appears simple enough to get up and running.
However, I also want to run periodic batch processes. I've looked into using cron.yaml and the scheduling seems pretty simple; however, the batch process I'm trying to build needs access to the data from the web application in order to work.
Does anybody have an opinion on the best way of doing this? Either a shared RDS database between the web and worker tiers, or perhaps a web service that the worker tier can access?
Thanks,
Dan
Note: I've added an extra question, which more broadly describes my requirements, as it struck me that this might not be the best approach:
What's the best way to implement this shared batch process with Elastic Beanstalk?
Unless you need a full relational database management system (RDBMS), consider using S3 for shared persistent data storage across your instances.
Also consider Amazon Simple Queue Service (SQS):
SQS is a fast, reliable, scalable, fully managed message queuing service. SQS makes it simple and cost-effective to decouple the components of a cloud application. You can use SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available.