Launching composed task built by DSL from stream application - spring-cloud-dataflow

Every example I've seen (the task-launcher sink and the triggertask source) shows how to launch a task defined by a uri attribute.
My task definitions look like this:
sampleTask <t2: timestamp || t1: timestamp>
sampleTask-t1 timestamp
sampleTask-t2 timestamp
sampleTaskRunner composed-task-runner --graph=sampleTask
My question is: how do I launch the composed task runner (sampleTaskRunner, defined by DSL) from a stream application?
Thanks
UPDATE
I ended up with the solution below, which triggers the task using the SCDF REST API:
composedTask definition:
<timestamp || mySampleTask>
Stream definition:
http | httpclient | log
Deployment properties:
app.http.port=81
app.httpclient.body=name=composedTask&arguments=--increment-instance-enabled=true
app.httpclient.http-method=POST
app.httpclient.url=http://localhost:9393/tasks/executions
app.httpclient.headers-expression={'Content-Type':'application/x-www-form-urlencoded'}
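For reference, the httpclient step above amounts to a single REST call against the SCDF server. Here is a minimal sketch of that same call using Python's requests library (the server address, task name, and arguments are taken from the properties above; adjust them for your environment):
# Sketch: launch the composed task through the SCDF REST API.
# Assumes the SCDF server is reachable at http://localhost:9393 and the
# task definition is named "composedTask", as in the properties above.
import requests

resp = requests.post(
    "http://localhost:9393/tasks/executions",
    data={
        "name": "composedTask",
        "arguments": "--increment-instance-enabled=true",
    },  # a dict passed via data= is sent as application/x-www-form-urlencoded
)
resp.raise_for_status()
print("Task execution response:", resp.text)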
Though it's easy to implement an http sink component, it would be great if the stream application starters provided one out of the box.
Another concern I have is discovering the SCDF REST URL when deployed in a distributed environment.

Here's a quick take from one of SCDF's R&D team members (Glenn Renfro).
stream create foozer --definition "trigger --fixed-delay=5 | tasklaunchrequest-transform --uri=maven://org.springframework.cloud.task.app:composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT --command-line-arguments='--graph=sampleTask-t1||sampleTask-t2 --increment-instance-enabled=true --spring.datasource.url=jdbc:mariadb://localhost:3306/test --spring.datasource.username=root --spring.datasource.password=password --spring.datasource.driverClassName=org.mariadb.jdbc.Driver' | task-launcher-local" --deploy
In the foozer stream definition,
1) "trigger" source happens to trigger an upstream event every 5s
2) "tasklaunchrequest-transform" processor takes a few arguments; more specifically, it uses "composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT" to launch a composed-task graph (i.e., sampleTask-t1||sampleTask-t2)
3) Pay attention to --increment-instance-enabled. This was recently added to CTR application and this provides the ability to re-launch a composed-task in a recurring cadence
4) Since the CTR and SCDF must share the same database, we are also passing datasource properties as command-line args. (SCDF-server is already started with the same datasource credentials)
Hope this helps.
Lastly, we will add a sample to the reference guide via: spring-cloud/spring-cloud-dataflow#1780

Related

ROS - How do I publish a message and get the subscribed callback immediately

I have a ROS node that allows you to "publish" a data structure to it, to which it responds by publishing an output. The timestamps of what I publish and what it publishes are matched.
Is there a mechanism for a blocking call where I publish my input and it waits until I receive the output?
I think you need the ROS services (client/server) pattern instead of publisher/subscriber.
Here is a simple example to do that in Python:
Client code snippet:
import rospy
from test_service.srv import MySrvFile

rospy.wait_for_service('a_topic')
try:
    send_hi = rospy.ServiceProxy('a_topic', MySrvFile)
    print('Client: Hi, do you hear me?')
    resp = send_hi('Hi, do you hear me?')
    print("Server: {}".format(resp.response))
except rospy.ServiceException as e:
    print("Service call failed: %s" % e)
Server code snippet:
import rospy
from test_service.srv import MySrvFile, MySrvFileResponse

def callback_function(req):
    print(req)
    return MySrvFileResponse('Hello client, your message received.')

rospy.init_node('server')
rospy.Service('a_topic', MySrvFile, callback_function)
rospy.spin()
MySrvFile.srv
string request
---
string response
Server out:
request: "Hi, do you hear me?"
Client out:
Client: Hi, do you hear me?
Server: Hello client, your message received.
Learn more in the ROS wiki.
Project repo on GitHub.
[UPDATE]
If you are looking for fast communication, TCPROS is not the right fit, because it is slower than a broker-less messaging library like ZeroMQ (which offers low latency and high throughput):
The ROS service pattern's equivalent in ZeroMQ is REQ/REP (client/server); see the sketch after this list.
The ROS publisher/subscriber pattern's equivalent in ZeroMQ is PUB/SUB.
ROS publisher/subscriber with waitForMessage is equivalent to ZeroMQ's PUSH/PULL.
ZeroMQ is available in both Python and C++.
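As a rough illustration of the REQ/REP pattern mentioned above (not from the original answer; the socket address and messages are placeholders), a minimal pyzmq sketch:
# rep_server.py -- ZeroMQ REP (server) side; the port is arbitrary
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")
while True:
    request = socket.recv_string()   # block until a request arrives
    print("Received: {}".format(request))
    socket.send_string("Hello client, your message received.")

# req_client.py -- ZeroMQ REQ (client) side
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")
socket.send_string("Hi, do you hear me?")
print(socket.recv_string())          # blocks until the reply arrives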
Also, to transfer huge amounts of data (e.g. a point cloud), ROS has a mechanism called nodelets, which is supported only in C++. This communication is based on shared memory within a single machine instead of a TCPROS socket.
What exactly is a nodelet?
Since you want to stick with publish/subscribe (assuming from your comment that services are too slow), I would have a look at waitForMessage (documentation).
And for an example of how to use it, you can have a look at this ROS Answers question.
All you need to do is publish your data, immediately call waitForMessage on the output topic, and manually pass the received message to your "callback".
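In rospy the equivalent call is rospy.wait_for_message. A minimal sketch of the publish-then-block pattern described above (topic names and the String message type are placeholders, not from the original question):
# Sketch: publish a request, then block until a reply appears on the output topic.
import rospy
from std_msgs.msg import String

rospy.init_node('blocking_caller')
pub = rospy.Publisher('input_topic', String, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect to subscribers

pub.publish(String(data='my request'))
# Block until a message arrives on the output topic (or time out after 5 s)
reply = rospy.wait_for_message('output_topic', String, timeout=5.0)
print(reply.data)  # hand the received message to your own "callback" here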
I hope this is what you were looking for.
To get this request/reply behaviour, ROS has a mechanism called ROS services.
You can specify the input and output of your service in a service file, similar to a ROS message definition. You can then call a node's service with your input, and the call returns an output when the service has finished.
Here is a tutorial on how to use this mechanism in Python. If you prefer C++, there is a tutorial for that as well.

Spring cloud dataflow - Composite Task with external configuration

I have a composite task with two cloud tasks (AAA && BBB).
I want to pass properties to the AAA and BBB tasks from a directory.
For example, the equivalent of using "--spring.config.location=directory/" when launching a Spring Boot application.
As per the documentation, I understand that we can pass properties using app.CompositeTaskName.taskname.prop1=val1.
But I want to load a bunch of configuration at launch.
So, is there a way to launch the tasks with the "spring.config.location" argument?
I found the solution. I passed "--spring.config.location" in the task definition of my composite task.
task create myctr --definition "AAA --spring.config.location=/data/prop/ '*'->BBB"
I launched the composite task "myctr" and it picked up the property files from the "/data/prop/" directory.
Documentation reference:
http://docs.spring.io/spring-cloud-dataflow/docs/1.7.4.RELEASE/reference/htmlsingle/#spring-cloud-dataflow-composed-tasks
--> Task Application Parameters

Exported Dataflow Template Parameters Unknown

I've exported a Cloud Dataflow template from Dataprep as outlined here:
https://cloud.google.com/dataprep/docs/html/Export-Basics_57344556
In Dataprep, the flow pulls in text files via wildcard from Google Cloud Storage, transforms the data, and appends it to an existing BigQuery table. All works as intended.
However, when trying to start a Dataflow job from the exported template, I can't seem to get the startup parameters right. The error messages aren't overly specific but it's clear that for one thing, I'm not getting the locations (input and output) right.
The only Google-provided template for this use case (found at https://cloud.google.com/dataflow/docs/guides/templates/provided-templates#cloud-storage-text-to-bigquery) doesn't apply as it uses a UDF and also runs in Batch mode, overwriting any existing BigQuery table rather than append.
Inspecting the original Dataflow job details from Dataprep shows a number of parameters (found in the metadata file) but I haven't been able to get those to work within my code. Here's an example of one such failed configuration:
import time
from google.cloud import storage
from googleapiclient.discovery import build
from oauth2client.client import GoogleCredentials

def dummy(event, context):
    pass

def process_data(event, context):
    credentials = GoogleCredentials.get_application_default()
    service = build('dataflow', 'v1b3', credentials=credentials)
    data = event
    gsclient = storage.Client()
    file_name = data['name']
    time_stamp = time.time()
    GCSPATH = "gs://[path to template]"
    BODY = {
        "jobName": "GCS2BigQuery_{tstamp}".format(tstamp=time_stamp),
        "parameters": {
            "inputLocations": '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name),
            "outputLocations": '{{\"location1\":\"[project]:[dataset].[table]\", [... other locations]"}}',
            "customGcsTempLocation": "gs://[my bucket]/dataflow"
        },
        "environment": {
            "zone": "us-east1-b"
        }
    }
    print(BODY["parameters"])
    request = service.projects().templates().launch(projectId=PROJECT, gcsPath=GCSPATH, body=BODY)
    response = request.execute()
    print(response)
The above example fails with an "invalid field" error on "location1", a name I pulled from a completed Dataflow job. I know I need to specify the GCS location, the template location, and the BigQuery table, but I haven't found the correct syntax anywhere. As mentioned above, I found the field names and sample values in the job's generated metadata file.
I realize that this specific use case may not ring any bells, but in general, if anyone has had success determining and using the correct startup parameters for a Dataflow job exported from Dataprep, I'd be most grateful to learn more about that. Thx.
I think you need to review this document; it explains exactly the syntax required for passing the various pipeline options available, including the location parameters needed. [1]
Specifically, in your code snippet the following line does not follow the correct syntax:
"inputLocations" : '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name)
In addition to document [1], you should also review the available pipeline options and their correct syntax. [2]
Please use the links; they are the official documentation links from Google. They will not go stale or be removed, as they are actively monitored and maintained by a dedicated team.
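One practical way to confirm the exact parameter names an exported template expects is to read the template's metadata file directly from Cloud Storage; for classic templates it sits next to the template object with a _metadata suffix. A minimal sketch (bucket and object names are placeholders, not from the question):
# Sketch: list the parameters declared in a Dataflow template's metadata file.
import json
from google.cloud import storage

client = storage.Client()
blob = client.bucket('[my bucket]').blob('[path to template]_metadata')
metadata = json.loads(blob.download_as_text())
for param in metadata.get('parameters', []):
    print(param.get('name'), '-', param.get('helpText', ''))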

How to write a custom ES sink in Flume 1.7

In the Flume agent I am collecting elements from Kafka topics and I need to insert them into ES. However, I need to perform a digestion process first in the sink, so I need to write a custom sink to pass the data from the agent's channel to a Java digestion module (which I have already written).
Can anyone share a template of a custom sink that I can use as a reference? Flume's official website doesn't say much about this topic:
A custom sink’s class and its dependencies must be included in the agent’s classpath when starting the Flume agent. The type of the custom sink is its FQCN.
https://flume.apache.org/FlumeUserGuide.html#custom-sink
And once the custom sink is ready, how can I link the following three files to make the agent work:
custom sink
ingestion jar (java module to perform the ingestion process)
FlumeAgent.properties
Thank you for any feedback. I will keep adding information as I make progress on this task.
It sounds like you are trying to use Flume to receive events from Kafka (source) and forward them to ES (sink), with some data-processing logic you already have.
With this understanding, I would suggest you look into Flume interceptors, which are responsible for altering/filtering events on the fly before they are sent to the sink.
So all your business logic to alter the events can be implemented as a custom interceptor, and it should be configured on the Flume source (as in the sample config below).
For reference, you can check out the source code of the native interceptors that are already available. This should give you an idea of the Flume interceptor framework.
Here is the ES Sink source code
Sample Flume config
a1.sources = kafkaSource
a1.sinks = ES_Sink
a1.channels = channel1
a1.sources.kafkaSource.interceptors = i1
a1.sources.kafkaSource.interceptors.i1.type = org.apache.flume.interceptor.<Custom_Interceptor_name>$Builder
a1.sinks.ES_Sink.channel = channel1
a1.sinks.ES_Sink.type = elasticsearch
a1.sinks.ES_Sink.hostNames = 127.0.0.1:9200

Aggregate-counter on an existing stream

I'm trying to create an aggregate counter for various streams I have set up. In Spring XD it would look like this: "tap:stream:MyCustomStream > aggregate-counter".
In Spring Cloud Dataflow, so far I have done ":MyKafkaTopic > aggregate-counter", which seems to create a Kafka consumer and read the payload to determine a count of events on the topic. I'd like to be able to tap any stream, not just a Kafka source, e.g. "MyApp1 | MyApp2" --name MyCustomStream.
The provided example "stream create --definition ":mainstream.http > counter" --name tap_at_http --deploy" essentially assumes mainstream.http is a Kafka topic (or RabbitMQ topic).
Anyone done this before?
Going by your example,
stream create foo --definition "MyApp1 | MyApp2"
If you had to TAP the foo stream at the producer (MyApp1) level, your TAP stream would look like the following.
stream create bar --definition ":foo.MyApp1 > MyApp3"
You're just pointing to the producer in the stream at which you'd like to TAP to get a copy of the same data. The format is :<streamName>.<label/appName>. You could use "labels" instead of app names, too (see the example below). Please review the reference guide for more details.
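For instance, a hypothetical labeled variant (the label name here is illustrative only) could be tapped by its label rather than its app name:
stream create foo --definition "producer1: MyApp1 | MyApp2"
stream create bar --definition ":foo.producer1 > MyApp3"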
The provided example "stream create --definition ":mainstream.http > counter" --name tap_at_http --deploy" essentially assumes mainstream.http is a Kafka topic (or RabbitMQ topic).
In this case, mainstream is the stream name and you're TAP'ing at the http source application, which equates to :mainstream.http.
This is analogous to tap:stream:foo in Spring XD. By default, Spring XD assumes the producer when only the stream name is given; you'd have to specify it when you TAP at a processor, though.
In SCDF, we require it explicitly to make the definition more descriptive, and the DSL is easier to follow as well.
