I'm using flume to do something like this
Source --> interceptor --> Channel --> multiplexing --> HDFS Sink
                                                   |--> Null Sink
I would like to add a channel just after the source, but I don't want the events to pass through the interceptor; I would like the "raw" events. Like this:
Source --> interceptor (i) --> Channel --> multiplexing --> HDFS Sink
   |                                                   |--> Null Sink
   |-------> Channel (must not be intercepted by i) --> HDFS
How can I do it?
Thanks
Since interceptors are configured per source, you will have to add a second source (configured with no interceptors at all and listening on a different HTTP port) and emit your data twice: one copy to the source with interceptors, and one copy to the other source.
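For that first option, a minimal config sketch could look like the following (the agent, source, channel and sink names, ports and HDFS paths are all assumptions, and the multiplexing selector plus Null Sink from your existing setup are omitted for brevity):

a1.sources = srcIntercepted srcRaw
a1.channels = chIntercepted chRaw
a1.sinks = hdfsSink hdfsRawSink

# Existing source: keeps the interceptor (and, in your case, the multiplexing selector)
a1.sources.srcIntercepted.type = http
a1.sources.srcIntercepted.port = 44441
a1.sources.srcIntercepted.interceptors = i
a1.sources.srcIntercepted.interceptors.i.type = com.example.MyInterceptor$Builder
a1.sources.srcIntercepted.channels = chIntercepted

# New source: receives the same data a second time, no interceptors, different port
a1.sources.srcRaw.type = http
a1.sources.srcRaw.port = 44442
a1.sources.srcRaw.channels = chRaw

a1.channels.chIntercepted.type = memory
a1.channels.chRaw.type = memory

# Intercepted events go to one HDFS path, raw events to another
a1.sinks.hdfsSink.type = hdfs
a1.sinks.hdfsSink.channel = chIntercepted
a1.sinks.hdfsSink.hdfs.path = /flume/intercepted

a1.sinks.hdfsRawSink.type = hdfs
a1.sinks.hdfsRawSink.channel = chRaw
a1.sinks.hdfsRawSink.hdfs.path = /flume/raw

With this layout the client has to post every event to both ports, so the raw copy never touches the interceptor.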
Another possibility is to chain two agents. The first one contains a single source with no interceptors and two sinks: one persisting the data as-is in HDFS, and the other feeding the agent you already have. I mean:
src-->ch-->multip-->sink ----------> src-->int-->ch-->multip-->hdfssink
              |-->hdfssink                             |-->nullsink
(_________agent1________)            (____________agent2_____________)
I am using JMeter for functional testing and have 2 different JMX files.
The first JMX has all the APIs automated, and the second JMX is used to send the HTML report (generated using the Ant-JMeter task) through the SMTP sampler.
Now, I want to send the counts of Total, Pass and Fail samples in the same email by parsing the JTL file generated by the first JMX.
Here is what I can see in the JTL file: s="true" and s="false".
I want the counts of these, saved as properties so I can use them further in the SMTP sampler.
Example in jtl:
<sample t="2" it="0" lt="2" ct="0" ts="1565592433268" s="false" lb="Verify Latest Patch" rc="200" rm="OK" tn="Tenant_Login 3-1" dt="text" by="9" sby="0" ng="1" na="1">
Any help will be appreciated.
Add the next line to the user.properties file:
jmeter.save.saveservice.autoflush=true
It will instruct JMeter to write results to the file as soon as they are available.
Add tearDown Thread Group to your Test Plan
Add HTTP Request sampler to the TearDown Thread Group
Configure it as follows:
Protocol: file
Path: location of your .jtl results file
Add XPath Extractor as a child of the HTTP Request sampler
Configure it as follows:
Reference Name: anything meaningful, e.g. successCount
XPath query: count(//sample[@s='true'])
That's it; now you should be able to refer to the successful sample count as ${successCount} where required.
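If you also need the failed and total counts, you could add two more XPath Extractors configured the same way (the reference names below are just examples):
Reference Name: failureCount, XPath query: count(//sample[@s='false'])
Reference Name: totalCount, XPath query: count(//sample)
Since the extractors store JMeter variables rather than properties, if your SMTP sampler runs in a different thread group you may need to promote them to properties, e.g. with ${__setProperty(successCount,${successCount},)}.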
I am new to message broker development. I tried to convert a source SOAP-over-XML file to a target SOAP-over-XML file. In my message flow, the source message is routed to the catch terminal and I am not able to find the problem.
My flow: MQInput node ---> Compute node --> MQOutput node
Any solution to this would be helpful.
DECLARE soapenv CHARACTER 'SOAP-ENV';
SET OutputRoot.XMNLSC.soapenv:Envelope.soapenv:Body.params.ORIGIN_TYPE_CD = InputRoot.XMNLSC.soapenv:Envelope.soapenv:Body.params.originType;
Your first line is definitely wrong, but you should be able to see that from the exceptions you are getting.
The first line should be:
DECLARE soapenv NAMESPACE 'http://schemas.xmlsoap.org/soap/envelope/';
And in the following lines, the message domain should be XMLNSC, not XMNLSC.
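Putting both corrections together, the two statements from your snippet should look like this (the element names under params are kept from your original code):
DECLARE soapenv NAMESPACE 'http://schemas.xmlsoap.org/soap/envelope/';
SET OutputRoot.XMLNSC.soapenv:Envelope.soapenv:Body.params.ORIGIN_TYPE_CD = InputRoot.XMLNSC.soapenv:Envelope.soapenv:Body.params.originType;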
Every example I've seen (task-launcher sink and triggertask source) shows how to launch the task defined by the uri attribute.
My task definitions look like this:
sampleTask <t2: timestamp || t1: timestamp>
sampleTask-t1 timestamp
sampleTask-t2 timestamp
sampleTaskRunner composed-task-runner --graph=sampleTask
My question is: how do I launch the composed task runner (sampleTaskRunner, defined by the DSL) from a stream application?
Thanks
UPDATE
I ended up with the solution below, which triggers the task using the SCDF REST API:
composedTask definition:
<timestamp || mySampleTask>
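(In SCDF shell terms, that composed task would be registered roughly like this, assuming the task name composedTask referenced in the properties below:
dataflow:>task create composedTask --definition "<timestamp || mySampleTask>")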
Stream definition:
http | httpclient | log
Deployment properties:
app.http.port=81
app.httpclient.body=name=composedTask&arguments=--increment-instance-enabled=true
app.httpclient.http-method=POST
app.httpclient.url=http://localhost:9393/tasks/executions
app.httpclient.headers-expression={'Content-Type':'application/x-www-form-urlencoded'}
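For clarity, the httpclient processor configured above effectively issues the following request against the SCDF task-execution endpoint (a sketch, assuming the server is reachable at localhost:9393 as configured):
curl -X POST http://localhost:9393/tasks/executions \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'name=composedTask&arguments=--increment-instance-enabled=true'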
Though it's easy to implement an HTTP sink component, it would be great if the stream application starters provided one out of the box.
Another concern I have is discovering the SCDF REST URL when deployed in a distributed environment.
Here's a quick take from one of the SCDF R&D team members (Glenn Renfro).
stream create foozer --definition "trigger --fixed-delay=5 | tasklaunchrequest-transform --uri=maven://org.springframework.cloud.task.app:composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT --command-line-arguments='--graph=sampleTask-t1||sampleTask-t2 --increment-instance-enabled=true --spring.datasource.url=jdbc:mariadb://localhost:3306/test --spring.datasource.username=root --spring.datasource.password=password --spring.datasource.driverClassName=org.mariadb.jdbc.Driver' | task-launcher-local" --deploy
In the foozer stream definition,
1) "trigger" source happens to trigger an upstream event every 5s
2) "tasklaunchrequest-transform" processor takes a few arguments; more specifically, it uses "composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT" to launch a composed-task graph (i.e., sampleTask-t1||sampleTask-t2)
3) Pay attention to --increment-instance-enabled. This was recently added to CTR application and this provides the ability to re-launch a composed-task in a recurring cadence
4) Since the CTR and SCDF must share the same database, we are also passing datasource properties as command-line args. (SCDF-server is already started with the same datasource credentials)
Hope this helps.
Lastly, we will add a sample to the reference guide via: spring-cloud/spring-cloud-dataflow#1780
If I have a source app (named A-Source) which has multiple channels to emit messages, e.g.
channelA.destination=b-topic, channelB.destination=c-topic.
The receiver for b-topic is B-Sink, and for c-topic it is C-Sink.
How can I construct my streams, describing them like A|B and A|C? If I do so, I think only part of my A-Source code is useful in each stream.
So my question is: how does the SCDF stream DSL deal with multiple taps on a single source app?
You can use named channel destinations in the Stream DSL.
For example:
dataflow:>stream create tap1 --definition ":b-topic > B-Sink"
dataflow:>stream create tap2 --definition ":c-topic > C-Sink"
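For the producing side, a rough sketch (the stream name and binding property are assumptions, matching the channelA/channelB destinations you already have) would be to deploy A-Source once, sending its primary output to one named destination and routing the second channel via a binding property at deployment time:
dataflow:>stream create main --definition "A-Source > :b-topic"
dataflow:>stream deploy main --properties "app.A-Source.spring.cloud.stream.bindings.channelB.destination=c-topic"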
In the Flume agent I am collecting the elements from Kafka topics and I need to insert them into ES. However, I need to perform a prior digestion process in the sink, so I need to write a custom sink to pass the data from the agent's channel to a Java digestion module (which I have already written).
Can anyone share a template of a custom sink that I can use as a reference? Flume's official website doesn't say much about this topic:
A custom sink’s class and its dependencies must be included in the agent’s classpath when starting the Flume agent. The type of the custom sink is its FQCN.
https://flume.apache.org/FlumeUserGuide.html#custom-sink
And once the custom sink is ready, how can I link the following three files to make the agent work:
custom sink
ingestion jar (java module to perform the ingestion process)
FlumeAgent.properties
Thank you for any feedback. I will keep adding information as I progress on this task.
It sounds like you are trying to use Flume to receive events from Kafka (source) and forward them to ES (sink), with some data-processing logic you already have.
With this understanding, I would suggest you look into Flume interceptors, which are responsible for altering/filtering events on the fly before they are sent on to the sink.
So all your business logic to alter the events can be implemented as a custom interceptor, and it should be configured on the Flume source.
For reference, you can check out the source code of the native interceptors that are already available. That should give you an idea of the Flume interceptor framework.
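In case a skeleton helps, here is a minimal sketch of such a custom interceptor (the package and class names are placeholders, and the digest() call stands in for your existing ingestion module):

package com.example.flume;  // package name is an assumption

import java.util.ArrayList;
import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class DigestionInterceptor implements Interceptor {

    @Override
    public void initialize() {
        // initialise your digestion module here if it needs setup
    }

    @Override
    public Event intercept(Event event) {
        // apply the digestion logic to the event body; returning null drops the event
        event.setBody(digest(event.getBody()));
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        List<Event> out = new ArrayList<>(events.size());
        for (Event event : events) {
            Event intercepted = intercept(event);
            if (intercepted != null) {
                out.add(intercepted);
            }
        }
        return out;
    }

    @Override
    public void close() {
        // release any resources held by the digestion module
    }

    private byte[] digest(byte[] body) {
        // placeholder for the logic in your existing ingestion jar
        return body;
    }

    // Referenced from the agent config as <FQCN>$Builder
    public static class Builder implements Interceptor.Builder {
        @Override
        public Interceptor build() {
            return new DigestionInterceptor();
        }

        @Override
        public void configure(Context context) {
            // read interceptor properties from the agent config if needed
        }
    }
}

The static Builder class is what the $Builder suffix in the config below refers to.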
Here is the ES Sink source code
Sample Flume config (the Kafka source and channel lines added below are assumptions; adjust them to your environment):
a1.sources = kafkaSource
a1.sinks = ES_Sink
a1.channels = channel1
# Kafka source and channel settings below are assumed placeholders
a1.sources.kafkaSource.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.kafkaSource.kafka.bootstrap.servers = localhost:9092
a1.sources.kafkaSource.kafka.topics = your_topic
a1.sources.kafkaSource.channels = channel1
a1.sources.kafkaSource.interceptors = i1
a1.sources.kafkaSource.interceptors.i1.type = org.apache.flume.interceptor.<Custom_Interceptor_name>$Builder
a1.channels.channel1.type = memory
a1.sinks.ES_Sink.channel = channel1
a1.sinks.ES_Sink.type = elasticsearch
a1.sinks.ES_Sink.hostNames = 127.0.0.1:9200
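As for wiring the three pieces together: a rough sketch (the paths, jar names and the agent name a1 are assumptions) is to put the custom interceptor jar and your ingestion jar on the agent's classpath, for example under plugins.d, and then start the agent with your properties file:
mkdir -p $FLUME_HOME/plugins.d/my-ingestion/lib
cp custom-interceptor.jar ingestion.jar $FLUME_HOME/plugins.d/my-ingestion/lib/
flume-ng agent --conf $FLUME_HOME/conf --conf-file FlumeAgent.properties --name a1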