Spring Integration DSL :: stored procedure passing input parameter

I am trying to convert an existing stored-proc outbound gateway XML configuration into the Java DSL:
<int-jdbc:stored-proc-outbound-gateway id="my-proc"
        request-channel="myChannel"
        data-source="datasource"
        stored-procedure-name="SAMPLE_SP"
        expect-single-result="false"
        ignore-column-meta-data="true">
    <!-- Parameter Definitions -->
    <int-jdbc:sql-parameter-definition name="V_TEST_ID" direction="IN"/>
    <int-jdbc:sql-parameter-definition name="O_MSG" direction="OUT"/>
    <!-- Parameter Mappings Before Passing & Receiving -->
    <int-jdbc:parameter name="V_TEST_ID" expression="payload.testId"/>
</int-jdbc:stored-proc-outbound-gateway>
Can you please shed some light on how to pass input parameters with the DSL? Here is what I have so far:
@Bean
public StoredProcOutboundGateway spGateway() {
    StoredProcOutboundGateway storedProcOutboundGateway = new StoredProcOutboundGateway(storedProcExecutor());
    storedProcOutboundGateway.setExpectSingleResult(true);
    storedProcOutboundGateway.setRequiresReply(true);
    return storedProcOutboundGateway;
}

@Bean
public StoredProcExecutor storedProcExecutor() {
    StoredProcExecutor storedProcExecutor = new StoredProcExecutor(this.datasource);
    storedProcExecutor.setStoredProcedureName("SAMPLE_SP2");
    storedProcExecutor.setIsFunction(false);
    storedProcExecutor.setReturningResultSetRowMappers(..);
    return storedProcExecutor;
}

You need to create the procedure parameters and the SQL parameters:
@Bean
public StoredProcExecutor storedProcExecutor() {
    StoredProcExecutor storedProcExecutor = new StoredProcExecutor(this.datasource);
    storedProcExecutor.setStoredProcedureName("SAMPLE_SP2");
    storedProcExecutor.setIsFunction(false);
    storedProcExecutor.setReturningResultSetRowMappers(..);

    List<ProcedureParameter> procedureParameters = new ArrayList<>();
    procedureParameters.add(new ProcedureParameter("cdc_group_name", groupName, null));
    // TODO set output_limit from property file
    procedureParameters.add(new ProcedureParameter("output_limit", 500, null));
    storedProcExecutor.setProcedureParameters(procedureParameters);

    List<SqlParameter> sqlParameters = new ArrayList<>();
    sqlParameters.add(new SqlParameter("cdc_group_name", Types.CHAR));
    sqlParameters.add(new SqlParameter("output_limit", Types.BIGINT));
    storedProcExecutor.setSqlParameters(sqlParameters);
    return storedProcExecutor;
}
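For the parameters in your original XML specifically, a minimal sketch could look like the following. Treat it as an assumption-based illustration: the SQL types are guessed as VARCHAR, the channel and data source names are taken from your XML, and it uses a ProcedureParameter with an expression to mirror expression="payload.testId" plus a SqlOutParameter (from org.springframework.jdbc.core) for the O_MSG OUT parameter:
@Bean
public StoredProcExecutor sampleSpExecutor() {
    StoredProcExecutor executor = new StoredProcExecutor(this.datasource);
    executor.setStoredProcedureName("SAMPLE_SP");
    executor.setIgnoreColumnMetaData(true);
    // IN parameter resolved from the message payload, like expression="payload.testId" in the XML
    executor.setProcedureParameters(Collections.singletonList(
            new ProcedureParameter("V_TEST_ID", null, "payload.testId")));
    // explicit parameter definitions, needed because column metadata is ignored (types are assumptions)
    executor.setSqlParameters(Arrays.asList(
            new SqlParameter("V_TEST_ID", Types.VARCHAR),
            new SqlOutParameter("O_MSG", Types.VARCHAR)));
    return executor;
}

@Bean
public IntegrationFlow storedProcFlow() {
    // from("myChannel") replaces the request-channel attribute of the XML gateway
    return IntegrationFlows.from("myChannel")
            .handle(new StoredProcOutboundGateway(sampleSpExecutor()))
            .get();
}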

Related

Constructing a pipeline with ValueProvider.RuntimeProvider

I have a Google Dataflow job built with library version 1.9.1 that takes runtime arguments. We used TextIO.read().from().withoutValidation(). Since we migrated to Google Dataflow 2.0.0, withoutValidation has been removed in 2.0.0. The release notes page doesn't mention this: https://cloud.google.com/dataflow/release-notes/release-notes-java-2 .
We tried to pass the input as a ValueProvider.RuntimeProvider, but during pipeline construction we get the following error. If I pass it as a ValueProvider, pipeline creation tries to validate the value provider. How do I provide a runtime value provider for a TextIO input in Google Cloud Dataflow 2.0.0?
java.lang.RuntimeException: Method getInputFile should not have return type RuntimeValueProvider, use ValueProvider instead.
at org.apache.beam.sdk.options.ProxyInvocationHandler.getDefault(ProxyInvocationHandler.java:505)
I'm going to assume you are using templated pipelines, and that your pipeline is consuming runtime parameters. Here's a working example using the Cloud Dataflow SDK version 2.1.0. It reads a file from GCS (passed to the template at runtime), turns each row into a TableRow and writes to BigQuery. It's a trivial example, but it works with 2.1.0.
Program args are as follows:
--project=<your_project_id>
--runner=DataflowRunner
--templateLocation=gs://<your_bucket>/dataflow_pipeline
--stagingLocation=gs://<your_bucket>/jars
--tempLocation=gs://<your_bucket>/tmp
Program code is as follows:
public class TemplatePipeline {
    public static void main(String[] args) {
        PipelineOptionsFactory.register(TemplateOptions.class);
        TemplateOptions options = PipelineOptionsFactory
                .fromArgs(args)
                .withValidation()
                .as(TemplateOptions.class);
        Pipeline pipeline = Pipeline.create(options);
        pipeline.apply("READ", TextIO.read().from(options.getInputFile()).withCompressionType(TextIO.CompressionType.GZIP))
                .apply("TRANSFORM", ParDo.of(new WikiParDo()))
                .apply("WRITE", BigQueryIO.writeTableRows()
                        .to(String.format("%s:dataset_name.wiki_demo", options.getProject()))
                        .withCreateDisposition(CREATE_IF_NEEDED)
                        .withWriteDisposition(WRITE_TRUNCATE)
                        .withSchema(getTableSchema()));
        pipeline.run();
    }

    private static TableSchema getTableSchema() {
        List<TableFieldSchema> fields = new ArrayList<>();
        fields.add(new TableFieldSchema().setName("year").setType("INTEGER"));
        fields.add(new TableFieldSchema().setName("month").setType("INTEGER"));
        fields.add(new TableFieldSchema().setName("day").setType("INTEGER"));
        fields.add(new TableFieldSchema().setName("wikimedia_project").setType("STRING"));
        fields.add(new TableFieldSchema().setName("language").setType("STRING"));
        fields.add(new TableFieldSchema().setName("title").setType("STRING"));
        fields.add(new TableFieldSchema().setName("views").setType("INTEGER"));
        return new TableSchema().setFields(fields);
    }

    public interface TemplateOptions extends DataflowPipelineOptions {
        @Description("GCS path of the file to read from")
        ValueProvider<String> getInputFile();
        void setInputFile(ValueProvider<String> value);
    }

    private static class WikiParDo extends DoFn<String, TableRow> {
        @ProcessElement
        public void processElement(ProcessContext c) throws Exception {
            String[] split = c.element().split(",");
            TableRow row = new TableRow();
            for (int i = 0; i < split.length; i++) {
                TableFieldSchema col = getTableSchema().getFields().get(i);
                row.set(col.getName(), split[i]);
            }
            c.output(row);
        }
    }
}

DynamicDestinations in Apache Beam

I have a PCollection<String>, say "X", that I need to dump into a BigQuery table.
The table destination and its schema are in a PCollection<TableRow>, say "Y".
How do I accomplish this in the simplest manner?
I tried extracting the table and schema from "Y" and saving them in static global variables (tableName and schema respectively). But, oddly, BigQueryIO.writeTableRows() always gets the value of the variable tableName as null, while it does get the schema. I tried logging the values of those variables and I can see the values are there for both.
Here is my pipeline code:
static String tableName;
static TableSchema schema;

PCollection<String> read = p.apply("Read from input file",
        TextIO.read().from(options.getInputFile()));

PCollection<TableRow> tableRows = p.apply(
        BigQueryIO.read().fromQuery(NestedValueProvider.of(
                options.getfilename(),
                new SerializableFunction<String, String>() {
                    @Override
                    public String apply(String filename) {
                        return "SELECT table,schema FROM `BigqueryTest.configuration` WHERE file='" + filename + "'";
                    }
                })).usingStandardSql().withoutValidation());

final PCollectionView<List<String>> dataView = read.apply(View.asList());

tableRows.apply("Convert data read from file to TableRow",
        ParDo.of(new DoFn<TableRow, TableRow>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                tableName = c.element().get("table").toString();
                String[] schemas = c.element().get("schema").toString().split(",");
                List<TableFieldSchema> fields = new ArrayList<>();
                for (int i = 0; i < schemas.length; i++) {
                    fields.add(new TableFieldSchema()
                            .setName(schemas[i].split(":")[0]).setType(schemas[i].split(":")[1]));
                }
                schema = new TableSchema().setFields(fields);
                // My code to convert data to TableRow format.
            }
        }).withSideInputs(dataView));

tableRows.apply("write to BigQuery",
        BigQueryIO.writeTableRows()
                .withSchema(schema)
                .to("ProjectID:DatasetID." + tableName)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
Everything works fine; only the BigQueryIO.write operation fails, and I get the error TableId is null.
I also tried using a SerializableFunction and returning the value from there, but I still get null.
Here is the code that I tried for it:
tableRows.apply("write to BigQuery",
BigQueryIO.writeTableRows()
.withSchema(schema)
.to(new GetTable(tableName))
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
public static class GetTable implements SerializableFunction<String,String> {
String table;
public GetTable() {
this.table = tableName;
}
#Override
public String apply(String arg0) {
return "ProjectId:DatasetId."+table;
}
}
I also tried using DynamicDestinations, but I get an error saying the schema is not provided. Honestly, I'm new to the concept of DynamicDestinations and I'm not sure that I'm doing it correctly.
Here is the code that I tried for it:
tableRows2.apply(BigQueryIO.writeTableRows()
        .to(new DynamicDestinations<TableRow, TableRow>() {
            private static final long serialVersionUID = 1L;

            @Override
            public TableDestination getTable(TableRow dest) {
                List<TableRow> list = sideInput(bqDataView); // bqDataView contains table and schema
                String table = list.get(0).get("table").toString();
                String tableSpec = "ProjectId:DatasetId." + table;
                String tableDescription = "";
                return new TableDestination(tableSpec, tableDescription);
            }

            public String getSideInputs(PCollectionView<List<TableRow>> bqDataView) {
                return null;
            }

            @Override
            public TableSchema getSchema(TableRow destination) {
                return schema; // schema is getting added from the global variable
            }

            @Override
            public TableRow getDestination(ValueInSingleWindow<TableRow> element) {
                return null;
            }
        }.getSideInputs(bqDataView)));
Please let me know what I'm doing wrong and which path I should take.
Thank You.
Part of the reason you're having trouble is the two stages of pipeline execution. First, the pipeline is constructed on your machine; this is when all of the applications of PTransforms occur. In your first example, this is when the following lines are executed:
BigQueryIO.writeTableRows()
.withSchema(schema)
.to("ProjectID:DatasetID."+tableName)
The code within a ParDo, however, runs when your pipeline executes, and it does so on many machines. So the following code runs much later than pipeline construction:
@ProcessElement
public void processElement(ProcessContext c) {
    tableName = c.element().get("table").toString();
    ...
    schema = new TableSchema().setFields(fields);
    ...
}
This means that neither the tableName nor the schema field will be set when the BigQueryIO sink is created.
Your idea to use DynamicDestinations is correct, but you need to move the code that actually generates the schema and the destination into that class, rather than relying on global variables that aren't available on all of the machines.
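A minimal sketch of what that could look like, assuming the same bqDataView side input from your question (a PCollectionView<List<TableRow>> with one row holding "table" and "schema"). It encodes both values into a String destination and parses the schema in the same "name:type,..." format you already use; treat it as an illustration rather than a drop-in implementation:
tableRows.apply(BigQueryIO.writeTableRows()
        .to(new DynamicDestinations<TableRow, String>() {
            @Override
            public List<PCollectionView<?>> getSideInputs() {
                // register the side input so sideInput(...) is available at runtime
                return Collections.singletonList(bqDataView);
            }

            @Override
            public String getDestination(ValueInSingleWindow<TableRow> element) {
                // read the config row from the side input at runtime, not from a static field
                TableRow config = sideInput(bqDataView).get(0);
                return config.get("table") + "|" + config.get("schema");
            }

            @Override
            public TableDestination getTable(String destination) {
                String table = destination.split("\\|")[0];
                return new TableDestination("ProjectId:DatasetId." + table, null);
            }

            @Override
            public TableSchema getSchema(String destination) {
                List<TableFieldSchema> fields = new ArrayList<>();
                for (String col : destination.split("\\|")[1].split(",")) {
                    String[] nameType = col.split(":");
                    fields.add(new TableFieldSchema().setName(nameType[0]).setType(nameType[1]));
                }
                return new TableSchema().setFields(fields);
            }
        })
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
The point is that the table and schema are computed inside getDestination/getTable/getSchema on the workers, from the side input, instead of being captured in static variables during pipeline construction.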

How to add multiple SimpleMessageListenerContainer beans dynamically through application.properties

Below is our program; we create multiple containers for different queues from properties in application.properties. But right now it is static: when we add another property, we must change the code.
I want to add containers dynamically. I have investigated several solutions:
1. Use the BeanFactory.registerSingleton method, but then the bean cannot receive lifecycle callbacks, so I'm not sure the container can shut down gracefully.
2. Use a BeanFactoryPostProcessor, but that requires building a BeanDefinition, and I have no idea how to construct a BeanDefinition for a SimpleMessageListenerContainer, because it will be created by SimpleRabbitListenerContainerFactory.
Can anybody give me a better solution that both adds beans dynamically and lets the SimpleMessageListenerContainer be started and shut down normally?
@Bean
@ConditionalOnProperty(name = "pmc.multiple.hypervisor.reply.routerkey.kvm")
public SimpleMessageListenerContainer kvmReplyQueueConsumer() {
    return getSimpleMessageListenerContainer(environment
            .getProperty("pmc.multiple.hypervisor.reply.routerkey.kvm"));
}

@Bean
@ConditionalOnProperty(name = "pmc.multiple.hypervisor.reply.routerkey.vmware")
public SimpleMessageListenerContainer vmwareReplyQueueConsumer() {
    return getSimpleMessageListenerContainer(environment
            .getProperty("pmc.multiple.hypervisor.reply.routerkey.vmware"));
}

@Bean
@ConditionalOnProperty(name = "pmc.multiple.hypervisor.reply.routerkey.powervc")
public SimpleMessageListenerContainer powervcReplyQueueConsumer() {
    return getSimpleMessageListenerContainer(environment
            .getProperty("pmc.multiple.hypervisor.reply.routerkey.powervc"));
}

@Autowired
private SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory;

private SimpleMessageListenerContainer getSimpleMessageListenerContainer(String queueName) {
    return simpleRabbitListenerContainerFactory.createContainerInstance();
}
Take all the properties you need (for example, by matching a regexp) and then register the beans you want. There are two separate tasks here: (1) how to get the Spring properties, and (2) how to register a bean dynamically.
1) To iterate over properties the 'Spring way':
@Autowired
Properties props;
....
for (Entry<Object, Object> e : props.entrySet()) {
    if ( /* some code to match */ ) {
        // dispatch bean creation
    }
}
2) You can create beans dynamically with a BeanFactoryPostProcessor:
public class MyClassPostRegister implements BeanFactoryPostProcessor {
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        // create bean definition:
        GenericBeanDefinition beanDefinition = new GenericBeanDefinition();
        beanDefinition.setBeanClass(MyBeanClass.class);
        beanDefinition.setLazyInit(false);
        beanDefinition.setAbstract(false);
        beanDefinition.setAutowireCandidate(true);
        beanDefinition.setScope("prototype");
        beanFactory.registerBeanDefinition("dynamicBean", beanDefinition);
    }
}
Appendix after a comment from @GrapeBaBa ("Actually I use simpleRabbitListenerContainerFactory.createContainerInstance() to create the container, so how do I transform that to use a BeanDefinition?") - please pay attention to the lines marked with (!!!).
Create your own component:
@Component
public class MyClassPostRegister implements BeanFactoryPostProcessor {
    @Autowired
    Properties props; // this gives you access to all properties

    // the following is an example of filtering by name
    static final Pattern myInterestingProperties =
            Pattern.compile("pmc\\.multiple\\.hypervisor\\.reply\\.routerkey\\..+");
Add post-process handler:
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        // iterate through properties
        for (Entry<Object, Object> e : props.entrySet()) {
            Matcher m = myInterestingProperties.matcher(e.getKey().toString());
            if (!m.matches())
                continue;
            // create bean definition:
            GenericBeanDefinition beanDefinition = new GenericBeanDefinition();
            beanDefinition.setBeanClass(SimpleMessageListenerContainer.class);
            beanDefinition.setLazyInit(false);
            beanDefinition.setAbstract(false);
            beanDefinition.setAutowireCandidate(true);
            beanDefinition.setScope("prototype");
            // !!! Now specify the name of the factory method
            // (for an instance factory method the factory bean name must also be set, see the sketch below)
            beanDefinition.setFactoryMethodName("getSimpleMessageListenerContainer");
            // !!! Now specify the factory arguments:
            ConstructorArgumentValues v = new ConstructorArgumentValues();
            v.addGenericArgumentValue(e.getKey()); // string
            beanDefinition.setConstructorArgumentValues(v);
            // use a unique bean name per property so the definitions don't overwrite each other
            beanFactory.registerBeanDefinition("dynamicBean-" + e.getKey(), beanDefinition);
        }
    }
}
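One detail the loop above leaves open: getSimpleMessageListenerContainer(...) is an instance method on a configuration bean, so the definition also needs the name of that factory bean before Spring can invoke the method. A minimal sketch of just that part, assuming (hypothetically) that the configuration class holding the method is registered under the bean name "rabbitConfig" and that the queue name comes from the property value:
// replaces the factory-method lines inside the loop above
GenericBeanDefinition bd = new GenericBeanDefinition();
bd.setFactoryBeanName("rabbitConfig");                        // hypothetical name of the bean that owns the factory method
bd.setFactoryMethodName("getSimpleMessageListenerContainer"); // instance factory method
ConstructorArgumentValues args = new ConstructorArgumentValues();
args.addGenericArgumentValue(e.getValue());                   // queue name taken from the property value
bd.setConstructorArgumentValues(args);
beanFactory.registerBeanDefinition("replyQueueConsumer-" + e.getKey(), bd);
Beans registered this way go through the normal lifecycle, so the containers are started and stopped with the application context.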

How to list all binding variables with GroovyShell

I'm very new to Groovy. How can I list all the variables I passed to the Binding constructor?
Consider that I have the following:
@Test
public void test() {
    List<String> outputNames = Arrays.asList("returnValue", "ce");
    String script = getScript();
    Script compiledScript = compileScript(script);
    CustomError ce = new CustomError("shit", Arrays.asList(new Long(1)));
    Map<String, Object> inputObjects = new HashMap<String, Object>();
    inputObjects.put("input", "Hovada");
    inputObjects.put("error", ce);
    Binding binding = new Binding(inputObjects);
    compiledScript.setBinding(binding);
    compiledScript.run();
    for (String outputName : outputNames) {
        System.out.format("outputName : %s = %s", outputName, binding.getVariable(outputName));
    }
}

private Script compileScript(String script) {
    GroovyShell groovyShell = new GroovyShell();
    Script compiledScript = groovyShell.parse(script);
    return compiledScript;
}
How can I iterate over all the variables (over the HashMap) in the Groovy Script?
The compiled Script represents the script; if you look at its source code, you'll see that it has a binding property with a getter and setter, and Binding has a variable called "variables". So you go:
binding.variables.each {
    println it.key
    println it.value
}
For Map<String, String> ...
You can also set properties like this:
Binding binding = new Binding(inputObjects);
compiledScript.setBinding(binding);
compiledScript.setProperty("prop", "value");
compiledScript.run();
and it is stored in the Binding's variables.
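If you want the same iteration from the Java side (as in the test above), the Binding exposes its variables map directly; a minimal sketch:
// Binding.getVariables() returns the underlying (raw) Map of script variables
Map<?, ?> vars = binding.getVariables();
for (Map.Entry<?, ?> entry : vars.entrySet()) {
    System.out.format("%s = %s%n", entry.getKey(), entry.getValue());
}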

Rhino, e4x and generating URLs in XHTML

I'm using Rhino to generate XHTML, but my URLs are being encoded, as in:
http://www.example.com/test.html?a=b&c=d
becomes
http://www.example.com/test.html?a=b&amp;c=d
A failing test case is as follows:
public class E4XUrlTest extends TestCase {
    public void testJavascript() throws Exception {
        final Context context = new ContextFactory().enterContext();
        context.setLanguageVersion(Context.VERSION_1_7);
        try {
            final ScriptableObject scope = new Global(context);
            final Script compiledScript = context.compileReader(
                    new StringReader("<html><body><a href={'blah.html?id=2345&name=345'}></a></body></html>"), "test", 1, null);
            HashMap<String, Object> variables = new HashMap<String, Object>();
            Set<Entry<String, Object>> entrySet = variables.entrySet();
            for (Entry<String, Object> entry : entrySet) {
                ScriptableObject.putProperty(scope, entry.getKey(), Context.javaToJS(entry.getValue(), scope));
            }
            Object exec = compiledScript.exec(context, scope);
            String html = exec.toString();
            System.out.println(html);
            assertTrue(html.indexOf("id=2345&name") > 0);
        } finally {
            Context.exit();
        }
    }
}
Any ideas?
Actually the encoding "&amp;name" is correct in XHTML, since &name; is NOT a valid XHTML entity. All browsers understand the URL correctly. So you need to fix your test rather than looking to break your correct XHTML.
:-) stw
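Following that advice, a minimal adjustment to the failing assertion (assuming the test should accept the escaped ampersand that E4X emits):
// expect the XML-escaped ampersand in the generated XHTML
assertTrue(html.indexOf("id=2345&amp;name") > 0);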
