Google Dataflow: java.lang.IllegalArgumentException: Cannot setCoder(null) - google-cloud-dataflow

I am trying to build a custom sink for unzipping files, using this simple code:
public static class ZipIO {
    public static class Sink extends com.google.cloud.dataflow.sdk.io.Sink<String> {
        private static final long serialVersionUID = -7414200726778377175L;
        private final String unzipTarget;

        public Sink withDestinationPath(String s) {
            if (s != null && !s.isEmpty()) {
                return new Sink(s);
            } else {
                throw new IllegalArgumentException("must assign destination path");
            }
        }

        protected Sink(String path) {
            this.unzipTarget = path;
        }

        @Override
        public void validate(PipelineOptions po) {
            if (unzipTarget == null) {
                throw new RuntimeException();
            }
        }

        @Override
        public ZipFileWriteOperation createWriteOperation(PipelineOptions po) {
            return new ZipFileWriteOperation(this);
        }
    }

    private static class ZipFileWriteOperation extends WriteOperation<String, UnzipResult> {
        private static final long serialVersionUID = 7976541367499831605L;
        private final ZipIO.Sink sink;

        public ZipFileWriteOperation(ZipIO.Sink sink) {
            this.sink = sink;
        }

        @Override
        public void initialize(PipelineOptions po) throws Exception {
        }

        @Override
        public void finalize(Iterable<UnzipResult> writerResults, PipelineOptions po) throws Exception {
            long totalFiles = 0;
            for (UnzipResult r : writerResults) {
                totalFiles += r.filesUnzipped;
            }
            LOG.info("Unzipped {} Files", totalFiles);
        }

        @Override
        public ZipIO.Sink getSink() {
            return sink;
        }

        @Override
        public ZipWriter createWriter(PipelineOptions po) throws Exception {
            return new ZipWriter(this);
        }
    }

    private static class ZipWriter extends Writer<String, UnzipResult> {
        private final ZipFileWriteOperation writeOp;
        public long totalUnzipped = 0;

        ZipWriter(ZipFileWriteOperation writeOp) {
            this.writeOp = writeOp;
        }

        @Override
        public void open(String uID) throws Exception {
        }

        @Override
        public void write(String p) {
            System.out.println(p);
        }

        @Override
        public UnzipResult close() throws Exception {
            return new UnzipResult(this.totalUnzipped);
        }

        @Override
        public ZipFileWriteOperation getWriteOperation() {
            return writeOp;
        }
    }

    private static class UnzipResult implements Serializable {
        private static final long serialVersionUID = -8504626439217544799L;
        public long filesUnzipped = 0;

        public UnzipResult(long filesUnzipped) {
            this.filesUnzipped = filesUnzipped;
        }
    }
}
The processing fails with this error:
Exception in thread "main" java.lang.IllegalArgumentException: Cannot setCoder(null)
at com.google.cloud.dataflow.sdk.values.TypedPValue.setCoder(TypedPValue.java:67)
at com.google.cloud.dataflow.sdk.values.PCollection.setCoder(PCollection.java:150)
at com.google.cloud.dataflow.sdk.io.Write$Bound.createWrite(Write.java:380)
at com.google.cloud.dataflow.sdk.io.Write$Bound.apply(Write.java:112)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner$BatchWrite.apply(DataflowPipelineRunner.java:2118)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner$BatchWrite.apply(DataflowPipelineRunner.java:2099)
at com.google.cloud.dataflow.sdk.runners.PipelineRunner.apply(PipelineRunner.java:75)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.apply(DataflowPipelineRunner.java:465)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.apply(BlockingDataflowPipelineRunner.java:169)
at com.google.cloud.dataflow.sdk.Pipeline.applyInternal(Pipeline.java:368)
at com.google.cloud.dataflow.sdk.Pipeline.applyTransform(Pipeline.java:275)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.apply(DataflowPipelineRunner.java:463)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.apply(BlockingDataflowPipelineRunner.java:169)
at com.google.cloud.dataflow.sdk.Pipeline.applyInternal(Pipeline.java:368)
at com.google.cloud.dataflow.sdk.Pipeline.applyTransform(Pipeline.java:291)
at com.google.cloud.dataflow.sdk.values.PCollection.apply(PCollection.java:174)
at com.mcd.de.tlogdataflow.StarterPipeline.main(StarterPipeline.java:93)
Any help is appreciated.
Thanks & BR
Philipp

This crash is caused by a bug in the Dataflow Java SDK (specifically, this line) which was also present in the Apache Beam (incubating) Java SDK.
The method Sink.WriteOperation#getWriterResultCoder() must always be overridden, but we failed to mark it abstract. It is fixed in Beam, but unchanged in the Dataflow SDK. You should override this method and return an appropriate coder.
You have some options to come up with the coder:
- Write your own small coder class, wrapping one of VarLongCoder or BigEndianLongCoder.
- Just use a long instead of the UnzipResult structure, so you can use those coders as-is.
- Less advisable due to the excess encoded size, you could use SerializableCoder.of(UnzipResult.class) (see the sketch below).
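For instance, a minimal sketch of that last option, added inside ZipFileWriteOperation (assuming UnzipResult stays Serializable; Coder and SerializableCoder live in com.google.cloud.dataflow.sdk.coders):

    @Override
    public Coder<UnzipResult> getWriterResultCoder() {
        // SerializableCoder works for any Serializable class, at the cost of a larger encoding.
        return SerializableCoder.of(UnzipResult.class);
    }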

Related

Spring Boot RabbitMQ: no consumer connected

I'm using Spring Boot and RabbitMQ to receive a message.
The first consumer I created works, declared as below:
@Component
public class UserConsumer {
    @Autowired
    private RabbitTemplate template;

    @RabbitListener(queues = MessagingConfig.CONSUME_QUEUE)
    public void consumeMessageFromQueue(MassTransitRequest userRequest) {
        ...
    }
}
I then needed a second consumer, so I duplicated the above and gave it another name:
@Component
public class PackConsumer {
    @Autowired
    private RabbitTemplate template;

    @RabbitListener(queues = MessagingConfig.CONSUME_QUEUE_CREATE_PACK)
    public void consumeMessageFromQueue(MassTransitRequest fileRequest) {
        ...
    }
}
Everything works locally on my machine; however, when I deploy it, the new queue does not process messages because there is no consumer connected to it. The UserConsumer continues to work.
Is there something else I should be doing in order to connect to the new queue at the same time as the original?
During my learning I added a "MessagingConfig" class, shown below; however, I believe it relates to sending messages rather than receiving them, or that an alternative configuration is needed:
@Configuration
public class MessagingConfig {
    public static final String CONSUME_QUEUE = "merge-document-request";
    public static final String CONSUME_EXCHANGE = "merge-document-request";
    public static final String CONSUME_ROUTING_KEY = "";
    public static final String PUBLISH_QUEUE = "merge-document-response";
    public static final String PUBLISH_EXCHANGE = "merge-document-response";
    public static final String PUBLISH_ROUTING_KEY = "";
    public static final String CONSUME_QUEUE_CREATE_PACK = "create-pack-request";
    public static final String CONSUME_EXCHANGE_CREATE_PACK = "create-pack-request";
    public static final String CONSUME_ROUTING_KEY_CREATE_PACK = "";
    public static final String PUBLISH_QUEUE_CREATE_PACK = "create-pack-response";
    public static final String PUBLISH_EXCHANGE_CREATE_PACK = "create-pack-response";
    public static final String PUBLISH_ROUTING_KEY_CREATE_PACK = "";

    @Bean
    public Queue queue() {
        return new Queue(CONSUME_QUEUE);
    }

    @Bean
    public TopicExchange exchange() {
        return new TopicExchange(CONSUME_EXCHANGE);
    }

    @Bean
    public Binding binding(Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(CONSUME_ROUTING_KEY);
    }

    @Bean
    public MessageConverter converter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public AmqpTemplate template(ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setMessageConverter(converter());
        return rabbitTemplate;
    }
}
Thanks in advance
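One thing to check: the config above only declares the first queue, its exchange, and its binding. If the create-pack queue was never declared on the deployed broker, the new listener has nothing to attach to (locally the queue may already exist). A minimal sketch of the extra declarations the config would need, with hypothetical bean names mirroring the existing queue(), exchange(), and binding() beans:

    @Bean
    public Queue createPackQueue() {
        return new Queue(CONSUME_QUEUE_CREATE_PACK);
    }

    @Bean
    public TopicExchange createPackExchange() {
        return new TopicExchange(CONSUME_EXCHANGE_CREATE_PACK);
    }

    @Bean
    public Binding createPackBinding() {
        // Bind the new queue to its exchange with the (empty) routing key.
        return BindingBuilder.bind(createPackQueue()).to(createPackExchange()).with(CONSUME_ROUTING_KEY_CREATE_PACK);
    }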

java.lang.NoSuchMethodError using JDBI

public class MyApplication extends Application<MyConfiguration> {
    final static Logger LOG = Logger.getLogger(MyApplication.class);

    public static void main(final String[] args) throws Exception {
        new MyApplication().run(args);
    }

    @Override
    public String getName() {
        return "PFed";
    }

    @Override
    public void initialize(final Bootstrap<MyConfiguration> bootstrap) {
        // TODO: application initialization
        bootstrap.addBundle(new DBIExceptionsBundle());
    }

    @Override
    public void run(final MyConfiguration configuration,
            final Environment environment) {
        // TODO: implement application
        final DBIFactory factory = new DBIFactory();
        final DBI jdbi = factory.build(environment, configuration.getDataSourceFactory(), "postgresql");
        UserDAO userDAO = jdbi.onDemand(UserDAO.class);
        userDAO.findNameById(1);
        UserResource userResource = new UserResource(new UserService(userDAO));
        environment.jersey().register(userResource);
    }
}
I get the following error at findNameById:
java.lang.NoSuchMethodError: java.lang.Object.findNameById(I)Ljava/lang/String;
at org.skife.jdbi.v2.sqlobject.CloseInternalDoNotUseThisClass$$EnhancerByCGLIB$$a0e63670.CGLIB$findNameById$5()
public interface UserDAO {
    @SqlQuery("select userId from user where id = :email")
    User isEmailAndUsernameUnique(@Bind("email") String email);

    @SqlQuery("select name from something where id = :id")
    String findNameById(@Bind("id") int id);
}

Default to content_type application/json with overridden isFatal from DefaultExceptionStrategy

I'd like to not require my clients to provide content_type application/json but just default to it. I got this working.
I also tried to combine with another example to implement a custom isFatal(Throwable t) from ConditionalRejectingErrorHandler. I can get my custom error handler to fire, but then it seems to require the content_type property again. I can't figure out how to get them both to work at the same time.
Any ideas?
The below successfully works to not require content_type.
EDIT: The below code does not work as I thought; an old message in the queue that already had the content_type application/json property must have been pulled in.
@EnableRabbit
@Configuration
public class ExampleRabbitConfigurer implements RabbitListenerConfigurer {
    @Value("${spring.rabbitmq.host:'localhost'}")
    private String host;
    @Value("${spring.rabbitmq.port:5672}")
    private int port;
    @Value("${spring.rabbitmq.username}")
    private String username;
    @Value("${spring.rabbitmq.password}")
    private String password;

    @Autowired
    private MappingJackson2MessageConverter mappingJackson2MessageConverter;
    @Autowired
    private DefaultMessageHandlerMethodFactory messageHandlerMethodFactory;

    @Bean
    public MappingJackson2MessageConverter mappingJackson2MessageConverter() {
        return new MappingJackson2MessageConverter();
    }

    @Bean
    public DefaultMessageHandlerMethodFactory messageHandlerMethodFactory() {
        DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
        factory.setMessageConverter(mappingJackson2MessageConverter);
        return factory;
    }

    @Override
    public void configureRabbitListeners(final RabbitListenerEndpointRegistrar registrar) {
        registrar.setMessageHandlerMethodFactory(messageHandlerMethodFactory);
    }
}
The below works to override isFatal() in ConditionalRejectingErrorHandler. SimpleRabbitListenerContainerFactory.setMessageConverter() seems like it should serve the same purpose as DefaultMessageHandlerMethodFactory.setMessageConverter(), but obviously that is not the case.
@EnableRabbit
@Configuration
public class ExampleRabbitConfigurer {
    @Value("${spring.rabbitmq.host:'localhost'}")
    private String host;
    @Value("${spring.rabbitmq.port:5672}")
    private int port;
    @Value("${spring.rabbitmq.username}")
    private String username;
    @Value("${spring.rabbitmq.password}")
    private String password;

    @Autowired
    ConnectionFactory connectionFactory;
    @Autowired
    Jackson2JsonMessageConverter jackson2JsonConverter;
    @Autowired
    ErrorHandler amqpErrorHandlingExceptionStrategy;

    @Bean
    public Jackson2JsonMessageConverter jackson2JsonConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public ErrorHandler amqpErrorHandlingExceptionStrategy() {
        return new ConditionalRejectingErrorHandler(new AmqpErrorHandlingExceptionStrategy());
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(jackson2JsonConverter);
        factory.setErrorHandler(amqpErrorHandlingExceptionStrategy);
        return factory;
    }

    public static class AmqpErrorHandlingExceptionStrategy extends ConditionalRejectingErrorHandler.DefaultExceptionStrategy {
        private final Logger LOGGER = org.slf4j.LoggerFactory.getLogger(getClass());

        @Override
        public boolean isFatal(Throwable t) {
            if (t instanceof ListenerExecutionFailedException) {
                ListenerExecutionFailedException lefe = (ListenerExecutionFailedException) t;
                LOGGER.error("Failed to process inbound message from queue "
                        + lefe.getFailedMessage().getMessageProperties().getConsumerQueue()
                        + "; failed message: " + lefe.getFailedMessage(), t);
            }
            return super.isFatal(t);
        }
    }
}
Use an "after receive" MessagePostProcessor to add the contentType header to the inbound message.
Starting with version 2.0, you can add the MPP to the container factory.
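For example, the factory-level equivalent would look like this (a minimal sketch; setAfterReceivePostProcessors is only available on the container factory starting with Spring AMQP 2.0):

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Force the content type before the message reaches the converter.
        factory.setAfterReceivePostProcessors(m -> {
            m.getMessageProperties().setContentType("application/json");
            return m;
        });
        return factory;
    }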
For earlier versions you can reconfigure...
@SpringBootApplication
public class So47424449Application {
    public static void main(String[] args) {
        SpringApplication.run(So47424449Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(RabbitListenerEndpointRegistry registry, RabbitTemplate template) {
        return args -> {
            SimpleMessageListenerContainer container =
                    (SimpleMessageListenerContainer) registry.getListenerContainer("myListener");
            container.setAfterReceivePostProcessors(m -> {
                m.getMessageProperties().setContentType("application/json");
                return m;
            });
            container.start();
            // send a message with no content type
            template.setMessageConverter(new SimpleMessageConverter());
            template.convertAndSend("foo", "{\"bar\":\"baz\"}", m -> {
                m.getMessageProperties().setContentType(null);
                return m;
            });
            template.convertAndSend("foo", "{\"bar\":\"qux\"}", m -> {
                m.getMessageProperties().setContentType(null);
                return m;
            });
        };
    }

    @Bean
    public Jackson2JsonMessageConverter converter() {
        return new Jackson2JsonMessageConverter();
    }

    @RabbitListener(id = "myListener", queues = "foo", autoStartup = "false")
    public void listen(Foo foo) {
        System.out.println(foo);
        if (foo.bar.equals("qux")) {
            throw new MessageConversionException("test");
        }
    }

    public static class Foo {
        public String bar;

        public String getBar() {
            return this.bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }

        @Override
        public String toString() {
            return "Foo [bar=" + this.bar + "]";
        }
    }
}
As you can see, since it modifies the source message, the modified header is available in the error handler...
2017-11-22 09:39:26.615 WARN 97368 --- [cTaskExecutor-1] ingErrorHandler$DefaultExceptionStrategy : Fatal message conversion error; message rejected; it will be dropped or routed to a dead letter exchange, if so configured: (Body:'{"bar":"qux"}' MessageProperties [headers={}, contentType=application/json, contentEncoding=UTF-8, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=, receivedRoutingKey=foo, deliveryTag=2, consumerTag=amq.ctag-re1kcxKV14L_nl186stM0w, consumerQueue=foo])

Extension function using Saxon s9api

I am trying to add an extension function, but it is failing with:
Caused by: net.sf.saxon.trans.XPathException: Unknown system function follow()
at net.sf.saxon.expr.parser.XPathParser.grumble(XPathParser.java:282)
I can see (in debug) that the function is registered with the integrated library. I was expecting Saxon to look for the function in the integrated library, but it searches the system functions instead and throws the error. What is causing this function to be treated as a system function?
I am using the following :
<dependency>
    <groupId>net.sf.saxon</groupId>
    <artifactId>Saxon-HE</artifactId>
    <version>9.7.0-14</version>
</dependency>
Thank you.
My code is:
import net.sf.saxon.expr.XPathContext;
import net.sf.saxon.lib.ExtensionFunctionCall;
import net.sf.saxon.lib.ExtensionFunctionDefinition;
import net.sf.saxon.om.Sequence;
import net.sf.saxon.om.StructuredQName;
import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.XPathCompiler;
import net.sf.saxon.s9api.XPathExecutable;
import net.sf.saxon.trans.XPathException;
import net.sf.saxon.value.SequenceType;

public class FollowTest {
    public static void main(String[] args) throws Exception {
        new FollowTest().test();
    }

    private void test() throws Exception {
        Processor proc = new Processor(false);
        proc.registerExtensionFunction(new Follow());
        XPathCompiler xx = proc.newXPathCompiler();
        XPathExecutable x = xx.compile("follow(/a/b/c)/type='xyz'");
    }

    public class Follow extends ExtensionFunctionDefinition {
        @Override
        public StructuredQName getFunctionQName() {
            return new StructuredQName("", "http://example.com/saxon-extension", "follow");
        }

        @Override
        public int getMinimumNumberOfArguments() {
            return 1;
        }

        @Override
        public int getMaximumNumberOfArguments() {
            return 1;
        }

        @Override
        public SequenceType[] getArgumentTypes() {
            return new net.sf.saxon.value.SequenceType[] { SequenceType.SINGLE_STRING };
        }

        @Override
        public SequenceType getResultType(SequenceType[] suppliedArgumentTypes) {
            return SequenceType.NODE_SEQUENCE;
        }

        @Override
        public boolean trustResultType() {
            return true;
        }

        @Override
        public boolean dependsOnFocus() {
            return false;
        }

        @Override
        public boolean hasSideEffects() {
            return false;
        }

        @Override
        public ExtensionFunctionCall makeCallExpression() {
            return new followCall();
        }

        private class followCall extends ExtensionFunctionCall {
            @Override
            public Sequence call(XPathContext context, Sequence[] arguments) throws XPathException {
                return null;
            }
        }
    }
}
In the XPath expression you have written
follow(/a/b/c)
A function name with no namespace prefix is assumed to be in the default namespace for functions, which by default is the system function namespace http://www.w3.org/2005/xpath-functions. You need to use a prefix that's bound to the URI appearing in the extension function definition, namely http://example.com/saxon-extension.
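A minimal sketch of the corrected compilation step (the prefix name is arbitrary; it just has to be declared against the namespace used in getFunctionQName()):

    XPathCompiler xx = proc.newXPathCompiler();
    // Bind a prefix to the extension function's namespace before compiling.
    xx.declareNamespace("ext", "http://example.com/saxon-extension");
    XPathExecutable x = xx.compile("ext:follow(/a/b/c)/type='xyz'");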

Jedis Cache implementation without JedisPool/commons-pool2-2.0

How can I implement Jedis without JedisPool/commons-pool2-2.0? We are still using JDK 1.5, and commons-pool2-2.0 does not support it.
How can I implement thread-safe connection pooling?
I'm not sure about Jedis compatibility with Java 5. You can create your own pooling based on the older commons-pool 1.6 library; you do not need commons-pool2 on your class path to run Jedis. I used Jedis 2.7.3 and commons-pool 1.6 to validate the solution approach.
Here is the example code:
import org.apache.commons.pool.ObjectPool;
import org.apache.commons.pool.PoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;
import redis.clients.jedis.Jedis;

public class JedisWithOwnPooling {
    public static void main(String[] args) throws Exception {
        ObjectPool<Jedis> pool = new GenericObjectPool<Jedis>(new JedisFactory("localhost"));
        Jedis j = pool.borrowObject();
        System.out.println(j.ping());
        pool.returnObject(j);
        pool.close();
    }

    private static class JedisFactory implements PoolableObjectFactory<Jedis> {
        private String host;

        /**
         * Add fields as you need. This is only an example.
         */
        public JedisFactory(String host) {
            this.host = host;
        }

        @Override
        public Jedis makeObject() throws Exception {
            return new Jedis(host);
        }

        @Override
        public void destroyObject(Jedis jedis) throws Exception {
            jedis.close();
        }

        @Override
        public boolean validateObject(Jedis jedis) {
            return jedis.isConnected();
        }

        @Override
        public void activateObject(Jedis jedis) throws Exception {
            if (!jedis.isConnected()) {
                jedis.connect();
            }
        }

        @Override
        public void passivateObject(Jedis jedis) throws Exception {
        }
    }
}
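On the thread-safety question: a single Jedis connection is not safe to share between threads, but GenericObjectPool is thread-safe, so each thread should borrow its own Jedis from the shared pool and return it when done, as main() does above.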
