ksqlDB Java client error during PUSH query

I am trying to run a basic example of a PUSH query using the ksqlDB Java client:
public class Main {
    public static String KSQLDB_SERVER_HOST = "localhost";
    public static int KSQLDB_SERVER_HOST_PORT = 8088;

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ClientOptions options = ClientOptions.create()
                .setHost(KSQLDB_SERVER_HOST)
                .setPort(KSQLDB_SERVER_HOST_PORT);
        Client client = Client.create(options);
        client.streamQuery("SELECT * FROM users EMIT CHANGES;")
                .thenAccept(streamedQueryResult -> {
                    System.out.println("Query has started. Query ID: " + streamedQueryResult.queryID());
                    RowSubscriber subscriber = new RowSubscriber();
                    streamedQueryResult.subscribe(subscriber);
                }).exceptionally(e -> {
                    System.out.println("Request failed: " + e);
                    return null;
                });
        client.close();
    }
}
I run the Confluent environment using this docker-compose file:
https://github.com/confluentinc/cp-all-in-one/blob/6.0.0-post/cp-all-in-one/docker-compose.yml and created a users topic with data.
But I get an exception from the io.netty.resolver.AddressResolverGroup class, in the getResolver(final EventExecutor executor) method:
Request failed: java.util.concurrent.CompletionException:
java.lang.IllegalStateException: executor not accepting a task
But everything works fine when I run the same query synchronously:
StreamedQueryResult streamedQueryResult = client.streamQuery("SELECT * FROM users EMIT CHANGES;").get();
for (int i = 0; i < 10; i++) {
    // Block until a new row is available
    Row row = streamedQueryResult.poll();
    if (row != null) {
        System.out.println("Row: " + row.values());
    }
}

You should not close the client while the push query is still streaming. streamQuery() is asynchronous, so client.close() runs immediately and shuts down the client's underlying event loop before the subscription can start, which is what produces the "executor not accepting a task" error.
Remove the client.close(); (or defer it until you are done consuming rows) to test. A corrected sketch is shown below.
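For illustration, a minimal sketch of the corrected main method, keeping the JVM alive with a CountDownLatch (an assumption; any other way of waiting works) and reusing the RowSubscriber class from the question:

import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class Main {
    public static String KSQLDB_SERVER_HOST = "localhost";
    public static int KSQLDB_SERVER_HOST_PORT = 8088;

    public static void main(String[] args) throws Exception {
        ClientOptions options = ClientOptions.create()
                .setHost(KSQLDB_SERVER_HOST)
                .setPort(KSQLDB_SERVER_HOST_PORT);
        Client client = Client.create(options);

        CountDownLatch failed = new CountDownLatch(1);

        client.streamQuery("SELECT * FROM users EMIT CHANGES;")
                .thenAccept(streamedQueryResult -> {
                    System.out.println("Query has started. Query ID: " + streamedQueryResult.queryID());
                    // RowSubscriber is the subscriber class from the question.
                    streamedQueryResult.subscribe(new RowSubscriber());
                }).exceptionally(e -> {
                    System.out.println("Request failed: " + e);
                    failed.countDown();
                    return null;
                });

        // Keep the client (and its event loop) alive while rows stream in;
        // here we simply wait up to 60 seconds, or until the request fails.
        failed.await(60, TimeUnit.SECONDS);
        client.close();
    }
}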

Related

Reading data from a Kinesis stream unsuccessfully

I am working with Amazon Kinesis Data Streams. My Kinesis stream consists of only one shard.
I am trying to read data (records) from the stream after writing some data (records) to the same stream. My records are simple JSONs.
I can see both the reads and the writes in the Amazon console.
When I try to print the content of the record with record.getData(), I get this output followed by an error:
java.nio.HeapByteBuffer[pos=4 lim=4 cap=4]
20:35:59.118 [RecordProcessor-0000] WARN com.kinesisconsumer.AmazonKinesisApplicationSampleRecordProcessor - Caught throwable while processing record UserRecord [subSequenceNumber=0, explicitHashKey=null, aggregated=false, getSequenceNumber()=49593662497507120518174908605360552573875197411355262978, getData()=java.nio.HeapByteBuffer[pos=4 lim=4 cap=4], getPartitionKey()=12345]
java.lang.StringIndexOutOfBoundsException: String index out of range: -9
at java.lang.String.substring(String.java:1931)
at com.kinesisconsumer.AmazonKinesisApplicationSampleRecordProcessor.processSingleRecord(AmazonKinesisApplicationSampleRecordProcessor.java:112)
at com.kinesisconsumer.AmazonKinesisApplicationSampleRecordProcessor.processRecordsWithRetries(AmazonKinesisApplicationSampleRecordProcessor.java:75)
at com.kinesisconsumer.AmazonKinesisApplicationSampleRecordProcessor.processRecords(AmazonKinesisApplicationSampleRecordProcessor.java:53)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.V1ToV2RecordProcessorAdapter.processRecords(V1ToV2RecordProcessorAdapter.java:42)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask.callProcessRecords(ProcessTask.java:221)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask.call(ProcessTask.java:176)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:49)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:24)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Here is my code :
public class AmazonKinesisApplicationRecordProcessorFactory implements IRecordProcessorFactory {
/**
* {@inheritDoc}
*/
@Override
public IRecordProcessor createProcessor() {
return new AmazonKinesisApplicationSampleRecordProcessor();
}
}
public final class AmazonKinesisApplicationSample {
public static final String SAMPLE_APPLICATION_STREAM_NAME = "LimorKinesis";
private static final String SAMPLE_APPLICATION_NAME = "SampleKinesisApplication";
// Initial position in the stream when the application starts up for the first time.
// Position can be one of LATEST (most recent data) or TRIM_HORIZON (oldest available data)
private static final InitialPositionInStream SAMPLE_APPLICATION_INITIAL_POSITION_IN_STREAM =
InitialPositionInStream.LATEST;
private static ProfileCredentialsProvider credentialsProvider;
private static void init() {
// Ensure the JVM will refresh the cached IP values of AWS resources (e.g. service endpoints).
java.security.Security.setProperty("networkaddress.cache.ttl", "60");
/*
* The ProfileCredentialsProvider will return your [default]
* credential profile by reading from the credentials file located at
* (~/.aws/credentials).
*/
credentialsProvider = new ProfileCredentialsProvider();
try {
credentialsProvider.getCredentials();
} catch (Exception e) {
throw new AmazonClientException("Cannot load the credentials from the credential profiles file. "
+ "Please make sure that your credentials file is at the correct "
+ "location (~/.aws/credentials), and is in valid format.", e);
}
}
public static void main(String[] args) throws Exception {
init();
if (args.length == 1 && "delete-resources".equals(args[0])) {
deleteResources();
return;
}
String workerId = InetAddress.getLocalHost().getCanonicalHostName() + ":" + UUID.randomUUID();
KinesisClientLibConfiguration kinesisClientLibConfiguration =
new KinesisClientLibConfiguration(SAMPLE_APPLICATION_NAME,
SAMPLE_APPLICATION_STREAM_NAME,
credentialsProvider,
workerId);
kinesisClientLibConfiguration.withInitialPositionInStream(SAMPLE_APPLICATION_INITIAL_POSITION_IN_STREAM);
kinesisClientLibConfiguration.withRegionName("us-west-2");//todo : added region west-2
IRecordProcessorFactory recordProcessorFactory = new AmazonKinesisApplicationRecordProcessorFactory();
Worker worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration);
System.out.printf("Running %s to process stream %s as worker %s...\n",
SAMPLE_APPLICATION_NAME,
SAMPLE_APPLICATION_STREAM_NAME,
workerId);
int exitCode = 0;
try {
worker.run();
} catch (Throwable t) {
System.err.println("Caught throwable while processing data.");
t.printStackTrace();
exitCode = 1;
}
System.exit(exitCode);
}
public static void deleteResources() {
// Delete the stream
AmazonKinesis kinesis = AmazonKinesisClientBuilder.standard()
.withCredentials(credentialsProvider)
.withRegion("us-west-2")
.build();
System.out.printf("Deleting the Amazon Kinesis stream used by the sample. Stream Name = %s.\n",
SAMPLE_APPLICATION_STREAM_NAME);
try {
kinesis.deleteStream(SAMPLE_APPLICATION_STREAM_NAME);
} catch (ResourceNotFoundException ex) {
// The stream doesn't exist.
}
// Delete the table
AmazonDynamoDB dynamoDB = AmazonDynamoDBClientBuilder.standard()
.withCredentials(credentialsProvider)
.withRegion("us-west-2")
.build();
System.out.printf("Deleting the Amazon DynamoDB table used by the Amazon Kinesis Client Library. Table Name = %s.\n",
SAMPLE_APPLICATION_NAME);
try {
dynamoDB.deleteTable(SAMPLE_APPLICATION_NAME);
} catch (com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException ex) {
// The table doesn't exist.
}
}
}
public class AmazonKinesisApplicationSampleRecordProcessor implements IRecordProcessor {
private static final Log LOG = LogFactory.getLog(AmazonKinesisApplicationSampleRecordProcessor.class);
private String kinesisShardId;
// Backoff and retry settings
private static final long BACKOFF_TIME_IN_MILLIS = 3000L;
private static final int NUM_RETRIES = 10;
// Checkpoint about once a minute
private static final long CHECKPOINT_INTERVAL_MILLIS = 60000L;
private long nextCheckpointTimeInMillis;
private final CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder();
/**
* {@inheritDoc}
*/
@Override
public void initialize(String shardId) {
LOG.info("Initializing record processor for shard: " + shardId);
this.kinesisShardId = shardId;
}
/**
* {@inheritDoc}
*/
@Override
public void processRecords(List<Record> records, IRecordProcessorCheckpointer checkpointer) {
LOG.info("Processing " + records.size() + " records from " + kinesisShardId);
// Process records and perform all exception handling.
processRecordsWithRetries(records);
// Checkpoint once every checkpoint interval.
if (System.currentTimeMillis() > nextCheckpointTimeInMillis) {
checkpoint(checkpointer);
nextCheckpointTimeInMillis = System.currentTimeMillis() + CHECKPOINT_INTERVAL_MILLIS;
}
}
/**
* Process records performing retries as needed. Skip "poison pill" records.
*
* @param records Data records to be processed.
*/
private void processRecordsWithRetries(List<Record> records) {
for (Record record : records) {
boolean processedSuccessfully = false;
for (int i = 0; i < NUM_RETRIES; i++) {
try {
//
// Logic to process record goes here.
//
processSingleRecord(record);
processedSuccessfully = true;
break;
} catch (Throwable t) {
LOG.warn("Caught throwable while processing record " + record, t);
}
// backoff if we encounter an exception.
try {
Thread.sleep(BACKOFF_TIME_IN_MILLIS);
} catch (InterruptedException e) {
LOG.debug("Interrupted sleep", e);
}
}
if (!processedSuccessfully) {
LOG.error("Couldn't process record " + record + ". Skipping the record.");
}
}
}
/**
* Process a single record.
*
* @param record The record to be processed.
*/
private void processSingleRecord(Record record) {
System.out.println(record.getData());
String data = null;
try {
// For this app, we interpret the payload as UTF-8 chars.
data = decoder.decode(record.getData()).toString();
// Assume this record came from AmazonKinesisSample and log its age.
long recordCreateTime = new Long(data.substring("testData-".length()));
long ageOfRecordInMillis = System.currentTimeMillis() - recordCreateTime;
LOG.info(record.getSequenceNumber() + ", " + record.getPartitionKey() + ", " + data + ", Created "
+ ageOfRecordInMillis + " milliseconds ago.");
} catch (NumberFormatException e) {
LOG.info("Record does not match sample record format. Ignoring record with data; " + data);
} catch (CharacterCodingException e) {
LOG.error("Malformed data: " + data, e);
}
}
/**
* {@inheritDoc}
*/
@Override
public void shutdown(IRecordProcessorCheckpointer checkpointer, ShutdownReason reason) {
LOG.info("Shutting down record processor for shard: " + kinesisShardId);
// Important to checkpoint after reaching end of shard, so we can start processing data from child shards.
if (reason == ShutdownReason.TERMINATE) {
checkpoint(checkpointer);
}
}
/** Checkpoint with retries.
* @param checkpointer
*/
private void checkpoint(IRecordProcessorCheckpointer checkpointer) {
LOG.info("Checkpointing shard " + kinesisShardId);
for (int i = 0; i < NUM_RETRIES; i++) {
try {
checkpointer.checkpoint();
break;
} catch (ShutdownException se) {
// Ignore checkpoint if the processor instance has been shutdown (fail over).
LOG.info("Caught shutdown exception, skipping checkpoint.", se);
break;
} catch (ThrottlingException e) {
// Backoff and re-attempt checkpoint upon transient failures
if (i >= (NUM_RETRIES - 1)) {
LOG.error("Checkpoint failed after " + (i + 1) + "attempts.", e);
break;
} else {
LOG.info("Transient issue when checkpointing - attempt " + (i + 1) + " of "
+ NUM_RETRIES, e);
}
} catch (InvalidStateException e) {
// This indicates an issue with the DynamoDB table (check for table, provisioned IOPS).
LOG.error("Cannot save checkpoint to the DynamoDB table used by the Amazon Kinesis Client Library.", e);
break;
}
try {
Thread.sleep(BACKOFF_TIME_IN_MILLIS);
} catch (InterruptedException e) {
LOG.debug("Interrupted sleep", e);
}
}
}
}
public class AmazonKinesisRecordProducerSample {
private static AmazonKinesis kinesis;
private static void init() throws Exception {
/*
* The ProfileCredentialsProvider will return your [default]
* credential profile by reading from the credentials file located at
* (~/.aws/credentials).
*/
ProfileCredentialsProvider credentialsProvider = new ProfileCredentialsProvider();
try {
credentialsProvider.getCredentials();
} catch (Exception e) {
throw new AmazonClientException(
"Cannot load the credentials from the credential profiles file. " +
"Please make sure that your credentials file is at the correct " +
"location (~/.aws/credentials), and is in valid format.",
e);
}
kinesis = AmazonKinesisClientBuilder.standard()
.withCredentials(credentialsProvider)
.withRegion("us-west-2")
.build();
}
public static void main(String[] args) throws Exception {
init();
final String myStreamName = AmazonKinesisApplicationSample.SAMPLE_APPLICATION_STREAM_NAME;
final Integer myStreamSize = 1;
// Describe the stream and check if it exists.
DescribeStreamRequest describeStreamRequest = new DescribeStreamRequest().withStreamName(myStreamName);
try {
StreamDescription streamDescription = kinesis.describeStream(describeStreamRequest).getStreamDescription();
System.out.printf("Stream %s has a status of %s.\n", myStreamName, streamDescription.getStreamStatus());
if ("DELETING".equals(streamDescription.getStreamStatus())) {
System.out.println("Stream is being deleted. This sample will now exit.");
System.exit(0);
}
// Wait for the stream to become active if it is not yet ACTIVE.
if (!"ACTIVE".equals(streamDescription.getStreamStatus())) {
waitForStreamToBecomeAvailable(myStreamName);
}
} catch (ResourceNotFoundException ex) {
System.out.printf("Stream %s does not exist. Creating it now.\n", myStreamName);
// Create a stream. The number of shards determines the provisioned throughput.
CreateStreamRequest createStreamRequest = new CreateStreamRequest();
createStreamRequest.setStreamName(myStreamName);
createStreamRequest.setShardCount(myStreamSize);
kinesis.createStream(createStreamRequest);
// The stream is now being created. Wait for it to become active.
waitForStreamToBecomeAvailable(myStreamName);
}
// List all of my streams.
ListStreamsRequest listStreamsRequest = new ListStreamsRequest();
listStreamsRequest.setLimit(10);
ListStreamsResult listStreamsResult = kinesis.listStreams(listStreamsRequest);
List<String> streamNames = listStreamsResult.getStreamNames();
while (listStreamsResult.isHasMoreStreams()) {
if (streamNames.size() > 0) {
listStreamsRequest.setExclusiveStartStreamName(streamNames.get(streamNames.size() - 1));
}
listStreamsResult = kinesis.listStreams(listStreamsRequest);
streamNames.addAll(listStreamsResult.getStreamNames());
}
// Print all of my streams.
System.out.println("List of my streams: ");
for (int i = 0; i < streamNames.size(); i++) {
System.out.println("\t- " + streamNames.get(i));
}
System.out.printf("Putting records in stream : %s until this application is stopped...\n", myStreamName);
System.out.println("Press CTRL-C to stop.");
// Write records to the stream until this program is aborted.
while (true) {
long createTime = System.currentTimeMillis();
PutRecordRequest putRecordRequest = new PutRecordRequest();
putRecordRequest.setStreamName(myStreamName);
putRecordRequest.setData(ByteBuffer.wrap(String.format("testData-%d", createTime).getBytes()));
putRecordRequest.setPartitionKey(String.format("partitionKey-%d", createTime));
PutRecordResult putRecordResult = kinesis.putRecord(putRecordRequest);
System.out.printf("Successfully put record, partition key : %s, ShardID : %s, SequenceNumber : %s.\n",
putRecordRequest.getPartitionKey(),
putRecordResult.getShardId(),
putRecordResult.getSequenceNumber());
}
}
private static void waitForStreamToBecomeAvailable(String myStreamName) throws InterruptedException {
System.out.printf("Waiting for %s to become ACTIVE...\n", myStreamName);
long startTime = System.currentTimeMillis();
long endTime = startTime + TimeUnit.MINUTES.toMillis(10);
while (System.currentTimeMillis() < endTime) {
Thread.sleep(TimeUnit.SECONDS.toMillis(20));
try {
DescribeStreamRequest describeStreamRequest = new DescribeStreamRequest();
describeStreamRequest.setStreamName(myStreamName);
// ask for no more than 10 shards at a time -- this is an optional parameter
describeStreamRequest.setLimit(10);
DescribeStreamResult describeStreamResponse = kinesis.describeStream(describeStreamRequest);
String streamStatus = describeStreamResponse.getStreamDescription().getStreamStatus();
System.out.printf("\t- current state: %s\n", streamStatus);
if ("ACTIVE".equals(streamStatus)) {
return;
}
} catch (ResourceNotFoundException ex) {
// ResourceNotFound means the stream doesn't exist yet,
// so ignore this error and just keep polling.
} catch (AmazonServiceException ase) {
throw ase;
}
}
throw new RuntimeException(String.format("Stream %s never became active", myStreamName));
}
}
I used the sample code from this link:
https://github.com/aws/aws-sdk-java/tree/master/src/samples/AmazonKinesis
Try changing the Application Name and then retry; most problems get resolved by this simple change (the KCL keeps its lease/checkpoint state in a DynamoDB table named after the application, so stale state from a previous run under the same name can keep records from being delivered).
Or try the code below.
AmazonKinesisApplicationSample.java :-
package KinesiSampleApplication.www.intellyzen.com;
/*
* Copyright 2012-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
import java.net.InetAddress;
import java.util.UUID;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
import com.amazonaws.services.kinesis.model.ResourceNotFoundException;
/**
* Sample Amazon Kinesis Application.
*/
public final class AmazonKinesisApplicationSample {
/*
* Before running the code:
* Fill in your AWS access credentials in the provided credentials
* file template, and be sure to move the file to the default location
* (~/.aws/credentials) where the sample code will load the
* credentials from.
* https://console.aws.amazon.com/iam/home?#security_credential
*
* WARNING:
* To avoid accidental leakage of your credentials, DO NOT keep
* the credentials file in your source directory.
*/
public static final String SAMPLE_APPLICATION_STREAM_NAME = "IOTREST-API";
private static final String SAMPLE_APPLICATION_NAME = "SampleKinesisApplicationadsfdsa11 ";
// Initial position in the stream when the application starts up for the first time.
// Position can be one of LATEST (most recent data) or TRIM_HORIZON (oldest available data)
private static final InitialPositionInStream SAMPLE_APPLICATION_INITIAL_POSITION_IN_STREAM =
InitialPositionInStream.LATEST;
private static ProfileCredentialsProvider credentialsProvider;
private static void init() {
// Ensure the JVM will refresh the cached IP values of AWS resources (e.g. service endpoints).
java.security.Security.setProperty("networkaddress.cache.ttl", "60");
/*
* The ProfileCredentialsProvider will return your [default]
* credential profile by reading from the credentials file located at
* (~/.aws/credentials).
*/
credentialsProvider = new ProfileCredentialsProvider();
try {
credentialsProvider.getCredentials();
} catch (Exception e) {
throw new AmazonClientException("Cannot load the credentials from the credential profiles file. "
+ "Please make sure that your credentials file is at the correct "
+ "location (~/.aws/credentials), and is in valid format.", e);
}
}
public static void main(String[] args) throws Exception {
init();
if (args.length == 1 && "delete-resources".equals(args[0])) {
deleteResources();
return;
}
String workerId = InetAddress.getLocalHost().getCanonicalHostName() + ":" + UUID.randomUUID();
KinesisClientLibConfiguration kinesisClientLibConfiguration =
new KinesisClientLibConfiguration(SAMPLE_APPLICATION_NAME,
SAMPLE_APPLICATION_STREAM_NAME,
credentialsProvider,
workerId);
kinesisClientLibConfiguration.withInitialPositionInStream(SAMPLE_APPLICATION_INITIAL_POSITION_IN_STREAM);
IRecordProcessorFactory recordProcessorFactory = new AmazonKinesisApplicationRecordProcessorFactory();
Worker worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration);
System.out.printf("Running %s to process stream %s as worker %s...\n",
SAMPLE_APPLICATION_NAME,
SAMPLE_APPLICATION_STREAM_NAME,
workerId);
int exitCode = 0;
try {
worker.run();
} catch (Throwable t) {
System.err.println("Caught throwable while processing data.");
t.printStackTrace();
exitCode = 1;
}
System.exit(exitCode);
}
public static void deleteResources() {
// Delete the stream
AmazonKinesis kinesis = AmazonKinesisClientBuilder.standard()
.withCredentials(credentialsProvider)
.withRegion("us-east-1")
.build();
System.out.printf("Deleting the Amazon Kinesis stream used by the sample. Stream Name = %s.\n",
SAMPLE_APPLICATION_STREAM_NAME);
try {
kinesis.deleteStream(SAMPLE_APPLICATION_STREAM_NAME);
} catch (ResourceNotFoundException ex) {
// The stream doesn't exist.
}
// Delete the table
AmazonDynamoDB dynamoDB = AmazonDynamoDBClientBuilder.standard()
.withCredentials(credentialsProvider)
.withRegion("us-east-1")
.build();
System.out.printf("Deleting the Amazon DynamoDB table used by the Amazon Kinesis Client Library. Table Name = %s.\n",
SAMPLE_APPLICATION_NAME);
try {
dynamoDB.deleteTable(SAMPLE_APPLICATION_NAME);
} catch (com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException ex) {
// The table doesn't exist.
}
}
}
AmazonKinesisApplicationSampleRecordProcessor.java:-
package KinesiSampleApplication.www.intellyzen.com;
/*
* Copyright 2012-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.util.List;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ThrottlingException;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;
import com.amazonaws.services.kinesis.model.Record;
import software.amazon.kinesis.lifecycle.ShutdownReason;
/**
* Processes records and checkpoints progress.
*/
public class AmazonKinesisApplicationSampleRecordProcessor implements IRecordProcessor {
private static final Log LOG = LogFactory.getLog(AmazonKinesisApplicationSampleRecordProcessor.class);
private String kinesisShardId;
// Backoff and retry settings
private static final long BACKOFF_TIME_IN_MILLIS = 3000L;
private static final int NUM_RETRIES = 10;
// Checkpoint about once a minute
private static final long CHECKPOINT_INTERVAL_MILLIS = 60000L;
private long nextCheckpointTimeInMillis;
private final CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder();
/**
* {@inheritDoc}
*/
public void initialize(String shardId) {
LOG.info("Initializing record processor for shard: " + shardId);
this.kinesisShardId = shardId;
}
/**
* {@inheritDoc}
*/
public void processRecords(List<Record> records, IRecordProcessorCheckpointer checkpointer) {
LOG.info("Processing " + records.size() + " records from " + kinesisShardId);
// Process records and perform all exception handling.
processRecordsWithRetries(records);
// Checkpoint once every checkpoint interval.
if (System.currentTimeMillis() > nextCheckpointTimeInMillis) {
checkpoint(checkpointer);
nextCheckpointTimeInMillis = System.currentTimeMillis() + CHECKPOINT_INTERVAL_MILLIS;
}
}
/**
* Process records performing retries as needed. Skip "poison pill" records.
*
* @param records Data records to be processed.
*/
private void processRecordsWithRetries(List<Record> records) {
for (Record record : records) {
boolean processedSuccessfully = false;
for (int i = 0; i < NUM_RETRIES; i++) {
try {
//
// Logic to process record goes here.
//
processSingleRecord(record);
processedSuccessfully = true;
break;
} catch (Throwable t) {
LOG.warn("Caught throwable while processing record " + record, t);
}
// backoff if we encounter an exception.
try {
Thread.sleep(BACKOFF_TIME_IN_MILLIS);
} catch (InterruptedException e) {
LOG.debug("Interrupted sleep", e);
}
}
if (!processedSuccessfully) {
LOG.error("Couldn't process record " + record + ". Skipping the record.");
}
}
}
/**
* Process a single record.
*
* @param record The record to be processed.
*/
private void processSingleRecord(Record record) {
// TODO Add your own record processing logic here
String data = null;
try {
// For this app, we interpret the payload as UTF-8 chars.
data = decoder.decode(record.getData()).toString();
System.out.println(data);
System.out.println("\n");
// Assume this record came from AmazonKinesisSample and log its age.
long recordCreateTime = new Long(data.substring("testData-".length()));
long ageOfRecordInMillis = System.currentTimeMillis() - recordCreateTime;
LOG.info(record.getSequenceNumber() + ", " + record.getPartitionKey() + ", " + data + ", Created "
+ ageOfRecordInMillis + " milliseconds ago.");
} catch (NumberFormatException e) {
LOG.info("Record does not match sample record format. Ignoring record with data; " + data);}
catch (CharacterCodingException e) {
LOG.error("Malformed data: " + data, e);
}
}
/**
* {@inheritDoc}
*/
public void shutdown(IRecordProcessorCheckpointer checkpointer, ShutdownReason reason) {
LOG.info("Shutting down record processor for shard: " + kinesisShardId);
// Important to checkpoint after reaching end of shard, so we can start processing data from child shards.
if (reason == ShutdownReason.LEASE_LOST) {
checkpoint(checkpointer);
}
}
/** Checkpoint with retries.
* @param checkpointer
*/
private void checkpoint(IRecordProcessorCheckpointer checkpointer) {
LOG.info("Checkpointing shard " + kinesisShardId);
for (int i = 0; i < NUM_RETRIES; i++) {
try {
checkpointer.checkpoint();
break;
} catch (ShutdownException se) {
// Ignore checkpoint if the processor instance has been shutdown (fail over).
LOG.info("Caught shutdown exception, skipping checkpoint.", se);
break;
} catch (ThrottlingException e) {
// Backoff and re-attempt checkpoint upon transient failures
if (i >= (NUM_RETRIES - 1)) {
LOG.error("Checkpoint failed after " + (i + 1) + "attempts.", e);
break;
} else {
LOG.info("Transient issue when checkpointing - attempt " + (i + 1) + " of "
+ NUM_RETRIES, e);
}
} catch (InvalidStateException e) {
// This indicates an issue with the DynamoDB table (check for table, provisioned IOPS).
LOG.error("Cannot save checkpoint to the DynamoDB table used by the Amazon Kinesis Client Library.", e);
break;
}
try {
Thread.sleep(BACKOFF_TIME_IN_MILLIS);
} catch (InterruptedException e) {
LOG.debug("Interrupted sleep", e);
}
}
}
public void shutdown(IRecordProcessorCheckpointer checkpointer,
com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownReason reason) {
// TODO Auto-generated method stub
}
}
AmazonKinesisApplicationRecordProcessorFactory.java:-
package KinesiSampleApplication.www.intellyzen.com;
/*
* Copyright 2012-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory;
/**
* Used to create new record processors.
*/
public class AmazonKinesisApplicationRecordProcessorFactory implements IRecordProcessorFactory {
/**
* {@inheritDoc}
*/
public IRecordProcessor createProcessor() {
return new AmazonKinesisApplicationSampleRecordProcessor();
}
}
pom.xml:-
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>KinesiSampleApplication</groupId>
<artifactId>www.intellyzen.com</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>www.intellyzen.com</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>software.amazon.kinesis</groupId>
<artifactId>amazon-kinesis-client</artifactId>
<version>2.2.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk -->
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-kinesis</artifactId>
<version>1.11.551</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.amazonaws/amazon-kinesis-client -->
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>amazon-kinesis-client</artifactId>
<version>1.10.0</version>
</dependency>
<dependency>
<groupId>software.amazon.kinesis</groupId>
<artifactId>amazon-kinesis-client</artifactId>
<version>2.2.0</version>
</dependency>
<!-- Thanks for using https://jar-download.com -->
<!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.dataformat/jackson-dataformat-cbor -->
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-cbor</artifactId>
<version>2.9.8</version>
</dependency>
</dependencies>
</project>
Your use case: "I am trying to read data (records) from the stream."
You can find all AWS SDK for Java V2 examples here: https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/example_code/kinesis
Here is a solution using the AWS Kinesis Java v2 API:
package com.example.kinesis;
//snippet-start:[kinesis.java2.getrecord.import]
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.*;
import java.util.ArrayList;
import java.util.List;
//snippet-end:[kinesis.java2.getrecord.import]
/**
* Demonstrates how to read data from a Kinesis Data Stream. Before running this Java code example, populate a Data Stream
* by running the StockTradesWriter example. That example populates a Data Stream that you can then use for this example.
*/
public class GetRecords {
public static void main(String[] args) {
// snippet-start:[kinesis.java2.getrecord.main]
Region region = Region.US_EAST_1;
KinesisClient kinesisClient = KinesisClient.builder()
.region(region)
.build();
getStockTrades(kinesisClient);
}
private static void getStockTrades(KinesisClient kinesisClient) {
String shardIterator;
String lastShardId = null;
// Retrieve the Shards from a Stream
DescribeStreamRequest describeStreamRequest = DescribeStreamRequest.builder()
.streamName("StockTrade")
.build();
List<Shard> shards = new ArrayList<>();
DescribeStreamResponse streamRes;
do {
// describeStreamRequest.exclusiveStartShardId(lastShardId);
streamRes = kinesisClient.describeStream(describeStreamRequest);
shards.addAll(streamRes.streamDescription().shards());
if (shards.size() > 0) {
lastShardId = shards.get(shards.size() - 1).shardId();
}
} while (streamRes.streamDescription().hasMoreShards());
GetShardIteratorRequest itReq = GetShardIteratorRequest.builder()
.streamName("StockTrade")
.shardIteratorType("TRIM_HORIZON")
.shardId(shards.get(0).shardId())
.build();
GetShardIteratorResponse shardIteratorResult = kinesisClient.getShardIterator(itReq);
shardIterator = shardIteratorResult.shardIterator();
// Continuously read data records from shard.
List<Record> records;
while (true) {
// Create new GetRecordsRequest with existing shardIterator.
// Set maximum records to return to 1000.
GetRecordsRequest recordsRequest = GetRecordsRequest.builder()
.shardIterator(shardIterator)
.limit(1000)
.build();
GetRecordsResponse result = kinesisClient.getRecords(recordsRequest);
// Put result into record list. Result may be empty.
records = result.records();
// Print records
for (Record record : records) {
SdkBytes byteBuffer = record.data();
System.out.println(String.format("Seq No: %s - %s", record.sequenceNumber(),
new String(byteBuffer.asByteArray())));
}
try {
Thread.sleep(1000);
} catch (InterruptedException exception) {
throw new RuntimeException(exception);
}
shardIterator = result.nextShardIterator();
}
// snippet-end:[kinesis.java2.getrecord.main]
}
}

Cloud Dataflow - how does Dataflow do parallelism?

My question is: behind the scenes, for an element-wise Beam DoFn (ParDo), how does Cloud Dataflow parallelize the workload? For example, in my ParDo I send one HTTP request to an external server per element, and I use 30 workers, each with 4 vCPUs.
1. Does that mean there will be at most 4 threads on each worker?
2. Does that mean only 4 HTTP connections per worker are necessary, or can be established (if I keep them alive) to get the best performance?
3. How can I adjust the level of parallelism other than using more cores or more workers?
4. With my current setting (30 workers * 4 vCPUs), I can establish around 120 HTTP connections to the HTTP server, but both the server and the workers have very low resource usage. Basically I want to make them work much harder by sending more requests per second. What should I do?
Code Snippet to illustrate my work:
public class NewCallServerDoFn extends DoFn<PreparedRequest,KV<PreparedRequest,String>> {
private static final Logger Logger = LoggerFactory.getLogger(ProcessReponseDoFn.class);
private static PoolingHttpClientConnectionManager _ConnManager = null;
private static CloseableHttpClient _HttpClient = null;
private static HttpRequestRetryHandler _RetryHandler = null;
private static String[] _MapServers = MapServerBatchBeamApplication.CONFIG.getString("mapserver.client.config.server_host").split(",");
@Setup
public void setupHttpClient(){
Logger.info("Setting up HttpClient");
//Question: the value of maxConnection below is actually 10, but with 30 worker machines, I can only see 115 TCP connections established on the server side. So this setting doesn't really take effect as I expected.....
int maxConnection = MapServerBatchBeamApplication.CONFIG.getInt("mapserver.client.config.max_connection");
int timeout = MapServerBatchBeamApplication.CONFIG.getInt("mapserver.client.config.timeout");
_ConnManager = new PoolingHttpClientConnectionManager();
for (String mapServer : _MapServers) {
HttpHost serverHost = new HttpHost(mapServer,80);
_ConnManager.setMaxPerRoute(new HttpRoute(serverHost),maxConnection);
}
// config timeout
RequestConfig requestConfig = RequestConfig.custom()
.setConnectTimeout(timeout)
.setConnectionRequestTimeout(timeout)
.setSocketTimeout(timeout).build();
// config retry
_RetryHandler = new HttpRequestRetryHandler() {
public boolean retryRequest(
IOException exception,
int executionCount,
HttpContext context) {
Logger.info(exception.toString());
Logger.info("try request: " + executionCount);
if (executionCount >= 5) {
// Do not retry if over max retry count
return false;
}
if (exception instanceof InterruptedIOException) {
// Timeout
return false;
}
if (exception instanceof UnknownHostException) {
// Unknown host
return false;
}
if (exception instanceof ConnectTimeoutException) {
// Connection refused
return false;
}
if (exception instanceof SSLException) {
// SSL handshake exception
return false;
}
return true;
}
};
_HttpClient = HttpClients.custom()
.setConnectionManager(_ConnManager)
.setDefaultRequestConfig(requestConfig)
.setRetryHandler(_RetryHandler)
.build();
Logger.info("Setting up HttpClient is done.");
}
@Teardown
public void tearDown(){
Logger.info("Tearing down HttpClient and Connection Manager.");
try {
_HttpClient.close();
_ConnManager.close();
}catch (Exception e){
Logger.warn(e.toString());
}
Logger.info("HttpClient and Connection Manager have been teared down.");
}
@ProcessElement
public void processElement(ProcessContext c) {
PreparedRequest request = c.element();
if(request == null)
return;
String response="{\"my_error\":\"failed to get response from map server with retries\"}";
String chosenServer = _MapServers[request.getHardwareId() % _MapServers.length];
String parameter;
try {
parameter = URLEncoder.encode(request.getRequest(),"UTF-8");
} catch (UnsupportedEncodingException e) {
Logger.error(e.toString());
return;
}
StringBuilder sb = new StringBuilder().append(MapServerBatchBeamApplication.CONFIG.getString("mapserver.client.config.api_path"))
.append("?coordinates=")
.append(parameter);
HttpGet getRequest = new HttpGet(sb.toString());
HttpHost host = new HttpHost(chosenServer,80,"http");
CloseableHttpResponse httpRes;
try {
httpRes = _HttpClient.execute(host,getRequest);
HttpEntity entity = httpRes.getEntity();
if(entity != null){
try
{
response = EntityUtils.toString(entity);
}finally{
EntityUtils.consume(entity);
httpRes.close();
}
}
}catch(Exception e){
Logger.warn("failed by get response from map server with retries for " + request.getRequest());
}
c.output(KV.of(request, response));
}
}
1. Yes, based on this answer.
2. No, you can establish more connections. Based on my answer, you can use an async HTTP client to have more concurrent requests. As that answer also describes, you need to collect the results from these asynchronous calls and output them synchronously in @ProcessElement or @FinishBundle.
3. See 2.
4. Since your resource usage is low, it indicates that the worker spends most of its time waiting for a response. With the approach described above you can utilize your resources far better, and you can achieve the same performance with far fewer workers. A sketch of that pattern is shown below.
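For illustration only, a minimal sketch of that asynchronous pattern, assuming Java 11's HttpClient; the class name, endpoint URL, and String element type are placeholders rather than the poster's actual types:

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.joda.time.Instant;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncCallServerDoFn extends DoFn<String, String> {

    // One entry per in-flight request issued during the current bundle.
    private static class PendingCall {
        final CompletableFuture<HttpResponse<String>> future;
        final Instant timestamp;
        final BoundedWindow window;
        PendingCall(CompletableFuture<HttpResponse<String>> future, Instant timestamp, BoundedWindow window) {
            this.future = future;
            this.timestamp = timestamp;
            this.window = window;
        }
    }

    private transient HttpClient httpClient;
    private transient List<PendingCall> pending;

    @Setup
    public void setup() {
        httpClient = HttpClient.newHttpClient();
    }

    @StartBundle
    public void startBundle() {
        pending = new ArrayList<>();
    }

    @ProcessElement
    public void processElement(ProcessContext c, BoundedWindow window) {
        // Fire the request without blocking, so many requests can be in flight per thread.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://mapserver.example.com/api?coordinates=" + c.element()))
                .build();
        pending.add(new PendingCall(
                httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofString()),
                c.timestamp(), window));
    }

    @FinishBundle
    public void finishBundle(FinishBundleContext c) {
        // Wait for every in-flight request and emit its result synchronously,
        // reusing the timestamp and window captured when the element was processed.
        for (PendingCall call : pending) {
            HttpResponse<String> response = call.future.join();
            c.output(response.body(), call.timestamp, call.window);
        }
    }
}

The requests are fired without blocking in @ProcessElement, so each worker thread can keep far more than one request outstanding, and @FinishBundle makes sure every result is emitted before the bundle is committed.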

Calling SignalR client in a loop fails when using a different browser

I have a problem using asynchronous tasks and SignalR. Here is my scenario:
I have to page records using an async task to create a CSV file, while updating the client through SignalR push notifications. Here is my code:
private async Task WriteRecords([DataSourceRequest] DataSourceRequest dataRequest,int countno, VMEXPORT[] arrVmExport, bool createHeaderyn, string filePath )
{
string fileName = filePath.Replace(System.Web.HttpContext.Current.Server.MapPath("~/") + "Csv\\", "").Replace(".csv", "");
int datapage = (countno / 192322)+1;
for (int i = 1; i <= datapage; )
{
dataRequest.Page = i;
dataRequest.PageSize = 192322;
var write = _serviceAgent.FetchByRole("", "", CurrentUser.Linkcd, CurrentUser.Rolecd).ToDataSourceResult(dataRequest);
await Task.Run(()=>write.Data.Cast<AGENT>().WriteToCSV(new AGENT(), createHeaderyn, arrVmExport, filePath));
createHeaderyn = false;
i = i + 1;
double percentage = (i * 100) / datapage;
SendProgress(percentage, countno,fileName);
}
}
Here is the set up in my BaseController which calls the hub context:
public void SendNotification(string fileNametx, bool createdyn)
{
var context = GlobalHost.ConnectionManager.GetHubContext<SignalRHubHelper>();
context.Clients.User(CurrentUser.Usernm + '-' + CurrentUser.GUID)
.receiveNotification("Export", CurrentUser.Usernm, "info", fileNametx, createdyn);
}
public void SendProgress(double recordCount, int totalCount,string fileName)
{
var context = GlobalHost.ConnectionManager.GetHubContext<SignalRHubHelper>();
context.Clients.User(CurrentUser.Usernm + '-' + CurrentUser.GUID).reportProgress(recordCount, totalCount,fileName);
}
And Here is my controller Method:
public async Task<ActionResult> _Export([DataSourceRequest] DataSourceRequest dataRequest, string columns,int countno, string menunm)
{
var fileNametx = AgentsPrompttx + DateTime.Now.ToString(GeneralConst.L_STRING_DATE4) + ".csv";
SendNotification(fileNametx, false);
var filePath = System.Web.HttpContext.Current.Server.MapPath("~/") + "Csv\\";
var vmexport = new JavaScriptSerializer().Deserialize<VMEXPORT[]>(columns);
dataRequest.GroupingToSorting();
dataRequest.PageSize = 0; // set to zero
await WriteRecords(dataRequest,countno, vmexport, true, filePath + fileNametx);
SendNotification(fileNametx, true);
return File(filePath + fileNametx, WebConst.L_CONTENTTYPE_APP_OCTET, fileNametx);
}
The main problem is when I request 4 downloads, meaning 4 tasks running asynchronously. It sends the notifications when I use the same browser, but when I use IE and Chrome together it fails to give me the progress. The file itself is created without any problem; only the progress updates don't work. Can someone point out what I'm doing wrong?
Update
The problem occurs when I use multiple browsers, which invokes OnDisconnected() when navigating to other pages; that stops the connection to the other connected hub context.

SNMP4j - Cannot send RESPONSE PDU on some OID

I'm trying to respond to SNMP GET requests from SnmpB with SNMP4j 2.3.1 (running on Windows).
In "Discover" mode, SnmpB queries by broadcasting 255.255.255.255 (checked with Wireshark) and I receive a GET request with standard OID (sysDescr, sysUpTime, sysContact, sysName and sysLocation). It finds my instance with the information I coded ("My System", "Myself", ...) (note that it also works when I enter the IP address under the "IP networks" textboxes, though I don't see any traffic on Wireshark but I receive the GET request):
I did write a very simple MIB file that I imported into SnmpB. It defines a single Integer32 data that I want to retrieve using an SNMP GET request from SnmpB.
However, using the same code as for the standard sys* OIDs, SnmpB doesn't seem to receive that data ("Timeout" in red on the top-right):
I tried Wireshark to check network activity and I don't see anything, so I guessed the traffic stays on localhost (which is not accessible with Wireshark on Windows), but the traces below show it does not (peerAddress=192.168.56.1)...
Here is the MIB file (code follows):
MY-TEST-MIB DEFINITIONS ::= BEGIN
IMPORTS
enterprises, MODULE-IDENTITY, OBJECT-TYPE, Integer32
FROM SNMPv2-SMI;
myTest MODULE-IDENTITY
LAST-UPDATED "201412301216Z"
ORGANIZATION "My org"
CONTACT-INFO "Matthieu Labas"
DESCRIPTION "MIB Test"
REVISION "201412301216Z"
DESCRIPTION "Generated"
::= { enterprises 12121 }
myData OBJECT-TYPE
SYNTAX Integer32
MAX-ACCESS read-only
STATUS current
DESCRIPTION "My data for test"
::= { myTest 1 }
END
... and the code:
public class RespondGET implements CommandResponder {
public static final OID sysDescr = new OID("1.3.6.1.2.1.1.1.0");
public static final OID sysUpTime = new OID("1.3.6.1.2.1.1.3.0");
public static final OID sysContact = new OID("1.3.6.1.2.1.1.4.0");
public static final OID sysName = new OID("1.3.6.1.2.1.1.5.0");
public static final OID sysLocation = new OID("1.3.6.1.2.1.1.6.0");
public static final OID myData = new OID("1.3.6.1.4.1.12121.1.0");
private Snmp snmp;
public RespondGET() throws IOException {
MessageDispatcher dispatcher = new MessageDispatcherImpl();
dispatcher.addMessageProcessingModel(new MPv2c()); // v2c only
snmp = new Snmp(dispatcher, new DefaultUdpTransportMapping(new UdpAddress("192.168.56.1/161"), true));
snmp.addCommandResponder(this);
snmp.listen();
}
#Override
public void processPdu(CommandResponderEvent event) {
System.out.println("Received PDU "+event);
PDU pdu = event.getPDU();
switch (pdu.getType()) {
case PDU.GET:
List<VariableBinding> responses = new ArrayList<VariableBinding>(pdu.size());
for (VariableBinding v : pdu.getVariableBindings()) {
OID oid = v.getOid();
// Answer the usual SNMP requests
if (sysDescr.equals(oid)) {
responses.add(new VariableBinding(oid, new OctetString("My System description")));
} else if (sysUpTime.equals(oid)) {
responses.add(new VariableBinding(oid, new TimeTicks(ManagementFactory.getRuntimeMXBean().getUptime())));
} else if (sysContact.equals(oid)) {
responses.add(new VariableBinding(oid, new OctetString("Myself")));
} else if (sysName.equals(oid)) {
responses.add(new VariableBinding(oid, new OctetString("My System")));
} else if (sysLocation.equals(oid)) {
responses.add(new VariableBinding(oid, new OctetString("In here")));
} else if (myData.equals(oid)) { // MyData handled here
responses.add(new VariableBinding(oid, new Integer32(18)));
}
}
try {
CommunityTarget comm = new CommunityTarget(event.getPeerAddress(), new OctetString(event.getSecurityName()));
comm.setSecurityLevel(event.getSecurityLevel());
comm.setSecurityModel(event.getSecurityModel());
PDU resp = new PDU(PDU.RESPONSE, responses);
System.out.println(String.format("Sending response PDU to %s/%s: %s", event.getPeerAddress(), new String(event.getSecurityName()), resp));
snmp.send(resp, comm);
} catch (IOException e) {
System.err.println(String.format("Unable to send response PDU! (%s)", e.getMessage()));
}
event.setProcessed(true);
break;
default:
System.err.println(String.format("Unhandled PDU type %s.", PDU.getTypeString(pdu.getType())));
break;
}
}
public static void main(String[] args) throws IOException {
RespondGET rg = new RespondGET();
System.out.println("Listening...");
int n = 300; // 5 min
while (true) {
try { Thread.sleep(1000); } catch (InterruptedException e) { }
if (--n <= 0) break;
}
System.out.println("Stopping...");
rg.snmp.close();
}
}
It produces the following output when I click "discover" under SnmpB and right-click on myData in the MIB Tree and "Get" (slightly reformatted for readability):
Listening...
Received PDU CommandResponderEvent[securityModel=2, securityLevel=1, maxSizeResponsePDU=65535,
pduHandle=PduHandle[16736], stateReference=StateReference[msgID=0,pduHandle=PduHandle[16736],
securityEngineID=null,securityModel=null,securityName=public,securityLevel=1,
contextEngineID=null,contextName=null,retryMsgIDs=null], pdu=GET[requestID=16736, errorStatus=Success(0), errorIndex=0,
VBS[1.3.6.1.2.1.1.1.0 = Null; 1.3.6.1.2.1.1.3.0 = Null; 1.3.6.1.2.1.1.4.0 = Null; 1.3.6.1.2.1.1.5.0 = Null; 1.3.6.1.2.1.1.6.0 = Null]],
messageProcessingModel=1, securityName=public, processed=false, peerAddress=192.168.56.1/49561, transportMapping=org.snmp4j.transport.DefaultUdpTransportMapping@120d62b, tmStateReference=null]
Sending response PDU to 192.168.56.1/49561/public: RESPONSE[requestID=0, errorStatus=Success(0), errorIndex=0,
VBS[1.3.6.1.2.1.1.1.0 = My System description; 1.3.6.1.2.1.1.3.0 = 0:01:03.18; 1.3.6.1.2.1.1.4.0 = Myself; 1.3.6.1.2.1.1.5.0 = My System; 1.3.6.1.2.1.1.6.0 = In here]]
Received PDU CommandResponderEvent[securityModel=2, securityLevel=1, maxSizeResponsePDU=65535,
pduHandle=PduHandle[1047], stateReference=StateReference[msgID=0,pduHandle=PduHandle[1047],
securityEngineID=null,securityModel=null,securityName=public,securityLevel=1,
contextEngineID=null,contextName=null,retryMsgIDs=null], pdu=GET[requestID=1047, errorStatus=Success(0), errorIndex=0,
VBS[1.3.6.1.4.1.12121.1.0 = Null]], messageProcessingModel=1, securityName=public, processed=false, peerAddress=192.168.56.1/49560, transportMapping=org.snmp4j.transport.DefaultUdpTransportMapping@120d62b, tmStateReference=null]
Sending response PDU to 192.168.56.1/49560/public: RESPONSE[requestID=0, errorStatus=Success(0), errorIndex=0, VBS[1.3.6.1.4.1.12121.1.0 = 18]]
Stopping...
What am I missing here? Could that "just" be a network routing issue?
After setting up a VM and checking with Wireshark, it turned out I had forgotten to set, on the response PDU, the same request ID as the GET PDU.
It was solved by adding resp.setRequestID(pdu.getRequestID()); when building the response PDU:
CommunityTarget comm = new CommunityTarget(event.getPeerAddress(), new OctetString(event.getSecurityName()));
comm.setSecurityLevel(event.getSecurityLevel());
comm.setSecurityModel(event.getSecurityModel());
PDU resp = new PDU(PDU.RESPONSE, responses);
resp.setRequestID(pdu.getRequestID()); // Forgot that!
snmp.send(resp, comm);
Thanks to @Jolta for his patience during the New Year holiday and for insisting on using Wireshark for further checking. :)

commons.net FTPSClient.storeFile doesn't throw IOException if connection with server is lost

Background:
I'm attempting to add some level of fault tolerance to an application that uses the Apache Commons Net FTPSClient to transfer files. If the connection between the client and server fails, I'd like to capture the resulting exception/return code, log the details, and attempt to reconnect/retry the transfer.
What works:
The retrieveFile() method. If the connection fails (i.e. I disable the server's public interface), I receive a CopyStreamException caused by a SocketTimeoutException after the amount of time I specified as the timeout.
What doesn't work:
The storeFile() method. If I initiate a transfer via storeFile() and disable the server's public interface, the storeFile() method blocks/hangs indefinitely without throwing any exception.
Here is a simple app that hangs if the connection is terminated:
public class SmallTest {
private static Logger log = Logger.getLogger(SmallTest.class);
/**
* @param args
* @throws IOException
*/
public static void main(String[] args) throws IOException {
FTPSClient client = new FTPSClient(true);
FTPSCredentials creds = new FTPSCredentials("host", "usr", "pass",
"/keystore/ftpclient.jks", "pass",
"/keystore/rootca.jks");
String file = "/file/jdk-7u21-linux-x64.rpm";
String destinationFile = "/jdk-7u21-linux-x64.rpm";
client.setTrustManager(TrustManagerUtils.getValidateServerCertificateTrustManager());
client.setKeyManager(creds.getKeystoreManager());
client.addProtocolCommandListener(new PrintCommandListener(new PrintWriter(System.out), true));
client.setCopyStreamListener(createListener());
client.setConnectTimeout(5000);
client.setDefaultTimeout(5000);
client.connect(creds.getHost(), 990);
client.setSoTimeout(5000);
client.setDataTimeout(5000);
if (!FTPReply.isPositiveCompletion(client.getReplyCode())) {
client.disconnect();
log.error("ERROR: " + creds.getHost() + " refused the connection");
} else {
if (client.login(creds.getUser(), creds.getPass())) {
log.debug("Logged in as " + creds.getUser());
client.enterLocalPassiveMode();
client.setFileTransferMode(FTP.BLOCK_TRANSFER_MODE);
client.setFileType(FTP.BINARY_FILE_TYPE);
InputStream inputStream = new FileInputStream(file);
log.debug("Invoking storeFile()");
if (!client.storeFile(destinationFile, inputStream)) {
log.error("ERROR: Failed to store " + file
+ " on remote host. Last reply code: "
+ client.getReplyCode());
} else {
log.debug("Stored the file...");
}
inputStream.close();
client.logout();
client.disconnect();
} else {
log.error("Could not log into " + creds.getHost());
}
}
}
private static CopyStreamListener createListener(){
return new CopyStreamListener(){
private long megsTotal = 0;
@Override
public void bytesTransferred(CopyStreamEvent event) {
bytesTransferred(event.getTotalBytesTransferred(), event.getBytesTransferred(), event.getStreamSize());
}
@Override
public void bytesTransferred(long totalBytesTransferred,
int bytesTransferred, long streamSize) {
long megs = totalBytesTransferred / 1000000;
for (long l = megsTotal; l < megs; l++) {
System.out.print("#");
}
megsTotal = megs;
}
};
}
Is there any way to make the connection ACTUALLY timeout?
SW Versions:
Commons.net v3.3
Java 7
CentOS 6.3
Thanks in advance,
Joe
I ran into this same problem, and I think I was able to get the desired timeout behavior when I unplug the ethernet cable on my laptop.
I use storeFileStream instead of storeFile, and then use completePendingCommand to finish the transfer. You can check the Apache Commons docs for completePendingCommand for an example of this kind of transfer. It took about 15 minutes to time out for me. One other thing: the aforementioned docs include calling isPositiveIntermediate to check for an error, but this wasn't working, so I replaced it with isPositivePreliminary and now it seems to work. I'm not sure that's actually correct, but it's the best I've found so far. A sketch of that approach follows.
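For illustration, a minimal sketch of that storeFileStream()/completePendingCommand() approach, with a placeholder host, credentials and paths instead of the keystore-based FTPSCredentials helper from the question:

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPReply;
import org.apache.commons.net.ftp.FTPSClient;
import org.apache.commons.net.io.Util;
import org.apache.commons.net.util.TrustManagerUtils;

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StoreFileStreamSketch {
    public static void main(String[] args) throws IOException {
        FTPSClient client = new FTPSClient(true);
        client.setTrustManager(TrustManagerUtils.getValidateServerCertificateTrustManager());
        client.setConnectTimeout(5000);
        client.setDefaultTimeout(5000);
        client.connect("ftp.example.com", 990);
        client.setSoTimeout(5000);
        client.setDataTimeout(5000);

        if (!FTPReply.isPositiveCompletion(client.getReplyCode())) {
            client.disconnect();
            throw new IOException("Server refused the connection");
        }
        if (!client.login("user", "pass")) {
            throw new IOException("Login failed");
        }
        client.enterLocalPassiveMode();
        client.setFileType(FTP.BINARY_FILE_TYPE);

        try (InputStream input = new FileInputStream("/file/jdk-7u21-linux-x64.rpm");
             OutputStream output = client.storeFileStream("/jdk-7u21-linux-x64.rpm")) {
            if (output == null) {
                throw new IOException("storeFileStream failed, reply: " + client.getReplyString());
            }
            // Copy the data ourselves. A broken connection surfaces here (or in
            // completePendingCommand() below) as an IOException instead of hanging
            // inside storeFile(); as noted above it can still take a while for the
            // OS-level TCP timeouts to trigger.
            Util.copyStream(input, output);
        }

        // Finalize the transfer and check the server's completion reply.
        if (!client.completePendingCommand()) {
            throw new IOException("Transfer did not complete, reply: " + client.getReplyString());
        }
        client.logout();
        client.disconnect();
    }
}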
