Batch update failure with HTTP status code 404; discarding 1 replication tasks - spring-security

After starting the service discovery, when the proxy microservice registers, Eureka throws the following error:
2020-06-11 21:07:19.778 ERROR 1040 --- [et_localhost-19] c.n.e.cluster.ReplicationTaskProcessor : Batch update failure with HTTP status code 404; discarding 1 replication tasks
2020-06-11 21:07:19.778 WARN 1040 --- [et_localhost-19] c.n.eureka.util.batcher.TaskExecutors : Discarding 1 tasks of TaskBatchingWorker-target_localhost-19 due to permanent error
2020-06-11 21:07:49.782 ERROR 1040 --- [et_localhost-19] c.n.e.cluster.ReplicationTaskProcessor : Batch update failure with HTTP status code 404; discarding 1 replication tasks
2020-06-11 21:07:49.782 WARN 1040 --- [et_localhost-19] c.n.eureka.util.batcher.TaskExecutors : Discarding 1 tasks of TaskBatchingWorker-target_localhost-19 due to permanent error
2020-06-11 21:08:11.629 INFO 1040 --- [a-EvictionTimer] c.n.e.registry.AbstractInstanceRegistry : Running the evict task with compensationTime 0ms
2020-06-11 21:08:19.796 ERROR 1040 --- [et_localhost-19] c.n.e.cluster.ReplicationTaskProcessor : Batch update failure with HTTP status code 404; discarding 1 replication tasks
2020-06-11 21:08:19.796 WARN 1040 --- [et_localhost-19] c.n.eureka.util.batcher.TaskExecutors : Discarding 1 tasks of TaskBatchingWorker-target_localhost-19 due to permanent error
The dependencies are as follows:
<properties>
  <java.version>1.8</java.version>
  <spring-cloud.version>Hoxton.SR3</spring-cloud.version>
  <springfox-swagger2-version>2.9.2</springfox-swagger2-version>
  <springfox-swagger2-ui-version>2.9.2</springfox-swagger2-ui-version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
  </dependency>
</dependencies>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>${spring-cloud.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
bootstrap.yml
spring:
  application:
    name: tng-crs-eureka
server:
  port: 8761
  servlet:
    context-path: /eureka
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
Any help is highly appreciated. I have also tried disabling the batch update tasks, but it didn't work.

Finally, I figured out the mistake I had made: I had set a context-path of /eureka, and that was the reason for all this trouble. Removing the context path resolved the issue in my case.
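For reference, the working bootstrap.yml is the same file as above with the server.servlet.context-path entry removed (a sketch of the corrected config):
spring:
  application:
    name: tng-crs-eureka
server:
  port: 8761
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false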

Related

how to connect to Cloud SQL from Google DataFlow

I'm trying to create a pipeline task using the Beam Java SDK and Google Dataflow to move data from Cloud SQL to Elasticsearch.
I've created the following main method:
public static void main(String[] args) throws Exception {
    DataflowPipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
    options.setProject("staging");
    options.setTempLocation("gs://csv_to_sql_staging/temp");
    options.setRunner(DataflowRunner.class);
    options.setGcpTempLocation("gs://csv_to_sql_staging/temp");
    options.setUsePublicIps(false);
    options.setJobName("tamer-new");
    options.setSubnetwork("regions/us-central1/subnetworks/new-network");
    final List<String> SCOPES = Arrays.asList(
        "https://www.googleapis.com/auth/cloud-platform",
        "https://www.googleapis.com/auth/devstorage.full_control",
        "https://www.googleapis.com/auth/userinfo.email",
        "https://www.googleapis.com/auth/datastore",
        "https://www.googleapis.com/auth/sqlservice.admin",
        "https://www.googleapis.com/auth/pubsub");
    options.setGcpCredential(ServiceAccountCredentials
        .fromStream(new ElasticSearchIO().getClass().getResourceAsStream("/staging-b648da5d2b9b.json"))
        .createScoped(SCOPES));
    options.setServiceAccount("data-flow@staging.iam.gserviceaccount.com");
    Pipeline p = Pipeline.create(options);
    p.begin();
    PCollection<List<String>> rows = p.apply(JdbcIO.<List<String>>read()
        .withQuery("select u.id, u.name from user_table")
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
            "com.mysql.jdbc.Driver",
            "jdbc:mysql://google/nameDB_new?cloudSqlInstance=staging:europe-west1:sql-staging-instance&socketFactory=com.google.cloud.sql.mysql.SocketFactory&useUnicode=true&characterEncoding=UTF-8&user=user&password=password&useSSL=false"))
        .withRowMapper(new RowMapper<List<String>>() {
            @Override
            public List<String> mapRow(ResultSet resultSet) throws Exception {
                List<String> addRow = new ArrayList<String>();
                for (int i = 1; i <= resultSet.getMetaData().getColumnCount(); i++) {
                    addRow.add(i - 1, String.valueOf(resultSet.getObject(i)));
                }
                return addRow;
            }
        })
        .withCoder(ListCoder.of(StringUtf8Coder.of())));
    Write w = ElasticsearchIO.write().withConnectionConfiguration(
        ElasticsearchIO.ConnectionConfiguration.create(new String[] {
            "https://host:9243"
        }, "user-temp", "String").withUsername("elastic").withPassword("password"));
    rows.apply(w.compose(new SerializableFunction() {
        @Override
        public Object apply(Object input) {
            return input;
        }
    }));
    p.run().waitUntilFinish();
}
and below is the pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.harmonica.dataflow</groupId>
  <artifactId>com-harmonica-dataflow</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven-compiler-plugin.version>3.7.0</maven-compiler-plugin.version>
    <exec-maven-plugin.version>1.6.0</exec-maven-plugin.version>
    <slf4j.version>1.7.25</slf4j.version>
    <beam.version>2.19.0</beam.version>
  </properties>
  <repositories>
    <repository>
      <id>ossrh.snapshots</id>
      <name>Sonatype OSS Repository Hosting</name>
      <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
      <releases>
        <enabled>false</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
  </repositories>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>${maven-compiler-plugin.version}</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
    </plugins>
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <version>${exec-maven-plugin.version}</version>
          <configuration>
            <cleanupDaemonThreads>false</cleanupDaemonThreads>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
  <dependencies>
    <!-- Beam Lib -->
    <dependency>
      <groupId>org.apache.beam</groupId>
      <artifactId>beam-sdks-java-core</artifactId>
      <version>${beam.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.beam</groupId>
      <artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
      <version>${beam.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.beam</groupId>
      <artifactId>beam-sdks-java-io-elasticsearch</artifactId>
      <version>${beam.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.beam</groupId>
      <artifactId>beam-sdks-java-io-jdbc</artifactId>
      <version>${beam.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.beam</groupId>
      <artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
      <version>${beam.version}</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>8.0.19</version>
    </dependency>
    <dependency>
      <groupId>com.google.cloud.sql</groupId>
      <artifactId>mysql-socket-factory-connector-j-8</artifactId>
      <version>1.0.15</version>
    </dependency>
    <!-- slf4j API frontend binding with JUL backend -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>${slf4j.version}</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-jdk14</artifactId>
      <version>${slf4j.version}</version>
    </dependency>
  </dependencies>
</project>
When I execute this command:
mvn compile exec:java -Dexec.mainClass=com.dataflow.ElasticSearchIO
the worker starts successfully but then can't connect to Cloud SQL, even though I've done the following:
- I've created a service account with owner access to the project and passed it to the runner options.
- I've created a VPC network named new-network with an IP range of 190.10.0.0/16, assigned it to the pipeline options, and whitelisted this range in Cloud SQL.
However, I'm still getting this error:
Error message from worker: java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:194)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:165)
org.apache.beam.runners.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:63)
org.apache.beam.runners.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:50)
org.apache.beam.runners.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:87)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.create(IntrinsicMapTaskExecutorFactory.java:125)
org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:352)
org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:305)
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:34)
org.apache.beam.sdk.io.jdbc.JdbcIO$ReadFn$DoFnInvoker.invokeSetup(Unknown Source)
org.apache.beam.runners.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:80)
org.apache.beam.runners.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:62)
org.apache.beam.runners.dataflow.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:95)
org.apache.beam.runners.dataflow.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:75)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.createParDoOperation(IntrinsicMapTaskExecutorFactory.java:264)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.access$000(IntrinsicMapTaskExecutorFactory.java:86)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:183)
... 14 more
Caused by: java.sql.SQLException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:735)
org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:605)
org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:809)
org.apache.beam.sdk.io.jdbc.JdbcIO$ReadFn.setup(JdbcIO.java:881)
Caused by: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836)
com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197)
org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:53)
org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:355)
org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:116)
org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:731)
org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:605)
org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:809)
org.apache.beam.sdk.io.jdbc.JdbcIO$ReadFn.setup(JdbcIO.java:881)
org.apache.beam.sdk.io.jdbc.JdbcIO$ReadFn$DoFnInvoker.invokeSetup(Unknown Source)
org.apache.beam.runners.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:80)
org.apache.beam.runners.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:62)
org.apache.beam.runners.dataflow.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:95)
org.apache.beam.runners.dataflow.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:75)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.createParDoOperation(IntrinsicMapTaskExecutorFactory.java:264)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.access$000(IntrinsicMapTaskExecutorFactory.java:86)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:183)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:165)
org.apache.beam.runners.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:63)
org.apache.beam.runners.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:50)
org.apache.beam.runners.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:87)
org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory.create(IntrinsicMapTaskExecutorFactory.java:125)
org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:352)
org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:305)
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:91)
com.mysql.cj.NativeSession.connect(NativeSession.java:144)
com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:956)
com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:826)
... 32 more
Caused by: java.net.ConnectException: Connection timed out (Connection timed out)
java.net.PlainSocketImpl.socketConnect(Native Method)
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
java.net.Socket.connect(Socket.java:589)
sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:673)
sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173)
com.google.cloud.sql.core.CoreSocketFactory.createSslSocket(CoreSocketFactory.java:233)
com.google.cloud.sql.core.CoreSocketFactory.connect(CoreSocketFactory.java:185)
com.google.cloud.sql.mysql.SocketFactory.connect(SocketFactory.java:48)
com.google.cloud.sql.mysql.SocketFactory.connect(SocketFactory.java:38)
com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65)
... 35 more
Any help will be highly appreciated!
Thanks in advance.
You can use the below piece of code to establish the connection:
Pipeline p = Pipeline.create(options);
// Increase pool size based on your records
ComboPooledDataSource dataSource = new ComboPooledDataSource();
dataSource.setDriverClass("com.mysql.jdbc.Driver");
dataSource.setJdbcUrl(
    "jdbc:mysql://google/test?cloudSqlInstance=dataflowtest-:us-central1:sql-test&socketFactory=com.google.cloud.sql.mysql.SocketFactory");
dataSource.setUser("root");
dataSource.setPassword("root");
dataSource.setMaxPoolSize(10);
dataSource.setInitialPoolSize(6);
JdbcIO.DataSourceConfiguration config = JdbcIO.DataSourceConfiguration.create(dataSource);
// Add rewriteBatchedStatements=true to the JDBC URL to improve write speed
PCollection<KV<String, String>> sqlResult = p.apply(JdbcIO.<KV<String, String>>read()
    .withDataSourceConfiguration(config)
    .withQuery("select * from test_table")
    .withCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of()))
    .withRowMapper(new JdbcIO.RowMapper<KV<String, String>>() {
        private static final long serialVersionUID = 1L;
        public KV<String, String> mapRow(ResultSet resultSet) throws Exception {
            return KV.of(resultSet.getString(1), resultSet.getString(2));
        }
    }));
Add the below dependencies in pom.xml:
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-sdks-java-io-jdbc</artifactId>
  <version>2.17.0</version>
</dependency>
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>5.1.25</version>
</dependency>
<dependency>
  <groupId>com.google.cloud.sql</groupId>
  <artifactId>mysql-socket-factory</artifactId>
  <version>1.0.0</version>
</dependency>
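Note that ComboPooledDataSource in the snippet above comes from the c3p0 library, which is not in this dependency list; assuming you follow the sketch as-is, you would also need something like the below (the version shown is illustrative):
<dependency>
  <groupId>com.mchange</groupId>
  <artifactId>c3p0</artifactId>
  <version>0.9.5.2</version>
</dependency>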
This should work.
If possible, try the below code for the SQL connection:
connection = connectToCloudSql(map.get(LiteralConstant.URL.toString()),
    map.get(LiteralConstant.USERNAME.toString()), map.get(LiteralConstant.PASSWORD.toString()));
Then use the below piece of code to get the result from the SQL connection:
statement = connection.prepareCall("query");
statement.execute();
resultSet = statement.getResultSet();
ResultSetMetaData rsmd = resultSet.getMetaData();
int count = rsmd.getColumnCount();
if (!resultSet.next() || count < 1)
    throw new ConnectionFailureException("Failed to connect to Cloud SQL");
for (int k = 1; k <= count; k++) {
    row.set(rsmd.getColumnName(k), resultSet.getString(k));
}
Collect the above result into a PCollection, as sketched below.
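A minimal sketch of that step, assuming the rows were accumulated into a List<TableRow> named rows (the names and the TableRow type are my assumption here; Create and TableRowJsonCoder ship with the Beam SDK and its GCP IO module):
// Turn the collected rows into a PCollection so downstream transforms can consume them.
PCollection<TableRow> rowCollection =
    p.apply(Create.of(rows).withCoder(TableRowJsonCoder.of()));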
Note: Don't forget to enable the Cloud SQL API and the Cloud SQL Admin API.
Maven dependency:
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>5.1.25</version>
</dependency>
<dependency>
  <groupId>com.google.cloud.sql</groupId>
  <artifactId>mysql-socket-factory</artifactId> <!-- mysql-socket-factory-connector-j-6 if using 6.x.x -->
  <version>1.0.0</version>
</dependency>
The above piece of code worked in my case. Let me know if this solution works for you.

Not able to re-run failed tests with Junit5 and Surefire plugin

I am trying to re-run failed tests using JUnit 5 and the maven-surefire-plugin.
I have tried passing rerunFailingTestsCount as a command-line property and also as a configuration property in pom.xml:
mvn '-Dsurefire.rerunFailingTestsCount=2' '-Dtest=com.salesforceiq.engagementapis.TestReRun#testfailure' clean test
https://maven.apache.org/surefire-archives/surefire-2.18.1/maven-failsafe-plugin/examples/rerun-failing-tests.html
But it seems the failed tests are not re-run.
This is my simple test class
public class TestReRun {
    @Test
    public void testfailure() {
        Assert.assertTrue(false);
    }
}
This is pom.xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.0</version>
  <configuration>
    <rerunFailingTestsCount>2</rerunFailingTestsCount>
    <failIfNoTests>false</failIfNoTests>
    <useSystemClassLoader>false</useSystemClassLoader>
    <perCoreThreadCount>false</perCoreThreadCount>
    <forkCount>0</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
<dependency>
  <groupId>org.junit.platform</groupId>
  <artifactId>junit-platform-surefire-provider</artifactId>
  <version>1.2.0-M1</version>
</dependency>
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-engine</artifactId>
  <version>5.2.0-M1</version>
</dependency>
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-params</artifactId>
  <version>5.2.0-M1</version>
</dependency>
<dependency>
  <groupId>org.junit.platform</groupId>
  <artifactId>junit-platform-engine</artifactId>
  <version>1.2.0-M1</version>
</dependency>
<dependency>
  <groupId>org.junit.platform</groupId>
  <artifactId>junit-platform-suite-api</artifactId>
  <version>1.2.0-M1</version>
</dependency>
<dependency>
  <groupId>org.junit.platform</groupId>
  <artifactId>junit-platform-runner</artifactId>
  <version>1.2.0-M1</version>
</dependency>
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-api</artifactId>
  <version>5.2.0-M1</version>
</dependency>
<dependency>
  <groupId>org.junit.platform</groupId>
  <artifactId>junit-platform-launcher</artifactId>
  <version>1.2.0-M1</version>
</dependency>
Actual results:
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.11 s <<< FAILURE! - in com.salesforceiq.engagementapis.TestEngagementFetchAPIs
[ERROR] testfailure Time elapsed: 0.176 s <<< FAILURE!
java.lang.AssertionError
at com.salesforceiq.engagementapis.TestEngagementFetchAPIs.testfailure(TestEngagementFetchAPIs.java:242)
[INFO]
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestEngagementFetchAPIs.testfailure:242
[INFO]
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.805 s
[INFO] Finished at: 2019-09-21T15:40:15-07:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.22.0:test (default-test) on project siqapiautomation: There are test failures.
[ERROR]
[ERROR] Please refer to /Users/nishkam.agrawal/Documents/projects/siqapiautomation/target/surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
The feature "re-run" is supported since of 3.0.0-M4, see the Feature Matrix.
For more details see these Jiras:
Support #ParameterizedTest for JUnit 5 test reruns
Rerun Failing Tests with JUnit 5
Surefire / Failsafe rerun failed tests functionality fails with JUnit 5 if using #DisplayName
Don't use forkCount=0. It is mostly useless in non-trivial projects.
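For example, a minimal sketch of the plugin section on 3.0.0-M4 (this also assumes you drop the obsolete junit-platform-surefire-provider dependency, since recent Surefire versions detect JUnit 5 natively):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.0.0-M4</version>
  <configuration>
    <rerunFailingTestsCount>2</rerunFailingTestsCount>
  </configuration>
</plugin>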
It's not yet implemented :(
see https://github.com/junit-team/junit5/issues/1558 and https://issues.apache.org/jira/browse/SUREFIRE-1584

Re-creating the queue and re-connecting to rabbitMQ

Components Involved: Spring Config-server, Spring AMQP (RabbitMQ), Spring Config-client
Goal: Use push notification to inform config-client to refresh config.
RabbitMQ instance: From Docker Hub, I pulled the rabbitmq:3-management image and ran it.
Config-client AMQP version pom.xml:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-bus-amqp</artifactId>
  <version>1.3.1.RELEASE</version>
</dependency>
Config-server pom.xml:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-config-monitor</artifactId>
  <version>1.3.1.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
  <version>1.2.1.RELEASE</version>
</dependency>
Fault Tolerance Scenario:
- Bring down RabbitMQ service/cluster/instance.
- All config clients lose connectivity. Queues are deleted since they were created as auto-delete.
- Bring back up RabbitMQ service.
Expectation: All config clients should reconnect successfully.
Reality: This is not working. Please see the error below.
2018-03-27 09:07:12.850 WARN 21251 --- [AO2Q06fYCALSA-6] o.s.a.r.listener.BlockingQueueConsumer : Failed to declare queue:springCloudBus.anonymous.FGZPCPqzTAO2Q06fYCALSA
2018-03-27 09:07:12.851 ERROR 21251 --- [AO2Q06fYCALSA-6] o.s.a.r.l.SimpleMessageListenerContainer : Consumer received fatal exception on startup
org.springframework.amqp.rabbit.listener.QueuesNotAvailableException: Cannot prepare queue for listener. Either the queue doesn't exist or the broker will not allow us to use it.
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:548)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1335)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$DeclarationException: Failed to declare queue(s):[springCloudBus.anonymous.FGZPCPqzTAO2Q06fYCALSA]
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.attemptPassiveDeclarations(BlockingQueueConsumer.java:621)
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:520)
[common frames omitted]
Caused by: java.io.IOException: null
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'springCloudBus.anonymous.FGZPCPqzTAO2Q06fYCALSA' in vhost '/', class-id=50, method-id=10)
[common frames omitted]
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'springCloudBus.anonymous.FGZPCPqzTAO2Q06fYCALSA' in vhost '/', class-id=50, method-id=10)
at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:505)
[common frames omitted]
2018-03-27 09:07:12.852 ERROR 21251 --- [AO2Q06fYCALSA-6] o.s.a.r.l.SimpleMessageListenerContainer : Stopping container from aborted consumer
2018-03-27 09:07:12.853 INFO 21251 --- [AO2Q06fYCALSA-6] o.s.a.r.l.SimpleMessageListenerContainer : Waiting for workers to finish.
2018-03-27 09:07:12.853 INFO 21251 --- [AO2Q06fYCALSA-6] o.s.a.r.l.SimpleMessageListenerContainer : Successfully waited for workers to finish.
Description of the error, from my understanding:
The config client uses the existing broker connection; the listener tries to reconnect, but the queue is missing. It makes 3 retries by default. This is expected, since we are going through a scenario where the RabbitMQ service is brought down and restarted without persistent data. The issue is that the reconnection fails. I know from many articles that we cannot redeclare the queue without using an admin; for that, we would create an XML config file with property beans declaring the admin and the other pieces.
What is the ask?
- Would it be ideal if all of this were taken care of by default?
- Also, I still don't have a working solution. NEED HELP!
I just tested it with Boot 2.0 and Finchley.M9 (bus 2.0.0.M7) with no problems...
2018-03-27 13:25:06.125 INFO 36716 --- [ main] c.s.b.r.p.RabbitExchangeQueueProvisioner : declaring queue for inbound: springCloudBus.anonymous.tySvAS8BSpS7OtQ_VCeiVQ, bound to: springCloudBus
...
2018-03-27 13:26:38.220 ERROR 36716 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error; protocol method: #method<connection.close>(reply-code=320, reply-text=CONNECTION_FORCED - broker forced connection closure with reason 'shutdown', class-id=0, method-id=0)
2018-03-27 13:26:58.757 INFO 36716 --- [pS7OtQ_VCeiVQ-6] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2018-03-27 13:26:58.761 INFO 36716 --- [pS7OtQ_VCeiVQ-6] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory#52c8295b:5/SimpleConnection@74846ead [delegate=amqp://guest@127.0.0.1:5672/, localPort= 49746]
2018-03-27 13:26:58.762 INFO 36716 --- [pS7OtQ_VCeiVQ-6] o.s.amqp.rabbit.core.RabbitAdmin : Auto-declaring a non-durable, auto-delete, or exclusive Queue (springCloudBus.anonymous.tySvAS8BSpS7OtQ_VCeiVQ) durable:false, auto-delete:true, exclusive:true. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
The RabbitExchangeQueueProvisioner explicitly sets up a RabbitAdmin to re-declare the queue after the connection is re-established.
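(Outside Spring Cloud Stream, the plain Spring AMQP equivalent is simply having a RabbitAdmin bean in the context; a minimal sketch:)
// A RabbitAdmin bean re-declares the Queue/Exchange/Binding beans it finds in the
// application context whenever a new connection is opened, covering broker restarts.
@Bean
public RabbitAdmin rabbitAdmin(org.springframework.amqp.rabbit.connection.ConnectionFactory cf) {
    return new RabbitAdmin(cf);
}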
I'll try with older versions now...
EDIT
Same result with boot 1.5.10 and Edgware.SR3 (bus 1.3.3.RELEASE).
EDIT2
Same result with the 1.3.1 bus starter (brings in 1.2.1 stream rabbit). Works fine.

Neo4J OGM Java connection establishment issues

I am using Neo4j 3.2.0 Community Edition, and I was able to connect, establish a connection properly using the traditional Java driver, and run Cypher queries.
However, I wanted to create a rich data model, so I imported the following dependencies:
<dependency>
  <groupId>org.neo4j</groupId>
  <artifactId>neo4j-ogm-core</artifactId>
  <version>2.1.3</version>
</dependency>
<dependency>
  <groupId>org.neo4j</groupId>
  <artifactId>neo4j-ogm-bolt-driver</artifactId>
  <version>2.1.3</version>
</dependency>
After that, I wrote the following code to establish the connection:
Configuration configuration = new Configuration();
configuration.driverConfiguration()
    .setDriverClassName("org.neo4j.ogm.drivers.bolt.driver.BoltDriver")
    .setURI("bolt://localhost:7687");
SessionFactory sessionFactory = new SessionFactory(configuration, "org.neo4j.example.domain");
Session session = sessionFactory.openSession();
But irrespective of whether my Neo4j server is turned on or not, it establishes a connection, and the logs don't show any errors, just the following three lines from the logging system:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Any help would be appreciated.
Adding the following dependency fixes it:
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>1.7.25</version>
</dependency>
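With a logger binding in place you will at least see what the driver is doing. Note also that opening a session does not necessarily hit the server, which would explain why the connection appears to succeed even with the server down; a quick way to verify real connectivity is to force a round-trip (a sketch; the Cypher text is illustrative):
// Forces a network round-trip; this throws if the server is unreachable.
session.query("MATCH (n) RETURN count(n) AS c", java.util.Collections.emptyMap());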

Maven simple-weather tutorial

I am following the simple-weather tutorial in Maven by Example. When I execute the program, I get the exception below. Does this need any specific classpath setting?
POM
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.maven.weather</groupId>
  <artifactId>simple-weather</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>
  <name>simple-weather</name>
  <url>http://maven.apache.org</url>
  <licenses>
    <license>
      <name>Apache 2</name>
      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
      <distribution>repo</distribution>
      <comments>A business-friendly OSS license</comments>
    </license>
  </licenses>
  <organization>
    <name>Sonatype</name>
    <url>http://www.sonatype.com</url>
  </organization>
  <developers>
    <developer>
      <id>jason</id>
      <name>Jason Van Zyl</name>
      <email>jason@maven.org</email>
      <url>http://www.sonatype.com</url>
      <organization>Sonatype</organization>
      <organizationUrl>http://www.sonatype.com</organizationUrl>
      <roles>
        <role>developer</role>
      </roles>
      <timezone>-6</timezone>
    </developer>
  </developers>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.14</version>
    </dependency>
    <dependency>
      <groupId>dom4j</groupId>
      <artifactId>dom4j</artifactId>
      <version>1.6.1</version>
    </dependency>
    <dependency>
      <groupId>jaxen</groupId>
      <artifactId>jaxen</artifactId>
      <version>1.1.1</version>
    </dependency>
    <dependency>
      <groupId>velocity</groupId>
      <artifactId>velocity</artifactId>
      <version>1.5</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
Main.java
package com.example.maven.weather;

import java.io.InputStream;
import org.apache.log4j.PropertyConfigurator;

public class Main {
    public static void main(String[] args) throws Exception {
        // Configure Log4J
        PropertyConfigurator.configure(Main.class.getClassLoader()
            .getResource("log4j.properties"));
        // Read the Zip Code from the Command-line (if none supplied, use 60202)
        String zipcode = "60202";
        try {
            zipcode = args[0];
        } catch (Exception e) {}
        // Start the program
        new Main(zipcode).start();
    }

    private String zip;

    public Main(String zip) {
        this.zip = zip;
    }

    public void start() throws Exception {
        // Retrieve Data
        InputStream dataIn = new YahooRetriever().retrieve(zip);
        // Parse Data
        Weather weather = new YahooParser().parse(dataIn);
        // Format (Print) Data
        System.out.print(new WeatherFormatter().format(weather));
    }
}
Run:
mvn exec:java -Dexec.mainClass=com.example.maven.weather.Main
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building simple-weather 1.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> exec-maven-plugin:1.2.1:java (default-cli) @ simple-weather >>>
[INFO]
[INFO] <<< exec-maven-plugin:1.2.1:java (default-cli) @ simple-weather <<<
[INFO]
[INFO] --- exec-maven-plugin:1.2.1:java (default-cli) @ simple-weather ---
[WARNING]
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:297)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NullPointerException
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:433)
at org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:336)
at com.example.maven.weather.Main.main(Main.java:12)
... 6 more
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.902s
[INFO] Finished at: Thu May 30 14:23:06 IST 2013
[INFO] Final Memory: 4M/15M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2.1:java (default-cli) on project simple-weather: An exception occured while executing the Java class. null: InvocationTargetException: NullPointerException -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
It seems like your application can't find log4j.properties.
Do you use the standard Maven project structure? Where is log4j.properties located?
The 'resources' folder should be under src/main (src/main/resources), not src/resources.
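For reference, a minimal log4j.properties to place under src/main/resources might look like this (a sketch; the level and pattern are illustrative):
# Send everything at INFO and above to the console
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %-5p %c - %m%n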
