Resilience4j rate limiter is not working properly in Project Reactor

I'm currently researching the resilience4j library and for some reason the following code doesn't work as expected:
@Test
public void testRateLimiterProjectReactor()
{
    // The configuration below will allow 2 requests per second and a "timeout" of 2 seconds.
    RateLimiterConfig config = RateLimiterConfig.custom()
            .limitForPeriod(2)
            .limitRefreshPeriod(Duration.ofSeconds(1))
            .timeoutDuration(Duration.ofSeconds(2))
            .build();

    // Step 2.
    // Create a RateLimiter and use it.
    RateLimiterRegistry registry = RateLimiterRegistry.of(config);
    RateLimiter rateLimiter = registry.rateLimiter("myReactorServiceNameLimiter");

    // Step 3.
    Flux<Integer> flux = Flux.from(Flux.range(0, 10))
            .transformDeferred(RateLimiterOperator.of(rateLimiter))
            .log();

    StepVerifier.create(flux)
            .expectNextCount(10)
            .expectComplete()
            .verify();
}
According to the official examples here and here, this should limit the request() to 2 elements per second. However, the logs show it fetches all of the elements immediately:
15:08:24.587 [main] DEBUG reactor.util.Loggers - Using Slf4j logging framework
15:08:24.619 [main] INFO reactor.Flux.Defer.1 - onSubscribe(RateLimiterSubscriber)
15:08:24.624 [main] INFO reactor.Flux.Defer.1 - request(unbounded)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(0)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(1)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(2)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(3)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(4)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(5)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(6)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(7)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(8)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(9)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onComplete()
I don't see what's wrong here. What am I missing?

As already answered in the comments above, the RateLimiter tracks the number of subscriptions, not the number of emitted elements. To rate-limit elements you can use limitRate (together with buffer and delayElements).
For example:
Flux.range(1, 100)
    .delayElements(Duration.ofMillis(100)) // imitates a publisher that produces elements at a certain rate
    .log()
    .limitRate(10)                         // requests up to 10 elements at a time from the publisher
    .buffer(10)                            // groups the integers into batches of 10
    .delayElements(Duration.ofSeconds(2))  // emits one batch every 2 seconds
    .subscribe(System.out::println);
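If the goal is instead to have the resilience4j limiter pace individual elements, one option is to give each element its own subscription so that every element consumes a permit. A minimal sketch, reusing the rateLimiter from the question and assuming each element stands for a separate call you want to throttle:
Flux<Integer> limited = Flux.range(0, 10)
        // concatMap subscribes to one inner Mono at a time, so each element
        // is a separate subscription and therefore acquires one permit
        .concatMap(i -> Mono.just(i)
                .transformDeferred(RateLimiterOperator.of(rateLimiter)));

StepVerifier.create(limited)
        .expectNextCount(10)
        .expectComplete()
        .verify(); // with 2 permits per second this takes roughly 5 seconds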

Related

Issue with Docker container deployment via Elastic Beanstalk: basex 9.3.1

I have this Dockerfile:
FROM basex/basexhttp:9.3.1
I deploy it on Elastic Beanstalk but I get the following error:
[main] INFO org.eclipse.jetty.util.log - Logging initialized #549ms to org.eclipse.jetty.util.log.Slf4jLog
[main] INFO org.eclipse.jetty.server.Server - jetty-9.4.24.v20191120; built: 2019-11-20T21:37:49.771Z; git: 363d5f2df3a8a28de40604320230664b9c793c16; jvm 1.8.0_212-b04
[main] WARN org.eclipse.jetty.webapp.WebAppContext - Failed startup of context o.e.j.w.WebAppContext#31a5c39e{/,null,UNAVAILABLE}{/srv/basex/webapp}
java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false
    at org.eclipse.jetty.webapp.WebInfConfiguration.makeTempDirectory(WebInfConfiguration.java:499)
    at org.eclipse.jetty.webapp.WebInfConfiguration.resolveTempDirectory(WebInfConfiguration.java:468)
    at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:138)
    at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:488)
    at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:523)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
    at org.eclipse.jetty.server.Server.start(Server.java:407)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:100)
    at org.eclipse.jetty.server.Server.doStart(Server.java:371)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
    at org.basex.BaseXHTTP.<init>(BaseXHTTP.java:129)
    at org.basex.BaseXHTTP.main(BaseXHTTP.java:53)
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started ServerConnector#48eff760{HTTP/1.1,[http/1.1]}{0.0.0.0:8984}
[main] INFO org.eclipse.jetty.server.Server - Started #936ms
java.lang.NullPointerException
and then the container stops.
I tried to use the same image and run it directly as the root user, and it starts and works perfectly fine. This is the command:
docker run -d b58e4a50371d
Any suggestions/help appreciated.

Spring Reactor - Wait until a Mono is finished, then run the next Mono

Let's say I have a repository.save(..) method which returns a Mono.
Also let's say I have a repository.findByEmail(..) which returns a Mono.
Problem:
I want the first Mono to finish BEFORE the second Mono runs.
repository.save(..).then(repository.findByEmail(..))
However, the second Mono here always seems to get executed first.
I was under the impression that .then(..) lets the first Mono finish and then plays another Mono.
The source code says:
Let this {@link Mono} complete then play another Mono.
What is the solution to my problem?
What makes you think that this operator doesn't behave as expected?
The following example shows it does:
Mono.just("first").log()
.then(Mono.just("second")).log()
.subscribe();
Logs:
[main] INFO reactor.Mono.IgnoreThen.2 - | onSubscribe([Fuseable] MonoIgnoreThen.ThenIgnoreMain)
[main] INFO reactor.Mono.IgnoreThen.2 - | request(unbounded)
[main] INFO reactor.Mono.Just.1 - | onSubscribe([Synchronous Fuseable] Operators.ScalarSubscription)
[main] INFO reactor.Mono.Just.1 - | request(unbounded)
[main] INFO reactor.Mono.Just.1 - | onNext(first)
[main] INFO reactor.Mono.Just.1 - | onComplete()
[main] INFO reactor.Mono.IgnoreThen.2 - | onNext(second)
[main] INFO reactor.Mono.IgnoreThen.2 - | onComplete()
Please add log operators and share the logs in your question.
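One common cause for the second call appearing to run first (a guess, since the repository code isn't shown) is that repository.findByEmail(..) does its work eagerly when the chain is assembled, before anything is subscribed. then() only defers the subscription to the second Mono, not the method call that builds it. Wrapping the call in Mono.defer postpones it until the first Mono completes; entity and email below are placeholders:
repository.save(entity)
    .then(Mono.defer(() -> repository.findByEmail(email))) // findByEmail(..) is now invoked only after save completes
    .subscribe();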

How to create an OntModel for an ontology which imports another ontology?

I have an ontology that imports another ontology (both are stored in the same folder on my machine, created using Protege and saved as RDF/XML). When I create the ontology model in my Java application through Jena, I get the following output:
0 [main] DEBUG org.apache.jena.riot.system.stream.JenaIOEnvironment - Failed to find configuration: location-mapping.ttl;location-mapping.rdf;location-mapping.n3;etc/location-mapping.rdf;etc/location-mapping.n3;etc/location-mapping.ttl
297 [main] DEBUG com.hp.hpl.jena.util.FileManager - Add location: LocatorFile
300 [main] DEBUG com.hp.hpl.jena.util.FileManager - Add location: ClassLoaderLocator
301 [main] DEBUG com.hp.hpl.jena.util.LocationMapper - Failed to find configuration: file:location-mapping.rdf;file:location-mapping.n3;file:location-mapping.ttl;file:etc/location-mapping.rdf;file:etc/location-mapping.n3;file:etc/location-mapping.ttl
301 [main] DEBUG com.hp.hpl.jena.util.FileManager - Add location: LocatorFile
301 [main] DEBUG com.hp.hpl.jena.util.FileManager - Add location: LocatorURL
301 [main] DEBUG com.hp.hpl.jena.util.FileManager - Add location: ClassLoaderLocator
301 [main] DEBUG com.hp.hpl.jena.util.FileManager - Add location: LocatorFile
301 [main] DEBUG com.hp.hpl.jena.util.FileManager - Add location: LocatorURL
301 [main] DEBUG com.hp.hpl.jena.util.FileManager - Add location: ClassLoaderLocator
301 [main] DEBUG com.hp.hpl.jena.util.FileManager - Found: file:ont-policy.rdf (ClassLoaderLocator)
1002 [main] DEBUG com.hp.hpl.jena.util.FileManager - readModel(model,http://www.semanticweb.org/myOnt)
1002 [main] DEBUG com.hp.hpl.jena.util.FileManager - readModel(model,http://www.semanticweb.org/myOnt, null)
1002 [main] DEBUG com.hp.hpl.jena.util.FileManager - Not mapped: http://www.semanticweb.org/myOnt
2065 [main] WARN com.hp.hpl.jena.ontology.OntDocumentManager - An error occurred while attempting to read from http://www.semanticweb.org/myOnt. Msg was 'http://www.semanticweb.org/myOnt'.
This exception is also thrown:
com.hp.hpl.jena.shared.DoesNotExistException
I understand that it is unable to find the imported ontology, and that one solution might be to create separate models for both ontologies, but there's an object property in the parent ontology which relates one of its classes to the classes of the imported ontology.
What can I do?
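One common workaround, sketched below under the assumption that the ontology URI and file paths are placeholders, is to tell Jena's OntDocumentManager where to find the imported ontology locally before reading the parent ontology, so both documents end up in a single OntModel and the object property between them stays intact:
OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
OntDocumentManager dm = model.getDocumentManager();
// map the import's URI to the local copy so Jena does not try to dereference it over HTTP
dm.addAltEntry("http://www.semanticweb.org/importedOnt", "file:/path/to/importedOnt.owl");
model.read("file:/path/to/myOnt.owl");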

Neo4j server not starting up

I am using Neo4j 2.3.0-M03 community version. I have created a database using the Neo4j import tool. Now when I try to start the Neo4j server, it fails. Any advice? Thanks in advance!
Starting Neo4j Server...WARNING: not changing user
process [21597]... waiting for server to be ready......... Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
The log is as follows:
2015-10-14 14:55:35.438-0400 INFO No SSL certificate found, generating a self-signed certificate..
14:55:35.936 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework
14:55:36.105 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
14:55:36.106 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
14:55:36.107 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
14:55:36.107 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: true
14:55:36.108 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 7
14:55:36.108 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
14:55:36.109 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available
14:55:36.109 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
14:55:36.110 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: unavailable
14:55:36.110 [main] DEBUG i.n.util.internal.PlatformDependent - You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.
14:55:36.110 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
14:55:36.110 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
14:55:36.110 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
14:55:36.119 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetectionLevel: simple
14:55:36.135 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: unpooled
14:55:36.135 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
2015-10-14 14:55:41.341-0400 INFO Successfully started database
2015-10-14 14:55:41.405-0400 INFO Starting HTTP on port 7474 (32 threads available)
2015-10-14 14:55:41.659-0400 INFO Successfully shutdown Neo4j Server
2015-10-14 14:55:42.224-0400 INFO Successfully stopped database
2015-10-14 14:55:42.227-0400 ERROR Failed to start Neo4j: Starting Neo4j failed: tried to access field org.neo4j.server.rest.repr.RepresentationFormat.mediaType from class org.neo4j.server.rest.repr.RepresentationFormatRepository Starting Neo4j failed: tried to access field org.neo4j.server.rest.repr.RepresentationFormat.mediaType from class org.neo4j.server.rest.repr.RepresentationFormatRepository
org.neo4j.server.ServerStartupException: Starting Neo4j failed: tried to access field org.neo4j.server.rest.repr.RepresentationFormat.mediaType from class org.neo4j.server.rest.repr.RepresentationFormatRepository
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:67)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:234)
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:96)
at org.neo4j.server.CommunityBootstrapper.start(CommunityBootstrapper.java:48)
at org.neo4j.server.CommunityBootstrapper.main(CommunityBootstrapper.java:35)
Caused by: java.lang.IllegalAccessError: tried to access field org.neo4j.server.rest.repr.RepresentationFormat.mediaType from class org.neo4j.server.rest.repr.RepresentationFormatRepository
at org.neo4j.server.rest.repr.RepresentationFormatRepository.<init>(RepresentationFormatRepository.java:46)
at org.neo4j.server.AbstractNeoServer.createDefaultInjectables(AbstractNeoServer.java:641)
at org.neo4j.server.AbstractNeoServer.configureWebServer(AbstractNeoServer.java:360)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:216)
... 3 more
This seems like a permission issue on the lib folder of the Neo4j installation directory.
Try giving full permissions to the Neo4j installation folder.
I would also suggest using a stable version rather than a milestone version.
I could fix it. There was some problem with my embedded database. I tried to run the server with its default graph.db and it ran successfully. Now I have to identify what's wrong with the embedded database that I created using the import tool.

Sonar infinite loop in Jenkins

I'm invoking a standalone sonar analysis under Jenkins with these versions:
Jenkins: 1.529
Jenkins Sonar Plugin: 2.1
Sonar: 3.5.1
Sonar is using the default H2 database. When I launch a build on Jenkins, it starts correctly but goes into an infinite loop at the end of the following log; I let it run for 3 days without any result... Does anybody know where this comes from?
[SRC] $ /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarForJenkins2.0/bin/sonar-runner ******** ******** -Dsonar.host.url=http://192.168.1.1:9000/sonar ******** ******** -Dsonar.projectBaseDir=/var/lib/jenkins/workspace/Project/SRC -Dsonar.language=py -Dsonar.projectName=Project -Dsonar.projectVersion=1.0 -Dsonar.projectKey=Project -Dsonar.sources=server/apps/Project/
Runner configuration file: /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarForJenkins2.0/conf/sonar-runner.properties
Project configuration file: NONE
Runner version: 2.0
Java version: 1.6.0_27, vendor: Sun Microsystems Inc.
OS name: "Linux", version: "3.2.0-52-generic", arch: "amd64"
Default locale: "fr_FR", source code encoding: "UTF-8" (analysis is platform dependent)
Server: http://192.168.1.1:9000/sonar
Work directory: /var/lib/jenkins/workspace/Project/SRC/.sonar
15:29:43.167 INFO .s.b.b.BatchSettings - Load batch settings
15:29:43.260 INFO o.s.h.c.FileCache - User cache: /var/lib/jenkins/.sonar/cache
15:29:43.265 INFO atchPluginRepository - Install plugins
15:29:43.640 INFO .s.b.b.TaskContainer - ------------- Executing Project Scan
15:29:43.947 INFO b.b.JdbcDriverHolder - Install JDBC driver
15:29:43.949 INFO .b.ProjectExclusions - Apply project exclusions
15:29:43.952 WARN .c.p.DefaultDatabase - H2 database should be used for evaluation purpose only
15:29:43.952 INFO o.s.c.p.Database - Create JDBC datasource for jdbc:h2:tcp://localhost/sonar
15:29:44.020 INFO actDatabaseConnector - Initializing Hibernate
15:29:45.683 INFO .s.b.s.ScanContainer - ------------- Inspecting Project
15:29:45.687 INFO .b.b.ProjectSettings - Load module settings
15:29:45.895 INFO .s.b.ProfileProvider - Quality profile : [name=Sonar way,language=py]
15:29:45.908 INFO s.f.ExclusionFilters - Excluded tests:
15:29:45.908 INFO s.f.ExclusionFilters - **/package-info.java
15:29:45.929 INFO nPluginsConfigurator - Configure Maven plugins
15:29:45.954 INFO org.sonar.INFO - Compare to previous analysis (2013-09-09)
15:29:45.974 INFO org.sonar.INFO - Compare over 5 days (2013-09-04, analysis of 2013-09-03 07:57:38.902)
15:29:45.980 INFO org.sonar.INFO - Compare over 30 days (2013-08-10, analysis of 2013-08-12 19:34:45.97)
15:29:46.057 INFO s.f.FileSystemLogger - Base dir: /var/lib/jenkins/workspace/Project/SRC
15:29:46.057 INFO s.f.FileSystemLogger - Working dir: /var/lib/jenkins/workspace/Project/SRC/.sonar
15:29:46.057 INFO s.f.FileSystemLogger - Source dirs: /var/lib/jenkins/workspace/Project/SRC/server/apps/Project
15:29:46.057 INFO s.f.FileSystemLogger - Source encoding: UTF-8, default locale: fr_FR
15:29:46.096 INFO p.PhasesTimeProfiler - Sensor org.sonar.plugins.python.PythonSourceImporter#549b6976...
15:29:46.825 INFO p.PhasesTimeProfiler - Sensor org.sonar.plugins.python.PythonSourceImporter#549b6976 done: 729 ms
15:29:46.825 INFO p.PhasesTimeProfiler - Sensor PythonSquidSensor...
15:29:48.152 INFO p.PhasesTimeProfiler - Sensor PythonSquidSensor done: 1327 ms
15:29:48.153 INFO p.PhasesTimeProfiler - Sensor PythonXunitSensor...
15:29:48.160 INFO p.PhasesTimeProfiler - Sensor PythonXunitSensor done: 7 ms
15:29:48.161 INFO p.PhasesTimeProfiler - Sensor PythonCoverageSensor...
15:29:48.161 INFO p.PhasesTimeProfiler - Sensor PythonCoverageSensor done: 0 ms
15:29:48.161 INFO p.PhasesTimeProfiler - Sensor CpdSensor...
15:29:48.161 INFO o.s.p.cpd.CpdSensor - SonarBridgeEngine is used
15:29:48.169 INFO s.p.c.i.IndexFactory - Cross-project analysis disabled
15:29:48.497 INFO p.PhasesTimeProfiler - Sensor CpdSensor done: 336 ms
15:29:48.497 INFO p.PhasesTimeProfiler - Sensor ProfileSensor...
15:29:48.522 INFO p.PhasesTimeProfiler - Sensor ProfileSensor done: 25 ms
15:29:48.523 INFO p.PhasesTimeProfiler - Sensor ProfileEventsSensor...
15:29:48.539 INFO p.PhasesTimeProfiler - Sensor ProfileEventsSensor done: 17 ms
15:29:48.540 INFO p.PhasesTimeProfiler - Sensor ProjectLinksSensor...
15:29:48.543 INFO p.PhasesTimeProfiler - Sensor ProjectLinksSensor done: 3 ms
15:29:48.543 INFO p.PhasesTimeProfiler - Sensor VersionEventsSensor...
15:29:48.551 INFO p.PhasesTimeProfiler - Sensor VersionEventsSensor done: 8 ms
15:29:48.984 INFO p.PhasesTimeProfiler - Execute decorators...
15:29:51.132 INFO s.c.c.ScanGraphStore - Persist graphs of components
15:29:51.193 INFO .b.p.UpdateStatusJob - ANALYSIS SUCCESSFUL, you can browse http://192.168.1.1:9000/sonar
15:29:51.194 INFO b.p.PostJobsExecutor - Executing post-job class org.sonar.plugins.core.batch.IndexProjectPostJob
15:29:51.213 INFO b.p.PostJobsExecutor - Executing post-job class org.sonar.plugins.dbcleaner.ProjectPurgePostJob
15:29:51.223 INFO .p.d.p.KeepOneFilter - -> Keep one snapshot per day between 2013-08-26 and 2013-09-08
15:29:51.225 INFO .p.d.p.KeepOneFilter - -> Keep one snapshot per week between 2012-09-10 and 2013-08-26
15:29:51.227 INFO .p.d.p.KeepOneFilter - -> Keep one snapshot per month between 2008-09-15 and 2012-09-10
15:29:51.230 INFO .d.p.DeleteAllFilter - -> Delete data prior to: 2008-09-15
15:29:51.232 INFO o.s.c.purge.PurgeDao - -> Clean Project [id=1]
15:29:51.236 INFO o.s.c.purge.PurgeDao - <- Clean snapshot 39274
The H2 database is for evaluation purposes only. Could you please move to a "real" database? See http://docs.codehaus.org/display/SONAR/Requirements.