With StepVerifier it is very easy to check whether a provided Mono has completed (just use the expectComplete() method of StepVerifier), but what should I do if I need to check the opposite case?
I tried to use this approach:
@Test
public void neverMonoTest() {
    Mono<String> neverMono = Mono.never();

    StepVerifier.create(neverMono)
            .expectSubscription()
            .expectNoEvent(Duration.ofSeconds(1))
            .thenCancel()
            .verify();
}
and such a test passes. But this is a false positive: when I replace Mono.never() with Mono.empty(), the test is still green.
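In other words, the following variant, which I would expect to fail on the completion event, also goes green:

@Test
public void emptyMonoAlsoPassesTest() {
    // Completes immediately, yet the expectations below still pass
    Mono<String> emptyMono = Mono.empty();

    StepVerifier.create(emptyMono)
            .expectSubscription()
            .expectNoEvent(Duration.ofSeconds(1)) // should trip on onComplete, but doesn't
            .thenCancel()
            .verify();
}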
Is there a better, more reliable method to check that a Mono does not complete (within a given window of time, of course)?
It looks like you're hitting a bug in reactor-test, and unfortunately one that doesn't look likely to be solved any time soon:
From my memory, that was a long-standing flaw in the design of reactor-test. Most likely it will be fixed once reactor-test is rewritten from scratch or significantly reworked.
Downgrading to 3.1.2 seems to fix the problem, but that's quite a downgrade. The only other workaround I'm aware of was posted by PyvesB here, and involves waiting for the Mono to time out:
Mono<String> mono = Mono.never();

StepVerifier.create(mono.timeout(Duration.ofSeconds(1L)))
        .verifyError(TimeoutException.class);
When the next release rolls out, you should be able to do:
Mono<String> mono = Mono.never();

StepVerifier.create(mono)
        .expectTimeout(Duration.ofSeconds(1))
        .verify();
...as a more concise alternative.
I am trying to synchronize a resource with Spring's WebClient:
this.semaphore.acquire();
webClient
        .post()
        .uri("/a")
        .bodyValue(payload)
        .retrieve()
        .bodyToMono(String.class)
        // release
        .doFinally(st -> this.semaphore.release())
        .switchIfEmpty(Mono.just("a"))
        .onErrorResume(Exception.class, e -> Mono.empty())
        .doOnNext(body -> { /* handle the response */ })
        .subscribe();
Is doFinally sufficient to handle the release?
If not, what are the "escape" points?
This will clean up your resources if your Mono is cancelled, completes, or errors out, which are all the ways in which a Mono can end.
However, a Mono does not necessarily have to end, and in that case the doFinally hook will never be executed.
So it depends on how your WebClient is configured for cases where the external API fails to respond: normally there should be a timeout and a maximum number of retries. In that case, your code should be correct.
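As a sketch of that point (the five-second timeout here is an assumption for illustration, not something taken from your code): bounding the call with an explicit timeout guarantees the Mono terminates, so doFinally always runs and the semaphore is always released:

this.semaphore.acquire();
webClient
        .post()
        .uri("/a")
        .bodyValue(payload)
        .retrieve()
        .bodyToMono(String.class)
        .timeout(Duration.ofSeconds(5))            // forces an error signal if no response arrives
        .doFinally(st -> this.semaphore.release()) // runs on complete, error, or cancel
        .onErrorResume(Exception.class, e -> Mono.empty())
        .subscribe();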
NOTE: the release may not happen on the same thread as the acquire. Depending on the resource, this might actually be a problem. For example, a ReentrantReadWriteLock is owned by the thread that acquired it. I do not know if this problem exists with your semaphore.
It would be very useful for me to see in the terminal what requests are executed and how long they take.
Logging of HTTP requests works fine, but I did not find a similar function for SQL.
Is there a way to enable logging globally using config.yaml or in prepare() of ApplicationChannel?
Looks like I found a dirty-hack solution:
Future prepare() async {
  logger.onRecord.listen((rec) => print("$rec ${rec.error ?? ""} ${rec.stackTrace ?? ""}"));
  logger.parent.level = Level.FINE;
  ...
}
We need to set the log level to be more verbose than the default INFO; all SQL queries log their requests at the FINE level.
I expected that this setting could be loaded from config.yaml, but I did not find anything similar.
More about log levels can be found here.
I have an XQuery query intended to wipe test documents from the database before each test that is run. Essentially, it looks for a certain element ('forTestOnly') to be present as a top-level element in a document, and if it finds one, it deletes the document. This query is run before each test to ensure the tests don't interfere with one another (we have about 200 tests using this). The exact XQuery is as follows:
xquery version "1.0-ml";
import module namespace dls = "http://marklogic.com/xdmp/dls" at "/MarkLogic/dls.xqy";

let $deleteNonManagedDocs :=
  for $testDoc in /*[forTestOnly]
  let $testDocUri := fn:base-uri($testDoc)
  where fn:not(dls:document-is-managed($testDocUri))
  return xdmp:document-delete($testDocUri)
let $deleteManagedDocs :=
  for $testDoc in cts:search(/*[forTestOnly], dls:documents-query())
  let $testDocUri := fn:base-uri($testDoc)
  return dls:document-delete($testDocUri, fn:false(), fn:false())
return ($deleteManagedDocs, $deleteNonManagedDocs)
While it seems to work fine most of the time, it has recently begun to sporadically spiral out of control. At some point during the test execution it begins to run for a near-indefinite amount of time (I usually stop it after 600-700 seconds), though most of the time it takes less than a second. The database used for testing is not large (it has a few basic seed documents, but nothing compared to a production database), and typically each test creates only a handful of documents with 'forTestOnly', if that.
The query seems simple enough, and although running it 200 times in relatively quick succession would understandably put a strain on the database, I can't imagine it would cause this kind of lag (the tests are Grails integration tests, and the entire execution takes a little over two minutes). Any ideas why the run time is so long?
As a side note, I have verified that when the tests stall, it is indeed after the XQuery has begun to run, and not before, in some sort of test wiring/execution.
Any help is greatly appreciated.
The query might look simple, but it isn't necessarily simple to evaluate. Those dls function calls could be doing anything, so it's tricky to estimate the complexity. The use of DLS also means that we don't know how much version history has to be deleted to delete each document.
One possibility is that you've discovered a bug. It might already be fixed, which is a good reason why you should always report the full version of the software you're using. The answer may be as simple as upgrading to pick up the fix.
Another possibility is that your test suite ends up running all of this work in a single high-level evaluation, so everything's in memory until the end. That could use enough memory to drive the server into swap. That would explain the recent "spiral out of control" behavior. Check the OS and see what it says.
Next, set the group file-log-level=Debug and check ErrorLog.txt while one of these slow events is happening. If you see XDMP-DEADLOCK messages, you may have a problem where two or more copies of this delete query are running at the same time. MarkLogic has automatic deadlock detection and resolution, but it's faster to avoid the deadlock in the first place.
Some logging might also help determine where the time is spent. Something like:
let $deleteNonManagedDocs :=
  for $testDoc in /*[forTestOnly]
  let $testDocUri := fn:base-uri($testDoc)
  where fn:not(dls:document-is-managed($testDocUri))
  return (
    xdmp:log(text { 'unmanaged', $testDocUri }),
    xdmp:document-delete($testDocUri))
let $deleteManagedDocs :=
  for $testDoc in cts:search(/*[forTestOnly], dls:documents-query())
  let $testDocUri := fn:base-uri($testDoc)
  let $_ := xdmp:log(text { 'managed', $testDocUri })
  return dls:document-delete($testDocUri, fn:false(), fn:false())
return ()
Finally, you should also be able to simplify the query a bit. Since you're deleting everything, you can just ignore DLS.
xdmp:document-delete(
  cts:uris(
    (), (),
    cts:element-query(xs:QName('forTestOnly'), cts:and-query(()))))
This would be even simpler and more efficient if you set a collection on every test document: xdmp:collection-delete('test-docs').
I have the following very simple piece of code in Ada which is giving me grief. I trimmed the code down to the minimum needed to show the problem; the only thing you need to know is that Some_Task is a task type:
task body TB is
   Task1 : Some_Task_Ref;
begin
   Task1 := new Some_Task;
   loop
      Put_Line("Main loop is running, whatever...");
      delay 5.0;
   end loop;
end TB;
From what I understand about task activation in Ada, this should be sufficient: I'm creating a task of type Some_Task, and I don't have to do anything with it; it will execute its main loop without any intervention. It's not like in Java, where you have to call a "start" method on the task object.
But if I'm correct, why is the compiler refusing to build, giving me the error:
warning: variable "Task1" is assigned but never read
Why should I be forced to "read" Task1? It's a task; all it needs to do is run... What am I missing?
Note: this seems to happen only when I use GNAT in "GNAT mode" (switch -gnatg). Unfortunately I need this mode for some advanced pragmas, but it seems to introduce some overzealous checks like the one causing the problem above. How can I deactivate that check?
It's a warning, not an error, and it does not prevent building an executable (unless you've turned on "treat warnings as errors"). It's a hint from the compiler that you may have made a mistake by creating a variable that is never used. You can tell the compiler that you don't intend to use Task1 by declaring it as a constant, like this:
Task1 : constant Some_Task_Ref := new Some_Task;
Just to answer this question, since the answer was posted in a comment, which cannot be marked as an answer.
As Holt said (all props to him), this can be fixed by using:
pragma Warnings (Off, Some_Task_Ref);
I am migrating from Z3 version 3.2 to version 4.0.
However, code that was working earlier no longer works, and I am trying to find out why. I reduced the entire code to a very simple declaration and assertion, and it still fails. The code is:
long intSort = Z3_mk_int_sort(context);
long periodDeclStr = Z3_mk_string_symbol(context, "period");
long periodVar = Z3_mk_const(context, periodDeclStr, intSort);
long solver = Z3_mk_solver(context);
long zero = Z3_mk_int(context, 0, intSort);
long eqSt = Z3_mk_eq(context, periodVar, zero);
Z3_solver_assert(context, solver, eqSt);
The problem is with the second-to-last statement, Z3_mk_eq(). I receive this error:
WARNING: invalid function application, sort mismatch on argument at position 2
WARNING: (define = arith arith Bool) applied to:
  period of sort arith
  0::Int of sort Int
My questions are as follows:
How do I debug this error? The code still works with version 3.2 (without the solver, though).
Must I create the solver only after I have finished adding variables to the context? Can I add variables to the context after creating the solver, or do I have to re-create the solver?
Sorry for the trouble. I was mixing up the solver and the context when passing them to the solver calls.
However, the problem still remains unsolved: I am now getting a crash in the Z3_ast_to_string() API.
I will try to resolve the problem and post an update.
There is an interaction log now with Z3 4.0 that records the API interaction precisely.
It should be possible to use this feature to debug the JNI layering and any bugs you find.
The log is opened using Z3_open_log().
You should open the log before creating any contexts.
You can close the log at any point as well (Z3_close_log()) if you only want to capture a subset of the interaction. You can replay the log by giving the file the suffix ".log" and running Z3 on it.
Alternatively, you can run Z3 with the option /log, that is, "Z3.exe /log " to replay the interaction.
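For example, here is a minimal sketch in the same long-handle binding style as the code above (assuming your JNI layer exposes these C API entry points; the file name is just an illustration):

// Open the interaction log BEFORE creating any context,
// so every subsequent API call is recorded.
Z3_open_log("interaction.log");

long config = Z3_mk_config();
long context = Z3_mk_context(config);
// ... declare sorts and constants, build assertions, check ...

// Close the log once the interesting interaction has been captured.
Z3_close_log();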
Don't you want Z3_mk_eq(context, id, zero) instead of Z3_mk_eq(context, periodDecl, zero)?