IncompatibleWorkflowDefinition when canceling the workflow execution - amazon-swf

I am testing workflow cancellation logic with the Flow library. The code cancels the workflow from within the decider code, but it throws IncompatibleWorkflowDefinition:
com.amazonaws.services.simpleworkflow.flow.worker.IncompatibleWorkflowDefinition: Unknown DecisionId [type=EXTERNAL_WORKFLOW, id=735] The possible causes are nondeterministic workflow definition code or incompatible change in the workflow definition.
I don't understand why it breaks the logic. Can someone explain why this makes the workflow nondeterministic? The code is like below:
@Override
public void dosomething(final Input input) {
    checkInput();
    cancelCurrentWorkflow();
    asyncMethod();
}

private void cancelCurrentWorkflow() {
    contextProvider.getDecisionContext().getWorkflowClient()
        .requestCancelWorkflowExecution(
            contextProvider.getDecisionContext().getWorkflowContext().getWorkflowExecution());
}

@Asynchronous
void asyncMethod() { /* ... */ }

A workflow cancelling itself doesn't make sense. Cancellation is usually an operation invoked from the outside using the SWF requestCancelWorkflowExecution API.
If you need to cancel a certain part of the workflow code, use the TryCatchFinally.cancel method.
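For illustration, here is a minimal sketch of such a cancellable scope, assuming the AWS Flow Framework's TryCatchFinally class (the method called in doTry is a placeholder for your own workflow logic):

import com.amazonaws.services.simpleworkflow.flow.core.TryCatchFinally;

// Inside the workflow implementation:
TryCatchFinally scope = new TryCatchFinally() {
    @Override
    protected void doTry() throws Throwable {
        asyncMethod(); // the part of the workflow that may need cancelling
    }

    @Override
    protected void doCatch(Throwable e) throws Throwable {
        // A CancellationException lands here when the scope is cancelled
        throw e;
    }

    @Override
    protected void doFinally() throws Throwable {
        // cleanup that runs whether or not the scope was cancelled
    }
};

// Later, from other decider logic in the same workflow:
scope.cancel(null);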
BTW, are you aware of Cadence Workflow, which is an open source reincarnation of SWF? It has a much more developer-friendly Java client that doesn't use code generation or AspectJ. It also allows writing blocking synchronous code inside the workflow.
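To illustrate that last point, a Cadence workflow can be written roughly like this (a minimal sketch assuming the com.uber.cadence Java client; the interface and names are made up for the example):

import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

public interface GreetingWorkflow {
    @WorkflowMethod
    String greet(String name);
}

public class GreetingWorkflowImpl implements GreetingWorkflow {
    @Override
    public String greet(String name) {
        // Plain blocking code inside the workflow method; Cadence records
        // and replays it deterministically, no async style required.
        Workflow.sleep(java.time.Duration.ofSeconds(1));
        return "Hello, " + name;
    }
}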

Related

Have Spock fail a data-driven feature if an iteration fails

Is there a way to do the equivalent of @Stepwise's failure behavior for a single feature? We have some integration tests that are set up such that setupSpec() kicks off a Kafka process, and then the actual test checks that each step happened. If step 3 failed, there's no reason to bother checking subsequent steps.
There is no built-in way to do this, but assuming you are using a recent Spock 2.x version and not 1.3 or so, a relatively simple annotation-driven Spock extension can do the trick for you.
package de.scrum_master.stackoverflow.q71414311

import org.spockframework.runtime.extension.ExtensionAnnotation

import java.lang.annotation.ElementType
import java.lang.annotation.Retention
import java.lang.annotation.RetentionPolicy
import java.lang.annotation.Target

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@ExtensionAnnotation(StepwiseIterationsExtension)
@interface StepwiseIterations {}
package de.scrum_master.stackoverflow.q71414311

import org.spockframework.runtime.extension.IAnnotationDrivenExtension
import org.spockframework.runtime.model.FeatureInfo
import org.spockframework.runtime.model.parallel.ExecutionMode

class StepwiseIterationsExtension implements IAnnotationDrivenExtension<StepwiseIterations> {
  @Override
  void visitFeatureAnnotation(StepwiseIterations annotation, FeatureInfo feature) {
    // Disable parallel iteration execution for @StepwiseIterations feature,
    // similarly to how @Stepwise disables it for the whole specification
    feature.setExecutionMode(ExecutionMode.SAME_THREAD)

    // If an error occurs in this feature, skip remaining iterations
    feature.getFeatureMethod().addInterceptor({ invocation ->
      try {
        invocation.proceed()
      }
      catch (Throwable t) {
        invocation.getFeature().skip("skipping subsequent iterations after failure")
        throw t
      }
    })
  }
}
Add this to your code base, annotate your iterated test with @StepwiseIterations and run it. I think the result is exactly what you are looking for.
In Spock 1.3, a similar but more complex extension would also be possible.
I also want to express my special thanks to Leonard Brünings, Spock maintainer and boundless source of knowledge. I had a more complex version of this extension in place, but after discussing it with him, it evolved into the tiny, elegant solution you see here.
FYI, there is a pre-existing Spock issue #1008 requesting this feature. I created pull request #1442, which adds this capability to @Stepwise. So hopefully in the future we will not need an extra annotation and extra extension anymore.

Is Mono.doFinally sufficient to handle release/cleanup?

I am trying to synchronize a resource with spring webClient:
this.semaphore.acquire()
webClient
    .post()
    .uri("/a")
    .bodyValue(payload)
    .retrieve()
    .bodyToMono(String.class)
    // release
    .doFinally(st -> this.semaphore.release())
    .switchIfEmpty(Mono.just("a"))
    .onErrorResume(Exception.class, e -> Mono.empty())
    .doOnNext(response -> { /* use the response */ })
    .subscribe();
Is doFinally sufficient to handle the release?
If not, what are the "escape" points?
This will clean up your resources if your mono is cancelled, completes, or errors out, which are all the ways in which a mono can end.
However, a Mono does not necessarily have to end, and in that case the doFinally hook will never be executed.
So it depends on how your webClient is configured for cases where the external API fails to respond: normally, there should be a timeout and a maximum number of retries. In that case, your code should be correct.
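For illustration, here is a minimal sketch of how a timeout bounds the chain so that doFinally is guaranteed to fire eventually (the duration and retry count are made-up values, not taken from your configuration):

import java.time.Duration;
import java.util.concurrent.Semaphore;
import org.springframework.web.reactive.function.client.WebClient;

class GuardedCall {
    private final Semaphore semaphore = new Semaphore(1);
    private final WebClient webClient = WebClient.create("http://localhost:8080");

    void call(String payload) throws InterruptedException {
        semaphore.acquire();
        webClient.post()
                .uri("/a")
                .bodyValue(payload)
                .retrieve()
                .bodyToMono(String.class)
                .timeout(Duration.ofSeconds(10)) // forces a terminal signal if the server never responds
                .retry(3)                        // bounded retries, then the error propagates
                .doFinally(signal -> semaphore.release()) // complete, error, and cancel all pass through here
                .subscribe(); // error handling omitted for brevity
    }
}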
NOTE: the release may not happen on the same thread as the acquire. Depending on the resource, this might actually be a problem. For example, a ReentrantReadWriteLock's write lock may only be released by the thread that acquired it. I do not know if this problem exists with your semaphore.
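If it is a plain java.util.concurrent.Semaphore, that particular pitfall does not apply: a Semaphore has no notion of an owning thread, so a permit acquired on one thread may be released on another. A minimal sketch:

import java.util.concurrent.Semaphore;

public class CrossThreadRelease {
    public static void main(String[] args) throws InterruptedException {
        Semaphore semaphore = new Semaphore(1);
        semaphore.acquire(); // taken on the main thread

        // Legal for a Semaphore; unlocking a lock from a non-owning
        // thread would throw IllegalMonitorStateException instead.
        Thread releaser = new Thread(semaphore::release);
        releaser.start();
        releaser.join();

        System.out.println("Permits available: " + semaphore.availablePermits()); // prints 1
    }
}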

Dart Functions Framework usage

I'm new to the Dart functions framework. My goal is to use this package to create several functions and deploy them to Cloud Run (in combination with Firebase, but I guess that's irrelevant to this question).
I've run the quick starts and I've read all of the contents in the docs.
The quick starts mention just one function at a time (e.g. Hello World, Cloud Events, etc.), like this:
import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';

@CloudFunction()
Response function(Request request) {
  return Response.ok('Hello, World!');
}
But as you can see in the quickstarts, only one function is handled in a project at a time. What if I want to deploy several functions? Should I:
Write several functions in the same project / file, so that the functions framework compiles the `server.dart` by itself,
OR
Create a different functions_framework project for each function?
Let me be more specific. Should I do the following (option 1 - which makes more sense to me):
import 'dart:math';

import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';

@CloudFunction()
Response function(Request request) {
  return Response.ok('Hello, World!');
}

@CloudFunction()
Response function2(Request request) {
  if (Random().nextBool()) {
    return Response.ok('Hello, World!');
  } else {
    return Response.internalServerError();
  }
}
Or should I build a different folder by running build_runner for each function I need in my project?
Is there a difference and/or a best practice?
Thanks in advance.
EDIT. This question is related to the deployment on Cloud Run itself, and not just testing on my own PC. To test my own functions I did the following:
Run dart run build_runner build, so that it updates the server.dart file correctly (I can see that the framework does a lot behind the scenes and that the _nameToFunctionTarget is basically a router);
Run the server in two different terminals, like this: dart run bin/server.dart --port MYPORT --target MYFUNCTION (where MYPORT and MYFUNCTION are either 8080/8081 or function/function2 respectively).
I guess I'm just confused on how to correctly manage this framework once deployed on Cloud Run.
EDIT 2. I just gave up on using Dart as a serverless or even backend language. There's just too much jargon even for the basic things. Any backend framework is either dead or maintained by one single enthusiast (props to him!). This language has not yet received enough love from the Google team / the community, and at this moment in time it is basically not possible to go full-stack on just Dart. It's a dream, but it can't be realized now. Furthermore, Dart sorely lacks proper SDKs for Firestore etc., so Firebase isn't an option. I find it easier to just learn NodeJS and exploit the Firebase support for Firebase Functions written in NodeJS, and I'll wait for more support there in the future, if there ever is any.
The documentation is a bit sparse right now (and I'm new to it also! I couldn't find any good examples, so here goes...)
- You can only have a single function that is served. It should be named 'function' (the type and name can be overridden; see the cloudevent example, dartfn generate cloudevent).
- You 'could' have many of these deployed so that each does a specific thing, such as processing cloudevents as above, but most people want something more REST-like (see next).
- You need to attach a Router() so that you can have the single entry point (function) handled by specific logic in your code.
Example for REST:
- add shelf_router: ^1.1.2 to pubspec.yaml (in dependencies:)
- delegate the @CloudFunction to use the Router()
functions.dart
import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';
import 'package:shelf_router/shelf_router.dart';

Router app = Router()
  ..get('/health', (Request request) {
    return Response.ok('healthy');
  })
  ..get('/user/<user>', (Request request, String user) {
    // fetch the user... (probably return as json)
    return Response.ok('hello $user');
  })
  ..post('/user', (Request request) {
    // convert request body to json and persist... (probably return as json)
    return Response.ok('saved the user');
  });

@CloudFunction()
Future<Response> function(Request request) => app.call(request);

about boost beast websocket api : async_close, async_write

I have read the official documentation, and I'm confused because it seems to contradict itself.
Here is the first passage, quoted from the official documentation:
However, this code is well-formed:
ws.async_read(b, [](error_code, std::size_t){});
ws.async_write(b.data(), [](error_code, std::size_t){});
ws.async_ping({}, {});
ws.async_close({}, {});
and here is another snippet:
This operation is implemented in terms of one or more calls to the next layer's async_write_some functions, and is known as a composed operation. The program must ensure that the stream performs no other write operations (such as websocket::stream::async_write, websocket::stream::async_write_some, or websocket::stream::async_close).
So, which one should I trust?
This part is correct:
https://www.boost.org/doc/libs/1_67_0/libs/beast/doc/html/beast/using_websocket/notes.html#beast.using_websocket.notes.thread_safety
The other text needs to be updated.

Call utility function from a utility function in a Jenkins Pipeline Shared Library

I am following the example under Accessing steps. In src/org/foo/Zot.groovy I would like to call a utility function defined in, e.g., src/org/foo/Bar.groovy. How do I do that?
I tried several things without success, e.g.:
// src/org/foo/Zot.groovy
package org.foo;

def bar = new org.foo.Bar()

def checkOutFrom(repo) {
    bar.someFunction()
    git url: "git@github.com:jenkinsci/${repo}"
}
In this case Jenkins hangs on loading the global library. I also tried importing the file.
A similar, and probably related, problem has been reproduced here: https://issues.jenkins-ci.org/browse/JENKINS-31484
I reproduced a similar situation using the Global CPS Library. The executor stack trace shows that the thread gets locked in InvokerInvocationException, like in the link provided.
I was able to work around my small reproduction case by adding the @NonCPS annotation to all the called methods down the line.
