jBPM: transitions on an ended TaskInstance

I don't understand something in the jBPM API. I have two users on a task at the same time. The first one chooses a transition and completes the task, so the TaskInstance is now ended. The second user does the same but gets a NullPointerException: getAvailableTransitions() returns null.
Why would getAvailableTransitions() (of class TaskInstance) return null? It's the same node, so shouldn't the transitions be the same?
I am a total newbie with jBPM. I was just testing the behaviour of an application in response to concurrent actions and ran into this error...

I suppose you are using jBPM 3.x, right?
If you have a single instance of a business process, why do you have two users on one task? You are probably missing the idea of a Process Instance, so can you describe your business situation? Once one user completes a task, that task cannot be worked on by another user.
Cheers
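In other words, the second user should check whether the task has already ended before trying to complete it. A minimal sketch against the jBPM 3.x API (error handling and transaction setup omitted; the method and class names are from org.jbpm.taskmgmt.exe.TaskInstance):

```java
import org.jbpm.JbpmContext;
import org.jbpm.taskmgmt.exe.TaskInstance;

public class TaskCompletion {

    // Completes a task via the chosen transition, but only if nobody
    // else has ended it in the meantime.
    public void complete(JbpmContext jbpmContext, long taskInstanceId,
                         String transitionName) {
        // getTaskInstanceForUpdate registers the instance for auto-save
        TaskInstance task = jbpmContext.getTaskInstanceForUpdate(taskInstanceId);
        if (task == null || task.hasEnded()) {
            // Another user already completed the task; an ended TaskInstance
            // no longer offers transitions, hence the NPE you observed.
            throw new IllegalStateException(
                "Task " + taskInstanceId + " has already been completed");
        }
        task.end(transitionName); // take the chosen transition
    }
}
```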


Camunda: how to cancel a human task via interrupting boundary event?

I have a simple BPMN flow where, on instantiation, a human task gets created. I need the ability to cancel / delete the human task while the process instance is active, so that the workflow moves to the next logical step. See the attached process.
I am considering using an interrupting boundary event with a dynamic message name so that I am sure of only cancelling the specific task. I am trying to find a general pattern for cancelling only the specific task (identified by the task ID, for example). Hence, I would like to use the ID of the task in the message name of the boundary event. Is that possible?
Otherwise, what would be the best approach for achieving the desired outcome of being able to cancel / delete a specific task?
I have also looked at this post, but it doesn't address the specific query I have around dynamic naming.
Have you tried using "Process Instance Modification"? ->
https://docs.camunda.org/manual/latest/user-guide/process-engine/process-instance-modification/
IMHO you could cancel the specific task by ID and instantiate a new one after the transaction point of the User Task. When instantiating, you can pass the variables needed from the old process to the new one.
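A sketch of that idea using Camunda's Java API (the activity id "reviewTask" is made up; the activity instance id of the task to cancel would come from the activity instance tree of the running process):

```java
import org.camunda.bpm.engine.RuntimeService;

public class CancelSpecificTask {

    // Cancels one specific user task instance and resumes the process
    // after the user task, in a single modification command.
    void cancelAndContinue(RuntimeService runtimeService, String processInstanceId,
                           String activityInstanceId) {
        runtimeService.createProcessInstanceModification(processInstanceId)
            .cancelActivityInstance(activityInstanceId) // the specific task instance
            .startAfterActivity("reviewTask")           // resume after the user task
            .execute();
    }
}
```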
You don't need to make the message name unique. Instead, include correlation criteria when you send the message, so the process engine can identify a unique receiver. The correlation criteria could be:
the unique business key of the process instance
a unique (combination of) process data / correlation keys
the process instance id
https://docs.camunda.org/manual/latest/reference/rest/message/post-message/
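The same correlation is available in the Java API via a message correlation builder. A minimal sketch (the message name "CancelUserTask" and the variable names are assumptions for illustration; only one of the three options is needed):

```java
import org.camunda.bpm.engine.RuntimeService;

public class TaskCancellation {

    // Delivers the cancellation message to exactly one process instance,
    // identified by a correlation criterion rather than a unique name.
    void cancelTask(RuntimeService runtimeService, String businessKey) {
        runtimeService.createMessageCorrelation("CancelUserTask")
            .processInstanceBusinessKey(businessKey)          // option 1: business key
            // .processVariableEquals("taskRef", "task-42")   // option 2: process data
            // .processInstanceId("some-instance-id")         // option 3: instance id
            .correlate();
    }
}
```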

Is Reactor Context used only for statically initialised data?

Consider the following 4 lines of code:
Mono<Void> result = personRepository.findByNameStartingWith("Alice")
.map(...)
.flatMap(...)
.subscriberContext()
A fictional use case, which I hope you will immediately map to your real task requirement:
How does one add "Alice" to the context, so that after .map(), where "Alice" is no longer a Person.class but a Cyborg.class (assuming an irreversible transformation), I can access the original "Alice" Person.class in .flatMap()? We want to compare the strength of "Alice" the person versus "Alice" the cyborg inside .flatMap() and then send them both to the moon on a ship to build a colony.
I've read this about 3 times:
https://projectreactor.io/docs/core/release/reference/#context
I've read a dozen articles on subscriberContext.
I've looked at a colleague's code that uses subscriberContext, but only for a tracing context and MDC, which are statically initialised outside of the pipelines at the top of the code.
So the conclusion I am coming to is that something else was named "context", which the majority can't use for the overwhelmingly common use case above.
Do I have to stick to tuples and wrappers? Or am I a total dummy and there is a way? I need this context to work in entirely the opposite direction :-), unless "this" context is not the context I need.
I will await the Reactor developers' attention (or, failing that, go to GitHub and raise an issue about the conceptual naming error, if I am correct), but in the meantime: I believed that Reactor Context could solve this:
What is the efficient/proper way to flow multiple objects in reactor
But what it actually resembles is some kind of mega-closure over the reactive pipeline, propagating down->up and accepting values from the outside in an imperative way, which IMO is a very narrow and limited use case to be called a "context", and it will confuse more people to come.
Context and subscriberContext in the posts you refer to are indeed one and the same...
The goal of the Context is more along the lines of attaching some information to a given subscription.
This works because, upon subscription, a chain of Subscribers is constructed to "materialize" the processing, and by nature each given operator (or step) has a reference to its downstream in order to be able to push data to it.
As a result, it can also query its downstream for its view of what the current subscription Context is, hence the down-to-up approach.
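That down-to-up flow can be seen in a small sketch (Reactor 3.3-era API, where the static Mono.subscriberContext() exposes the current Context; the key and values are made up). Note that the write happens at the bottom of the chain, yet the operator above it can read the value:

```java
import reactor.core.publisher.Mono;
import reactor.util.context.Context;

public class ContextDemo {
    public static void main(String[] args) {
        Mono<String> pipeline = Mono.just("cyborg-Alice")
            .flatMap(cyborg -> Mono.subscriberContext() // read the subscription Context
                .map(ctx -> {
                    String original = ctx.get("originalPerson");
                    return original + " vs " + cyborg;
                }))
            // written at subscription time, propagated upward through the chain
            .subscriberContext(Context.of("originalPerson", "person-Alice"));

        System.out.println(pipeline.block()); // prints: person-Alice vs cyborg-Alice
    }
}
```

This also illustrates why the Context does not help with the use case in the question: values flow from the subscriber upward, not from one operator down to the next, so capturing "Alice" mid-pipeline for a later step still calls for tuples or a wrapper object.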

Calling system commands from Rails - general design pattern

Rails newbie here, building a Rails API backend application.
Here is the broad application flow:
There is a single model, palindrome, which has a string field, name.
In the simplest user interaction, the client sends a POST with the string name to PalindromeController.
This string needs to be passed to a system application, systemapp, and the app returns another string. This needs to be parsed as a JSON string and returned to the client.
Questions on how to go about the following.
Where should I call systemapp from - the model or the controller?
Should the call to systemapp be wrapped in a background job?
Call it from the model.
If it were a more complex case, you could treat the subprogram as a proper interface and write a separate class for it. That one I would probably put into lib/interfaces or even a separate gem, as it would contain only code specific to the subprogram, not to your application.
The litmus test for where it goes is "will or should your model ever be able to do anything at all without that subprogram?". If yes, then it is a case of dependency injection and could go into the controller. If no, then it goes into the model.
Rule of thumb: fat models, lean controllers.
Do it directly if the performance is acceptable, i.e. if there is no risk of normal user activity overwhelming the web server. Otherwise, you would use deferred execution, i.e. batch processing, which would entail a much larger end result.

Documentum xCP 2.0 creating multiple objects

I am using xCP Designer 2.0 and I'm trying to create multiple objects at once. Say I receive the number 20 as input and need to create 20 of these objects, with an increasing integer attribute running from 1 to 20.
Is it possible to achieve this with a stateless process? How exactly?
You have at least 2 options:
write custom Java code and execute it inside a Call Java Service activity
create a specific process flow to achieve it
If you decide on the first, you can check how to integrate your custom (Java) code into xCP Designer via a self-paced tutorial, which you can download from this link. You will find useful things at this link too.
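For the Java route, the loop itself is straightforward. A sketch using the DFC API (the type name "my_object" and attribute "seq_no" are assumptions; substitute your own object type):

```java
import com.documentum.fc.client.IDfPersistentObject;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;

public class BatchCreate {

    // Creates 'count' objects with an increasing integer attribute 1..count.
    void create(IDfSession session, int count) throws DfException {
        for (int i = 1; i <= count; i++) {
            IDfPersistentObject obj = session.newObject("my_object");
            obj.setInt("seq_no", i); // the increasing attribute
            obj.save();
        }
    }
}
```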
If you choose the second approach, do it this way:
Add a process variable like here
Model a stateless process like in the picture
Define the loop_count++ activity like in the picture
Note that the loop_count++ activity is of type Set Process Data.
Additionally, you need to set the trigger tab on the Join activity like in the picture:
You will know what to do in the Create activity. ;)
EDIT: I just saw that I overlooked that you stated you receive 20 when initiating the stateless process. The logic is the same; you just use the Subtract function in the loop_count++ activity (you can consider changing the activity name too) :)

SpecFlow Dependent Features

I have 2 features: 1. User Creation, 2. User Enrollment. Both are separate and have multiple scenarios. The 2nd feature depends on the 1st one, so when I run the 2nd feature directly, how can it check that the 1st feature has already run and the user has been created? I am using a database in which a creation status column (True/False) tells whether the user has been created or not. So, I want that if I run the 2nd feature, it first runs the 1st feature for user creation.
In general, it is considered very bad practice to have dependencies between tests, and especially between features. Each test/scenario should have its own independent setup.
If your second feature depends on user creation, you could just add another step to your scenarios, e.g. "When such and such user is created".
If all scenarios under one feature share common setup, you could move it up under a Background tag. For example:
Feature: User Enrollment

  Background:
    Given such and such user

  Scenario:
    When ...
    And ...
    Then ...

  Scenario:
    When ...
    And ...
    Then ...
I used reflection:
Find all types with a DescriptionAttribute (aka Features)
Find their MethodInfos with a TestAttribute and a DescriptionAttribute (aka Scenarios)
Store them in a Dictionary
Call them by "Title of the Feature/Title of the Scenario" with Activator.CreateInstance and Invoke
You have to set the (private) field "testRunner" according to your needs, of course.
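The original answer is C#/SpecFlow; to illustrate the same reflection pattern, here is a self-contained Java transposition (the Description and Test annotations stand in for NUnit's DescriptionAttribute and TestAttribute, and the demo feature class is invented):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class ScenarioRegistry {
    @Retention(RetentionPolicy.RUNTIME) @interface Description { String value(); }
    @Retention(RetentionPolicy.RUNTIME) @interface Test {}

    // Map "Title of the Feature/Title of the Scenario" -> method to invoke
    public static Map<String, Method> index(Class<?>... featureClasses) {
        Map<String, Method> registry = new HashMap<>();
        for (Class<?> cls : featureClasses) {
            Description feature = cls.getAnnotation(Description.class);
            if (feature == null) continue;            // not a feature class
            for (Method m : cls.getDeclaredMethods()) {
                Description scenario = m.getAnnotation(Description.class);
                if (m.isAnnotationPresent(Test.class) && scenario != null) {
                    registry.put(feature.value() + "/" + scenario.value(), m);
                }
            }
        }
        return registry;
    }

    // Like Activator.CreateInstance + Invoke in the C# original
    public static void run(Map<String, Method> registry, String key) throws Exception {
        Method m = registry.get(key);
        Object instance = m.getDeclaringClass().getDeclaredConstructor().newInstance();
        m.invoke(instance);
    }

    // --- demo feature class ---
    @Description("User Creation")
    public static class UserCreationFeature {
        public static boolean created = false;
        @Test @Description("Create a user")
        public void createUser() { created = true; }
    }

    public static void main(String[] args) throws Exception {
        Map<String, Method> registry = index(UserCreationFeature.class);
        run(registry, "User Creation/Create a user");
        System.out.println(UserCreationFeature.created); // prints: true
    }
}
```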
