Camunda: how to cancel a human task via interrupting boundary event?

I have a simple BPMN flow where a human task is created on instantiation. I need the ability to cancel / delete the human task while the process instance is active, so that the workflow moves on to the next logical step. See the attached process.
I am considering using an interrupting boundary event with a dynamic message name so that I can be sure of cancelling only the specific task. I am trying to find a general pattern for cancelling only a specific task (identified by its task ID, for example). Hence, I would like to use the ID of the task in the message name of the boundary event. Is that possible?
Otherwise, what would be the best approach for achieving the desired outcome of being able to cancel / delete a specific task?
I have also looked at this post, but it doesn't address the specific query I have around dynamic naming.

Have you tried to use "Process Instance Modification"? ->
https://docs.camunda.org/manual/latest/user-guide/process-engine/process-instance-modification/
IMHO you could cancel the specific task by ID and instantiate a new activity after the transaction point of the user task. When instantiating, you can pass the variables needed from the old process to the new one.
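A minimal, untested sketch of what such a modification call could look like against Camunda 7's REST API; the Python below only builds the request body, and the activity instance ID and activity ID are made up for illustration:

```python
import json

def build_modification_body(activity_instance_id, next_activity_id):
    """Build the JSON body for Camunda's
    POST /process-instance/{id}/modification endpoint:
    cancel the activity instance hosting the user task,
    then continue at the next activity."""
    return {
        "instructions": [
            {"type": "cancel", "activityInstanceId": activity_instance_id},
            {"type": "startBeforeActivity", "activityId": next_activity_id},
        ]
    }

body = build_modification_body("UserTask_1:abc-123", "NextStep")
print(json.dumps(body, indent=2))
```

You would send this body to the running process instance's modification endpoint; the engine then cancels the user task and continues the token at the given activity.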

You don't need to make the message name unique. Instead, include a correlation criterion when you send the message, so the process engine can identify a unique receiver. The correlation criterion could be:
the unique business key of the process instance
a unique (combination of) process data / correlation keys
the process instance id
https://docs.camunda.org/manual/latest/reference/rest/message/post-message/
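A minimal sketch of building such a request body for Camunda's POST /message endpoint in Python; the message name and business key below are made up for illustration:

```python
import json

def build_correlation_body(message_name, business_key=None,
                           process_instance_id=None, correlation_keys=None):
    """Build the JSON body for Camunda's POST /message endpoint.
    Only the criteria you pass are included, so one static message
    name can still target exactly one process instance."""
    body = {"messageName": message_name}
    if business_key is not None:
        body["businessKey"] = business_key
    if process_instance_id is not None:
        body["processInstanceId"] = process_instance_id
    if correlation_keys:
        body["correlationKeys"] = {
            key: {"value": value, "type": "String"}
            for key, value in correlation_keys.items()
        }
    return body

# Correlate by the process instance's unique business key
body = build_correlation_body("cancelTask", business_key="order-4711")
print(json.dumps(body))
```

The same boundary event message name ("cancelTask" here) can then be reused across all instances, with the business key, process instance ID, or correlation keys narrowing delivery to the one instance whose task you want to cancel.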


How to change the worker name of Dask cluster?

I am asking this for a local cluster.
Here is the code for getting the address and the name of each worker:
from dask.distributed import get_worker

def f():
    worker = get_worker().name
    return worker

client.run(f)
Output:
{'tcp://127.0.0.1:58709': 0,
'tcp://127.0.0.1:58710': 2,
'tcp://127.0.0.1:58711': 1}
It is a dictionary: the key is the address and the value is the name of the worker. Is there any way to assign the worker's name while creating the client, or any other way around this?
Just changing the dictionary value doesn't change the worker's actual name.
My motivation behind this:
My goal is to create a worker and assign the preprocessing task to it. After that, create two models, say logistic regression and a random forest classifier,
and then scale up the workers using the scale method and assign the training of the two models to two new workers.
But the thing is, how will I identify which worker is the new one? The worker names are like worker_0, worker_1, worker_2. Is the newly created worker worker_0 or worker_1? How do I identify the new worker's name?
I do believe that new worker names are assigned in an incremental fashion,
but I need proof to validate that assumption. That's why I thought it better to change the name, so that I can keep track of the workers more easily.
Reference for the original question: How to get the worker name in dask cluster?
I'm afraid the current implementation of LocalCluster does not allow you to set the name via the constructor. You could propose a change to the codebase via a PR.
It raises the question, what do you wish to achieve with the names? You already have both a unique sequence number (the current .name) and unique UUID-based ID for every worker, as well as the workers' unique TCP addresses.
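If the underlying goal is only to tell the new workers apart after scale(), one untested approach is to diff the keys of client.scheduler_info()["workers"] before and after scaling, since those keys are the workers' unique addresses. The set logic, with the addresses simulated here rather than taken from a live cluster:

```python
def new_workers(before, after):
    """Return the worker addresses present after scaling but not before."""
    return set(after) - set(before)

# Simulated snapshots of client.scheduler_info()["workers"].keys(),
# taken before and after a call to cluster.scale(...)
before = {"tcp://127.0.0.1:58709", "tcp://127.0.0.1:58710"}
after = before | {"tcp://127.0.0.1:58711", "tcp://127.0.0.1:58712"}

added = new_workers(before, after)
print(sorted(added))
```

With this, you never need to rely on the numeric names being incremental; the address set difference identifies the new workers directly, and you can pass those addresses to the workers= argument of client.run or client.submit.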
Finally - and this is untested - you could plausibly use client.run together with get_worker() to set the .name attribute dynamically at runtime, should you wish. The following would set the name attribute for the specific worker (as a TCP address) given in the workers= list.
def set_name(name):
    get_worker().name = name

client.run(set_name, "name1", workers=[..])
EDIT:
After the motivation was added, I believe this question is not actually asking what you want; maybe make a new question. Perhaps client.who_has is what you want, but your workflow isn't clear to me.

Save global attribute value when new session starts

I have two fields in an SAP Fiori app: Template_ID and Offer_ID.
I want to choose a value in Offer_ID depending on the Template_ID field value.
To solve this problem I've tried the following steps:
When the user clicks on the Template_ID field, the back end runs the method:
CL_CUAN_CAMPAIGN_DPC->contentset_get_entityset().
This method has a returning parameter et_result. In et_result I have the necessary field temp_id.
To save the temp_id value I created a global attribute in class ZCL_CUAN_CLASS:
ZCL_CUAN_CLASS=>GV_CONTENT = VALUE #( et_result[ 1 ]-temp_ID OPTIONAL ).
I'll use this global attribute as an input parameter for my second method:
CL_CUAN_CAMPAIGN_DPC->GET_OFFER_BY_TEMPLATE().
This method returns the internal table with the offer_id that belongs to my chosen temp_id.
But when the user clicks on the Offer_ID field on the Web UI, I see in debugging that my global attribute is blank.
Maybe it's because of the session or something else, but it's blank.
OData is a stateless protocol, meaning the server responds to your query, then forgets you were ever there. By definition, this does not allow you to transport main-memory content from one request to the next.
User interfaces on the other hand usually require state. It can be gained through one of the following options:
Stateful user interface
As Haojie points out, one solution is to store the data that was selected in the user interface and submit it as a filter criterion back to the server with the next request. Having a stateful user interface is the standard solution for stateless server apps.
Stateful persistence
Another option is to store the data permanently in the server's database, in ABAP preferably in a business object. This object has a unique identifier, probably a GUID, that you can reference in your requests to identify the process you are working on.
Draft persistence
If not all information is available in one step, such as in a multi-step wizard, if the object should not become "active" right away, or if you want to be able to switch devices while working on a multi-step process, drafts are an option. Drafts are regular business objects, with the one specialty that they remain inert until the user triggers a final activation step.
Soft state
For performance optimizations, you can have a look at SAP Gateway's soft state mode, which allows you to buffer some data to be able to respond to related requests more quickly. This is generally discouraged though, as it contradicts the stateless paradigm of OData.
Stateful protocol
In some cases, stateless protocols like OData are not the right way to go. For example, banking apps still prefer to keep state on the server, to avoid users remaining logged in indefinitely and thus becoming vulnerable to attacks like CSRF. If this is the case for you, you should have a look at ABAP WebDynpro for your user interface. Generally, stateful server protocols are considered inferior because they bind a lot of server resources for long times and thus cannot handle larger user numbers.
When the user clicks on the Offer_ID field, it will start a NEW session, and of course what you stored in GV_CONTENT in class ZCL_CUAN_CLASS is lost.
What you should do instead is send the Template_ID to the back end as a filter with the second request, so that in your CL_CUAN_CAMPAIGN_DPC->GET_OFFER_BY_TEMPLATE() method you can restrict the result by Template_ID.
Or use SET/GET parameters.
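The filter approach amounts to putting the chosen Template_ID into the OData query itself, so every request is self-contained. A Python sketch of building such a stateless query URL; the service path and names (OfferSet, TemplateId) are invented for illustration:

```python
from urllib.parse import urlencode

def offers_url(base_url, template_id):
    """Build a stateless OData query that carries the chosen Template_ID
    as a $filter, instead of relying on server-side memory."""
    query = urlencode({"$filter": f"TemplateId eq '{template_id}'"})
    return f"{base_url}/OfferSet?{query}"

url = offers_url("https://host/sap/opu/odata/sap/ZCAMPAIGN_SRV", "TPL-01")
print(url)
```

Because the filter travels with each request, it does not matter that the second request runs in a brand-new session: the GET_OFFER_BY_TEMPLATE method receives the Template_ID from the filter options instead of from a global attribute.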

Documentum xCP 2.0 creating multiple objects

I am using xCP Designer 2.0 and I'm trying to create multiple objects at once. Say I receive the number 20 as input and need to create 20 of these objects with an increasing integer attribute from 1-20.
Is it possible to achieve this with a stateless process? How exactly?
You have at least 2 options:
write custom Java code and execute it inside a Call Java Service activity
create a specific process flow to achieve it
If you decide on the first, you can check how to integrate your custom (Java) code into xCP Designer via a self-paced tutorial which you can download from this link. You will find useful things on this link too.
If you choose the second approach, do it this way:
Add process variable like here
Model a stateless process like on the picture
Define loop_count++ activity like on the picture
Note that loop_count++ activity is of type Set Process Data.
Additionally, you need to set trigger tab on Join activity like in a picture:
You will know what to do in Create activity. ;)
EDIT: I just saw that I overlooked that you stated you receive 20 when initiating the stateless process. The logic is the same; you just use the Subtract function in the loop_count++ activity (you can consider changing the activity name too). :)
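Outside the designer, the countdown loop that the activities model amounts to the following; a Python sketch in which the Create activity is only simulated by appending to a list:

```python
def create_objects(count):
    """Mimic the stateless-process loop: create `count` objects with an
    increasing integer attribute running from 1 to count."""
    created = []
    loop_count = count                    # set from the process input, e.g. 20
    while loop_count > 0:
        index = count - loop_count + 1    # increasing attribute 1..count
        created.append({"index": index})  # stand-in for the Create activity
        loop_count -= 1                   # the Subtract step in loop_count++
    return created

objects = create_objects(20)
print(len(objects), objects[0]["index"], objects[-1]["index"])
```

The Set Process Data activity plays the role of the decrement, and the Join's trigger condition decides whether to loop back to Create or exit once the counter reaches zero.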

Jbpm : ended TaskInstance transitions

I don't understand something in the jBPM API. I have two users on a task at the same time. The first one chooses a transition and completes the task, so the TaskInstance is now ended. The second user does the same but gets a NullPointerException: getAvailableTransitions() returns null.
Why would getAvailableTransitions() (of class TaskInstance) return null? It's the same node, so the transitions should be the same?
I am a total newbie with jBPM. I was just testing the behaviour of an application in response to concurrent actions and ran into this error...
I suppose that you are using jBPM 3.x, right?
If you have one single instance of a business process, why do you have two users in one task? You are probably missing the idea of a process instance; can you describe your business situation? If one user completes a task, then that task cannot be worked on by another user.
Cheers

SpecFlow Dependent Features

I have 2 features: 1. User Creation, 2. User Enrollment. Both are separate and have multiple scenarios. The 2nd feature depends on the 1st one, so when I run the 2nd feature directly, how can it check that the 1st feature has already run and the user was created? I am using a database in which a creation status column (true/false) tells whether the user has been created. So, if I run the 2nd feature first, I want it to run the 1st feature for user creation.
In general, it is considered very bad practice to have dependencies between tests, and especially between features. Each test/scenario should have its own independent setup.
If your second feature depends on user creation, you could just add another step to your scenarios, e.g. "When such and such user is created."
If all scenarios under one feature share a common setup, you can move it up under a Background tag. For example:
Feature: User Enrollment

Background:
    Given such and such user

Scenario:
    When ...
    And ...
    Then ...

Scenario:
    When ...
    And ...
    Then ...
I used reflection:
Find all types with a DescriptionAttribute (aka features)
Find their MethodInfos with a TestAttribute and DescriptionAttribute (aka scenarios)
Store them in a Dictionary
Call them by "Title of the Feature/Title of the Scenario" with Activator.CreateInstance and Invoke
You have to set the (private) field "testRunner" according to your needs, of course.
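The reflection approach above is C#/SpecFlow-specific; as an analogous sketch only, the same idea of a title-to-callable registry invoked by "Feature title/Scenario title" looks like this in Python (all names and titles are made up):

```python
# Hypothetical registry mapping "Feature title/Scenario title" to callables,
# analogous to scanning [Description]/[Test] attributes with reflection.
registry = {}

def scenario(feature_title, scenario_title):
    """Decorator standing in for the attribute scan: register the
    function under its feature/scenario title."""
    def register(func):
        registry[f"{feature_title}/{scenario_title}"] = func
        return func
    return register

@scenario("User Creation", "Create a user")
def create_user():
    return "user created"

@scenario("User Enrollment", "Enroll an existing user")
def enroll_user():
    # Run the dependency explicitly before enrolling
    return create_user() + " and enrolled"

result = registry["User Enrollment/Enroll an existing user"]()
print(result)
```

The key point is the same as in the C# version: dependencies become explicit calls looked up by title, rather than an implicit assumption that another feature already ran.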
