How do you pass deployment properties from a Spring Cloud Data Flow stream processor, through the Task Launcher Dataflow sink, to a composed task and its child tasks?

I have an SCDF stream that consists of an HTTP source, a custom processor, and a Task Launcher Dataflow sink. I am trying to pass properties from the processor through the task-launcher-dataflow sink to the child tasks. I have tried building the message to send to the task-launcher-dataflow in the processor so that it looks like the following:
{
  "args": ["--spring.profiles.active=prod", "--dataflow-server-uri=http://spring-cloud-dataflow-server:8080"],
  "name": "composedtask-filecopy2",
  "deploymentProps": {"runID": "e6ac18d2-f53f-11eb-9a03-0242ac130003"}
}
The properties are making it to the composed task but not to the child tasks.
What is the format for the deployment properties to pass them along to the child tasks?

The general procedure to pass application/deployer properties to child tasks in a Composed Task graph is documented here.
That said, in your use case the tasklauncher-sink is responsible for launching this graph on every upstream HTTP event. To pass deployer properties to the child tasks in this setup, you wrap each deployer property with the composed-task prefix as the key, paired with the desired value; the key takes the form deployer.<composed task name>.<child task label>.<property>.
For instance, if you want to pass a deployer property to the prescript task in the graph, you may want to try:
{
  "args": ["--spring.profiles.active=prod", "--dataflow-server-uri=http://spring-cloud-dataflow-server:8080"],
  "name": "composedtask-filecopy2",
  "deploymentProps": {"deployer.composedtask-filecopy2.prescript.memory": "2048m"}
}
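For completeness, here is a minimal sketch of what the custom processor could look like when building that message; this assumes a function-style Spring Cloud Stream app, and the LaunchRequest POJO and buildLaunchRequest bean are hypothetical names (the sink only cares about the JSON shape shown above).
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LaunchRequestProcessor {

    // Hypothetical POJO mirroring the JSON shape expected by the sink.
    public static class LaunchRequest {
        public List<String> args;
        public String name;
        public Map<String, String> deploymentProps;
    }

    @Bean
    public Function<String, LaunchRequest> buildLaunchRequest() {
        return payload -> {
            LaunchRequest request = new LaunchRequest();
            request.name = "composedtask-filecopy2";
            request.args = List.of(
                    "--spring.profiles.active=prod",
                    "--dataflow-server-uri=http://spring-cloud-dataflow-server:8080");
            // Key format: deployer.<composed task name>.<child task label>.<property>
            request.deploymentProps = Map.of(
                    "deployer.composedtask-filecopy2.prescript.memory", "2048m");
            return request;
        };
    }
}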

Related

Camunda: how to cancel a human task via interrupting boundary event?

I have a simple BPMN flow where, on instantiation, a human task gets created. I need the ability to cancel/delete the human task while the process instance is active so that the workflow moves to the next logical step. See the attached process diagram.
I am considering using an interrupting boundary event with a dynamic message name so that I am sure of cancelling only the specific task. I am trying to have a general pattern for cancelling only the specific task (identified by the task ID, for example). Hence, I would like to use the ID of the task in the message name of the boundary event. Is that possible?
Otherwise, what would be the best approach for achieving the desired outcome of being able to cancel / delete a specific task?
I have also looked at this post but it doesn't address the specific query I have around dynamic naming.
Have you tried "Process Instance Modification"? See
https://docs.camunda.org/manual/latest/user-guide/process-engine/process-instance-modification/
IMHO you could cancel the specific task by ID and instantiate a new one after the transaction point of the User Task. When instantiating, you can pass the variables needed from the old process to the new one.
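As a rough illustration, a process instance modification via Camunda's Java API could look like the following sketch; the activity ids "userTaskId" and "nextStepId" are hypothetical placeholders for ids from your BPMN model.
import org.camunda.bpm.engine.RuntimeService;

public class TaskCanceler {

    // Cancels the active human task and continues at the next step.
    public void cancelAndAdvance(RuntimeService runtimeService, String processInstanceId) {
        runtimeService.createProcessInstanceModification(processInstanceId)
                .cancelAllForActivity("userTaskId")   // hypothetical activity id
                .startBeforeActivity("nextStepId")    // hypothetical activity id
                .execute();
    }
}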
You don't need to make the message name unique. Instead, include correlation criteria when you send the message so the process engine can identify a unique receiver. The correlation criteria could be:
the unique business key of the process instance
a unique (combination of) process data / correlation keys
the process instance id
https://docs.camunda.org/manual/latest/reference/rest/message/post-message/
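A minimal sketch of such a correlation via the Java API, assuming a hypothetical message name "cancelTaskMessage" and correlation by business key:
import org.camunda.bpm.engine.RuntimeService;

public class TaskCancelMessenger {

    // One fixed message name; the business key narrows delivery
    // to exactly one process instance.
    public void sendCancel(RuntimeService runtimeService, String businessKey) {
        runtimeService.createMessageCorrelation("cancelTaskMessage")
                .processInstanceBusinessKey(businessKey)
                .correlate();
    }
}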

Chaining another transform after DataStoreIO.Write

I am creating a Google Dataflow pipeline using the Apache Beam Java SDK. I have a few transforms there, and I finally create a collection of entities (PCollection<Entity>). I need to write this into Google Datastore and then perform another transform AFTER all entities have been written (such as broadcasting the IDs of the saved objects through a Pub/Sub message to multiple subscribers).
Now, the way to store a PCollection is:
entities.apply(DatastoreIO.v1().write().withProjectId("abc"));
This returns a PDone object, and I am not sure how I can chain another transform to occur after this write has completed. Since the DatastoreIO write does not return a PCollection, I am not able to extend the pipeline. I have two questions:
How can I get the IDs of the objects written to Datastore?
How can I attach another transform that will act after all entities are saved?
We don't have a good way to do either of these things (returning IDs of written Datastore entities, or waiting until entities have been written), though this is far from the first similar request (people have asked for this for BigQuery, for example) and we're thinking about it.
Right now your only option is to wait until the entire pipeline finishes, e.g. via pipeline.run().waitUntilFinish(), and then do what you wanted in your main program (e.g. you can run another pipeline).
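A minimal sketch of that workaround, with the contents of both pipelines left as placeholders:
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class TwoPhaseMain {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();

        // First pipeline: the existing transforms ending in the Datastore write.
        Pipeline writePipeline = Pipeline.create(options);
        // ... apply transforms, ending in DatastoreIO.v1().write().withProjectId("abc")
        writePipeline.run().waitUntilFinish(); // blocks until all entities are written

        // Second pipeline: runs only after the first has fully completed,
        // e.g. re-read the entities and publish their ids to Pub/Sub.
        Pipeline publishPipeline = Pipeline.create(options);
        // ... apply the follow-up transforms here
        publishPipeline.run();
    }
}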

How to reference the node parameter from the Active Choices reactive parameter in Jenkins

I basically want to drive a set of choice parameters based on the slave node parameter choice. I tried to use the Active Choices Reactive plugin to point to the node choice like this:
if (Node.contains("name_of_slave_node")) {
    return ["you_chose_slave_node"]
} else {
    return ["master"]
}
Nothing I do seems to work. I can use this type of logic pointing to any other type of parameter and it works. I'm just a bit stumped as to where to go with this. Could it be a limitation of the plugin, or am I missing something in how the Groovy addresses the node parameter? I really appreciate any advice.
It can be hard to get your head around how the plug-in works. Even now I still get surprised by all the cases users apply the plug-in, and the different ways they use it.
You can definitely do what you want. The plug-in executes a Groovy script that must return an array with the values to be displayed on the UI. You also have some helper variables that the plug-in tries to make available for you. One of these variables is the jenkinsProject.
If you assigned a label to your project, then you can programmatically return values based on the node label assigned to your project (or on the node object itself).
Here's a very simple example.
assignedNode = jenkinsProject.getAssignedLabelString()
list = []
if (assignedNode == null) {
    // do something
} else if (assignedNode.equals('abc')) {
    // ...
}
return list
Here assignedNode will hold the string name of the node, or null if none is assigned. If you use jenkinsProject.getAssignedLabel() instead, you will get a Label object rather than a String.
If you need further customization, the best way is to dive into the Java API, then build your Groovy script from the bottom up. Or try finding examples online (there are many for Jenkins and Groovy) and adapt them to your parameter.
Hope that helps,
Bruno
Working freestyle job solution
I managed to cobble something together yesterday that accomplishes what you want in Jenkins freestyle jobs. For reference, though, it seems like what we want to accomplish is easier done with Jenkins pipelines (see Active Choices Reactive Reference Parameter in jenkins pipeline).
First, know that the Active Choices Reactive plugin does not detect any changes in the NodeLabel node parameter choice. You are not going to be able to dynamically adjust your reactive parameters based on the node/label parameter choice. (Perhaps this is because the Node parameter simply prepares a separate job for each node chosen, so the Node parameter is more of a build-time choice than a dynamic one.) But I have a workaround.
The NodeLabel documentation specifies that the plugin works with the Parameterized Trigger Plugin (see "Using the Parameterized Trigger Plugin"). This is going to be key.
We will make two new freestyle items. The first will hold your nodes in an active choice parameter, plus whatever else you want; the second job will be a copy of the first, but with an additional node parameter alongside the others. Like:
master job:
- active choice parameter (node)
- foo
- bar
copy job:
- NodeLabel parameter
- active choice parameter (node)
- foo
- bar
The real NodeLabel parameter on the copy job takes in the active choice parameter from the master job. To the end user, the stand-in node active choice parameter on the master job reacts and acts just like any other parameter: the node choice in the first box changes the results in the second box. In the configuration of the master job we then pass our parameters along, using the "parameterized trigger plugin" from above.
We pass our current parameters because the copy/scoped job is going to be doing the real leg work here and will need all our parameters.
Then under "parameter factories" we add a all nodes for label factory and pass the token-expanded form of our active choice parameter's value that represented the node the user chose.
Finally, in our scoped/copy project, we have our node parameter. This parameter's name needs to match the name of the node label factory in the previous step (i.e. node_list).
Just make sure the scoped/copy job is doing the real work and you should be good to go. The master/wrap job's purpose is only to act as a nice visual wrapper to the node parameter and its subsequent choices that takes advantage of the Active choice reactive parameter plugin's capabilities. Finally, it passes its results and responsibilities to the scoped/copy job.

Is it possible to have default properties of nodes in neo4j?

In my application there are already many nodes with different labels. We pass property values at creation time. I want all nodes to have two properties by default (like creationDate and createdBy). Is there any way, from the configuration side, to set these properties by default on every node at creation time?
If by configuration you mean only neo4j.conf, then no. You need some code to actually compute the value of the properties anyway: how do you represent the date, and how do you determine who created the node?
To do that, you could deploy an extension in Neo4j to intercept the creation of nodes through transaction events by implementing a TransactionEventHandler: you'll get the TransactionData which directly exposes the nodes that were created, on which you can then set the audit properties you want.
The handler is registered through GraphDatabaseService, which can be obtained at startup by implementing PluginLifecycle and exposing the implementation via the Service Locator mechanism (put the class name in META-INF/services/org.neo4j.server.plugins.PluginLifecycle).
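A minimal sketch of such a handler, written against the Neo4j 3.x API (the event listener API was renamed in 4.x); the property values below are illustrative:
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.event.TransactionData;
import org.neo4j.graphdb.event.TransactionEventHandler;

public class AuditEventHandler extends TransactionEventHandler.Adapter<Object> {

    @Override
    public Object beforeCommit(TransactionData data) {
        // Stamp every node created in this transaction with audit properties.
        for (Node node : data.createdNodes()) {
            node.setProperty("creationDate", System.currentTimeMillis());
            node.setProperty("createdBy", "system"); // resolve the real user here
        }
        return null;
    }
}
// Registered once at startup, e.g. in the PluginLifecycle implementation:
// graphDatabaseService.registerTransactionEventHandler(new AuditEventHandler());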

Is it possible to manage the Windows scheduler from an ASP.NET MVC site, or are there other remote options?

I am trying to see if I can manage a bunch of Windows scheduled tasks from an ASP.NET MVC site (setting up new tasks, updating the frequency of existing tasks, etc.) because we want to allow non-technical users to update this info (so we want to avoid people having to log into the web server directly).
I see you can do this from C# in general (links below), but I am not sure about the entitlements, etc. that would make it possible to manage via a web app.
http://code.msdn.microsoft.com/windowsdesktop/CSTaskScheduler-2f70d723
Creating Scheduled Tasks
If it's not possible from the web, are there other "remote" options so folks can do this without logging in to a specific box?
Yes you can. Just use taskschd.dll, which can be found in C:\Windows\SysWOW64\.
Reference it from your web app and use it like this:
using TaskScheduler;

namespace Test
{
    public class Class1
    {
        public void Test()
        {
            // Connect to the local Task Scheduler service via the COM interop library.
            var taskService = new TaskScheduler.TaskScheduler();
            taskService.Connect();

            // Enumerate the tasks registered in the root folder.
            var rootFolder = taskService.GetFolder(@"\");
            var registeredTasks = rootFolder.GetTasks(0);
            foreach (var registeredTask in registeredTasks)
            {
                // Inspect registeredTask.Name, registeredTask.State, etc. here.
            }
        }
    }
}
To prove to you that it's running, here is a screenshot of the code called from a web page.
As you can see, on my machine I grabbed 5 items from my task scheduler.
BTW, I wrote a whole article on my blog covering how to:
Get All Scheduled Tasks
Create/Update Scheduled Tasks
Delete Scheduled Tasks
here: http://macaalay.com/2013/10/02/programmatic-management-of-scheduled-task/
I would rather use Quartz.NET; it looks more flexible to me than the Windows Task Scheduler, but the idea could work for both types of schedulers. Take a look at the following approach to get the idea; maybe it helps you out:
Create a WCF service
Consume the WCF service from your ASP.NET MVC application
The WCF service will have methods to perform common operations like adding and removing tasks
Store the information for the tasks in persistent storage (DB, NoSQL, XML, etc.) or in your cache
Have a job (from Quartz) that reads the database and creates the tasks on the Windows box; this job could run every x minutes or y hours
Use the information and process it FIFO
Create a form in your ASP.NET app for the users to input tasks; the form will send the information to your WCF service
Edit:
You need to keep the following in mind:
The WCF service gives you the "state" you need to handle the web application's tasks; Quartz can handle the job that reads the database and sets up new tasks (in Quartz or the Windows scheduler), and the web form is just the front end for those operations.
Setting up a job in Quartz could be as easy as:
public class ReadDBJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Do your work to read the database
        // and create the tasks that you need
    }
}
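For the "run every x minutes" part, registering that job with a trigger could look roughly like this sketch (shown with the Java Quartz API, which Quartz.NET mirrors closely; the job and trigger names are hypothetical):
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class SchedulerBootstrap {

    // Java analogue of the ReadDBJob above.
    public static class ReadDbJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // Read the database and create the pending tasks.
        }
    }

    public static void main(String[] args) throws SchedulerException {
        JobDetail job = JobBuilder.newJob(ReadDbJob.class)
                .withIdentity("readDbJob").build();

        // Fire every 5 minutes, forever.
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("readDbTrigger")
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(5)
                        .repeatForever())
                .build();

        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}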
