I have a composite task with two cloud tasks (AAA && BBB).
I want to pass properties to the AAA and BBB tasks from a directory.
For example, something like using "--spring.config.location=directory/" when launching a Spring Boot application.
As per the documentation, I understand that we can pass properties using app.CompositeTaskName.taskname.prop1=val1.
But I want to load a whole set of configuration properties at launch.
So, is there a way to launch the tasks with the "spring.config.location" argument?
I found the solution. I passed "--spring.config.location" in the task definition of my composite task.
task create myctr --definition "AAA --spring.config.location=/data/prop/ '*'->BBB"
I launched the composite task "myctr" and it referred the property files from the "/data/prop/" directory.
Documentation reference :
http://docs.spring.io/spring-cloud-dataflow/docs/1.7.4.RELEASE/reference/htmlsingle/#spring-cloud-dataflow-composed-tasks
--> Task Application Parameters
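For comparison, individual properties can also be supplied per application at launch time, as described in that section of the documentation; a minimal sketch using the names above (prop1/val1 are placeholders):
task launch myctr --properties "app.myctr.AAA.prop1=val1"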
It may sound like a known issue, but the problem is that when the system reboots, the containers don't start and appear to be in the Exited status. We're using docker-compose to start up the containers (about 10 containers in total, launched via a PowerShell script).
The Docker documentation says to use restart_policy, but that mainly deals with container crashes (https://docs.docker.com/compose/compose-file).
The restart: always flag is also set in the config file and doesn't seem to help; I have tried setting this up in Task Scheduler, but it's still the same issue.
I'm wondering if there's a way to have the containers start gracefully, or whether this could be set up in Task Scheduler?
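(For reference, the restart policy mentioned above is set per service in the compose file, roughly like this; the service and image names are placeholders:)
services:
  my-service:
    image: my-image:latest
    restart: always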
You could first create and schedule a task that stops the containers at system startup, and then create another task that is triggered by an event on the successful completion of the first one.
The important part for the second task is to edit its event filter in XML format so that it references the original task whose successful completion should trigger the new task.
<QueryList>
<Query Id="0" Path="Microsoft-Windows-TaskScheduler/Operational">
<Select Path="Microsoft-Windows-TaskScheduler/Operational">*[System[Provider[@Name='Microsoft-Windows-TaskScheduler'] and Task = 102]]</Select>
</Query>
</QueryList>
You need to edit the query manually and replace the following line in the XML filter:
*[System[Provider[@Name='Microsoft-Windows-TaskScheduler'] and Task = 102]]
with:
*[EventData[@Name='TaskSuccessEvent'][Data[@Name='TaskName']='\Original\Task']]
The event filter details for the new task are as follows:
Events Logs: Microsoft-Windows-TaskScheduler/Operational
Event source: TaskScheduler
Task category: Task completed (status 102)
The event details of the original task, with completion status code 102, are:
EventID: 102
Provider-Name: Microsoft-Windows-TaskScheduler
Channel: Microsoft-Windows-TaskScheduler/Operational
TaskName: \Original\Task
Finally, add the action details with the program executable path and the script/command (passed as the argument), and save the task so that it runs with the highest privileges.
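As a rough illustration of that action (the paths, script name, and compose file location below are assumptions, not taken from the original setup):
Program/script: powershell.exe
Add arguments: -File C:\docker\start-containers.ps1
where start-containers.ps1 essentially runs: docker-compose -f C:\docker\docker-compose.yml up -d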
Name: spring-cloud-dataflow-server
Version: 2.5.0.BUILD-SNAPSHOT
I have created a very simple task. On the first run it always COMPLETES fine with NO ISSUES. If the task is run again, it FAILS with the following error.
A subsequent launch of the same task fails with the exception below, even though it is a fresh run after the previous execution completed fully. If a task has been run once, can't it be run again?
(log from Task Execution Details - Execution ID: 246)
Caused by: org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException: A job instance already exists and is complete for parameters={-spring.cloud.data.flow.taskappname=composed-task-runner, -spring.cloud.task.executionid=246, -graph=threetasks-t1 && threetasks-t2 && threetasks-t3, -spring.datasource.username=root, -spring.cloud.data.flow.platformname=default, -dataflow-server-uri=http://10.104.227.49:9393, -management.metrics.export.prometheus.enabled=true, -management.metrics.export.prometheus.rsocket.host=prometheus-proxy, -spring.datasource.url=jdbc:mysql://10.110.89.91:3306/mysql, -spring.datasource.driverClassName=org.mariadb.jdbc.Driver, -spring.datasource.password=manager, -management.metrics.export.prometheus.rsocket.port=7001, -management.metrics.export.prometheus.rsocket.enabled=true, -spring.cloud.task.name=threetasks}. If you want to run this job again, change the parameters.
A Job instance in a Spring Batch application requires unique Job Parameters, and this is by design.
In this case, since you are using the Composed Task, you can use the property --increment-instance-enabled=true as part of the composed task definition to handle it. This property makes sure each Job Instance gets unique Job parameters.
You can check the list of properties supported for the Composed Task Runner here.
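For illustration, one way to pass this flag is as a command-line argument when launching the composed task from the SCDF shell; a minimal sketch reusing the task name from the log above (your invocation may differ):
task launch threetasks --arguments "--increment-instance-enabled=true"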
I am using JMeter for functional testing and have 2 different JMX files.
The first JMX has all the APIs automated, and the second JMX is used to send the HTML report (generated using the Ant-JMeter task) through an SMTP sampler.
Now, I want to send the Total, Pass, and Fail sample counts in the same email by parsing the JTL file generated by the first JMX.
Here is what I can see in the JTL file: s="true" and s="false".
I want to count these and save the counts as properties to use further in the SMTP sampler.
Example in jtl:
<sample t="2" it="0" lt="2" ct="0" ts="1565592433268" s="false" lb="Verify Latest Patch" rc="200" rm="OK" tn="Tenant_Login 3-1" dt="text" by="9" sby="0" ng="1" na="1">
Any help will be appreciated.
Add the next line to the user.properties file:
jmeter.save.saveservice.autoflush=true
It will instruct JMeter to write results to the file as soon as they're available.
Add tearDown Thread Group to your Test Plan
Add HTTP Request sampler to the TearDown Thread Group
Configure it as follows:
Protocol: file
Path: location of your .jtl result file
Add XPath Extractor as a child of the HTTP Request sampler
Configure it as follows:
Reference Name: anything meaningful, i.e. successCount
XPath query: count(//sample[@s='true'])
That's it; now you should be able to refer to the successful sample count as ${successCount} where required.
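If you also need the failed and total counts mentioned in the question, the same approach can be extended with additional XPath Extractors; a minimal sketch (the reference names are arbitrary):
Reference Name: failCount, XPath query: count(//sample[@s='false'])
Reference Name: totalCount, XPath query: count(//sample)
These can then be referenced as ${failCount} and ${totalCount} in the SMTP sampler.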
Every example I've seen (task-launcher sink and triggertask source) shows how to launch the task defined by the uri attribute.
My task definitions look like this:
sampleTask <t2: timestamp || t1: timestamp>
sampleTask-t1 timestamp
sampleTask-t2 timestamp
sampleTaskRunner composed-task-runner --graph=sampleTask
My question is: how do I launch the composed task runner (sampleTaskRunner, defined via the DSL) from a stream application?
Thanks
UPDATE
I ended up with the solution below, which triggers the task using the SCDF REST API:
composedTask definition:
<timestamp || mySampleTask>
Stream definition:
http | httpclient | log
Deployment properties:
app.http.port=81
app.httpclient.body=name=composedTask&arguments=--increment-instance-enabled=true
app.httpclient.http-method=POST
app.httpclient.url=http://localhost:9393/tasks/executions
app.httpclient.headers-expression={'Content-Type':'application/x-www-form-urlencoded'}
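For reference, the same call can be made directly against the SCDF REST API with curl (a sketch assuming the server runs at localhost:9393 as above):
curl -X POST "http://localhost:9393/tasks/executions" -H "Content-Type: application/x-www-form-urlencoded" -d "name=composedTask&arguments=--increment-instance-enabled=true"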
Though it's easy to implement an http sink component, it would be great if the stream application starters provided one out of the box.
Another concern I have is about discovering the SCDF REST URL when deployed in a distributed environment.
Here's a quick take from one of SCDF's R&D team members (Glenn Renfro).
stream create foozer --definition "trigger --fixed-delay=5 | tasklaunchrequest-transform --uri=maven://org.springframework.cloud.task.app:composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT --command-line-arguments='--graph=sampleTask-t1||sampleTask-t2 --increment-instance-enabled=true --spring.datasource.url=jdbc:mariadb://localhost:3306/test --spring.datasource.username=root --spring.datasource.password=password --spring.datasource.driverClassName=org.mariadb.jdbc.Driver' | task-launcher-local" --deploy
In the foozer stream definition,
1) "trigger" source happens to trigger an upstream event every 5s
2) "tasklaunchrequest-transform" processor takes a few arguments; more specifically, it uses "composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT" to launch a composed-task graph (i.e., sampleTask-t1||sampleTask-t2)
3) Pay attention to --increment-instance-enabled. This was recently added to the CTR application, and it provides the ability to re-launch a composed task on a recurring cadence
4) Since the CTR and SCDF must share the same database, we are also passing datasource properties as command-line args. (SCDF-server is already started with the same datasource credentials)
Hope this helps.
Lastly, we will add a sample to the reference guide via: spring-cloud/spring-cloud-dataflow#1780
I have two tasks: task_1 should compress PNG files and task_2 should not compress PNG files, so I want to add a parameter to control it.
project.ext.set("compressPngs", 1);
task taskCompressPngs(type: Exec) {
    commandLine "myshell.sh"
    args compressPngs
}
task task_1(dependsOn: 'taskCompressPngs') {}
task task_2(dependsOn: 'taskCompressPngs') {}
gradle.taskGraph.whenReady { taskGraph ->
    if (taskGraph.hasTask(task_1)) {
        compressPngs = 1
    }
    if (taskGraph.hasTask(task_2)) {
        compressPngs = 0
    }
}
But when I run task_1 or task_2, the 'compressPngs' value passed to my script 'myshell.sh' in the 'taskCompressPngs' task is always 1. Why, and how do I solve it?
taskCompressPngs gets configured before the configuration value is changed. Conditional configuration is rarely a good solution. A better approach is to declare two Exec tasks.
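A minimal sketch of that two-task approach, reusing the names from the question (the task names and wiring are illustrative):
task compressPngsOn(type: Exec) {
    commandLine "myshell.sh", "1"
}
task compressPngsOff(type: Exec) {
    commandLine "myshell.sh", "0"
}
task task_1(dependsOn: 'compressPngsOn')
task task_2(dependsOn: 'compressPngsOff')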
As others have mentioned, it's probably best to take the advice of @PeterNiederwieser and use two separate tasks, but if you really don't think you can, here are a couple of other options that should work.
1) Check Gradle startParameter
Configure your reusable task based on which task is passed to gradle on the command line.
task taskCompressPngs(type: Exec) {
    def compressPngs = 1
    if (gradle.startParameter.taskNames.toString().toLowerCase().contains("task_2")) compressPngs = 0
    commandLine "myshell.sh $compressPngs".tokenize()
}
This gives you a variable to use (gradle.startParameter.taskNames) that is available at configuration-time.
Here we change compressPngs to 0 only if task_2 is specified on the command line when running gradle.
That is, gradlew task_1 will run myshell.sh 1, but gradlew task_2 (or even gradlew task_1 task_2) will run myshell.sh 0.
This logic could also be applied to a project property outside of the taskCompressPngs task - if, for example, you wanted to change other tasks too.
Again, this only works if "task_2" is specified in the command used to run gradle.
2) Use DefaultExecAction instead of Exec task
Instead of using a task of type Exec, you could write a custom task and check the taskGraph in it.
task taskCompressPngs << {
    def compressPngs = 1
    if (gradle.taskGraph.hasTask(task_2)) compressPngs = 0
    org.gradle.process.internal.DefaultExecAction e = new org.gradle.process.internal.DefaultExecAction(getServices().get(org.gradle.api.internal.file.FileResolver.class))
    e.commandLine("myshell.sh $compressPngs".tokenize())
    e.execute()
}
This just moves your existing logic from configuration time to execution time.
This requires the use of "internal" Gradle classes (which is bad), but it gives you a little more flexibility in how/when the shell command is run.
Note that these solutions were checked against Gradle 1.7 and Gradle 1.11.
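As an aside, a similar execution-time check can be done without internal classes by calling exec inside the task action; a minimal sketch (not verified against those Gradle versions):
task taskCompressPngs << {
    def compressPngs = gradle.taskGraph.hasTask(task_2) ? 0 : 1
    exec {
        commandLine "myshell.sh", "$compressPngs"
    }
}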