Would using variables be the best method, in a similar way to how EQQPIFJC manipulates JCL/TWS variables?
There are two main ways to pass variables to a submitted job stream:
Replace the value in a variable table, then submit the job stream (occurrence). If you need to submit the same job stream multiple times, this method may be difficult to manage, since you cannot change the variable table value until the job stream has completed.
Use a JCL preparation job at the beginning of your job stream, then submit the job stream and use the JCL preparation PIFs to specify your variable values or edit the job JCL.
I have a multi-job project which accepts some parameters, one of which is a choice parameter. Because I'm new to Jenkins, it is defined manually through the UI, without using Groovy.
When the parameters are selected or passed, a single build runs with the defined parameters.
I would like to apply some changes and achieve the following behavior:
Execute this same multi-job project once for each option selected in the choice parameter.
E.g., if 2 options are selected in the choice parameter, it will trigger the build twice, sequentially or in parallel, in a sort of loop over the parameters it received.
I tried to find some information about this online, but because I'm not familiar with the proper terminology to search for, all I get is Groovy scripts or answers that are not related to what I need.
How can I achieve this?
Thanks in advance.
After additional searching I came up with the following solution:
Change the Choice parameter to an Extended Choice parameter to allow multiple selections.
Create another job that will do the following:
a. Parse the parameters received from the Extended Choice parameter using a shell script, splitting on the delimiter that is set in the Extended Choice parameter options.
b. Trigger a build of the desired job in a loop via the Jenkins remote API, passing the relevant parameters (see the script below).
# $OPTIONS holds the comma-separated selections from the Extended Choice parameter
echo "$OPTIONS"
IFS=',' read -ra options_array <<< "$OPTIONS"
for option in "${options_array[@]}"
do
    echo "$option"
    # Trigger the downstream job via the Jenkins remote API
    curl -X POST "https://<user>:<password>@<jenkins_host>/job/<job_name>/buildWithParameters?parameter=${option}"
    sleep 5
done
If there are no free executors, increase the number of executors to allow multiple jobs to execute at once.
Edit the configuration of the job that needs to be executed multiple times and enable the 'Execute concurrent builds if necessary' option.
How can I check the passed job parameters before the job starts building?
Based on the result of conditions applied to the passed parameters: if the result is True, I will start the build; if the result is False, I want to skip the job without even starting the build, and then abort it or mark it as Failure/Unstable.
Note: I know I can do this inside the job itself in the Jenkinsfile, by checking the passed parameters there, but I would like to do it in a way that doesn't require starting the build at all.
I think what I'm looking for is something like a pre-build procedure:
If Pre-Build == True:
-> Start Build
Else:
-> Skip the build entirely
Is there a plugin or a workaround that can help with this, please?
Thanks.
Nope. Parameters are part of a job. A "valid parameter" can only be determined in the context of a job. The only thing that knows what is valid is the logic inside the job. Therefore the job must be triggered to do the evaluation.
You could create a generic trigger job which takes your parameters, does some universal validation, and then triggers the actual job, passing the validated parameters in.
But what would you achieve by that? There are many extended build parameter plugins (including the Choice parameter) which only let the user pick from available options.
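If you did go the trigger-job route, here is a minimal, hypothetical sketch of the "validate, then trigger" step in Java, using the same buildWithParameters remote API as the curl example above; the host, job name, credentials, and validation rule are all placeholder assumptions:

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TriggerIfValid {
    public static void main(String[] args) throws Exception {
        String option = args[0];
        // Universal validation happens here, before any downstream build starts.
        if (!option.matches("[A-Za-z0-9_-]+")) {
            System.err.println("Rejected parameter: " + option);
            return; // skip triggering entirely
        }
        // POST to Jenkins' buildWithParameters endpoint for the real job.
        URL url = new URL("https://jenkins.example.com/job/my_job/buildWithParameters?parameter=" + option);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        String auth = Base64.getEncoder()
                .encodeToString("user:apitoken".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        System.out.println("Jenkins responded: " + conn.getResponseCode());
    }
}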
I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum, processing hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it needs to run in, which bucket and directory prefix it's going to be storing files in, which pub/sub subscriptions it needs to read from, and so on. It does some work with these parameters before pipeline.run is called: validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is).
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
pipelineOptions.getDataflowProjectId(),
(SerializableFunction<String, String>) s -> s
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with Maven into a JAR, then just run the JAR on a local dev/qa/prod box with my parameters and not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which to perform initialization/validation that is post-template-creation but pre-pipeline-start.
All of the existing validation executes during template creation. If the validation detects that the values aren't available (because they are ValueProviders), the validation is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or as part of the @Setup method of a DoFn. In the latter case, the @Setup method will run once for each instance of the DoFn that is created. If the pipeline is batch, after 4 failures for a specific instance it will fail the pipeline.
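As a rough illustration, a minimal sketch (class and option names are assumed) of such an @Setup-time check; ValueProvider.get() is legal here because the method runs in a runtime context:

import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.transforms.DoFn;

public class SanityCheckFn extends DoFn<String, String> {
    private final ValueProvider<String> projectId;

    public SanityCheckFn(ValueProvider<String> projectId) {
        this.projectId = projectId;
    }

    // Runs once per DoFn instance, not once per element.
    @Setup
    public void setup() {
        String pid = projectId.get();
        if (pid == null || pid.isEmpty()) {
            throw new IllegalArgumentException("dataflowProjectId must be set");
        }
        // ...probe pub/sub subscriptions, GCS blobs, etc. here...
    }

    @ProcessElement
    public void processElement(ProcessContext c) {
        c.output(c.element());
    }
}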
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received: NestedValueProvider returns a ValueProvider, so it isn't possible to get a String out of it directly. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
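For example, a sketch of the snippet from the question with an illustrative check folded into the translation function (the validation rule itself is just a placeholder):

import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider;
import org.apache.beam.sdk.transforms.SerializableFunction;

NestedValueProvider<String, String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> {
        // Runs when the value is accessed at runtime.
        if (s == null || s.isEmpty()) {
            throw new IllegalArgumentException("Missing Dataflow project id");
        }
        return s;
    }
);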
I have a parent Jenkins multijob that calls 3 child jobs, passing to the children the same parameters the parent was built with.
Each child needs to use the same timestamp, as it is a unique identifier that each child needs to search for on a webpage.
My problem is this:
When the parent is built, the "name" parameter is set to ${BUILD_TIMESTAMP}; let's call this "02201200", short for Feb 20, 12:00. Each child is called with "pass current job parameters". However, instead of each child receiving 02201200, they each receive ${BUILD_TIMESTAMP} and fetch this value again (e.g. 02201204).
How do I force the parent to evaluate ${BUILD_TIMESTAMP} and pass its evaluation to the children instead of the variable itself?
One possible solution would be to write the value of this timestamp to a file. Then you could reference that value in subsequent jobs via the "Parameters from properties file" option. Obviously you would just keep overwriting this file every time your job sequence runs.
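For concreteness, a minimal sketch of a parent-job step that evaluates the timestamp once and persists it (the file and property names are assumed; a one-line shell echo into the file would work just as well):

import java.io.FileOutputStream;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Properties;

public class WriteBuildTimestamp {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // e.g. "02201200" for Feb 20, 12:00, matching the format in the question
        props.setProperty("name", new SimpleDateFormat("MMddHHmm").format(new Date()));
        try (FileOutputStream out = new FileOutputStream("build.properties")) {
            props.store(out, "evaluated once by the parent build");
        }
    }
}

Children then pick up name=02201200 via the "Parameters from properties file" option instead of re-expanding ${BUILD_TIMESTAMP}.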
I used this method and also ended up saving all metadata (system/environment variables, Jenkins parameters, build properties, etc.) into a properties file, and even archived it. This approach simplifies or works around many problems I had. Now every build has its metadata archived; for downstream jobs or later reference, I can get all the necessary information from this one file, and no extra parameters need to be passed around.
Furthermore, if anything goes wrong, the metadata is very helpful for investigating. I would recommend this simple strategy, as it has proven extremely useful to me and my team.
Currently I have to define each process variable before execution of the process and pass it to the startProcessInstanceBy* functions of Activiti. I wonder if it's possible to define these with default values in the process definition XML? That way I can avoid changing Java code when my process needs new variables for execution. Can I achieve this somehow?
You can use a script task just after your start event to set up these variables.
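If you prefer plain Java over an embedded script, the same idea can be expressed as a service task (placed right after the start event) backed by a JavaDelegate; a minimal sketch with hypothetical variable names and defaults:

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

public class DefaultVariablesDelegate implements JavaDelegate {
    @Override
    public void execute(DelegateExecution execution) {
        // Only fill in a default when the caller did not supply the variable.
        if (!execution.hasVariable("priority")) {
            execution.setVariable("priority", "normal");
        }
        if (!execution.hasVariable("maxRetries")) {
            execution.setVariable("maxRetries", 3);
        }
    }
}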