I have a parent Jenkins multijob that calls 3 child jobs, passing to the children the same parameters the parent was built with.
Each child needs to use the same timestamp, as it is a unique identifier that each child has to search for on a webpage.
My problem is this:
When the parent is built, the "name" parameter is set to ${BUILD_TIMESTAMP}; let's call this "02201200", short for Feb 20, 12:00. Each child is called with "pass current job parameters". However, instead of each child receiving 02201200, they each receive ${BUILD_TIMESTAMP} and fetch this value again (e.g. 02201204).
How do I force the parent to evaluate ${BUILD_TIMESTAMP} and pass its evaluation to the children instead of the variable itself?
One possible solution would be to write the value of this timestamp to a file. Then you could reference that value in subsequent jobs via the "Parameters from properties file" option. Obviously you would just keep overwriting this file every time your job sequence runs.
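For example, a minimal sketch of a shell build step in the parent job (the file name parent.properties and the parameter name NAME are arbitrary; BUILD_TIMESTAMP is assumed to be provided by the Build Timestamp plugin):

    # Freeze the already-evaluated timestamp into a properties file in the workspace
    echo "NAME=${BUILD_TIMESTAMP}" > parent.properties

Children triggered with "Parameters from properties file" pointing at parent.properties then receive the concrete value (e.g. 02201200) rather than the unexpanded ${BUILD_TIMESTAMP}.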
I used this method and ended up saving all build metadata (system/environment variables, Jenkins parameters, build properties, etc.) into a properties file, which I also archived. This approach simplifies or works around many of the problems I had. Now every build has its metadata archived; downstream jobs or later references can get all the necessary information from this one file, and no extra parameters need to be passed around.
Furthermore, if anything goes wrong, the metadata is very helpful for investigation. I would recommend this simple strategy, as it has proven extremely useful to me and my team.
I have a pipeline whose job is to take attached submodules, bundle them up in a zip, and push them to an artifact repo, only on a merge to the primary branch; all of this logic works fine.
However, because a merge-request webhook fires for both the opened and merged states, every merge into the primary branch also produces an effective "no-op" build triggered by the opened event.
From the documentation around option filtering in the generic webhook trigger, it isn't clear to me whether a non-match also prevents a build from being triggered, or simply produces a value of "". Here is the documentation:
Value filter
Optional. Anything in the evaluated value, matching this regular expression, will be removed. Having [^0-9] would only allow numbers. The regexp syntax is documented here.
This simply leads to the javadoc on regex.
I would love to not trigger a build at all, not even a no-op build, unless the state is "merged".
Yes, you can do it, but you need to use the Optional filter in a different place.
At the end of the GWT plugin configuration of your job, there is a global Optional filter section:
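For example, a sketch of one way to set it up (the variable name state and the JSONPath are assumptions based on a typical GitLab merge-request payload):

    Post content parameters:
      Variable: state
      Expression (JSONPath): $.object_attributes.state

    Optional filter:
      Text: $state
      Expression: ^merged$

With this filter in place, the job is only triggered when the resolved state matches merged; any other event (such as opened) does not start a build at all, not even a no-op one.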
In the build definition (on the Variables tab) I am trying to define a custom variable (Build.Repository.Clean) using the simple expression $[not(false)]. But when I print variables during the build, regardless of the expression used, the Build.Repository.Clean value is always false.
Strangely enough, defining it with something like $(FullBuild) (where FullBuild is another custom variable) works just fine.
Am I missing something?
Notes:
using TFS 2018
Backstory:
Trying to set the Build.Repository.Clean variable depending on a custom variable QuickBuild (which can be set by the user when kicking off a build). Tried specifying $[not(variables.QuickBuild)] (and other variations of the same expression) -- no luck.
Here is how it works right now (but I'd rather have QuickBuild instead of FullBuild -- I just can't figure out how to negate a variable):
Update 3:
Well, set aside whether it changes the clean operation at queue time. For what you are looking for, you could try this format:
Build.Repository.Clean=$[not(eq(variables.QuickBuild,'True'))]
If QuickBuild=True, then Build.Repository.Clean=False;
if QuickBuild=False, then Build.Repository.Clean=True.
For example:
I have set the clean option in the Get Sources step: Clean=true
Build.Repository.Clean=$(FullBuild)
FullBuild=false
Now when I queue the build, I try to change FullBuild to false at queue time.
You would expect Build.Repository.Clean to change to False, so that the clean operation would not be executed. But in fact Build.Repository.Clean is still True and the clean is executed.
Even if you do not update FullBuild at queue time and instead set FullBuild=false directly in the build pipeline, it still has no effect.
Conversely, if you set Clean=false in the Get Sources step, then no matter what value you enter for FullBuild or Build.Repository.Clean when queuing the build, it will not clean during the build pipeline.
Conclusion: the clean operation cannot be changed at queue time. This is not related to the expressions at all; it does not matter what value you set for Build.Repository.Clean.
Update 2
After going through your question and all the comments once again, it seems your real goal is to assign the clean option at queue time based on another custom variable.
Since you are not able to change Build.Repository.Clean at queue time, you are trying to use this workaround, but it is not supported. There is no way to assign the clean option at queue time.
You may have to pre-define this variable in your build pipeline.
Also take a look at this question: How to clean build using self-hosted agent when queuing
In your scenario, you can create two build pipelines as an ugly workaround: one for incremental builds (disable the Clean option in the Get Sources step, or use the variable Build.Repository.Clean = False), and another one with the Clean option enabled.
Hope this is clear.
Expressions are not evaluated when they are used to initialize custom variables (on the Variables tab). That is, the variable value ends up being a string equal to your expression (e.g. '$[not(<whatever>)]'). Later, when that variable gets used in a context that expects a boolean, it still doesn't get evaluated; instead it gets type-cast, and any non-empty string yields true.
On the other hand, variable substitution does happen -- i.e. the value $(MyVar) gets replaced with the value of MyVar.
Built-in variables seem to be special in the sense that, if you override them, this substitution happens at the start and their value gets immediately replaced with the resulting value.
Note -- this may (or may not) be related to this.
Bottom line: you can't use expressions to override the value of a built-in variable.
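To illustrate the behaviour described above with a contrived example: define a variable on the Variables tab as

    MyFlag = $[not(false)]

At runtime, $(MyFlag) expands to the literal string $[not(false)] rather than to true, and when that string is consumed somewhere that expects a boolean it is simply cast, so the non-empty string counts as true regardless of what the expression would have evaluated to.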
I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum that processes hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it needs to run in, which bucket and directory prefix it's going to be storing files in, which pub/sub subscriptions it needs to read from, and so on. It does some work with these parameters before pipeline.run is called - validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is).
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
        pipelineOptions.getDataflowProjectId(),
        (SerializableFunction<String, String>) s -> s  // identity translation, just to satisfy the types
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with maven into a jar, then just run the jar on a local dev/qa/prod box with my parameters and just not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which you can perform "post-template creation" but "pre-pipeline start" initialization/validation.
All of the existing validation executes during template creation. If the validation detects that the values aren't available (because they are ValueProviders), the validation is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or as part of the @Setup method of a DoFn. In the latter case, the @Setup method will run once for each instance of the DoFn that is created. If the pipeline is a batch pipeline, it will fail the pipeline after 4 failures for a specific instance.
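A rough sketch of what such a check could look like in a DoFn's @Setup method (the class name, the option being checked, and the check itself are illustrative, not taken from your pipeline):

    import org.apache.beam.sdk.options.ValueProvider;
    import org.apache.beam.sdk.transforms.DoFn;

    class SubscriptionCheckFn extends DoFn<String, String> {
      private final ValueProvider<String> subscription;

      SubscriptionCheckFn(ValueProvider<String> subscription) {
        this.subscription = subscription;
      }

      @Setup
      public void setup() {
        // ValueProvider.get() is legal here because @Setup runs at execution time,
        // after the template has been launched with concrete values.
        String sub = subscription.get();
        if (sub == null || sub.isEmpty()) {
          throw new IllegalStateException("Required subscription is not set");
        }
      }

      @ProcessElement
      public void processElement(ProcessContext c) {
        // Pass elements through unchanged; the interesting work is in setup().
        c.output(c.element());
      }
    }

Because @Setup runs once per DoFn instance rather than once per element, the check is performed a handful of times per worker instead of once for every message.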
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received: NestedValueProvider returns a ValueProvider, so it isn't possible to get a String out of it directly. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
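For instance, a sketch of pushing a simple check into the translation function (reusing the getDataflowProjectId() option from your question; the check itself is just an example):

    ValueProvider<String> projectId = NestedValueProvider.of(
        pipelineOptions.getDataflowProjectId(),
        (SerializableFunction<String, String>) id -> {
          // Runs whenever the value is accessed at execution time
          if (id == null || id.isEmpty()) {
            throw new IllegalArgumentException("dataflowProjectId must be provided");
          }
          return id;
        });

Note that the result is still a ValueProvider<String>; the underlying String only becomes available once the pipeline is running and something calls get() on it.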
I've created a task that I converted to a task group (and why TFS won't allow you to JUST create a task group is still beyond me, but I digress).
All the parameters in this task have default values. However, the one I really care about is the third one (highlighted).
My understanding was that I could leave that blank when I consume the task group in a build definition. However, this is what I get when I leave it blank:
In addition, I'm unable to save this build definition until I've entered a value. It's not a show stopper by any means, and it's really easy to enter the same value again. I'm just perplexed as to why it's doing this. Have I missed a new definition of the word Default?
Check the first item of how the task group is created:
Ensure that all of the tasks you want to include in a task group have their parameters defined as variables, such as $(MyVariable), where you want to be able to configure these parameters when you use the task group. Variables used in the tasks are automatically extracted and converted into parameters for the task group. Values of these configuration variables will be converted into default values for the task group.
If you specify a value (instead of a variable) for a parameter, that value becomes a fixed parameter value and cannot be exposed as a parameter to the task group. Parameters of the encapsulated tasks for which you specified a value (instead of a variable), or didn't provide a value for, are not configurable in the task group when added to a build or release definition.
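For example (with illustrative names): if a task inside the group has its working directory field set to $(DeployPath), the task group exposes a DeployPath parameter whose default is whatever DeployPath held when the group was created. If the same field instead contains a hard-coded path such as C:\Deploy, that value is baked into the group and never shows up as a task group parameter.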
I want to add a conditional parameter to a Jenkins job. In other words, I want to add a boolean parameter that, when checked, makes another string parameter appear to the user when they build the job. If not checked, the string parameter should not appear as a parameter when the user builds the job (kind of similar behaviour to Jenkins conditional build steps, but for parameters). Is there a plugin for that?
In the image below, if repo_update is not checked, Clean and Changesets should not be shown to the user when they are building the job.
You might want to look at the Multi Job plugin.
Make the parent and child two separate jobs. This, to me, makes more sense logically.
In the build part, run the parent job first.
Based on a condition, the child job can run.
You can add the child parameters in the child job only, so people who configure the parent job will never see the child parameters.
This feels like a cleaner implementation to me.