Jenkins Job Builder: conditionally include builder and publisher

I have a set of Jenkins jobs that are substantially the same. I have created a job template that creates them all. However, some have builders that others don't (e.g. the first in the chain doesn't copy artifacts from another project), and some have publishers that others don't (they don't all have JUnit tests).
I would like to conditionally include these modules depending on a variable, but I can't find a way of doing this:
- I can't use a Jinja2 template to include or exclude one item in a list.
- Including empty variables typically causes the build to fail.
- I could include YAML files, but I would have to include the whole builders section, and I would need one for each job, which means a lot of repetition.
Is this possible? I would like to include the commented-out section below in some of the jobs:
builders:
  - shell: |
      echo Removing working directory from previous run
      rm -rf ${{WORKSPACE}}/css-build/working
  # - copyartifact:
  #     project: "{previous-project}"
  #     whichbuild: last-successful
  #     optional: "{copy-optional}"
  - shell: |
      {init-shell}
      ${{WORKSPACE}}/css-build/build-util.sh {shell-args} ${{WORKSPACE}}/{location} -w ${{WORKSPACE}}/css-build/working

Well, here is the workaround:
Define a new module (in this case it will be a builder) with a different name from the original. If the omit property is set to true, do nothing; otherwise, dispatch to the builder that would have run anyway.
import collections

def optional_copy(registry, xml_parent, data):
    # Skip this builder entirely when omit is "true"; otherwise strip
    # the omit flag and dispatch to the stock copyartifact builder.
    if data.pop('omit', 'false').lower() == 'true':
        return
    new_data = collections.OrderedDict()
    new_data['copyartifact'] = data
    registry.dispatch('builder', xml_parent, new_data)
Register it with JJB in setup.py:

from setuptools import setup

setup(
    name='JJB config',
    py_modules=['optionals'],
    entry_points={
        'jenkins_jobs.builders': [
            'optional-copy=optionals:optional_copy',
        ],
    },
)
Then, in your YAML, you can use the optional-copy module and the omit property:
builders:
  - shell: |
      echo Removing working directory from previous run
      rm -rf "{working-dir}"
  - optional-copy:
      omit: "{omit-copy}"
      project: "{prev}"
      whichbuild: last-successful
  - shell: |
      {init-shell}
      ${{WORKSPACE}}/css-build/build-util.sh -u {diirt-version} {shell-args} -p ${{WORKSPACE}}/{location} -w "{working-dir}"

I have a workaround for your issue that does not require extending Jenkins Job Builder, but it requires the Conditional build step plugin to be available on Jenkins.
Example for optional builders:
- job-template:
    id: my-custom-template
    builders:
      - conditional-step:
          condition-kind: always
          steps: "{obj:optional_builders|[]}"
With this you can add builders to your job using an optional_builders variable (if you want to):
jobs:
  # With optional_builders
  - my-custom-template:
      optional_builders:
        - copyartifact:
            project: "{previous-project}"
            whichbuild: last-successful
  # Without optional_builders
  - my-custom-template
Example for optional publishers:
publishers:
  - conditional-publisher:
      - condition-kind: always
        action: "{obj:optional_publishers|[]}"

Related

Klocwork in parallel: how to append kwtables to an existing build, or how to make kwadmin publish results from multiple kwtable folders

Our project is really huge, and combining all targets into a single compilation + analysis + publish takes a long time to finish, so I'd like to run the Klocwork analysis in parallel.
Here is what I have right now (the targets are split into various sub-tasks):
kw-analysis -> kw-analysis-sub-1
            -> kw-analysis-sub-2
            -> ...
            -> kw-analysis-sub-n

Each sub-task will handle:
1. Compile a single target and generate the build spec kwinject_<target_name>.out:
   $ export KWWRAP_HOOKS_DIR='/temp/kw/hooks'
   $ export PATH=${KWWRAP_HOOKS_DIR}:$PATH
   $ make <target_name>
   $ kwinject --trace-in "/temp/kw/kwwrap.trace" --output "kwinject_<target_name>.out"
2. Trace and analyze each target:
   $ kwbuildproject --url "<https://url:port>/<project_name>" [-I] --table-directory kwtable_<target_name> kwinject_<target_name>.out
3. Archive the kwtable_<target_name> folder.

The leading job will:
1. Copy all kwtable_<target_name> folders from the sub-analysis (downstream) jobs.
2. Deploy and publish the results to the Klocwork server once for all targets  <<<<<< this is the key point of the parallel analysis.
As far as I know, a single kwtable can be published with:
$ kwadmin --url <https://url:port> load --name <build_name> <project_name> kwtable_<target_name>
However, it seems kwadmin neither supports multiple kwtables, as in:
$ kwadmin load --name <build_name> ... kwtable_<target_name_1> kwtable_<target_name_2> ...
nor supports adding additional results to an existing build, i.e. creating the build first:
$ kwadmin load --name <build_name> ... kwtable_<target_name_1>
and then appending the results of the other kwtable folders to <build_name>:
$ kwadmin "append" --name <build_name> ... kwtable_<target_name_2>
So, is there any way I can run the Klocwork analysis in parallel? By the way, I'm using Jenkins as the integration tool.
Running the analysis in parallel by breaking the project into multiple pieces may sometimes take more time than building it as a single project with Klocwork. The reason is that Klocwork will analyze all the dependent files multiple times, since you perform multiple builds/analyses that are really parts of a single project. (Parallel analysis can be a benefit when the pieces you build in parallel have no dependencies on each other's modules/files.)
Klocwork can perform an incremental/delta analysis when you pass the --incremental argument as part of the kwbuildproject command. This should save build time.
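For example, a sketch based on the kwbuildproject invocation above (check that --incremental is supported by your Klocwork version):
$ kwbuildproject --url "<https://url:port>/<project_name>" --incremental --table-directory kwtable_<target_name> kwinject_<target_name>.out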

Bazel: map set(files) -> set(targets)

According to https://docs.bazel.build/versions/master/query-how-to.html#What_build_rule_contains_file_ja,
fullname=$(bazel query path/to/file/bar.java)
bazel query "attr('srcs', $fullname, ${fullname//:*/}:*)"
will tell me which target bar.java belongs to.
How can I get the set of targets that multiple files belong to? I.e. map set(files) -> set(targets). I could do this in serial, but each bazel call is fairly expensive and slow; I want to get it done in one call.
Context: I'd like to do this (build targets pertaining to a few files):
git diff --name-only HEAD~ | xargs bazel query "get targets for set(files)" | xargs bazel build
I feel like this capability must already exist, but I haven't been able to find it.
Using query will work, but you can also use aquery for a more direct approach.
https://docs.bazel.build/versions/master/aquery.html
BUILD:
genrule(
    name = "gen1",
    srcs = ["a"],
    outs = ["gen1.out"],
    cmd = "echo foo > $@",
)
pkg/BUILD:
genrule(
    name = "gen2",
    srcs = ["b"],
    outs = ["gen2.out"],
    cmd = "echo foo > $@",
)
$ bazel aquery "inputs('a|pkg/b', ...)" --include_artifacts=false --include_commandline=false
INFO: Analyzed 2 targets (6 packages loaded, 10 targets configured).
INFO: Found 2 targets...
action 'Executing genrule //pkg:gen2'
Mnemonic: Genrule
Target: //pkg:gen2
Configuration: k8-fastbuild
ActionKey: 8d7d05620bfd8303aa66488e0cd6586d8e978197126cdb41c5fc8c49c81988ef
action 'Executing genrule //:gen1'
Mnemonic: Genrule
Target: //:gen1
Configuration: k8-fastbuild
ActionKey: d4c76a6b6913ce5d887829dbc00d101c1cf5b0ff5240ed13ea328c26e4b41960
INFO: Elapsed time: 0.198s
INFO: 0 processes.
INFO: Build completed successfully, 0 total actions
inputs (like attr) accepts a regular expression, so you can "or" the files together with |. Then just filter for Target:, or use one of the other output formats (--output=(proto|textproto|jsonproto)).
This has the advantages that:
- you don't need to figure out the labels of the files first (since attr in query operates on labels);
- aquery queries after the analysis phase, so the results are more accurate because they account for configuration (flags, etc.);
- it works for any attribute, since it queries the inputs of all actions, which is a lower level than rules.
On the other hand, since aquery runs loading + analysis, it might take longer than query, which runs only the loading phase.
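Putting this together with the original goal (building the targets affected by a diff), a rough sketch, assuming the changed paths are regex-safe and match the action inputs as repo-relative paths:

# Join changed file names into one regex and build the owning targets.
files=$(git diff --name-only HEAD~ | paste -sd'|' -)
bazel aquery "inputs('$files', ...)" --include_artifacts=false --include_commandline=false \
  | grep '^Target:' | awk '{print $2}' | sort -u \
  | xargs bazel build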

Artifact not being published in Bitbucket pipeline

I'm doing a quite trivial Java build in Bitbucket Pipelines. The only twist is that it lives in a repository subdirectory.
my bitbucket-pipelines.yml:
pipelines:
  default:
    - step:
        caches:
          - gradle
        script: # Modify the commands below to build your repository.
          # You must commit the Gradle wrapper to your repository
          # https://docs.gradle.org/current/userguide/gradle_wrapper.html
          - bash "./foo bar/gradlew" -p "./foo bar" distTar
          - ls ./foo\ bar/build -R
          - echo 'THE END'
        artifacts:
          - ./foo bar/build/distributions/xxx.tar
My ls confirms that the tar is in the expected location:
....
./foo bar/build/distributions:
brigitte.tar
....
but the artifacts page is empty.
Found it! It should be:
# ...
artifacts:
  - foo bar/build/distributions/brigitte.tar
Artifact paths are not real paths, so the "dot slash" at the beginning was invalidating mine. A shame that this is not raised as a warning!
Extending the existing answer, I'd like to highlight the docs fragment that addresses this:
https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/#Introduction
You can use glob patterns to define artifacts. Glob patterns that start with a * will need to be put in quotes. Note: As these are glob patterns, path segments “.” and “..” won’t work. Use paths relative to the build directory.
So that's it: artifacts can't be outside the build directory and artifact definitions in the pipelines must not contain . and .. path segments.
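For example, a glob pattern that starts with a * must be quoted (a sketch; the path is illustrative):
artifacts:
  - "**/build/distributions/*.tar"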
Absolute paths starting with / won't work either!
It is truly shameful how obscure this is; why wouldn't the pipelines throw an error if the declaration is invalid? Damn, Bitbucket!

Setting environment variables in Flutter

For example, building a client for an API, like Twitch.
In a Dart CLI binary, I could use a generic environment variable, or a Dart definition variable. For example, using both as fallbacks:
import 'dart:io';

main() {
  String clientId =
      // dart -DCLIENT_ID='abc' bin/example.dart
      // This is considered "compiled into" the application.
      const String.fromEnvironment('CLIENT_ID') ??
          // CLIENT_ID='abc' dart bin/example.dart
          // This is considered a runtime flag.
          Platform.environment['CLIENT_ID'];
  // Use clientId.
}
Does Flutter have a way of setting either/both of these, specifically:
- during dev time?
- when shipped to prod?
Happy to help with some docs once I figure out how :)
Starting from Flutter 1.17 you can define compile-time variables if you want to.
To do so, just use the --dart-define argument with flutter run or flutter build.
If you need to pass multiple key-value pairs, just define --dart-define multiple times:
flutter run --dart-define=SOME_VAR=SOME_VALUE --dart-define=OTHER_VAR=OTHER_VALUE
and then, anywhere in your code you can use them like:
const SOME_VAR = String.fromEnvironment('SOME_VAR', defaultValue: 'SOME_DEFAULT_VALUE');
const OTHER_VAR = String.fromEnvironment('OTHER_VAR', defaultValue: 'OTHER_DEFAULT_VALUE');
They can be used in native layers too.
Here is an article that explains more.
For configuration, a common pattern I've seen is to use separate main files instead, e.g.
flutter run -t lib/production_main.dart
and
flutter build apk -t lib/debug_main.dart
And then in those different main files set up the configurations desired.
In terms of reading ids, you can do that from arbitrary assets https://flutter.io/assets-and-images/.
I believe it is possible in Flutter to read from the environment as you suggest, however I don't know how to set those environment variables on iOS or Android.
Since I was trying to solve this as well and encountered this thread, I just wanted to add this for people looking for a solution in the future. If all you're looking for is PROD/DEV environments, there is now a supported way of checking whether the app is running in production:
const bool isProduction = bool.fromEnvironment('dart.vm.product');
As suggested by:
https://twitter.com/FlutterDev/status/1048278525432791041
https://github.com/flutter/flutter/issues/4014
To run your app (flutter run):
flutter run --dart-define=EXAMPLE_API_ENDPOINT=https://api.example.com/
To release your app (flutter build):
iOS: flutter build ipa --dart-define=EXAMPLE_API_ENDPOINT=https://api.example.com/
Android: flutter build apk --dart-define=EXAMPLE_API_ENDPOINT=https://api.example.com/
Remember to pass the defines to both commands: at one point my app wasn't letting users log in, and I realized the environment variables were empty strings in the app instead of their actual values 😅.
--dart-define documentation
From flutter run --help or flutter build ipa --help, the --dart-define entry shows:
Additional key-value pairs that will be available as
constants from the String.fromEnvironment, bool.fromEnvironment,
int.fromEnvironment, and double.fromEnvironment constructors.
Multiple defines can be passed by repeating "--dart-define"
multiple times.
I use a simple shell script to generate the dart-defines. In my app there are three build flavors: dev, staging and prod. The environment variables are defined in regular .env files:
env/
├── dev.env
├── prod.env
└── staging.env
Here is the script to generate the dart-defines from a .env file:

#!/bin/bash
# scripts/generate_dart_defines.sh

case "$1" in
  "dev") INPUT="env/dev.env" ;;
  "staging") INPUT="env/staging.env" ;;
  "prod") INPUT="env/prod.env" ;;
  *)
    echo "Missing arguments [dev|staging|prod]"
    exit 1
    ;;
esac

# Turn each KEY=VALUE line into a --dart-define flag.
while IFS= read -r line; do
  DART_DEFINES="$DART_DEFINES--dart-define=$line "
done < "$INPUT"

echo "$DART_DEFINES"
Here is the script to trigger a build:

#!/bin/bash
# build.sh

if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
  # invalid arguments
  echo -e "Missing arguments: [apk|appbundle|ios] [release|debug|profile] [dev|staging|prod]"
  exit 128
fi

DART_DEFINES=$(scripts/generate_dart_defines.sh "$3")
if [ $? -ne 0 ]; then
  echo -e "Failed to generate dart defines"
  exit 1
fi

echo -e "artifact: $1, type: $2, flavor: $3\n"
echo -e "DART_DEFINES: $DART_DEFINES\n"

eval "flutter build $1 --$2 --flavor $3 $DART_DEFINES"
The script accepts three arguments: the artifact (apk, appbundle or ios), the build type (release, debug or profile), and the build flavor (dev, staging or prod).
./build.sh apk release prod
Please note that you are also required to configure Android and iOS for the different build flavors separately:
https://developer.android.com/studio/build/build-variants
https://shockoe.com/ideas/development/how-to-setup-configurations-and-schemes-in-xcode/
https://developer.apple.com/library/archive/documentation/ToolsLanguages/Conceptual/Xcode_Overview/ManagingSchemes.html
I do agree with the answer posted by @tatsuDn, but I wanted to provide a solution that loads your environment variables from a .env file.
First, create a .env file in the root folder of your project.
Ensure that you add the file to the assets section of your pubspec.yaml, and git-ignore it.
Here is how your .env file should look:
API_KEY=sampleapikey
# This line is a comment
# The white line above will be ignored
HEADER=sampleapiheader
ANOTHER_UNIQUE_KEY=theValueOfThisKey
KEY_CONTAINS_#=*234*5#
KEY_CONTAINS_EQUALS=IP8iwe=0&
Here is how your assets section should look:
# To add assets to your application, add an assets section, like this:
assets:
  - assets/images/
  - assets/flags/
  - .env
Finally, load your environment variables by reading and parsing the .env file to get a Map<String, String> that contains your key-value pairs.

import 'package:flutter/services.dart' show rootBundle;

Future<Map<String, String>> parseStringToMap({String assetsFileName = '.env'}) async {
  final lines = await rootBundle.loadString(assetsFileName);
  Map<String, String> environment = {};
  for (String line in lines.split('\n')) {
    line = line.trim();
    // Only lines that contain '=' and don't start with '=' or '#'
    // become key-value pairs; comments and empty keys are skipped.
    if (line.contains('=') && !line.startsWith(RegExp(r'=|#'))) {
      List<String> contents = line.split('=');
      environment[contents[0]] = contents.sublist(1).join('=');
    }
  }
  return environment;
}
You can put a quick button in your code to test that the environment variables are being loaded properly:
ElevatedButton(
  onPressed: () async {
    final env = await parseStringToMap(assetsFileName: '.env');
    print(env);
  },
  child: Text('Print Environment Variables'),
),
Here is the output from the .env file above.
>>>I/flutter ( 7182): {API_KEY: sampleapikey, HEADER: sampleapiheader, ANOTHER_UNIQUE_KEY: theValueOfThisKey, KEY_CONTAINS_#: *234*5#, KEY_CONTAINS_EQUALS: IP8iwe=0&}
Notes: you will need to rerun the app (not just hot reload) so that the .env asset is loaded.
You can also load your variables from a JSON file instead; this may be helpful when you have non-string environment variables and you don't want to parse strings.
To avoid repeated IO, it is a good idea to load the environment variables once and access them throughout the app using a service locator like GetIt.
Although the above answers are correct: coming from Python and ReactJS, I used dotenv, and found the same package for Flutter to load a .env file:
https://pub.dev/packages/dotenv
Create a class:
import 'package:flutter/foundation.dart';

class AppUtils {
  static String get clientId {
    if (kDebugMode) return 'debug_id';
    if (kProfileMode) return 'profile_id';
    if (kReleaseMode) return 'production_id';
    if (kIsWeb) return 'web_mode_id';
    throw ArgumentError('No mode detected');
  }
}
Usage:
var id = AppUtils.clientId;

In Jenkins, how can I see the last time a build ran, from its XML file?

I have more than 1000 jobs in Jenkins, and I would like to go through all of them in order to clean up unused jobs.
What is the recommended way to do so?
I guess in every job's XML file there is an indication of when it last ran.
Can anyone point me to where this file is located?
I ended up filtering the jobs with the "View Job Filters" plugin. You can use the "Filter by Build Trend" option as follows:
Create a view for "All jobs" -> go to edit view -> in "Add Job Filter" choose "Build Trend Filter" -> choose the filter you desire.
This is what I did.
I don't think you can do this in one step, but you can do it in two:
1. Find the URLs of all jobs with:
https://jenkins-server/api/json?tree=jobs[url]
2. Get more info about each job by using the URLs returned from step 1:
url-from-step1/api/json
This will give you the health report, the last failed/successful build, etc. If you need more info about these builds you can make a new request to:
url-from-step1/last-build-number/api/json
I recommend using JSON, and using jq (http://stedolan.github.io/jq/, https://jqplay.org/) to parse it.
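For example, a minimal sketch combining the two steps (jenkins-server is a placeholder; lastBuild.timestamp is epoch milliseconds):

# List each job's name and the timestamp of its last build.
for u in $(curl -s 'https://jenkins-server/api/json?tree=jobs[url]' | jq -r '.jobs[].url'); do
  curl -s "${u}api/json?tree=name,lastBuild[timestamp]" \
    | jq -r '"\(.name): \(.lastBuild.timestamp // "never built")"'
done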
Happy coding!
You can leverage the REST API. The following URLs might be relevant for you:
https://ci.jenkins-ci.org/api/xml?tree=jobs[name] -- to get a list of jobs
https://ci.jenkins-ci.org/job/{jobName}/lastBuild/buildTimestamp?format=yyyy-MM-dd-HH-mm-ss -- to get the time of last build of job {jobName}
Feel free to change xml to json/python...
I can provide the following shell script as a rough example:

#!/bin/bash

jenkinsUrlBase='https://ci.jenkins-ci.org'

callJenkins() {
  curl --silent --show-error -g "${jenkinsUrlBase}${1}"
}

# List all job names, then fetch the last build timestamp for each.
callJenkins '/api/xml?tree=jobs[name]' | xmlstarlet sel -t -v '//hudson/job/name' | while read -r projectName ; do
  timestamp=$(callJenkins "/job/${projectName}/lastBuild/buildTimestamp?format=yyyy-MM-dd-HH-mm-ss")
  echo "Last build of ${projectName}: ${timestamp}"
done
You can exploit the directory and file structure under ${JENKINS_HOME}:
cd ${JENKINS_HOME}/jobs/${JOB_NAME}/builds
ls -lt | head -2 | tail -1 | awk '{print $9}'
Example output:
2015-08-13_11-48-25
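To sweep all jobs at once, here is a sketch building on the same layout (it assumes build directories named by timestamp, as in the example output above; newer Jenkins versions use numeric build IDs, so adjust accordingly):

#!/bin/bash
# Print the most recently modified build entry for every job.
for builds in "${JENKINS_HOME}"/jobs/*/builds; do
  job=$(basename "$(dirname "${builds}")")
  last=$(ls -t "${builds}" | head -1)
  echo "${job}: ${last:-no builds}"
done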
