I am trying to push a bunch of repeated Jenkinsfile pipeline steps into a shared library.
However, I ran into an issue when moving an Artifactory build step; I get this error:
com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize instance of java.lang.String out of START_OBJECT token
at [Source: N/A; line: -1, column: -1] (through reference chain: org.jfrog.hudson.pipeline.types.deployers.MavenDeployer["releaseRepo"])
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:270)
I've created an example Jenkins project and a shared library showing the error.
I get the impression this means you can't run the Artifactory setup/build from within a shared library. However, I found a post showing that at least some of this is possible.
I can't find any examples where the deployer/run are actually in a shared library, however.
Any thoughts or suggestions would be appreciated.
Thanks
-B
The issue I encountered was one of GString interpolation. The deployer(...) call expects plain java.lang.String values, but interpolated "${...}" expressions produce groovy.lang.GString objects, which Jackson cannot deserialize into a String (hence the JsonMappingException above).
My interpolated strings therefore needed to be converted to plain strings. This:
rtMaven.deployer(releaseRepo: "${config.releaseRepo}", snapshotRepo: "${config.snapshotRepo}", server: artServer)
became this:
rtMaven.deployer(releaseRepo: config.releaseRepo.toString(), snapshotRepo: config.snapshotRepo.toString(), server: artServer)
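For reference, here is roughly how the fixed call might sit inside a shared-library step (the step name artifactoryBuild and the config keys are illustrative; the Artifactory.* calls are the JFrog plugin's scripted-pipeline API):
// vars/artifactoryBuild.groovy -- hypothetical shared-library step
def call(Map config) {
    def artServer = Artifactory.server(config.serverId.toString())
    def rtMaven = Artifactory.newMavenBuild()
    // toString() turns lazily evaluated GStrings into plain Strings,
    // which is what the deployer can deserialize
    rtMaven.deployer(
        releaseRepo: config.releaseRepo.toString(),
        snapshotRepo: config.snapshotRepo.toString(),
        server: artServer
    )
    def buildInfo = rtMaven.run(pom: 'pom.xml', goals: 'clean install')
    artServer.publishBuildInfo(buildInfo)
}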
-B
When using Apache Beam (GCP Dataflow) I see the following warning in worker logs:
No session file found: /var/opt/google/dataflow/pickled_main_session.
Functions defined in __main__ (interactive session) may fail.
My Dataflow job seems to be fine regardless, but I'm wondering what this warning is all about.
I have seen the following in some sample code (which I am NOT currently doing):
pipeline_options.view_as(SetupOptions).save_main_session = True
where pipeline_options is the main way of specifying options for the Beam/Dataflow pipeline, as in the following later in the code:
with beam.Pipeline(options=pipeline_options) as p:
# actual pipeline code here
I am curious whether the two are related. Does the presence of the warning mean I should always be saving the main session?
You should be able to safely ignore this warning. There is no need to set save_main_session unless your pipeline requires it: the option only matters when code running on the workers references names (imports, functions, globals) defined in the __main__ module, which save_main_session pickles and ships to the workers.
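For context, save_main_session exists for cases like the following hypothetical snippet, where a function shipped to the workers references a module-level name (here the re import) defined in __main__:
# Minimal sketch; the pipeline content is illustrative.
import re

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

def extract_words(line):
    # References the module-level `re` import, so workers need the pickled
    # main session (alternatively, move the import inside the function).
    return re.findall(r'[a-z]+', line.lower())

pipeline_options = PipelineOptions()
# Only needed because extract_words closes over a __main__-level name.
pipeline_options.view_as(SetupOptions).save_main_session = True

with beam.Pipeline(options=pipeline_options) as p:
    p | beam.Create(['Hello World']) | beam.FlatMap(extract_words)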
In my Jenkinsfile, one of the variables holds comma-separated values, like this:
infra_services=[abc,def,xyz]
When I write the code below, it throws an error:
if ("{$Infra_Services}".contains("xyz"))
then
echo "$Infra_Services"
fi
Yes, you can do if statements in a Jenkinsfile. However, if you are using a declarative pipeline, you need to wrap the statement in a script block.
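A minimal declarative sketch (the stage name and values are illustrative):
pipeline {
    agent any
    stages {
        stage('Check') {
            steps {
                script {
                    // Scripted Groovy is only allowed inside script { }
                    def Infra_Services = ["abc", "def", "xyz"]
                    if (Infra_Services.contains("xyz")) {
                        echo "found xyz"
                    }
                }
            }
        }
    }
}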
Your issue comes from the fact that you did not put double quotes around "abc" and the other elements of your array:
infra_services=[abc,def,xyz]
A second error will be raised after you fix this: if infra_services is an array, you should not try to cast it to a string to manipulate it. Note the interpolation syntax as well; it is "${Infra_Services}", not "{$Infra_Services}".
here is a working example
def Infra_Services = ["abc","def","xyz"]
if (Infra_Services.contains("xyz")) {
    println "found"
}
My advice is to test your Groovy before running it on Jenkins; you will save precious time. Here is a good online Groovy console I use to test my code (running the Groovy console from a terminal is an alternative):
https://groovyconsole.appspot.com/
I have set up Jenkins to run pylint on all Python source files, and the log files are generated (apparently correctly) into a sub-directory as follows:
Source\pylint_logs\pylint1.log, pylint2.log, ..., pylint75.log
I have included a --msg-template definition based on the instructions on my Jenkins Configure page: Post-build Actions->Record compiler warnings and static analysis results->Static Analysis Tools. The template is shown as:
msg-template={path}:{line}: [{msg_id}, {obj}] {msg} ({symbol})
An example of one of the log files being generated by Jenkins/pylint is as follows:
************* Module FigureView
myapp\Views\FigureView.py:1: [C0103, ] Module name "FigureView" doesn't conform to snake_case naming style (invalid-name)
myapp\Views\FigureView.py:30: [C0103, FigureView.__init__] Attribute name "ax" doesn't conform to snake_case naming style (invalid-name)
------------------------------------------------------------------
Your code has been rated at 8.57/10 (previous run: 8.57/10, +0.00)
For the PyLint Report File Pattern, I have: Source/pylint_logs/pylint*.log
It appears that PyLint Warnings is parsing the files because the console output looks like this:
[PyLint] Searching for all files in 'D:\Jenkins\workspace\PROJECT' that match the pattern 'Source/pylint_logs/pylint*.log'
[PyLint] -> found 75 files
[PyLint] Successfully parsed file D:\Jenkins\workspace\PROJECT\Source\pylint_logs\pylint1.log
[PyLint] -> found 0 issues (skipped 0 duplicates)
[PyLint] Successfully parsed file D:\Jenkins\workspace\PROJECT\Source\pylint_logs\pylint10.log
[PyLint] -> found 0 issues (skipped 0 duplicates)
This repeats for all 75 files, even though there are plenty of issues in the log files.
What is odd, is that when I was first prototyping the use of Jenkins on this project, I set it up to just run pylint on a single file. I ran across another StackOverflow post that showed a msg-template that allowed me to get it working (unable to get pylint output to populate the violations graph). I even got the graph to show up for the PyLint Warnings Trend. I used the following definition per the post:
msg-template={path}:{line}: [{msg_id}({symbol}), {obj}] {msg}
Note that this format is slightly different from the one recommended by my Jenkins page (shown earlier). Even though this worked for a single file, neither template now seems to work for multiple files, or else there is something other than the template causing the problem. My graph has flat-lined, and I always get 0 issues reported.
I have had trouble finding useful documentation on the Jenkins PyLint Warnings tool. Does anyone have any ideas or pointers to documentation I can research further? Thanks much!
Ensure you pass the output-format parameter in the pylint command. Example:
pylint --exit-zero --output-format=parseable module1 module2 > pylint.report
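If you run pylint from a pipeline rather than a freestyle job, a sketch using the newer Warnings NG plugin's pyLint tool might look like this (the job structure and module name are illustrative):
pipeline {
    agent any
    stages {
        stage('Lint') {
            steps {
                // --exit-zero keeps lint findings from failing the shell step
                sh 'pylint --exit-zero --output-format=parseable myapp > pylint.report'
            }
        }
    }
    post {
        always {
            // Warnings NG step that parses the report and feeds the trend graph
            recordIssues tools: [pyLint(pattern: 'pylint.report')]
        }
    }
}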
You have to set pylint's msg-template option (--msg-template on the command line) in .pylintrc as follows:
msg-template={path}: {line}: [{msg_id} ({symbol}), {obj}] {msg}
output-format=text
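If I remember the generated rcfile layout correctly, both options live under the [REPORTS] section, so a minimal .pylintrc sketch would be:
[REPORTS]
output-format=text
msg-template={path}: {line}: [{msg_id} ({symbol}), {obj}] {msg}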
If I have a Pipeline script method in a Pipeline script (Jenkinsfile), in my Global Pipeline Library's vars/, or in a src/ class, how can I obtain the OutputStream for the console log? I want to write directly to the console log.
I know I can echo or println, but for this purpose I need to write without the extra output that yields. I also need to be able to pass the OutputStream to something else.
I know I can call TaskListener.getLogger() if I can get the TaskListener (really hudson.util.StreamTaskListener) instance, but how?
I tried:
I've looked into manager.listener.logger (from the Groovy Postbuild plugin), but in the early-build context I'm calling from, it doesn't yield an OutputStream that writes to the job's console log.
echo "listener is a ${manager.listener} - ${manager.listener.getClass().getName()} from ${manager} and has a ${manager.listener.logger} of class ${manager.listener.logger.getClass().getName()}"
prints
listener is a hudson.util.LogTaskListener@420c55c4 - hudson.util.LogTaskListener from org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildRecorder$BadgeManager@58ac0c55 and has a java.io.PrintStream@715b9f99 of class java.io.PrintStream
I know you can get it from a StepContext via context.get(TaskListener.class) but I'm not in a Step, I'm in a CpsScript (i.e. WorkflowScript i.e. Jenkinsfile).
I've also tried finding it from a CpsFlowExecution obtained from the DSL instance registered as the steps script property, but I couldn't work out how to discover the TaskListener that's passed to it when it's created.
How is it this hard? What am I missing? There's so much indirect magic I find it incredibly hard to navigate the system.
BTW, I'm aware direct access is blocked by Script Security, but I can create @Whitelisted methods, and anything in a global library's vars/ is always whitelisted anyway.
You can access the build object from the Jenkins root object:
def listener = Jenkins.get()
.getItemByFullName(env.JOB_NAME)
.getBuildByNumber(Integer.parseInt(env.BUILD_NUMBER))
.getListener()
def logger = listener.getLogger() as PrintStream
logger.println("Listener: ${listener} Logger: ${logger}")
Result:
Listener: CloseableTaskListener[org.jenkinsci.plugins.workflow.log.BufferedBuildListener@6e9e6a16 / org.jenkinsci.plugins.workflow.log.BufferedBuildListener@6e9e6a16] Logger: java.io.PrintStream@423efc01
After banging my head against this problem for a couple of days, I think I have a solution:
CpsThreadGroup.current().execution.owner.listener
It's ugly, and I don't know if it's correct or whether there's a better way, but it seems to work.
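For reference, a sketch of how that lookup might be wrapped in a shared-library step (the step name getConsoleLogger is my own invention):
// vars/getConsoleLogger.groovy -- hypothetical helper returning the
// running build's console PrintStream via the CPS execution owner
import org.jenkinsci.plugins.workflow.cps.CpsThreadGroup

def call() {
    return CpsThreadGroup.current().execution.owner.listener.logger
}
A Jenkinsfile could then call getConsoleLogger().println('written straight to the console log'), or pass the returned stream to whatever needs it.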
I'm getting the following exception when running the pipeline locally. There is no exception when submitting for cloud execution.
Thanks,
Genady
INFO: Executing pipeline using the DirectPipelineRunner.
Exception in thread "main" java.lang.IllegalStateException: no evaluator registered for GroupedValues [GroupedValues]
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.visitTransform(DirectPipelineRunner.java:606)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:200)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:196)
at com.google.cloud.dataflow.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:109)
at com.google.cloud.dataflow.sdk.Pipeline.traverseTopologically(Pipeline.java:204)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.run(DirectPipelineRunner.java:583)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:327)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:70)
at app.Main.main(Main.java:124)
The code outline is basically this:
PCollection<KV<MyKey, Iterable<MyValue>>> groupedByMyKey = ...
PCollection<KV<MyKey, MyAggregated>> aggregated = groupedByMyKey.apply(
Combine.<MyKey, MyValue, MyAggregated>groupedValues(new Aggregator()));
The Aggregator class extends CombineFn<MyValue, List<MyValue>, MyAggregated>.
Can you share a code snippet that triggers this? GroupedValues is a PTransform that is often used within various combining transforms, so it might be from using something like Min, Max, etc.
The error means that the DirectPipelineRunner doesn't know how to evaluate a GroupedValues. However, that's unexpected, since that should have been expanded into a ParDo before execution.
I found the reason for this behaviour.
I was using a command-line argument to run it in remote mode (--runner=BlockingDataflowPipelineRunner) and then forced it to run locally with:
PipelineRunner<?> runner = DirectPipelineRunner.fromOptions(options);
runner.run(p);
After removing these lines and just using the --runner=DirectPipelineRunner argument, it worked as expected.
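If you do need to force local execution from code, a sketch against the same old Dataflow 1.x SDK that sets the runner on the options before the pipeline is created, so transform expansion matches the runner that actually executes it (this mechanism is my reading of why the mismatch broke):
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner;

PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
// Override whatever --runner was passed on the command line
options.setRunner(DirectPipelineRunner.class);

Pipeline p = Pipeline.create(options);
// ... build the pipeline ...
p.run();  // executes with the runner recorded in the options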