Use a Jenkins plugin in a Jenkins Pipeline

I'm trying to incorporate the ClearCase UCM Plugin into a Jenkins Pipeline. Currently, I'm using the step method to invoke the ClearCase plugin:
step([
    $class: 'hudson.plugins.clearcase.ClearCaseUcmSCM',
    stream: "MY_STREAM",
    loadrules: "load \\SOMETHING\\int load \\OTHER load \\RESOURCES",
    [...]
    changeset: "BRANCH",
    viewStorage: null
])
As far as I understand the plugin system, instantiation is done via the constructor annotated with @DataBoundConstructor:
src/main/java/hudson/plugins/clearcase/ClearCaseUcmSCM.java:
@DataBoundConstructor
public ClearCaseUcmSCM(String stream, String loadrules, String viewTag, boolean usedynamicview, String viewdrive, String mkviewoptionalparam,
        boolean filterOutDestroySubBranchEvent, boolean useUpdate, boolean rmviewonrename, String excludedRegions, String multiSitePollBuffer,
        String overrideBranchName, boolean createDynView, boolean freezeCode, boolean recreateView, boolean allocateViewName, String viewPath,
        boolean useManualLoadRules, ChangeSetLevel changeset, ViewStorage viewStorage, boolean buildFoundationBaseline) {
I copied most of the parameter values (booleans, Strings, integers) from a similar project's config.xml file. Unfortunately, I get an exception:
"java.lang.IllegalArgumentException: argument type mismatch"
I guess it's because I can't create "ChangeSetLevel" and "ViewStorage" (the latter is not even in the config.xml).
So my question is how to invoke the plugin properly.
Additionally, I'd like to know how to pass "loadrules" properly, as there seem to be newlines in the config.xml:
"load \\SOMETHING\\int[NEWLINE] load \\OTHER[NEWLINE] load \\RESOURCES"

You can take a look at the Pipeline Syntax page of your job and use the Snippet Generator to construct your step. It can be found at http://[jenkins-url]/pipeline-syntax.
More info in the Jenkins pipeline docs.
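For an SCM plugin, the Snippet Generator typically emits a checkout step rather than a raw step([...]) call, and it handles composite parameters like ChangeSetLevel and ViewStorage for you. A hedged sketch of what it might generate for the constructor above (the parameter names come from the @DataBoundConstructor; the concrete values and the embedded \n handling for the multi-line load rules are illustrative assumptions):

checkout([
    $class: 'ClearCaseUcmSCM',
    stream: 'MY_STREAM',
    // in a double-quoted Groovy string, \n reproduces the newlines seen in config.xml
    loadrules: "load \\SOMETHING\\int\nload \\OTHER\nload \\RESOURCES",
    changeset: 'BRANCH'
])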

Related

Unapprovable RejectedAccessException when using Tuple in Jenkinsfile

I tried to use Tuple in a Jenkinsfile.
The line I wrote is def tupleTest = new Tuple('test', 'test2').
However, Jenkins did not accept this line and kept writing the following error to the console output:
No such constructor found: new groovy.lang.Tuple java.lang.String java.lang.String. Administrators can decide whether to approve or reject this signature.
...
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: No such constructor found: new groovy.lang.Tuple java.lang.Integer java.lang.String
...
When I visited the "Script Approval" configuration I could not see any scripts that pend approval.
Following this link, I tried to install and enable the "Permissive Security" plugin, but it did not help either; the error was the same.
I even tried to manually add the problematic signature to the scriptApproval.xml file. After I added it, I was able to see it in the list of approved signatures, but the error still remained.
Is there something I am doing wrong?
I had the same issue trying to use a tuple on Jenkins, so I found out that I can simply use a list literal instead:
def tuple = ["test1", "test2"]
which can then be unpacked with multiple assignment:
def (a, b) = ["test1", "test2"]
So now, instead of returning a tuple, I am returning a list in my method
def myMethod(...) {
...
return ["test 1", "test 2"]
}
...
def (a, b) = myMethod(...)
This is more or less a problem caused by the groovy.lang.Tuple constructor combined with Jenkins' sandboxed Groovy mode. If you take a look at the constructor of this class, you will see something like this:
package groovy.lang;

import java.util.AbstractList;
import java.util.List;

public class Tuple extends AbstractList {
    private final Object[] contents;
    private int hashCode;

    public Tuple(Object[] contents) {
        if (contents == null) throw new NullPointerException();
        this.contents = contents;
    }
    //....
}
Groovy sandbox mode (enabled by default for all Jenkins pipelines) ensures that every invocation passes a script approval check. It's not foolproof: when it sees new Tuple('a','b'), it thinks the user is looking for a constructor that matches exactly two parameters of type String. Because no such constructor exists, it throws this exception. However, there are two simple workarounds to this problem.
Use groovy.lang.Tuple2 instead
If your tuple is a pair, then use groovy.lang.Tuple2 instead. The good news about this class is that it provides a constructor that supports two generic types, so it will work in your case.
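A minimal sketch (Tuple2 ships with the Groovy runtime bundled in Jenkins; as noted at the end of this answer, script approval may still be required):

def pair = new Tuple2('test', 'test2')
echo "first=${pair.first}, second=${pair.second}"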
Use exact Object[] constructor
Alternatively, you can call the exact constructor, e.g.:
def tuple = new Tuple(["test","test2"] as Object[])
Both options require script approval before you can use them (however, in this case both constructors do appear on the in-process script approval page).

Expected getter for property [tempLocation] to be marked with @Default on all

I am trying to execute a Dataflow pipeline that writes to BigQuery. I understand that in order to do so, I need to specify a GCS temp location.
So I defined options:
private interface Options extends PipelineOptions {
    @Description("GCS temp location to store temp files.")
    @Default.String(GCS_TEMP_LOCATION)
    @Validation.Required
    String getTempLocation();
    void setTempLocation(String value);

    @Description("BigQuery table to write to, specified as "
            + "<project_id>:<dataset_id>.<table_id>. The dataset must already exist.")
    @Default.String(BIGQUERY_OUTPUT_TABLE)
    @Validation.Required
    String getOutput();
    void setOutput(String value);
}
And I try to pass this to Pipeline.create():
public static void main(String[] args) {
Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().as(Options.class));
...
}
But I am getting the following error. I don't understand why, because I annotated with @Default:
Exception in thread "main" java.lang.IllegalArgumentException: Expected getter for property [tempLocation] to be marked with @Default on all [my.gcp.dataflow.StarterPipeline$Options, org.apache.beam.sdk.options.PipelineOptions], found only on [my.gcp.dataflow.StarterPipeline$Options]
Is the above snippet your code or a copy from the SDK?
You don't define a new options class for this. You actually want to call withCustomGcsTempLocation on BigQueryIO.Write [1].
Also, I think BQ should determine a temp location on its own if you do not provide one. Have you tried without setting it? Did you get an error?
[1] https://github.com/apache/beam/blob/a17478c2ee11b1d7a8eba58da5ce385d73c6dbbc/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryIO.java#L1402
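For example, a hedged sketch of wiring this into a write (the method name comes from [1]; the table, bucket, and PCollection names are placeholders):

// rows is assumed to be a PCollection<TableRow>
rows.apply(BigQueryIO.writeTableRows()
    .to("my-project:my_dataset.my_table")
    .withCustomGcsTempLocation(StaticValueProvider.of("gs://my-bucket/temp")));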
Most users simply set the staging directory. To set the staging directory, you want to do something like:
DataflowPipelineOptions options = PipelineOptionsFactory.create()
    .as(DataflowPipelineOptions.class);
options.setRunner(BlockingDataflowPipelineRunner.class);
options.setStagingLocation("gs://SET-YOUR-BUCKET-NAME-HERE");
However, if you want to set the GCP temp location specifically, you can do that as well:
GcpOptions options = PipelineOptionsFactory.as(GcpOptions.class);
options.setGcpTempLocation("gs://SET-YOUR-BUCKET-NAME-HERE/temp");
Basically you have to do .as(X.class) to get to the X options. Then once you have that object you can just set any options that are part of X. You can find many additional examples online.
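Putting that together with the original main(), a hedged sketch (the bucket name is a placeholder):

public static void main(String[] args) {
    // DataflowPipelineOptions extends GcpOptions, so the temp location can be set directly
    DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(DataflowPipelineOptions.class);
    options.setGcpTempLocation("gs://SET-YOUR-BUCKET-NAME-HERE/temp");
    Pipeline p = Pipeline.create(options);
}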

F# constructor with <"string", str>

This article shows how to use the SqlCommandProvider type. The sample code has this:
use cmd = new SqlCommandProvider<"
    SELECT TOP(@topN) FirstName, LastName, SalesYTD
    FROM Sales.vSalesPerson
    WHERE CountryRegionName = @regionName AND SalesYTD > @salesMoreThan
    ORDER BY SalesYTD
    ", connectionString>(connectionString)
What does the <..., ...> before the type constructor mean, and why does the first parameter have to be a string literal? It looks like a generic, but it's taking values, not types. The constructor also seems to be taking a connection string already in the <> section.
The angle brackets contain static parameters: the compile-time configuration for a provided type.
In your example, you are defining a type and creating an instance at the same time. It's clearer when the steps are separated.
First, define a type:
type SalesPersonQuery = SqlCommandProvider<query, connectionString>
But to actually have an instance of the type you have to create it:
let command = new SalesPersonQuery()
Now you can use command.Execute() rather than SalesPersonQuery.Execute().
The reason there is a constructor is that later on (at run-time) you can change the connection string to a different one than the one provided in the definition, for instance:
let command = new SalesPersonQuery(differentConnectionString)
You can find that in the documentation in configuration section:
Connection string can be overridden at run-time via constructor optional parameter
The first parameter can be a path to a SQL script or a SQL query. I suppose that's the reason it's a string: how else would you define a SQL query?
Again, from the documentation:
Command text (sql script) can be either literal or path to *.sql file
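Putting both documented points together, the definition could reference a script file instead of an inline literal, and the @-parameters in the SQL surface as named arguments of Execute (the file path here is a hypothetical example):

type SalesPersonQuery = SqlCommandProvider<"Queries/SalesPersons.sql", connectionString>
// topN, regionName and salesMoreThan come from the @-parameters in the query
let results = (new SalesPersonQuery()).Execute(topN = 3L, regionName = "United States", salesMoreThan = 1000000M)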

Dataflow output parameterized type to avro file

I have a pipeline that successfully outputs an Avro file as follows:
@DefaultCoder(AvroCoder.class)
class MyOutput_T_S {
    T foo;
    S bar;
    Boolean baz;
    public MyOutput_T_S() {}
}

@DefaultCoder(AvroCoder.class)
class T {
    String id;
    public T() {}
}

@DefaultCoder(AvroCoder.class)
class S {
    String id;
    public S() {}
}
...
PCollection<MyOutput_T_S> output = input.apply(myTransform);
output.apply(AvroIO.Write.to("/out").withSchema(MyOutput_T_S.class));
How can I reproduce this exact behavior, except with a parameterized output MyOutput<T, S> (where T and S are both Avro-codable using reflection)?
The main issue is that Avro reflection doesn't work for parameterized types. So based on these responses:
Setting Custom Coders & Handling Parameterized types
Using Avrocoder for Custom Types with Generics
1) I think I need to write a custom CoderFactory, but I am having difficulty figuring out exactly how this works (I'm having trouble finding examples). Oddly enough, a completely naive coder factory appears to let me run the pipeline and inspect proper output using DataflowAssert:
cr.registerCoder(MyOutput.class, new CoderFactory() {
    @Override
    public Coder<?> create(List<? extends Coder<?>> componentCoders) {
        Schema schema = new Schema.Parser().parse("{\"type\":\"record\","
            + "\"name\":\"MyOutput\","
            + "\"namespace\":\"mypackage\","
            + "\"fields\":[]}");
        return AvroCoder.of(MyOutput.class, schema);
    }

    @Override
    public List<Object> getInstanceComponents(Object value) {
        MyOutput<Object, Object> myOutput = (MyOutput<Object, Object>) value;
        List components = new ArrayList();
        return components;
    }
});
While I can successfully assert against the output now, I expect this will not cut it for writing to a file. I haven't figured out how I'm supposed to use the provided componentCoders to generate the correct schema, and if I try to just shove the schema of T or S into the fields I get:
java.lang.IllegalArgumentException: Unable to get field id from class null
2) Assuming I figure out how to encode MyOutput, what do I pass to AvroIO.Write.withSchema? If I pass either MyOutput.class or the schema, I get type mismatch errors.
I think there are two questions (correct me if I am wrong):
How do I enable the coder registry to provide coders for various parameterizations of MyOutput<T, S>?
How do I write values of MyOutput<T, S> to a file using AvroIO.Write?
The first question is to be solved by registering a CoderFactory as in the linked question you found.
Your naive coder is probably allowing you to run the pipeline without issues because serialization is being optimized away. Certainly an Avro schema with no fields will result in all fields being dropped in a serialization+deserialization round trip.
But assuming you fill in the schema with the fields, your approach to CoderFactory#create looks right. I don't know the exact cause of the message java.lang.IllegalArgumentException: Unable to get field id from class null, but the call to AvroCoder.of(MyOutput.class, schema) should work for an appropriately assembled schema. If there is an issue with this, more details (such as the rest of the stack trace) would be helpful.
However, your override of CoderFactory#getInstanceComponents should return a list of values, one per type parameter of MyOutput. Like so:
@Override
public List<Object> getInstanceComponents(Object value) {
    MyOutput<Object, Object> myOutput = (MyOutput<Object, Object>) value;
    return ImmutableList.of(myOutput.foo, myOutput.bar);
}
The second question can be answered using some of the same support code as the first, but otherwise is independent. AvroIO.Write.withSchema always explicitly uses the provided schema. It does use AvroCoder under the hood, but this is actually an implementation detail. Providing a compatible schema is all that is necessary - such a schema will have to be composed for each value of T and S for which you want to output MyOutput<T, S>.
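For example, such a schema for a concrete MyOutput<T, S> could be assembled with Avro's SchemaBuilder and reflection. This is a hedged sketch based on the field names in the question (foo, bar, baz), not a required SDK pattern:

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.reflect.ReflectData;

// component schemas for the concrete type parameters
Schema tSchema = ReflectData.get().getSchema(T.class);
Schema sSchema = ReflectData.get().getSchema(S.class);
// record schema matching MyOutput's fields for this parameterization
Schema outputSchema = SchemaBuilder.record("MyOutput").namespace("mypackage")
    .fields()
    .name("foo").type(tSchema).noDefault()
    .name("bar").type(sSchema).noDefault()
    .optionalBoolean("baz")
    .endRecord();
// outputSchema can then be handed to AvroIO.Write.withSchema(...)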

Azure WebJobs QueueTrigger attempts (and fails) to convert message's byte[] body to string

I have a storage queue to which I post messages constructed using the CloudQueueMessage(byte[]) constructor. I then tried to process the messages in a webjob function with the following signature:
public static void ConsolidateDomainAuditItem([QueueTrigger("foo")] CloudQueueMessage msg)
I get a consistent failure with exception
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Program.ConsolidateDomainAuditItem ---> System.InvalidOperationException: Exception binding parameter 'msg' ---> System.Text.DecoderFallbackException: Unable to translate bytes [FF] at index -1 from specified code page to Unicode.
at System.Text.DecoderExceptionFallbackBuffer.Throw(Byte[] bytesUnknown, Int32 index)
at System.Text.DecoderExceptionFallbackBuffer.Fallback(Byte[] bytesUnknown, Int32 index)
at System.Text.DecoderFallbackBuffer.InternalFallback(Byte[] bytes, Byte* pBytes)
at System.Text.UTF8Encoding.GetCharCount(Byte* bytes, Int32 count, DecoderNLS baseDecoder)
at System.String.CreateStringFromEncoding(Byte* bytes, Int32 byteLength, Encoding encoding)
at System.Text.UTF8Encoding.GetString(Byte[] bytes, Int32 index, Int32 count)
at Microsoft.WindowsAzure.Storage.Queue.CloudQueueMessage.get_AsString()
at Microsoft.Azure.WebJobs.Host.Storage.Queue.StorageQueueMessage.get_AsString()
at Microsoft.Azure.WebJobs.Host.Queues.Triggers.UserTypeArgumentBindingProvider.UserTypeArgumentBinding.BindAsync(IStorageQueueMessage value, ValueBindingContext context)
at Microsoft.Azure.WebJobs.Host.Queues.Triggers.QueueTriggerBinding.<BindAsync>d__0.MoveNext()
Looking at the code of UserTypeArgumentBindingProvider.BindAsync, it clearly expects to be passed a message whose body is a JSON object. And the UserType... in the name also implies that it expects to bind a POCO.
Yet the MSDN article How to use Azure queue storage with the WebJobs SDK clearly states that
You can use QueueTrigger with the following types:
string
A POCO type serialized as JSON
byte[]
CloudQueueMessage
So why is it not binding to my message?
The WebJobs SDK parameter binding relies heavily on magic parameter names. Although [QueueTrigger(...)] string seems to permit any parameter name (and the MSDN article includes as examples logMessage, inputText, queueMessage, blobName), [QueueTrigger(...)] CloudQueueMessage requires that the parameter be named message. Changing the name of the parameter from msg to message fixes the binding.
Unfortunately, I'm not aware of any documentation which states this explicitly.
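For completeness, a sketch of the fixed signature (the body is illustrative):

public static void ConsolidateDomainAuditItem([QueueTrigger("foo")] CloudQueueMessage message)
{
    // the raw byte[] body posted via CloudQueueMessage(byte[]) is available as message.AsBytes
    byte[] body = message.AsBytes;
}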
Try this instead:
public static void ConsolidateDomainAuditItem([QueueTrigger("foo")] byte[] message)
CloudQueueMessage is a wrapper, usually the bindings get rid of the wrapper and allow you to deal with the content instead.
