I have a Groovy script that will be common to many jobs - they will all contain an Active Choices Reactive Parameter. Rather than repeat the same script dozens of times, I would like to place it in one shared location (a library of some sort?) and reference it from each job.
The script works beautifully for any job I paste it into. I just need to know whether it is possible to put it in one place and share it across all jobs: update it once, and all jobs are updated.
import jenkins.model.Jenkins;

ArrayList<String> res = new ArrayList<String>();
def requiredLabels = [new hudson.model.labels.LabelAtom("Product")];
requiredLabels.add(new hudson.model.labels.LabelAtom(ClientName));
Jenkins.instance.computers.each {
    if (it.assignedLabels.containsAll(requiredLabels)) {
        res.add(it.displayName);
    }
}
return res;
CAVEAT: This will work only if you have access to your Jenkins box. I haven't tried doing it by adding paths to the Jenkins home.
You can use this:
Put all your functions into a Groovy file. For example, we'll call it activeChoiceParams.groovy.
Convert that file into a jar by: jar cvf <jar filename> <groovy file>. For example: jar cvf activeChoiceParams.jar activeChoiceParams.groovy
Move your jar file to /packages/lib/ext
Restart Jenkins
In your Active Choices Groovy script use (for example):
import activeChoiceParams
return <function name>()
All functions must return a list or a map.
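For reference, here is a minimal sketch of what such an activeChoiceParams.groovy could contain, reusing the node-label lookup from the question; the helper name getNodesForClient is purely illustrative:
import jenkins.model.Jenkins
import hudson.model.labels.LabelAtom

// Returns the display names of all nodes carrying the "Product" label
// plus the label for the given client (mirrors the script in the question).
List<String> getNodesForClient(String clientName) {
    def requiredLabels = [new LabelAtom('Product'), new LabelAtom(clientName)]
    def res = []
    Jenkins.instance.computers.each { c ->
        if (c.assignedLabels.containsAll(requiredLabels)) {
            res.add(c.displayName)
        }
    }
    return res
}
Per the steps above, the Active Choices script would then import activeChoiceParams and return getNodesForClient(ClientName).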
The option we decided on was to have a common parameter-functions .groovy file that we store in Git. A service hook pushes the file out to a known network location on check-in.
In our Jenkins build step we then have the control dynamically load the script and invoke the function, passing in any parameters.
ArrayList<String> res = new ArrayList<String>();
try {
    new GroovyShell().parse( new File( '\\\\server\\share\\folder\\parameterFunctions.groovy' ) ).with {
        res = getEnvironments(ClientName);
    }
} catch (Exception ex) {
    res.add(ex.getMessage());
}
return res;
And our parameterFunctions.groovy will respond how we want:
public ArrayList<String> getEnvironments(String p_clientName) {
    ArrayList<String> res = new ArrayList<String>();
    if (!(p_clientName?.trim())) {
        res.add("Select a client");
        return res;
    }
    def possibleEnvironments = yyz.getEnvironmentTypeEnum();
    def requiredLabels = [new hudson.model.labels.LabelAtom("PRODUCT")];
    requiredLabels.add(new hudson.model.labels.LabelAtom(p_clientName.toUpperCase()));
    Jenkins.instance.computers.each { node ->
        if (node.assignedLabels.containsAll(requiredLabels)) {
            // Yes. Let's get the environment name out of it.
            node.assignedLabels.any { al ->
                def e = yyz.getEnvironmentFromString(al.getName(), true);
                if (e != null) {
                    res.add(al.getName());
                    return; // this is a continue
                }
            }
        }
    }
    return res;
}
Nope, looks like it isn't possible (yet).
https://issues.jenkins-ci.org/browse/JENKINS-46394
I found an interesting solution using the Job DSL plugin.
Usually the job definition for Active Choices looks like this:
from https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.helpers.BuildParametersContext.activeChoiceParam
job('example') {
    parameters {
        activeChoiceParam('CHOICE-1') {
            choiceType('SINGLE_SELECT')
            groovyScript {
                script(readFileFromWorkspace('className.groovy') + "\n" + readFileFromWorkspace('executionPart.groovy'))
            }
        }
    }
}
In className.groovy you define a class as the common part.
In executionPart.groovy you create an instance and implement your job-specific part.
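As an illustration only (the class and method names here are made up, and the label lookup mirrors the script from the question), the two files might look like this:
// className.groovy - the shared part
class NodeLookup {
    List<String> nodesWithLabels(List<String> labelNames) {
        def required = labelNames.collect { new hudson.model.labels.LabelAtom(it) }
        def res = []
        jenkins.model.Jenkins.instance.computers.each { c ->
            if (c.assignedLabels.containsAll(required)) {
                res.add(c.displayName)
            }
        }
        return res
    }
}

// executionPart.groovy - the job-specific part, appended after the class by the script(...) call above
return new NodeLookup().nodesWithLabels(['Product', ClientName])
Because the script(...) call concatenates the two files, the class definition and the final return statement end up as a single Active Choices script.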
I have multiple subscriptions from Cloud Pub/Sub to read, based on a certain prefix pattern, using Apache Beam. I extend the PTransform class and implement the expand() method to read from multiple subscriptions and apply a Flatten transformation to the PCollectionList (multiple PCollections, one from each subscription). I have a problem passing the subscription prefix as a ValueProvider into the expand() method, since expand() is called at template creation time, not when the job is launched. However, if I only use 1 subscription, I can pass a ValueProvider into PubsubIO.readStrings().fromSubscription().
Here's some sample code.
public class MultiPubSubIO extends PTransform<PBegin, PCollection<PubsubMessage>> {
    private ValueProvider<String> prefixPubsub;

    public MultiPubSubIO(@Nullable String name, ValueProvider<String> prefixPubsub) {
        super(name);
        this.prefixPubsub = prefixPubsub;
    }

    @Override
    public PCollection<PubsubMessage> expand(PBegin input) {
        List<String> myList = null;
        try {
            // prefixPubsub.get() will return error
            myList = PubsubHelper.getAllSubscription("projectID", prefixPubsub.get());
        } catch (Exception e) {
            LogHelper.error(String.format("Error getting list of subscription : %s", e.toString()));
        }
        List<PCollection<PubsubMessage>> collectionList = new ArrayList<PCollection<PubsubMessage>>();
        if (myList != null && !myList.isEmpty()) {
            for (String subs : myList) {
                PCollection<PubsubMessage> pCollection = input
                    .apply("ReadPubSub", PubsubIO.readMessagesWithAttributes().fromSubscription(subs));
                collectionList.add(pCollection);
            }
            PCollection<PubsubMessage> pubsubMessagePCollection = PCollectionList.of(collectionList)
                .apply("FlattenPcollections", Flatten.pCollections());
            return pubsubMessagePCollection;
        } else {
            LogHelper.error(String.format("No subscription with prefix %s found", prefixPubsub));
            return null;
        }
    }

    public static MultiPubSubIO read(ValueProvider<String> prefixPubsub) {
        return new MultiPubSubIO(null, prefixPubsub);
    }
}
So I'm wondering how to use the same approach as PubsubIO.read().fromSubscription() to read from a ValueProvider. Or am I missing something?
Searched links:
extract-value-from-valueprovider-in-apache-beam - the answer talks about using a DoFn, while I need a PTransform that receives a PBegin.
Unfortunately this is not possible currently:
It is not possible for the value of a ValueProvider to affect transform expansion - at expansion time, it is unknown; by the time it is known, the pipeline shape is already fixed.
There is currently no transform like PubsubIO.read() that can accept a PCollection of topic names. Eventually there will be (it is enabled by Splittable DoFn), but it will take a while - nobody is working on this currently.
You can use MultipleReadFromPubSub from the Apache Beam Python I/O module: https://beam.apache.org/releases/pydoc/2.27.0/_modules/apache_beam/io/gcp/pubsub.html
topic_1 = PubSubSourceDescriptor('projects/myproject/topics/a_topic')
topic_2 = PubSubSourceDescriptor(
    'projects/myproject2/topics/b_topic',
    'my_label',
    'my_timestamp_attribute')
subscription_1 = PubSubSourceDescriptor(
    'projects/myproject/subscriptions/a_subscription')
results = pipeline | MultipleReadFromPubSub(
    [topic_1, topic_2, subscription_1])
We are trying to run a daily Dataflow pipeline that reads from Bigtable and dumps data into GCS (using HBase's Scan and HBaseResultCoder as the coder), as follows (just to highlight the idea):
Pipeline pipeline = Pipeline.create(options);
Scan scan = new Scan();
scan.setCacheBlocks(false).setMaxVersions(1);
scan.addFamily(Bytes.toBytes("f"));
CloudBigtableScanConfiguration btConfig = new CloudBigtableScanConfiguration.Builder().withProjectId("aaa").withInstanceId("bbb").withTableId("ccc").withScan(scan).build();
pipeline.apply(Read.from(CloudBigtableIO.read(btConfig))).apply(TextIO.Write.to("gs://bucket/dir/file").withCoder(HBaseResultCoder.getInstance()));
pipeline.run();
This seems to run perfectly as expected.
Now, we want to be able to use the dumped file in GCS for a recovery job if needed. That is, we want a Dataflow pipeline that reads the dumped data (a PCollection of Result objects) from GCS and creates Mutations ('Put' objects, basically). For some reason, the following code fails with a bunch of NullPointerExceptions. We are unsure why that would be the case - the if-statements below that check for null or zero-length values were added to see if that would make a difference, but it did not.
// Part of DoFn<Result, Mutation>
@Override
public void processElement(ProcessContext c) {
    Result result = c.element();
    byte[] row = result.getRow();
    if (row == null || row.length == 0) { // NullPointerException at this line
        return;
    }
    Put mutation = new Put(result.getRow());
    // go through the column/value entries of this row, and create a corresponding put mutation.
    for (Entry<byte[], byte[]> entry : result.getFamilyMap(Bytes.toBytes(cf)).entrySet()) {
        byte[] qualifier = entry.getKey();
        if (qualifier == null || qualifier.length == 0) {
            continue;
        }
        byte[] val = entry.getValue();
        if (val == null || val.length == 0) {
            continue;
        }
        mutation.addImmutable(cf_bytes, qualifier, entry.getValue());
    }
    c.output(mutation);
}
The error we get is the following (line 83 is marked above):
(2a6ad6372944050d): java.lang.NullPointerException at some.package.RecoveryFromGcs$CreateMutationFromResult.processElement(RecoveryFromGcs.java:83)
I have two questions:
1. Has someone experienced something like this when trying to use ParDo on a PCollection of Results to get a PCollection of Mutations to be written to Bigtable?
2. Is this a reasonable approach? The end-goal is to be able to leave a daily snapshot of our bigtable (for a specific column family) on a regular basis by means of a back-up in case something bad happens. We wish to be able to read the back-up data via dataflow, and write it to bigtable when we need to.
Any suggestions and help will be really appreciated!
-------- Edit
Here is the code that scans Bigtable and dumps data to GCS:
(Some details are hidden if they are not relevant.)
    public static void execute(Options options) {
        Pipeline pipeline = Pipeline.create(options);
        final String cf = "f"; // some specific column family.
        Scan scan = new Scan();
        scan.setCacheBlocks(false).setMaxVersions(1); // Disable caching and read only the latest cell.
        scan.addFamily(Bytes.toBytes(cf));
        CloudBigtableScanConfiguration btConfig =
            BigtableUtils.getCloudBigtableScanConfigurationBuilder(options.getProject(), "some-bigtable-name").withScan(scan).build();
        PCollection<Result> result = pipeline.apply(Read.from(CloudBigtableIO.read(btConfig)));
        PCollection<Mutation> mutation =
            result.apply(ParDo.of(new CreateMutationFromResult(cf))).setCoder(new HBaseMutationCoder());
        mutation.apply(TextIO.Write.to("gs://path-to-files").withCoder(new HBaseMutationCoder()));
        pipeline.run();
    }
}
The job that reads the output of the above code has the following code:
(This is the one throwing the exception when reading from GCS.)
    public static void execute(Options options) {
        Pipeline pipeline = Pipeline.create(options);
        PCollection<Mutation> mutations = pipeline.apply(TextIO.Read
            .from("gs://path-to-files").withCoder(new HBaseMutationCoder()));
        CloudBigtableScanConfiguration config =
            BigtableUtils.getCloudBigtableScanConfigurationBuilder(options.getProject(), btTarget).build();
        if (config != null) {
            CloudBigtableIO.initializeForWrite(pipeline);
            mutations.apply(CloudBigtableIO.writeToTable(config));
        }
        pipeline.run();
    }
}
The error I am getting (https://jpst.it/Qr6M) is a bit confusing, as the mutations are all Put objects, but the error is about a 'Delete' object.
It's probably best to discuss this issue on the Cloud Bigtable client GitHub issues page. We are currently working on import/export features like this one, so we'll respond quickly. We'll also explore this approach on our own even if you don't open a GitHub issue, but the GitHub issue will allow us to communicate better.
FWIW, I don't understand how you could get an NPE on the line you highlighted. Are you sure you have the right line?
EDIT (12/12):
The following processElement() method should work to convert a Result to a Put:
@Override
public void processElement(DoFn<Result, Mutation>.ProcessContext c) throws Exception {
    Result result = c.element();
    byte[] row = result.getRow();
    if (row != null && row.length > 0) {
        Put put = new Put(row);
        for (Cell cell : result.rawCells()) {
            put.add(cell);
        }
        c.output(put);
    }
}
I'm using the following code snippet to retrieve the job list in a Jenkins plugin:
SecurityContext old = ACL.impersonate(ACL.SYSTEM);
for (AbstractProject<?, ?> job : Jenkins.getInstance()
        .getAllItems(AbstractProject.class)) {
    // useful work on jobs
}
SecurityContextHolder.setContext(old);
Unfortunately, not all jobs are processed by the loop, according to the Jenkins logs.
I have Maven and FreeStyle jobs, and only a few of them are discarded. The AbstractProject.class filter, according to the class hierarchy, should return everything.
Could someone point out documentation or something I'm missing? Thanks in advance.
Fixed the bug by replacing this loop:
SecurityContext old = ACL.impersonate(ACL.SYSTEM);
for (AbstractProject<?, ?> job : Jenkins.getInstance()
        .getAllItems(AbstractProject.class)) {
    // useful work on jobs
}
SecurityContextHolder.setContext(old);
with:
ACL.impersonate(ACL.SYSTEM, new Runnable() {
    @Override
    public void run() {
        for (AbstractProject<?, ?> job : Jenkins.getInstance()
                .getAllItems(AbstractProject.class)) {
            try {
                processJob(job, remote, scm);
            } catch (Exception jobProcessingException) {
                LOGGER.severe("Something bad occurred processing job "
                    + job.getName());
                jobProcessingException.printStackTrace();
            }
        }
    }
});
I'm new to Testacular (now Karma), but I found it really powerful and great for automated cross-browser JS testing. So I want to know whether it is possible to use it as part of the TFS build procedure to run automated JS unit tests. If anyone has previous experience, please let us know what to watch out for so that we don't go down the wrong path.
Regards,
Jun
Here is my pseudo code to run Karma in TFS using a C# helper class. The basic idea is:
Use a C# unit test to test your JS files using Karma.
Capture the output of Karma to show it in your build log.
Use a separate process to run Karma.
Pack all the Karma files into a zip file and extract it into a temporary folder for each build, so that builds with different versions of Karma don't conflict with each other.
Clean up the temp folder after the build.
namespace Test.Javascript.CrossBrowserTests
{
    public class KarmaTestRunner : IDisposable
    {
        private const string KarmaPath = @".\node_modules\karma\bin\karma";

        private string NodeBasePath { get; set; }
        private string NodeFullPath { get { return NodeBasePath + @"\node\node.exe"; } }
        private string NpmFullPath { get { return NodeBasePath + @"\node\npm.cmd"; } }

        public KarmaTestRunner()
        {
            ExtractKarmaZip();
            LinkGlobalKarma();
        }

        public int Execute(params string[] arguments)
        {
            Process consoleProcess = RunKarma(arguments);
            return consoleProcess.ExitCode;
        }

        public void Dispose()
        {
            UnlinkGlobalKarma();
            RemoveTempKarmaFiles();
        }

        private void ExtractKarmaZip()
        {
            NodeBasePath = Path.GetTempPath() + Path.GetRandomFileName();
            byte[] resourceBytes = Assembly.GetExecutingAssembly().GetEmbeddedResourceBytes(typeof(KarmaTestRunner).Namespace + "." + "karma0.9.4.zip");
            ZipFile file = ZipFile.Read(resourceBytes);
            file.ExtractAll(NodeBasePath);
        }

        private void LinkGlobalKarma()
        {
            ExecuteConsoleProcess(NpmFullPath, "link", "karma");
        }

        private Process RunKarma(IEnumerable<string> arguments)
        {
            return ExecuteConsoleProcess(NodeFullPath, new[] { KarmaPath }.Concat(arguments).ToArray());
        }

        private static Process ExecuteConsoleProcess(string path, params string[] arguments)
        {
            //Create a process to run karma with arguments
            //Hook up the OutputDataReceived event handler on the process
        }

        static void OnOutputLineReceived(string message)
        {
            if (message != null)
                Console.WriteLine(message);
        }

        private void UnlinkGlobalKarma()
        {
            ExecuteConsoleProcess(NpmFullPath, "uninstall", "karma");
        }

        private void RemoveTempKarmaFiles()
        {
            Directory.Delete(NodeBasePath, true);
        }
    }
}
Then use it like this:
namespace Test.Javascript.CrossBrowserTests
{
    [TestClass]
    public class CrossBrowserJSUnitTests
    {
        [TestMethod]
        public void JavascriptTestsPassForAllBrowsers()
        {
            using (KarmaTestRunner karmaRunner = new KarmaTestRunner())
            {
                int exitCode = karmaRunner.Execute("start", @".\Test.Project\Javascript\Karma\karma.conf.js");
                exitCode.ShouldBe(0);
            }
        }
    }
}
A lot has changed since the original question and answer.
However, we've gotten Karma to run in our TFS build by running a Grunt task (I'm sure the same is possible with Gulp/whatever task runner you have). We were using C# before, but recently changed.
Have a Grunt build task run.
Add a Grunt task after that.
Point the file path to your gruntfile.js and run your test task. This task will run karma:single. The grunt-cli location may be node_modules/grunt-cli/bin/grunt.
grunt.registerTask('test', [
    'karma:single'
]);
Add a Publish Test Results step. Test Results Files = **/*.trx
More information about publishing Karma Test Results
I use the following Groovy snippet to obtain the plain-text representation of an HTML page in a Grails application:
String str = new URL("http://www.example.com/some/path")?.text?.decodeHTML()
Now I want to alter the code so that the request will time out after 5 seconds (resulting in str == null). What is the easiest and most Groovy way to achieve that?
I checked the source code of Groovy 2.1.8; the code below is available:
'http://www.google.com'.toURL().getText([connectTimeout: 2000, readTimeout: 3000])
The logic that processes the configuration map is located in the method org.codehaus.groovy.runtime.ResourceGroovyMethods#configuredInputStream:
private static InputStream configuredInputStream(Map parameters, URL url) throws IOException {
    final URLConnection connection = url.openConnection();
    if (parameters != null) {
        if (parameters.containsKey("connectTimeout")) {
            connection.setConnectTimeout(DefaultGroovyMethods.asType(parameters.get("connectTimeout"), Integer.class));
        }
        if (parameters.containsKey("readTimeout")) {
            connection.setReadTimeout(DefaultGroovyMethods.asType(parameters.get("readTimeout"), Integer.class));
        }
        if (parameters.containsKey("useCaches")) {
            connection.setUseCaches(DefaultGroovyMethods.asType(parameters.get("useCaches"), Boolean.class));
        }
        if (parameters.containsKey("allowUserInteraction")) {
            connection.setAllowUserInteraction(DefaultGroovyMethods.asType(parameters.get("allowUserInteraction"), Boolean.class));
        }
        if (parameters.containsKey("requestProperties")) {
            @SuppressWarnings("unchecked")
            Map<String, String> properties = (Map<String, String>) parameters.get("requestProperties");
            for (Map.Entry<String, String> entry : properties.entrySet()) {
                connection.setRequestProperty(entry.getKey(), entry.getValue());
            }
        }
    }
    return connection.getInputStream();
}
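Applied to the snippet from the question, a minimal sketch (assuming Groovy 2.1.8+ and the Grails decodeHTML() codec from the question) could look like this:
String str = null
try {
    // getText(Map) honors connectTimeout/readTimeout; a timeout raises SocketTimeoutException
    str = new URL("http://www.example.com/some/path")
            .getText(connectTimeout: 5000, readTimeout: 5000)
            ?.decodeHTML()
} catch (SocketTimeoutException ignored) {
    // str stays null, as the question asked for
}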
You'd have to do it the old way: get a URLConnection, set the timeouts on that object, then read the data in through a Reader.
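For example, a rough sketch of that approach (the 5-second timeout and the Grails decodeHTML() codec are carried over from the question):
String str = null
def conn = new URL("http://www.example.com/some/path").openConnection()
conn.connectTimeout = 5000 // milliseconds
conn.readTimeout = 5000
try {
    // read the body through a Reader, as suggested above
    str = conn.inputStream.withReader("UTF-8") { reader -> reader.text }?.decodeHTML()
} catch (SocketTimeoutException ignored) {
    // timed out -> leave str as null
}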
This would be a good thing to add to Groovy though (imho), as it's something I could see myself needing at some point ;-)
Maybe suggest it as a feature request on the JIRA?
I've added it as an RFE on the Groovy JIRA:
https://issues.apache.org/jira/browse/GROOVY-3921
So hopefully we'll see it in a future version of Groovy...