Incomplete list of jobs using getAllItems API call (Jenkins)

I'm using the following code snippet to retrieve the job list in a Jenkins plugin:
SecurityContext old = ACL.impersonate(ACL.SYSTEM);
for (AbstractProject<?, ?> job : Jenkins.getInstance()
.getAllItems(AbstractProject.class)) {
// useful work on jobs
}
SecurityContextHolder.setContext(old);
Unfortunately, not all jobs are processed by the loop, according to the Jenkins logs.
I have Maven and FreeStyle jobs, and only a few of them are skipped. The AbstractProject.class filter should, according to the class hierarchy, cover everything.
Could someone point me to documentation or whatever it is I'm missing? Thanks in advance.

Fixed the bug by refactoring the loop:
SecurityContext old = ACL.impersonate(ACL.SYSTEM);
for (AbstractProject<?, ?> job : Jenkins.getInstance()
.getAllItems(AbstractProject.class)) {
// useful work on jobs
}
SecurityContextHolder.setContext(old);
with :
ACL.impersonate(ACL.SYSTEM, new Runnable() {
@Override
public void run() {
for (AbstractProject<?, ?> job : Jenkins.getInstance()
.getAllItems(AbstractProject.class)) {
try {
processJob(job, remote, scm);
} catch (Exception jobProcessingException) {
LOGGER.severe("Something bad occurred processing job "
+ job.getName());
jobProcessingException.printStackTrace();
}
}
}
});
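As a side note, on newer Jenkins cores (2.14 and later, where ACL.as and ACLContext exist) the same fix can be written with try-with-resources instead of a Runnable. A sketch, assuming those APIs are available in your target core:

```java
// Sketch: equivalent of the Runnable-based fix using try-with-resources.
// Assumes Jenkins core >= 2.14 (ACL.as / ACLContext).
try (ACLContext ctx = ACL.as(ACL.SYSTEM)) {
    for (AbstractProject<?, ?> job : Jenkins.getInstance()
            .getAllItems(AbstractProject.class)) {
        // useful work on jobs
    }
} // the previous authentication context is restored automatically here
```

This avoids the pitfall in the original snippet: if the loop throws, the old security context is still restored, with no explicit finally block needed.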


Common Groovy script in active choices parameter

I have a groovy script that will be common to many jobs - they will all contain an Active Choices Reactive Parameter. Rather than repeat the same script dozens of times, I would like to place it in a (library | ??) one time and reference it in each job.
The script works beautifully for any job I paste it in. Just need to know if it is possible to plop it into one place and share across all jobs. Update it once, updates all jobs.
import jenkins.model.Jenkins;
ArrayList<String> res = new ArrayList<String>();
def requiredLabels = [new hudson.model.labels.LabelAtom ("Product")];
requiredLabels.add(new hudson.model.labels.LabelAtom(ClientName));
Jenkins.instance.computers.each {
if (it.assignedLabels.containsAll(requiredLabels)) {
res.add(it.displayName);
}
}
return res;
CAVEAT: This will work only if you have access to your Jenkins box. I haven't tried to do it by adding paths to the Jenkins home.
You can use this:
Put all your functions into a Groovy file. For example, we'll call it activeChoiceParams.groovy
Convert that file into a jar by: jar cvf <jar filename> <groovy file>. For example: jar cvf activeChoiceParams.jar activeChoiceParams.groovy
Move your jar file to /packages/lib/ext
Restart Jenkins
In your Active Choices Groovy script use (for example):
import activeChoiceParams
return <function name>()
All functions must return a list or a map
The option we decided on was to have a common parameters function .groovy that we store in git. There is a service hook that pushes the files out to a known network location on check-in.
In our Jenkins build step we then have the control dynamically load up the script and invoke the function passing in any parameters.
ArrayList<String> res = new ArrayList<String>();
try {
new GroovyShell().parse( new File( '\\\\server\\share\\folder\\parameterFunctions.groovy' ) ).with {
res = getEnvironments(ClientName);
}
} catch (Exception ex) {
res.add(ex.getMessage());
}
return res;
And our parameterFunctions.groovy will respond how we want:
public ArrayList<String> getEnvironments(String p_clientName) {
ArrayList<String> res = new ArrayList<String>();
if (!(p_clientName?.trim())){
res.add("Select a client");
return res;
}
def possibleEnvironments = yyz.getEnvironmentTypeEnum();
def requiredLabels = [new hudson.model.labels.LabelAtom ("PRODUCT")];
requiredLabels.add(new hudson.model.labels.LabelAtom(p_clientName.toUpperCase()));
Jenkins.instance.computers.each { node ->
if (node.assignedLabels.containsAll(requiredLabels)) {
// Yes. Let's get the environment name out of it.
node.assignedLabels.any { al ->
def e = yyz.getEnvironmentFromString(al.getName(), true);
if (e != null) {
res.add(al.getName());
return; // this is a continue
}
}
}
}
return res;
}
Nope, looks like it isn't possible (yet).
https://issues.jenkins-ci.org/browse/JENKINS-46394
I found an interesting solution using the Job DSL plugin.
Usually, a job definition for Active Choices looks like this:
from https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.helpers.BuildParametersContext.activeChoiceParam
job('example') {
parameters {
activeChoiceParam('CHOICE-1') {
choiceType('SINGLE_SELECT')
groovyScript {
script(readFileFromWorkspace('className.groovy') + "\n" + readFileFromWorkspace('executionPart.groovy'))
}
}
}
}
In className.groovy you can define a class as the common part.
With executionPart.groovy you can create an instance and implement your job-specific part.

Jenkins API to retrieve a build log in chunks

For a custom monitoring tool I need an API (REST) to fetch the console log of a Jenkins build in chunks.
I know about the /consoleText and /logText/progressive{Text|HTML} APIs, but the problem with this is that sometimes, our build logs get really huge (up to a few GB). I have not found any way using those existing APIs that avoids fetching and transferring the whole log in one piece. This then normally drives the Jenkins master out of memory.
I already have the Java code to efficiently fetch chunks from a file, and I have a basic Jenkins plugin that gets loaded correctly.
What I'm missing is the correct extension point so that I could call my plugin via REST, for example like
http://.../jenkins/job/<jobname>/<buildnr>/myPlugin/logChunk?start=1000&size=1000
Or also, if that is easier
http://.../jenkins/myPlugin/logChunk?start=1000&size=1000&job=<jobName>&build=<buildNr>
I tried to register my plugin with something like (that code below does not work!!)
@Extension
public class JobLogReaderAPI extends TransientActionFactory<T> implements Action {
public void doLogChunk(StaplerRequest req, StaplerResponse rsp) throws IOException {
LOGGER.log(Level.INFO, "## doLogFragment req: {}", req);
LOGGER.log(Level.INFO, "## doLogFragment rsp: {}", rsp);
}
But I failed to find the right incantation to register my plugin action.
Any tips or pointers to existing plugins where I can check how to register this?
This was indeed simpler than I expected :-) As always: once you understand the plugin system, it only takes a few lines of code.
Turns out all I needed to do was write two very simple classes.
The "action factory" that gets called by Jenkins and registers an action on the object in question (in my case a "build", i.e. a Run):
@Extension
public class ActionFactory extends TransientBuildActionFactory {
public Collection<? extends Action> createFor(Run target) {
ArrayList<Action> actions = new ArrayList<Action>();
if (target.getLogFile().exists()) {
LogChunkReader newAction = new LogChunkReader(target);
actions.add(newAction);
}
return actions;
}
}
The class that implements the logic:
public class LogChunkReader implements Action {
private Run build;
public LogChunkReader(Run build) {
this.build = build;
}
public String getIconFileName() {
return null;
}
public String getDisplayName() {
return null;
}
public String getUrlName() {
return "logChunk";
}
public Run getBuild() {
return build;
}
public void doReadChunk(StaplerRequest req, StaplerResponse rsp) throws IOException, ServletException {
// parse the "start" and "size" request parameters and stream
// just that byte range of build.getLogFile() to the response
}
}
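The byte-range read itself needs nothing Jenkins-specific. Here is a minimal, self-contained sketch of the chunk logic (the class and method names are illustrative, not part of the plugin above):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class LogChunk {
    /** Reads up to size bytes starting at offset start, clamped to the file length. */
    static String readChunk(File log, long start, int size) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(log, "r")) {
            long len = raf.length();
            if (start >= len) {
                return ""; // past end of file: nothing to send
            }
            raf.seek(start);
            byte[] buf = new byte[(int) Math.min(size, len - start)];
            raf.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        }
    }
}
```

Only size bytes are ever held in memory, which is the point: the multi-GB log never has to be loaded at once. (Note that a byte offset may split a multi-byte UTF-8 character; for plain ASCII build logs this doesn't matter.)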

How do I get the current attempt number on a background job in Hangfire?

There are some database operations I need to execute before the end of the final attempt of my Hangfire background job (I need to delete the database record related to the job)
My current job is set with the following attribute:
[AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
With that in mind, I need to determine what the current attempt number is, but am struggling to find any documentation in that regard from a Google search or Hangfire.io documentation.
Simply add PerformContext to your job method; you'll also be able to access your JobId from this object. For attempt number, this still relies on magic strings, but it's a little less flaky than the current/only answer:
public void SendEmail(PerformContext context, string emailAddress)
{
string jobId = context.BackgroundJob.Id;
int retryCount = context.GetJobParameter<int>("RetryCount");
// send an email
}
(NB! This is a solution to the OP's problem. It does not answer the question "How to get the current attempt number". If that is what you want, see the accepted answer for instance)
Use a job filter and the OnStateApplied callback:
public class CleanupAfterFailureFilter : JobFilterAttribute, IServerFilter, IApplyStateFilter
{
public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
try
{
var failedState = context.NewState as FailedState;
if (failedState != null)
{
// Job has finally failed (retry attempts exceeded)
// *** DO YOUR CLEANUP HERE ***
}
}
catch (Exception)
{
// Unhandled exceptions can cause an endless loop.
// Therefore, catch and ignore them all.
// See notes below.
}
}
public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
// Must be implemented, but can be empty.
}
}
Add the filter directly to the job function:
[CleanupAfterFailureFilter]
public static void MyJob()
or add it globally:
GlobalJobFilters.Filters.Add(new CleanupAfterFailureFilter ());
or like this:
var options = new BackgroundJobServerOptions
{
FilterProvider = new JobFilterCollection { new CleanupAfterFailureFilter() }
};
app.UseHangfireServer(options, storage);
Or see http://docs.hangfire.io/en/latest/extensibility/using-job-filters.html for more information about job filters.
NOTE: This is based on the accepted answer: https://stackoverflow.com/a/38387512/2279059
The difference is that OnStateApplied is used instead of OnStateElection, so the filter callback is invoked only after the maximum number of retries. A downside to this method is that the state transition to "failed" cannot be interrupted, but this is not needed in this case and in most scenarios where you just want to do some cleanup after a job has failed.
NOTE: Empty catch handlers are bad, because they can hide bugs and make them hard to debug in production. It is necessary here, so the callback doesn't get called repeatedly forever. You may want to log exceptions for debugging purposes. It is also advisable to reduce the risk of exceptions in a job filter. One possibility is, instead of doing the cleanup work in-place, to schedule a new background job which runs if the original job failed. Be careful to not apply the filter CleanupAfterFailureFilter to it, though. Don't register it globally, or add some extra logic to it...
You can use OnPerforming or OnPerformed method of IServerFilter if you want to check the attempts or if you want you can just wait on OnStateElection of IElectStateFilter. I don't know exactly what requirement you have so it's up to you. Here's the code you want :)
public class JobStateFilter : JobFilterAttribute, IElectStateFilter, IServerFilter
{
public void OnStateElection(ElectStateContext context)
{
// every job that has finally failed after its retry attempts comes through here
var failedState = context.CandidateState as FailedState;
if (failedState == null) return;
}
public void OnPerforming(PerformingContext filterContext)
{
// do nothing
}
public void OnPerformed(PerformedContext filterContext)
{
// you have the option to move all this code to OnPerforming if you want.
var api = JobStorage.Current.GetMonitoringApi();
var job = api.JobDetails(filterContext.BackgroundJob.Id);
foreach(var history in job.History)
{
// check reason property and you will find a string with
// Retry attempt 3 of 3: The method or operation is not implemented.
}
}
}
How to add your filter
GlobalJobFilters.Filters.Add(new JobStateFilter());
or
var options = new BackgroundJobServerOptions
{
FilterProvider = new JobFilterCollection { new JobStateFilter() }
};
app.UseHangfireServer(options, storage);

Quartz.Net How To Log Misfire

I am using Quartz.Net and we regularly see misfires during development and live. Whilst this is not a problem as such, we would like to enable some sort of tracing so that in development it is possible to see when a misfire occurs.
Are there any events we can hook into for this purpose? Ideally I am after something like...
var factory = new StdSchedulerFactory();
var scheduler = factory.GetScheduler();
scheduler.Start();
scheduler.OnMisfire += (e) => {
Console.Out.WriteLine(e);
}
You can use a trigger listener to handle this, see Lesson 7: TriggerListeners and JobListeners.
You can use the history plugin as a reference for building your own logging.
Example
class MisfireLogger : TriggerListenerSupport
{
private readonly ILog log = LogManager.GetLogger (typeof (MisfireLogger));
public override void TriggerMisfired (ITrigger trigger)
{
log.WarnFormat("Trigger {0} misfired", trigger.Key);
}
}
scheduler.ListenerManager.AddTriggerListener (new MisfireLogger ());

Possibility to use Karma with TFS builds

I'm new to Testacular (now Karma), but I found it really powerful and great for automated cross-browser JS testing. So I want to know if it is possible to use it as part of the TFS build process to run automated JS unit tests? If anyone has previous experience, could you please let us know what to watch out for, so that we don't head down the wrong path.
Regards,
Jun
Here is my pseudo code to run the karma in TFS using C# helper class. The basic idea is:
Use C# unit test to test your js files using Karma.
Capture the output of Karma to show that in your build log.
Use separate process to run Karma.
Pack all Karma files into a zip file, extract that into temporary folder for each build, so that builds with different version of karma wouldn't conflict with each other.
Clean the temp folder after build.
namespace Test.Javascript.CrossBrowserTests
{
public class KarmaTestRunner : IDisposable
{
private const string KarmaPath = @".\node_modules\karma\bin\karma";
private string NodeBasePath { get; set; }
private string NodeFullPath { get { return NodeBasePath + @"\node\node.exe"; } }
private string NpmFullPath { get { return NodeBasePath + @"\node\npm.cmd"; } }
public KarmaTestRunner()
{
ExtractKarmaZip();
LinkGlobalKarma();
}
public int Execute(params string[] arguments)
{
Process consoleProcess = RunKarma(arguments);
return consoleProcess.ExitCode;
}
public void Dispose()
{
UnlinkGlobalKarma();
RemoveTempKarmaFiles();
}
private void ExtractKarmaZip()
{
NodeBasePath = Path.GetTempPath() + Path.GetRandomFileName();
byte[] resourceBytes = Assembly.GetExecutingAssembly().GetEmbeddedResourceBytes(typeof(KarmaTestRunner).Namespace + "." + "karma0.9.4.zip");
ZipFile file = ZipFile.Read(resourceBytes);
file.ExtractAll(NodeBasePath);
}
private void LinkGlobalKarma()
{
ExecuteConsoleProcess(NpmFullPath, "link", "karma");
}
private Process RunKarma(IEnumerable<string> arguments)
{
return ExecuteConsoleProcess(NodeFullPath, new[] { KarmaPath }.Concat(arguments).ToArray());
}
private static Process ExecuteConsoleProcess(string path, params string[] arguments)
{
//Create a process to run karma with arguments
var process = new Process
{
StartInfo = new ProcessStartInfo(path, string.Join(" ", arguments))
{
UseShellExecute = false,
RedirectStandardOutput = true
}
};
//Hook up the OutputDataReceived event handler on the process
process.OutputDataReceived += (sender, e) => OnOutputLineReceived(e.Data);
process.Start();
process.BeginOutputReadLine();
process.WaitForExit();
return process;
}
static void OnOutputLineReceived(string message)
{
if (message != null)
Console.WriteLine(message);
}
private void UnlinkGlobalKarma()
{
ExecuteConsoleProcess(NpmFullPath, "uninstall", "karma");
}
private void RemoveTempKarmaFiles()
{
Directory.Delete(NodeBasePath, true);
}
}
}
Then use it like this:
namespace Test.Javascript.CrossBrowserTests
{
[TestClass]
public class CrossBrowserJSUnitTests
{
[TestMethod]
public void JavascriptTestsPassForAllBrowsers()
{
using (KarmaTestRunner karmaRunner = new KarmaTestRunner())
{
int exitCode = karmaRunner.Execute("start", @".\Test.Project\Javascript\Karma\karma.conf.js");
exitCode.ShouldBe(0);
}
}
}
}
A lot has changed since the original question and answer.
However, we've gotten Karma to run in our TFS build by running a Grunt task (I'm sure the same is possible with Gulp/whatever task runner you have). We were using C# before, but recently changed.
Have a grunt build task run.
Add a Grunt task after that.
Point the file path to your gruntfile.js and run your test task. This task will run karma:single. The grunt-cli location may be node_modules/grunt-cli/bin/grunt.
grunt.registerTask('test', [
'karma:single'
]);
Add a Publish Test Results step. Test Results Files = **/*.trx
