My project has a number of packages ("models", "controllers", etc.). I've set up Jenkins with the Cobertura plugin to generate coverage reports, which is great. I'd like to mark a build as unstable if coverage drops below a certain threshold, but only on certain packages (e.g., "controllers", but not "models"). I don't see an obvious way to do this in the configuration UI, however -- it looks like the thresholds are global.
Is there a way to do this?
(Answering my own question here)
As far as I can tell, this isn't possible -- I haven't found anything after a couple of days of looking. I wrote a simple script that does what I want: take the coverage output, parse it, and fail the build if the coverage of specific packages doesn't meet certain thresholds. It's rough and can be cleaned up/expanded, but the basic idea is here. Comments are welcome.
#!/usr/bin/env python
'''
Jenkins' Cobertura plugin doesn't allow marking a build as successful or
failed based on coverage of individual packages -- only the project as a
whole. This script will parse the coverage.xml file and fail if the coverage of
specified packages doesn't meet the thresholds given
'''
import sys

from lxml import etree

PACKAGES_XPATH = etree.XPath('/coverage/packages/package')


def main(argv):
    filename = argv[0]
    package_args = argv[1:] if len(argv) > 1 else []
    # format is package_name:coverage_threshold
    package_coverage = {package: int(coverage) for
                        package, coverage in [x.split(':') for x in package_args]}

    xml = open(filename, 'r').read()
    root = etree.fromstring(xml)

    packages = PACKAGES_XPATH(root)

    failed = False
    for package in packages:
        name = package.get('name')
        if name in package_coverage:
            # We care about this one
            print 'Checking package {} -- need {}% coverage'.format(
                name, package_coverage[name])
            coverage = float(package.get('line-rate', '0.0')) * 100
            if coverage < package_coverage[name]:
                print ('FAILED - Coverage for package {} is {}% -- '
                       'minimum is {}%'.format(
                           name, coverage, package_coverage[name]))
                failed = True
            else:
                print "PASS"
    if failed:
        sys.exit(1)


if __name__ == '__main__':
    main(sys.argv[1:])
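A usage example, assuming the script is saved as check_coverage.py (the name is chosen here just for illustration) and run as a Jenkins build step after the coverage report has been generated:

python check_coverage.py coverage.xml controllers:80

The script exits with status 1 if the "controllers" package is below 80% line coverage, which marks that build step as failed.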
I have created a Python-based pipeline that contains a ParDo which uses the Python base64 package. When I run the pipeline locally with DirectRunner, all is well. When I run the same pipeline with Dataflow on Google Cloud, it fails with the following error:
NameError: name 'base64' is not defined [while running 'ParDo(WriteToSeparateFiles)-ptransform-47']
It seems the base64 package is missing, but I believe that to be part of the Python standard library and always present.
Here is my complete pipeline code:
import base64 as base64
import argparse

import apache_beam as beam
import apache_beam.io.fileio as fileio
import apache_beam.io.filesystems as filesystems
from apache_beam.options.pipeline_options import PipelineOptions


class WriteToSeparateFiles(beam.DoFn):
    def __init__(self, outdir):
        self.outdir = outdir

    def process(self, element):
        writer = filesystems.FileSystems.create(self.outdir + str(element) + '.txt')
        message = "This is the content of my file"
        message_bytes = message.encode('ascii')
        base64_bytes = base64.b64encode(message_bytes)  ### Error here
        writer.write(base64_bytes)
        writer.close()


argv = None
parser = argparse.ArgumentParser()
known_args, pipeline_args = parser.parse_known_args(argv)
pipeline_options = PipelineOptions(pipeline_args)

with beam.Pipeline(options=pipeline_options) as pipeline:
    outputs = (
        pipeline
        | beam.Create(range(10))  # Change range here to be millions if needed
        | beam.ParDo(WriteToSeparateFiles('gs://kolban-edi/'))
    )
    outputs | beam.Map(print)
    #print(outputs)
Solution
The solution has been found and is documented here:
https://cloud.google.com/dataflow/docs/resources/faq#how_do_i_handle_nameerrors
After much study, I found a section in the Google Dataflow Frequently Asked Questions titled "How do I handle NameErrors?"
Reading that documentation, the suggestion was to add an extra runner parameter called --save_main_session. As soon as I added that to the execution, the problem was resolved.
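The flag can also be set programmatically through the pipeline options; a minimal sketch of that, assuming the same pipeline_args as in the pipeline code above:

from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

pipeline_options = PipelineOptions(pipeline_args)
# equivalent to passing --save_main_session on the command line; it pickles the
# main session so that global imports such as base64 are available on the workers
pipeline_options.view_as(SetupOptions).save_main_session = True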
I am trying to parse a large txt file (about 2000 sentences). When I want to set the model_path, I get this message:
NLTK was unable to find stanford-parser.jar! Set the CLASSPATH
environment variable.
And also, when I set the CLASSPATH to this file, another message appears:
NLTK was unable to find stanford-parser-(\d+)(.(\d+))+-models.jar!
Set the CLASSPATH environment variable.
Would you help me to solve it?
This is my code:
import nltk
from nltk.parse.stanford import StanfordDependencyParser
dependency_parser = StanfordDependencyParser( model_path="edu\stanford\lp\models\lexparser\englishPCFG.ser.gz")
===========================================================================
NLTK was unable to find stanford-parser.jar! Set the CLASSPATH
environment variable.
For more information, on stanford-parser.jar, see:
https://nlp.stanford.edu/software/lex-parser.shtml
import os
os.environ['CLASSPATH'] = "stanford-corenlp-full-2018-10-05/*"
dependency_parser = StanfordDependencyParser( model_path="edu\stanford\lp\models\lexparser\englishPCFG.ser.gz")
===========================================================================
NLTK was unable to find stanford-parser.jar! Set the CLASSPATH
environment variable.
For more information, on stanford-parser.jar, see:
https://nlp.stanford.edu/software/lex-parser.shtml
os.environ['CLASSPATH'] = "stanford-corenlp-full-2018-10-05/stanford-parser-full-2018-10-17/stanford-parser.jar"
>>> dependency_parser = StanfordDependencyParser( model_path="stanford-corenlp-full-2018-10-05/stanford-parser-full-2018-10-17/edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
NLTK was unable to find stanford-parser-(\d+)(.(\d+))+-models.jar!
Set the CLASSPATH environment variable.
For more information, on stanford-parser-(\d+)(.(\d+))+-models.jar, see:
https://nlp.stanford.edu/software/lex-parser.shtml
You should get the new stanfordnlp dependency parser that is native to Python!
It will run slower on a CPU than on a GPU, but it should still run reasonably fast.
Just run pip install stanfordnlp to install.
import stanfordnlp
stanfordnlp.download('en') # This downloads the English models for the neural pipeline
nlp = stanfordnlp.Pipeline() # This sets up a default neural pipeline in English
doc = nlp("Barack Obama was born in Hawaii. He was elected president in 2008.")
doc.sentences[0].print_dependencies()
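Applied to a text file like the one in the question, a minimal sketch (the file name is just a placeholder) could look like this:

import stanfordnlp

nlp = stanfordnlp.Pipeline()
# read the ~2000-sentence file and run the whole text through the pipeline
with open("sentences.txt", encoding="utf-8") as f:
    doc = nlp(f.read())
for sentence in doc.sentences:
    sentence.print_dependencies()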
There is also a helpful command line tool:
python -m stanfordnlp.run_pipeline -l en example.txt
Full details here: https://stanfordnlp.github.io/stanfordnlp/
GitHub: https://github.com/stanfordnlp/stanfordnlp
I want to integrate the Specs2 test results with Jenkins.
I added the below properties in sbt:
resolver:
"maven specs2" at "http://mvnrepository.com/artifact"
libraryDependencies:
"org.specs2" %% "specs2" % "2.0-RC1" % "test",
System Property:
testOptions in Test += Tests.Setup(() => System.setProperty("specs2.outDir", "/target/specs2-reports")) //Option1
//testOptions in Test += Tests.Setup(() => System.setProperty("specs2.junit.outDir", "/target/specs2-reports")) //Option2
testOptions in Test += Tests.Argument(TestFrameworks.Specs2, "console", "junitxml")
If I run the below command, it does not generate any specs reports in the above-mentioned directory ("/target/specs2-reports").
sbt> test
If I run the below command, it asks for the directory, as shown in the error message below:
sbt> test-only -- junitxml
[error] Could not run test code.model.UserSpec: java.lang.IllegalArgumentException: junitxml requires directory to be specified, example: junitxml(directory="xxx")
And it works only if I give the directory as shown below:
sbt> test-only -- junitxml(directory="\target\specs-reports")
But sometimes it does not generate all the specs report XMLs (sometimes it generates only one report, sometimes only two reports, etc.).
If I give test-only -- junitxml(directory="\target\specs-reports") in Jenkins, it gives the below error.
[error] Not a valid key: junitxml (similar: ivy-xml)
[error] junitxml(
[error] ^
My main goal is to generate consolidated test reports in JUnit XML format and integrate them with Jenkins. Kindly help me solve my problem.
Best Regards,
Hari
The option for the junitxml output directory is: "specs2.junit.outDir" and the default value is "target/test-reports".
So if you don't change anything, you can just instruct Jenkins to grab the XML files from "target/test-reports", which is what I usually do.
Also you might have to enclose your sbt commands in Jenkins with quotes. This is what I typically do:
"test-only -- html junitxml console"
What:
With Jenkins I want to periodically process only the changed files from SVN and commit the output of the processing back to SVN.
Why:
We are committing binary files into SVN (we are working with Oracle Forms and are committing fmb files). I created a script which exports the fmbs to XML (with the original Fmb2XML tool from Oracle) and then converts the XML to plain source, which we also want to commit. This allows us to grep the sources, view the changes, ....
Problem:
At the moment I am only able to check out everything, convert the whole directory and commit the whole directory back to SVN. But since all plain-text files are newly generated, they all appear changed in SVN. I want to commit only the changed ones.
Can anyone help me with this?
I installed the Groovy plugin, configured the Groovy language and created a script which I execute as a "system Groovy script". The script looks like this:
import java.lang.ProcessBuilder.Redirect

import hudson.model.*
import hudson.util.*
import hudson.scm.*
import hudson.scm.SubversionChangeLogSet.LogEntry

// uncomment one of the following def build = ... lines

// work with current build
def build = Thread.currentThread()?.executable

// for testing, use last build or specific build number
//def item = hudson.model.Hudson.instance.getItem("Update_SRC_Branch")
//def build = item.getLastBuild()
//def build = item.getBuildByNumber(35)

// get ChangeSets with all changed items
def changeSet = build.getChangeSet()
List<LogEntry> items = changeSet.getItems()
def affectedFiles = items.collect { it.paths }

// get filtered file names (only fmb) without path
def fileNames = affectedFiles.flatten().findResults {
    if (it.path.substring(it.path.lastIndexOf(".") + 1) != "fmb") return null
    it.path.substring(it.path.lastIndexOf("/") + 1)
}.sort().unique()

// setup log files
def stdOutFile = "${build.rootDir}\\stdout.txt"
def stdErrFile = "${build.rootDir}\\stderr.txt"

// now execute the external transforming
fileNames.each {
    def params = [...]

    def processBuilder = new ProcessBuilder(params)
    // redirect stdout and stderr to log files
    processBuilder.redirectOutput(new File(stdOutFile))
    processBuilder.redirectError(new File(stdErrFile))
    def process = processBuilder.start()
    process.waitFor()

    // print log files
    println new File(stdOutFile).readLines()
    System.err.println new File(stdErrFile).readLines()
}
Afterwards I use the command line ("svn commit") to commit the updated files.
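For example (the commit message and paths are just placeholders), the commit step could look like:

svn commit -m "Update generated plain-text sources" path/to/changed/files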
Preliminary note: in SVN jargon, getting files from the repository is a "checkout" and saving to the repository is a "commit". Don't mix CVS and SVN terms; it can lead to misinterpretation.
In order to get the list of changed files in a revision (or revision set) you can use:
The easy way: svn log with the options -q -v. For a single revision you also add -c REVNO; for a revision range, -r REVSTART:REVEND. The additional --xml option will probably produce more suitable output than plain text.
You must post-process the output of log in order to get a pure file list, because the log contains some data that is useless for you, and in the case of a log over a range the same file can appear in more than one revision.
z:\>svn log -q -v -r 1190 https://subversion.assembla.com/svn/customlocations-greylink/
------------------------------------------------------------------------
r1190 | lazybadger | 2012-09-20 13:19:45 +0600 (Чт, 20 сен 2012)
Changed paths:
M /trunk/Abrikos.ini
M /trunk/ER-Telecom.ini
M /trunk/GorNet.ini
M /trunk/KrosLine.ini
M /trunk/Rostelecom.ini
M /trunk/Vladlink.ini
------------------------------------------------------------------------
Example for a single revision: you have to run the log through | grep trunk | sort -u and add the repository base to the filenames.
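If you go with the --xml output mentioned above instead of grep, a minimal Python sketch (the repository URL and revision number are placeholders) for turning it into a plain list of changed files could be:

import subprocess
import xml.etree.ElementTree as ET

REPO = "https://example.com/svn/repo"  # placeholder repository URL
REVISION = "1190"                      # placeholder revision number

# "svn log -q -v --xml" lists every changed file as a <path> element
xml_output = subprocess.check_output(
    ["svn", "log", "-q", "-v", "--xml", "-c", REVISION, REPO])
root = ET.fromstring(xml_output)

changed = sorted({p.text for p in root.iter("path")})
for path in changed:
    print(path)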
The harder way: with an additional SCM (namely Mercurial) and hgsubversion you'll get a slightly nicer (maybe) log with hg log --template "{files}\n" -- only slightly, because you'll get just the file list, but the file sets of different revisions are newline-separated and the filenames inside a revision are space-separated.
I've grouped a lot of projects in a project group. All the info is in the project.bpg. Now I'd like to automatically build them all.
How do I build all the projects using the command line?
I'm still using Delphi 7.
I never tried it myself, but here is a German article describing that you can use make -f ProjectGroup.bpg, because *.bpg files essentially are makefiles.
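For example (the group file name is just an example), using the Borland make.exe that ships in the Delphi 7 Bin directory:

"C:\Program Files\Borland\Delphi7\Bin\make.exe" -f MyProjects.bpg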
You can also run Delphi from the command line or a batch file, passing the .bpg file name as a parameter.
Edit: Example (for D2007, but can be adjusted for D7):
=== rebuild.cmd (excerpt) ===
@echo off
set DelphiPath=C:\Program Files\CodeGear\RAD Studio\5.0\bin
set DelphiExe=bds.exe
set LibPath=V:\Library
set LibBpg=Library.groupproj
set LibErr=Library.err
set RegSubkey=BDSClean
:buildlib
echo Rebuilding %LibBpg%...
if exist "%LibPath%\%LibErr%" del /q "%LibPath%\%LibErr%"
"%DelphiPath%\%DelphiExe%" -pDelphi -r%RegSubkey% -b "%LibPath%\%LibBpg%"
if %errorlevel% == 0 goto buildlibok
As I said in a comment to Ulrich Gerhardt's answer, make project_group.bpg is useless if your projects are in subdirectories. Make won't use relative paths and the projects won't compile correctly.
I've made a Python script to compile all the DPRs in every subdirectory. This is what I really wanted to do, but I'll leave the above answer as the accepted one. Although it didn't work for me, it really answered my question.
Here is my script, compile_all.py. I believe it may help somebody:
# -*- coding: utf-8 -*-
import os.path
import subprocess
import sys

# put this file in your root dir
BASE_PATH = os.path.dirname(os.path.realpath(__file__))
os.chdir(BASE_PATH)

# your Delphi compiler path (os.pathsep keeps the existing PATH entries intact)
os.environ['PATH'] += os.pathsep + "C:\\Program Files\\Borland\\Delphi7\\Bin"
DELPHI = "DCC32.exe"
DELPHI_PARAMS = ['-B', '-Q', '-$D+', '-$L+']

for root, dirs, files in os.walk(BASE_PATH):
    projects = [project for project in files if project.lower().endswith('.dpr')]
    if 'FastMM' in root:  # put here projects you don't want to compile
        continue
    os.chdir(os.path.join(BASE_PATH, root))
    for project in projects:
        print
        print '*** Compiling:', os.path.join(root, project)
        result = subprocess.call([DELPHI] + DELPHI_PARAMS + [project])
        if result != 0:
            print 'Failed for', project, result
            sys.exit(result)
Another advantage of this approach is that you don't need to add new projects to your .bpg file. If a project is in a subdirectory, it will be compiled.