I have several targets defined in my top wscript, let's call them build_a, build_b and build_c.
How do I add a function all to my wscript that builds all these targets (it doesn't matter whether sequentially or in parallel)?
So in dummy Python code, I expect something like this:
def all():
    tar = ['configure', 'build_a', 'build_b', 'build_c']
It's simple to compose commands:
from waflib import Options

def all(bld):
    # run the listed commands first, then whatever else was on the command line
    commands_after = Options.commands
    Options.commands = ['configure', 'build_a', 'build_b', 'build_c']
    Options.commands += commands_after
See https://waf.io/book/#_custom_commands (§7.1.2 Command composition)
Waf consumes Options.commands while processing them, so you can use:
waf all test
# equivalent to waf configure build_a build_b build_c test
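For completeness, a minimal top-level wscript could look roughly like this (a sketch only; it assumes build_a, build_b and build_c are commands you have already defined elsewhere in the same file):

from waflib import Options

top = '.'
out = 'build'

def configure(conf):
    pass  # your configuration goes here

# ... your existing build_a, build_b and build_c commands ...

def all(ctx):
    # queue the composed commands in front of whatever else was requested,
    # so "waf all test" still runs "test" at the end
    Options.commands = ['configure', 'build_a', 'build_b', 'build_c'] + Options.commands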
I am trying to use Bazel with Pybind, and it requires that I set the following variables:
"""Repository rule for Python autoconfiguration.
`python_configure` depends on the following environment variables:
* `PYTHON_BIN_PATH`: location of python binary.
* `PYTHON_LIB_PATH`: Location of python libraries.
"""
https://github.com/pybind/pybind11_bazel/blob/master/python_configure.bzl
I don't want to have to pass them in manually when building my libraries; how can I hardcode these env vars in my WORKSPACE?
To (always) set an environment variable for repository rule consumption, you can use the --repo_env command line option. And if you want to include it with every invocation in your workspace, you can add these flags to your .bazelrc file therein, as sketched below.
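For example, a .bazelrc along these lines (a sketch; the variable names are the ones from the pybind11_bazel docstring above and the paths are placeholders):

# .bazelrc at the workspace root
build --repo_env=PYTHON_BIN_PATH=/usr/bin/python3
build --repo_env=PYTHON_LIB_PATH=/usr/lib/python3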
Now the wisdom of doing that could be questioned. If it's actually project (repo) configuration and not build host configuration, it would probably make more sense, and be more targeted and explicit, if it were an attribute of the given rule which was then checked in with the rest of the build configuration.
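To sketch that alternative shape (this is not how python_configure itself is written; the rule and attribute names here are hypothetical), a repository rule can take the paths as attributes and write them into a file other rules can load, so the values live in the checked-in WORKSPACE instead of each developer's environment:

# mypaths.bzl -- hypothetical attribute-driven repository rule
def _python_paths_impl(repo_ctx):
    repo_ctx.file("BUILD", "")
    repo_ctx.file(
        "paths.bzl",
        'PYTHON_BIN_PATH = "{}"\nPYTHON_LIB_PATH = "{}"\n'.format(
            repo_ctx.attr.python_bin_path,
            repo_ctx.attr.python_lib_path,
        ),
    )

python_paths = repository_rule(
    implementation = _python_paths_impl,
    attrs = {
        "python_bin_path": attr.string(),
        "python_lib_path": attr.string(),
    },
)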
And looking at the name, there may be another question about specifying the Python configuration (from outside the Bazel build) instead of actually using a correctly resolved Python toolchain (but I have to say I have no background in what the given rule is about and what it is trying to accomplish, so I cannot render judgment; this is just a general comment).
To address your comment... I don't know what other factors make it "not accept" it or what exactly that looks like, but if I have this mini-example:
.
├── BUILD
├── WORKSPACE
└── customrule.bzl
Where customrule.bzl reads:
def _run_me(repo_ctx):
    repo_ctx.file(
        "WORKSPACE",
        'workspace(name = "{}")\n'.format(repo_ctx.name),
        executable = False,
    )
    repo_ctx.file(
        "BUILD",
        'exports_files(["var.sh"], visibility=["//visibility:public"])',
        executable = False,
    )
    repo_ctx.file(
        "var.sh",
        "echo {}\n".format(repo_ctx.os.environ.get("var1")),
        executable = True,
    )

wsrule = repository_rule(
    implementation = _run_me,
    environ = ["var1"],
)
The WORKSPACE is:
load(":customrule.bzl", "wsrule")
wsrule(
    name = "extdep"
)
And BUILD:
sh_binary(
    name = "tgt",
    srcs = ["@extdep//:var.sh"],
)
Then I do get:
$ bazel run --repo_env var1=val1 tgt
val1
and:
$ bazel run --repo_env var1=val2 tgt
val2
I.e. this is a way to pass variables to a repo rule and it does (as such) work.
If you absolutely know you must call a build with some variable set to a certain value (which, as mentioned above, is itself a requirement worth closer examination) and you want this to be associated with the project/repo, you can always check in a build.sh or any such file that wraps your bazel call to be exactly what it must be. But again, this looks more likely to not really be entirely "The Right Thing" to do or want.
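Such a wrapper could be as small as this (a sketch only; the paths and the target pattern are placeholders):

#!/bin/sh
# build.sh checked in next to the WORKSPACE file
exec bazel build \
    --repo_env=PYTHON_BIN_PATH=/usr/bin/python3 \
    --repo_env=PYTHON_LIB_PATH=/usr/lib/python3 \
    //...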
I had a pipeline with a PowerShell task that ran some Python script. It worked without any problems.
Then I converted my pipeline into YAML format to store it as code and got something like this (a part of the whole YAML pipeline):
variables:
  Build.SyncSources: false
  REPO_PATH_DA: '/asdfg/qwerty'
  REPO_PATH_DS: '/zxcvbn/tyuio'
  PIP_REPO_HOST: 'bbbb.nnnn.yyyy.com'
  PIP_REPO_URL: 'https://$(PIP_REPO_HOST)/api/pypi/pypi/simple'
  PIP_VENV_NAME: 'my_test_venv'
  SelectedBranch: ''
  WorkingDirectory: $(System.DefaultWorkingDirectory)
…………………………………………….
- task: PowerShell@1
  displayName: 'Install package'
  inputs:
    scriptType: inlineScript
    inlineScript: |
      .\$(PIP_VENV_NAME)\Scripts\activate
      python.exe -m pip install --index-url=$(PIP_REPO_URL) --trusted-host=$(PIP_REPO_HOST) mypackage
      python.exe -m pip install --index-url=$(PIP_REPO_URL) --trusted-host=$(PIP_REPO_HOST) $(WorkingDirectory)$(REPO_PATH_DA)\qwerty
And after I run this pipeline I get an error:
##[error]System.DefaultWorkingDirectory : The term 'System.DefaultWorkingDirectory' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
I tried to change the name of the variable to the format $(env:System_DefaultWorkingDirectory), but with no success. I suppose that predefined variables are not passed into the YAML pipeline. Do you have any ideas how to resolve this?
Based on my test, the same script could work fine in my YAML pipeline. The $(WorkingDirectory) will be converted to a path like xxx/xx/s.
To check whether the predefined variable $(System.DefaultWorkingDirectory) has been passed to the YAML pipeline, you could add a task to list all environment variables:
steps:
- script: SET | more
In the task log, you could search for SYSTEM_DEFAULTWORKINGDIRECTORY and check whether the variable exists.
For example:
If this environment variable exists, you could try to use the following format: $env:SYSTEM_DEFAULTWORKINGDIRECTORY
Here is the example:
variables:
  Build.SyncSources: false
  .....
  WorkingDirectory: '$env:SYSTEM_DEFAULTWORKINGDIRECTORY'
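Alternatively (a sketch only, not tested against your exact pipeline), you could read the environment variable directly in the inline script instead of going through a pipeline variable:

- task: PowerShell@1
  displayName: 'Install package'
  inputs:
    scriptType: inlineScript
    inlineScript: |
      .\$(PIP_VENV_NAME)\Scripts\activate
      python.exe -m pip install --index-url=$(PIP_REPO_URL) --trusted-host=$(PIP_REPO_HOST) "$env:SYSTEM_DEFAULTWORKINGDIRECTORY$(REPO_PATH_DA)\qwerty"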
If you couldn't find this variable, you can also check if there are equivalent variables.
For example:
BUILD_SOURCESDIRECTORY, BUILD_REPOSITORY_LOCALPATH
Is this a build or a release pipeline? If it's a release, you may need to use $(Pipeline.Workspace) instead of $(System.DefaultWorkingDirectory).
Also, have you tried using $(System.DefaultWorkingDirectory) directly in your PowerShell task, instead of declaring a variable that references it?
What:
With Jenkins I want to periodically process only the changed files from SVN and commit the output of the processing back to SVN.
Why:
We are committing binary files into SVN (we are working with Oracle Forms and are committing fmb files). I created a script which exports the fmbs to XML (with the original Fmb2XML tool from Oracle), and then I convert the XML to plain source, which we also want to commit. This allows us grepping, viewing the changes, ....
Problem:
At the moment I am only able to check out everything, convert the whole directory and commit the whole directory back to SVN. But since all the plain text files are newly generated, they all appear changed in SVN. I want to commit only the changed ones.
Can anyone help me with this?
I installed the Groovy plugin, configured the Groovy language and created a script which I execute as a "system Groovy script". The script looks like:
import java.lang.ProcessBuilder.Redirect

import hudson.model.*
import hudson.util.*
import hudson.scm.*
import hudson.scm.SubversionChangeLogSet.LogEntry

// uncomment one of the following "def build = ..." lines

// work with current build
def build = Thread.currentThread()?.executable

// for testing, use last build or specific build number
//def item = hudson.model.Hudson.instance.getItem("Update_SRC_Branch")
//def build = item.getLastBuild()
//def build = item.getBuildByNumber(35)

// get ChangeSets with all changed items
def changeSet = build.getChangeSet()
List<LogEntry> items = changeSet.getItems()
def affectedFiles = items.collect { it.paths }

// get filtered file names (only fmb) without path
def fileNames = affectedFiles.flatten().findResults {
    if (it.path.substring(it.path.lastIndexOf(".") + 1) != "fmb") return null
    it.path.substring(it.path.lastIndexOf("/") + 1)
}.sort().unique()

// setup log files
def stdOutFile = "${build.rootDir}\\stdout.txt"
def stdErrFile = "${build.rootDir}\\stderr.txt"

// now execute the external transformation for each changed fmb
fileNames.each {
    def params = [...]
    def processBuilder = new ProcessBuilder(params)
    // redirect stdout and stderr to log files
    processBuilder.redirectOutput(new File(stdOutFile))
    processBuilder.redirectError(new File(stdErrFile))
    def process = processBuilder.start()
    process.waitFor()
    // print log files
    println new File(stdOutFile).readLines()
    System.err.println new File(stdErrFile).readLines()
}
Afterwards I use command line with "svn commit" to commit the updated files.
Preliminary note: in SVN jargon, getting files from the repo is "checkout" and saving to the repo is "commit". Don't mix CVS and SVN terms; it can lead to misinterpretation.
In order to get the list of changed files in a revision (or revset) you can use:
Easy way: svn log with the options -q -v. For a single revision you also add -c REVNO; for a revision range, -r REVSTART:REVEND. The additional --xml option will probably produce more suitable output than plain text.
You have to post-process the output of log in order to get a pure list, because the log contains some data that is useless for you, and in the case of a log for a range the same file can be included in more than one revision.
z:\>svn log -q -v -r 1190 https://subversion.assembla.com/svn/customlocations-greylink/
------------------------------------------------------------------------
r1190 | lazybadger | 2012-09-20 13:19:45 +0600 (Thu, 20 Sep 2012)
Changed paths:
M /trunk/Abrikos.ini
M /trunk/ER-Telecom.ini
M /trunk/GorNet.ini
M /trunk/KrosLine.ini
M /trunk/Rostelecom.ini
M /trunk/Vladlink.ini
------------------------------------------------------------------------
Example for a single revision: you have to pipe the log output through grep trunk | sort -u and add the repo base to the filenames (or post-process the --xml output, as sketched below).
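If you go the --xml route, a small hypothetical post-processing script (the paths in the XML output are repository-relative; the URL and revision below are just the ones from the example above) could look like this:

# print the unique changed paths of one revision via "svn log --xml -v -q"
import subprocess
import xml.etree.ElementTree as ET

def changed_paths(repo_url, revision):
    xml_log = subprocess.check_output(
        ["svn", "log", "--xml", "-v", "-q", "-c", str(revision), repo_url])
    root = ET.fromstring(xml_log)
    # every <path> element under <logentry>/<paths> is one changed item
    return sorted(set(p.text for p in root.iter("path")))

for path in changed_paths("https://subversion.assembla.com/svn/customlocations-greylink/", 1190):
    print(path)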
Harder way: with an additional SCM (namely Mercurial) and hgsubversion you'll get a slightly better (maybe) log with hg log --template "{files}\n" - only slightly, because you'll get only the file list, but the filesets of different revisions are newline-separated and the filenames inside a revision are space-separated.
My project has a number of packages ("models", "controllers", etc.). I've set up Jenkins with the Cobertura plugin to generate coverage reports, which is great. I'd like to mark a build as unstable if coverage drops below a certain threshold, but only on certain packages (e.g., "controllers", but not "models"). I don't see an obvious way to do this in the configuration UI, however -- it looks like the thresholds are global.
Is there a way to do this?
(Answering my own question here)
As far as I can tell, this isn't possible -- I haven't seen anything after a couple days of looking. I wrote a simple script that would do what I want -- take the coverage output, parse it, and fail the build if coverage of specific packages didn't meet certain thresholds. It's dirty and can be cleaned up/expanded, but the basic idea is here. Comments are welcome.
#!/usr/bin/env python
'''
Jenkins' Cobertura plugin doesn't allow marking a build as successful or
failed based on coverage of individual packages -- only the project as a
whole. This script will parse the coverage.xml file and fail if the coverage of
specified packages doesn't meet the thresholds given
'''
import sys

from lxml import etree

PACKAGES_XPATH = etree.XPath('/coverage/packages/package')


def main(argv):
    filename = argv[0]
    package_args = argv[1:] if len(argv) > 1 else []
    # format is package_name:coverage_threshold
    package_coverage = {package: int(coverage) for
                        package, coverage in [x.split(':') for x in package_args]}

    xml = open(filename, 'r').read()
    root = etree.fromstring(xml)
    packages = PACKAGES_XPATH(root)

    failed = False
    for package in packages:
        name = package.get('name')
        if name in package_coverage:
            # We care about this one
            print 'Checking package {} -- need {}% coverage'.format(
                name, package_coverage[name])
            coverage = float(package.get('line-rate', '0.0')) * 100
            if coverage < package_coverage[name]:
                print ('FAILED - Coverage for package {} is {}% -- '
                       'minimum is {}%'.format(
                           name, coverage, package_coverage[name]))
                failed = True
            else:
                print "PASS"

    if failed:
        sys.exit(1)


if __name__ == '__main__':
    main(sys.argv[1:])
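A hypothetical invocation, as a shell build step after the coverage report has been generated (the script name, report path and thresholds here are just examples; the package:threshold argument format is the one the script expects):

python check_coverage.py coverage.xml controllers:80 models:50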
I've grouped a lot of projects in a project group. All the info is in the project.bpg. Now I'd like to automatically build them all.
How do I build all the projects using the command line?
I'm still using Delphi 7.
I never tried it myself, but here is a German article describing that you can use make -f ProjectGroup.bpg, because *.bpg files essentially are makefiles.
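For example, from the directory containing the group file (a sketch, assuming a default Delphi 7 installation path; adjust it to your setup):

"C:\Program Files\Borland\Delphi7\Bin\make.exe" -f project.bpg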
You can also run Delphi from the command line or a batch file, passing the .bpg file name as a parameter.
Edit: Example (for D2007, but can be adjusted for D7):
=== rebuild.cmd (excerpt) ===
@echo off
set DelphiPath=C:\Program Files\CodeGear\RAD Studio\5.0\bin
set DelphiExe=bds.exe
set LibPath=V:\Library
set LibBpg=Library.groupproj
set LibErr=Library.err
set RegSubkey=BDSClean
:buildlib
echo Rebuilding %LibBpg%...
if exist "%LibPath%\%LibErr%" del /q "%LibPath%\%LibErr%"
"%DelphiPath%\%DelphiExe%" -pDelphi -r%RegSubkey% -b "%LibPath%\%LibBpg%"
if %errorlevel% == 0 goto buildlibok
As I said in a comment to Ulrich Gerhardt's answer, make -f project_group.bpg is useless if your projects are in subdirectories: make won't use relative paths and the projects won't compile correctly.
I've made a Python script to compile all the DPRs in every subdirectory. This is what I really wanted to do, but I'll leave the answer above as the accepted one. Although it didn't work for me, it really answered my question.
Here is my script, compile_all.py. I believe it may help somebody:
# -*- coding: utf-8 -*-
import os.path
import subprocess
import sys

# put this file in your root dir
BASE_PATH = os.path.dirname(os.path.realpath(__file__))
os.chdir(BASE_PATH)

# your Delphi compiler path (note the ";" separator when appending to PATH)
os.environ['PATH'] += ";C:\\Program Files\\Borland\\Delphi7\\Bin"
DELPHI = "DCC32.exe"
DELPHI_PARAMS = ['-B', '-Q', '-$D+', '-$L+']

for root, dirs, files in os.walk(BASE_PATH):
    projects = [project for project in files if project.lower().endswith('.dpr')]
    if 'FastMM' in root:  # put here projects you don't want to compile
        continue
    os.chdir(os.path.join(BASE_PATH, root))
    for project in projects:
        print
        print '*** Compiling:', os.path.join(root, project)
        result = subprocess.call([DELPHI] + DELPHI_PARAMS + [project])
        if result != 0:
            print 'Failed for', project, result
            sys.exit(result)
Another advantage of this approach is that you don't need to add new projects to your .bpg file. If a project is in a subdir, it will be compiled.