Patching class cprogram to accept two targets - waf

I have implemented a custom C compiler tool, but at the last step (linking) I am struggling to get it working. The linker produces two output files: one is the binary, and the second is a file with additional information.
Normally you would have a wscript with something like this:
def configure(cnf):
    cnf.load('my_compiler_c')

def build(bld):
    bld(features='c cprogram', source='main.c', target='app.bbin')
And I could fake a second target like this:
from waflib.Tools.ccroot import link_task

class cprogram(link_task):
    run_str = (
        "${LINK_CC} ${CFLAGS} ${OTHERFLAGS} "
        "${INFO_FILE}${TGT[0].relpath()+'.abc'} "  # TGT[0] plus a string concatenation gives the app.bbin.abc file
        "${CCLNK_TGT_F}${TGT[0].relpath()} "  # TGT[0] is the app.bbin file
        "${CCLNK_SRC_F}${SRC} ${STLIB_MARKER} ${STLIBPATH_ST:STLIBPATH} "
        "${CSTLIB_ST:CSTLIB} ${STLIB_ST:STLIB} ${LIBPATH_ST:LIBPATH} ${LIB_ST:LIB} ${LDFLAGS}"
    )
    ext_out = [".bbin"]
    vars = ["LINKDEPS"]
But of course, with this hacky implementation, waf does not know about the second target, and rebuilds will not be triggered when app.bbin.abc is missing.
So how do I correctly pass two or more targets to the cprogram class?

Well, you just have to tell waf that you need two targets:
def configure(cnf):
    cnf.load('my_compiler_c')

def build(bld):
    bld(features='c cprogram', source='main.c', target=['app.bbin', 'app.bbin.abc'])
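(If this works as intended, the link task should then have two output nodes, so the run_str above could refer to ${TGT[1].relpath()} directly instead of concatenating '.abc' onto TGT[0].)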
As I suppose you don't want to type two targets every time, you can use an alias to build your task generator:
# Naive, non-tested code.
from waflib.Configure import conf

@conf
def myprogram(bld, *k, **kw):
    kw['features'] = "c cprogram"
    # derive the second .abc target from the main target name
    kw['target'] = [kw['target'], kw['target'] + '.abc']
    return bld(*k, **kw)
You call:
def build(bld):
    bld.myprogram(source='main.c', target='app.bbin')
Note: You can put all your code in a plugin, to keep your wscripts clean:
def configure(cnf):
    cnf.load('myprogram')  # loads my_c_compiler and the myprogram alias

def build(bld):
    bld.myprogram(source='main.c', target='app.bbin')
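A minimal sketch of such a plugin (untested; it assumes the tool file is named myprogram.py and sits where waf can find it, e.g. next to the wscript and loaded with cnf.load('myprogram', tooldir='.')):

# myprogram.py -- hypothetical waf tool bundling the compiler setup and the alias
from waflib.Configure import conf

def configure(cnf):
    # pull in the custom compiler tool, including the patched cprogram class
    cnf.load('my_compiler_c')

@conf
def myprogram(bld, *k, **kw):
    kw['features'] = "c cprogram"
    kw['target'] = [kw['target'], kw['target'] + '.abc']
    return bld(*k, **kw)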


Conditionally create a Bazel rule based on --config

I'm working on a problem in which I only want to create a particular rule if a certain Bazel config has been specified (via '--config'). We have been using Bazel since 0.11 and have a bunch of build infrastructure that works around former limitations in Bazel. I am incrementally porting us up to newer versions. One of the features that was missing was compiler transitions, and so we rolled our own using configs and some external scripts.
My first attempt at solving my problem looks like this:
load("#rules_cc//cc:defs.bzl", "cc_library")
# use this with a select to pick targets to include/exclude based on config
# see __build_if_role for an example
def noop_impl(ctx):
    pass

noop = rule(
    implementation = noop_impl,
    attrs = {
        "deps": attr.label_list(),
    },
)
def __sanitize(config):
    if len(config) > 2 and config[:2] == "//":
        config = config[2:]
    return config.replace(":", "_").replace("/", "_")

def build_if_config(**kwargs):
    config = kwargs['config']
    kwargs.pop('config')
    name = kwargs['name'] + '_' + __sanitize(config)
    binary_target_name = kwargs['name']
    kwargs['name'] = binary_target_name
    cc_library(**kwargs)
    noop(
        name = name,
        deps = select({
            config: [binary_target_name],
            "//conditions:default": [],
        }),
    )
This almost gets me there, but the problem is that if I want to build a library as an output, then it becomes an intermediate dependency, and therefore gets deleted or never built.
For example, if I do this:
build_if_config(
    name = "some_lib",
    srcs = ["foo.c"],
    config = "//:my_config",
)
and then I run
bazel build --config my_config //:some_lib
Then libsome_lib.a does not make it to bazel-out, although if I define it using cc_library, then it does.
Is there a way that I can just create the appropriate rule directly in the macro instead of creating a noop rule and using a select? Or another mechanism?
Thanks in advance for your help!
As I noted in my comment, I was misunderstanding how Bazel figures out its dependencies. The "create a file" section of the Rules Tutorial explains some of the details, and I followed along there for some of my solution.
Basically, the problem was not that the built files were not sticking around; it was that they were never getting built. Bazel did not know to look in the deps attribute and build those things: it seems I had to create an action that uses the deps, and then declare the resulting output by returning a (list containing a) DefaultInfo provider.
Below is my new noop_impl function:
def noop_impl(ctx):
    if len(ctx.attr.deps) == 0:
        return None

    # ctx.attr has the attributes of this rule
    dep = ctx.attr.deps[0]

    # DefaultInfo is apparently some sort of globally available
    # class that can be used to index Target objects
    infile = dep[DefaultInfo].files.to_list()[0]

    outfile = ctx.actions.declare_file('lib' + ctx.label.name + '.a')
    ctx.actions.run_shell(
        inputs = [infile],
        outputs = [outfile],
        command = "cp %s %s" % (infile.path, outfile.path),
    )

    # we can also instantiate a DefaultInfo to indicate what output
    # we provide
    return [DefaultInfo(files = depset([outfile]))]
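If I'm reading __sanitize correctly, the selector target for the example above is named some_lib__my_config, so the library can then be materialized with:
bazel build --config my_config //:some_lib__my_config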

Multiple outputs from one input based on features

I would like to build many outputs based on the same input, e.g. a hex and a binary from an elf.
I will do this multiple times, different places in the wscript so I'd like to wrap it in a feature.
Ideally something like:
bld(features="hex", source="output.elf")
bld(features="bin", source="output.elf")
How would I go about implementing this?
If your elf files always have the same extension, you can simply use that:
# untested, naive code
from waflib import TaskGen

@TaskGen.extension('.elf')
def process_elf(self, node):  # <- self = task gen, node is the current input node
    if "bin" in self.features:
        bin_node = node.change_ext('.bin')
        self.create_task('make_bin_task', node, bin_node)
    if "hex" in self.features:
        hex_node = node.change_ext('.hex')
        self.create_task('make_hex_task', node, hex_node)
If not, you have to define the features you want like this:
from waflib import TaskGen

@TaskGen.feature("hex", "bin")  # <- attach method to features hex AND bin
@TaskGen.before('process_source')
def transform_source(self):  # <- here self = task generator
    self.inputs = self.to_nodes(getattr(self, 'source', []))
    self.meths.remove('process_source')  # <- to disable the standard process_source

@TaskGen.feature("hex")  # <- attach method to feature hex
@TaskGen.after('transform_source')
def process_hex(self):
    for i in self.inputs:
        self.create_task("make_hex_task", i, i.change_ext(".hex"))

@TaskGen.feature("bin")  # <- attach method to feature bin
@TaskGen.after('transform_source')
def process_bin(self):
    for i in self.inputs:
        self.create_task("make_bin_task", i, i.change_ext(".bin"))
You have to write the two task classes make_hex_task and make_bin_task yourself. You should put all this in a separate Python file and make a "plugin".
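A minimal sketch of those two task classes (untested; it assumes the conversions are done with GNU objcopy, located at configure time via cnf.find_program('objcopy', var='OBJCOPY')):

from waflib.Task import Task

class make_hex_task(Task):
    # produce an Intel HEX file from the elf input
    run_str = "${OBJCOPY} -O ihex ${SRC} ${TGT}"
    color = 'CYAN'

class make_bin_task(Task):
    # produce a raw binary image from the elf input
    run_str = "${OBJCOPY} -O binary ${SRC} ${TGT}"
    color = 'CYAN'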
You can also define a "shortcut" to call:
def build(bld):
    bld.make_bin(source = "output.elf")
    bld.make_hex(source = "output.elf")
    bld(features = "hex bin", source = "output.elf")  # when both are needed in the same place
Like this:
from waflib.Configure import conf

@conf
def make_bin(self, *k, **kw):  # <- here self = build context
    kw["features"] = "bin"  # <- you could also append bin to any existing features in kw
    return self(*k, **kw)

@conf
def make_hex(self, *k, **kw):
    kw["features"] = "hex"
    return self(*k, **kw)

How to Reference A Jenkins Global Shared Library

After reviewing the docs, a number of questions here on SO, and trying a dozen or so different script configurations, I cannot figure out how to reference a shared Groovy library. I've added the library in the Jenkins global configuration (screenshot omitted), and that part appears to be working. I'm referencing the script in my job configuration (screenshot omitted), which produces this error:
Script1: 1: unable to resolve class Library, unable to find class for annotation
 @ line 1, column 1.
   @Library('sonarQubeAPI')_
The script code, not that I think it matters, looks like this:
import groovy.json.JsonSlurper

class SonarQubeAPI {
    static String getVersion() {
        return "1.0";
    }

    static void getSonarStatus(projectKey) {
        def sonarQubeUserToken = "USERTOKEN";
        def projectStatusUrl = "pathtosonarqube/api/qualitygates/project_status?projectKey=" + projectKey;

        println("Retrieving project status for " + projectKey);

        def json = getJson(sonarQubeUserToken, projectStatusUrl);
        def jsonSlurper = new JsonSlurper();
        def object = jsonSlurper.parseText(json);

        println(object.projectStatus.status);
    }

    static String getJson(userToken, url) {
        def authString = "${userToken}:".getBytes().encodeBase64().toString();
        def conn = url.toURL().openConnection();
        conn.setRequestProperty("Authorization", "Basic ${authString}");
        return conn.content.text;
    }
}
I'm probably just a magic character off, but I can't seem to lock it down.
Shared libraries are a feature of Jenkins Pipelines, not of Jenkins (core) itself. You can use them only in Pipeline jobs (and child types like Multibranch Pipeline).

How to create a rule from within another rule in Bazel

Situation
I have two Skylark extension rules: blah_library and blah_binary. All of a blah_library's transitive dependencies are propagated by returning a provider(transitive_deps=...), and are handled appropriately by any ultimate dependent blah_binary target.
What I want to do
I want each blah_library to also create a filegroup with all the transitive dependencies mentioned above, so that I can access them separately. E.g., I'd like to be able to pass them in as data dependencies to a cc_binary. In other words:
# Somehow have this automatically create a target named `foo__trans_deps`?
blah_library(
    name = "foo",
    srcs = [...],
    deps = [...],
)

cc_binary(
    ...,
    data = [":foo__trans_deps"],
)
How should I do this? Any help would be appreciated!
What I've tried
Make a macro
I tried making a macro like so:
_real_blah_library = rule(...)

def blah_library(name, *args, **kwargs):
    native.filegroup(
        name = name + "__trans_deps",
        srcs = ???,
    )
    _real_blah_library(name=name, *args, **kwargs)
But I'm not sure how to access the provider provided by _real_blah_library from within the macro, so I don't know how to populate the filegroup's srcs field...
Modify the blah_library rule's implementation
Right now I have something like:
_blah_provider = provider(fields=['transitive_deps'])

def _blah_library_impl(ctx):
    ...
    transitive_deps = []
    for dep in ctx.attr.deps:
        transitive_deps += dep[_blah_provider].transitive_deps
    return _blah_provider(transitive_deps=transitive_deps)

blah_library = rule(impl=_blah_library_impl, ...)
I tried adding the following to _blah_library_impl, but it didn't work because apparently native.filegroup can't be called within a rule's implementation ("filegroup() cannot be called during the analysis phase"):
def _blah_library_impl(ctx):
    ...
    transitive_deps = []
    for dep in ctx.attr.deps:
        transitive_deps += dep[_blah_provider].transitive_deps
    native.filegroup(
        name = ctx.attr.name + "__trans_deps",
        srcs = transitive_deps,
    )
    return _blah_provider(transitive_deps=transitive_deps)
You can't easily create a filegroup like that, but you can still achieve what you want.
If you want to use the rule in genrule.srcs, filegroup.srcs, cc_binary.data, etc., then return a DefaultInfo provider (along with _blah_provider) and set the files field to the transitive closure of files.
You can refine the solution if you want a different set of files when the rule is in a data attribute vs. when in any other (e.g. srcs): just also set the runfiles-related members in DefaultInfo. (Frankly I don't know the difference between them, I'd just set all runfiles-fields to the same value.)
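A rough sketch of that suggestion (untested; it reuses the provider from the question and assumes transitive_deps is a list of File objects):

def _blah_library_impl(ctx):
    transitive_deps = []
    for dep in ctx.attr.deps:
        transitive_deps += dep[_blah_provider].transitive_deps

    return [
        _blah_provider(transitive_deps = transitive_deps),
        DefaultInfo(
            # consumed when the target appears in srcs-like attributes
            files = depset(transitive_deps),
            # consumed when the target appears in data attributes
            runfiles = ctx.runfiles(files = transitive_deps),
        ),
    ]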
I ended up making my own special filegroup-like rule, as discussed in the comments under @Laszlo's answer. Here's the raw code in case it's a useful starting point for anyone:
def _whl_deps_filegroup_impl(ctx):
    input_wheels = ctx.attr.src[_PyZProvider].transitive_wheels
    output_wheels = []
    for wheel in input_wheels:
        file_name = wheel.basename
        output_wheel = ctx.actions.declare_file(file_name)
        # TODO(josh): Use symlinks instead of copying. Couldn't figure out how
        # to do this due to issues with constructing absolute paths...
        ctx.actions.run(
            outputs=[output_wheel],
            inputs=[wheel],
            arguments=[wheel.path, output_wheel.path],
            executable="cp",
            mnemonic="CopyWheel")
        output_wheels.append(output_wheel)
    return [DefaultInfo(files=depset(output_wheels))]

whl_deps_filegroup = rule(
    _whl_deps_filegroup_impl,
    attrs = {
        "src": attr.label(),
    },
)
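Hypothetical usage from a BUILD file (the .bzl file name and the :foo target are made up for illustration; :foo must be a target that provides _PyZProvider):

load(":whl_deps_filegroup.bzl", "whl_deps_filegroup")

whl_deps_filegroup(
    name = "foo_wheels",
    src = ":foo",
)

cc_binary(
    name = "app",
    srcs = ["main.c"],
    data = [":foo_wheels"],
)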

Is there an equivalent to __MODULE__ for named functions in Elixir/Erlang?

Is there an equivalent for retrieving the name of a function, just like __MODULE__ retrieves the name of a module in Elixir/Erlang?
Example:
defmodule Demo do
  def home_menu do
    module_name = __MODULE__
    func_name = :home_menu
    # is there a __FUNCTION__?
  end
end
EDITED
The selected answer works, but calling the returned function name with apply/3 yields this error:
[error] %UndefinedFunctionError{arity: 4, exports: nil, function: :public_home, module: Demo, reason: nil}
I have a function:
defp public_home(u, m, msg, reset) do
end
The function in question will strictly be called within its module.
Is there a way to dynamically call a private function by name within its own module?
▶ defmodule M, do: def m, do: __ENV__.function
▶ M.m
#⇒ {:m, 0}
Essentially, the __ENV__ structure contains everything you might need.
Yes, there is. In Erlang there are several predefined macros that should be able to provide the information you need:
% The name of the current function
?FUNCTION_NAME
% The arity of the current function (remember name alone isn't enough to identify a function in Erlang/Elixir)
?FUNCTION_ARITY
% The file name of the current module
?FILE
% The line number of the current line
?LINE
Source: http://erlang.org/doc/reference_manual/macros.html#id85926
To add to Aleksei's answer, here is an example of a macro, f_name(), that returns just the name of the function.
So if you use it inside a function, like this:
def my_very_important_function() do
  Logger.info("#{f_name()}: about to do important things")
  Logger.info("#{f_name()}: important things, all done")
end
you will get a log statement similar to this:
my_very_important_function: about to do important things
my_very_important_function: important things, all done
Details:
Here is the definition of the macro:
defmodule Helper do
  defmacro f_name() do
    elem(__CALLER__.function, 0)
  end
end
(__CALLER__ is just like __ENV__, but it's the environment of the caller.)
And here is how the macro can be used in a module:
defmodule ImportantCodes do
  require Logger
  import Helper, only: [f_name: 0]

  def my_very_important_function() do
    Logger.info("#{f_name()}: doing very important things here")
  end
end
