Boost-build - dependency on subproject target

I have a jamfile-based project where one of the build steps compiles a custom tool (called 'codegen') which I want to use in a later build step. The codegen tool is built in projects/codegen/Jamfile.jam relative to the root, and the executable target is ultimately declared with the line:
install codegen-tool : $(full-exe-target) : <location>$(install-dir) ;
In Jamroot.jam, I have the following:
rule codegen ( target : source : properties * )
{
    COMMAND on $(target) = projects/codegen//codegen-tool ;
    DEPENDS $(target) : projects/codegen//codegen-tool ;
}
actions codegen bind COMMAND
{
    $(COMMAND) $(<) $(>)
}
project.load projects/codegen//codegen-tool ;
local codegen-input = <blah> ;
local codegen-output = <blah> ;
make $(codegen-output) : $(codegen-input) : @codegen ;
alias codegen-output : $(codegen-output) ;
When I run the command "b2 codegen-output", I get the error:
don't know how to make project projects/codegen//codegen-tool
But running the command "b2 projects/codegen//codegen-tool" is successful. How come I'm not able to reference the codegen-tool target from Jamroot.jam?

The key problem is that the references in your codegen rule:
rule codegen ( target : source : properties * )
{
    COMMAND on $(target) = projects/codegen//codegen-tool ;
    DEPENDS $(target) : projects/codegen//codegen-tool ;
}
are to the meta-target rather than to a real target (i.e. a file target) generated by building the codegen-tool meta-target. The easy way to make such tool dependencies work is to use a feature on your make target to tell it the full path to the built tool, and the kind of feature to use for that is a "dependency" feature. For example, you would add something like this to your Jamroot:
import feature ;
feature.feature codegen : : dependency free ;
Then set and use that feature to refer to the codegen-tool:
project : requirements <codegen>projects/codegen//codegen-tool ;
There's not enough information in your question to answer with a full example, but you should consult the fully working built_tool example for the details of how the dependency feature works for the custom-built-tool use case.
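To make this concrete, here is a minimal, hypothetical sketch of how the make rule could then pull the tool path out of the properties instead of naming the meta-target (property.select and the grist-stripping $(x:G=) modifier are standard b2 facilities; consult the built_tool example for the authoritative pattern):

import property ;

rule codegen ( target : source : properties * )
{
    # The <codegen> dependency property carries the built tool, and
    # b2 already arranges for the tool to be built first, so no
    # explicit DEPENDS on the meta-target is needed.
    local tool = [ property.select <codegen> : $(properties) ] ;
    COMMAND on $(target) = $(tool:G=) ;
}

actions codegen bind COMMAND
{
    $(COMMAND) $(<) $(>)
}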

Related

Getting runtime path to a package in Bazel toolchain configuration files

What is the best way to refer to an external package's path in any arbitrary files processed by Bazel?
I'm trying to understand how Bazel preprocesses BUILD and .bzl files. I see instances where strings contain calls to package() and I am wondering how it works (and could not find any relevant documentation). Here is an example of this:
I have a toolchain whose BUILD file contains the following expression:
cc_toolchain_config(
    name = "cc-toolchain-config",
    abi_libc_version = "glibc_" + host_gcc8_bundle()["pkg_version"]["glibc"],
    abi_version = "gcc-" + host_gcc8_bundle()["version"],
    compiler = "gcc-" + host_gcc8_bundle()["version"],
    cpu = "x86_64",
    cxx_builtin_include_directories = [
        "%package(@host_gcc8_toolchain//include/c++/8)%",
        "%package(@host_gcc8_toolchain//lib64/gcc/x86_64-unknown-linux-gnu/8/include-fixed)%",
        "%package(@host_gcc8_kernel_headers//include)%",
        "%package(@host_gcc8_glibc//include)%",
    ],
    host_system_name = "x86_64-unknown-linux-gnu",
    target_libc = "glibc_" + host_gcc8_bundle()["pkg_version"]["glibc"],
    target_system_name = "x86_64-unknown-linux-gnu",
    toolchain_identifier = "host_linux_gcc8",
)
From my understanding, cxx_builtin_include_directories defines a list of strings to serve as the --sysroot option passed to GCC, as detailed in https://docs.bazel.build/versions/0.23.0/skylark/lib/cc_common.html. These strings are in the %sysroot% format.
Since package(@host_gcc8_toolchain//include/c++/8), for example, does not mean anything to GCC, Bazel has to somehow expand this expression to produce the actual path to the files included in the package before passing them to the compiler driver.
But how can it determine that this needs to be expanded and that it is not a regular string? So how does Bazel preprocess the BUILD file? Is it because of the % ... % pattern? Where is this documented?
Is "%package(@external_package//target)%" a pattern that can be used elsewhere? In any BUILD file? Where do I find Bazel documentation showing how this works?
These directives are expanded by cc_common.create_cc_toolchain_config_info within the cc_toolchain_config rule implementation, not by any sort of preprocessing of the BUILD file (i.e. "%package(@host_gcc8_glibc//include)%" is literally passed into the cc_toolchain_config rule). I'm not aware of these special expansions being completely documented anywhere but the source.
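As an illustration, here is a minimal, hypothetical sketch of the receiving side; the rule skeleton, attribute name, and literal values are assumptions, but it shows the point at which the still-literal %package(...)% strings are handed to cc_common.create_cc_toolchain_config_info:

# cc_toolchain_config.bzl (sketch, not the actual rule from the question)
def _impl(ctx):
    return cc_common.create_cc_toolchain_config_info(
        ctx = ctx,
        toolchain_identifier = "host_linux_gcc8",
        host_system_name = "x86_64-unknown-linux-gnu",
        target_system_name = "x86_64-unknown-linux-gnu",
        target_cpu = "x86_64",
        target_libc = "glibc_2.28",
        compiler = "gcc-8",
        abi_version = "gcc-8",
        abi_libc_version = "glibc_2.28",
        # The %package(...)% strings arrive here verbatim; expansion
        # happens inside create_cc_toolchain_config_info, not earlier.
        cxx_builtin_include_directories = ctx.attr.cxx_builtin_include_directories,
    )

cc_toolchain_config = rule(
    implementation = _impl,
    attrs = {"cxx_builtin_include_directories": attr.string_list()},
    provides = [CcToolchainConfigInfo],
)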

How can I use the JAR tool with Bazel v0.19+?

Starting with Bazel v0.19, if you have Starlark (formerly known as "Skylark") code that references @bazel_tools//tools/jdk:jar, you see messages like this at build time:
WARNING: <trimmed-path>/external/bazel_tools/tools/jdk/BUILD:79:1: in alias rule @bazel_tools//tools/jdk:jar: target '@bazel_tools//tools/jdk:jar' depends on deprecated target '@local_jdk//:jar': Don't depend on targets in the JDK workspace; use @bazel_tools//tools/jdk:current_java_runtime instead (see https://github.com/bazelbuild/bazel/issues/5594)
I think I could make things work with @bazel_tools//tools/jdk:current_java_runtime if I wanted access to the java command, but I'm not sure what I'd need to do to get the jar tool to work. The contents of the linked GitHub issue didn't seem to address this particular problem.
I stumbled across a commit to Bazel that makes a similar adjustment to the Starlark java rules. It uses the following pattern: (I've edited the code somewhat)
# in the rule attrs:
"_jdk": attr.label(
    default = Label("//tools/jdk:current_java_runtime"),
    providers = [java_common.JavaRuntimeInfo],
),

# then in the rule implementation:
java_runtime = ctx.attr._jdk[java_common.JavaRuntimeInfo]
jar_path = "%s/bin/jar" % java_runtime.java_home
ctx.action(
    inputs = ctx.files._jdk + other_inputs,  # the JDK files plus the action's own inputs
    outputs = [deploy_jar],
    command = "%s cmf %s" % (jar_path, input_files),
)
Additionally, java is available at str(java_runtime.java_executable_exec_path) and javac at "%s/bin/javac" % java_runtime.java_home.
See also a pull request with a simpler example.
Because my reference to the jar tool is inside a genrule within a top-level macro, rather than in a rule, I was unable to use the approach from Rodrigo's answer. I instead explicitly referenced the current_java_runtime toolchain and was then able to use the JAVABASE make variable as the base path for the jar tool.
native.genrule(
    name = genjar_rule,
    srcs = [<rules that create files being jar'd>],
    cmd = "some_script.sh $(JAVABASE)/bin/jar $@ $(SRCS)",
    tools = ["some_script.sh", "@bazel_tools//tools/jdk:current_java_runtime"],
    toolchains = ["@bazel_tools//tools/jdk:current_java_runtime"],
    outs = [<some outputs>],
)

How to invoke CROSSTOOL tools from Bazel macros / rules?

I'm building ARM Cortex-M firmware with Bazel using a custom CROSSTOOL. I'm successfully building ELF files and manually objcopying them to binary files with the usual:
path/to/my/objcopy -O binary hello.elf hello.bin
I want to make a Bazel macro or rule called cc_firmware that:
Adds the -Wl,-Map=hello.map flags to generate a mapfile
Changes the output elf name from hello to hello.elf
Invokes path/to/my/objcopy to convert the elf to a bin.
I don't know how to get the name of a CROSSTOOL tool (objcopy) to invoke it, and it feels wrong to have the rule know the path to the tool executable.
Is there a way to use the objcopy that I've already told Bazel about in my CROSSTOOL file?
You can actually access this from a custom rule. Basically you need to tell Bazel that you want access to the cpp configuration information (fragments = ["cpp"]) and then access its path via ctx.fragments.cpp.objcopy_executable, e.g.:
def _impl(ctx):
    print("path: {}".format(ctx.fragments.cpp.objcopy_executable))
    # TODO: actually do something with the path...

cc_firmware = rule(
    implementation = _impl,
    fragments = ["cpp"],
    attrs = {
        "src": attr.label(allow_single_file = True),
        "map": attr.label(allow_single_file = True),
    },
    outputs = {"elf": "%{name}.elf"},
)
Then create the output you want with something like (untested):
def _impl(ctx):
    src = ctx.attr.src.files.to_list()[0]
    m = ctx.attr.map.files.to_list()[0]
    ctx.action(
        command = "{objcopy} -Wl,-Map={map} -O binary {elf_out} {cc_bin}".format(
            objcopy = ctx.fragments.cpp.objcopy_executable,
            map = m.path,  # pass the file's path, not the File object
            elf_out = ctx.outputs.elf.path,
            cc_bin = src.path,
        ),
        outputs = [ctx.outputs.elf],
        inputs = [src, m],
    )

How to get a field's type using the CDT parser

I'm trying to extract information from C++ source code. One piece of that is a field's type.
Given source code like the following, I want to extract the type of info at the point where info.call() is called:
Info info;
//skip
info.call(); //<- from here
By making a visitor that visits IASTName nodes, I tried to extract the type info as follows.
public class CDTVisitor extends ASTVisitor {
    public CDTVisitor(boolean visitNodes) {
        super(true);
    }

    public int visit(IASTName node) {
        if (node.resolveBinding().getName().toString().equals("info"))
            System.out.println(((IField) node.getBinding()).getType());
            // this does not work properly:
            // the result is "org.eclipse.cdt.internal.core.dom.parser.ProblemType#86be70a"
        return 3;
    }
}
Assuming the code is in fact valid, a variable's type resolving to a ProblemType is an indication of a configuration problem in whatever tool or plugin is running this code, or in the project/workspace containing the code on which it is run.
In this case, the type of the variable info is Info, which is presumably a class or structure type, or a typedef. To resolve it correctly, CDT needs to be able to see the declaration of this type.
If this type is not declared in the same file that's being analyzed, but rather in a header file included by that file, CDT needs to use the project's index to find the declaration. That means:
The AST must be index-based. For example, if using ITranslationUnit.getAST to create the AST, the overload that takes an IIndex parameter must be used, and a non-null argument must be provided for it.
Since an IIndex is associated with a CDT project, the code being analyzed needs to be part of a CDT project, and the project needs to be indexed.
In order for the indexer to resolve #include directives correctly, the project's include paths need to be configured correctly, so that the indexer can actually find the right header files to parse.
Any one of these not being the case can lead to a type resolving to a ProblemType.
Self response.
The reason I couldn't get a binding object was the type of the AST.
When parsing C++ source code, I should have used ICPPASTTranslationUnit. It doesn't show in the code above, but I was using IASTTranslationUnit as the AST's type.
After switching from IASTTranslationUnit to ICPPASTTranslationUnit, the problem was solved.
Yes, I figured it out! Here is the entire code, which indexes all files in the "src" folder of a C++ project and prints the resolved type binding for every expression, including the return values of low-level APIs such as memcpy. Note that the project variable in the following code is created by programmatically importing an existing, manually configured C++ project. I often create an empty C++ project manually and import it programmatically as a general project (once imported, Eclipse automatically detects the project type and completes the relevant C++ project configuration). This is much more convenient than creating and configuring a C++ project from scratch programmatically. When importing the project, it is better not to copy the project or containment structures into the workspace, because doing so can lead to infinitely copying the same project into subfolders (infinite folder depth).
The code works in the Eclipse 2021-12 release. I downloaded Eclipse for C++, installed the plugin-development and JDT plugins, then created an Eclipse plugin project and extended the "org.eclipse.core.runtime.applications" extension point. In other words, it is an Eclipse application plugin project that can use nearly all features of Eclipse without starting Eclipse's graphical interface (UI). You should add all CDT-related non-UI plugins as dependencies, because new versions of Eclipse no longer automatically add missing plugins.
ICProject cproject = CoreModel.getDefault().getCModel().getCProject(project.getName());
// this obtains the index for the entire project.
IIndex index = CCorePlugin.getIndexManager().getIndex(cproject);
IFolder folder = project.getFolder("src");
IResource[] rcs = folder.members();
// iterate all source files in the src folder and visit all expressions
// to print the resolved type bindings.
for (IResource rc : rcs) {
    if (rc instanceof IFile) {
        IFile f = (IFile) rc;
        ITranslationUnit tu = (ITranslationUnit) CoreModel.getDefault().create(f);
        index.acquireReadLock(); // we need a read-lock on the index
        ICPPASTTranslationUnit ast = null;
        try {
            ast = (ICPPASTTranslationUnit) tu.getAST(index, ITranslationUnit.AST_SKIP_INDEXED_HEADERS);
        } finally {
            index.releaseReadLock();
        }
        if (ast != null) {
            // true: set the shouldVisit* flags so expressions are visited.
            ast.accept(new ASTVisitor(true) {
                @Override
                public int visit(IASTExpression expression) {
                    // get the resolved type binding of the expression.
                    IType etp = expression.getExpressionType();
                    System.out.println("IASTExpression type:" + etp + "#expr_str:" + expression.toString());
                    return super.visit(expression);
                }
            });
        }
    }
}

Get current script path or current project path using new test runner

I am porting old VM unittest files to the new test package. Some of them rely on input files in subdirectories of my test folder. Previously, I used Platform.script to find the location of such files. This works fine when using
$ dart test/my_test.dart
However using
$ pub run test
this now points to a temp folder (tmp/dart_test_xxxx/runInIsolate.dart), and I am unable to locate my test input files anymore. I cannot rely on the current working directory either, as I might run the tests from a different directory.
Is there a way to find the location of my_test.dart (or even the project root path), from which I could derive the locations of my files?
This is a current limitation of pub run.
What I currently do when I run into such requirements is to set an environment variable and read it from within the tests.
I have it set in my OS, and on other systems I set it from grinder before launching the tests.
This also works nicely from WebStorm, where launch configurations allow specifying environment variables.
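For example, a minimal sketch of the reading side (the variable name TEST_DATA_DIR is hypothetical):

import 'dart:io';

// Read the test-data directory from an environment variable that is
// exported before running `pub run test` (hypothetical variable name).
String get testDataDir {
  final dir = Platform.environment['TEST_DATA_DIR'];
  if (dir == null) {
    throw StateError('TEST_DATA_DIR is not set');
  }
  return dir;
}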
This might be related http://dartbug.com/21020
I have the following workaround in the meantime. It is an ugly hack that gets me the directory of the current test script whether I run it directly or with pub run test. It will definitely break if anything in the implementation changes, but I needed this desperately...
library test_utils.test_script_dir;

import 'dart:io';
import 'package:path/path.dart';

// temp workaround using the test package
String get testScriptDir {
  String scriptFilePath = Platform.script.toFilePath();
  print(scriptFilePath);
  if (scriptFilePath.endsWith("runInIsolate.dart")) {
    // Let's look for this line:
    // import "file:///path_to_my_test/test_test.dart" as test;
    String importLineBegin = 'import "file://';
    String importLineEnd = '" as test;';
    int importLineBeginLength = importLineBegin.length;
    String scriptContent = new File.fromUri(Platform.script).readAsStringSync();
    int beginIndex = scriptContent.indexOf(importLineBegin);
    if (beginIndex > -1) {
      int endIndex = scriptContent.indexOf(importLineEnd, beginIndex + importLineBeginLength);
      if (endIndex > -1) {
        scriptFilePath = scriptContent.substring(beginIndex + importLineBegin.length, endIndex);
      }
    }
  }
  return dirname(scriptFilePath);
}
