I have been unable to find any examples of how to create an iOS framework using Bazel. There is an ios_framework rule, but being new to the build system, I am unsure how to use it.
Is it possible to create a framework, and if so, how would I go about doing so?
Would this work for you? I've left all of the optionals off to keep things simpler:
BUILD:
ios_framework(
    name = "framework",
    binary = ":framework_binary",
)

ios_framework_binary(
    name = "framework_binary",
    srcs = [
        "frameworksource.m",
    ],
)
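Assuming this BUILD file sits at the root of your workspace (that location is an assumption, not something from the question), you would then build the framework with:
bazel build //:framework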
I have a code analysis tool that I'd like to run for each cc_library (and cc_binary, silently implied for the rest of the question). The tool has a CLI interface taking:
A tool project file
Compiler specifics, such as type sizes, built-ins, macros etc.
Files to analyze
File path, includes, defines
Rules to (not) apply
Files to add to the project
Options for synchronizing files with build data
JSON compilation database
Parse build log
Analyze and generate analysis report
I've been looking at how to integrate this in Bazel so that the files to analyze AND the associated includes and defines are updated automatically, and so that any analysis result is properly cached. Generating a JSON compilation database (using a third-party lib) or parsing the build log both require separate runs and updating the source tree; for this question I consider those workarounds I'm trying to remove.
What I've tried so far is using aspects, adding an analysis aspect to any library. The general idea is having a base project file holding the library-invariant configuration, appending the cc_library files to analyze, and finally triggering an analysis that generates the report. But I'm having trouble executing on this, and I'm not sure it's even possible.
This is my aspect implementation so far, trying to iterate through cc_library attributes and target compilation context:
def _print_aspect_impl(target, ctx):
    # Make sure the rule has a srcs attribute
    if hasattr(ctx.rule.attr, 'srcs'):
        # Iterate through the files
        for src in ctx.rule.attr.srcs:
            for f in src.files.to_list():
                if f.path.endswith(".c"):
                    print("file: ")
                    print(f.path)
                    print("includes: ")
                    print(target[CcInfo].compilation_context.includes)
                    print("quote_includes: ")
                    print(target[CcInfo].compilation_context.quote_includes)
                    print("system_includes: ")
                    print(target[CcInfo].compilation_context.system_includes)
                    print("defines: ")
                    print(ctx.rule.attr.defines)
                    print("local_defines: ")
                    print(ctx.rule.attr.local_defines)
                    print("")  # empty line to separate file prints
    return []
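For completeness, the implementation function is registered as an aspect and attached roughly like this (the file path and target name here are just placeholders):

# tools/print.bzl (hypothetical location)
print_aspect = aspect(
    implementation = _print_aspect_impl,
    attr_aspects = ["deps"],  # also visit the dependencies of each target
)

# applied ad hoc from the command line, e.g.:
#   bazel build //some:lib --aspects=//tools:print.bzl%print_aspect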
What I cannot figure out is how to get ALL includes and defines used when compiling the library:
From libraries depended upon, recursively
copts, defines, includes
From the toolchain
features, cxx_builtin_include_directories
Questions:
How do I get the missing flags, continuing with the presented technique?
Can I somehow retrieve the compile action command string?
That could then be appended to the analysis project using the build log API
Some other solution entirely?
Perhaps there is something one can do with cc_toolchain instead of aspects...
Aspects are the right tool to do that. The information you're looking for is contained in the providers, fragments, and toolchains of the cc_* rules the aspect has access to. Specifically, CcInfo has the target-specific pieces, the cpp fragment has the pieces configured from the command-line flags, and CcToolchainInfo has the parts from the toolchain.
CcInfo in target tells you if the current target has that provider, and target[CcInfo] accesses it.
The rules_cc my_c_compile example is where I usually look for pulling out a complete compiler command based on a CcInfo. Something like this should work from the aspect:
load("#rules_cc//cc:action_names.bzl", "C_COMPILE_ACTION_NAME")
load("#rules_cc//cc:toolchain_utils.bzl", "find_cpp_toolchain")
[in the impl]:
cc_toolchain = find_cpp_toolchain(ctx)
feature_configuration = cc_common.configure_features(
    ctx = ctx,
    cc_toolchain = cc_toolchain,
    requested_features = ctx.features,
    unsupported_features = ctx.disabled_features,
)
c_compiler_path = cc_common.get_tool_for_action(
    feature_configuration = feature_configuration,
    action_name = C_COMPILE_ACTION_NAME,
)
[in the loop]
c_compile_variables = cc_common.create_compile_variables(
    feature_configuration = feature_configuration,
    cc_toolchain = cc_toolchain,
    user_compile_flags = ctx.fragments.cpp.copts + ctx.fragments.cpp.conlyopts,
    source_file = src.path,
)
command_line = cc_common.get_memory_inefficient_command_line(
    feature_configuration = feature_configuration,
    action_name = C_COMPILE_ACTION_NAME,
    variables = c_compile_variables,
)
env = cc_common.get_environment_variables(
    feature_configuration = feature_configuration,
    action_name = C_COMPILE_ACTION_NAME,
    variables = c_compile_variables,
)
That example only handles C files (not C++); you'll have to change the action names and which parts of the fragment it uses accordingly.
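For C++ sources, the analogous pieces would look something like this (a sketch; the symbols come from the same rules_cc files loaded above):

load("@rules_cc//cc:action_names.bzl", "CPP_COMPILE_ACTION_NAME")

cxx_compile_variables = cc_common.create_compile_variables(
    feature_configuration = feature_configuration,
    cc_toolchain = cc_toolchain,
    user_compile_flags = ctx.fragments.cpp.copts + ctx.fragments.cpp.cxxopts,
    source_file = src.path,
)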
You have to add toolchains = ["@bazel_tools//tools/cpp:toolchain_type"] and fragments = ["cpp"] to the aspect invocation to use those. Also see the note in find_cc_toolchain.bzl about the _cc_toolchain attr if you're using legacy toolchain resolution.
The information coming from the rules and the toolchain is already structured. Depending on what your analysis tool wants, it might make more sense to extract it directly instead of generating a full command line. Most of the provider, fragment, and toolchain contents are well-documented if you want to look at those directly.
You might pass required_providers = [CcInfo] to aspect to limit propagation to rules which include it, depending on how you want to manage propagation of your aspect.
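Putting those declaration-level pieces together, the aspect might be declared roughly like this (a sketch; the aspect and implementation names are illustrative, and the _cc_toolchain attr is only needed under legacy toolchain resolution):

cc_analysis_aspect = aspect(
    implementation = _cc_analysis_aspect_impl,
    attr_aspects = ["deps"],
    attrs = {
        "_cc_toolchain": attr.label(
            default = Label("@bazel_tools//tools/cpp:current_cc_toolchain"),
        ),
    },
    fragments = ["cpp"],
    toolchains = ["@bazel_tools//tools/cpp:toolchain_type"],
    required_providers = [CcInfo],
)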
The Integrating with C++ Rules documentation page also has some more info.
My project depends on some external libraries which I have to bazelfy myself. Thus, my WORKSPACE:
http_archive(
    name = "external_lib_component1",
    build_file = "//third_party:external_lib_component1.BUILD",
    sha256 = "xxx",
    urls = ["https://example.org/external_lib_component1.tar.gz"],
)

http_archive(
    name = "external_lib_component2",
    build_file = "//third_party:external_lib_component2.BUILD",
    sha256 = "yyy",
    urls = ["https://example.org/external_lib_component2.tar.gz"],
)
...
The two entries above are similar, and external_lib_component{1, 2}.BUILD share a lot of code.
What is the best way to share code (macros) between them?
Just putting a shared_macros.bzl file into third_party/ won't work, because it will not be copied into
the archive location on build (only the build_file is copied).
You can place a .bzl file such as ./third_party/shared_macros.bzl into your tree, as you've mentioned.
Then in the //third_party:external_lib_component1.BUILD and //third_party:external_lib_component2.BUILD you provide for your external dependencies, you can load symbols from that shared file using:
load("#//third_party:shared_macros.bzl", ...)
Labels starting with @// refer to packages from the main repository, even when used in an external dependency (whereas labels starting with plain // would be rooted in the external repository). You can check the docs on labels, in particular the last paragraph.
Alternatively, you can also refer to the "parent" project by its name. If in your WORKSPACE file you have:
workspace(name = "parent")
You could say:
load("#parent//third_party:shared_macros.bzl", ...)
Note: in Bazel versions prior to 2.0.0 you might want to add --incompatible_remap_main_repo if you mix both of the above approaches in your project.
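For concreteness, a sketch of what the shared file and one of the BUILD files might contain (the macro name and library contents here are made up, not from the question):

# third_party/shared_macros.bzl
def external_lib(name):
    native.cc_library(
        name = name,
        srcs = native.glob(["src/**/*.c"]),
        hdrs = native.glob(["include/**/*.h"]),
        visibility = ["//visibility:public"],
    )

# third_party/external_lib_component1.BUILD
load("@//third_party:shared_macros.bzl", "external_lib")

external_lib(name = "external_lib_component1")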
I am writing a sample C++ project that uses Bazel to serve as an example idiom for other collaborators to follow.
Here is the repository: https://github.com/thinlizzy/bazelexample
I am interested to know if I am doing it 'right', more specifically regarding how this file picks particular implementations: https://github.com/thinlizzy/bazelexample/blob/38cc07931e58ff5a888dd6a83456970f76d7e5b3/demo/BUILD
cc_library(
    name = "demo",
    srcs = ["demo.cpp"],
    deps = [
        "//example:frontend",
    ],
)

cc_binary(
    name = "main_win",
    deps = [
        ":demo",
        "//example:impl_win",
    ],
)

cc_binary(
    name = "main_linux",
    deps = [
        ":demo",
        "//example:impl_linux",
    ],
)
Is this following a correct/expected idiom for Bazel projects? I am already doing it this way for other projects, concentrating all the platform-specific dependencies in separate targets so that the binaries just depend on them.
Someone on the bazel-discuss list told me to use select instead, but my attempts failed to 'detect' the operating system. I'm sure I did something wrong, but the lack of info and examples doesn't tell me much about how to use it properly.
@bazel_tools contains predefined platform conditions:
$ bazel query @bazel_tools//src/conditions:all
@bazel_tools//src/conditions:windows_msys
@bazel_tools//src/conditions:windows_msvc
@bazel_tools//src/conditions:windows
@bazel_tools//src/conditions:remote
@bazel_tools//src/conditions:host_windows_msys
@bazel_tools//src/conditions:host_windows_msvc
@bazel_tools//src/conditions:host_windows
@bazel_tools//src/conditions:freebsd
@bazel_tools//src/conditions:darwin_x86_64
@bazel_tools//src/conditions:darwin
You can use them directly in the BUILD file:
cc_library(
    name = "impl",
    srcs = ["Implementation.cpp"] + select({
        "@bazel_tools//src/conditions:windows": ["ImplementationWin.cpp"],
        "@bazel_tools//src/conditions:darwin": ["ImplementationMacOS.cpp"],
        "//conditions:default": ["ImplementationLinux.cpp"],
    }),
    # .. same for hdrs and data
)

cc_binary(
    name = "demo",
    deps = [":impl"],
)
See the documentation for select for details on the syntax.
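Applied to the question's layout, the per-platform binaries could likewise be collapsed into a single target by selecting the implementation dependency (a sketch reusing the question's target names):

cc_binary(
    name = "main",
    deps = [":demo"] + select({
        "@bazel_tools//src/conditions:windows": ["//example:impl_win"],
        "//conditions:default": ["//example:impl_linux"],
    }),
)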
Add a .bazelrc to your project. Add the lines build:vs2019 --cxxopt=/std:c++14 and build:gcc --cxxopt=-std=c++14. Build your code with bazel build --config=vs2019 //... or bazel build --config=gcc //....
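In other words, the .bazelrc would contain something like:

build:vs2019 --cxxopt=/std:c++14
build:gcc --cxxopt=-std=c++14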
@Vertexwahn's answer caused some confusion on my end, so I hope this answer helps clarify a bit. While his answer does not directly tie into the question, it may be of use to others trying to build on entirely different platforms without file-specific inclusions.
Here is a link to where I answered that particular question: How do I specify portable build configurations for different operating systems for Bazel?
Starting with Bazel v0.19, if you have Starlark (formerly known as "Skylark") code that references @bazel_tools//tools/jdk:jar, you see messages like this at build time:
WARNING: <trimmed-path>/external/bazel_tools/tools/jdk/BUILD:79:1: in alias rule @bazel_tools//tools/jdk:jar: target '@bazel_tools//tools/jdk:jar' depends on deprecated target '@local_jdk//:jar': Don't depend on targets in the JDK workspace; use @bazel_tools//tools/jdk:current_java_runtime instead (see https://github.com/bazelbuild/bazel/issues/5594)
I think I could make things work with @bazel_tools//tools/jdk:current_java_runtime if I wanted access to the java command, but I'm not sure what I'd need to do to get the jar tool to work. The contents of the linked GitHub issue didn't seem to address this particular problem.
I stumbled across a commit to Bazel that makes a similar adjustment to the Starlark java rules. It uses the following pattern: (I've edited the code somewhat)
# in the rule attrs:
"_jdk": attr.label(
    default = Label("//tools/jdk:current_java_runtime"),
    providers = [java_common.JavaRuntimeInfo],
),

# then in the rule implementation:
java_runtime = ctx.attr._jdk[java_common.JavaRuntimeInfo]
jar_path = "%s/bin/jar" % java_runtime.java_home
ctx.action(
    inputs = ctx.files._jdk + other_inputs,
    outputs = [deploy_jar],
    command = "%s cmf %s" % (jar_path, input_files),
)
Additionally, java is available at str(java_runtime.java_executable_exec_path) and javac at "%s/bin/javac" % java_runtime.java_home.
See also, a pull request with a simpler example.
Because my reference to the jar tool is inside a genrule within a top-level macro, rather than a rule, I was unable to use the approach from Rodrigo's answer. I instead explicitly referenced the current_java_runtime toolchain and was then able to use the JAVABASE make variable as the base path for the jar tool.
native.genrule(
    name = genjar_rule,
    srcs = [<rules that create files being jar'd>],
    cmd = "some_script.sh $(JAVABASE)/bin/jar $@ $(SRCS)",
    tools = ["some_script.sh", "@bazel_tools//tools/jdk:current_java_runtime"],
    toolchains = ["@bazel_tools//tools/jdk:current_java_runtime"],
    outs = [<some outputs>],
)
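For reference, $(JAVABASE) is the "Make" variable supplied by the current_java_runtime target listed in toolchains above; it expands to the selected runtime's java_home, so $(JAVABASE)/bin/jar points at the same jar binary as the JavaRuntimeInfo approach in the previous answer.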
I'm trying to take a non-Bazel-produced zipfile, modify some files in it while leaving most of them alone, and then ultimately produce a new tarball with the ~original content (plus my modifications).
I'm having trouble specifying my rules in a clean way and it'd be great if there was a suggestion on how to do it.
I'm importing the original zip file via the 'new_http_archive' WORKSPACE rule. This works pretty well. I put the build file in a package one level under the root. Let's call this 'foo_repackage'.
In foo_repackage/BUILD.root_archive:
package(default_visibility = ["//visibility:public"])

filegroup(
    name = "all_files",
    srcs = glob(
        ["**"],
        exclude = ["*", "share/doc/api/**"],
    ),
)
The bigger issue is in the foo_repackage/BUILD file: I'd like to take all of the files from the all_files group above, except for a few of them that I will modify. I can't see how to do this easily. It seems like for every file that I want to modify I have to exclude it from the above glob and make a new rule that specifies that file, which means I have to keep modifying the global all_files exclude list.
If I could create a new filegroup that was all of the above files with some files excluded, that'd be ideal.
I should mention that the last step is of course to use pkg_tar to repackage the result - this is in foo_repackage/BUILD:
pkg_tar(
    name = "OutputTarball",
    files = ["@root_archive//:all_files"],
    deps = [":layers_of_modified_files"],
    strip_prefix = "/../root_archive",
)
Does anyone have a better way to do this?
Thanks, Sean
Could you use a variable like:
MODIFIABLE_FILES = [
    "some/file",
    "another/file",
    ...
]

filegroup(
    name = "static-files",
    srcs = glob(["**"], exclude = MODIFIABLE_FILES),
)

filegroup(
    name = "modifiable-files",
    srcs = MODIFIABLE_FILES,
)
Then the list of static files and modifiable files will be kept in sync and you'll get a build error if you accidentally specify a non-existent modifiable file.
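With that split, your pkg_tar could pull the untouched files from the static-files group and layer the modified versions on top, along the lines of (a sketch reusing the names from your question):

pkg_tar(
    name = "OutputTarball",
    files = ["@root_archive//:static-files"],
    deps = [":layers_of_modified_files"],
    strip_prefix = "/../root_archive",
)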