Bazel fails to build project when using the maven_jar rule

I've done some refactoring so that this setup function is called from my WORKSPACE file:
load("#io_bazel_rules_kotlin//kotlin:kotlin.bzl", "kotlin_repositories", "kt_register_toolchains")
load("#build_bazel_rules_swift//swift:repositories.bzl", "swift_rules_dependencies")
load("#build_bazel_rules_apple//apple:repositories.bzl", "apple_rules_dependencies")
load("#bazel_tools//tools/build_defs/repo:maven_rules.bzl", "maven_jar")
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("#io_bazel_rules_go//go:def.bzl", "go_rules_dependencies", "go_register_toolchains")
load("#bazel_gazelle//:deps.bzl", "gazelle_dependencies")
def setup():
swift_rules_dependencies()
apple_rules_dependencies()
kotlin_repositories()
kt_register_toolchains()
go_rules_dependencies()
go_register_toolchains()
gazelle_dependencies()
maven_jar(name = "retrofit",artifact = "com.squareup.retrofit2:retrofit:2.3.0")
maven_jar(name = "retrofit_converter_jackson", artifact = "com.squareup.retrofit2:converter-jackson:2.3.0")
maven_jar(name = "jackson_core", artifact = "com.fasterxml.jackson.core:jackson-core:2.9.4")
maven_jar(name = "jackson_annotations", artifact = "com.fasterxml.jackson.core:jackson-annotations:2.9.4")
maven_jar(name = "okhttp",artifact = "com.squareup.okhttp3:okhttp:3.6.0")
maven_jar(name = "jackson_databind", artifact = "com.fasterxml.jackson.core:jackson-databind:2.9.4")
maven_jar(name = "jackson_datatype_guava", artifact = "com.fasterxml.jackson.datatype:jackson-datatype-guava:2.9.4")
maven_jar(name = "jackson_module_kotlin", artifact = "com.fasterxml.jackson.module:jackson-module-kotlin:2.9.4")
maven_jar(name = "google_collections", artifact = "com.google.collections:google-collections:1.0")
maven_jar(name = "rxjava", artifact = "io.reactivex.rxjava2:rxjava:2.1.13")
It builds on my local machine, but on the CI server I get the error "jackson_databind requires mvn as a dependency. Please check your PATH."
Before the refactoring, i.e. when all of the above was done directly inside the WORKSPACE file, it worked fine (although the maven_jar rule didn't need to be loaded then, which I figure is because it's a native workspace rule).
What could be the issue and how do I solve it?

The native maven_jar is being deprecated; use java_import_external or jvm_maven_import_external instead. Note that the maven_jar you load from maven_rules.bzl is not the native rule but a Starlark one that shells out to a locally installed mvn, which is why the build now fails on a CI machine without Maven on the PATH.
From the GitHub issue tracking the deprecation of this rule:
load("#bazel_tools//tools/build_defs/repo:jvm.bzl", "jvm_maven_import_external")
jvm_maven_import_external(
name = "truth",
artifact = "com.google.truth:truth:0.30",
artifact_sha256 = "59721f0805e223d84b90677887d9ff567dc534d7c502ca903c0c2b17f05c116a",
server_urls = ["http://central.maven.org/maven2"],
licenses = ["notice"], # Apache 2.0
)
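Applied to one of the artifacts from the question, the conversion would look roughly like this (the sha256 below is a placeholder; substitute the real checksum of the artifact):

load("@bazel_tools//tools/build_defs/repo:jvm.bzl", "jvm_maven_import_external")

jvm_maven_import_external(
    name = "retrofit",
    artifact = "com.squareup.retrofit2:retrofit:2.3.0",
    artifact_sha256 = "<sha256 of retrofit-2.3.0.jar>",  # placeholder, fill in the real checksum
    server_urls = ["https://repo1.maven.org/maven2"],
    licenses = ["notice"],  # Apache 2.0
)

Unlike the Starlark maven_jar, this downloads the jar directly from the given server_urls, so no local Maven installation is needed on the CI machine.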

Related

Emulate http_file but for a file from another repository

I'm including a repository that has an extra_deps rule of the form:
maybe(
    http_file,
    name = "external_dependency",
    downloaded_file_path = "foo.h",
    sha256 = "<some_sha>",
    urls = ["https://example.com/foo.h"],
)
If I have an existing repository, foo_repo, that provides foo.h, how can I substitute the target for it in place of external_dependency? http_file apparently provides @external_dependency//file, so I can't simply define an alias.
Using https://github.com/bazelbuild/bazel/blob/master/tools/build_defs/repo/http.bzl as a reference, you can define a custom repository rule that provides @external_dependency//file. For example:
def _repository_file(ctx):
    ctx.file("WORKSPACE", "workspace(name = \"{name}\")".format(name = ctx.name))
    ctx.file("file/BUILD.bazel", """
filegroup(
    name = "file",
    srcs = ["{}"],
    visibility = ["//visibility:public"],
)
""".format(ctx.attr.source))

repository_file = repository_rule(
    attrs = {"source": attr.label(mandatory = True, allow_single_file = True)},
    implementation = _repository_file,
    doc = """Analogue of http_file, but for a file in another repository.

Usage:

    repository_file(
        name = "special_file",
        source = "@other_repo//path/to:special_file.txt",
    )
""",
)
Now use:
repository_file(
    name = "external_dependency",
    source = "@foo_repo//path/to:foo.h",
)
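A possible variant (an untested sketch): if the source label points at a plain source file, the repository rule could instead symlink the file into its own repository, so that @external_dependency//file is backed by a local file rather than a cross-repository label:

def _repository_file_symlink(ctx):
    ctx.file("WORKSPACE", "workspace(name = \"{name}\")".format(name = ctx.name))
    # repository_ctx.path() only resolves source files, not generated ones.
    src = ctx.path(ctx.attr.source)
    ctx.symlink(src, "file/" + src.basename)
    ctx.file("file/BUILD.bazel", """
filegroup(
    name = "file",
    srcs = ["{}"],
    visibility = ["//visibility:public"],
)
""".format(src.basename))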

How to properly use bazel transitions for multiarch build

I'm trying to define a bazel rule that will build 2 different cc_binaries for 2 different platforms with just 1 bazel build invocation. I'm struggling with how to define the transition properly and attach it.
I would like ultimately to be able to do something like:
cc_binary(
    name = "binary_platform_a",
    ...
)

cc_binary(
    name = "binary_platform_b",
    ...
)

my_custom_multi_arch_rule(
    name = "multiarch_build",
    binary_a = ":binary_platform_a",
    binary_b = ":binary_platform_b",
    ...
)
I have deduced from the Bazel docs (https://bazel.build/rules/config#user-defined-transitions) that I need to do something like the following in a defs.bzl:
def _impl(settings, attr):
    _ignore = (settings, attr)
    return {
        "Platform A": {"//command_line_option:platforms": "platform_a"},
        "Platform B": {"//command_line_option:platforms": "platform_b"},
    }

multi_arch_transition = transition(
    implementation = _impl,
    inputs = [],
    outputs = ["//command_line_option:platforms"],
)

def _rule_impl(ctx):
    # How to implement this?
    pass

my_custom_multi_arch_rule = rule(
    implementation = _rule_impl,
    attrs = {
        "binary_a": attr.label(cfg = multi_arch_transition),
        "binary_b": attr.label(cfg = multi_arch_transition),
        ...
    },
)
In the best-case final scenario I would be able to issue:
bazel build //path/to/my:multiarch_build
and it successfully builds my 2 separate binaries for their respective platforms.
Use ctx.split_attr.<attr name>[<transition key>] to get the configured Target object representing a particular arch configuration of a binary.
def _rule_impl(ctx):
    binary_a_platform_a = ctx.split_attr.binary_a["Platform A"]
    binary_b_platform_b = ctx.split_attr.binary_b["Platform B"]
    # ctx.split_attr values are Targets; collect their output files.
    return [DefaultInfo(files = depset(transitive = [
        binary_a_platform_a[DefaultInfo].files,
        binary_b_platform_b[DefaultInfo].files,
    ]))]
https://bazel.build/rules/config#accessing-attributes-with-transitions
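One detail the snippets above omit: a rule that attaches a user-defined transition must also declare the transition allowlist attribute, or Bazel refuses to analyze it. Roughly, with the attribute name and default label as documented for recent Bazel versions:

my_custom_multi_arch_rule = rule(
    implementation = _rule_impl,
    attrs = {
        "binary_a": attr.label(cfg = multi_arch_transition),
        "binary_b": attr.label(cfg = multi_arch_transition),
        # Required boilerplate for any rule using a user-defined transition.
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
    },
)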

How to resolve paths relative to workspace in Bazel?

local_repository and new_local_repository both take paths as arguments, and these paths are resolved relative to the workspace.
local_repository(
    name = "my-ssl",
    path = "../ssl",  # relative to workspace
)
I am trying to get similar behavior for a custom repository rule, but I can't figure it out.
It seems that all the repository_ctx functions operate relative to the repository, not the workspace.
my_repository(
    name = "my-ssl",
    path = "../ssl",  # how can my rule resolve this path?
)
How can I resolve path arguments relative to the workspace, like the built-in repository rules do?
One option could be to use a Label("//:WORKSPACE") to get the workspace dir and compose it with your relative path:
def _impl(repository_ctx):
    workspace_dir = repository_ctx.path(Label("//:WORKSPACE")).dirname
    repo_dir_str = "/".join([str(workspace_dir), repository_ctx.attr.path])
    print(repo_dir_str)
    repo_dir = repository_ctx.path(repo_dir_str)
    print(repo_dir)
    print(repo_dir.exists)

my_repository = repository_rule(
    implementation = _impl,
    attrs = {
        "path": attr.string(mandatory = True),
    },
)
The workspace could also be an attribute, if needed:
my_repository = repository_rule(
    implementation = _impl,
    attrs = {
        "path": attr.string(mandatory = True),
        "workspace": attr.label(default = Label("//:WORKSPACE")),
    },
)
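With the workspace attribute in place, the implementation can resolve the path against it instead of hard-coding Label("//:WORKSPACE") — a minimal sketch:

def _impl(repository_ctx):
    # Directory containing the WORKSPACE file the attribute points at.
    workspace_dir = repository_ctx.path(repository_ctx.attr.workspace).dirname
    repo_dir = repository_ctx.path("/".join([str(workspace_dir), repository_ctx.attr.path]))
    # repo_dir.exists can now be used to validate the attribute.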

Apache Karaf 4.2.3 - separate log file for each bundle

How can I create a separate log file for each bundle deployed in Karaf 4.2.3 using Pax Logging, which has log4j2 native-style config?
I've tried a routing appender, but got no results.
I expect to write each bundle's logs to a separate log file for easier debugging.
I don't know of any way to do this automatically, but what you could do is create a separate logger configuration for each module, based on its root package name:
log4j2.logger.xy.name = com.company.module.xy
log4j2.logger.xy.level = INFO
log4j2.logger.xy.additivity = false
log4j2.logger.xy.appenderRef.inovel.ref = XyFile
log4j2.logger.zz.name = com.company.module.zz
log4j2.logger.zz.level = INFO
log4j2.logger.zz.additivity = false
log4j2.logger.zz.appenderRef.inovel.ref = ZzFile
log4j2.logger.keycloak.name = org.keycloak
log4j2.logger.keycloak.level = INFO
log4j2.logger.keycloak.additivity = false
log4j2.logger.keycloak.appenderRef.keycloak.ref = KeycloakFile
And a referenced appender could look like:
# keycloak file appender
log4j2.appender.keycloak.type = RollingRandomAccessFile
log4j2.appender.keycloak.name = KeycloakFile
log4j2.appender.keycloak.fileName = ${karaf.data}/log/keycloak.log
log4j2.appender.keycloak.filePattern = ${karaf.data}/log/keycloak.log.%i
log4j2.appender.keycloak.append = true
log4j2.appender.keycloak.layout.type = PatternLayout
log4j2.appender.keycloak.layout.pattern = %d{ISO8601}
log4j2.appender.keycloak.policies.type = Policies
log4j2.appender.keycloak.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.keycloak.policies.size.size = 8MB
log4j2.appender.keycloak.strategy.type = DefaultRolloverStrategy
log4j2.appender.keycloak.strategy.max = 10
This is a lot of manual work, so maybe someone will come up with an automatic configuration.
Just have a look at the official Log4j 2.x configuration that ships with every Karaf distribution, in particular the commented-out "Routing" section. The routing works because pax-logging puts the originating bundle's name into the log4j2 thread context, which the route pattern references as ctx:bundle.name.
E.g. I've used the following in one of my projects:
# Root logger
log4j2.rootLogger.level = INFO
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.level = WARN
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
# Enable log routing...
log4j2.rootLogger.appenderRef.Routing.ref = Routing
# Loggers configuration
...
# Configure the routing (pay close attention to the escapes)...
log4j2.appender.routing.type = Routing
log4j2.appender.routing.name = Routing
log4j2.appender.routing.routes.type = Routes
log4j2.appender.routing.routes.pattern = \$\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.type = Route
log4j2.appender.routing.routes.bundle.appender.type = RollingRandomAccessFile
log4j2.appender.routing.routes.bundle.appender.name = Bundle-\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.appender.fileName = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log
log4j2.appender.routing.routes.bundle.appender.filePattern = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log.%d{yyyy-MM-dd}
log4j2.appender.routing.routes.bundle.appender.append = true
log4j2.appender.routing.routes.bundle.appender.layout.type = PatternLayout
log4j2.appender.routing.routes.bundle.appender.layout.pattern = ${log4j2.pattern}
log4j2.appender.routing.routes.bundle.appender.policies.type = Policies
log4j2.appender.routing.routes.bundle.appender.policies.time.type = TimeBasedTriggeringPolicy
log4j2.appender.routing.routes.bundle.appender.strategy.type = DefaultRolloverStrategy
log4j2.appender.routing.routes.bundle.appender.strategy.max = 31
That clearly worked for me. I wouldn't even think about a static configuration in OSGi! ;-)
The commented log4j configuration section at the link below
https://github.com/apache/karaf/blob/master/assemblies/features/base/src/main/resources/resources/etc/org.ops4j.pax.logging.cfg
will log messages for each bundle to a separate file, but Karaf ships with many bundles by default, so this results in one log file per bundle and a large number of log files. How can it be done only for the specific bundles the user has deployed in the deploy folder?

Custom C++ rule with the cc_common API

I'm trying to write a custom rule to compile C++ code using the cc_common API. Here's my current attempt at an implementation:
load("#bazel_tools//tools/cpp:toolchain_utils.bzl", "find_cpp_toolchain")
load("#bazel_tools//tools/build_defs/cc:action_names.bzl", "C_COMPILE_ACTION_NAME")
def _impl(ctx):
cc_toolchain = find_cpp_toolchain(ctx)
feature_configuration = cc_common.configure_features(
cc_toolchain = cc_toolchain,
unsupported_features = ctx.disabled_features,
)
compiler = cc_common.get_tool_for_action(
feature_configuration=feature_configuration,
action_name=C_COMPILE_ACTION_NAME
)
compile_variables = cc_common.create_compile_variables(
feature_configuration = feature_configuration,
cc_toolchain = cc_toolchain,
)
compiler_options = cc_common.get_memory_inefficient_command_line(
feature_configuration = feature_configuration,
action_name = C_COMPILE_ACTION_NAME,
variables = compile_variables,
)
outfile = ctx.actions.declare_file("test.o")
args = ctx.actions.args()
args.add_all(compiler_options)
ctx.actions.run(
outputs = [outfile],
inputs = ctx.files.srcs,
executable = compiler,
arguments = [args],
)
return [DefaultInfo(files = depset([outfile]))]
However, this fails with the error "execvp(external/local_config_cc/wrapped_clang, ...): No such file or directory". I assume this is because get_tool_for_action returns a string representing a path, not a File object, so Bazel doesn't add wrapped_clang to the sandbox. Executing the rule with sandboxing disabled seems to confirm this, as it completes successfully.
Is there a way to implement this custom rule without disabling the sandbox?
If you use ctx.actions.run_shell you can add the files associated with the toolchain to the inputs (ctx.attr._cc_toolchain.files). Also, you'll want to add the compiler environment variables. E.g.:
srcs = depset(ctx.files.srcs)
tools = ctx.attr._cc_toolchain.files
...
compiler_env = cc_common.get_environment_variables(
    feature_configuration = feature_configuration,
    action_name = C_COMPILE_ACTION_NAME,
    variables = compile_variables,
)
...
args = ctx.actions.args()
args.add_all(compiler_options)
ctx.actions.run_shell(
    outputs = [outfile],
    inputs = depset(transitive = [srcs, tools]),  # merge srcs and tools depsets
    command = "{compiler} $*".format(compiler = compiler),
    arguments = [args],
    env = compiler_env,
)
Bazel doesn't add files as action inputs automatically; you have to do it explicitly, as you did in your second approach (ctx.attr._cc_toolchain.files). With that, ctx.actions.run should work just fine.
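For reference, a minimal sketch of what that run-based variant could look like, reusing outfile, compiler, args, and compiler_env from the snippets above, plus the toolchain's all_files depset:

ctx.actions.run(
    outputs = [outfile],
    # Include the toolchain files so wrapped_clang is staged in the sandbox.
    inputs = depset(ctx.files.srcs, transitive = [cc_toolchain.all_files]),
    executable = compiler,
    arguments = [args],
    env = compiler_env,
)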
