Project MAIN has multiple local_repository rules: A, B, C. They are wrapped in a common macro that MAIN's WORKSPACE file loads:
def load_dependencies():
    native.local_repository(name = "A", ...)  # many of those
    ...
Now I want to create a new project, NEW, which depends on MAIN. MAIN is loaded via a git_repository repository rule.
How do I properly load these local repositories?
I can't just hardcode an absolute path to MAIN's root directory, since it is downloaded by git.
If MAIN is imported into NEW using a git_repository rule like this:
git_repository(
    name = "MAIN",
    <other stuff>
)
then you can load that macro into NEW's workspace and use it like this:
load("#MAIN//path/to:the_file.bzl", "load_dependencies")
load_dependencies()
git_repository is used instead of local_repository; both create a Bazel repository that you can reference with labels. Once you have the git_repository, you don't need a filesystem path the way you would with local_repository, because it is already a repository.
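Putting it together, NEW's WORKSPACE might look like the following sketch; the remote URL and commit are placeholders, and the .bzl path is the one from the question:

# NEW/WORKSPACE -- minimal sketch
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

git_repository(
    name = "MAIN",
    remote = "https://example.com/org/MAIN.git",  # placeholder
    commit = "abc123",                            # placeholder
)

# This load only works after the MAIN repository is declared above.
load("@MAIN//path/to:the_file.bzl", "load_dependencies")

load_dependencies()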
Related
I'm maintaining two Python libraries, A and B, each of which partially uses Bazel for building non-Python code. Library B depends on A at the Bazel level, so B needs a remote repository for A.
For the released version of B, I'd like the remote repository of A pinned in canonical form, for example a git_repository with a commit hash:
git_repository(
    name = "A",
    commit = "...",
    remote = "https://github.com/foo/A",
)
During development, I'd like the remote repository of A in symbolic form, for example a git_repository tracking the master branch:
git_repository(
    name = "A",
    branch = "master",
    remote = "https://github.com/foo/A",
)
And I'd like to use exactly one of them. After some research I found there is no "conditional branch" mechanism (fed from command-line flags or environment variables) that I can use at the WORKSPACE level. I'm asking for any options I might have missed.
The following are alternatives I've looked into, but I'm not 100% happy with any of them.
Using local_repository during development is not an attractive solution: in reality there are 8+ libraries with chained dependencies, and I don't think it is realistic to manually clone and periodically pull all of them.
Using alias() with select() at the BUILD level is also not a very attractive solution, because it turns out there are tens of A's Bazel targets used in B. Defining aliases for all of them is not maintainable at scale (or is there a way to define an alias at the package level?):
# WORKSPACE
git_repository(name = "A", ...)
git_repository(name = "A_master", ...)

# BUILD
config_setting(name = "use_master", ...)

alias(
    name = "A_pkg_label",  # There are too many targets to declare
    actual = select({
        ":use_master": "@A_master//pkg:label",
        "//conditions:default": "@A//pkg:label",
    }),
)
Using two WORKSPACE files seems feasible, but I couldn't find a clean way to select a WORKSPACE file other than manually renaming them.
Defining a custom repository_rule that branches on a repository_ctx.os.environ value seemed promising, until I figured out that I cannot reuse other repository rules inside the implementation.
While you can't reuse other repository rules in general, in practice many of them are written in Starlark and are easy to reuse. For example, git_repository's implementation looks like this:
def _git_repository_implementation(ctx):
    update = _clone_or_update(ctx)
    patch(ctx)
    ctx.delete(ctx.path(".git"))
    return _update_git_attrs(ctx.attr, _common_attrs.keys(), update)
Most of those utility functions are either no-ops if you're only using the basic features, or possible to load from your own Starlark code. You could do a barebones replacement with just this:
load("#bazel_tools//tools/build_defs/repo:git_worker.bzl", "git_repo")
def _my_git_repository_implementation(ctx):
directory = str(ctx.path("."))
git_repo(ctx, directory)
ctx.delete(ctx.path(".git"))
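If you do want the environment-variable branching from the question without reusing git_repository's internals, a repository rule can shell out to git itself. Below is a minimal sketch; the names USE_MASTER and switchable_git_repository are invented for this example, and real code would need error handling and probably shallow clones:

def _switchable_git_impl(ctx):
    # Branch on an environment variable; listing it in `environ` below
    # makes Bazel re-evaluate this repository when the value changes.
    ref = "master" if ctx.os.environ.get("USE_MASTER") else ctx.attr.commit
    # Shell out to git directly; execute() runs in the repository root.
    ctx.execute(["git", "clone", ctx.attr.remote, "."])
    ctx.execute(["git", "checkout", ref])
    ctx.delete(ctx.path(".git"))

switchable_git_repository = repository_rule(
    implementation = _switchable_git_impl,
    attrs = {
        "remote": attr.string(mandatory = True),
        "commit": attr.string(mandatory = True),
    },
    environ = ["USE_MASTER"],
)

Running a build with USE_MASTER=1 set in the environment would then track master, while an unset variable falls back to the pinned commit.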
I'm having trouble understanding how to construct proper label forms when dealing with external repositories (directories with their own WORKSPACE).
What is the semantic meaning of characters like /, :, // or @?
For example:
@foo/bar
@foo:bar
//foo
foo
Do they preserve their meaning when used in an external repository? Also, is //external special in any way?
/ is a separator for package and target names.
relative/package/to/my:target
//absolute/package/to:my/file/target.java
A package is defined as a directory containing a BUILD or BUILD.bazel file.
: is the lexeme for selecting a rule or file target in a package.
//my/package:my_java_binary
Selects the target my_java_binary defined in <workspace root>/my/package/BUILD
//my/package:file.go
Selects the file <workspace root>/my/package/file.go if <workspace root>/my/package/BUILD exists, and if there's a rule in that BUILD file that references it.
//:my/nested/file.txt
Selects the file <workspace root>/my/nested/file.txt if <workspace root>/BUILD exists and there is no BUILD file in the my or my/nested subdirectories.
// is the location of the current or closest parent directory containing a WORKSPACE file, otherwise known as the workspace root.
@ is used for referencing a repository by its name when used to the left of //.
@io_bazel_rules_scala//scala:scala.bzl: look into your WORKSPACE file for a repository named io_bazel_rules_scala, usually defined using http_archive or git_repository.
@//my/package:target: @ alone refers to the current workspace.
As of Bazel 0.16.0, @ can be used in package names.
Do they preserve their meaning when used in an external repository?
Yes, think of the @<repository> syntax as a namespace mechanism.
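For example, inside a repository fetched as foo (an illustrative name), these two labels resolve to the same target:

//my/package:target
@foo//my/package:target

From any other repository, only the @foo//... form reaches it.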
Also, is //external special in any way?
Yes, it's used for the bind function, which is not recommended anymore. bind lets you give a target an alias in //external.
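For reference, bind is declared in the WORKSPACE file and consumed through //external; the names here are illustrative:

bind(
    name = "six",  # creates the alias //external:six
    actual = "@six_archive//:six",
)

A BUILD file can then put "//external:six" in its deps without knowing which repository actually provides the target.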
Say there are two Bazel projects, they both depend on the Python package six.
Project A adds six with the name six_1_10_0:
new_http_archive(
    name = "six_1_10_0",
    ...
)

py_binary(
    name = "lib_a",
    deps = ["@six_1_10_0//:six"],
)
Project B adds six with the name six_archive:
new_http_archive(
    name = "six_archive",
    ...
)

py_binary(
    name = "lib_b",
    deps = ["@six_archive//:six"],
)
In my project, I depend on both A and B. Is there a way to let them use the same six?
To change the BUILD file contents of a dependency, the simplest way I can think of is to use one of the new_* repository rules (e.g. new_git_repository). Using the build_file or build_file_content attribute, write a new BUILD file containing a py_library whose deps point at your canonical @six repository, and keep all other attributes the same; a sketch follows below.
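A sketch of how that could look in the consuming WORKSPACE. The URL, checksum, and the canonical name @six are placeholders, and six is assumed to ship a single six.py:

new_http_archive(
    name = "six",  # the canonical copy
    url = "https://pypi.python.org/packages/source/s/six/six-1.10.0.tar.gz",  # placeholder
    sha256 = "...",  # placeholder
    strip_prefix = "six-1.10.0",
    build_file_content = """
py_library(
    name = "six",
    srcs = ["six.py"],
    visibility = ["//visibility:public"],
)
""",
)

# Redeclare the name Project A expects; its target just forwards to @six.
new_http_archive(
    name = "six_1_10_0",
    url = "https://pypi.python.org/packages/source/s/six/six-1.10.0.tar.gz",  # placeholder
    sha256 = "...",  # placeholder
    strip_prefix = "six-1.10.0",
    build_file_content = """
py_library(
    name = "six",
    deps = ["@six//:six"],
    visibility = ["//visibility:public"],
)
""",
)

# ...and likewise for "six_archive".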
There isn't a straightforward way of doing this because Bazel makes no assumption on why Project A uses a different version of six compared to Project B.
The only way that Bazel knows that they're using the same version is if both new_http_archive rules specify the same SHA checksum. If they are the same checksum, you can use --experimental_repository_cache=/some/path to avoid downloading the same archive twice.
I have the following maven_jar in my workspace:
maven_jar(
    name = "com_google_code_findbugs_jsr305",
    artifact = "com.google.code.findbugs:jsr305:3.0.1",
    sha1 = "f7be08ec23c21485b9b5a1cf1654c2ec8c58168d",
)
In my project I reference it through @com_google_code_findbugs_jsr305//jar. However, I now want to depend on a third-party library that references @com_google_code_findbugs_jsr305 without the jar target.
I tried looking into both bind and alias; however, alias cannot be used inside the WORKSPACE file, and bind doesn't seem to allow you to define targets in external repositories.
I could rename the version I use so it doesn't conflict, but that feels like the wrong solution.
IIUC, your code needs to depend on both @com_google_code_findbugs_jsr305//jar and @com_google_code_findbugs_jsr305//:com_google_code_findbugs_jsr305. Unfortunately, there isn't any pre-built rule that generates BUILD files for both of those targets, so you basically have to define the BUILD files yourself. Fortunately, @jart has written most of it for you in the closure rule you linked to. You just need to add //jar:jar by appending a couple of lines; after line 69 add something like:
repository_ctx.file(
    'jar/BUILD',
    "\n".join([
        "package(default_visibility = ['//visibility:public'])",
    ] + _make_java_import('jar', '//:com_google_code_findbugs_jsr305.jar')),
)
This creates a //jar:jar (or equivalently, //jar) target in the repository.
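Assuming _make_java_import emits a standard java_import (the helper comes from the linked closure rule; this rendering is illustrative), the generated jar/BUILD would look roughly like:

package(default_visibility = ['//visibility:public'])

java_import(
    name = 'jar',
    jars = ['//:com_google_code_findbugs_jsr305.jar'],
)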
I want to copy only specific files in a directory to remote server using Jenkins SCP Plugin.
I have the folder structure /X/Y/... Under Y, I need only the files a, b, and c out of a, b, c, d, e, f. Is this possible?
Of course, to copy all files, all I need is X/Y/**. But what about copying selectively?
I was reading somewhere that this is a kind of bug in the plugin.
I have a string parameter, $FILES=x,y,z, set under "Build with Parameters".
SCP Configuration:
Source: some/path/$FILES (relative to $WORKSPACE)
Destination: /var/lib/some/path
You should be able to say X/Y/a; X/Y/b; X/Y/c
Also remember that these files have to be under the job's ${WORKSPACE}
Alternatively, you can have another build step in between that copies only the files you want into a staging folder, and then supply the staging folder to the SCP plugin.
Edit after OP clarification:
Your $FILES variable contains x,y,z. When you supply this as the Source to the SCP plugin, it becomes:
some/path/x,y,z
Or if we break this one item per line:
some/path/x
y
z
The first item is a valid path; the next two are not complete paths and therefore are not found.
There are several ways to fix this (choose any one):
Full path in parameter variable.
Under your FILES string parameter, list the full paths, like:
some/path/x, some/path/y, some/path/z
Under SCP Source, use only $FILES
pros: quick and stable.
cons: looks ugly with long paths.
Wildcard path in parameter variables.
Under your FILES string parameter, list glob wildcard paths (the files will be found under any directory), like:
**/x, **/y, **/z
Under SCP Source, use only $FILES
pros: quick and looks better than long paths.
cons: only works if files x, y and z are unique in your whole workspace. If there is $WORKSPACE/x and $WORKSPACE/some/path/x, one will end up overwriting the other.
Prepare MYFILES variable and inject it.
You need an Execute Shell build step. In it, write the following:
# Prefix the first entry with the path, then rewrite each "," as
# ",$mypath" so every file listed in $FILES gets the prefix.
mypath=some/path/
echo MYFILES=${mypath}${FILES//,/,$mypath} > myfiles.props
Then add an Inject environment variables build step (provided by the EnvInject plugin). Under Properties File Path, specify myfiles.props.
Under SCP Source, use only $MYFILES (note that you are reading the modified, injected variable, not the original $FILES).
pros: looks good in UI, proper and further customizable.
cons: more build steps to maintain in configuration.
P.S.
In all these cases, a multi-select Extended Choice Parameter will probably look better than a string parameter.