We have proto files that import "google/api/annotations.proto". I'm adding # gazelle:resolve proto go google/api/annotations.proto @org_golang_google_genproto//googleapis/api/annotations to some BUILD files, but gazelle is still adding "@go_googleapis//google/api:annotations_go_proto" as a dep to go_library rules.
How do I find out why gazelle is doing that?
As per the docs for directives: https://github.com/bazelbuild/bazel-gazelle#directives
import-lang is the language importing the library. This is usually the same as source-lang but may differ with generated code. For example, when resolving dependencies for a go_proto_library, source-lang would be "proto" and import-lang would be "go". import-lang may be omitted if it is the same as source-lang.
Since you have your import-lang set to go, it's picking up the name of the rule defining the go_proto_library in that repo:
https://github.com/googleapis/googleapis/blob/9f7c0ffdaa8ceb2f27982bad713a03306157a4d2/google/api/BUILD.bazel#L345
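For reference, the directive is inherited by subdirectories, so placing it once in the repository's top-level BUILD file is usually enough. A minimal sketch (assuming @org_golang_google_genproto is the repo you actually want to resolve to):

# in the top-level BUILD.bazel
# gazelle:resolve proto go google/api/annotations.proto @org_golang_google_genproto//googleapis/api/annotations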
We're starting to use gRPC and are currently using bazel as our build tool. After an engineer pulls in updates to proto definitions, they'll need to proto compile. Due to the structure of our repository, the proto compile targets will be scattered in the repo.
The only option I'm seeing is to use a target naming convention so engineers just need to do something like bazel build //...:compile-proto. Are there other ways to make it easy for engineers to proto compile all updated proto definitions?
If you add a specific tag to each of them, you can use --build_tag_filters.
For example:
a_proto_library(
    name = "compile-proto",
    tags = ["a_proto"],
    [...]
)
and then bazel build --build_tag_filters=a_proto //....
You can also wrap the rule in a macro to add the tag automatically.
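For instance, a minimal sketch of such a macro, assuming plain proto_library from rules_proto (substitute whichever *_proto_library rule you actually compile with):

load("@rules_proto//proto:defs.bzl", "proto_library")

def tagged_proto_library(name, tags = [], **kwargs):
    # Attach the tag that --build_tag_filters matches on.
    proto_library(
        name = name,
        tags = tags + ["a_proto"],
        **kwargs
    )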
I don't think //...:compile-proto is a valid target pattern, so unfortunately that approach won't work (not that you necessarily want to rely on naming conventions anyway). See https://docs.bazel.build/versions/main/guide.html#specifying-targets-to-build
One option is to let bazel do all the updating for you. If you're already doing builds like bazel build //... to build everything, then once you pull in updates to proto definitions, another bazel build //... should rebuild only what has changed.
Another option is to find all rules using bazel query:
https://docs.bazel.build/versions/main/query.html
https://docs.bazel.build/versions/main/query-how-to.html
https://docs.bazel.build/versions/main/query.html#kind
Something like:
targets=$(bazel query "kind('java_proto_library', //...)")
bazel build $targets
Note that query with //... will load every build file in the workspace, but not build anything.
I want to create a tarball of a binary and all libs it depends upon using pkg_tar(). I can retrieve a list of the binary's dependencies with
deps = native.existing_rule('my_binary')['deps']
However, the items in the list lack the @repo_name// prefix that was specified in the cc_binary() rule. For example, @system//:ace becomes :ace; when I try to operate on :ace, bazel rightfully tells me there is no such target.
I've looked through the entire dictionary returned by native.existing_rule and don't see a way to find the missing info. Is it not possible to retrieve this information with native.existing_rule or similar?
I know I can write a macro that creates the cc_binary target and the pkg_tar target, sharing the list of deps between them. This would be more elegant - but it seems quite strange if the deps can't be retrieved from the rule.
Have you considered using aspects? You can attach an aspect to dependencies of a given target and propagate information (in this case, fully-qualified label strings?) up to the root.
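For instance, a minimal sketch of such an aspect (provider and file names are illustrative, not a drop-in implementation):

DepLabelsInfo = provider(fields = ["labels"])

def _collect_labels_impl(target, ctx):
    # Record this target's fully-qualified label, then fold in whatever
    # the aspect already collected from its direct deps.
    labels = [str(target.label)]
    for dep in getattr(ctx.rule.attr, "deps", []):
        if DepLabelsInfo in dep:
            labels += dep[DepLabelsInfo].labels
    return [DepLabelsInfo(labels = labels)]

collect_labels = aspect(
    implementation = _collect_labels_impl,
    attr_aspects = ["deps"],
)

You can try it ad hoc with bazel build //:my_binary --aspects=//:deps_aspect.bzl%collect_labels, or attach it to an attribute of a custom packaging rule that feeds pkg_tar.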
Let me know if you need any additional guidance!
This question piggybacks on this GitHub issue. However, I have run into this issue in one other context.
Context
Within Bazel, there are two repository rules, maven_jar and maven_server.
maven_jar(name, artifact, repository, server, sha1)
maven_server(name, repository, settings)
The maven_jar rule's server attribute is a label pointing to some maven_server target.
Currently, whenever the server attribute is provided, the maven_jar rule errors out.
What I would like to accomplish
Within maven_jar's implementation function, I would like to access the maven_server's attributes. Specifically, I would like to do something along the lines of:
def _impl(rtx):
    settings_attr = rtx.attr.server.getSettings()
    # alternatively
    settings_attr = rtx.attr.server.getAttributes().settings
Is this behavior supported? If not, any way I can approximate it?
The server attribute is a label, so I'm not sure if one can obtain these values using its providers/aspects.
Repository rules are macros, so they do not have providers the same way "normal" rules do. Thus, if you specify a label attribute, it basically has to be a source file.
As settings.xml isn't supposed to be project-specific, I think it might make more sense for maven_jar to use the user's/system's settings.xml, as described in the Maven docs:
There are two locations where a settings.xml file may live:
The Maven install: ${maven.home}/conf/settings.xml
A user’s install: ${user.home}/.m2/settings.xml
The former settings.xml are also called global settings, the latter settings.xml are referred to as user settings. If both files exists, their contents gets merged, with the user-specific settings.xml being dominant.
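In repository-rule terms, a minimal sketch of reading the user-level file directly (assumes HOME is set; a real implementation should also fall back to ${maven.home}/conf/settings.xml):

def _maven_jar_impl(repository_ctx):
    # Look for the user's settings.xml in the standard location.
    home = repository_ctx.os.environ.get("HOME", "")
    settings = repository_ctx.path(home + "/.m2/settings.xml")
    if settings.exists:
        content = repository_ctx.read(settings)
        # ... pick mirror/repository URLs out of the XML as needed ...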
I have the following maven_jar in my workspace:
maven_jar(
    name = "com_google_code_findbugs_jsr305",
    artifact = "com.google.code.findbugs:jsr305:3.0.1",
    sha1 = "f7be08ec23c21485b9b5a1cf1654c2ec8c58168d",
)
In my project I reference it through @com_google_code_findbugs_jsr305//jar. However, I now want to depend on a third party library that references @com_google_code_findbugs_jsr305 without the jar target.
I tried looking into both bind and alias, however alias cannot be applied inside the WORKSPACE and bind doesn't seem to allow you to define targets as external repositories.
I could rename the version I use so it doesn't conflict, but that feels like the wrong solution.
IIUC, your code needs to depend on both @com_google_code_findbugs_jsr305//jar and @com_google_code_findbugs_jsr305//:com_google_code_findbugs_jsr305. Unfortunately, there isn't any pre-built rule that generates BUILD files for both of those targets, so you basically have to define the BUILD files yourself. Fortunately, @jart has written most of it for you in the closure rule you linked to. You just need to add //jar:jar by appending a couple of lines; after line 69, add something like:
repository_ctx.file(
    'jar/BUILD',
    "\n".join([
        "package(default_visibility = '//visibility:public')",
    ] + _make_java_import('jar', '//:com_google_code_findbugs_jsr305.jar')),
)
This creates a //jar:jar (or equivalently, //jar) target in the repository.
I use one Neos installation for multiple domains with different content.
Duplicating the package TYPO3.NeosDemoTypo3Org, removing the node-identifier, and doing some replacements got me nearly everything I need.
But only the first Settings.yaml found in Packages/Sites/ seems to be parsed. All changes to the Settings.yaml found in other packages (Test1 and Test2 in the following example) are ignored.
Packages/Sites/TYPO3.NeosDemoTypo3Org/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://TYPO3.NeosDemoTypo3Org/Private/Form/'
Packages/Sites/UDF.Test1/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'
Packages/Sites/UDF.Test2/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test2/Private/Form/'
When I delete the first Settings.yaml (Packages/Sites/TYPO3.NeosDemoTypo3Org/Configuration/Settings.yaml), the next Settings.yaml in alphabetical order (Packages/Sites/UDF.Test1/Configuration/Settings.yaml) is used for all 3 site packages. When I also delete this file, the Settings.yaml from UDF.Test2 is used, and so on.
It would be awesome if somebody could enlighten me. I am new to Flow and Neos and any help is welcome. RTFM, I know, but as described here I had reason to believe it should work the way I did it.
Alternative way?
Is it possible not to set the savePath in the site package configuration, but in the common settings ./Packages/Application/TYPO3.Form/Configuration/Settings.yaml?
I see a {@package} placeholder in
### BASE ELEMENTS ###
# NAMING: base class for everything is RENDERABLE
'TYPO3.Form:Base':
  renderingOptions:
    templatePathPattern: 'resource://{@package}/Private/Form/{@type}.html'
but this doesn't work here
TYPO3:
  Form:
    yamlPersistenceManager:
      #savePath: '%FLOW_PATH_DATA%Forms/'
      savePath: 'resource://{@package}/Private/Form/'
As you can see, I am not really experienced with this stuff, but I am very motivated.
All Settings.yaml files are used, but the settings are merged in the order in which the packages are loaded.
The loading order of packages again is based on their dependencies.
All three packages probably have the same dependencies, so they are loaded one after the other (I would need to check in which order): the third Settings.yaml is loaded, then the second is loaded and overwrites the third, then the first is loaded and again overwrites the second. Each settings path can only be set once; that's why only one savePath takes effect.
In any case, what you are trying to achieve probably won't work. This is one of the things we have to fix (site-package-dependent configuration).
A possible workaround is either using a common package with the form configuration and just setting the savePath to this package, or using different subcontexts (like Production/Domain1, Production/Domain2) and setting this setting differently per subcontext; then you could choose the subcontext by domain (as the sites are triggered by domain anyway).
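For the subcontext route, a minimal sketch (context and path names are illustrative): give each domain its own context configuration folder and start Flow with the matching FLOW_CONTEXT, e.g. FLOW_CONTEXT=Production/Domain1. Then Configuration/Production/Domain1/Settings.yaml could contain:

# Configuration/Production/Domain1/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'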