I want to retrieve the current workspace name using bazel info or bazel query or some other smart tool (not cat + grep).
What is the best way to do that?
I'm not aware of any "smart" way.
What do you mean by "current" workspace? Why is cat + grep not good enough?
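Not exactly smart, but for what it's worth: bazel info workspace prints the path of the workspace root (not the declared name), and the name itself still has to be scraped out of the WORKSPACE file. A sketch, assuming a conventional one-line workspace(name = "...") declaration:
$ bazel info workspace
$ sed -n 's/^workspace(name = "\(.*\)")/\1/p' "$(bazel info workspace)/WORKSPACE"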
Related
Is there a list of tools that are assumed to be always in the PATH when a Bazel target runs a shell command?
This is relevant for creating isolated build environments. As far as I understand (see https://github.com/NixOS/nixpkgs/pull/50765#issuecomment-440009735), by default Bazel picks up tools from /bin and /usr/bin when in strict mode.
But what can ultimately be assumed about the minimal contents of those directories? For example, I saw awk being used liberally, but then git as well, which sounds borderline.
I imagine the exact set might correspond to whatever Google-internal Bazel expects to find in Google's build images bin directories. At least for BUILD rules open-sourced by Google.
Is there such a definitive list? Thank you.
As far as I can tell, your assessment of the tool usage is correct, and unfortunately I'm not aware of such a list.
There should be one, and Bazel should treat the shell as a toolchain. Alas, nobody is working on that at the moment. See https://github.com/bazelbuild/bazel/issues/5265.
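In the meantime, assuming a Bazel version that has these flags, you can at least make the action PATH explicit in your .bazelrc instead of relying on whatever leaks in:
# use the fixed strict PATH (/bin and /usr/bin) rather than forwarding the client environment
build --incompatible_strict_action_env
# or pin PATH to an explicit value of your choosing
build --action_env=PATH=/usr/bin:/bin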
Question
Is there any way I could use bazel query or aspects to identify where on the package path bazel is picking up a package? Something similar to the which command.
The documentation suggests using the --show_package_location flag. However, that is deprecated and no longer supported, see #5592. Additionally, my attempts at using it have not uncovered much useful information. I have tried bazel query //some/target/... --output label_kind --show_package_location, as well as other permutations with bazel build, and it doesn't output anything different to the console.
Motivation
I have two different directories on my package path for fetch, query and build.
--package_path=%workspace%:%workspace%/__fuse__
This configuration supports a workflow where users perform sparse checkouts of our large repository, while still being able to build code that has not been locally checked out. When building targets, Bazel checks for the locally checked-out version of a package, and if that doesn't exist, it searches a read-only fuse mount.
Sometimes it's unclear to users where a package is getting picked up from, i.e. whether it's the locally checked out version or the one served from fuse. This becomes problematic when they delete or move a Bazel package, and Bazel picks up the version on the fuse mount.
It'd be nice if I could point them to a command that would map each package to where it's being picked up from. For example, if I ran the command on ...
//some/package/foo --> package_path/some/package/foo
//some/package/bar --> other_package_path/some/package/bar
I completely missed this in the bazel query documentation.
With bazel query, I simply needed to add --output location. So provided I make a query like:
bazel query //some/package/... --output location
Then bazel query will output
/absolute/path/some/package/BUILD:lineno:colno target_kind label
for each target in //some/package/...
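If you want the per-package mapping sketched in the motivation section, rather than per-target locations, a small shell loop can produce it. This is only a sketch (one bazel query per package, so slow on large trees); it relies on the fact that a package's BUILD file is itself addressable as //pkg:BUILD:
$ for p in $(bazel query //some/package/... --output package); do
    echo "//$p -> $(bazel query "//$p:BUILD" --output location | cut -d: -f1)"
  done
The printed path shows which package_path entry each package was loaded from.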
Suppose you have this:
$ bazel query "filter('_image_publish$', attr(generator_function, go_server_v1, ...))"
//helloworld/server:zurigo_server_image_publish
//bababot:bababot_server_image_publish
Is it possible to create rules or macros that let me do a single bazel build that builds all the targets above?
I'd like to do:
$ bazel build :all-servers
Which would implicitly build the ones from the output above. Is this possible?
Another way to put it: I'm looking for a Skylark alternative to looping over the output of the query in bash.
You can write a genquery() rule, which will write the query result targets into a file in bazel-bin.
The final command will look something like:
bazel build //package:my_genquery && cat bazel-bin/package/my_genquery | xargs bazel build
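A sketch of the genquery() itself. The //servers:all_servers aggregator below is hypothetical; genquery expressions may not contain wildcard patterns such as //..., so scope has to name targets whose transitive closure covers everything you want to match:
# BUILD file sketch; //servers:all_servers is a made-up aggregating target
genquery(
    name = "my_genquery",
    expression = "filter('_image_publish$', attr(generator_function, go_server_v1, deps(//servers:all_servers)))",
    scope = ["//servers:all_servers"],
)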
I wonder whether there are features in Jenkins to capture results/data on a node and persist them on the master.
I have a scenario where I need to check some folders on two machines to see whether they have the same number of files and the same sizes.
If Hudson can save a result like the output of "ls -ltR" on the master, then I can gather the results from both nodes in two jobs and compare them.
Is there an elegant solution to this simple problem?
Currently I can connect the two machines to each other via SSH and solve it that way, but this connection is not always available.
(With SSH I believe the best way is to use rsync -ani /path/to/ hostB:/path/to/, i.e. a dry run that itemizes the differences.)
Simple problem, only slightly elegant solution:
Write a simple job listdir which runs DIR > list1.txt in the job's workspace (archived artifacts are resolved relative to the workspace).
Go to Post-build Actions
Add Archive the artifacts with, for the example above, the pattern list1.txt (or *.txt)
Now run a build and go to http://jenkinsservername:8080/job/listdir/
You'll see the list1.txt which you can click on, and see the contents.
I have given a Windows example; you can of course replace DIR with ls -ltR
Or use Archive the artifacts in combination with the Copy Artifact Plugin to pull the result of one job into the job where the comparison is to be done.
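To close the loop, here is a sketch of the comparison step run on the master (the job names and artifact paths are made up; the lastSuccessfulBuild/artifact URL pattern is standard Jenkins):
$ wget -O listA.txt http://jenkinsservername:8080/job/listdir-nodeA/lastSuccessfulBuild/artifact/list1.txt
$ wget -O listB.txt http://jenkinsservername:8080/job/listdir-nodeB/lastSuccessfulBuild/artifact/list1.txt
$ diff listA.txt listB.txt && echo "folders match"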
Is there a way, in Hudson, of getting the list of files from a p4 change list and passing it to an Ant build script?
Do you want to just trigger your Ant build script if a check-in is made to Perforce? If so, that's straightforward; use the Perforce plugin.
You might be able to parse them out of the change list that Hudson generates. I don't know of any way to get it from the p4 plugin, although I think it would be useful information also.
Try something like this:
wget -O changes.txt ${BUILD_URL}changes   # BUILD_URL already ends with a slash
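The changes page is HTML, so you will still need to parse the file names out of it. Once you have a plain file list, handing it to Ant is just a property; the property name here is a made-up example:
$ ant -Dp4.changes.file=changes.txt build
Inside build.xml, the core <loadfile> task can then read ${p4.changes.file} into a property for further processing.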