I am trying to create a rule (maybe one already exists?) to un-tar a file during a Bazel build step.
If I understand correctly, all output files need to be known during the "Analysis Phase". To work around this I have a file, let's call it manifest.txt, which lists all the files in the tar file. However, I don't quite understand how I can read this file as a list of outputs for my Skylark rule. Is there an easy way to read a file and have each line become a declared output?
Thanks.
This is only possible if the manifest file is a source file, i.e. it is NOT generated by some rule in the build.
Rules must declare all their outputs without relying on the content of generated files; it is therefore not supported to create, for example, a genrule whose outs are computed from a manifest file that is itself generated by another rule.
To work with a tar file input, the rule needs to unpack the tar with an action and ultimately produce a predictable number of outputs (i.e. one that does not depend on how many files are in the tar). Typically this is done by repacking the outputs, that is, the rule consumes one tar and produces another.
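As a rough sketch of that pattern (the rule name, attribute, and repack command below are illustrative only, not an existing rule):

def _repack_tar_impl(ctx):
    # One declared output, regardless of what the input tar contains.
    out = ctx.actions.declare_file(ctx.label.name + ".tar")
    # Unpack into a scratch directory, transform as needed, then repack.
    ctx.actions.run_shell(
        inputs = [ctx.file.src],
        outputs = [out],
        command = "tmp=$(mktemp -d) && tar -xf {src} -C $tmp && tar -cf {out} -C $tmp .".format(
            src = ctx.file.src.path,
            out = out.path,
        ),
    )
    return [DefaultInfo(files = depset([out]))]

repack_tar = rule(
    implementation = _repack_tar_impl,
    attrs = {"src": attr.label(allow_single_file = [".tar"])},
)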
Related
I have a .bzl file in the same directory as WORKSPACE. This .bzl file is loaded by the WORKSPACE file and by one other file in the source tree.
bazel query --universe_scope=//... --order_output=no 'rbuildfiles(variables.bzl)'
prints the paths of the two files I would expect, but also references to about 200 other files, which are all external dependencies and cannot possibly depend on variables.bzl.
for example:
@pypi__futures_3_2_0//:BUILD
@pypi__grpcio_1_14_1//:BUILD
@eigen//:BUILD.bazel
@io_bazel_rules_go//go/private:BUILD.bazel
I assume I am doing something incorrectly and that this is not a bug; any expertise would be greatly appreciated. How can I use rbuildfiles to return only the files which load variables.bzl?
The WORKSPACE file of the main repo can arbitrarily affect external repositories, so rbuildfiles is showing you those files because changes to variables.bzl could in fact affect all those external BUILD files indirectly through WORKSPACE.
If you don't actually want to see the BUILD files in external repositories, you could intersect the result of rbuildfiles with //....
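For example, something along these lines should restrict the result to the main repository (untested, but it uses only standard query operators):
bazel query --universe_scope=//... --order_output=no 'rbuildfiles(variables.bzl) intersect //...'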
I am aware that Bazel accepts both BUILD and BUILD.bazel as valid filenames.
The Android tools seem to also have a BUILD.tools file.
In general, does Bazel have any restrictions on a BUILD file's extension? For example, could I have BUILD.generated to distinguish generated BUILD files from non-generated ones?
The .tools extension is part of building Bazel itself. From the perspective of Bazel, it's just any ordinary file. It gets picked up here: https://github.com/bazelbuild/bazel/blob/bbc8ed16aee07c3ba9321d58aa4c0ffc55fa2ba9/tools/android/BUILD#L197
then eventually gets processed here: https://github.com/bazelbuild/bazel/blob/c816b89a2224c3c318f1228755ef41c53975f45c/src/create_embedded_tools.py#L74
For the use case you mention, one way to go about it is to generate a .bzl file with a meaningful name that contains a macro that you can call from a BUILD or BUILD.bazel file. That way you can separate the generated rules from manually maintained rules. This is similar to how generate_workspace works: https://docs.bazel.build/versions/master/generate-workspace.html
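For illustration, the generated .bzl file might contain something like this (the macro name and targets are hypothetical):

def generated_targets():
    # Emitted by the generator; wraps all the generated rules in one macro.
    native.filegroup(
        name = "generated_srcs",
        srcs = ["a.txt", "b.txt"],
    )

A hand-maintained BUILD.bazel then just loads and calls it:

load("//:generated.bzl", "generated_targets")
generated_targets()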
I'm writing a post-build tool that needs the locations of a list of targets' jar files.
For these locations I have an aspect that runs on a list of targets (separately for each target, using --aspects) and fetches the jar file path for each of them.
I've managed to get each jar file path into a custom output file (e.g. jar.txt) in each target's output folder.
But this means I would need to go over each jar.txt file separately to get the locations.
Is there a way to accumulate the jar files paths in a single file?
Something like:
1. Try to write to the same output folder, appending from the aspect. I'm not sure if a shared output folder is possible.
2. Create a synthetic target which depends on all the relevant targets, then run an aspect on this target and accumulate the jars, writing them at the root only once the recursion unwinds.
Are 1 or 2 valid options? What is the recommended strategy for accumulating data in Bazel aspect output files?
Bazel doesn't provide facilities in Skylark for accumulating information across targets that are not related to each other in the target graph (e.g. ones that are merely mentioned next to each other on the command line).
One possibility would be to write a Skylark rule that depends on all the targets you usually mention on the command line and build that one; that rule would be able to collate the classpaths from each Java target into a single file.
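Sketched out, such a collator rule might look like the following. This is a rough sketch that assumes the deps are Java targets and uses the JavaInfo provider; the rule and attribute names are made up:

def _jar_manifest_impl(ctx):
    # Gather the transitive runtime jars of every dep into a single depset.
    jars = depset(transitive = [
        dep[JavaInfo].transitive_runtime_jars
        for dep in ctx.attr.deps
    ])
    # Write one jar path per line into a single manifest file.
    out = ctx.actions.declare_file(ctx.label.name + ".txt")
    ctx.actions.write(out, "\n".join([j.path for j in jars.to_list()]))
    return [DefaultInfo(files = depset([out]))]

jar_manifest = rule(
    implementation = _jar_manifest_impl,
    attrs = {"deps": attr.label_list(providers = [JavaInfo])},
)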
Another possibility is to tell Bazel to write build events (which include all the outputs of all targets the specified build pattern expands to) to a file, using the --experimental_build_event_{json,text,binary}_file flags. (The "experimental" prefix will be removed soon.) The files contain instances of this message:
https://github.com/bazelbuild/bazel/blob/master/src/main/java/com/google/devtools/build/lib/buildeventstream/proto/build_event_stream.proto
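For example, using the flag name mentioned above:
bazel build //... --experimental_build_event_json_file=build_events.json
The resulting stream contains, among others, NamedSetOfFiles and TargetComplete events, from which the per-target output files can be read.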
Natan,
If I understand correctly, you want to transitively propagate the information from each aspect node out into a single result. To do this, build the transitive set in your aspect implementation and pass it along via the "provider" mechanism [^1]. I wrote up some examples on Bazel aspects; perhaps you'll find them useful [^2].
[^1]: https://github.com/pcj/bazel_aspects/blob/master/aspects.bzl#L94-L104
[^2]: https://github.com/pcj/bazel_aspects
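For what it's worth, the core of that pattern looks roughly like this; the provider and aspect names here are invented for illustration (the linked repo uses its own):

JarPathsInfo = provider(fields = ["paths"])  # paths: a depset of strings

def _collect_jars_impl(target, ctx):
    # This target's own jars, if it is a Java target.
    direct = []
    if JavaInfo in target:
        direct = [jar.path for jar in target[JavaInfo].runtime_output_jars]
    # Merge in what the aspect already computed for the deps.
    transitive = [
        dep[JarPathsInfo].paths
        for dep in getattr(ctx.rule.attr, "deps", [])
        if JarPathsInfo in dep
    ]
    return [JarPathsInfo(paths = depset(direct, transitive = transitive))]

collect_jars = aspect(
    implementation = _collect_jars_impl,
    attr_aspects = ["deps"],
)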
The app I have under development uses a lot of plists, which take up a lot of space. I am considering zipping up the plist files; at runtime, the app would unzip them into NSData and deserialize them into NSDictionary using NSPropertyListSerialization for eventual use.
Is there anything risky about this approach? Or is it pointless if Property List Output Encoding is set to binary?
If you are getting to the point where you're considering zipping up property lists to save on space, I think it's time to move to a different format. Property lists are good for storing a few defaults or some settings, but they're not a replacement for a database.
My recommendation would be to look at moving this data into a Core Data database. If you are trying to embed large digital files as elements in the property list, look at storing those as separate resources in your application bundle or in your application's /Documents directory.
Core Data will allow you to lazy-load items as you need them, saving on load times and memory, and its SQLite backing store will provide fast read/write times.
If your project settings are properly set up (that is, the Property List Output Encoding you mentioned is set to binary in the active target's settings), all of the .plist files will be converted to binary form. Look inside the bundle of your build product and check the sizes of the files.
Update:
Further investigation shows that if you add a directory containing your plist files as a Folder Reference, Xcode will not look inside it; it will simply copy its contents into the produced bundle. Therefore, you should convert the plists yourself using plutil.
A possible way to automate the task is to add a Build Script phase to your target in Xcode and make it recursively copy all the files from directory A to directory B, converting all the files that end in .plist. Then add B to your target as a reference instead of A and you're good to go.
You could also simply create a new directory for binary plists and add them to your target instead of ordinary XML plists.
Here's how the script might look. Suppose all of your plists are stored in a directory named dir (or any of its subdirectories), located at /path/to/dir. To convert them all into the directory /path/to/B, one could write the following:
#!/bin/bash
# Copy every .plist found under /path/to/dir into /path/to/B (flattening the
# hierarchy), then convert the copies to binary form in place with plutil.
# Note: -J is BSD xargs (as on macOS); paths containing spaces need extra care.
find "/path/to/dir" -iname '*.plist' | xargs -L 1 -J % cp % "/path/to/B"
find "/path/to/B" -iname '*.plist' | xargs -L 1 plutil -convert binary1
When compiling LaTeX documents, the compiler emits a lot of "object" files. These clutter the directories I'm working in and complicate the use of a VCS like SVN. When I work with C++ code I have separate directories for the sources and the objects: I can run make in the source directory, but the .o files go to the build directory.
Is there a proper way to perform this kind of separate compilation with LaTeX documents? Can it be done using Makefiles or by passing options to the LaTeX compiler?
Thanks
You can use:
pdflatex --output-directory=tmp file.tex
and all the generated files will be stored in the folder tmp (pdf included). Note that the tmp directory must already exist; at least with TeX Live, pdflatex will not create it for you.
Because this is not an optimal solution, I made my own tool, pydflatex, which compiles the LaTeX source while stashing away the auxiliary files (using the trick above) and brings the pdf back to the current directory, so after compiling you only have file.tex and file.pdf in your directory. This plays very well with version control.
I can't help much with LaTeX (having last used it seriously 20 years ago ;-), but for Subversion, you should read up on the svn:ignore property -- it makes it easy to ignore files with extensions you do not want to version (object files, bytecode files such as Python often puts in the same directory as the sources, backup files some text editors use, etc.).
LaTeX generates its temporary files in the directory where the main document is located. If you want them placed in a different location, try a main file like the one below.
\documentclass{article}
\input{src/maindocument.tex}
Using this method, you could maintain a directory structure like below:
/
  main.tex
  /src
    maindocument.tex
Two options, besides the above:
1. Use LyX: it looks after the separate files. I think it copies the LaTeX file over to its own private directory and runs latex on it. In any case, nothing is created in the current directory.
2. Use a Makefile or one of the special LaTeX make programs (latexmk, for example), and have your regular targets run make clean.