I wanted to make a BUILD rule that depends on a data file which is generated from other files. I tried to write something like this:
genrule(
    name = "data",
    outs = ["MyApp/data.dat"],
    cmd = "cd ../libpackfiles ; bazel run FilePacker $(PWD)/../apps/MyApp/data.dat $(PWD)/../apps/MyApp/dataFiles",
)
But it didn't work, for several reasons: $(PWD) was not recognized by the genrule (the command works fine in an equivalent sh script), and I don't know whether I am allowed to leave my workspace directory (which is called apps) to run another bazel run command like this.
How could I write a correct genrule that achieves what I want?
Thanks!
Some things to keep in mind when writing genrules:
A genrule needs to know all its input files and output files (srcs and outs attributes)
It needs to know the tools it's going to use in the command (exec_tools attribute). These tools can be other things that need to be built, like binary targets (cc_binary, java_binary, sh_binary, py_binary, etc), or they can be pre-compiled binaries.
The tools have to produce the same files that the genrule declares in the outs attribute. It's often easier to declare the files in the outs attribute and then pass the file names to the tool using $(OUTS) in the cmd attribute.
See the documentation for genrule, which includes some simple examples: https://docs.bazel.build/versions/master/be/general.html#genrule
It would look something like this:
genrule(
    name = "gen_data",
    srcs = [":dataFiles"],
    outs = ["data.dat"],
    exec_tools = ["//libpackfiles:FilePacker"],
    cmd = "$(location //libpackfiles:FilePacker) $(OUTS) $(SRCS)",
)
This assumes that :dataFiles is a filegroup target in the same BUILD file as the gen_data target which has a list of files of all the inputs you want to pack. Or it could be a filegroup target in a BUILD file in the dataFiles directory, in which case it would be something like //app/MyApp/dataFiles:dataFiles (and don't forget to set its visibility attribute to //visibility:public).
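For example, a minimal filegroup in the same BUILD file might look like this (the glob pattern assumes the inputs live in a dataFiles subdirectory):
filegroup(
    name = "dataFiles",
    srcs = glob(["dataFiles/**"]),
)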
$(location //libpackfiles:FilePacker) is replaced with the file path of that target. $(OUTS) is replaced with all the files in the outs attribute, and similarly for $(SRCS).
Related
I have generated some output files using bazel build, but it is a bit tedious to specify the path of the bazel-bin directory every time I need to access the output.
In deeply nested Bazel projects, not only do I need the path to the specific repository, /Users/username/repos/organisation/folder/folder/repo, I also need to append bazel-bin/folder1/folder2/folder3/folder4/binary_i_want. I would prefer to say $output/binary_i_want. Bazel should be able to find the project directory (as it looks up the WORKSPACE file), locate bazel-bin, and then look for the equivalent of the directory I am in. This is because I might not be running the binary directly, but instead copying it to an Android device with adb push.
Is this possible? Thank you
You can use $(bazel info bazel-bin)/binary_i_want for this.
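For the adb push use case mentioned above, that might look like this (the binary's package path and the device directory are illustrative):
adb push "$(bazel info bazel-bin)/folder1/binary_i_want" /data/local/tmp/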
Edit: Getting the complete path to an artifact generated by a rule is a bit more involved. One option using jq could be:
$(bazel info workspace)/$(bazel aquery //:some_path --output jsonproto 2>/dev/null | jq -r ".artifacts[0].execPath")
(Inspired by this answer: Bazel: How do you get the path to a generated file?)
I am attempting to use Bazel to compile a dhall program based on dhall-kubernetes to generate a Kubernetes YAML file.
The basic dhall compile without dhall-kubernetes using a simple bazel macro works ok.
I have made an example using dhall's dependency resolution to download dhall-kubernetes - see here. This also works, but it is very slow (I think because dhall downloads each remote file separately) and introduces a network dependency into the Bazel rule execution, which I would prefer to avoid.
My preferred approach is to use Bazel to download an archive release version of dhall-kubernetes and then have the rule access it locally (see here). My solution requires a relative path in Prelude.dhall and package.dhall for the examples/k8s package to reference dhall-kubernetes. While it works, I am concerned that this subverts the Bazel sandbox by requiring special knowledge of the folder structure Bazel uses internally. Is there a better way?
Prelude.dhall:
../../external/dhall-kubernetes/1.17/Prelude.dhall
WORKSPACE:
workspace(name = "dhall")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

DHALL_KUBERNETES_VERSION = "4.0.0"

http_archive(
    name = "dhall-kubernetes",
    sha256 = "0bc2b5d2735ca60ae26d388640a4790bd945abf326da52f7f28a66159e56220d",
    url = "https://github.com/dhall-lang/dhall-kubernetes/archive/v%s.zip" % DHALL_KUBERNETES_VERSION,
    strip_prefix = "dhall-kubernetes-%s" % DHALL_KUBERNETES_VERSION,
    build_file = "@//:BUILD.dhall-kubernetes",
)
BUILD.dhall-kubernetes:
package(default_visibility = ["//visibility:public"])

filegroup(
    name = "dhall-k8s-1.17",
    srcs = glob(["1.17/**/*"]),
)
examples/k8s/BUILD:
package(default_visibility = ["//visibility:public"])

genrule(
    name = "special_ingress",
    srcs = [
        "ingress.dhall",
        "Prelude.dhall",
        "package.dhall",
        "@dhall-kubernetes//:dhall-k8s-1.17",
    ],
    outs = ["ingress.yaml"],
    cmd = "dhall-to-yaml --file $(location ingress.dhall) > $@",
    visibility = ["//visibility:public"],
)
There is a way to instrument dhall to do "offline" builds, meaning that the package manager fetches all Dhall dependencies instead of Dhall fetching them.
In fact, I implemented exactly this for Nixpkgs, which you may be able to translate to Bazel:
Add Nixpkgs support for Dhall
High-level explanation
The basic trick is to take advantage of a feature of Dhall's import system: if a package protected by a semantic integrity check (i.e. a "semantic hash") is already cached, then Dhall will use the cache instead of fetching the package. You can build upon this trick to have the package manager bypass Dhall's remote imports by injecting dependencies this way.
You can find the Nix-related logic for this here:
Nix function for building a Dhall package
... but I will try to explain how it works in a package-manager-independent way.
Package structure
First, the final product of a Dhall "package" built using Nix is a directory with the following structure:
$ nix-build --attr 'dhallPackages.Prelude'
…
$ tree -a ./result
./result
├── .cache
│   └── dhall
│       └── 122026b0ef498663d269e4dc6a82b0ee289ec565d683ef4c00d0ebdd25333a5a3c98
└── binary.dhall

2 directories, 2 files
The contents of this directory are:
./.cache/dhall/1220XXX…XXX
A valid cache directory for Dhall containing a single build product: the binary encoding of the interpreted Dhall expression.
You can create such a binary file using dhall encode and you can compute the file name by replacing the XXX…XXX above with the sha256 encoding of the expression, which you can obtain using the dhall hash command.
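As a sketch (assuming a local package.dhall), the hash and the cache file name could be computed like this:
HASH=$(dhall hash --file ./package.dhall)   # prints e.g. "sha256:26b0ef49..." as in the tree above
dhall --alpha --file ./package.dhall | dhall encode > ".cache/dhall/1220${HASH#sha256:}"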
./binary.dhall
A convenient Dhall file containing the expression missing sha256:XXX…XXX. Interpreting this expression succeeds only if an expression matching the hash sha256:XXX…XXX is already cached.
The file is called binary.dhall because this is the Dhall equivalent of a "binary" package distribution, meaning that the import can only be obtained from a binary cache and cannot be fetched and interpreted from source.
Optional: ./source.dhall
This is a file containing a fully αβ-normalized expression equivalent to the expression that was cached. By default, this should be omitted for all packages except perhaps the top-level package, since it contains the same expression that is stored inside ./.cache/dhall/1220XXX…XXX, albeit less efficiently (the binary encoding is more compact).
This file is called ./source.dhall because this is the Dhall equivalent of a "source" package distribution, which contains valid source code to produce the same result.
User interface
The function for building a package takes four arguments:
The package name
This is not material to the build. It's just to name things since every Nix package has to have a human-readable name.
The dependencies for the build
Each of these dependencies is a build product that produces a directory tree just like the one described above (i.e. a ./.cache directory, a ./binary.dhall file, and an optional ./source.dhall file)
A Dhall expression
This can be arbitrary Dhall source code, with only one caveat: all remote imports transitively referenced by the expression must be protected by integrity checks, AND those imports must match one of the dependencies of this Dhall package (so that the import can be satisfied via the cache instead of the Dhall runtime fetching the URL)
A boolean option specifying whether to keep the ./source.dhall file, which is False by default
Implementation
The way that the Dhall package builder works is as follows (a shell sketch of these steps appears after the list):
First, build the Haskell Dhall package with the -f-with-http flag
This flag compiles out support for HTTP remote imports, that way if the user forgets to supply a dependency for a remote import they will get an error message saying Import resolution is disabled
We'll be using this executable for all of the subsequent steps
Create a cache directory within the current working directory named .cache/dhall
... and populate the cache directory with the binary files stored inside each dependency's ./cache/ directory
Configure the interpreter to use the cache directory we created
... by setting XDG_CACHE_HOME to point to the .cache directory we just created in our current working directory
Interpret and α-normalize the Dhall source code for our package
... using the dhall --alpha command. Save the result to $out/source.dhall where $out is the directory that will store the final build product
Obtain the expression's hash
... using the dhall hash command. We will need this hash for the following two steps.
Create the corresponding binary cache file
... using the dhall encode command and save the file to $out/.cache/dhall/1220${HASH}
Create the ./binary.dhall file
... by just writing out a text file to $out/binary.dhall containing missing sha256:${HASH}
Optional: Delete the ./source.dhall file
... if the user did not request to keep the file. Omitting this file by default helps conserve space within the package store by not storing the same expression twice (as both a binary file and source code).
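Here is a rough shell sketch of those steps, assuming a dhall binary compiled without HTTP support; $out and the dependencies array are illustrative placeholders:
# Create a cache directory and populate it from each dependency's ./.cache.
mkdir -p .cache/dhall
for dep in "${dependencies[@]}"; do
    cp "$dep"/.cache/dhall/* .cache/dhall/
done

# Point the interpreter at the cache we just created.
export XDG_CACHE_HOME="$PWD/.cache"

# Interpret and alpha-normalize the package's source code.
dhall --alpha --file ./package.dhall > "$out/source.dhall"

# Obtain the expression's hash (strip the "sha256:" prefix for the file name).
HASH=$(dhall hash --file "$out/source.dhall" | sed 's/^sha256://')

# Create the corresponding binary cache file.
mkdir -p "$out/.cache/dhall"
dhall encode --file "$out/source.dhall" > "$out/.cache/dhall/1220${HASH}"

# Write the ./binary.dhall file.
echo "missing sha256:${HASH}" > "$out/binary.dhall"

# Optional: delete ./source.dhall if the user did not ask to keep it.
# rm "$out/source.dhall"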
Packaging conventions
Once you have this function, there are a couple of conventions that can help simplify doing things "in the large":
By default, a package should build a project's ./package.dhall file
Make it easy to override the package version
Make it easy to override the file built within the package
In other words, if a user prefers to import individual files like https://prelude.dhall-lang.org/List/map instead of the top-level ./package.dhall file there should be a way for them to specify a dependency like Prelude.override { file = "./List/map"; } to obtain a package that builds and caches that individual file.
Conclusion
I hope that helps! If you have more questions about how to do this, you can ask them here or discuss further on our Discourse forum, especially on the thread where this idiom first originated:
Dhall Discourse - Offline use of Prelude
I have a simple project which uses SWIG to make a small C++ library available to C#. The C++ part is a single source and a single header file -- in addition there is one SWIG interface file. The output from SWIG consists of 5 C# source files and 1 C++ source file.
Doing this with make is fairly simple, but I'm having a few problems wrapping my head around bazel.
How can I tell bazel that those 6 files are all generated using the same command? Also, while I'm at it, how do I tell bazel to actually invoke that command?
The end product that I'm ultimately interested in is a .NET DLL which depends only on the interface file and the original C++ header file.
Bazel doesn't have a built-in rule to generate SWIG from C++, but you can either use a general-purpose rule (genrule) or teach Bazel how to build a SWIG library by writing your own rule.
If you use a genrule, you specify all the expected outputs in the outs attribute. Your rule will look something like this:
genrule(
    name = "cc_swig",
    srcs = [
        "lib.cc",
        "lib.h",
    ],
    outs = [
        "file1.cs",
        ...
        "fileN.cc",
    ],
    tools = [
        "//path/to/swig/compiler:bin",
    ],
    cmd = "$(location //path/to/swig/compiler:bin) --src=$(location lib.cc) --header=$(location lib.h) --out1=$(location file1.cs) ... --outN=$(location fileN.cc)",
)
The $(location) construct in cmd is a required placeholder; Bazel replaces each one with the run-time path of the referenced file.
(If the SWIG compiler won't let you specify where to put its outputs, you can add more commands to cmd that mv the output files to their final location, e.g. cmd = "... && mv outputs/lib.cs $(location file1.cs)".)
Writing your own rules is more advanced, so I won't describe that here; you can read about them in the docs.
On how to get Bazel to build the library -- if the SWIG-compiling rule is a dependency of your top-level target (i.e. whatever you "bazel build"), then Bazel will build it. See for example the Getting started guide, on how to build a C++ project.
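For example, a downstream target can consume the generated C++ source by listing the genrule's output file directly (a sketch using the file names from the genrule above):
cc_library(
    name = "swig_wrapper",
    # fileN.cc is produced by the :cc_swig genrule above.
    srcs = ["fileN.cc"],
)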
iOS/OS X application names usually contain spaces (like "App Store.app").
But when I try to use such a name in my Qt/iOS project like this:
ios: TARGET = "My Cool App"
the build process fails with a strange error in the Bash script autogenerated by Qt.
Am I doing something wrong, or are spaces in names simply not supported?
UPD
The problem first occurs in the qmake-generated shell script:
#!/bin/sh
cp -r $BUILT_PRODUCTS_DIR/$FULL_PRODUCT_NAME /Users/eraxillan/Projects/<PROJECT_DIR>
If $FULL_PRODUCT_NAME contains spaces, the script simply fails.
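Quoting the variables would avoid the word splitting; a corrected version of the generated line might look like this (paths as in the script above):
cp -r "$BUILT_PRODUCTS_DIR/$FULL_PRODUCT_NAME" "/Users/eraxillan/Projects/<PROJECT_DIR>"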
The script is generated and executed only when the DESTDIR project variable has a custom value, which is my case.
So this looks like a bug in the qmake-generated script that copies project build artifacts to a custom output directory.
Workarounds:
Remove spaces from the target name (My Cool App --> my-cool-app)
Do not use a custom DESTDIR value at all
Hope this helps!
This should work:
TARGET = My" "Cool" "App
Xcode includes a flexible build rules system. The documentation is all but non-existent, however.
A project I am working on for iOS 5 and iOS 6 includes an RTF help file. For iOS 6, I can convert the RTF file into an archived NSAttributedString object, then load that at runtime and display it directly in a UITextView. For iOS 5, I can't (without a lot of work in Core Text...), so I want just the text without the style info.
I wrote a command line tool, RTFToData that takes an RTF file as input and generates a .txt file and a .data file (where the .data file contains a version of the styled text that my project knows how to use.)
Here is the syntax of my command line tool:
RTFToData [-o] source_path [destination_directory]
-o (optional) overwrite existing files
source_path (required) path to the source RTF file (must have extension "rtf" or "RTF")
destination_directory (optional) writes output files to the source file's directory if no destination is specified
destination_directory must exist if specified.
I want to set up my project so that I can add .rtf files as sources (with the "add to target" checkbox NOT checked.) I want Xcode to run my RTFToData command on each file specifying that the output files should be copied into a directory and then added to the target.
Ideally, I'd like the build process to know about the dependencies between my source .rtf files and the processed .data and .txt files: if I touch a .rtf file, I'd like the build process to re-run the RTFToData command.
I am a makefile and Unix scripting neophyte. I THINK I can use a run-script build rule to do this, but I am unclear on how. I guess I need to write a script that finds all files of type ".rtf" and pipes that list of files into an invocation of my RTFToData.
Can somebody outline the steps I need to take in the Xcode IDE to make my project handle this smoothly?
As a side-note, is there some directory where you can put command line tools so they are available to the current version of Xcode? So far I've been installing the RTFToData command in /Library/usr/bin, but I'd really like the build tool to be included in the project, or at the very least, not have to use sudo to set up every development machine that is used to build this project.
Create a custom build rule
Add the .rtf files to your project and make sure they are added to your target.
Go to your target settings and select the "Build Rules" tab:
Click the "Add Build Rule" button at the bottom.
You want to configure your rule based on something like this:
Enter a standard wildcard glob for the files you want to match (*.rtf).
Inside the script section you can make use of a number of environment variables. Assuming your glob has matched the input file Test.rtf you have access to these vars:
INPUT_FILE_PATH = /path/to/your/project/source/Test.rtf
INPUT_FILE_NAME = Test.rtf
INPUT_FILE_BASE = Test
INPUT_FILE_SUFFIX = .rtf
INPUT_FILE_DIR = /path/to/your/project/source/
You want to process your file and send it to the ${DERIVED_FILES_DIR} directory with whatever new filename or extension you need. In this case we take the base filename from the input and give it a new extension.
Fill out the "Output Files" section with the same output file you used in the script. This will ensure the dependency system works and that the file will be copied to your .app. The script will only be run if the input has changed or the output file is missing from the .app.
Note that the "Output Files" should not have double quotes. The paths will be quoted for you by Xcode.
If your script generates multiple output files, add extra entries for those as well.
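For the RTFToData tool described in the question, the script and the "Output Files" entries might look like this (the tool's location under ${SRCROOT} is an assumption; the tool writes one .txt and one .data file per the syntax above):
Script:
"${SRCROOT}/Tools/RTFToData" -o "${INPUT_FILE_PATH}" "${DERIVED_FILES_DIR}"
Output Files:
$(DERIVED_FILES_DIR)/$(INPUT_FILE_BASE).txt
$(DERIVED_FILES_DIR)/$(INPUT_FILE_BASE).data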
Once this is all set up, .rtf files added to your target will be converted to whatever output files your script generates. The original .rtf files will not exist in the final .app.
Where to put scripts/programs
As a side-note, is there some directory where you can put command line
tools so they are available to the current version of Xcode?
Put your tools somewhere below the directory that contains your .xcodeproj. Then, from your build phases/rules, use the ${SRCROOT} environment variable, which is the directory containing your project:
Assuming this file system layout:
/path/to/project/project.xcodeproj
/path/to/project/Tools/CommandLineTool
Use this in your build phase/rules:
"${SRCROOT}/Tools/CommandLineTool" "${INPUT_FILE_PATH}" ...
Remember to use double-quotes everywhere you can!