How do I get the target directory in Bazel?

I've got a genrule that produces some output files, but the tool I'm using needs to know where to put the files.
So far, I've been able to get this working by using dirname $(location outputfile), but this seems like a very fragile solution.

You can read about which make variables are available in a genrule here:
https://docs.bazel.build/versions/master/be/make-variables.html
In particular:
@D: The output directory. If there is only one filename in outs, this
expands to the directory containing that file. If there are multiple
filenames, this variable instead expands to the package's root
directory in the genfiles tree, even if all the generated files belong
to the same subdirectory! If the genrule needs to generate temporary
intermediate files (perhaps as a result of using some other tool like
a compiler), then it should attempt to write the temporary files to @D
(although /tmp will also be writable), and to remove any such
generated temporary files. Especially, avoid writing to directories
containing inputs - they may be on read-only filesystems, and even if
they aren't, doing so would trash the source tree.
In general, if the tool lets you (or if you're writing your own tool), it's best to give the tool the individual input and output file names. For example, if the tool understands inputs only as directories, that's usually OK as long as the directory contains only the things you want; if it doesn't, you have to rely on sandboxing to show the tool only the files you want, or manually create temporary directories. Outputs as directories give you less control over what the outputs are named, and you still have to enumerate the files in the genrule's outs.
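For illustration, here is a minimal genrule sketch, assuming a hypothetical tool and file names (mytool, data.txt, report.html), that passes the output directory to the tool via $(@D) instead of dirname $(location ...):
genrule(
    name = "gen_report",
    srcs = ["data.txt"],
    outs = ["report.html"],
    # In a real rule, mytool would typically be listed in the genrule's
    # tools attribute. With a single entry in outs, $(@D) expands to the
    # directory that must contain report.html.
    cmd = "mytool --input $(location data.txt) --output-dir $(@D)",
)
If you declare several files in outs, $(@D) instead points at the package's output root, so the tool still has to produce files at the exact relative paths listed in outs.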

Related

Bazel Starlark: how can I generate a BUILD file procedurally?

After downloading an archive through http_archive, I'd like to run a script to generate a BUILD file from the folder structure and CMake files in it (I currently do that by hand, and it is easy enough that it could be scripted). I can't find anything on how to open, read and write files in the Starlark documentation, but since http_archive itself is loaded from a bzl file (I haven't found the source of that file yet, though...) and generates BUILD files (by unpacking them from archives), I guess it must be possible to write a wrapper for http_archive that also generates the BUILD file?
This is a perfect use case for a custom repository rule. That lets you run arbitrary commands to generate the files for the repository, along with some helpers for common operations like downloading a file over HTTP using the repository cache (if configured). A repository rule is conceptually similar to a normal rule, but with much less infrastructure, because it runs during the loading phase, when most of the Bazel infrastructure doesn't apply yet.
The Starlark implementation of http_archive is in http.bzl. The core of it is a single call to ctx.download_and_extract. Your custom rule should do that too. http_archive then calls workspace_and_buildfile and patch from util.bzl, which do what they sound like. Instead of workspace_and_buildfile, you should call ctx.execute to run your command to generate the BUILD file. You could call patch if you want, or skip that functionality if you're not going to use it.
The repository_ctx page in the documentation is the top-level reference for everything your repository rule's implementation function can do, if you want to extend it further.
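As a rough sketch only (the rule name, attribute names and the generator script are placeholders, not a drop-in replacement for http_archive), such a repository rule could look something like this:
# my_archive.bzl
def _my_archive_impl(ctx):
    # Download and unpack the archive, as http_archive does.
    ctx.download_and_extract(
        url = ctx.attr.urls,
        sha256 = ctx.attr.sha256,
        stripPrefix = ctx.attr.strip_prefix,
    )
    # Run an arbitrary command to generate the BUILD file from the unpacked
    # folder structure and CMake files (instead of workspace_and_buildfile).
    result = ctx.execute([ctx.path(ctx.attr.build_file_generator), "."])
    if result.return_code != 0:
        fail("BUILD file generation failed: " + result.stderr)

my_archive = repository_rule(
    implementation = _my_archive_impl,
    attrs = {
        "urls": attr.string_list(mandatory = True),
        "sha256": attr.string(),
        "strip_prefix": attr.string(),
        # A script checked into your main repository that writes a BUILD file
        # into the current (repository) directory.
        "build_file_generator": attr.label(mandatory = True, allow_single_file = True),
    },
)
You would then load my_archive in your WORKSPACE file and call it like http_archive, passing the label of your generator script.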
When using http_archive, you can use the build_file argument to create a BUILD file. To generate it dynamically, I think you can use the patch_cmds argument to run external commands.
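For example, in the WORKSPACE file (the URL, checksum, prefix and file names below are placeholders):
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "some_lib",
    urls = ["https://example.com/some_lib-1.0.tar.gz"],
    sha256 = "...",  # fill in the real archive checksum
    strip_prefix = "some_lib-1.0",
    # Either point at a BUILD file checked into your own repository...
    build_file = "//third_party:some_lib.BUILD",
    # ...or run commands in the unpacked tree after extraction, e.g. to
    # generate one (the script name here is hypothetical):
    # patch_cmds = ["./generate_build.sh > BUILD.bazel"],
)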

File match exclude pattern working in Copy Files task but not in Delete Files task?

As part of my TFS 2018 build, I want to move files using the Copy Files and Delete Files tasks together. I am using the same file match patterns in both of these tasks, but I seem to be getting different behavior.
Scenario: My TFS build copies build artifacts to a network folder \\some\path\Beta. In my build, this path is saved as a variable, $(NetworkPath). The root folder of these artifacts is a version number that, of course, changes on each build. I want to clean this folder up by adding a $(NetworkPath)\PreviousVersions folder that holds all previous versions, so that the only version shown in "Beta" is the most recent build.
My Attempt: I've added a Copy Files task (first) and a Delete Files task (second) to my build. My idea is to copy everything in $(NetworkPath) into $(NetworkPath)\PreviousVersions (excluding the contents of $(NetworkPath)\PreviousVersions), before I do the second copy to put the new version into $(NetworkPath).
In the task definitions, the Source Folder of both tasks is $(NetworkPath), and the file matching patterns I've defined in the Contents field for both tasks are:
**\**
!PreviousVersions\**
The Target Folder in the Copy Files task is, naturally, $(NetworkPath)\PreviousVersions.
Results: With these search paths, the Copy Files task works properly -- it copies everything that is in $(NetworkPath) but is not in $(NetworkPath)\PreviousVersions and puts it in $(NetworkPath)\PreviousVersions. The subsequent Delete Files task, though, deletes everything from $(NetworkPath), including the entire $(NetworkPath)\PreviousVersions folder. I expected it to only delete the files and folders in $(NetworkPath) but not in $(NetworkPath)\PreviousVersions.
What am I missing here?
Here are the workaround file matching patterns I found to achieve the behavior I wanted:
For the Copy Files task: *.*.*.*\**
For the Delete Files task: *.*.*.*
Clearly, this exploits the fact that the files I wanted to move have a root folder whose name is a version number (e.g. 2.5.0.11), so this solution is not applicable to many people.
That said, here are some things that helped me narrow down my issue and solution:
globtester is a handy little minimatch pattern tester.
When dealing with these two tasks, setting debug = true when queuing the build will give you more useful logs about what the match patterns are actually doing.
For most scenarios, Daniel Mann's comment above is applicable, and I will be discussing such a change in the future.

How to keep DCU files in a per-project folder (e.g. .\$(Platform)\$(Config)\$(ProjectFilename))?

I have a lot of Delphi projects in a project group. I can set the Unit output directory to .\$(Platform)\$(Config), and all DCU files are then kept in a directory according to the platform and config values.
In my build environment, I would like to set the Unit output directory to something like .\$(Platform)\$(Config)\$(ProjectFilename), so that all DCU files are kept in their own directory, identified by the current project file.
The Build Events page in Project | Options has a $(ProjectFilename) macro, but I can't use it in the Unit output directory.
I want to set .\$(Platform)\$(Config)\$(ProjectFilename) as the Unit output directory for all projects so that each project's DCU files are kept in a unique project directory.
I found this answer by chance. I picked one project and built it with msbuild at diagnostic verbosity. By studying the msbuild output, I simply picked a variable, MSBuildProjectName, and specified it in my optset file shared by 300 projects:
<DCC_DcuOutput>.\$(Platform)\$(Config)\$(MSBuildProjectName)</DCC_DcuOutput>
Then I tried building all projects in the IDE. Amazingly, Delphi created a folder for each project built and kept each project's DCU files in its own folder.
The Build Events commands support a range of macros, some of which are equivalent to environment variables.
The DCU Output folder setting supports only environment variables and not these macros.
Possible Alternative Approach
To get a per-project DCU folder you can take a different approach, making the DCU output folder a subfolder of the current project, e.g.:
Unit Output Directory: .\dcu
(or perhaps just "dcu", but I prefer to include the ".\" if only to make it clear that the relative setting is intentional)
This achieves the objective of keeping the DCUs for each project separate from each other, but it means you no longer have all DCUs in a separate location outside of the project folder.
You can of course still use the $(platform) and $(config) variables in this relative path, if this is important to you:
Unit Output Directory: .\dcu\$(platform)\$(config)
Whether this is an acceptable compromise only you can say in your situation.
Often the intention of keeping DCUs in a location other than the project folder is to:
keep the project folder "clean"
avoid having to maintain a long list of "ignore" entries for each DCU file in version control (Subversion, Git, etc.)
Keeping DCUs in a project subfolder achieves the first of these, and the second issue is much simplified: you can add just the DCU subfolder to the VCS ignore list, ignoring any file in that folder.

TFS MSBuild Copy Files from Network Location Into Build Directory

We are using TFS to build our solutions. We have some help files that we don't include in our projects as we don't want to grant our document writer access to the source. These files are placed in a folder on our network.
When the build kicks off we want the process to grab the files from the network location and place them into a help folder that is part of source.
I have found an activity in the xaml for the build process called CopyDirectory. I think this may work but I'm not sure what values to place into the Destination and Source properties. After each successful build the build is copied out to a network location. We want to copy the files from one network location into the new build directory.
I may be approaching this the wrong way, but any help would be much appreciated.
Thanks.
First, you might want to consider having your documentation author place his documents in TFS. You can give him access to a separate folder or project without granting access to your source code. The advantages of this are:
Everything is in source control. Files dropped in a network folder are easily misplaced or corrupted, and you have no history of changes to them. The ideal for any project is that everything related to the project is captured in source control so you can lift out a complete historical version whenever one is needed.
You can map the documentation to a different local folder on your build server such that simply executing the "get" of the source code automatically copies the documentation exactly where it's needed.
The disadvantage is that you may need an extra CAL for him to be able to do this.
Another (more laborious) approach is to let him save to the network location, and have a developer check the new files into TFS periodically. If the docs aren't updated often this may be an acceptable compromise.
However, if you wish to copy the docs from the network during your build, you can use one of the MSBuild Copy commands (as you are already aware), or you can use Exec. The copy commands are more complicated to use because they are often populated with filename lists generated from the outputs of other build targets, and are usually used with solution-relative pathnames. But if you're happy with DOS commands (xcopy/robocopy), then you may find it much easier just to use Exec to run an xcopy/robocopy command. You can then "develop" and test the xcopy command outside the MSBuild environment and just paste it into the MSBuild script with confidence that it will work - much easier than trialling copy settings as part of your full build process.
Exec is documented here. The example shows pretty well how to do what you want, but in your case you can probably just replace the Command attribute with the entire xcopy/robocopy command (or even the name of a batch file) you want to use, so you won't need to set up the ItemGroup etc.

Comparison between two big directories

I have a large directory that contains only material on CS and maths. It is over 16 GB in size. The file types are text, png, pdf and chm. I currently have two branches: my brother's and mine. The initial files were the same. I need to compare them. I have tried to use Git, but the loading time is very long.
What is the best way to compare two big directories?
[Mixed Solution]
Do a "ls -R > different_files" in both directories [1]
"sdiff <(echo file1 | md5deep) <(echo file2 | md5deep)" [2]
What do you think? Any drawbacks?
[1] thanks to Paul Tomblin
[2] great thanks to all repliers!
Use fslint (see its website). One of the tool's options is "Duplicates". As per the description from the site:
One of the most commonly used features of FSlint is the ability to find duplicate files. The easiest way to remove lint from a hard drive is to discard any duplicate files that may exist. Often a computer user may not know that they have four, five, or more copies of the exact same song in their music collection under different names or directories. Any file type whether it be music, photos, or work documents can easily be copied and replicated on your computer. As the duplicates are collected, they eat away at the available hard drive space. The first menu option offered by FSlint allows you to find and remove these duplicate files.
How to compare 2 folders without pre-existing commands/products:
Simply create a program that scans each directory and computes a hash of each file, writing out one line per file with the relative file path and the file hash (a minimal sketch is shown after these steps).
Run this program on both folders.
Then compare the two output files to see if they are the same; since each is just text, you can load them into strings and do a plain string compare (or diff them).
The hashing algorithm you use doesn't matter. You can use MD5, SHA, CRC, ...
You could also use the file size in the output files to help reduce the chance of collisions.
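A minimal sketch of that approach in Python (the script name and output format are made up; SHA-256 is used here, but any hash would do):
# dir_listing.py -- write "relative path <TAB> size <TAB> sha256" for every
# file under a directory, sorted, so two listings can be compared directly.
import hashlib
import os
import sys

def hash_file(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def listing(root):
    lines = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            lines.append("%s\t%d\t%s" % (rel, os.path.getsize(full), hash_file(full)))
    return "\n".join(sorted(lines)) + "\n"

if __name__ == "__main__":
    # Usage: python3 dir_listing.py /path/to/dir > listing.txt
    sys.stdout.write(listing(sys.argv[1]))
Run it once per folder, redirecting each output to a file, then diff (or string-compare) the two files; identical output means both trees contain the same files with the same contents.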
How to compare 2 folders with pre-existing commands/products:
Now, if you just want a program that does it, use diff -r, or WinDiff on Windows-based systems.
Use md5deep to create recursive md5sum listings of every file in those directories.
You can then use a diff tool to compare the generated listings.
Are you just trying to discover what files are present in one that aren't in the other, and vice versa? A couple of suggestions:
Do a "ls -R" in both directories, redirect to files, and diff the files.
Do a "rsync -n" between them to see what rsync would have to copy if it were to be allowed to copy. (-n means don't do the rsync, just show you what it would do if you ran it without the -n)
I would do the diffing by comparing the output of md5sum * | sort.
That will point you to the files that are different or missing.
I know this question has already been answered; however, if you are not into writing such a tool yourself, there's a well-working open source project by the name of tardiff, available on SourceForge, which basically does exactly what you want and even supports automated creation of patches (in tar format, obviously) to account for differences.
Hope this helps
