Referencing local package from external package - bazel

What label can be used to reference a local package from an external package's BUILD file?
Say I have package A, which is my top-level package. In A's WORKSPACE file, I pull in external package B, onto which I overlay a BUILD.bazel file using the build_file argument.
A's cc_library rule does not actually depend on B.
The A.Tests rule depends on A and on B.
B has a dependency on A as well.
In the BUILD file that I defined for B, how do I reference A? No label I tried seemed to work. Is this possible?

If A.Tests depends on B (and A), and B also depends on A, why are A and B separate?
To answer your question, you need to create a third workspace C and declare both A and B as external workspaces; then A's targets can reference @B//x:y and B's targets can reference @A//z:w.

Inside of B's BUILD file (the one specified with the build_file argument), I can reference A via this label: @//<path_to_A>
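A minimal sketch of that setup, with hypothetical paths and target names (the repository rule you use to fetch B depends on where B actually lives):

```python
# WORKSPACE of A (hypothetical example using a local checkout of B)
new_local_repository(
    name = "B",
    path = "/path/to/B",        # wherever B's sources live
    build_file = "//:B.BUILD",  # the BUILD file overlaid onto B
)
```

```python
# B.BUILD, the file overlaid onto external workspace B
cc_library(
    name = "b",
    srcs = glob(["src/*.cc"]),
    # "@//" (an empty repository name) refers back to the main workspace,
    # so this points at a target inside A. Path and target name are made up.
    deps = ["@//some/path/to/A:a"],
)
```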


Can I get bazel to trigger extra actions after a target is built?

I have a bazel rule that produces an artifact. What should I do to add a post-processing step that takes the produced artifact as a dependency?
We have a big build system where our macros are used in several BUILD files. So now I need to add another step that would use the artifacts produced by a specific macro to create another artifact, and hopefully without having to update all BUILD files.
In a non-bazel context I would probably use something that triggers the extra step, but in the bazel context, the best thing I have come up with has been to add a new macro that uses the rule created by the other macro as a dependency.
It is something like this today:
Macro M1 generates rule R1, which produces artifact A.
Buildfile B uses macro M1, and when that target is built, artifact A is produced.
So I can now add a macro M2 that generates rule R2, which produces artifact B. Artifact A is a dependency of this rule. The users will use macro M2 instead.
But can I do this in some other way?
An example use case: I have a macro that produces a binary, and I now want to add e.g. signing. "The users" will still want to build that binary, and the signed artifact is created as a by-product of little interest to them.
You could update M1 to call M2.
M1 calling M2 merely declares rules. Typically macros look like this:
def M1(name, arg1, ...):
    R1(name=name, arg1=arg1, ...)
When you build the M1 rule "//foo:bar", you actually build R1 named "//foo:bar". So you must update M1 to declare R1 under some name other than name, e.g. name + "dep", and call M2 with name, passing R1's name as a dependency. Then if you build "//foo:bar", you'll build M2's underlying rule (R2), which depends on R1, so Bazel first builds R1 (producing A) and then R2 (consuming A).
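A hedged sketch of that rewiring (all names here are made up; R1, R2, and M2 stand in for whatever your actual rules and macros are):

```python
# Hypothetical sketch of M1 delegating the user-facing name to M2.
def M1(name, srcs):
    # R1 now gets an internal name; it still produces artifact A.
    R1(
        name = name + "_dep",
        srcs = srcs,
    )
    # M2 owns the user-facing name, so building //foo:bar builds R2,
    # which consumes R1's output.
    M2(
        name = name,
        dep = ":" + name + "_dep",
    )
```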
One more thing: Bazel preprocesses macros into actual rules, before it loads the rules in the BUILD file. You can inspect the result of this preprocessing, to see what rules you actually have in the package, like so:
bazel query --output=build //foo:*

Can I set different macro value in cc_library depend on different cc_binary?

There is a cc_library target named 'L', and cc_binary targets 'A', 'B', 'C' depend on it.
Library L implements a transaction framework; it contains a plain char array of length 100, for example, and lots of complicated logic on the array.
Now target B needs a larger data size, but targets A and C want a smaller size to hold more transactions at the same time.
When using a makefile, a doable way is to use #ifdef/#else in L to set different macro values for the length, then loop over A, B and C, building each with a different -D value. So lib L will have a different array length in the three different binaries.
Is there a better way to implement it?
Can I do the same thing in bazel?
You can follow exactly the same approach:
use defines on the cc_library to create multiple versions of the library (such as "L_complex_transactions", on which A depends, and "L_many_transactions", on which B and C depend);
or, better, use a config_setting plus a select statement on the cc_library to pick the appropriate define.
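A sketch of the config_setting/select approach (target names, the define key, and the array lengths are all illustrative):

```python
# BUILD file sketch; build the large variant with:
#   bazel build //:B --define buffer=large
config_setting(
    name = "large_buffer",
    values = {"define": "buffer=large"},
)

cc_library(
    name = "L",
    srcs = ["l.cc"],
    defines = select({
        ":large_buffer": ["ARRAY_LEN=1000"],
        "//conditions:default": ["ARRAY_LEN=100"],
    }),
)

cc_binary(name = "A", deps = [":L"])
cc_binary(name = "B", deps = [":L"])
cc_binary(name = "C", deps = [":L"])
```

Note that with a plain --define flag, the chosen value applies to the whole invocation, so L is rebuilt per configuration rather than once per binary in a single build.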

SAS - results of %let and libname combinations

I'm currently learning SAS, and I've been given some code to adapt and reuse, but first I have to understand it. My question is about just a small part of it (the top of the code). Here it is:
%let dir=/home/user/PROJECT/CODES/;
%let dir_project=/home/user/PROJECT/;
libname inp "&dir_project" compress=yes;
libname out "&dir.out" compress=yes;
%let tg=out.vip;
My questions are:
What does &dir.out mean? What is it referring to? I suppose it's something called "out". Is it looking for a database OUT? If yes, and all my databases are usually temporary ones in WORK, should I change it to WORK.OUT?
What is the resulting path of "tg"? I doubt that it is: "/home/user/PROJECT/CODES/out.vip".
Originally the code was referring to some locations on C: drive but I work entirely in SAS Studio so I have to adapt it.
Thank you in advance
The first two statements define two macro variables, DIR and DIR_PROJECT. In the second two statements you use those macro variables to define two librefs, INP and OUT. The last statement just defines another macro variable named TG.
Macro variable references start with & and are followed by the name of the macro variable to expand. SAS stops looking for the macro variable name when it sees a character that cannot be part of a macro variable name, or a period. That is why the first libname statement uses the value of the DIR_PROJECT macro variable instead of the DIR macro variable. The period in the second libname statement tells SAS that you want to replace &dir. with the value of the macro variable DIR. If you had instead just written &dirout then SAS would look for a macro variable named DIROUT.
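For example (given the %let statements above), you can watch the resolution in the log with %put; this is just an illustration of the period-delimiter rule:

```sas
%put &dir.out;   /* writes /home/user/PROJECT/CODES/out to the log        */
%put &dirout;    /* no DIROUT macro variable exists, so SAS logs          */
                 /* "WARNING: Apparent symbolic reference DIROUT not      */
                 /* resolved." and leaves the text &dirout unexpanded     */
```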
Macro variables just contain text. The meaning of the text depends on what SAS code you generate with them. So the first two macro variables look like they contain absolute paths to directories on your Unix file system, since they start from the root node / and end with a /. This is confirmed by how you use them to generate libname statements.
By adding the constant text out after the path in the second libname statement the result is that you are looking for a sub-directory named out in the directory that the value of the macro variable DIR names.
As for the last macro variable TG what it means depends on how it is used. Since it is of the form of two names separated by a period then it looks like it can be used to refer to a SAS dataset. Especially since the first name is the same as one of the librefs that you defined in the libname statements. So you might use that macro variable in code like this:
proc print data=&tg ; run;
Which would be expanded into:
proc print data=out.vip ; run;
In that case you are looking for the SAS dataset named VIP in the library named OUT. So you would be looking for the Unix file named:
/home/user/PROJECT/CODES/out/vip.sas7bdat
Now if you used that macro variable in some SQL code like this:
select &tg ...
Then it would expand to
select out.vip ...
and in that case you would be referencing a variable named VIP in an input dataset named (or aliased as) OUT.
1 - &dir. is a macro variable. The period marks the end of the variable, and thus &dir.out resolves to /home/user/PROJECT/CODES/out at runtime. Your libname statement will now link the libref out to this physical location.
2 - the tg variable is a dataset reference, in the form "library.dataset". Here, out is the library, and vip is the dataset. This way you can write code such as:
data &tg.;
set sashelp.class;
run;
To create the dataset vip in the out library.
In this way, you are in fact (almost) right. The resulting path of &tg. (which resolves to out.vip) will be /home/user/PROJECT/CODES/out/vip.sas7bdat.

referencing local dart libraries

I am writing 4 dart libraries A, B, C, and D and they are all early in development so I don't want to publish them on pub yet.
It is also the case that only A, B and C are public libraries, which all depend on D, which should be private to just those three libraries. How do I manage a situation like this?
Can I use pub to install library D for A, B and C on my local development machine while it isn't published? And how do I publish A, B and C when they are complete without publishing D, seeing as D is not particularly useful if it isn't being used by A, B or C?
I have read the getting started guide and package structure documentation but neither seemed to cover this type of situation, i.e. how to manage private libraries. There is also this SO question but the user didn't answer it after resolving her issue.
By default, dependencies resolve against Pub, but you can override that to import packages from URLs, git repositories, and local paths.
For instance, the following is the syntax for importing a local package:
dependencies:
  transmogrify:
    path: /Users/me/transmogrify
See Pub Dependencies for more info.
As far as the other part of your question, I don't see how A, B, and C can logically be public packages and rely on a private package. I would publish all of the packages and just include in the description of D that it is not meant as a standalone package and is only intended as a helper for A, B, and C.
You could also publish A, B, and C to Pub and host D on github or a public URL and specify the relevant URL dependency in the pubspec for A, B, and C (see the above link for the proper syntax). This might make the differentiation between D and the other libraries a bit clearer, though in practice they will all still be publicly available packages.
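For the A/B/C/D layout described in the question, each public package could point at D with a path dependency during development. A sketch (directory layout and package names are hypothetical; real pub package names are lowercase):

```yaml
# pubspec.yaml for package A during development,
# assuming D sits in a sibling directory on the local machine
name: a
dependencies:
  d:
    path: ../d
```

Before publishing A, the path dependency would have to be swapped for a git or hosted reference, since path dependencies only work on machines where that path exists.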

Which TFS Branch Strategy should be using for this scenario

I'm creating a library which is referenced by components in a tree like
Component A -> Component B
Component A -> Component C
Component B -> Component C
By branching A into B, and then B into C I can safely complete all my references. But, I ran into a case where the tree was a little more complicated.
Component A -> Component B
Component A -> Component C
Component B -> Component C
Component A -> Component D
Component D -> Component C
When I branch D into C, I have two instances of A.
The goal of branching each component is that the solution of C can be checked out with all dependencies in its folder structure, rather than having to check out the solution plus external folders that are referenced. Is there a better approach, and/or how would I resolve the second scenario?
We had a strategy like this and also ran into the same problem you did.
We ended up going back to using a lib folder and checking in built DLLs. Yes, you lose a few things, but it is much simpler and we have had no regrets.
Edit: we are now using NuGet for this. Highly recommended.
