Referencing local Dart libraries

I am writing 4 Dart libraries, A, B, C, and D, all early in development, so I don't want to publish them on pub yet.
It is also the case that only A, B, and C are public libraries; all three depend on D, which should be private to just those three libraries. How do I manage a situation like this?
Can I use pub to install library D for A, B, and C on my local development machine while it isn't published? And how do I publish A, B, and C when they are complete without publishing D, seeing as D is not particularly useful unless it is being used by A, B, or C?
I have read the getting started guide and the package structure documentation, but neither seemed to cover this type of situation, i.e. how to manage private libraries. There is also this SO question, but the user didn't answer it after resolving her issue.

By default, dependencies are resolved against pub, but you can override that to pull packages from URLs, Git repositories, and local paths.
For instance, the following is the syntax for depending on a local package:
dependencies:
  transmogrify:
    path: /Users/me/transmogrify
See Pub Dependencies for more info.
As far as the other part of your question, I don't see how A, B, and C can logically be public packages and rely on a private package. I would publish all of the packages and just note in the description of D that it is not meant as a standalone package and is only intended as a helper package for A, B, and C.
You could also publish A, B, and C to pub and host D on GitHub or a public URL, specifying the relevant Git or URL dependency in the pubspecs of A, B, and C (see the above link for the proper syntax). This might make the differentiation between D and the other libraries a bit clearer, though in practice they will all still be publicly available packages.
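For instance, if D were hosted in a Git repository, the pubspec of each of A, B, and C could declare it roughly like this (the package name and URL are placeholders, not from the question):
dependencies:
  d_library:
    git: https://github.com/your-user/d_library.git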

Related

Can I get bazel to trigger extra actions after a target is built?

I have a Bazel rule that produces an artifact. How should I add a post-processing step that takes the produced artifact as a dependency?
We have a big build system where our macros are used in several BUILD files. So now I need to add another step that would use the artifacts produced by a specific macro to create another artifact, hopefully without having to update all the BUILD files.
In a non-Bazel context I would probably use something that triggers the extra step, but in the Bazel context, the best thing I have come up with is to add a new macro that uses the rule created by the other macro as a dependency.
It is something like this today:
Macro M1 generates rule R1, which produces artifact A.
BUILD file B uses macro M1, and when that target is built, artifact A is produced.
So I can now add a macro M2 that generates rule R2, which produces artifact B. Artifact A is a dependency of this rule. The users will use macro M2 instead.
But can I do this in some other way?
An example of a use case could be that I have a macro that produces a binary, and I now want to add e.g. signing. "The users" will still want to build that binary, and the signed artifact is created as a by-product of little interest to the users.
You could update M1 to call M2.
M1 calling M2 merely declares rules. Typically macros look like this:
def M1(name, arg1, ...):
    R1(name = name, arg1 = arg1, ...)
When you build M1's rule "//foo:bar", you actually build R1 named "//foo:bar". So you must update M1 to call R1 under some name other than name, e.g. name + "dep", and call M2 with the original name, passing R1's name as a dependency. Then if you build "//foo:bar", you'll build M2's underlying rule (R2), which depends on R1, and so Bazel first builds R1 (producing A) and then R2 (consuming A), as sketched below.
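A minimal sketch of that restructuring (the rule names and attributes are illustrative, not taken from your build):
def M1(name, arg1, **kwargs):
    # Build the original artifact under an internal name.
    R1(name = name + "dep", arg1 = arg1, **kwargs)
    # The public name now points at the post-processing rule, which
    # consumes R1's output, so building //foo:bar builds both.
    M2(name = name, dep = ":" + name + "dep")

def M2(name, dep, **kwargs):
    R2(name = name, src = dep, **kwargs)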
One more thing: Bazel expands macros into actual rules when it loads the BUILD file. You can inspect the result of this expansion to see what rules you actually have in the package, like so:
bazel query --output=build '//foo:*'

Referencing local package from external package

What label can be used to reference a local package from an external package's BUILD file?
Say I have package A, which is my top level package. In the WORKSPACE file of package A, I grab external package B, which I use the build_file argument to overlay a BUILD.bazel file onto.
A's cc_library rule does not actually depend on B.
The A.Tests rule depends on A and on B.
B has a dependency on A as well.
In the BUILD file that I defined for B, how do I reference A? No labels seemed to work. Is this possible?
If A.Tests depends on B (and A), and B also depends on A, why are A and B separate?
To answer your question: you would need to create a third workspace, C, and declare both A and B as external workspaces in it; then A's targets can reference @B//x:y and B's targets can reference @A//z:w.
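A rough sketch of what C's WORKSPACE file would contain (the paths are placeholders):
local_repository(
    name = "A",
    path = "../A",
)

local_repository(
    name = "B",
    path = "../B",
)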
Inside of B's BUILD file (specified with the build_file argument), I can reference A via this label: @//<path_to_A>

How to compare Rails "executables" before and after refactor?

In C, I could generate an executable, do an extensive rename-only refactor, then compare executables again to confirm that the executable did not change. This was very handy to ensure that the refactor did not break anything.
Has anyone done anything similar with Ruby, particularly a Rails app? Strategies and methods would be appreciated. Ideally, I could run a script that outputs a single file of some sort that is purely bytecode and is not changed by naming changes. I'm guessing JRuby or Rubinius would be helpful here.
I don't think this strategy will work for Ruby. Unlike C, where the compiler throws away the names, most of the things you name in Ruby carry that name with them. That includes classes, modules, constants, and instance variables.
Automated unit and integration tests are the way to go to support Ruby refactoring.
Interesting question -- I like the definitive "yes" answer you can get from this regression strategy, at least for the specific case of rename refactoring.
I'm not expert enough to tell whether you can compile Ruby (or at least a subset, without things like eval), but there seem to be some hints at:
http://www.hokstad.com/the-problem-with-compiling-ruby.html
http://rubini.us/2011/03/17/running-ruby-with-no-ruby/
Supposing that a complete compilation isn't possible, what about an abstract interpretation approach? You could parse the Ruby into an AST, emit some kind of C code from the AST, and then compile the C code. The C code would not need to fully capture the behavior of the Ruby code. It would only need to be compilable and to be distinct whenever the Ruby was distinct. (Actually running it could result in gibberish, or perhaps an immediate memory violation error.)
As a simple example, suppose that Ruby supported multiplication and C didn't. Then you could include a static mult function in your C code and translate from:
a = b + c*d
to
a = b + mult(c,d)
and the resulting compiled code would be invariant under name refactoring but would show discrepancies under other sorts of change. The mult function need not actually implement multiplication; you could have one of these instead:
static int mult( int a, int b ) { return a + b; }        // pretty close
static int mult( int a, int b ) { return *(int *)0; }    // not close at all, but still sufficient
and you'd still get the invariance you need as long as the C compiler isn't going to inline the definition. The same sort of translation, from an uncompilable Ruby construct to a less functional but distinct C construct, should work for object manipulation and so forth, mapping class operations into C structure references. The key point is just that you want to keep the naming relationships intact while sacrificing actual behavior.
(I wonder whether you could do something with a single C struct that has members (all pointers to the same struct type) named after all the class and property names in the Ruby code. Class and object operations would then correspond to nested dereference operations using this single structure. Just a notion; a toy version is sketched below.)
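A toy version of that notion (all names invented): the C compiler discards member names, so a pure rename in the Ruby source would yield identical object code, while a structural change would alter the dereference pattern.
/* Single struct whose members, all pointers back to the same type,
   are named after every class and property in the Ruby code. */
struct names {
    struct names *User;
    struct names *Account;
    struct names *balance;
};

/* Ruby's user.account.balance might be emitted as a chain of
   member dereferences like this. */
struct names *translate(struct names *n) {
    return n->User->Account->balance;
}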
Even if you cannot formulate a precise mapping, an imprecise mapping that misses some minor distinctions might still be enough to increase confidence in the original name refactoring.
The quickest way to implement such a scheme might be to map from bytecode to C (rather than from the Ruby AST to C). That would save a lot of parsing, but the mapping would be harder to understand and verify.

Go uses Go to parse itself?

I am starting a class project that regards adding some functionality to Go.
However, I am thoroughly confused on the structure of Go. I was under the impression that Go used flex and bison but I can't find anything familiar in the Go source code.
On the other hand, the directory go/src/pkg/go has folders with familiar names (ast, token, parser, etc.) but all they contain are .go files. I'm confused!
My request is: can anyone familiar with Go give me an overview of how Go is lexed, parsed, etc., and point me to the files to edit the grammar and whatnot?
The directory structure:
src/cmd/5* ARM
src/cmd/6* amd64 (x86-64)
src/cmd/8* i386 (x86-32)
src/cmd/cc C compiler (common part)
src/cmd/gc Go compiler (common part)
src/cmd/ld Linker (common part)
src/cmd/6c C compiler (amd64-specific part)
src/cmd/6g Go compiler (amd64-specific part)
src/cmd/6l Linker (amd64-specific part)
The lexer is written in plain C (no flex). The grammar is written in Bison:
src/cmd/gc/lex.c
src/cmd/gc/go.y
Many directories under src/cmd contain a doc.go file with a short description of the directory's contents.
If you are planning to modify the grammar, it should be noted that the Bison grammar sometimes does not distinguish between expressions and types.
lex.c
go.y
The Go compilers are written in C, which is why you see a Bison grammar there (the lexer is hand-written, so flex is not involved). The go/parser and go/ast packages are not used by the compiler itself. If you wanted to write a self-hosting compiler in Go, you could use those Go parsing packages.
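For a flavor of those packages, here is a minimal sketch that parses a Go source string with the standard go/parser and prints its top-level declarations:
package main

import (
	"fmt"
	"go/parser"
	"go/token"
)

func main() {
	src := "package main\nfunc add(a, b int) int { return a + b }"
	fset := token.NewFileSet()
	// "example.go" is only a label used for positions, not a real file.
	f, err := parser.ParseFile(fset, "example.go", src, 0)
	if err != nil {
		panic(err)
	}
	for _, decl := range f.Decls {
		fmt.Printf("%T\n", decl) // e.g. *ast.FuncDecl
	}
}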

Which TFS branch strategy should be used for this scenario?

I'm creating a library which is referenced by components in a tree like
Component A -> Component B
Component A -> Component C
Component B -> Component C
By branching A into B, and then B into C I can safely complete all my references. But, I ran into a case where the tree was a little more complicated.
Component A -> Component B
Component A -> Component C
Component B -> Component C
Component A -> Component D
Component D -> Component C
When I branch D into C, I have two instances of A.
The goal of branching each component is that the solution of C can be checked out with all dependencies in its folder structure, rather than having to check out the solution and the external folders which are referenced. Is there a better approach, and/or how would I resolve scenario 2?
We had a strategy like this and ran into the same problem you did.
We ended up going back to using a lib folder and checking in the built DLLs. Yes, you lose a few things, but it is much simpler and we have had no regrets.
Edit: we are now using NuGet for this. Highly recommended.
