How to test a Dart pub package before deploying?

What is the best way to test a pub package before deploying, as if I had downloaded it via pub install? (I'm not talking about unit tests.)

You can use path packages. Instead of going through a pub server, this fetches the package from the local filesystem.
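For example, in the consuming app's pubspec.yaml (my_package and its path are placeholders for your own):

dependencies:
  my_package:
    path: /path/to/my_package

Run pub install (or pub get) in the consuming app, and the dependency resolves against your local working copy instead of a published version.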

It very much depends on the type of package you are looking to use.
If the pub package is primarily a non-UI library, then you should be able to exercise its API via a unit test script: a small script with a main() that kicks off a bunch of unit tests (grouped or otherwise).
Another option for a non-UI package is to find the source project (usually noted on the package's page on pub.dartlang.org) and download it; if you're lucky, there will be a test directory with a unit test script in it.
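Such a script might look roughly like this, using the unittest package of that era (my_package and parseConfig are hypothetical stand-ins for the package under test):

import 'package:unittest/unittest.dart';
import 'package:my_package/my_package.dart';

main() {
  group('config parsing', () {
    test('empty input yields defaults', () {
      // parseConfig is a hypothetical API from the package being exercised.
      expect(parseConfig(''), isNotNull);
    });
  });
}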
Some UI-providing packages include unit tests in their project too.
A lot of projects include an example or two that you can run to see how the package works and to pick up tips from its source code, so I encourage you to check out the original source of the project you're interested in.
But generally (especially for UI-providing packages), you'll get the best results by creating a small skeleton app just for playing with the package, and then applying what you learn to your main application.
Hope that helps.

Related

FunScript: How to access TypeScript.Api<...>

I'm getting started with FunScript and have a working example. Using NuGet to add the needed libraries, it works well.
In a 2013 video on Channel 9, they make use of TypeScript.Api<...> to load types from TypeScript definition files.
I'm however unable to find this type provider anywhere.
Where is it located?
I realize that a good number of the type definitions have been compiled into libraries and made available on NuGet, but I can't really use those, since some of the code will be local TypeScript definition files.
The questions therefore are:
Where is the TypeScript.Api<...> type provider?
If it is not available, what other options exist for working with TypeScript definitions?
As Thomas said, the type provider was removed mainly because it couldn't generate generic types, but the idea is to bring it back at some point.
For the moment, though it's not ideal, you can generate your own bindings by following these steps.
Download or clone the FunScript repository:
git clone https://github.com/ZachBray/FunScript
Build the project:
cd FunScript
build.cmd
This needs to be improved, but for now you need to zip the .d.ts files you want to convert and then:
cd build\TypeScript
bin\FunScript.TypeScript.exe C:\Path\to\typedefinitions.zip
cd Output
Please note that the first time you build the definitions, it may take several minutes. Once it's done, you'll find the compiled .dll libraries with the bindings in the Output folder.
Also, while you're at it, it's better to use the FunScript version you just built (in build\main\bin), as it will probably be more up to date than the NuGet package.
Good luck and have fun(script)!
There were a bunch of changes in FunScript, so the TypeScript.Api<...> type provider is no longer the recommended way of calling JavaScript libraries from FunScript.
Instead, the bindings for JavaScript libraries are pre-generated, and you can find them as packages on NuGet if you search for the FunScript tag (NuGet search is not very good, so you may need to go through a number of pages to find the one you need...).
If you want to use a local TypeScript definition, then you'll need to run the command line tool to generate the bindings. The F# Atom plugin does this in the build script, so looking there is a good place to start. It has a local copy of various TypeScript bindings in the typings folder (together with the FunScript binaries needed to process them).
I liked the type provider approach much better but, sadly, type providers are somewhat restricted in the kinds of types they can provide, so it wasn't all that powerful...

TFS Build custom activity requiring more assemblies than needed

I've just written the first version of a workflow activity that runs ReSharper's Code Issues on the projects and parses the output to display the issues as build warnings and errors.
At first, I was going to just call ReSharper's command line and parse the resulting XML manually. After fiddling with the DLLs in ReSharper's SDK (mostly through disassembly), I found a way to parse the results using its own public classes, which I figured was a much more elegant and safe way to do this.
The first problem I have is that the NuGet package is absolutely huge. There are 140 MB of files in there, which to me is absurd for a single, unpartitioned package. There seems to be such heavy coupling between them that, by using just a few model classes and the parser class, I have to drag a dozen or so of those DLLs along, some of which seemingly have nothing to do with the main DLLs I need. This is not a show stopper though; I'm struggling with something else now:
In the end, I managed to track the dependencies I needed down to 41 assemblies (which is, again, insane, but alas). Initially, I tried removing everything and adding the missing references one by one, but this turned out to be unreliable, still missing some indirect references even after compiling successfully. Then I decided to code a small console application that finds all assemblies referenced by the main ReSharper assemblies I use, which gave me the 41 references I mentioned. This is the code I used to find every dependency.
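Such a walker can be sketched roughly as follows (an illustrative reconstruction, not the asker's actual code; it assumes the root assemblies and their dependencies can be resolved from the current directory):

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

class DependencyWalker
{
    static void Main(string[] args)
    {
        var seen = new HashSet<string>();
        foreach (var path in args)
            Walk(Assembly.LoadFrom(path).GetName(), seen);
        foreach (var fullName in seen)
            Console.WriteLine(fullName);
    }

    static void Walk(AssemblyName name, HashSet<string> seen)
    {
        if (!seen.Add(name.FullName)) return;
        try
        {
            // Loading the assembly lets us read its own reference list.
            foreach (var reference in Assembly.Load(name).GetReferencedAssemblies())
                Walk(reference, seen);
        }
        catch (FileNotFoundException)
        {
            // The reference could not be resolved from the probing paths.
            Console.WriteLine("MISSING: " + name.FullName);
        }
    }
}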
Since these are custom activities we are talking about, I decided to create a unit test project to validate them. Using these 41 references only, everything works correctly.
When I added the activity to the build workflow, though, and pointed the build controller to the source control folder containing the required assemblies, every time I schedule a build the process fails, stating that I need one extra DLL from ReSharper's SDK. For example, this is the first one it asks for:
Could not load file or assembly 'AsyncBridge.Net35, PublicKeyToken=b3b1c0202c0d6a87' or one of its dependencies. The system cannot find the file specified. (type FileNotFoundException)
When I add this specific assembly to the TFS folder, I get another similar error for another dll, and this keeps going on and on.
What I wanted to know is: how can I know exactly which assemblies a workflow XAML will need in order to run correctly? My custom activity DLL has two specific CodeActivities and a XAML-only activity that uses these two. This XAML activity is what I'm using directly in the modified workflow template.
I see that, besides the references in my project, the XAML activity also contains a TextExpression.ReferencesForImplementation section with some assembly names. I've run my dependency finder program on those dependencies too, and the results are the same 41 assemblies already in the TFS folder.
Meanwhile, I'll go with having the whole SDK in the custom assemblies folder, but I would really like to avoid this in the future, since it contains such an enormous amount of unneeded, large DLLs.
First, we had requests for our command line tool to support workflow activities, and we decided to implement a plain MSBuild task instead, which is universal and works in TFS too. Task and targets files are included in ReSharper CLT 8.2.
Second, if you still want to implement a workflow activity, it's pretty easy to do with the new API in the CLT, designed specifically for custom processing of found issues: http://confluence.jetbrains.com/display/NETCOM/Custom+InspectCode+Issue+Logger.
And last, but not least, you do not need to put the binaries of the ReSharper SDK package in VCS.
Use NuGet's package restore functionality.
If you have any other questions I'll be glad to answer them.
A custom activity is loaded and run by the .NET CLR like any other .NET program. If the stack trace reports a missing file, then it's required by the CLR, and you can't change that fact without refactoring your code.
Having the entire SDK's references in the custom assembly folder doesn't make sense. I would prefer GAC deployment over a huge binaries folder in source control. Or maybe consider having these activities run as pre/post-build scripts in MSBuild or PowerShell.

What exactly is "building" from source and how does it work?

So I really can't understand how this works, but let me explain. First, just in case you need it, I am running Ubuntu 12.04 64-bit on a laptop.
As a build tool I am using CMake. I want to bring OpenCV, MRPT (http://www.mrpt.org/) and libfreenect into my project. All of them come as source code. What I don't understand is what they mean by "build from source". How do I make a project with all of them?
Do I need to build each one individually and somehow pull the results into my project, OR do I download the source code and build it all together at once? As you can see, I'm really confused about what I have to do... do I run the CMakeLists.txt from each source tree and then run one CMakeLists.txt that includes all the other CMakeLists.txt files?
In fewer words: if I want to build two or more libraries from source, how do I do that?
I would like a general answer (how this "building from source" works) and an answer specifically for the ones I mentioned (CMake, OpenCV, MRPT, libfreenect). I hope I made clear what I don't really understand.
It depends on the 'master' project. In general, in the C/C++ universe, your project must either know how to invoke the build process of each subproject/library, OR know how to include and link the results after you build each external project yourself.
You can also mix the two approaches if needed, but I think it's cleaner to stick to one if possible.
In the first case, if all the subprojects offer CMake build files (CMakeLists.txt), you may try to add_subdirectory() each one and see if there are any conflicts. For example, Google Test can easily be included this way, and it gives your project some global variables that make linking easy later.
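A minimal sketch of that approach, assuming a bundled copy of googletest (the directory layout and the myapp names are illustrative):

cmake_minimum_required(VERSION 2.8)
project(myapp)

# Build the bundled googletest as part of this project's own build.
add_subdirectory(external/googletest)

add_executable(myapp_tests test/main.cpp)
# gtest and gtest_main are targets defined by googletest's own CMakeLists.txt.
target_link_libraries(myapp_tests gtest gtest_main)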
Alternatively, if the above approach causes problems, or if the subproject doesn't provide a CMakeLists.txt, or if its targets conflict with your project's, you can use ExternalProject_Add(). It takes more work, and you have to handle include/link configuration manually, but it keeps the subproject more independent.
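Roughly, that looks like this (the repository URL and install prefix are placeholders to adapt):

include(ExternalProject)
# Fetch and build libfreenect out of tree, installing it under the build dir.
ExternalProject_Add(libfreenect_ext
  GIT_REPOSITORY https://github.com/OpenKinect/libfreenect.git
  CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps)
# With this approach, includes and libraries are wired up manually.
include_directories(${CMAKE_BINARY_DIR}/deps/include)
link_directories(${CMAKE_BINARY_DIR}/deps/lib)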
The last approach involves building and installing the subprojects separately, then using configuration variables in your project to point at the subproject's include/library paths. Check "CMake: How To Find Libraries" for details.
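For example, once OpenCV has been built and installed on the system, a consuming CMakeLists.txt typically locates it like this (myapp is a placeholder target):

# OpenCV installs a config script that find_package() picks up.
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(myapp src/main.cpp)
target_link_libraries(myapp ${OpenCV_LIBS})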

How can I define a third Java source folder for Maven which gets compiled into a third JAR?

By default, Maven standard directory layout has two Java source folders:
src/main/java
src/test/java
For my purposes, I need a third one, src/junit/java, which should be packaged into a JAR with the classifier junit.
If possible, the new source folder should have its own classpath (compile + everything with scope junit).
My guess is that for this, I will have to modify at least the resource and compile plugins.
Or is there an easier way?
I have a workaround as explained here, but for that I have to put things like Mockito and JUnit on the compile classpath, which violates my sense of purity.
For all people who doubt the wisdom of my approach: I have support code that helps with writing unit tests against the code from src/main/java. Since I'm using the same support code in the tests for the project itself, this code needs to be compiled after src/main/java and before src/test/java.
Specifically, my support code needs to import code from src/main/java and the tests need to be able to import the support code.
I've seen a couple of Maven setups which bundle test code in a Maven module of its own. You could then create a simple main-module <- support-module <- test-module dependency chain with that (see the sketch below). But then main-module would still compile fine if you built it on its own, without test-module. Of course, you could aggregate them with a reactor POM and just build the project via this POM.
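A sketch of such a reactor POM (the group and artifact IDs are placeholders; support-module would declare a dependency on main-module, and test-module on support-module):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>parent</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>
  <modules>
    <module>main-module</module>
    <module>support-module</module>
    <module>test-module</module>
  </modules>
</project>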
Edit:
If you have problems with code coverage in this setup, you can use the JaCoCo Maven plugin to aggregate the test coverage generated by test-module into main-module. See this for further information: http://www.petrikainulainen.net/programming/maven/creating-code-coverage-reports-for-unit-and-integration-tests-with-the-jacoco-maven-plugin/

Dealing with shared helpers in Common Test suites?

I've got an Erlang project comprising a bunch of different applications. I'm using Common Test to do some of the testing.
apps/foo/suites/foo_SUITE.erl
apps/bar/suites/bar_SUITE.erl
I'm starting to see duplication of utility code in those suites.
Where should I put my utility code so that it can be shared between the two suites?
I've considered adding another application:
apps/test_stuff
...but I can't make the CT suites depend on this without making the application under test depend on it as well (or can I?). I don't want to do that, because test_stuff is only needed when testing.
I have a similar problem with my eunit tests, both between applications (apps/foo/test vs. apps/bar/test) and where I'm using similar functionality between the eunit and CT tests in the same application (apps/bar/suites vs. apps/bar/test). Can I use the same solution for this case as well, or do I need to ask another question about that?
Do you think ct:require/1,2 could help you, so that the foo and bar SUITEs would require test_stuff before they get executed? For more information: http://www.erlang.org/doc/man/ct.html#require-1
It depends on how you are packaging your final releases. For example, I use rebar for release management. I have Cowboy fetched along with other dependencies for testing purposes, but I omit it in my reltool.config, so it doesn't get packaged with the final product. I use rebar to run Common Test, and it's able to add Cowboy to the path without it being bundled as a lib with everything else or added as a dependency of the app I'm testing.
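For example, the test-only dependency can be listed in rebar.config while being left out of reltool.config (the version tag here is illustrative):

%% rebar.config: fetched by rebar for tests, but never listed in reltool.config
{deps, [
    {cowboy, ".*",
     {git, "https://github.com/ninenines/cowboy.git", {tag, "1.0.0"}}}
]}.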
However, if you have another process which infers your release configuration from your dependencies, you'll have to find a way to exclude your test code when you generate a release.
