I'm getting started with FunScript and have a working example. Using NuGet to add the needed libraries, it works well.
While watching a 2013 video on Channel 9, I saw them using TypeScript.Api<...> to load types from TypeScript definition files.
I'm however unable to find this type provider anywhere.
Where is it located?
I realize that a good number of the type definitions have been compiled into libraries and are available on NuGet, but I can't really use those, since some of my code will rely on local TypeScript definition files.
The questions therefore are:
Where is the TypeScript.Api<...> type provider?
If it is not available, or it is no longer the best way to use TypeScript definitions, what other options exist?
As Thomas said, the type provider was removed mainly because it couldn't generate generic types, but the idea is to bring it back at some point.
For the moment, though it's not ideal, you can generate your own bindings by following these steps.
Download or clone the FunScript repository:
git clone https://github.com/ZachBray/FunScript
Build the project
cd FunScript
build.cmd
This needs to be improved, but for now you need to zip the .d.ts files you want to convert and then:
cd build\TypeScript
bin\FunScript.TypeScript.exe C:\Path\to\typedefinitions.zip
cd Output
Please note that the first time you build the definitions it may take several minutes. Once it's done, you'll find the compiled .dll libraries with the bindings in the Output folder.
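For reference, pulling one of those generated assemblies into an F# script might look roughly like the sketch below; the dll names are hypothetical, so use whatever actually appears in Output and build\main\bin:
#r "build/main/bin/FunScript.dll"
#r "Output/FunScript.TypeScript.Binding.mylib.dll"  // hypothetical binding name

open FunScript

[<ReflectedDefinition>]
let entryPoint () =
    // call into the generated binding here; the exact types depend on the .d.ts you compiled
    ()

// Pass <@ entryPoint() @> to FunScript's compiler, as shown in the FunScript samples.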
Also, while you're at it, it's better to use the FunScript version you just built into build\main\bin, as it will probably be more up to date than the NuGet package.
Good luck and have fun(script)!
There were a bunch of changes in FunScript, so the TypeScript.Api<...> type provider is no longer the recommended way of calling JavaScript libraries from FunScript.
Instead, the bindings for JavaScript libraries are pre-generated and you can find them as packages on NuGet, if you search for the FunScript tag (NuGet search is not very good, so you may need to go through a number of pages to find the one you need...).
If you want to use a local TypeScript definition, then you'll need to run the command line tool to generate the bindings. The F# Atom plugin does this in the build script, so looking there is a good place to start. It has a local copy of various TypeScript bindings in the typings folder (together with the FunScript binaries needed to process them).
I liked the type provider approach much better, but sadly, type providers are somewhat restricted in what kind of types they can provide, so it wasn't all that powerful...
Related
I would like to use a very large non-Bazel system in a Bazel project; specifically, ROS2. This dependency provides a large number of Python, C, and C++ libraries which are built using its own hand-rolled build system. Obviously, I would like to avoid having to translate the entire build system over to Bazel.
Broadly, what's the best way of doing this? My instinct was to use a custom repository rule to download the source (since it's split across many repositories), then use a genrule to call the ROS2 build system. Then I'd write simple cc_import and py_library rules for each of the individual components that I need.
However, I'm having trouble with the bit where I need to call the foreign build system. It seems that genrules require a list of output files to be specified, while I would like it to make an entire build directory available.
Before I spend any more time on this, I thought I'd ask whether I'm on the right lines, since I'm new to Bazel. Is this a good strategy? How would you approach this problem? Are there any other projects that mainly use Bazel but call other build systems in this way that I can look at?
As of recently, you can use rules_foreign_cc to build native CMake or make/configure-style projects.
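For example, wrapping one of the CMake-based libraries might look roughly like the sketch below. Rule and attribute names have changed between rules_foreign_cc releases, and the target/library names here are hypothetical, so check the docs for the version you pin in your WORKSPACE.
# BUILD file sketch using rules_foreign_cc's cmake rule
load("@rules_foreign_cc//foreign_cc:defs.bzl", "cmake")

filegroup(
    name = "all_srcs",
    srcs = glob(["**"]),
)

cmake(
    name = "some_ros2_lib",                      # hypothetical target
    lib_source = ":all_srcs",
    out_static_libs = ["libsome_ros2_lib.a"],    # hypothetical output name
)

cc_binary(
    name = "consumer",
    srcs = ["main.cc"],
    deps = [":some_ros2_lib"],
)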
I am building a Spring project with Bower to manage client libraries. I am interested to know the best-practice way to expose those libraries (or any sort of client libraries managed by a package manager) to the web client.
I can see that I can use a .bowerrc file to choose where to install the files. I could have them install into a static resources folder, one where each of the files installed would be accessible to http requests. It struck me as a potential code smell, however, to expose all the files, instead of the ones that I specifically need.
I could copy individual files into such a directory, or adopt an automated solution to do the same. If this is not considered necessary, however, I would prefer not to expend the effort.
Which of these, or any other solution (if any), is considered the clear best-practice way to do this, and why? (Please provide a reference to support your answer.) To be clear, I am not interested in individual opinion, but rather whether there is a known, clearly preferred solution.
After looking at what a lot of projects and tutorials suggest, it seems that the clear way to do this is the following:
Use a framework like Grunt or Gulp to separate "built" code from source code. Built code, in this case, refers to code that is copied, minified, and/or concatenated into a separate folder. The Grunt or Gulp configuration file should include all application code, as well as selected source files from the Bower components. The running application should reference only these "built" files, and the directory of "built" client-side code should be served statically by Spring.
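As a rough illustration (the paths and library choices below are hypothetical), a Gulp task that copies only the Bower files you actually need into a statically served folder could look like this:
// gulpfile.js -- copy only the vendor files the application references
var gulp = require('gulp');

gulp.task('vendor', function () {
  return gulp.src([
      'bower_components/jquery/dist/jquery.min.js',
      'bower_components/bootstrap/dist/css/bootstrap.min.css'
    ])
    .pipe(gulp.dest('src/main/resources/static/vendor'));
});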
I've just written the first version of a workflow activity that will run Resharper's Code Issues on the projects and parse the output to display the issues as build warnings and errors.
At first, I was going to just call ReSharper's command line and parse the resulting XML manually. After fiddling with the dlls in ReSharper's SDK (mostly through disassembly), I found a way to parse the results using its own public classes, which I figured was a much more elegant and safe way to do this.
The first problem I have is that the NuGet package is absolutely huge. There are 140 MB of files in there, which to me is absurd for a single, unpartitioned package. There seems to be such heavy coupling between them that by using just a few model classes and the parser class, I have to drag a dozen or so of those dlls along, some of which seemingly have nothing to do with the main dlls I need. This is not a show stopper though; I'm struggling with something else now:
In the end, I managed to track the dependencies I needed down to 41 assemblies (which is, again, insane, but alas). Initially, I tried removing everything and adding the missing references one by one, but this turned out to be unreliable, still missing some indirect references even after compiling successfully. Then I decided to code a small console application to find all referenced assemblies in the main ReSharper assemblies I used, which gave me the 41 references I mentioned. This is the code I used to find every dependency.
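(The exact snippet isn't reproduced here, but the idea is a recursive walk over Assembly.GetReferencedAssemblies(); a minimal sketch, with the root assembly path passed as a hypothetical argument, looks like this:)
// Walks the reference graph of an assembly and prints every dependency it finds.
using System;
using System.Collections.Generic;
using System.Reflection;

class DependencyFinder
{
    static void Main(string[] args)
    {
        var seen = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        var queue = new Queue<Assembly>();
        queue.Enqueue(Assembly.LoadFrom(args[0]));   // e.g. one of the main ReSharper assemblies

        while (queue.Count > 0)
        {
            foreach (var reference in queue.Dequeue().GetReferencedAssemblies())
            {
                if (!seen.Add(reference.FullName))
                    continue;
                Console.WriteLine(reference.FullName);
                try
                {
                    queue.Enqueue(Assembly.Load(reference));  // resolved from the app folder or GAC
                }
                catch
                {
                    // ignore references that cannot be resolved locally
                }
            }
        }
    }
}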
Since these are custom activities we are talking about, I decided to create a unit test project to validate them. Using these 41 references only, everything works correctly.
When I added the activity to the build workflow, though, and pointed the build controller to the source control folder containing the required assemblies, every time I schedule a build the process fails, stating that I need one extra dll from ReSharper's SDK. For example, this is the first one it asks for:
Could not load file or assembly 'AsyncBridge.Net35, PublicKeyToken=b3b1c0202c0d6a87' or one of its dependencies. The system cannot find the file specified. (type FileNotFoundException)
When I add this specific assembly to the TFS folder, I get another similar error for another dll, and this keeps going on and on.
What I wanted to know is: how can I know exactly which assemblies a workflow XAML will need in order to run correctly? My custom activity dll has two specific CodeActivities and a XAML-only activity that uses these two. This XAML activity is what I'm directly using in the modified workflow template.
I see that besides the references in my project, the XAML activity also contains a TextExpression.ReferencesForImplementation section with some assembly names. I've run my dependency finder program on those dependencies too, and the results are the same 41 assemblies already in the TFS folder.
Meanwhile I'll go with having the whole SDK in the custom assemblies folder, but I would really like to avoid this in the future, since it contains such an enormous amount of unneeded and large dlls.
First, we had a request for our command line tool to support a workflow activity, and we decided to implement just a plain MSBuild task instead, which is universal and works in TFS too. The task and targets files are included in ReSharper CLT 8.2.
Second, if you still want to implement a workflow activity, it's pretty easy to do with the new API in CLT, designed specifically for custom processing of found issues: http://confluence.jetbrains.com/display/NETCOM/Custom+InspectCode+Issue+Logger.
And last, but not least, you do not need to put the binaries of the ReSharper SDK package in VCS.
Use NuGet's package restore functionality.
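For example, as part of the build you can restore the SDK package instead of committing its binaries (the solution name here is hypothetical):
nuget restore YourSolution.sln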
If you have any other questions I'll be glad to answer them.
A custom activity is loaded and run by the .NET CLR like any other .NET program. If the stack trace reports a missing file, then it's required by the CLR and you can't change this fact without refactoring your code.
Having an entire SDK referenced in the custom assemblies folder doesn't make sense. I would prefer GAC deployment over a huge binaries folder in source control. Or maybe consider running these activities as pre/post-build scripts in MSBuild or PowerShell.
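If you do go the GAC route, each assembly can be installed with gacutil (the assembly name below is hypothetical):
gacutil /i MyCustomActivities.dll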
So I really can't understand how this works, but let me explain. First, just in case you need it, I am running Ubuntu 12.04 64-bit on a laptop.
As a build tool I am using CMake. I want to load OpenCV, MRPT (http://www.mrpt.org/) and libfreenect into my project. All of them provide their source code. What I don't understand is what they mean by "build from source". How do I make a project with all of them?
Do I need to build each one individually and somehow bring it into my project, OR do I download the source code and build them all together at once? As you can see, I'm really confused about what I have to do... do I run the CMakeLists.txt from each source tree and then run one CMakeLists.txt that includes all the other CMakeLists.txt files?
In fewer words: if I want to build two or more libraries from source, how do I do that?
I would like a general answer (how this "build from source" process works) and an answer specifically about the ones I mentioned (CMake, OpenCV, MRPT, libfreenect). I hope I made clear what I don't really understand.
It depends on the 'master' project. In general, in the C/C++ universe, your project must either know how to invoke the build process of each subproject/library, OR it needs to know how to include and link the results after you build each external project yourself.
You can also mix the two approaches if needed, but I think it's cleaner to stick to one if possible.
In the first case, if all the subprojects offer CMake build files (CMakeLists.txt), you may try to add_subdirectory() each of them and see if there are any conflicts. For example, Google Test can easily be included this way, and it defines some global variables that make linking easier later.
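A minimal sketch of this first approach (the directory layout is hypothetical, with Google Test as the example subproject):
# Top-level CMakeLists.txt: reuse the subproject's own CMake build
add_subdirectory(external/googletest)      # subproject ships its own CMakeLists.txt

add_executable(my_app main.cpp)
target_link_libraries(my_app gtest_main)   # target defined by the subproject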
Alternatively, if the above approach causes problems (for example, if there are targets that conflict with your project) or the subproject doesn't provide a CMakeLists.txt, you can use ExternalProject_Add(). It takes more work and you have to handle include/link configuration for your project manually, but it makes the subproject more independent.
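A sketch of the ExternalProject_Add() approach; the repository URL and paths are illustrative only:
include(ExternalProject)

ExternalProject_Add(libfreenect_ext
    GIT_REPOSITORY https://github.com/OpenKinect/libfreenect.git
    PREFIX         ${CMAKE_BINARY_DIR}/external/libfreenect
    CMAKE_ARGS     -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/external/install
)

# includes/linking are then wired up manually against the install prefix
include_directories(${CMAKE_BINARY_DIR}/external/install/include)
link_directories(${CMAKE_BINARY_DIR}/external/install/lib)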
The last approach involves building and installing the subprojects separately and using configuration variables in your project to point to the subproject's include/library paths. Check "CMake: How To Find Libraries" for details.
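With this last approach, the libraries are built and installed beforehand and your project only locates them, for example:
# Assumes OpenCV was built and installed separately; CMake then finds its config script
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

add_executable(my_app main.cpp)
target_link_libraries(my_app ${OpenCV_LIBS})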
Installing third-party components always takes a long time, especially if you have large ones, and it takes even more time if you set up the environment on more than one computer.
So I'm thinking of adding them to version control (Subversion), so it will always be easy to check out the project with all its required components.
How do you manage that, and what's the best practice for managing them inside the VCS?
Also consider that some of these third-party components come without source, only as compiled Delphi libraries (BPLs).
If we have the source, then we include that in our repository, under a separate folder.
If we don't have the source, then we just keep the most recent binaries (bpl, dll, whatever) in the repository, and include installation / usage instructions in a setup document.
It looks like this:
\root
\third_party_stuff
\vendor1 --we *do* have the source for this
\src
\bin
\vendor2 --we *do* have the source for this
\src
\bin
\vendor3 --we don't have the source for this one
\bin
\our_stuff
\project1
\src
\bin
\project2
\src
\bin
With Subversion, I use the externals feature. It makes it easy to use the third-party stuff in multiple projects; when you check out a project, you get the external dependencies as well.
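For example (the repository URL below is only an example), the dependency can be declared as an svn:externals property on the third-party directory:
svn propset svn:externals "vendor1 https://svn.example.com/vendors/vendor1/trunk" third_party_stuff
svn commit -m "Pull vendor1 in as an external"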
If you don't have it already, you should get a copy of Pragmatic Version Control Using Subversion. It's a great book about Subversion functionality and how to do things. While it references SVN from the command-line, the info is also easily translatable to the GUI in TortoiseSVN.
For reinstalling the components into Delphi for older projects, I usually export the registry entries for whatever version of Delphi used into the project's folder and then check that .REG file into Subversion along with the project. You can easily check out the project, export your existing Delphi registry section for the corresponding version of Delphi, import the .REG file from your project source folder, and then start Delphi with all of the components installed.
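The export/import itself is just the reg tool; the exact key path depends on your Delphi version (the Delphi 7 key is shown here as an example):
reg export "HKCU\Software\Borland\Delphi\7.0" delphi7_components.reg
reg import delphi7_components.reg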
As far as the "binary BPL" issue, shame on you! If you have projects depending on third-party tools, you should buy the source for them. That way you're protected against that company going out of business, or dropping support for the components, or new releases of Delphi that aren't compatible. I always get source for third-party components; if the source isn't available, I find a different product or write the code myself. It's called self-preservation. :-)
Firstly, I'll agree with both Ken and Fabricio that you must have the source code for all components you are using in a project. Anything else is just asking for trouble.
We don't use Subversion for our Source Control, but I'm guessing what we do would still be applicable...
Each project that we work on has a full copy of all components (source) used in that project. When we release, we create a release branch that includes the components as well as the project source. Each project includes its own BPL directory.
We always create separate shortcuts to run Delphi for each project (or branch of a project) that we want to work on, and use the -R command-line parameter to set a unique Registry key for the Delphi settings for that project.
We then make sure we override the Path environment variable within Delphi to point to our project BPL directory instead of the normal Delphi BPL directory.
We set the BPL and DCP output directories for all components to be the local project BPL directory.
This allows us to have multiple versions of Delphi, with multiple versions of projects using different versions of components without any problems.
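Concretely, one of those per-project shortcuts might have a target along these lines (the install path and registry key name are hypothetical):
"C:\Program Files\Borland\Delphi7\Bin\delphi32.exe" -rProjectX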
I agree with Ken White on this: Delphi 3rd-party components used in production code
must have the source code.
Period. Compiled binaries-only distributions are for evaluation purposes ONLY. It's our policy here.
As for the question: I actually do not put them in VCS. I just use the latest version with which my projects compile and work. The mess with system, search, library, etc. paths isn't worth it. Two copies of the JVCL on the same machine, or switching back and forth between versions for every new project? ARRRRGH.
If I have to use an old version for a maintenance system, I drop a new VM and install the required version there. Does it work? OK. If not, it stays on the VM until I discover a way to integrate it into the main environment.
One version of each thing is more than enough.
It's worth mentioning that some companies, like LMD, offer remote access to their own SVN repository for customers with a support subscription. I find that a good way of getting fast bugfixes for critical issues.