Why must we run a build.dart script to develop with web_ui?
I thought this was a frequent newbie question about web_ui, but I haven't found an answer to it. Maybe I'm missing some web resources or articles.
With Angular.js or Polymer's MDV you don't need it, and they use bidirectional binding.
With future versions of web_ui or Chromium, will build.dart still be necessary?
This side of web_ui disappoints me a little, and I feel it could discourage developers from using it.
Another point: I don't like the project organisation, with HTML sources in "web" and a separate "web/out" directory. Can we configure the script to use different directories, such as "templates" for templates and "web" for output?
Thanks
If you want to use #observable, then you need to run a code generation step. Because Dart is a more structured language, it's not currently possible to add methods to or change the structure of an object at runtime. Therefore, we must run through a small code generation step that converts #observable into the code that tracks changes and sends notifications.
Polymer doesn't need this because it can alter objects at runtime. Also, Object.observe is landing in V8 (already landed?), which means the runtime performs observability automatically.
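For illustration, here's a minimal sketch of what the annotated source looks like (in the web_ui releases I know of the annotation is spelled @observable; exact syntax may vary by version):

// counter.dart
import 'package:web_ui/web_ui.dart';

// Before build.dart runs, this is a plain top-level field. The build
// step rewrites it into a getter/setter pair that records reads and
// fires change notifications on writes, so bound templates update.
@observable
int count = 0;

void increment() {
  count++; // after code generation, this notifies bound expressions
}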
We know this is a problem, and we have a few ideas on how to solve it:
1. Build a dev server that does the building for you automatically.
2. Implement (eventually) mirror builders, which would allow you to alter program structure at runtime.
Option 1 is a near-term solution, and option 2 is a long-term solution.
I'm looking for a framework that will let me write both an SPA and an embeddable library, and I would love a way to share components between the two. So I'm looking for a solution that has a relatively small surface for conflicts with other frameworks and with AngularDart itself, including the case where the library is included via a script tag (yes, two versions of AngularDart on the same page). I want a framework with few global objects, no overrides of standard objects, no global event handling, and limited polyfill conflicts.
Dart and AngularDart seem to be what I need, but I also need more details and docs to validate my assumptions. Anything you can point me to would be very helpful and greatly appreciated (issues, PRs, blogs, roadmaps, commits, specs, docs).
It's possible to run multiple AngularDart apps on the same page; I've tested the AngularDart todo example app embedded in itself. But I need more details on what dart2js is doing and how the compiler avoids polluting the global scope.
Yes, AngularDart should be well suited for your requirements.
Dart itself shouldn't pollute your scope at all. You can run dart2js on something trivial (like a bare print inside main) and inspect the output: it creates a closure and executes it, so nothing inside is accessible from outside. There is also no patching of any global JS objects, so you can run it alongside anything without interference. If that's not the case, file a bug.
You can run as many AngularDart applications on a single page as you wish. To get them fully isolated, compile each one separately with dart2js; then they won't be able to access any of each other's internals whatsoever.
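A quick way to check this yourself (file names here are just examples):

// main.dart: the smallest possible program to inspect.
void main() {
  print('hello');
}

// Compile it with: dart2js --out=main.dart.js main.dart
// Reading main.dart.js shows the whole program wrapped in a single
// self-invoking function, so none of its names end up in the global
// JavaScript scope.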
I'm getting started with FunScript and have a working example. Using NuGet to add the needed libraries, it works well.
In a 2013 video on Channel 9, they make use of TypeScript.Api<...> to load types from TypeScript definition files.
However, I'm unable to find this type provider anywhere.
Where is it located?
I realize that a good number of the type definitions have been compiled into libraries and are available on NuGet, but I can't really use those, since some of the code will be in local TypeScript definition files.
The questions therefore are:
Where is the TypeScript.Api<...> type provider?
If it is not available, what other options exist for using TypeScript definitions?
As Thomas said, the type provider was removed mainly because it couldn't generate generic types, but the idea is to bring it back at some point.
For the moment, though it's not ideal, you can generate your own bindings by following these steps.
Download or clone the FunScript repository:
git clone https://github.com/ZachBray/FunScript
Build the project:
cd FunScript
build.cmd
This needs to be improved, but for now you need to zip the .d.ts files you want to convert and then:
cd build\TypeScript
bin\FunScript.TypeScript.exe C:\Path\to\typedefinitions.zip
cd Output
Please note that the first time you build the definitions it may take several minutes. Once it's done, you'll find the compiled .dll libraries with the bindings in the Output folder.
Also, while you're at it, it's better to use the FunScript version you just built (in build\main\bin), as it will probably be more up to date than the NuGet package.
Good luck and have fun(script)!
There were a bunch of changes in FunScript, so the TypeScript.Api<...> type provider is no longer the recommended way of calling JavaScript libraries from FunScript.
Instead, the bindings for JavaScript libraries are pre-generated and you can find them as packages on NuGet, if you search for the FunScript tag (NuGet search is not very good, so you may need to go through a number of pages to find the one you need...).
If you want to use a local TypeScript definition, then you'll need to run the command line tool to generate the bindings. The F# Atom plugin does this in the build script, so looking there is a good place to start. It has a local copy of various TypeScript bindings in the typings folder (together with the FunScript binaries needed to process them).
I liked the type provider approach much better, but sadly, type providers are somewhat restricted in what kind of types they can provide, so it wasn't all that powerful...
Let's say I'm making an app where I need to push updates to the client side regularly. Most of these updates would only ever affect my own code, never the libraries that form my dependencies. As far as I'm aware, the transformer used when invoking pub build takes all my libraries, dependencies, and whatever else, and compiles them together into a single web/main.dart.js file. This reminds me somewhat of static linking, as in languages like C++.
For obvious reasons, compiling like this is something you'd only want to do when finally deploying an app. For most people, it suffices to use Dartium and work off the .dart hierarchy directly. But what if I were testing in a JavaScript browser, trying out dart:js code, for instance? I wouldn't want to recompile all of Angular and friends when they go entirely untouched. My specific case is the desire to use CDN services for the static files within my app.
AngularDart, as my example, contributes a massive 22,000 lines to my compiled JavaScript, and if I change one little thing within my own app, I can kiss 304 NOT MODIFIED goodbye for that bulk of untouched Angular code, let alone any CDN savings.
All that being said, is there a way to decouple dependencies in a transformed Dart application? Can I "dynamically link" my Dart libraries? And furthermore, could I hypothetically distribute my Dart libraries as compiled .dart.js files, for use in other transformed code?
No, it doesn't work this way. That could work only if the deployable contained the entire dependencies, and that's not the case.
pub build includes only the code that's actually used in your app (tree shaking), and this is usually only a small fraction of a dependency. Changing a single character in your code can require including different code from your dependencies. I think tree shaking has a much bigger effect than caching could have.
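A tiny illustration of the effect (the file and function names are made up):

// util.dart
int used() => 1;
int unused() => 2; // never referenced anywhere

// main.dart
import 'util.dart';

void main() {
  print(used());
  // dart2js tree-shakes unused() out of the compiled JS entirely.
  // If this line later becomes print(unused()), the output has to
  // pull in different code, even though util.dart never changed,
  // which is why caching the dependency's output separately buys
  // little.
}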
I have a large application to manage, consisting of three or four executables and as many as fifty .dlls. Many of the source code files are shared across many of the projects.
The problem is a familiar one to many of us: if I change some source code, I want to be able to identify which of the binaries will change and, therefore, what it is appropriate to retest.
A simple approach would be to compare file sizes. That is an 80% acceptable solution, but there is at least a theoretical possibility of missing something. Secondly, it gives me very little indication as to WHAT has changed; it would be ideal to get some form of report on this so I can then filter out irrelevant changes (e.g. dates/versions/copyrights etc.).
On the plus side:
all my .dcus are in a row - I mean they are all built into a single folder
the build is controlled by a script (.bat)(easy, for example, to emit .obj files if that helps)
svn makes it easy to collect together any (two) revisions for comparison
On the minus side:
There is no policy to include all used units in all projects; some units get included because they are on a search path.
Just knowing that a changed unit is used/compiled by a project is not sufficient proof that the binary is affected.
Before I begin writing some code to solve the problem, I would like to ask the panel what suggestions they might have on how to approach this.
The rules of Stack Overflow forbid me from asking for recommended software, but if anyone has had positive experiences with continuous integration tools that would help: great.
I am open to any suggestion or observation that is relevant in this context.
It seems to me that your question boils down to knowing which units are contained in your various executables. Since you are using search paths, it will be hard for you to work this out ahead of time. The most robust way to find out is to consult the .map file that the compiler emits. This contains a list of all units contained in your executable.
Once you know which units are contained in each executable, you need to know whether or not anything has changed in those units. That information is contained in your revision control system. Put this all together and you have the information that you need.
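As a sketch of putting those two together (any scripting language would do; Dart is used here to match the rest of this page), assuming a detailed .map file whose segment entries carry M=<unit> tokens:

import 'dart:io';

// Prints the units recorded in a Delphi .map file by scanning for
// M=<module> tokens in the detailed segment map. Map layouts vary
// between compiler versions, so treat this as a starting point,
// not a parser for every .map dialect.
void main(List<String> args) {
  final units = <String>{};
  final token = RegExp(r'\bM=(\S+)');
  for (final line in File(args.first).readAsLinesSync()) {
    final match = token.firstMatch(line);
    if (match != null) units.add(match.group(1)!);
  }
  units.forEach(print);
}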
Of course, just because the source code for a unit has changed, you might argue that re-testing is not needed. Perhaps the only change made was the version, or the date in a copyright label or some such. But it is asking too much to be able to ask a computer to make such a judgement. At some point you need a human to step up and take responsibility.
What is odd about this though is that you are asking the question at all. It seems to me to be enormously risky to attempt partial testing. I cannot understand why you don't simply retest the entire product.
After using it for more than 10 years for commercial in-house and freelance work on large projects, I can recommend trying Apache Ant. It is a build tool which supports dependencies and has many very helpful features.
Apache Ant also integrates nicely with CI tools such as Hudson/Jenkins, Bamboo etc.
Another suggestion, based on experience with Maven, is to design the general software architecture to be as modular as possible. If modules (single or multiple source or DCU files in one directory) carry a version number in the directory name, it is possible to control exactly how applications are composed from these modules.
If you want to program such a tool yourself, the approach would be something like this:
First you need to detect whether any changes were made to individual source files. As you already figured out, comparing file sizes is a bad idea, since a file's size can stay the same despite lots of changes (as long as a .pas file contains the same amount of text, its size won't change). Instead you could check each file's last modification time, or compute a hash value such as MD5 for comparison (which can be quite slow).
Then you need to build a dependency tree which tells you which files are used by which project/subproject.
Finally, based on the changes detected in individual files, you check the dependency tree to see which projects need to be recompiled, as in the sketch below.
The problem with such an approach is that you would probably have to update the dependency tree manually each time a new unit is added to a project or an existing one is removed.
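A minimal sketch of those three steps (the hashes use MD5 from package:crypto; the project map is the hand-maintained dependency tree mentioned above, and the stored hashes are placeholders):

import 'dart:io';
import 'package:crypto/crypto.dart';

// Hand-maintained dependency tree: which units each project compiles.
const projects = {
  'App.exe': ['main.pas', 'shared.pas'],
  'Tool.exe': ['tool.pas', 'shared.pas'],
};

String hashOf(String path) =>
    md5.convert(File(path).readAsBytesSync()).toString();

void main() {
  // Hashes recorded after the previous build (placeholders here).
  const previous = {
    'main.pas': '<old-hash>',
    'shared.pas': '<old-hash>',
    'tool.pas': '<old-hash>',
  };
  final changed =
      previous.keys.where((f) => hashOf(f) != previous[f]).toSet();
  for (final entry in projects.entries) {
    if (entry.value.any(changed.contains)) {
      print('${entry.key} needs recompiling and retesting');
    }
  }
}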
But the best way would be to use some version control software instead of reinventing the wheel. I myself like the way Git works, and I believe that a proper integration of Git into the project manager itself could be quite powerful, thanks to Git's support for branching/sub-branching (each project is its own branch; each version of your software can be its own sub-branch).
The latest version of Delphi does have Git integration, done through its SVN support, but this unfortunately limits some of Git's best functionality. So if you decide to integrate Git support directly into Delphi, I'm first in line to use it.
In this article it says: "The Dart VM reads and executes source code, which means there is no compile step between edit and run." Does that mean you can exchange source code on the fly in a running Dart system, as in Erlang? Perhaps the compiler is removed from the runtime system, in which case this would no longer be possible. That's why I'm asking.
Dart runs "natively" only in Dartium, a flavour of Chrome with the Dart VM. To ship an application to other browsers you still need to compile it to JavaScript. This way you get a fast development cycle, and in the end you compile the code to JS. Because it's compiled code, there is a lot more room for the compiler to run optimisations. So from my perspective, the compiler is still there, and I don't think you would be able to replace code at runtime.
You can send around source code and run it, but it would need to be in a separate isolate. Isolates do have some relationship to Erlang concepts.
The Dart VM doesn't support hot swapping (called "live edit" in V8). However, based on mailing-list discussions, it sounds like this is something the authors do want to support in the future.
However, as the others have mentioned, it is possible to dynamically load code into another isolate.
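For example, a hedged sketch of loading code into a fresh isolate (the worker URI and message shape are illustrative, not a fixed API of any framework):

import 'dart:isolate';

// Spawns the Dart program at the given URI in its own isolate. The
// spawned code gets its own heap and globals, so newly loaded code
// runs alongside the old program rather than patching it in place.
void main() async {
  final results = ReceivePort();
  await Isolate.spawnUri(
    Uri.parse('worker.dart'), // hypothetical script to load
    <String>[],               // args passed to the worker's main()
    results.sendPort,         // initial message for replies
  );
  results.listen(print);
}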