Bower & AMD modules - dependency injection

The case
I'm trying to get to the most convenient solution of setting up a big javascript project.
Requirements are:
Modular javascript: just one object in the global namespace if necessary
Compatible with bower components
Compatible with grunt: building and deployment done by grunt (contrib-usemin or contrib-requirejs)
To my own surprise, this turns out to be a non-trivial task.
I'm running into the following issues when using AMD:
Loading bower components cannot always be done easily. Raphael, for example, cannot be loaded using AMD without modifying the source, which is really not an option when using bower, since I only push the dependency list to git. Also: JavaScript libraries that do not support AMD can be shimmed, but those that consist of more than one file (like jQuery UI) are problematic; I would need to hack that together.
The RequireJS optimizer builds everything into one file, not allowing the option to separate libraries from site scripts - something that seems a sane thing to do.
When I'm NOT using AMD I run into other issues:
How do I control dependencies in a large project?
A possible solution
So I'm contemplating a solution that would:
Keep it portable, don't force AMD on future users
Prevent global namespace clutter
Remain compatible with bower
Allow usemin to build the whole lot in grunt
It would consist of a small script defining a `require( <deps>, <factory> )` and a `define( <name>, <deps>, <factory> )` function that implements basic module definition and injection. It would not implement any asynchronous loading or queuing of scripts with unmatched dependencies!
Furthermore, I will define every module using the named-module pattern instead of anonymous modules, even though this sacrifices a minimal amount of portability.
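A minimal sketch of such an injector might look like this (the names and structure are illustrative, not a finished library). It assumes all scripts are loaded in dependency order via plain script tags, so no asynchronous loading or queuing is needed:

```javascript
// Minimal synchronous module registry - a sketch, not a real library.
var registry = {};

function define(name, deps, factory) {
  // Resolve dependencies immediately; they must already be defined,
  // which is why script order matters.
  var resolved = deps.map(function (dep) { return registry[dep]; });
  registry[name] = factory.apply(null, resolved);
}

function require(deps, factory) {
  factory.apply(null, deps.map(function (dep) { return registry[dep]; }));
}

// Usage: register a named module, then consume it.
define('greeter', [], function () {
  return { greet: function (who) { return 'Hello, ' + who; } };
});

require(['greeter'], function (greeter) {
  console.log(greeter.greet('world')); // logs "Hello, world"
});
```

Since everything resolves synchronously, a missing dependency simply comes through as `undefined` rather than being queued, which keeps the script tiny.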
Now I can use either RequireJS or that tiny dependency injector in combination with manual <script src=""></script> loading. When using the latter option I would still need to register the loaded non-AMD libraries using something like this:
define( 'raphael', [], function() { return Raphael; });
What do you think? Am I doing something sane? Reinventing the wheel? Unnecessarily complex?
Update
I think I could use almond (https://github.com/jrburke/almond) to fulfill the above-mentioned purpose.

Loading bower components cannot always be done easily. Raphael for example cannot be loaded using AMD without modifying the source
You could use the shim config from RequireJS to load modules that are not normally AMD-loadable. (Or is Raphael a really special case?)
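For reference, a shim entry along these lines (the paths are illustrative, assuming a typical bower_components layout) exposes a non-AMD global as a module:

```javascript
// RequireJS shim config sketch - paths are placeholders.
requirejs.config({
  paths: {
    raphael: 'bower_components/raphael/raphael'
  },
  shim: {
    // Tell RequireJS which global the script exports.
    raphael: { exports: 'Raphael' }
  }
});
```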
The RequireJS optimizer builds everything into one file, not allowing the option to separate libraries from site scripts. Something that seems a sane thing to do.
That's not true, IMHO. Read http://requirejs.org/docs/optimization.html#wholemultipage
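As a sketch of what that page describes, an r.js build config can emit several layers instead of one file (module names here are hypothetical):

```javascript
// r.js build config sketch: one shared library layer plus a
// per-page layer that excludes everything already in the shared one.
({
  baseUrl: 'js',
  dir: 'build',
  modules: [
    { name: 'lib/common' },
    { name: 'app/main', exclude: ['lib/common'] }
  ]
})
```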

Related

Does compiled AngularDart pollute the global scope or override standard objects of the browser?

I'm looking for a framework that will allow me to write a SPA and an embeddable library, and I would love a way to share components between both. So I'm looking for a solution that has relatively few potential conflicts with other frameworks and with AngularDart itself, including the case where the library has been included using a script tag - yes, two versions of AngularDart on the same page. A framework that has few global objects, no standard-object overrides, no global event handling, and limited polyfill conflicts.
Dart and AngularDart seem to be what I need, but I also need more details and docs to validate my assumptions. Anything you are able to point out would be very helpful and greatly appreciated (issues, PRs, blogs, roadmap, commits, specs, docs).
It's possible to run multiple AngularDart apps on the same page - I've tested the AngularDart todo example app embedded in itself. But I need more details on what dart2js is doing and how the compiler avoids global-scope pollution.
Yes, AngularDart should be well suited for your requirements.
Dart itself shouldn't pollute your scope at all. You can try running dart2js on something trivial (like just a print inside main) and inspect the output - it creates a closure and executes it, so nothing inside is accessible from outside. There is also no patching of any global JS objects, so you can run it alongside anything without interference. If that's not the case, file a bug.
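To illustrate the pattern being described (this is a hand-written illustration, not actual dart2js output): the compiled program is wrapped in an immediately-invoked function, so its internals never reach the global scope:

```javascript
// Illustrative shape of closure-wrapped output: everything lives
// inside an IIFE and is invisible from the outside.
(function () {
  var internalState = 'not visible outside';
  function main() {
    console.log('app started');
  }
  main();
})();
// Out here, typeof internalState is 'undefined'.
```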
You can run as many AngularDart applications on a single page as you wish. To get them fully isolated, compile each one separately with dart2js; then they won't be able to access any of each other's internals whatsoever.

Handling complex and large dependencies

Problem
I've been developing a game in C++ in my spare time and have opted to use Bazel as my build tool, since I have never had a ton of luck (or fun) working with make or CMake. I also have dependencies in other languages (Python for some of the high-level scripting). I'm using glfw for basic window handling and high-level graphics support, and that works well enough, but now comes the problem: I'm uncertain how I should handle dependencies like glfw in a Bazel world.
For some of my dependencies (like gtest and fruit) I can just reference them in my WORKSPACE file and Bazel handles them automagically, but glfw hasn't adopted Bazel. So all of this leads me to ask: what should I do about dependencies that don't use Bazel inside a Bazel project?
Current approach
For many of the simpler dependencies I have, I simply created a new_git_repository entry in my WORKSPACE file and created a BUILD file for the library. This works great until you get to really complicated libraries like glfw that have a number of dependencies of their own.
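For context, that approach looks roughly like this (the labels, tag, and file layout are placeholders, and a real glfw BUILD file would need platform-specific sources and defines):

```
# WORKSPACE - fetch the library and overlay our own BUILD file.
new_git_repository(
    name = "glfw",
    remote = "https://github.com/glfw/glfw.git",
    tag = "3.2.1",                        # placeholder version
    build_file = "//third_party:glfw.BUILD",
)

# third_party/glfw.BUILD - heavily simplified sketch.
cc_library(
    name = "glfw",
    srcs = glob(["src/*.c"]),
    hdrs = glob(["include/GLFW/*.h"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
```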
When building glfw for a Linux machine running X11, you now have a dependency on X11, which means adding X11 to my Bazel setup. X11 comes with its own set of dependencies (the X11 libraries like X11Cursor) and so on.
glfw also tries to provide basic joystick support, which is provided by default on Linux - great! Except that this support comes from the kernel, which means the kernel is also a dependency of my project. Although I shouldn't need anything more than the kernel headers, this still seems like a lot to bring in.
Alternative Options
The reason I took the approach I've taken so far is to make the dependencies required to spin up a machine that can successfully build my game very minimal. In theory they just need a C/C++ compiler, Java 8, and Bazel and they're off to the races. This is great since it also means I can create a Docker container that has Bazel installed and do CI/CD really easily.
I could sacrifice this ease and just say that you need libraries like glfw installed before attempting to compile the game, but that brings back the whole "which version is installed and how is it configured" problem that Bazel is supposed to help solve.
Surely there is a simpler solution and I'm overthinking this?
If the glfw project has no BUILD files, then you have the following options:
Build glfw inside a genrule.
If glfw supports some other build system like make, you could create a genrule that runs the tool. This approach has obvious drawbacks, like the not-to-be-underestimated impracticality of having to declare all inputs of that genrule, but it'd be the simplest way of Bazel'izing glfw.
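A sketch of what that could look like, assuming glfw ships a make-based build (every name and path here is illustrative, and declaring all the inputs is exactly the painful part mentioned above):

```
# Run the library's own build inside a genrule - sketch only.
genrule(
    name = "build_glfw",
    srcs = glob(["glfw-src/**"]),   # every input file must be declared
    outs = ["libglfw3.a"],
    cmd = "make -C $$(dirname $(location glfw-src/Makefile)) && " +
          "cp $$(dirname $(location glfw-src/Makefile))/src/libglfw3.a $@",
)
```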
Pre-build glfw.o and check it into your source tree.
You can create a cc_library rule for it, and put the .o file in the srcs. Even though this solution is the least flexible of all because you not only restrict the target platform to whatever the .o was built for, but also make it harder to reproduce the whole build, the benefits are sometimes worth the costs.
I view this approach as a last resort. Even in Bazel's own source code there's one cc_library.srcs that includes a raw object file, because it was worth it, as the commit message of 92caf38 explains.
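Wrapping a checked-in object file might look like this (the layout under third_party/ is hypothetical):

```
# A pre-built object checked into the tree, wrapped for consumers.
cc_library(
    name = "glfw_prebuilt",
    srcs = ["third_party/glfw/glfw.o"],          # pre-built object file
    hdrs = glob(["third_party/glfw/include/GLFW/*.h"]),
    includes = ["third_party/glfw/include"],
    visibility = ["//visibility:public"],
)
```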
Require that glfw be installed.
You already considered this option. Some people may prefer this to the other approaches.

Best Practices exposing Bower components

I am building a Spring project with Bower to manage client libraries. I am interested to know the best-practice way to expose those libraries (or any client libraries managed by a package manager) to the web client.
I can see that I can use a .bowerrc file to choose where to install the files. I could have them install into a static resources folder, one where each of the files installed would be accessible to http requests. It struck me as a potential code smell, however, to expose all the files, instead of the ones that I specifically need.
I could copy individual files into such a directory, or adopt an automated solution to do the same. If this is not considered necessary, however, I would prefer not to expend the effort.
Which of these, or any other solution (if any), is considered the clear best-practice way to do this, and why? (Please provide a reference to support your answer.) To be clear, I am not interested in individual opinion, but rather whether there is a known, clearly preferred solution.
After looking at what a lot of projects and tutorial suggest, it seems that the clear way to do this is the following:
Use a framework like Grunt or Gulp to separate "built" code from source code. Built code, in this case, refers to code that is copied, minified, and/or concatenated into a separate folder. The Grunt or Gulp configuration should include all application code, as well as select source files from bower components. The running application should reference only these "built" files, and the directory of "built" client-side code should be served statically by Spring.

What are the very specific advantages to WebJars compared to simply using Bower?

I am new to WebJars and so far have failed to see any real advantages that they bring to the table compared to simply using Bower for example. What justifies the idea of putting frontend JS into backend JARs?
Please name at least one or more "objective" advantages to using WebJars.
WebJars are useful when you have a JVM-based project and you want to use only one dependency manager / build tool.

Does a transformed Dart app need to be one file?

Let's say I'm making an app where I need to push updates to the client side regularly. Most of these updates would only ever affect my own code, never the libraries that form my dependencies. As far as I'm aware, the transformer used when invoking pub build will take all my libraries, dependencies, and whatever else, and compile them together into a single web/main.dart.js file. This reminds me somewhat of static linking, as with languages like C++.
For obvious reasons, compiling like this is something you'd only want to do when finally deploying an app. For most people, it suffices to use Dartium and work off the .dart hierarchy directly. However, what if I were testing in a JavaScript browser, trying out dart:js code, for instance? I wouldn't want to recompile all of Angular and friends when they go entirely untouched. My specific case is the desire to use CDN services for static files within my app.
AngularDart, as my example, contributes a massive 22,000 lines to my compiled JavaScript, and if I change one little thing within my own app, I can kiss 304 NOT MODIFIED goodbye, let alone CDN savings, for that bulk of untouched Angular.
All that being said, is there a way to decouple dependencies in a transformed Dart application? Can I "dynamically link" my Dart libraries? And furthermore, could I hypothetically distribute my Dart libraries as dart.js files, for use in other transformed code?
No, it doesn't work this way. This could work if the deployable contained the entire dependencies, but that's not the case.
pub build includes only the code that's actually used in your app (tree shaking), and this is usually only a small fraction of a dependency. Changing a single character in your code can require including different code from your dependencies. I think tree shaking has a much bigger effect than caching could have.
