Trying to understand requirejs, shim and dependencies while updating code - jquery-ui

Short version:
I'm updating some old libs to try to get them into AMD/require.js format for dependency management, but some of them have dependencies on old code.
Main Question:
I'm primarily confused as to what to list in the:
define(['what','goes','here'], function (what, needs, here) {})
and what goes in the shim dependencies list when dealing with combinations of AMD and non-AMD tools, and things like jquery-ui and jquery plugins.
ADDITIONAL INFO
The problem:
One of the older libraries depends on .draggable() from (an older version of) jquery-ui, an old version of a jquery plugin called 'onScreen', and a spinner modal called spin.js -- none of which are AMD-friendly. (I also implemented an update to an AMD-friendly new version of dropzone.)
Two of the older libraries also use a modal library called vex, which in turn depends on vex.dialog. The existing site has an old version that is uglified.
I'm trying not to completely revamp this code, as the longer-term goal is to remove those dependencies entirely, but I may not have the time now to figure out what they are doing.
I've tried every combination of define(['list','of','stuff']) I can think of, but some of the libraries, like spin (class Spinner), vex/vex.dialog and onScreen, still don't always load properly. (Sometimes I get one, but then lose another.)
Can I define a shim AND include a list of AMD modules in the define? And if so, do I include the AMD list of dependencies in the shim in require.config? What goes where and why?
My libraries:
ImageSelector (requires AwsHelper, Utilities and ImageLayout below)
-- uses jquery (AMD), dropzone (AMD) and an old jquery plugin called jquery.onscreen.js (non-AMD)
-- depends on vex and vex.dialog (non-AMD)
-- uses .draggable() from old jquery-ui (non-AMD)
-- calls a global function 'loadSpinner' which uses spin.js (non-AMD -- see Utilities below)
ImageLayout (requires AwsHelper and Utilities; has an attached instance of ImageSelector as a property, .selector, for methods that work in conjunction with the selector)
-- uses jquery (AMD)
-- also utilizes vex/vex.dialog (non-AMD)
Utilities
-- I'm trying to move the loadSpinner() function that requires spin.js (class Spinner, non-AMD) into this
-- I've managed thus far to avoid dependencies on things like jquery in this by refactoring code
Long version:
I'm trying to update some website code to use require.js for dependency management and to make the code more portable. But I'm running into a number of dependencies on old code that don't appear to be AMD-ready. Where possible, I'm trying to replace these with updated code and/or replace their functionality entirely, but in a number of cases, the code is minified and it's difficult to get a quick handle on what it's doing.
Rather than getting mired in the minutiae of trying to figure out and either replace or update these things, I read about how 'shim' can be used in some cases to handle these types of non-AMD code, but I'm still unclear on how to configure it.
Here's what I have... three libraries I have updated and one new one I created. One, called 'ImageSelector', builds a web GUI for uploading files with dropzone. (My reason for updating it is that I converted it from using a local filesystem to using Amazon AWS S3 storage.) A second one, called 'ImageLayout', handles the business logic of creating a product layout of photos selected by the user. (ImageSelector is split into two frames: a left one for uploading and sorting user files into folders, and a right one for building the layout. Thus ImageSelector is dependent on ImageLayout.)
The third library is one I created with a number of repeatedly used 'utility' functions from across the website. There is an existing structured-code version of this in global scope with just a list of functions like roundPrecision(), sanitizeFilename(), escapeRegex(), baseName(), etc. I was going to build this with static methods, but then realized I can customize it if I spawn instances of it instead (e.g. I can change the characters 'sanitized' for different applications with global instance parameters).
The new one is the AwsHelper which is not a problem as it's entirely new code and handles all the interaction with Amazon AWS and S3. It was created in a define() AMD format while the others I have converted to define()/export format.
Anyway, some functions of the ImageLayout can be used independently by the order system, but for the most part, it's used as a dependency of the ImageSelector. AwsHelper is used mostly by ImageSelector but there are two functions in ImageLayout that utilize it. All of the above use the Utilities library.
My guess is something like this in the config (using ImageSelector as an example, but I'm wondering whether "jquery" and "dropzone" need to be in there, or in the define() dependency list, or both?):
shim: {
    "ImageSelector": {
        deps: ["jquery","dropzone","vex","vex.dialog","jquery-ui","jquery.onscreen"]
    }
}
Additional require.js semantic questions:
(I'll post these separately if needed, but they may be short-answer and related)
Is there anything anywhere that shows how require.js searches for files? e.g. I understand about r.js for uglifying, but in some cases I can't track down the original code for these things. Can filenames end in .min.js or include version numbers and will require.js still find them, or should I rename and/or symlink files? e.g. jquery.js vs jquery-1.7.min.js, for example (see the paths sketch after these questions).
The spin.js referenced above actually includes a class definition called 'Spinner'. How do I represent that in the config/shim?
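For example, would a paths entry like this work? (The filename below is hypothetical; I gather require.js appends .js itself, so the paths value would omit the extension.)

requirejs.config({
    baseUrl: "/js/lib",
    paths: {
        // module ID "jquery" would resolve to /js/lib/jquery-1.7.min.js;
        // note the .js extension is omitted in the paths value
        "jquery": "jquery-1.7.min"
    }
});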

Well, I posted that after three days of experimenting riddled with failures, expecting more trouble. But apparently shim was straightforward, and having the required libs in more than one place (shim definitions and define([])) wasn't a problem.
I took a blind guess going through the examples on the require.js site and came up with this configuration, and amazingly it worked on the first try! (Which makes me nervous, as this is the first time I've gotten this code to work with no errors since trying to port it to require.js.)
Here's what I came up with:
requirejs.config({
    "baseUrl": "/js/lib",
    "paths": {
        "ImageSelector": "../awsS3/ImageSelector",
        "ImageLayout": "../awsS3/ImageLayout",
        "AwsHelper": "../awsS3/AwsHelper",
        "Utilities": "../awsS3/Utilities"
    },
    "shim": {
        "jquery.onscreen": {
            "deps": ["jquery"],
            "exports": "jQuery.fn.onScreen"
        },
        "jquery-ui": ["jquery"],
        "vex.dialog": ["jquery", "vex"],
        "vex": ["jquery"],
        "spin": {
            "exports": "Spinner"
        },
        "aws-sdk": {
            "exports": "AWS"
        },
        "Utilities": ["spin"],
        "AwsHelper": ["jquery", "aws-sdk"],
        "ImageSelector": {
            "deps": ["jquery", "dropzone", "vex", "vex.dialog", "jquery-ui", "jquery.onscreen", "ImageLayout", "AwsHelper", "Utilities"]
        },
        "ImageLayout": {
            "deps": ["jquery", "vex", "vex.dialog", "Utilities"]
        }
    }
});
I also noted that some of the version naming was handled in the paths, thus I just named my libs in the paths and got rid of my "app/" directory reference altogether.
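For anyone else trying this, the piece that finally made sense to me is that a converted module can list both true AMD modules and shimmed libraries together in its define(). A minimal sketch (the body is illustrative, not my actual ImageSelector code):

define(
    ["jquery", "dropzone", "vex", "vex.dialog", "jquery-ui", "jquery.onscreen"],
    function ($, Dropzone, vex, vexDialog) {
        // jquery-ui and jquery.onscreen are jQuery plugins: they attach
        // .draggable() and .onScreen() to $, so they get no callback slot.
        return {
            makeDraggable: function (el) {
                $(el).draggable();
            }
        };
    }
);

The shim config above guarantees the plugins are loaded (and jQuery loaded before them) by the time the callback runs.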

Related

CFBundleGetFunctionPointerForName and dlsym return NULL for exported function

I have a fork of the JavaScriptCore framework, where I have added a function of my own, which is exported. The framework compiles just fine. Running nm on the framework reveals that the function (JSContextCreateBacktrace_unsafe) is indeed exported:
Leo-Natans-Wix-MPB:JavaScriptCore.framework lnatan$ nm -gU JavaScriptCore.framework/JavaScriptCore | grep JSContextCreateBacktrace
00000000004cb860 T _JSContextCreateBacktrace
00000000004cba10 T _JSContextCreateBacktrace_unsafe
However, I am unable to obtain the pointer of that function using CFBundleGetFunctionPointerForName or dlsym; both return NULL. At first, I used dlopen to open my framework, then tried using CFBundleCreate and then CFBundleGetFunctionPointerForName but that also returns NULL.
What could cause this?
Update
Something fishy is going on. I renamed one of the JSC functions, and nm reflects this. However, dlsym is still able to find the function under its original name, rather than the renamed one.
It's hard to track this down since it's highly dependent on your specific environment and circumstances, but it is very likely you're running into this issue because the system image has already been loaded and you haven't changed the name of the framework.
If you look at the source code for dlopen in dyld/dyldAPIS.cpp:1458, you'll notice the context passed to dyld is configured with matchByInstallName = true. This context is then passed to load, which executes the various stages necessary for image loading. There are a few phases worth noting:
loadPhase2 in dyld/dyld.cpp:2896 extracts the ending of the framework path and searches for it in the search path
loadPhase5check in dyld/dyld.cpp:2712 iterates over all loaded images and determines whether any of them have a matching install name; if one does, it returns that instead of loading a new one.
loadPhase5load in dyld/dyld.cpp:2601 finally loads the image if it wasn't loaded/found by any earlier steps. (It's worth noting loadPhase5check is executed first, since image loading is a two-pass process.)
Given all of the above, I'd try renaming your framework to something besides JavaScriptCore.framework. Depending on the install name of both the system framework and your framework, I'd also recommend changing the install name. (There are plenty of blog articles and StackOverflow posts that document how to do this using install_name_tool -id.)

How does TYPO3 Neos decide which Settings.yaml to choose?

I use one Neos installation for multiple domains with different content.
Duplicating the package TYPO3.NeosDemoTypo3Org, removing the node identifier, and doing some replacements got me nearly everything I need.
But only the first Settings.yaml found in Packages/Sites/ seems to be parsed. All changes to the Settings.yaml files found in other packages (Test1 and Test2 in the following example) are ignored.
Packages/Sites/TYPO3.NeosDemoTypo3Org/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://TYPO3.NeosDemoTypo3Org/Private/Form/'
Packages/Sites/UDF.Test1/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'
Packages/Sites/UDF.Test2/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test2/Private/Form/'
When I delete the first Settings.yaml (Packages/Sites/UDF.Test2/Configuration/Settings.yaml), the next Settings.yaml in alphabetical order (Packages/Sites/UDF.Test1/Configuration/Settings.yaml) is used for all 3 site packages. When I also delete this file, the remaining Settings.yaml is used, and so on.
It would be awesome if somebody could enlighten me. I am new to Flow and Neos and any help is welcome. RTFM, I know, but as described here I have to believe that it should work the way I did it?
Alternative way?
Is it possible not to set the savePath in the site package configuration, but in the common settings ./Packages/Application/TYPO3.Form/Configuration/Settings.yaml?
I see a {#package} placeholder in
### BASE ELEMENTS ###
# NAMING: base class for everything is RENDERABLE
'TYPO3.Form:Base':
  renderingOptions:
    templatePathPattern: 'resource://{#package}/Private/Form/{#type}.html'
but this doesn't work here:
TYPO3:
  Form:
    yamlPersistenceManager:
      #savePath: '%FLOW_PATH_DATA%Forms/'
      savePath: 'resource://{#package}/Private/Form/'
As you see, I am not really experienced with this stuff, but I am very motivated.
All Settings.yaml files are used, but the settings are merged in the order of package loading.
The loading order of packages, in turn, is based on their dependencies.
All three packages probably have the same dependencies, so they are loaded one after the other (I would need to check with which ordering): the third Settings.yaml is loaded, then the second is loaded and overwrites the third, then the first is loaded and again overwrites the second. Every setting path can only be set once; that's why.
In any case, what you are trying to achieve probably won't work. This is one of the things we have to fix (site-package-dependent configuration).
A possible workaround is either using a common package with the form configuration and just setting the savePath to this package, or using different subcontexts (like Production/Domain1, Production/Domain2) and setting this setting differently per subcontext; you could then define the subcontext by domain (as the sites are triggered by domain anyway).

Am I monkeypatching jQueryUI ProgressBar correctly in this example?

I've got a full-bore copy of jQuery UI in the app, so it doesn't matter whether I'm loading from the CDN or locally; all I know is it's loaded. (Because if we load from the CDN, our only option is to monkeypatch the live version, yes?)
I see from https://github.com/jquery/jquery-ui/blob/master/ui/jquery.ui.progressbar.js that this.min is unfortunately not a settable option (this.options.max, in contrast, is). I need this.min to be -1 in my case (and yes, application-wide; we have discussed this internally on the team and we understand the reason for the jQuery decision, we just need it to be otherwise), so my only options seem to be to monkeypatch the prototype or maintain my own plugin. I also see that they are now using the "widget" architecture for loading the jQuery UI objects.
In this particular application, my scripts are roughly loaded like so:
/javascripts/lib/jquery.min.js
/javascripts/lib/jquery-ui.min.js
...
/javascripts/company.utils.js
/javascripts/company.helpers.js
...
page level includes of javascript libraries
...
page level javascript
So I'm thinking of going into company.utils.js and defining a monkeypatch like so:
$.ui.progressbar.prototype.min = -1;
However, I'm curious if this is the right way to monkeypatch this object. Pretty sure it is, but thought I would ask the wider StackOverflow community, and offer something googlable for future searchers.
Yes, that's correct. Alternatively, if you're using jQuery UI 1.9, you can use the widget factory to define your extension:
$.widget( "ui.progressbar", $.ui.progressbar, {
min: -1
});
Though it is slightly more verbose.
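If the patch lives in company.utils.js as described, a guarded version avoids silent failures when the load order changes. A minimal sketch, assuming jQuery UI has already been loaded globally (the wrapper and error message are illustrative, not part of the original question):

(function ($) {
    if (!$.ui || !$.ui.progressbar) {
        throw new Error("jQuery UI ProgressBar must load before this patch");
    }
    // every progressbar instance now reports a minimum of -1
    $.ui.progressbar.prototype.min = -1;
}(jQuery));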

Erlang: "extending" an existing module with new functions

I'm currently writing some list-related functions that could possibly be reused.
My question is:
Are there any conventions or best practices for organizing such functions?
To frame this question: I would ideally like to "extend" the existing lists module such that I'm calling my new function the following way: lists:my_function(). At the moment I have lists_extensions:my_function(). Is there any way to do this?
I read about Erlang packages and that they are essentially namespaces in Erlang. Is it possible to define a new namespace for lists with new list functions?
Note that I'm not looking to fork and change the standard lists module, but to find a way to define new functions in a new module also called lists, avoiding the consequent naming collisions by using some kind of namespacing scheme.
Any advice or references would be appreciated.
Cheers.
To frame this question: I would ideally like to "extend" the existing lists module such that I'm calling my new function the following way: lists:my_function(). At the moment I have lists_extensions:my_function(). Is there any way to do this?
No, so far as I know.
I read about Erlang packages and that they are essentially namespaces in Erlang. Is it possible to define a new namespace for lists with new list functions?
They are experimental and not generally used. You could have a module called lists in a different namespace, but you would have trouble calling functions from the standard module in this namespace.
Here are some reasons not to use lists:your_function() and to use lists_extension:your_function() instead:
Generally, the Erlang/OTP Design Guidelines state that each "application" -- libraries are also applications -- contains modules. You can then ask the system which application introduced a specific module. This system would break if modules were fragmented across applications.
However, I do understand why you would want lists:your_function/N:
It's easier to use for the author of your_function, because he needs your_function(...) a lot when working with []. But when another Erlang programmer -- who knows the stdlib -- reads this code, he will not know what it does. This is confusing.
It looks more concise than lists_extension:your_function/N. That's a matter of taste.
I think this method would work on any distro:
You can make an application that automatically rewrites the core Erlang modules of whichever distribution is running: append your custom functions to the core modules and recompile them before compiling and running your own application that calls the custom functions. This doesn't require a custom distribution, just some careful planning and use of the file tools and BIFs for compiling and loading.
* You want to make sure you don't append your functions every time. Once you rewrite the file, the change is permanent unless the user replaces the file later. You could use a check with module_info to confirm your custom functions exist, to decide whether you need to run the extension writer.
Pseudo Example:
lists_funs() -> ["myFun() -> <<"things to do">>."].
extend_lists() ->
{ok, Io} = file:open(?LISTS_MODULE_PATH, [append]),
lists:foreach(fun(Fun) -> io:format(Io,"~s~n",[Fun]) end, lists_funs()),
file:close(Io),
c(?LISTS_MODULE_PATH).
* You may want to keep copies of the original modules to restore if the compiler fails; that way you don't have to do anything heavy if you make a mistake in your list of functions, and you can also use them as source any time you want to rewrite the module to extend it with more functions.
* You could use a lists_extension module to keep all of the logic for your functions and just add thin wrappers to lists, of the form funName(Args) -> lists_extension:funName(Args).
* You could also make an override system that searches for existing functions and rewrites them in a similar way, but it is more complicated.
I'm sure there are plenty of ways to improve and optimize this method. I use something similar to update some of my own modules at runtime, so I don't see any reason it wouldn't work on core modules also.
I guess what you want is to have some of your functions accessible from the lists module. It is good that you want to convert commonly used code into a library.
One way to do this is to test your functions well, and once they are fine, copy them and paste them into the lists.erl module (WARNING: ensure you do not overwrite existing functions; just paste at the end of the file). This file can be found at $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/src/lists.erl. Make sure that you add your functions to those exported by the lists module (in -export([your_function/1,.....])) to make them accessible from other modules. Save the file.
Once you have done this, we need to recompile the lists module. You could use an EmakeFile. The contents of this file would be as follows:
{"src/*", [verbose,report,strict_record_tests,warn_obsolete_guard,{outdir, "ebin"}]}.
Copy that text into a file called EmakeFile. Put this file in the path: $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/EmakeFile.
Once this is done, open an Erlang shell whose pwd(), the current working directory, is the path in which the EmakeFile is, i.e. $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/.
Call the function make:all() in the shell and you will see that the lists module is recompiled. Close the shell.
Once you open a new Erlang shell, and assuming you exported your functions in the lists module, they will work the way you want, right in the lists module.
Erlang being open source allows us to add functionality, recompile and reload the libraries. This should do what you want. Success!

Best practices for developing and maintaining code for complex jQuery/jQueryUI based applications

I'm working on my first very complex jQuery-based application.
A single web page can contain hundreds of lines of jQuery-related code, for example for jQueryUI dialogs.
Now I want to organize the code into separate files.
For example, I'm moving all dialog initialization code $("#dialog-xxx").dialog({...}) into separate files, and for reuse I wrap each call in a single function like so:
dialogs.js
function initDialog_1() {
    $("#dialog-1").dialog({});
}

function initDialog_2() {
    $("#dialog-2").dialog({});
}
This simplifies the function code and makes the calling page clear:
$(function() {
    // do some init stuff
    initDialog_1();
    initTooltip_2();
});
Is this the correct pattern?
Are you using more efficient techniques?
I know that splitting code into many JS files introduces ugly bandwidth usage, so:
Is there a good practice or tool to 'join' files for production environments?
I imagine some tool that does more work than simply minimize and/or compress JS code.
Some suggestions I might add:
keep all your variables in a globally available, multi-structured object, something like MyVars = { dialogs: {}, tooltips: {} }, and then use that across all your scripts (see the sketch after these suggestions)
use the call or apply methods for dynamically calling custom function names, if you perhaps want to keep the above object lightweight
For tidying things up, you could read this: http://betterexplained.com/articles/speed-up-your-javascript-load-time
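A minimal sketch of the first two suggestions combined (all names here, including the dialog selector and callInit, are illustrative):

// one globally available object shared across all scripts
var MyVars = window.MyVars || { dialogs: {}, tooltips: {} };

// store widget references on it from any script
MyVars.dialogs.settings = $("#dialog-settings").dialog({ autoOpen: false });

// invoke a custom init function dynamically by name, via apply
function callInit(name) {
    var fn = window[name];
    if (typeof fn === "function") {
        fn.apply(MyVars, Array.prototype.slice.call(arguments, 1));
    }
}
callInit("initDialog_1");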
This sounds fairly okay to me. Just two notes:
Use descriptive method names. "initDialog_1" doesn't tell you anything about the dialog it initializes.
While keeping JS code split into several files eases development, it hurts the perceived performance of your interface. You could merge all files into one during build/deployment/runtime of your app. How best to do this depends heavily on your environment, though.
I'm working on something fairly complex in JS right now and have been wondering the same thing. I looked at various "module" implementations, but while they look "cool", they don't seem to offer much value.
My plan at this point is to continue referencing lots of script files from my .html page (the plan is to only have one .html page, or very few).
Then when I'm building the release version, I'll write a very simple tool to fit into my build process, which will discover all the scripts I reference from the .html pages and concatenate them into one file, and replace the multiple <script> elements with a single one, so that only one request is necessary in the "release" version.
This will allow the compression to work across all the script text instead of on each separate file (like doing tar followed by gzip) and should make a difference to the script download time (though I should stress I haven't actually implemented it yet).
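For what it's worth, the concatenation step itself is trivial. A sketch in Node.js, assuming hard-coded file names (the real tool would discover the script references from the .html pages, and the paths below just reuse the ones listed earlier):

var fs = require("fs");

var scripts = [
    "javascripts/lib/jquery.min.js",
    "javascripts/lib/jquery-ui.min.js",
    "javascripts/company.utils.js",
    "javascripts/company.helpers.js"
];

// join with a semicolon so a file missing its trailing one can't break the next
var bundle = scripts.map(function (p) {
    return fs.readFileSync(p, "utf8");
}).join(";\n");

fs.writeFileSync("javascripts/release.js", bundle);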
You usually want to keep all of your JavaScript inside one file; fewer HTTP requests is usually better. If you take a look at the jQuery source, you'll notice that every function and property is right there in the jQuery global object:
jQuery.fn = jQuery.prototype = {
    init: function(){ ... },
    animate: function() { ... },
    each: function() { ... },
    // etc
}
However, the pattern you seem to be interested in is similar to the "module" pattern. The YUI framework uses this pattern and allows developers to "require" different components of the library from the core module via HTTP request. You can read more about YUI here:
http://developer.yahoo.com/yui/3/yui/
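For reference, the "module" pattern in miniature: one global object whose internals stay private inside a closure (MyApp and its members are illustrative names, not from any library):

var MyApp = (function ($) {
    var initialized = false; // private, invisible outside the closure

    function initDialogs() {
        $("#dialog-1").dialog({});
        initialized = true;
    }

    return { initDialogs: initDialogs }; // public API
}(jQuery));

MyApp.initDialogs();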
