Minifying import paths for modules in webpack

I've got a TypeScript project that uses Webpack successfully to yield ES6, which is run through babel-minify to tree-shake it and produce a significantly smaller bundle output file.
This file appears to contain all the logic from my own program as well as the logic from each of the imported libraries I'm using (rxjs, lodash, etc.).
However, looking through the generated file, it appears that at the top we've got some webpack bootstrap logic, followed by a map from each original import path to the function that implements it, and it continues like that all the way down, with various portions referencing their dependencies by path strings.
Now, given that everything is self-contained within this webpack bundle (no other chunks), the inclusion of all the source file names seems to take up a lot of space needlessly.
For example, I'm looking at one section in here for lodash's isBoolean script:
"./node_modules/lodash/isBoolean.js": function(e, t, o) {
var r = o("./node_modules/lodash/_baseGetTag.js"),
s = o("./node_modules/lodash/isObjectLike.js");
e.exports = function(e) {
return !0 === e || !1 === e || s(e) && "[object Boolean]" == r(e)
}
},
Now, it seems like there are a lot of characters being taken up to describe the source file. Since there's no actual dependency at this time on the source file, why can't each key just be replaced with a shorter string identifier throughout, as in the following example:
"a": function(e, t, o) {
var r = o("b"),
s = o("c");
e.exports = function(e) {
return !0 === e || !1 === e || s(e) && "[object Boolean]" == r(e)
}
},
where "a", "b", and "c" are all representative of each place where the original string values occur throughout the entire bundle. This shouldn't impact all strings, but rather just the import file path strings.
Someone asked a similar question at Webpack compress path names, but in my opinion they didn't really get a satisfactory answer.
Is there some option or plugin I can use that could mangle the module path names?

Figured it out after reading through Webpack's source code and seeing how it actually assembles the bundle.
I had the NamedModulesPlugin in my config (likely an artifact from one of the various quickstarts out there), and this plugin inserts all the module paths into the output.
Commenting that plugin out (or just removing it) from the config removes the paths from the output entirely (no mangling necessary).
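For reference, here's a minimal sketch of the change; the entry/output values are placeholders, not something prescribed by webpack:
// webpack.config.js
const webpack = require('webpack');

module.exports = {
    entry: './src/index.ts',
    output: { filename: 'bundle.js' },
    plugins: [
        // new webpack.NamedModulesPlugin(), // removed: this was inserting the
        // full module paths; without it webpack falls back to numeric module ids
    ],
};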

Related

Reading the content of a directory declared with `actions.declare_directory`

Imagine I have a java_binary target triggered by a custom rule that generates source code and places the generated sources under a directory, let's call it "root".
So after the code generation we will have something like this:
// bazel-bin/...../src/com/example/root
root:
-> Foo.java
-> Bar.java
-> utils
   -> Baz.java
Now, I have another target, a java_library, that depends on the previously generated sources, so it depends on the custom rule.
My custom rule definition currently looks something like this:
def _code_generator(ctx):
    outputDir = ctx.actions.declare_directory("root")
    files = [
        ctx.actions.declare_file("root/Foo.java"),
        ctx.actions.declare_file("root/Bar.java"),
        ctx.actions.declare_file("root/utils/Baz.java"),
        # and many,
        # many other files
    ]
    outputs = []
    outputs.append(outputDir)
    outputs.extend(files)
    ctx.actions.run(
        executable = ...,  # executable pointing to the java_binary
        outputs = outputs,
        # ....
    )
This works. But as you can see, every anticipated file that is to be generated is hard-coded in the rule definition. This makes it very fragile, should the code generation produce a different set of files in the future (which it will).
(Without specifying each of the files, as shown above, Bazel will fail the build saying that the files have no generating action.)
So I was wondering, is there a way to read the content of the root directory and automatically, somehow, declare each of the files as an output?
What I tried:
The documentation of declare_directory says:
The contents of the directory are not directly accessible from Starlark, but can be expanded in an action command with Args.add_all().
And add_all says:
[...] Each directory File item is replaced by all Files recursively contained in that directory.
This sounds like there could be a way to get access to the individual files in the directory, but I am not sure how.
I tried:
outputDir = ctx.actions.declare_directory("root")
# ...
args = ctx.actions.args()
args.add_all(outputDir)
with the intention of accessing the individual files later from args, but the build fails with: "Error in add_all: expected value of type sequence or depset for values, got File".
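Wrapping the directory File in a list (or depset) does get past that particular type error, e.g.:
args = ctx.actions.args()
args.add_all([outputDir])  # add_all wants a sequence/depset; the directory is
                           # expanded to its contained files at execution time
But that only expands the directory on the action's command line when the action runs; it still doesn't let me enumerate the files in Starlark and declare them as outputs.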
Any other ideas on how to implement the rule, so that I don't have to hard-code each and every file that will be generated?

Have all Bazel packages expose their documentation files (or any file with a given extension)

Bazel has been working great for me recently, but I've stumbled upon a question for which I have yet to find a satisfactory answer:
How can one collect all files bearing a certain extension from the workspace?
Another way of phrasing the question: how could one obtain the functional equivalent of doing a glob() across a complete Bazel workspace?
Background
The goal in this particular case is to collect all markdown files to run some checks and generate a static site from them.
At first glance, glob() sounds like a good idea, but will stop as soon as it runs into a BUILD file.
Current Approaches
The current approach is to run the collection/generation logic outside of the sandbox, but this is a bit dirty, and I'm wondering if there is a way that is both "proper" and easy (i.e., not requiring that each BUILD file explicitly expose its markdown files).
Is there any way to specify, in the workspace, some default rules that will be added to all BUILD files?
You could write an aspect for this to aggregate markdown files in a bottom-up manner and create actions on those files. There is an example of a file_collector aspect here. I modified the aspect's extensions for your use case. This aspect aggregates all .md and .markdown files across targets on the deps attribute edges.
FileCollector = provider(
    fields = {"files": "collected files"},
)

def _file_collector_aspect_impl(target, ctx):
    # This function is executed for each dependency the aspect visits.

    # Collect files from the srcs
    direct = [
        f
        for f in ctx.rule.files.srcs
        if ctx.attr.extension == f.extension
    ]

    # Combine direct files with the files from the dependencies.
    files = depset(
        direct = direct,
        transitive = [dep[FileCollector].files for dep in ctx.rule.attr.deps],
    )

    return [FileCollector(files = files)]

markdown_file_collector_aspect = aspect(
    implementation = _file_collector_aspect_impl,
    attr_aspects = ["deps"],
    attrs = {
        "extension": attr.string(values = ["md", "markdown"]),
    },
)
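To actually run the aspect and consume the collected files, one option is to wire it through a small driver rule. A minimal sketch (the collect_markdown rule and its manifest output are illustrative, not part of the original example):
def _collect_markdown_impl(ctx):
    # Merge the depsets collected by the aspect on each dep.
    files = depset(transitive = [dep[FileCollector].files for dep in ctx.attr.deps])

    # Write the collected paths to a manifest file, standing in for the
    # checks / site generation you want to run on them.
    out = ctx.actions.declare_file(ctx.label.name + ".manifest")
    ctx.actions.write(out, "\n".join([f.short_path for f in files.to_list()]))
    return [DefaultInfo(files = depset([out]))]

collect_markdown = rule(
    implementation = _collect_markdown_impl,
    attrs = {
        # Attaching the aspect here makes it walk the deps edges.
        "deps": attr.label_list(aspects = [markdown_file_collector_aspect]),
        # The aspect's "extension" parameter takes its value from this
        # same-named rule attribute.
        "extension": attr.string(default = "md", values = ["md", "markdown"]),
    },
)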
Another way is to do a query on file targets (input and output files known to the Bazel action graph), and process these files separately. Here's an example querying for .bzl files in the rules_jvm_external repo:
$ bazel query //...:* | grep -e ".bzl$"
//migration:maven_jar_migrator_deps.bzl
//third_party/bazel_json/lib:json_parser.bzl
//settings:stamp_manifest.bzl
//private/rules:jvm_import.bzl
//private/rules:jetifier_maven_map.bzl
//private/rules:jetifier.bzl
//:specs.bzl
//:private/versions.bzl
//:private/proxy.bzl
//:private/dependency_tree_parser.bzl
//:private/coursier_utilities.bzl
//:coursier.bzl
//:defs.bzl
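For the markdown files in your case, the same approach would presumably look like:
$ bazel query "//...:*" | grep -e "\.md$" -e "\.markdown$"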

How to upload a pie file successfully in GraphDB by Ontotext

I have a pie file which is used for inference in GraphDB by Ontotext. I have written the ruleset correctly, and while uploading the file everything seems OK. But while creating the repository, it shows “Invalid Ruleset file. Please upload valid one”. I think the issue is related to a hidden character present inside the file. How do I get rid of such characters? My file content is:
Prefices
{
    rdf : http://www.w3.org/1999/02/22-rdf-syntax-ns#
    owl : http://www.w3.org/2002/07/owl#
    abc : http://www.xyzabc.com/schema/abcentity#
}
Axioms
{
    <abc:isLocatedIn> <rdf:type> <owl:ObjectProperty>
}
Rules
{
    Id: isLocatedInHierarchy
        a <abc:isLocatedIn> b [Constraint a != b]
        b <abc:isLocatedIn> c [Constraint b != c]
        ----------------------------------------
        a <abc:isLocatedIn> c [Constraint a != c]
}
"hidden character present inside the file"
Do you mean a Unicode BOM mark? Get an editor that can save without such a mark (I strongly recommend AkelPad: http://akelpad.sourceforge.net/), or just save in ASCII.
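If it is a UTF-8 BOM and you have GNU sed available, you can also strip it directly; a sketch, with the filename just an example:
$ sed -i '1s/^\xEF\xBB\xBF//' ruleset.pie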
BTW, writing PIE files with per-property rules is not a good idea. Instead, use a generic rule for a transitive property and then declare abc:isLocatedIn transitive in your ontology. The cheapest builtin ruleset in which such a rule is included is rdfsPlus-optimized. If you select it, then you add this to your ontology:
abc:isLocatedIn a owl:TransitiveProperty.
However, it's a better idea to keep a "step" property abc:isLocatedIn and then put a transitive property on top of it, e.g. abc:isLocatedTransitive:
abc:isLocatedTransitive a owl:TransitiveProperty.
abc:isLocatedIn rdfs:subPropertyOf abc:isLocatedTransitive.
Finally, there's a more efficient way to compute the transitive closure, see http://rawgit2.com/VladimirAlexiev/my/master/pubs/extending-owl2/index.html#sec-3-1:
abc:isLocatedTransitive ptop:transitiveOver abc:isLocatedIn.
abc:isLocatedIn rdfs:subPropertyOf abc:isLocatedTransitive.
I was also able to upload your .pie file successfully. Maybe the issue is related to the computer locale or something in the environment. If you are using Windows, Notepad++ seems like a logical choice; I guess there is an option to view all the hidden characters, but I've never used it. If you are using Linux, there are plenty of choices, even included ones like vim or nano, which will work just fine.

Lua emulating the require function

The embedded Lua environment in World of Warcraft (WoW) is missing the require function.
I want to port an existing piece of Lua source code (a great OO library) for use in WoW. The library itself is relatively small (approx. 8 small files), but of course it heavily uses require.
World of Warcraft loads files and libraries by defining it in an XML file, like:
<Ui xsi:schemaLocation="http://www.blizzard.com/wow/ui/">
    <Script file="LibOne.lua"/>
    <Script file="LibTwo.lua"/>
</Ui>
but I don't know how the low-level library manipulation is done in WoW.
AFAIK, WoW is missing even the package table. :(
So the question(s): For me, the streamlined way would be to write a function which emulates the require function using the interface available in WoW. The question is how. Could someone give me some directions?
Or, as an alternative for porting the mentioned source to WoW: I need to replace the require Some.Other.Module lines in the Lua sources with something WoW will understand. What is the equivalent/replacement for such a require Some.Module in WoW?
How does WoW handle modules/libraries at a low level?
You could merge all files into one using one of the various amalgamation scripts, e.g. amalg. Then you can load this file and a stub that implements the require function using the usual WoW way:
<Ui xsi:schemaLocation="http://www.blizzard.com/wow/ui/">
    <Script file="RequireStub.lua"/>
    <Script file="AllModules.lua"/><!-- amalgamated Lua modules -->
    <Script file="YourCode.lua"/>
</Ui>
The file RequireStub.lua could look like:
package = {}
local preload, loaded = {}, {
    string = string,
    debug = debug,
    package = package,
    _G = _G,
    io = io,
    os = os,
    table = table,
    math = math,
    coroutine = coroutine,
}
package.preload, package.loaded = preload, loaded

function require( mod )
    if not loaded[ mod ] then
        local f = preload[ mod ]
        if f == nil then
            error( "module '"..mod..[[' not found:
        no field package.preload[']]..mod.."']", 1 )
        end
        local v = f( mod )
        if v ~= nil then
            loaded[ mod ] = v
        elseif loaded[ mod ] == nil then
            loaded[ mod ] = true
        end
    end
    return loaded[ mod ]
end
This should emulate enough of the package library to get you a working require that loads modules in the amalgamated file. Different amalgamation scripts might need different bits from package, though, so you probably will have to take a look at the generated Lua source code.
And in the specific case of Coat you might need to implement stubs for other Lua functions as well. E.g. I've seen that Coat uses the debug library ...
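For illustration, the registrations in the amalgamated file end up looking roughly like this (the module name and body are made up; the exact shape depends on the amalgamation script you use):
-- excerpt from a hypothetical AllModules.lua
package.preload[ "Some.Module" ] = function( ... )
    -- the original contents of Some/Module.lua go here
    local M = {}
    function M.greet() return "hello from Some.Module" end
    return M
end
Your own code can then call require( "Some.Module" ) exactly as before.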
The WoW environment doesn't have dofile or any other means to read external files at all. You need to explicitly mention all files that must be loaded in the .toc file or in an .xml referenced from the .toc.
You can then write your own implementation of require to maintain compatibility with your library. That would be quite trivial, as it would only need to parse the module name and retrieve its content from the modules.loaded table, but you'd still need to alter the original source to make the files register in that table, and you'll need to manually arrange all files into the correct order of loading.
Alternatively you can rearrange files into separate WoW-addons and use its own built-in Dependencies/OptionalDeps facilities or popular LibStub framework to handle loading order automatically.
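For the separate-addons route, load order is declared in each addon's .toc file; a sketch with invented addon names (the Interface number must match your client version):
## Interface: 11302
## Title: MyOOLibrary
## Dependencies: MyBaseLibrary
## OptionalDeps: LibStub

LibOne.lua
LibTwo.lua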

How to make web_ui compile css files automatically

I'm using web_ui, and whenever I change a CSS file in web/css/ it will not be compiled unless I change the web/index.html file. I guess that's because only the file 'web/index.html' is listed as an entry point in build.dart.
But adding the stylesheet to the entry points list didn't work.
Is there a way to autocompile CSS files every time they are changed without having to edit the .html files?
Keep in mind that you can edit any .dart or .html file and the compiler will run; it doesn't have to be the entry point file.
Autocompilation of CSS files on change can be achieved by passing the compiler the --full flag:
build(['--machine', '--full'], ['web/index.html']);
The --machine flag tells the compiler to print messages to the Dart Editor console. For a full list of flags, see Build.dart and the Dart Editor Build System.
This method means that every time a file is changed your entire project will be rebuilt instead of the usual incremental approach. If you have a large project this may take a while. Here is a more comprehensive build file that takes advantage of incremental compilation and only rebuilds the whole project if a CSS file was changed:
import 'dart:io';
import 'package:web_ui/component_build.dart';

void main() {
  List<String> args = new Options().arguments;
  bool fullRebuild = false;
  for (String arg in args) {
    // The editor passes --changed=<path> for each modified file.
    if (arg.startsWith('--changed=') && arg.endsWith('.css')) {
      fullRebuild = true;
    }
  }
  if (fullRebuild) {
    build(['--machine', '--full'], ['web/index.html']);
  } else {
    build(args, ['web/index.html']);
  }
}
