How to NOT mangle export names in webpack 2?

"What's new in webpack 2" has this to say about export names:
ES6 export mangling
In cases where it's possible to track export usage, webpack can mangle export names to single char properties.
I am experimenting with webpack and have started with the very simple "cats" example at http://webpack.github.io/docs/usage.html
I have converted the .js files to .ts and tried to enable source maps.
Still when I set a breakpoint in app.ts on
console.log(cats);
then Chrome stops at the correct place, but I cannot evaluate "cats" in the console (Uncaught ReferenceError: cats is not defined).
If I look into the generated JS, then I find I have to use "r.a" to print the cats.
So: am I correct that this "export mangling" is the cause?
Can it be turned off?
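As a side note, later webpack versions expose an explicit switch for this behaviour; the sketch below assumes webpack 5, where optimization.mangleExports exists, and is not an option documented for webpack 2 itself:
// webpack.config.js - minimal sketch, assuming webpack 5+
// (optimization.mangleExports does not exist in webpack 2)
module.exports = {
  mode: 'development',     // development builds keep readable export names by default
  devtool: 'source-map',
  optimization: {
    mangleExports: false   // keep original export names instead of single characters
  }
};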

With Jenkins Job Builder (JJB), what's the preferred way to inject values into a static set of job configuration files?

I have a working set of JJB YAML files successfully creating jobs and folders.
I now want to make certain values I use inside those YAML files configurable, i.e. when running jenkins-jobs test|update -r jobfolder I want to set values for folder prefixes (so as not to damage existing production jobs), branch names, nodes, etc.
I don't want to use JJB's defaults approach for this, since I'm already using it for configuration elsewhere and it leads to conflicts when defaults are used in projects and jobs together.
The ideal way of doing this that I can think of would be to call JJB like this:
jenkins-jobs test|update --define "folder-prefix=experimental/,node=test-node" -r jobfolder
giving me variables I can use in the actual job definition files.
Since this option seemingly doesn't exist, I'm currently trying to provide files which contain those variables and somehow 'inject' them into my project.
Those are the approaches I can think of:
1 - having different configuration folders with YAML files inside, which I would use like this:
jenkins-jobs test -r experimental-config:jobfolder
jenkins-jobs test -r production-config:jobfolder
with experimental-config and production-config being folders containing additional files with my configuration, which I can switch between.
But unfortunately I don't know how I would reference values defined in different YAML files. Is that even possible?
2 - having include files like described in the documentation
While that sounds promising, I didn't manage to actually make this work. I tried to turn the following 'configuration header' I'm already using:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: "experimental/"
    node: "test-node"
[Rest of the file making use of dynamic-config]
into something making use of the !include statement like this:
!include: dynamic-config.yaml.inc
[Rest of the file making use of stuff defined in dynamic-config.yaml.inc]
giving me a seemingly unrelated parser error:
yaml.parser.ParserError: expected '<document start>', but found '<block sequence start>'
in "/home/me/my/project.yml", line 11, column 1
so I tried this snippet, which looks more like the documented example, by putting it inside an existing element:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    !include: dynamic-config.yaml.inc
giving me a different error but still an error:
yaml.scanner.ScannerError: while scanning a simple key
in "/home/me/my/project.yml", line 7, column 5
could not find expected ':'
in "/home/me/my/project.yml", line 8, column 5
In both cases it makes no difference whether the specified include file exists, which makes me doubt you can 'include' a file like this at all.
What am I doing wrong here? Is there a more obvious / straightforward way to customize a jenkins-jobs run?
Update:
I somehow managed to use the !include tag for individual items now, like this:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: !include: job-configs/active/folder-prefix.inc
    branch-name: !include: job-configs/active/branch-name.inc
    node-name: !include: job-configs/active/node-name.inc
But I wasn't able to put the whole dynamic-config element (with the anchor) into an include file yet.
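For reference, a sketch of what one of those per-item include files (e.g. job-configs/active/folder-prefix.inc from the snippet above) might contain, assuming each file holds nothing but the scalar value to substitute:
"experimental/"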
2nd update:
It looks like I'm trying something similar to the person in this question.
Can someone confirm, that this is currently still a problem? What's the JJB way of handling this?

Multiline command-line editing in Gforth console

I have just started learning the Forth programming language.
I'm using Gforth on Ubuntu. In the Gforth interactive console I want to indent my code, but that requires starting a new line, and the Enter key executes the code instead of inserting a line break. For comparison, when testing JavaScript code in a web browser console, Shift+Enter starts a new line without executing the code. I want something like that. What key should I press? Is there a way other than using a text editor like Vim?
Best.
Gforth doesn't support multiline editing (see the manual).
A workaround is to edit a file in your favorite editor in another window and reload this file in Gforth console as:
include /tmp/scratch.fs
An external file can also be edited from the Gforth console via a command like:
"vim /tmp/scratch.fs" system
So a one-liner for that is:
"vim /tmp/scratch.fs" system "/tmp/scratch.fs" included
That can be wrapped into a definition as:
: scratch "vim /tmp/scratch.fs" system "/tmp/scratch.fs" included ;
So the word scratch will open an editor and then load the edited file.
NB: if you use a fairly old build of Gforth, you have to use s" ccc" instead of "ccc" for string literals.
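With that older syntax the scratch definition above would look like:
: scratch ( -- )
  s" vim /tmp/scratch.fs" system
  s" /tmp/scratch.fs" included ;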
To conditionally include or exclude some parts of a file, the words [defined] and [if] can be used; to erase the previous instance of the loaded definitions, the word marker can be used as:
[defined] _clear [if] _clear [then]
marker _clear
\ some definitions
\ ...
Take into account that the usual control-flow words can only be used inside definitions.
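For example, a simple flag test like the following has to be wrapped in a colon definition before it can be run (a minimal illustration, not part of the original answer):
\ IF ... ELSE ... THEN are compile-only, so put them inside a definition
: greet ( f -- ) if ." hello" else ." bye" then ;
-1 greet   \ prints: hello
 0 greet   \ prints: bye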

Fluent-bit Variables in Key configuration

I'm creating a custom Fluent Bit image and I want a "generic" configuration file that can work in multiple cases, i.e. it should work with a forward input sometimes and with a tail input at other times.
I thought about using environment variables so as to have only one input, but it seems variables can only be used on the value side, not in the key part (see the following code).
When I set the corresponding environment variables in a docker-entrypoint file with the corresponding conditions:
export INPUT_PATH="/myLogPath"
export INPUT_PATH_TYPE="path"
export INPUT_NAME="tail"
[INPUT]
    Name ${INPUT_NAME}
    ${INPUT_PATH_TYPE} ${INPUT_PATH}
this is the error message I get:
[error] [config] tail: unknown configuration property '${INPUT_PATH_TYPE}'. The following properties are allowed: path, exclude_path, key, read_from_head, refresh_interval, watcher_interval, rotate_wait, docker_mode, docker_mode_flush, docker_mode_parser, path_key, ignore_older, buffer_chunk_size, buffer_max_size, skip_long_lines, exit_on_eof, parser, tag_regex, db, db.sync, db.locking, multiline, multiline_flush, parser_firstline, and parser_.
I'm looking for a way to make it dynamic: either a single file with dynamic configuration, or multiple files which can be included dynamically (@INCLUDE requires a static filepath from what I've seen).
EDIT: the only option I see is to have multiple input files (one per use case) and select one dynamically when starting Fluent Bit in the docker-entrypoint file.
I use a docker-entrypoint, split the inputs and filters into different files, and then, depending on the environment variables, the entrypoint creates a symbolic link to the corresponding file, as sketched below.
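A minimal sketch of such an entrypoint (all paths and file names here are assumptions, not Fluent Bit defaults):
#!/bin/sh
# docker-entrypoint.sh (sketch): pick the input snippet via INPUT_NAME, e.g. "tail" or "forward"
set -e
ln -sf "/fluent-bit/etc/inputs/input-${INPUT_NAME:-forward}.conf" /fluent-bit/etc/input.conf
# the main fluent-bit.conf then contains a static "@INCLUDE input.conf" line
exec /fluent-bit/bin/fluent-bit -c /fluent-bit/etc/fluent-bit.conf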

What is the difference between PATH=:$PATH and PATH="$PATH:" and other export lines?

I have a question about the following lines related to adding paths to the PATH environment variable.
export PATH=/usr/loca/cuda/bin:$PATH
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export PATH="/home/ics_vr/anaconda3/bin:$PATH"
export PATH="$PATH:/home/user/anaconda3/bin"
Regardless of the content of the path in each export line, my first question is: how do I tell these export PATH= lines apart, i.e. what is the syntax of each form and what does it do, independently of the particular variables used in those lines?
Secondly, I see many people use # to comment lines in and out in order to switch between those paths, but this is not convenient. Is there a unified way to handle all of them without editing the export lines every time?
The background is that people usually want the system Python as the default, but if the PATH is not set up properly, the Anaconda Python interpreter becomes the default instead. I need the system Python interpreter to be the default, and when I need Anaconda, I will use
source activate ENV_I_BUILD
Thank you for your time and help; I really appreciate it.
The environment variable PATH is a list of colon-separated folder paths in which executables are searched for.
The order in which folder paths appear in this variable is very important: when you call a program from the command line, the executable is first searched for in the first folder, then, if it's not there, in the second, and so on.
Anaconda ships with a python installation (either 2.x or 3.x).
If you export:
export PATH="/home/ics_vr/anaconda3/bin:$PATH"
then the python in "/home/ics_vr/anaconda3/bin" will be found first. Thus, if you want to keep the system python by default, you might want to use:
export PATH="$PATH:/path/to/whatever/conda"
The source activate ... command will prepend the environment's bin folder to your PATH anyway. So if you activate an environment, the system python will be superseded by the python of the conda env.
As for the two lines:
export PATH=/usr/loca/cuda/bin:$PATH
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
you will have to decide what executables you want first in your PATH variable.
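For what it's worth, the ${PATH:+:${PATH}} form in the second line only appends the old value when PATH is already set and non-empty, which avoids leaving an empty entry (an empty entry in PATH means "current directory"). A quick bash illustration:
# ${PATH:+:${PATH}} expands to ":$PATH" only if PATH is set and non-empty
PATH=""
echo "/usr/local/cuda-9.1/bin${PATH:+:${PATH}}"   # prints /usr/local/cuda-9.1/bin
PATH="/usr/bin:/bin"
echo "/usr/local/cuda-9.1/bin${PATH:+:${PATH}}"   # prints /usr/local/cuda-9.1/bin:/usr/bin:/bin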
For information, you can set multiple folders in your PATH in one line:
export PATH="$PATH:/usr/loca/cuda/bin:/home/ics_vr/anaconda3/bin:/my/personal/bin"
Do not forget to add what was already in your PATH variable when exporting a new PATH if you do not want to lose basic commands located in, for example, "/usr/bin" or "/usr/local/bin".
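If the goal is to keep the system python as the default and pull Anaconda in only on demand, one option is a small shell function instead of commenting export lines in and out (a sketch; the install path is the one from the question, the function name is made up):
# put this in ~/.bashrc and run "conda_on" only when you want anaconda first in PATH
conda_on() {
    export PATH="/home/ics_vr/anaconda3/bin:$PATH"
}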

Using custom Environment Variables in JetBrains products for File watcher Arguments

I am trying to use node-sass as File Watcher in WebStorm.
I created a local environment variable named STYLE, containing the main stylesheet's name, to use as a variable in the File Watcher settings everywhere it is needed.
But if I add $STYLE$ to a path, I get an error:
/Users/grawl/Sites/sitename/node_modules/node-sass/bin/node-sass app/styles/$STYLE$.scss public/$STYLE$.css
error reading file "app/styles/$STYLE$.scss"
Process finished with exit code 1
The IDE just doesn't interpret my variable.
I also tried %STYLE%, but with no luck.
Please do not blame me for mapping filenames directly in the File Watcher instead of using built-in macros like $FileName$ or $FileNameWithoutExtension$: even WebStorm 9 EAP does not support preprocessor dependencies except for built-in preprocessors like Sass and Jade.
This is not the only case where I want to use local variables.
For example, I want to put my public/ path (which can be dest/ in other projects) and app/ (which can be source/) into variables, and so on.
So let's figure out this feature.
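One workaround sketch, not a WebStorm feature: point the File Watcher's Program at a small shell wrapper that reads ordinary environment variables instead of IDE macros (the script name, paths and default values below are made up):
#!/bin/sh
# watch-styles.sh - hypothetical wrapper set as the File Watcher's "Program"
# STYLE, SRC_DIR and DEST_DIR are plain environment variables, not $...$ macros
STYLE="${STYLE:-main}"
SRC_DIR="${SRC_DIR:-app/styles}"
DEST_DIR="${DEST_DIR:-public}"
exec ./node_modules/.bin/node-sass "$SRC_DIR/$STYLE.scss" "$DEST_DIR/$STYLE.css"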
