I am using node-gyp, and I would like to use the value of an environment variable inside my binding.gyp file.
Here is the hard way (list context):
'<!@(printf "%s" "$FOO")'
But is there an easier way?
To the best of my knowledge from working with gyp (the parent Google project, not the version that ships with node: https://code.google.com/p/gyp/), you can access environment variables the same way you would in the shell.
For example:
'$(FOO)'
Will return the data stored inside of FOO.
To get that information in a list context, I don't know if there is a better way than what you did, except for perhaps a more concise way:
'<!(echo $FOO)'
You can find Google's gyp input format reference online here.
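For example, a minimal binding.gyp sketch of that approach (the target name, source file, and FOO_VALUE define are made up for illustration) could look like this:
{
  'targets': [
    {
      'target_name': 'example',
      'sources': [ 'example.cc' ],
      # assumes FOO is set in the environment when gyp runs
      'defines': [ 'FOO_VALUE=<!(echo $FOO)' ],
    },
  ],
}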
Related
Let's say I have a rule like this.
foo(
    name = "helloworld",
    myarray = [
        ":bar",
        "//path/to:qux",
    ],
)
In this case, myarray is static.
However, I want it to be given via the CLI, like:
bazel run //:helloworld --myarray=":bar,//path/to:qux,:baz,:another"
How is this possible?
Thanks
To get exactly what you're asking for, Bazel would need to support LABEL_LIST in Starlark-defined command line flags, which are documented here:
https://docs.bazel.build/versions/2.1.0/skylark/lib/config.html
and here: https://docs.bazel.build/versions/2.1.0/skylark/config.html
Unfortunately that's not implemented at the moment.
If you don't actually need a list of labels (i.e., to create dependencies between targets), then maybe STRING_LIST will work for you.
If you do need a list of labels, and the different possible values are known, then you can use --define, config_setting(), and select():
https://docs.bazel.build/versions/2.1.0/configurable-attributes.html
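For illustration, a sketch of that approach in a BUILD file (the setting name use_extra and the define key flavor are made up; foo stands for your rule):
config_setting(
    name = "use_extra",
    define_values = {"flavor": "extra"},
)

foo(
    name = "helloworld",
    myarray = select({
        ":use_extra": [":bar", "//path/to:qux", ":baz", ":another"],
        "//conditions:default": [":bar", "//path/to:qux"],
    }),
)
You would then run it as bazel run //:helloworld --define flavor=extra.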
The question is what you are really after. Passing a variable or array into the bazel build/run isn't really possible, not as such, and mostly not without (very likely unwanted) side effects. Aren't you perhaps really just looking to pass arguments directly to what is being run by run, i.e. to the executable itself rather than to bazel?
There are a few ways you could sneak stuff in (in most cases you would also need to come up with a syntax to pass the data on the CLI and unpack the array in a rule), but many come at a relatively substantial price.
You can define your array in a bzl file and load it from where the rule uses it. You can then rewrite your build/run configuration by dumping the bzl content (which also makes the change obvious and traceable) and load the bits from the rule (affecting only the rules that load and use the variable). E.g., in a BUILD file:
load(":myarray.bzl", "myarray")
foo(
name = "helloworld",
myarray = myarray,
],
)
And you can then call your build:
$ echo 'myarray=[":bar", "//path/to:qux", ":baz", ":another"]' > myarray.bzl
$ bazel run //:helloworld
Which you can of course put in a single wrapper script. If this really needs to be a bazel array, this is probably the cleanest way to do it.
--workspace_status_command: you can collect information about your environment, add either or both of the resulting files as a dependency of your rule (depending on whether the inputs are meant to invalidate the rule results or not, you could use the volatile or the stable status file), and process the incoming file in whatever is executed by the rule (at which point one might wonder why not pass the data to it as command line arguments directly). Note that if you use the stable status file, every other rule depending on it is also invalidated by any change.
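A sketch of that mechanism (status.sh and the STABLE_MYARRAY key are made up for illustration; keys prefixed with STABLE_ land in the stable status file, which a rule can read via ctx.info_file):
$ cat status.sh
#!/bin/sh
echo "STABLE_MYARRAY :bar,//path/to:qux"
$ bazel run --workspace_status_command=./status.sh //:helloworld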
You can do a similar thing using --action_env. From within the executable/tool/script underpinning the rule, you can directly access the defined environment variable. However, this also means the environment of every rule is affected (not just the one you're targeting); and again, why should it parse the information out of the environment rather than accept arguments on the command line?
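For example (MYARRAY is an illustrative variable name; the tool behind the rule would read it through its language's usual environment accessor):
$ bazel run --action_env=MYARRAY=":bar,//path/to:qux,:baz" //:helloworld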
There is also --define, but you would not really get direct access to its value; rather, you could select() a choice out of a set of possible options.
I'm trying to write a custom Travis env variable to a file for a simple proof of concept thing that I need. However, I'm having trouble getting this to work.
How would I define this in the Travis YAML file if my variable is called VARIABLE_X?
Thanks!
One way to do this is using Linux commands, something like:
printenv | grep VARIABLE > all_env
However, I don't know exactly how Travis handles the environment (take a look at their docs here); it might not work as easily due to encryption, but in principle it should, since your apps wouldn't function if they didn't have the same level of access. If such a case occurs, modifying a few parameters (maybe TRAVIS_SECURE_ENV_VARS) is worth looking into.
If you solved the problem in another way, consider sharing with the community.
Write the environment variable as usual (Shell - Write variable contents to a file).
Define the following within the script: section of your .travis.yml:
- echo "$VARIABLE_X" > example.txt
I've been wondering if there is an environment variable to set the search path for #use and #load for the OCaml toplevel.
What I think I know so far:
I can use findlib instead of "raw" #use and #load. findlib looks at some environment variables for the search path.
I can set a search path with -I.
Experiments seem to indicate that CAML_LD_LIBRARY_PATH does not influence #use (script loading) and #load (byte code file loading).
(Updated) I can use #directory to add the desired path, but unfortunately this only takes a string literal, so I can't pass something I read from the environment at run time. (Update: I originally forgot to mention #directory and why it doesn't fit my use case.)
What I want to do:
Run ocaml programs as scripts
Point ocaml to script libraries and script fragments with an environment variable
Avoid, in some scenarios, creating a full findlib library.
Presently I'm not using ocaml directly as the interpreter, but a wrapper that adds a path to the invocation. I want to avoid the wrapper.
(I hope the questions makes sense now, after you know my use case)
So: Is there an environment variable to set the search path for #use and #load without resorting to a wrapper?
More Details
What I'm currently doing:
#!/usr/bin/env oscript2
#use "MyScript"
#load "SomeModule.cmo"
(* ... more ocaml *)
oscript2 is a wrapper around ocaml that basically sets the search paths for #use and #load, and finally executes the ocaml toplevel with something like
exec ocaml -I .... ...some-byte-code-modules... "$@"
MyScript and SomeModule.cmo live outside of the "normal" OCaml search path. The actual location might change, but after login (and working through the profile scripts) there is an environment variable (today it's OSCRIPTLOAD_PATH) that tells me where all loadable byte code and OCaml script files might live.
This works well; a variant of that setup has been in use for years (at least 7).
The one thing that bothers me is the wrapper: its mere presence looks homebrew, so I'd like to avoid it, to make a better impression on potential future users of the script collection. What I'd like to do:
#!/usr/bin/env ocaml
#use "MyScript"
#load "SomeModule.cmo"
(* ... more ocaml *)
and have ocaml itself pick up the search path from some environment variable (I'm free to change the variable name, that is under my control, but I don't want to install script and byte code libs into the default search path, and, as already stated, I'm asking whether I can do that without findlib).
Basically (as already stated at the very beginning) I'm looking for an environment variable that controls the search path for #use and #load. I'm not looking for toplevel directives or for wrappers that retrofit this feature. (Thanks everyone for those suggestions, but unfortunately I've already gone down that road; it's feasible, but here I'm looking for an alternative purely for cosmetic reasons.)
Recent research didn't suggest that such a variable exists, but I thought I'd ask here before giving up on it.
From inside the OCaml toplevel you can use the #directory "foo";; directive to add an include directory.
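For example, in a script or toplevel session:
(* add a directory to the #use/#load search path, then load from it *)
#directory "/path/to/my/scripts";;
#use "MyScript";;
#load "SomeModule.cmo";;
As the question notes, though, #directory only accepts a string literal, so it cannot pick the path up from the environment at run time; hence the wrapper below.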
Here's a shell script that runs the OCaml toplevel while adding a directory to the search path taken from an environment variable named EXTRA_OCAML_DIR.
#!/bin/sh
ocaml -I "$EXTRA_OCAML_DIR" "$@"
If you run this instead of ocaml, you will have a directory in the load path specified by an environment variable.
It seems a little obvious, but maybe it will spark an idea that is more helpful.
In the Erlang shell I can re-use my variables very well, like this:
1> R = "muzaaya".
"muzaaya"
2> f(R).
ok
3> R = "muzaaya2".
"muzaaya2"
So I cannot call f(Variable) in my source code, because I do not know which module this function belongs to. I have tried modules like erlang, shell, c, etc. Has anyone tried re-using variables in Erlang source code, rather than just in the shell? How did you do it? Thanks
No, you can't do this inside a module.
The REPL shell is interpreted, the code file is compiled.
The shell comes in handy to test things, but you would not write your web server in a shell. ;-)
It would be possible and not even difficult for the Erlang hackers to implement an f(V) language construct, but it would not fit the Erlang design model.
Mind you, no function could accomplish the forgetting of a variable, so it would have to be done as a new native language construct.
When compiled, the virtual machine does not know the variables anymore, as Erlang is run by a rather ordinary stack machine, not much different from the JVM.
It just would not be functional programming if one could rebind a variable V.
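A minimal sketch of the usual module-level workaround, binding fresh names instead of rebinding (module and function names are made up):
-module(example).
-export([run/0]).

run() ->
    R0 = "muzaaya",
    %% rebinding R0 = "muzaaya2" would crash with badmatch, and
    %% there is no f(R0) outside the shell; use a fresh name instead
    R1 = "muzaaya2",
    {R0, R1}.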
The functions that are listed when you type help(). in the shell are shell-only functions and cannot be used when programming Erlang. f() is one of these functions.
As others have already pointed out, f() is a shell command and only exists in the shell. That f(), and all other shell commands, look like normal function calls is because the only way to do something in Erlang is to call a function, and the shell does not introduce any new syntax. All shell commands behave like normal functions in that they always return a value.
It was not deemed necessary to be able to use f() in normal functions, although there are many who disagree and find the once only binding of variables unnecessarily restrictive.
I'm using a closed-source application that loads Lua scripts and allows some customization through modifying these scripts. Unfortunately that application is not very good at generating useful log output (all I get is 'script failed') if something goes wrong in one of the Lua scripts.
I realize that dynamic languages are pretty much resistant to static code analysis in the way that, for example, C++ code can be analyzed.
I was hoping though, there would be a tool that runs through a Lua script and e.g. warns about variables that have not been defined in the context of a particular script.
Essentially what I'm looking for is a tool that for a script:
local a
print(b)
would output:
warning: script.lua(1): local 'a' is not used
warning: script.lua(2): 'b' may not be defined
It can only really produce warnings for most things, but that would still be useful! Does such a tool exist? Or maybe a Lua IDE with a feature like that built in?
Thanks, Chris
Automated static code analysis for Lua is not an easy task in general. However, for a limited set of practical problems it is quite doable.
Quick googling for "lua lint" yields these two tools: lua-checker and Lua lint.
You may want to roll your own tool for your specific needs however.
Metalua is one of the most powerful tools for static Lua code analysis. For example, please see metalint, the tool for global variable usage analysis.
Please do not hesitate to post your question on Metalua mailing list. People there are usually very helpful.
There is also lua-inspect, which is based on the already-mentioned Metalua. I've integrated it into the ZeroBrane Studio IDE, which generates output very similar to what you'd expect. See this SO answer for details: https://stackoverflow.com/a/11789348/1442917.
For checking globals, see this lua-l posting. Checking locals is harder.
You need to find a parser for Lua (one should be available as open source) and use it to parse the script into a proper AST. Use that tree and a simple variable visibility tracker to find out when a variable is or isn't defined.
Usually the scoping rules are simple:
start with the top AST node and an empty scope
look at the child statements for that node; every variable declaration should be added to the current scope
if a new scope starts (in Lua, for example via a do, then, or function block), create a new variable scope inheriting the variables of the current scope
when a scope ends (at the matching end), discard the current child scope and return to the parent
iterate carefully
This will provide you with which variables are visible where inside the AST. You can use this information, and if you also inspect the expression AST nodes (reads/writes of variables), you can derive the warnings you're after.
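A minimal Lua sketch of that idea (the AST shape is invented for illustration, standing in for whatever a real parser would produce):
-- walk a toy AST, tracking declarations per scope and warning on
-- reads of names that were never declared
local function check(node, scope)
  if node.kind == "block" then
    -- a child scope inherits the parent's names via __index
    local child = setmetatable({}, { __index = scope })
    for _, stmt in ipairs(node.body) do
      check(stmt, child)
    end
  elseif node.kind == "local" then
    scope[node.name] = true -- a declaration enters the current scope
  elseif node.kind == "read" then
    if not scope[node.name] then
      print(("warning: '%s' may not be defined"):format(node.name))
    end
  end
end

-- corresponds to:  local a  /  print(b)
check({ kind = "block", body = {
  { kind = "local", name = "a" },
  { kind = "read",  name = "b" },
} }, {})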
I just started using luacheck and it is excellent!
The first release was from 2015.