Is there a way in OCaml to retrieve all available environment variables?
The OCaml stdlib provides the following in the Sys module:
val getenv : string -> string
But that only retrieves the value of a single variable (raising Not_found if it is unset). Is there a way to list all the variables in the environment?
You need to use Unix.environment from the unix library (distributed with the OCaml system). It returns the whole environment as an array of strings of the form "KEY=value". Example:
> ocaml unix.cma
OCaml version 4.03.0
# Unix.environment ();;
- : string array = ...
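Since each entry in that array has the form "KEY=value", a small sketch (assuming keys themselves contain no '=') can split the entries into name/value pairs and print them; link against the unix library when compiling (e.g. ocamlfind ocamlopt -package unix -linkpkg):

```ocaml
(* List all environment variables as (name, value) pairs.
   Each entry from Unix.environment has the form "KEY=value". *)
let env_pairs () =
  Unix.environment ()
  |> Array.to_list
  |> List.map (fun entry ->
       match String.index_opt entry '=' with
       | Some i ->
           (String.sub entry 0 i,
            String.sub entry (i + 1) (String.length entry - i - 1))
       | None -> (entry, ""))  (* entry without '=': keep it as a bare name *)

let () =
  List.iter (fun (k, v) -> Printf.printf "%s = %s\n" k v) (env_pairs ())
```

The helper name env_pairs is just for illustration; the only library call involved is Unix.environment itself.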
Elixir source may be injected using Code.eval_string/3. I don't see mention of running raw Erlang code in the docs:
https://hexdocs.pm/elixir/Code.html#eval_string/3
I am coming from the Scala world, in which Java objects are callable using Scala syntax, and Scala compiles to Java bytecode that can be inspected by intercepting the compiler output (generated directly with scalac).
I get the sense that Elixir does not provide such interoperating features, nor allow injection of custom Erlang into the runtime. Is this the case?
You can use the Erlang standard library modules from Elixir, as described here or here.
For example:
def random_integer(upper) do
  :rand.uniform(upper) # rand is an Erlang library module
end
You can also add Erlang packages to your mix.exs dependencies and use them in your project, as long as those packages are published on Hex or on GitHub.
You can also use Erlang and Elixir code together in a single project, as described here.
So yeah, it's perfectly possible to call Erlang code from Elixir.
The reverse is also possible; see here for more information:
Elixir compiles into BEAM byte code (via Erlang Abstract Format). This
means that Elixir code can be called from Erlang and vice versa,
without the need to write any bindings.
Expanding on what @zwippie has written:
All remote function calls (by that I mean calls with an explicitly set module/alias) take the form:
<atom with module name>.<function name>(<arguments>)
# Technically it is the same as:
# apply(module, function_name_as_atom, [arguments])
And all "upper case module names" in Elixir are just atoms:
is_atom(Foo) == true
Foo == :"Elixir.Foo" # => true
So from Elixir's viewpoint there is no difference between calling an Erlang function and calling an Elixir function; it is just a different atom passed as the receiving module.
So you can easily call Erlang modules from Elixir. That means that, without much hassle, you should be able to parse and evaluate Erlang source from within Elixir as well:
"rand:uniform(100)"
|> :merl.quote()
|> :erl_eval.expr(#{})
No need for any mental translation.
Additionally, you can mix Erlang and Elixir code in a single Mix project without any problems, with a tree structure like:
.
|-- mix.exs
|-- src
|   `-- example.erl
`-- lib
    `-- example.ex
Where example.erl is:
-module(example).
-export([hello/0]).
hello() -> <<"World">>.
And example.ex:
defmodule Example do
  def print_hello, do: IO.puts(:example.hello())
end
You can then compile the project and run it with:
mix run -e "Example.print_hello()"
You will see that the Erlang module is compiled and executed from within Elixir code in the same project without problems.
One more thing to watch for when calling Erlang code from Elixir: Erlang uses charlists for strings. When you call an Erlang function that takes a string, convert the Elixir string (a binary) to a charlist first, and convert any returned charlist back to a string.
Examples:
iex(17)> :string.to_upper "test"
** (FunctionClauseError) no function clause matching in :string.to_upper/1

    The following arguments were given to :string.to_upper/1:

        # 1
        "test"

    (stdlib 3.15.1) string.erl:2231: :string.to_upper/1
iex(17)> "test" |> String.to_charlist() |> :string.to_upper
'TEST'
iex(18)> "test" |> String.to_charlist() |> :string.to_upper |> to_string
"TEST"
iex(19)>
I'm new to the waf build tool, and my searches have turned up only a few unhelpful links.
Does anyone know how to detect the operating system from a wscript?
As a wscript is essentially a Python script, I suppose I could use the os package?
Don't use the os module; instead use the DEST_* variables:
ctx.load('compiler_c')
print (ctx.env.DEST_OS, ctx.env.DEST_CPU, ctx.env.DEST_BINFMT)
On my machine this would print ('linux', 'x86_64', 'elf'). Then you can dispatch on that.
You can use import anywhere you could use it in any other Python script.
I prefer using the platform module to write OS-agnostic functions instead of evaluating attributes of os.
Written OS-agnostically, the build-related commands example from the waf book could look something like this:
import platform

top = '.'
out = 'build_directory'

def configure(ctx):
    pass

def build(ctx):
    if platform.system().lower().startswith('win'):
        cp = 'copy'
    else:
        cp = 'cp'
    ctx(rule=cp + ' ${SRC} ${TGT}', source='foo.txt', target='bar.txt')
Is it possible to get the operating system in maxima? I have some code that needs the unix / or windows \ for path names. How can I find out which operating system the code is running in?
To give some context, I have the following code:
windows: false$
divider: "/"$
if (windows) then divider: "\\"$
initfile: concat(maxima_userdir, divider, "maxima-init.mac");
load(operatingsystem)$
dir: getcurrentdirectory();
if (substring(dir, slength(dir)) # divider) then dir: concat(dir, divider)$
repo: concat(dir, "$$$.mac")$
live: concat(dir, "live_packages", divider, "$$$.mac")$
with_stdout(initfile, printf(true, ""))$
with_stdout(initfile, printf(true, concat("file_search_maxima: append (file_search_maxima, [
~s,
~s
]);"), repo, live))$
Take a look at the output of build_info, specifically the field host (i.e. foo@host where foo: build_info()). See ? build_info for more information.
On my (Linux) system I get x86_64-unknown-linux-gnu. I think on MS Windows you'll get a string containing windows, or at least win or maybe win32.
There may be other ways to figure out the system type so let me know if that doesn't work for you. Also it is possible that there is a global variable floating around which tells the path separator; I would have to look for that.
If you're not averse to writing a little bit of Lisp code, another approach is to use the file and directory functions in Common Lisp, which are more extensive than those in Maxima. See the section on filenames in the Common Lisp Hyperspec. I think MERGE-PATHNAMES and/or MAKE-PATHNAME might be relevant.
I'm working on implementing a feature like Strict Java Deps for rules_scala.
I'd really like to have the ability to configure in runtime if this uses warn or error.
I seem to recall skylark rules can't create and access command-line flags but I don't recall if they can access existing ones?
The main difference is that existing flags are already parsed, so maybe they are also passed in through some ctx.
The flag you want (strict_java_deps) isn't available through Skylark at the moment. There's no reason we can't add it, though, filed #3295 to track.
For other flags, the context can access the configuration fragments, which can access some of the parsed command line flags. I think what you'd want is ctx.fragments, then use the fragments to get the java fragments, and then get the default_javac_flags from that:
# rules.bzl
def _impl(ctx):
    print("flags: %s" % ctx.fragments.java.default_javac_flags)
    ...

frag = rule(
    implementation = _impl,
    fragments = ["java"],  # Declare that this rule uses java fragments
)
Then:
$ bazel build --javacopt="-g:source,lines" :x
WARNING: /home/kchodorow/test/a/tester.bzl:2:3: flags: ["-g:source,lines"].
I'm writing a program that has both a command-line interface and an interactive mode. In CLI mode it executes one command, prints results and exits. In interactive mode it repeatedly reads commands using GNU readline, executes them and prints results (in spirit of a REPL).
The syntax for commands and their parameters is almost the same regardless of whether they come from the command line or from stdin. I would like to maximize code reuse by using a single framework for parsing both command-line and interactive-mode inputs.
My proposed syntax is (square brackets denote optional parts, braces repetition) as follows:
From shell:
program-name {[GLOBAL OPTION] ...} <command> [{<command arg>|<GLOBAL OPTION>|<LOCAL OPTION> ...}]
In interactive mode:
<command> [{<command arg>|<GLOBAL OPTION>|<LOCAL OPTION> ...}]
Local options are only valid for one particular command (different commands may assign a different meaning to one option).
My problem is that there are some differences between the CL and interactive interfaces:
Some global options are only valid on the command line (like --help, --version or --config-file). There is obviously also the quit command, which is essential in interactive mode but makes no sense on the command line.
To solve this I've searched the web and Hackage for command-line parsing libraries. The most interesting ones I've found are cmdlib and optparse-applicative. However, I'm quite new to Haskell, and even though I can create a working program by copying and modifying example code from the library docs, I haven't quite understood the mechanics of these libraries and therefore haven't been able to solve my problem.
I have these questions in mind:
How to make a base parser for commands and options that are common to CL and REPL interfaces and then be able to extend the base parser with new commands and options?
How to prevent these libraries from exiting my program upon incorrect input or when '--help' is used?
I plan to add complete i18n support to my program. Therefore I would like to prevent my chosen library from printing any messages, because all messages need to be translated. How to achieve this?
So I hope you can give me some hints on where to go from here. Does cmdlib or optparse-applicative (or some other library) support what I'm looking for, or should I fall back to a hand-crafted parser?
I think you could use my library http://hackage.haskell.org/package/options to do this. The subcommands feature exactly matches the command flag parsing behavior you're looking for.
It'd be a little tricky to share subcommands between two disjoint sets of options, but a helper typeclass should be able to do it. Rough sample code:
-- A type for options shared between CLI and interactive modes.
data CommonOptions = CommonOptions
  { optSomeOption :: Bool
  }

instance Options CommonOptions where ...

-- A type for options only available in CLI mode (such as --version or --config-file)
data CliOptions = CliOptions
  { common :: CommonOptions
  , version :: Bool
  , configFile :: String
  }

instance Options CliOptions where ...

-- If a command takes only global options, it can use this subcommand option type.
data NoOptions = NoOptions

instance Options NoOptions where
  defineOptions = pure NoOptions

-- Typeclass to let commands available in both modes access common options
class HasCommonOptions a where
  getCommonOptions :: a -> CommonOptions

instance HasCommonOptions CommonOptions where
  getCommonOptions = id

instance HasCommonOptions CliOptions where
  getCommonOptions = common

commonCommands :: HasCommonOptions a => [Subcommand a (IO ())]
commonCommands = [... {- your commands here -} ...]

cliCommands :: HasCommonOptions a => [Subcommand a (IO ())]
cliCommands = commonCommands ++ [cmdRepl]

interactiveCommands :: HasCommonOptions a => [Subcommand a (IO ())]
interactiveCommands = commonCommands ++ [cmdQuit]

cmdRepl :: HasCommonOptions a => Subcommand a (IO ())
cmdRepl = subcommand "repl" $ \opts NoOptions -> do
  {- run your interactive REPL here -}

cmdQuit :: Subcommand a (IO ())
cmdQuit = subcommand "quit" (\_ NoOptions -> exitSuccess)
I suspect the helper functions like runSubcommand wouldn't be specialized enough, so you'll want to invoke the parser with parseSubcommand once you've split up the input string from the REPL prompt. The docs have examples of how to inspect the parsed options, including checking whether the user requested help.
The options parser itself won't print any output, but it may be difficult to internationalize error messages generated by the default type parsers. Please let me know if there's any changes to the library that would help.