I have a macro go_server that calls go_binary among others. Here's an example of it being used:
go_server(
    name = "service",
    library = ":go_default_library",  # go_binary
    args = [
        "--respPrefix", "OH HAI",
        "--port", "4040",
    ],
)
Questions:
1. The args above have an error: "OH HAI" needs to be escaped, otherwise the shell splits it into two separate arguments. I found that "'OH HAI'" works, but is there a better way, say, a function like strings.shell_escape("OH HAI") or so?
2. Could you point me to an example open-source instantiation of a Bazel rule that has complex args? I'm looking for patterns around dictionaries, string escaping, etc. Or should I use something like jsonnet for managing my args instead?
Thanks!
I'm not aware of a way to escape. To keep the string identical, it would be:
    "--respPrefix", "\"OH HAI\"",
Related
Let's say I have a rule like this.
foo(
    name = "helloworld",
    myarray = [
        ":bar",
        "//path/to:qux",
    ],
)
In this case, myarray is static.
However, I want it to be given via the CLI, like:
bazel run //:helloworld --myarray=":bar,//path/to:qux,:baz,:another"
How is this possible?
Thanks
To get exactly what you're asking for, Bazel would need to support LABEL_LIST in Starlark-defined command line flags, which are documented here:
https://docs.bazel.build/versions/2.1.0/skylark/lib/config.html
and here: https://docs.bazel.build/versions/2.1.0/skylark/config.html
Unfortunately that's not implemented at the moment.
If you don't actually need a list of labels (i.e., to create dependencies between targets), then maybe STRING_LIST will work for you.
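If STRING_LIST does suffice, here is a sketch of a Starlark string-list flag under the config API from the docs linked above (the file and provider names are mine):

# string_list_flag.bzl (hypothetical file)
MyArrayInfo = provider(fields = ["values"])  # made-up provider name

def _impl(ctx):
    # ctx.build_setting_value holds the list given on the command line.
    return [MyArrayInfo(values = ctx.build_setting_value)]

string_list_flag = rule(
    implementation = _impl,
    build_setting = config.string_list(flag = True),
)

In the BUILD file you would declare string_list_flag(name = "myarray", build_setting_default = []) and set it with bazel run //:helloworld --//:myarray=:bar,//path/to:qux (as strings, not labels).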
If you do need a list of labels, and the different possible values are known, then you can use --define, config_setting(), and select():
https://docs.bazel.build/versions/2.1.0/configurable-attributes.html
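For instance, adapting the foo example above (the condition name is mine):

config_setting(
    name = "with_extras",
    define_values = {"extras": "true"},
)

foo(
    name = "helloworld",
    myarray = select({
        ":with_extras": [":bar", "//path/to:qux", ":baz", ":another"],
        "//conditions:default": [":bar", "//path/to:qux"],
    }),
)

Then bazel run //:helloworld --define extras=true picks the longer list.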
The question is what you're really after. Passing a variable array into bazel build/run isn't really possible, not as such, and not (mostly) without (very likely unwanted) side effects. Aren't you perhaps really just looking to pass arguments directly to what is being run, i.e. to the executable itself, not to bazel?
There are a few ways you could sneak stuff in (in most cases you'd also need to come up with a syntax to pass the data on the CLI and to unpack the array in a rule), but many come at a relatively substantial price.
You can define your array in a .bzl file and load it from where the rule uses it. You can then change your build/run configuration by rewriting the .bzl content (which also makes the change obvious and traceable) and load the bits from the rule (affecting only the rules that load and use the variable). E.g., BUILD file:
load(":myarray.bzl", "myarray")
foo(
name = "helloworld",
myarray = myarray,
],
)
And you can then call your build:
$ echo 'myarray=[":bar", "//path/to:qux", ":baz", ":another"]' > myarray.bzl
$ bazel run //:helloworld
Which you can of course put in a single wrapper script, as sketched below. If this really needs to be a Bazel array, this is probably the cleanest way to do it.
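A minimal hypothetical wrapper (the script name and argument convention are mine):

#!/bin/sh
# run_helloworld.sh -- regenerate myarray.bzl from the first CLI argument,
# then run the target. Invoke e.g.:
#   ./run_helloworld.sh '[":bar", "//path/to:qux", ":baz", ":another"]'
echo "myarray=$1" > myarray.bzl
bazel run //:helloworld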
--workspace_status_command: you can collect information about your environment, add either or both of the resulting files as a dependency of your rule (depending on whether the inputs are meant to invalidate the rule's results or not, you could use the volatile or the stable status file), and process the incoming file in whatever the rule executes (at which point one would wonder why not pass the data to it as command line arguments directly). Note that if you use the stable status file, every other rule depending on it is also invalidated by any change.
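A sketch of the status-file route; the MYARRAY key is hypothetical and would be emitted by your --workspace_status_command script:

# myrule.bzl -- consume the volatile status file inside a rule. Invoke e.g.:
#   bazel build --workspace_status_command=./status.sh //:helloworld
# where status.sh prints a line like: MYARRAY :bar,//path/to:qux
def _impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".args")
    ctx.actions.run_shell(
        # ctx.version_file is volatile-status.txt; ctx.info_file is the
        # stable status file (which invalidates dependents on change).
        inputs = [ctx.version_file],
        outputs = [out],
        command = "grep '^MYARRAY ' {src} | cut -d' ' -f2- > {dst}".format(
            src = ctx.version_file.path,
            dst = out.path,
        ),
    )
    return [DefaultInfo(files = depset([out]))]

status_args = rule(implementation = _impl)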
You can do a similar thing using --action_env. From within the executable/tool/script underpinning the rule, you can directly access the defined environment variable. However, this also means the environment of every rule is affected (not just the one you're targeting); and again, why would it parse the information out of the environment rather than accept arguments on the command line?
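For illustration (MYARRAY is a hypothetical variable your tool would read from its environment):

$ bazel build --action_env=MYARRAY=':bar,//path/to:qux,:baz' //:helloworld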
There is also --define, but you would not really get direct access to its value; rather, you could select() a choice out of a set of known options.
I'm writing some logic for Redis in Lua, and almost every one of my scripts has something in common. It would be really handy to move this out into a shared function, but:
- Redis can't use Lua's require statement
- officially, you can't call another Redis function (see: https://stackoverflow.com/a/22599862/1812225)
For example, I have this snippet literally everywhere:
local prefix = "/" .. type
if typeId then
    prefix = prefix .. "(" .. typeId .. ")"
end
I'm thinking about some post-processing before feeding the scripts to Redis, but this seems like overkill...
What is the best practice to solve/reduce this problem?
Updated:
local registryKey = "/counters/set-" .. type
local updatedKey = "/counters/updated/set-" .. type
if typeId then
    redis.call("SAdd", updatedKey, name .. ":" .. typeId)
    redis.call("SAdd", registryKey, name .. ":" .. typeId)
else
    redis.call("SAdd", updatedKey, name)
    redis.call("SAdd", registryKey, name)
end
This is another code sample; it can't trivially be moved to the client side, because it invokes Redis commands and runs as part of a transaction.
Thanks!
"Hack" #1
After you SCRIPT LOAD something, you get back a sha1 hash that you can use with EVALSHA. The same sha1 value can be used to call that script from inside another script - simply call the function f_<sha1>. That said, there are some differences in how you pass the KEYS/ARGV structures when used that way.
Note that this is undocumented behavior, which means the behavior could change in a future version of Redis.
Credit for teaching me this goes to Dr. Josiah Carlson who, in turn, credits someone else (IIRC Fritzy). For more information check out his lua-call Python wrapper: https://github.com/josiahcarlson/lua-call
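A placeholder sketch of the idea (the sha1 below is fake; substitute whatever SCRIPT LOAD actually returned for your helper, and remember the KEYS/ARGV caveat above):

-- Hypothetical: a helper script was registered with
--   redis-cli SCRIPT LOAD "$(cat helper.lua)"
-- and, say, hashed to the fake sha1 below. Inside another script it is
-- visible as a global function f_<sha1>; per the caveat above, KEYS/ARGV
-- are not passed the way EVALSHA passes them.
local result = f_0123456789abcdef0123456789abcdef01234567()
return result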
"Hack" #2
Redis sandboxes Lua and puts several restrictions on it in order to maintain sanity. You could go around some of these, e.g. access _G and define your utility function there so it will be available to all scripts (like I did with https://github.com/redislabs/redis-lua-debugger).
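For example, a sketch of stashing a shared helper in the global table; make_prefix is a hypothetical name, and rawset sidesteps the protection Redis places on _G:

-- Run once (e.g. at application startup) so later scripts can call it.
rawset(_G, "make_prefix", function(type, typeId)
    local prefix = "/" .. type
    if typeId then
        prefix = prefix .. "(" .. typeId .. ")"
    end
    return prefix
end)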
However, this is also pretty risky - besides potential replication issues, this usage is untested and could therefore lead to undefined behavior (I managed to crash quite a few instances with my little script ;)).
P.S.
Both hacks require additional administrative work to ensure that these "global" scripts are actually loaded before any other script calls them.
I'd like to understand how directives in Spray work. As per the documentation:
The general anatomy of a directive is as follows:
name(arguments) { extractions =>
... // inner Route
}
My basic understanding is that in the snippet below, 32 is passed as a parameter to the method test:
test {
  32
}
However, in the directive anatomy above, it is said that arguments are passed into the inner route, which is an anonymous function.
Could someone please help me understand the syntax and the flow starting from how the arguments are extracted and passed into an inner route?
You're right that that syntax passes 32 to the function test. What you're missing is that a Directive accepts a function as an argument (remember, we're doing functional programming now so functions are values). If you wanted to write this:
path(IntNumber) { userId =>
  complete(s"Hello user $userId")
}
in a less DSL-ey fashion, you could do this:
val innerFunction: Int => Route = {userId => complete(s"Hello user $userId")}
(path(IntNumber))(innerFunction)
or even this:
def innerMethod(userId: Int): Route = complete(s"Hello user $userId")
(path(IntNumber))(innerMethod)
The mechanics of how this is actually accomplished are... complex; this method makes a Directive implicitly convertible to a function:
implicit def pimpApply[L <: HList](directive: Directive[L])(implicit hac: ApplyConverter[L]): hac.In ⇒ Route = f ⇒ directive.happly(hac(f))
This is using the "magnet pattern" to select an appropriate hac, so that it can take a function in the inner path (with an appropriate number of arguments) if the directive extracts parameters, or a value in the inner path (a plain route) if the directive doesn't extract parameters. The code looks more complicated than it is because Scala doesn't have direct support for full dependent typing, so we have to emulate it via implicits. See ApplyConverterInstances for the horrible code this necessitates :/.
The actual extracting all happens when we get an actual route, in the happly method of the specific directive. (If everything used HList everywhere, we could mostly avoid/ignore the preceding horrors). Most extract-ey directives (e.g. path) eventually call hextract:
def hextract[L <: HList](f: RequestContext ⇒ L): Directive[L] = new Directive[L] {
  def happly(inner: L ⇒ Route) = ctx ⇒ inner(f(ctx))(ctx)
}
Remember a Route is really just a RequestContext => Unit, so this returns a Route that, when passed a RequestContext (a toy model follows below):
1. runs f on it, to extract the things that need extracting (e.g. URL path components);
2. runs inner on that; inner is the function from e.g. path components to the inner route;
3. runs that inner route on the context.
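For concreteness, here is a toy model of those three steps. This is not Spray's actual code, just a self-contained sketch of the shape: a Route as a function from a context, and an extracting "directive" that threads an inner function through it, mirroring hextract.

object ToyDirective {
  case class RequestContext(pathSegment: String)
  type Route = RequestContext => Unit

  // Extracts an Int from the context and feeds it to the inner function,
  // then runs the resulting inner route on the same context.
  def extractInt(inner: Int => Route): Route =
    ctx => inner(ctx.pathSegment.toInt)(ctx)

  def main(args: Array[String]): Unit = {
    val route: Route = extractInt(userId => _ => println(s"Hello user $userId"))
    route(RequestContext("42"))  // prints: Hello user 42
  }
}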
(The following was edited in by a mod from a comment conversation):
Fundamentally it's pretty elegant, and it's great that you can see all the spray code and that it's ordinary Scala code (I really recommend reading the source when you're confused). But the "bridging" part with the ApplyConverter is complex, and there's really no way around that; it comes from trying to do full dependent types in a language that wasn't really designed for them.
You've got to remember that the spray routing DSL is a DSL; it's the kind of thing that you'd have to have as an external config file in almost any other language. I can't think of a single web framework that offers the same flexibility in routing definitions that spray does with complete compile-time type safety. So yes, some of the things spray does are complex - but as the quote goes, easy things should be easy and hard things should be possible. All the scala-level things are simple; spray is complex, but it would be even more complex (unusably so) in another language.
I see that { } is used for closures, and I believe that when a $ is put in front of braces, it simply does a variable substitution within a string. I can't find the documentation on how $ works in the reference (it's hard to search for, unfortunately), and the Groovy string documentation is lacking in introducing it. Can you please point me to the documentation and/or explain the "$" operator in Groovy and how it can be used? Does Grails extend it at all beyond Groovy?
In a GString (Groovy string), any valid Groovy expression can be enclosed in ${...}, including method calls etc.
This is detailed in the following page.
Grails does not extend the usage of $ beyond Groovy. Here are two practical usages of $:
String Interpolation
Within a GString you can use $ without {} to evaluate a property path, e.g.
def date = new Date()
println "The time is $date.time"
If you want to evaluate an expression which is more complex than a property path, you must use ${}, e.g.
println "The time is ${new Date().getTime()}"
Dynamic Code Execution
Dynamically accessing a property
def prop = "time"
new Date()."$prop"
Dynamically invoking a method
def prop = "toString"
new Date()."$prop"()
As pointed out in the comments, this is really just a special case of string interpolation, because the following is also valid:
new Date().'toString'()
$ is not an operator in Groovy. In string substitution it identifies variables within the string - there's no magic there. It's a common format used for inline variables in many template and programming languages.
All special Groovy operators are listed here: http://groovy-lang.org/operators.html
Works inside a Jenkinsfile in a pipeline:
#!/usr/bin/env groovy
node {
    stage('print') {
        def DestPath = "D\$\\"
        println("DestPath:${DestPath}")
    }
}
Pursuant to this question:
Redefining Commands in a New Environment
How does one redefine (or define using \def) a macro that uses parameters? I keep getting an "illegal parameter definition in \foo" error. Since I require custom delimiters, I can't use \newcommand or \renewcommand.
A general form of my code looks like this:
\newenvironment{foo}{%
  ...spacing stuff and counter defs...
  \def\fooitem#1. #2\endfooitem{%
    ...stuff...
  }
  \def\endfooitem{endfoo}
}
{...after material (spacing)...}
This must be possible. Right now I'm using plain-TeX definitions (as I mentioned in the question above) but I'd really like to be consistent with the LaTeX system.
You need to double the # characters for every nested definition. Internally, a \newcommand or a \newenvironment is calling \def.
\newenvironment{foo}{%
  ...
  \def\fooitem##1. ##2\endfooitem{%
    ...
Besides that, this is the way to do what you're trying to do; there is no pure-LaTeX method to define a macro with delimited arguments.
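For a self-contained illustration, here is a compilable sketch; the item formatting in the body is made up, and only the doubled ##1/##2 is the actual fix:

\documentclass{article}
\newenvironment{foo}{%
  % ## because this \def is nested inside \newenvironment's definition text.
  % Argument 1 is delimited by ". "; argument 2 by the \endfooitem token.
  \def\fooitem##1. ##2\endfooitem{%
    \par\noindent\textbf{##1.}\ ##2\par
  }%
}{}

\begin{document}
\begin{foo}
\fooitem 1. First item text\endfooitem
\end{foo}
\end{document}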