I have a job that receives a hash argument in its perform method. I want to call it periodically, so I defined a cron entry to schedule it in the resque_schedule.yml file. I am trying it this way:
UpdateInterestHistoryJob:
  cron: "0 0 * * * America/Sao_Paulo"
  args:
    classifier: :SIAPE
However, inside the job, I get the arguments as an array:
["classifier", "SIAPE"]
How do I define it correctly? That is, how do I define the job argument as a hash in the .yml file?
I just tested here and a simple dash should be enough:
UpdateInterestHistoryJob:
  cron: "* * * * * America/Sao_Paulo"
  args:
    - classifier: :SIAPE
Also, should you need more arguments in your Resque job, simply place them without further dashes:
UpdateInterestHistoryJob:
  cron: "* * * * * America/Sao_Paulo"
  args:
    - classifier: :SIAPE
      another: value
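With the dash in place, resque-scheduler hands the whole mapping to the job as a single hash argument. As a rough sketch of the receiving side (the queue name and method body below are placeholders, not taken from the question), the job would look something like:
class UpdateInterestHistoryJob
  @queue = :default  # placeholder queue name

  # `options` arrives as one Hash; note that after the JSON round-trip
  # Resque does on arguments, the keys come back as strings.
  def self.perform(options)
    classifier = options["classifier"]
    # ... update the interest history for `classifier` (omitted)
  end
end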
I see a discrepancy between your hash and the one included in this example from Ceilingfish:
You can mark it up like this:
feeds:
 -
  url: 'http://www.google.com'
  label: 'default'
Note the spacing is important here. "-" must be indented by a single space (not a tab), and followed by a single space. And url & label must be indented by two spaces (not tabs either).
Additionally this might be helpful: http://www.yaml.org/YAML_for_ruby.html
This is from www.yaml.org:
Simple Inline Hash
Mapping can also be contained on a single line, using the inline syntax. Each key-value pair is separated by a colon, with a comma between each entry in the mapping. Enclose with curly braces.
YAML:
hash: { name: Steve, foo: bar }
Ruby:
{ 'hash' => { 'name' => 'Steve', 'foo' => 'bar' } }
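As a quick sanity check (this snippet is mine, not from the linked page), loading that inline form with Ruby's YAML module returns exactly the nested hash shown above:
require 'yaml'

YAML.load("hash: { name: Steve, foo: bar }")
# => {"hash"=>{"name"=>"Steve", "foo"=>"bar"}}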
I also include this link to the official YAML syntax, which has a lot of explanation about this.
Convert Ruby Hash into YAML: https://codedump.io/share/w2EriSJ0wO7T/1/convert-ruby-hash-into-yaml
I need to output some JSON for a customer in a somewhat unusual format. My app is written with Rails 5.
Desired JSON:
{
  "key": "\/Date(0000000000000)\/"
}
The timestamp value needs to have a \/ at both the start and end of the string. As far as I can tell, this seems to be a format commonly used in .NET services. I'm stuck trying to get the slashes to output correctly.
I reduced the problem to a vanilla Rails 5 application with a single controller action. All the permutations of escapes I can think of have failed so far.
def index
  render json: {
    a: '\/Date(0000000000000)\/',
    b: "\/Date(0000000000000)\/",
    c: '\\/Date(0000000000000)\\/',
    d: "\\/Date(0000000000000)\\/"
  }
end
Which outputs the following:
{
  "a": "\\/Date(0000000000000)\\/",
  "b": "/Date(0000000000000)/",
  "c": "\\/Date(0000000000000)\\/",
  "d": "\\/Date(0000000000000)\\/"
}
For the sake of discussion, assume that the format cannot be changed since it is controlled by a third party.
I have uploaded a test app to Github to demonstrate the problem. https://github.com/gregawoods/test_app_ignore_me
After some brainstorming with coworkers (thanks #TheZanke), we came upon a solution that works with the native Rails JSON output.
WARNING: This code overrides some core behavior in ActiveSupport. Use at your own risk, and apply judicious unit testing!
We tracked this down to the JSON encoding in ActiveSupport. All strings are eventually encoded via ActiveSupport::JSON.encode. We needed to find a way to short-circuit that logic and simply return the unencoded string.
First we extended the EscapedString#to_json method found here.
module EscapedStringExtension
  def to_json(*)
    if starts_with?('noencode:')
      "\"#{self}\"".gsub('noencode:', '')
    else
      super
    end
  end
end

module ActiveSupport::JSON::Encoding
  class JSONGemEncoder
    class EscapedString
      prepend EscapedStringExtension
    end
  end
end
Then in the controller we add a noencode: flag to the json hash. This tells our version of to_json not to do any additional encoding.
def index
  render json: {
    a: '\/Date(0000000000000)\/',
    b: 'noencode:\/Date(0000000000000)\/',
  }
end
The rendered output shows that b gives us what we want, while a preserves the standard behavior.
$ curl http://localhost:3000/sales/index.json
{"a":"\\/Date(0000000000000)\\/","b":"\/Date(0000000000000)\/"}
Meditate on this:
Ruby treats forward-slashes the same in double-quoted and single-quoted strings.
"/" # => "/"
'/' # => "/"
In a double-quoted string "\/" means \ is escaping the following character. Because / doesn't have an escaped equivalent it results in a single forward-slash:
"\/" # => "/"
In a single-quoted string, in all cases but one, it means there's a backslash followed by the literal value of the character. The single exception is when you want to represent a backslash itself:
'\/' # => "\\/"
"\\/" # => "\\/"
'\\/' # => "\\/"
Learning this is one of the most confusing parts of dealing with strings, and it isn't restricted to Ruby; it's something from the early days of programming.
Knowing the above:
require 'json'
puts JSON[{ "key": "\/value\/" }]
puts JSON[{ "key": '/value/' }]
puts JSON[{ "key": '\/value\/' }]
# >> {"key":"/value/"}
# >> {"key":"/value/"}
# >> {"key":"\\/value\\/"}
you should be able to make more sense of what you're seeing in your results and in the JSON output above.
I think the rules for this were originally created for C, so "Escape sequences in C" might help.
Hi, I think this is the simplest way:
.gsub("/",'//').gsub('\/','')
For the input {:key=>"\\/Date(0000000000000)\\/"} (as printed), applied to its JSON output:
the first gsub will give {"key":"\\//Date(0000000000000)\\//"}
and the second will get you
{"key":"\/Date(0000000000000)\/"}
as you needed.
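A minimal sketch of that chain in plain Ruby (the hash literal is just an illustration of the value from the question, not part of the original answer):
require 'json'

payload = { key: "\\/Date(0000000000000)\\/" }  # the value holds \/Date(...)\/ literally
json = payload.to_json
puts json                                 # {"key":"\\/Date(0000000000000)\\/"}
puts json.gsub("/", "//").gsub('\/', '')  # {"key":"\/Date(0000000000000)\/"}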
I am having a problem with variables inside tables. This is essential since I use tables as configuration for my program.
So I have tested the following code, which works:
> x = "X"
> t = {["ref"]="table with value: "..x}
> print(t["ref"])
table with value: X
> x = "Y"
> t = {["ref"]="table with value: "..x}
> print(t["ref"])
table with value: Y
It doesn't work, however, without the second > t = {["ref"]="table with value: "..x}.
Now I went to implement this in my main program, which consists of two files: one which returns the configuration table, and one with all the functions and stuff. It looks as follows:
FILE A (main.lua):
testString = "test1"
print(testString)
local CONFIG = require'config'
print(CONFIG[1].test)
testString = "test2"
print(testString)
local CONFIG = require'config'
print(CONFIG[1].test)
FILE B (config.lua):
local CONFIG = {
  {["test"]=[[this is a test: ]]..testString}
}
return CONFIG
Now when I run file A (a.k.a. main.lua) I get the following output:
test1
this is a test: test1
test2
this is a test: test1
I can't figure out what I am doing wrong here. I thought it had something to do with it being a single string, so I made testString a table, but that gave me the same result...
(that title really seems scary.. sorry)
require, by design, caches the return value. So if you call require with the same string, it will not execute the script again. It will simply return the previously returned value.
require is for loading modules. And modules should not change their return values based on other global state.
The function you're probably looking for is dofile. This will always load and execute the file (but it has none of the path searching properties of require). Alternatively, you can use loadfile to load the file as a function, then execute that function to regenerate the table whenever you want.
Also:
I am having a problem with variables inside tables.
There are no "variables inside tables". Or at least not the way you mean. Expecting a change to a variable to affect some other value is like expecting this:
a = 5
b = a + 5
a = 10
assert(b == 15, "This will never be true.")
When an expression (whether a + 5 or "table with value: " .. x) is evaluated, it results in a value. The resulting value is in no way dependent on any value or variable from the expression that generated it.
That's why you had to regenerate the value; because values don't change just because some variable changes.
I have a config file parser written in lua.
I'd like to detect values that are environment variables and change them with os.getenv.
It's probably a bit ambitious because I can have values like
"a string with an embedded ${VARIABLE} in here"
or
"another string with an env $VARIABLE"
And I should probably allow escaping them with double $$ to allow a literal $.
How do I do this?
This is what I have so far, but it isn't right
local envvar = string.match(value, "%$([%w_]+)")
if envvar then
  print("Envvar=", envvar)
  value = value:gsub("(%$[%w_]+)", os.getenv(envvar))
end
For example, I can't figure out how to use the %b balance option here to properly match { } combinations, and how to make them optional. How do I make this work robustly?
In fact, I realise it's probably more complicated than this. What if more than one environment variable was specified?
local text = [[
Example: ${LANG}, $TEXTDOMAINDIR, $$10.00, $$LANG, $UNDEFINED
Nested braces: {{${SHELL}}}
]]
text = text:gsub('$%$','\0')
           :gsub('${([%w_]+)}', os.getenv)
           :gsub('$([%w_]+)', os.getenv)
           :gsub('%z','$')
print(text)
--> Example: en_US.UTF-8, /usr/share/locale/, $10.00, $LANG, $UNDEFINED
--> Nested braces: {{/bin/bash}}
I am looking at a situation where I'd like to bring some structure to what would be a string in a typical language, and I'm wondering how to use Rebol's parts box to do it.
So let's say I've got a line that looks like this in the original language I'm trying to dialect:
something = ("/foo/mumble" "/foo/${BAR}/baz")
I want to use Rebol's primitives, so certainly a file path. Here is a random example of what I thought of off the top of my head:
something: [%/foo/mumble [%/foo/ BAR %/baz]]
If it were code you'd use REJOIN or COMBINE. But this is not designed to be executed; it's more like a configuration file. You're not supposed to be running arbitrary code, just getting a list of files.
I'm not sure how feasible it is to stick with strings and yet still have these typed as FILE!. Not all characters work in a FILE!, for instance:
>> load "%/foo/${BAR}/baz"
== [%/foo/$ "BAR" /baz]
It makes me wonder what my options are in Rebol data that's supposed to represent a configuration file. I can use plain old strings and do substitutions like other things do. Maybe REWORD with an OBJECT block to represent the environment?
What is the 'reword' function in Rebol and how do I use it?
In any case, I want to know how to represent a filename in a declarative context with environment variable substitutions like this.
You should use file! Your example needs "" after the %:
f: load {%"/foo/${BAR}/baz"}
replace f "${BAR}" "MYVALUE" ;== %/foo/MYVALUE/baz
You could use path! with parens.
The only issue is the root, for which you can use another character to replace the "%" used for files... let's use '! (note this should be a word-valid character).
When calling to-block on a path! value, it returns each part as its own token... useful.
to-block '!/path/(foo)/file.txt
== [! path (foo) file.txt]
Here is a little script which loads three paths, uses parens as a constructed part of the path, and uses tags to escape path-illegal characters (like a space!):
environments: make object! [
    foo: "FU"
    bar: "BR"
]

paths: [
    !/path/(foo)/file.txt
    !/root/<escape weird chars $>/(bar ".txt")
    !/("__" foo)/path/(bar)
]
parse paths [
    some [
        (print "------")
        set data path! here: (insert/only here to-block data to-block data)
        (out-path: copy %"")
        into [
            path-parts: (?? path-parts)
            '!
            some [
                [set data [word! | tag! | number!] (
                    append out-path rejoin ["/" to-string data]
                )]
                |
                into [
                    (append out-path "/")
                    some [
                        set data word! (append out-path rejoin [to-string get in environments data])
                        | set data skip (append out-path rejoin [to-string data])
                    ]
                ]
                | here: set data skip (to-error rejoin ["invalid path token (" type? data ") here: " mold here])
            ]
        ]
        (?? out-path)
    ]
]
Note this works both in Rebol3 and Rebol2
output is as follows:
------
path-parts: [! path (foo) file.txt]
out-path: %/path/FU/file.txt
------
path-parts: [! root <escape weird chars $> (bar ".txt")]
out-path: %/root/escape%20weird%20chars%20$/BR.txt
------
path-parts: [! ("__" foo) path (bar)]
out-path: %/__FU/path/BR
------
I need to load a YAML file into a Hash.
What should I do?
I would use something like:
hash = YAML.load(File.read("file_path"))
A simpler version of venables' answer:
hash = YAML.load_file("file_path")
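For example, given a small file (the file name and contents here are only an illustration), either form returns a plain Hash:
require 'yaml'

# config.yml (hypothetical):
#   one: 1
#   two: 2
hash = YAML.load_file("config.yml")
hash["one"]  # => 1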
Use the YAML module:
http://ruby-doc.org/stdlib-1.9.3/libdoc/yaml/rdoc/YAML.html
node = YAML::parse( <<EOY )
one: 1
two: 2
EOY
puts node.type_id
# prints: 'map'
p node.value['one']
# prints key and value nodes:
# [ #<YAML::YamlNode:0x8220278 #type_id="str", #value="one", #kind="scalar">,
# #<YAML::YamlNode:0x821fcd8 #type_id="int", #value="1", #kind="scalar"> ]'
# Mappings can also be accessed for just the value by accessing as a Hash directly
p node['one']
# prints: #<YAML::YamlNode:0x821fcd8 #type_id="int", #value="1", #kind="scalar">
http://yaml4r.sourceforge.net/doc/page/parsing_yaml_documents.htm
You may run into a problem mentioned at this related question, namely, that the YAML file or stream specifies an object into which the YAML loader will attempt to convert the data. The problem is that you will need a related Gem that knows about the object in question.
My solution was quite trivial and is provided as an answer to that question. Do this:
yamltext = File.read("somefile")
yamltext.sub!(/^--- \!.*$/,'---')
hash = YAML.load(yamltext)
In essence, you strip the object-classifier text from the yaml-text. Then you parse/load it.
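A minimal sketch of the same idea against an inline string (the !ruby/object tag and class name are hypothetical, just to show the shape of the problem):
require 'yaml'

yamltext = <<~YAML
  --- !ruby/object:SomeGem::Config
  name: example
YAML

yamltext.sub!(/^--- !.*$/, '---')  # drop the object-classifier tag from the first line
hash = YAML.load(yamltext)         # => {"name"=>"example"}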