I'm trying to map one value to another; the following is my code.
What I'm trying to do is call one API, get its values, pass them to another API, and map both sets of values at the end. I get the values from both APIs but fail to concatenate them. How can I solve this?
response = conn.get("/api/vhosts")
statistics = JSON.parse(response.body)
statistics.each do |vhosts|
  # puts "vhostname: #{vhosts["name"]}"
  response1 = conn.get("/api/aliveness-test/#{vhosts["name"]}")
  statistics1 = JSON.parse(response1.body)
  puts "#{vhosts["name"]} " + statistics1.fetch('status', :unknown)
end
Preferably, concatenate strings using <<, which is a little faster and more efficient:
puts "#{vhosts["name"]} " << statistics1.fetch('status', :unknown).to_s
The error is telling you that you are trying to concatenate a string and a symbol, so one of the two parts is a symbol, not a string. You have some options:
puts "#{vhosts["name"]} #{statistics1.fetch('status', :unknown)}"
or
puts "#{vhosts["name"]} " + statistics1.fetch('status', :unknown).to_s
I'm trying to create a pandoc filter that will help me summarize data. I've seen some filters that create a table of contents, but I'd like to organize the index based on content found within headers.
For instance, given the input below, I'd like to provide a summary of content based on tagged dates in headers (some headers will not contain dates...):
[nwatkins@sapporo foo]$ cat test.md
# 1 May 2018
some info
# not a date
some data
# 2 May 2018
some more info
I started off by trying to look at the content of the headers. The intention was to just apply a simple regex for different date/time patterns.
[nwatkins@sapporo foo]$ cat test.lua
function Header(el)
  return pandoc.walk_block(el, {
    Str = function(el)
      print(el.text)
    end })
end
Unfortunately, this applies the print statement to each space-separated string, rather than to a concatenation that would let me analyze the entire header content:
[nwatkins@sapporo foo]$ pandoc --lua-filter test.lua test.md
1
May
2018
not
...
Is there a canonical way to do this in filters? I have yet to see any helper function in the Lua filters documentation.
Update: the dev version now provides the new functions pandoc.utils.stringify and pandoc.utils.normalize_date. They will become part of the next pandoc release (probably 2.0.6). With these, you can test whether a header contains a date with the following code:
function Header (el)
  local content_str = pandoc.utils.stringify(el.content)
  if pandoc.utils.normalize_date(content_str) ~= nil then
    print 'header contains a date'
  else
    print 'not a date'
  end
end
There is no helper function yet, but we have plans to provide a pandoc.utils.tostring function in the very near future.
In the meantime, the following snippet (taken from this discussion) should help you to get what you need:
--- Convert a list of Inline elements to a string.
function inlines_tostring (inlines)
  local strs = {}
  for i = 1, #inlines do
    strs[i] = tostring(inlines[i])
  end
  return table.concat(strs)
end

-- Add a `__tostring` method to all Inline elements. Linebreaks
-- are converted to spaces.
for k, v in pairs(pandoc.Inline.constructor) do
  v.__tostring = function (inln)
    return ((inln.content and inlines_tostring(inln.content))
      or (inln.caption and inlines_tostring(inln.caption))
      or (inln.text and inln.text)
      or " ")
  end
end

function Header (el)
  header_text = inlines_tostring(el.content)
end
I receive the following error when I try to run my code:
lua:readFile.lua:7: attempt to call method 'split' (a nil value)
I am teaching myself Lua and doing some exercises. I am trying to parse out the individual values in a text file and then do stuff with them. I can open the file, and if I don't try to parse out the values, I can print the contents.
I have tried, separately:
dollars, tickets = line:split(" ")
dollars, tickets = line:split("(%w+)", " ")
Along with several other iterations I cannot recall at this point.
Here is my code:
myfile = io.open("C:\\tickets.txt", "r")
if myfile then
  print("True") -- test print
  for line in myfile:lines() do
    local dollars, tickets = unpack(line:split(" "))
    print(dollars)
  end
end
print("Done") -- test print
myfile:close()
Here is the content of the tickets.txt file in its entirety:
250 5750
100 28000
50 35750
25 18750
I am obviously missing something with the split method, but I do not know enough to know what.
Regards.
If you only want to read numbers from a file, and do not want to enforce that there are two on each line, you can use this code:
while true do
  local dollars, tickets = myfile:read("*n", "*n")
  if dollars == nil or tickets == nil then break end
  print(dollars)
end
The string library in Lua doesn't include a split function. You will have to implement one yourself (there are examples on the Lua wiki), or use Lua's pattern-matching functionality to parse out the pieces. For example, you could do something like this:
local dollars, tickets = line:match("(%d+) (%d+)")
I have a data file in TXT format, and I'd like to parse the URL field from it using the Ruby code below.
f = File.open(txt_file, "r")
f.each_line { |line|
  rows = line.split(',')
  rows[3].each do |url|
    next if url == "URL"
    puts url
  end
}
TXT contains:
name,option,price,URL
"x", "0,0,0,0,0,0", "123.40","http://domain.com/xym.jpg"
"x", "0,0,0,0,0,0", "111.34","http://domain.com/yum.jpg"
output:
0
Why does the output come from the option field "0,0,0,0,0,0"? How do I skip this and get the URL field?
Environment
ruby 1.8.7
rails 2.3.8
gem 1.3.7
I'd check out a CSV parsing tool to make this easier:
require 'rubygems'
require 'faster_csv'
FasterCSV.foreach(txt_file, :quote_char => '"',
                  :col_sep => ',', :row_sep => :auto) do |row|
  puts row[3] if row[3] != "URL"
end
Also, I think you're misunderstanding how split() works. If you run split() against one row from your file, you get back an array of columns for that single row, not the multidimensional array that rows[3].each would suggest.
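To see that concretely, here is what split(',') returns for one of the data lines from the question:

line = '"x", "0,0,0,0,0,0", "123.40","http://domain.com/xym.jpg"'
rows = line.split(',')
# => ["\"x\"", " \"0", "0", "0", "0", "0", "0\"", " \"123.40\"", "\"http://domain.com/xym.jpg\""]
rows[3]
# => "0"  (a piece of the option field, which explains the output you saw)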
EDIT: Before reading further, note that I completely agree with the answer by Jeff Swensen; I'll leave my answer here regardless.
I'm not entirely sure what your inner loop (rows[3].each) is for, because you can't convert a single line into a 'row' when you only have a single URL. You could split on the ** characters and return an Array of URLs, but then you would still need to remove the extra double quotes, or you could use a regular expression, like so:
#!/usr/bin/env ruby
f = DATA
urls = f.readlines.map do |line|
  line[/([^"]+)"\*\*/, 1]
end
urls.compact!
p urls
__END__
name ,option,price, **URL**
"x", "0,0,0,0,0,0", "123.40",**"http://domain.com/xym.jpg"**
"x", "0,0,0,0,0,0", "111.34",**"http://domain.com/yum.jpg"**
The call to compact! is needed because map inserts nil objects when a line doesn't match the expression. For details, see the Ruby documentation for String#[].
The reason that "0" is the result is that your code blindly splits on the comma character, while you seem to expect CSV-style parsing (where column values may contain the delimiter character if the entire column value is enclosed in quotes). I highly suggest using a CSV parser. If you are using Ruby 1.9.2, then you already have access to the FasterCSV library: it ships as the standard CSV library.
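For example, a minimal sketch using Ruby 1.9's built-in CSV (assuming txt_file holds the path from the question, and that the file is well-formed CSV without stray spaces before the opening quotes):

require 'csv'

CSV.foreach(txt_file) do |row|
  url = row[3]
  puts url unless url == "URL"  # skip the header row
end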
If you are sure that the fields you want are always surrounded by double quotations, you can use that as the basis for extracting rather than the comma.
File.open(txt_file) do |f|
  f.each_line do |l|
    cols = l.scan(/(?<!\\)"(.*?)(?<!\\)"/)
    cols[3].tap { |url| puts url if url }
  end
end
In your code, the opened IO is never closed, which is bad practice. It is better to use a block so that you cannot forget to close it.
The two (?<!\\)" patterns in the regex match non-escaped double quotation marks, using negative lookbehind.
.*? is a non-greedy match, which keeps a match from running past a non-escaped double quotation.
tap avoids repeating the cols[3] operation in both puts and if.
Edit again
If you use Ruby 1.8.7, you can either
update your regex engine to Oniguruma by following the easy steps here: http://oniguruma.rubyforge.org/
or
replace the regex (tap cannot be used either). Use the following instead:
File.open(txt_file) do |f|
  f.each_line do |l|
    cols = l.scan(/(?:\A|[^\\])"(.*?[^\\]|)"/)
    url = cols[3]
    puts url if url
  end
end
I would recommend using Oniguruma. It is the new regex engine introduced in Ruby 1.9, and it is much more powerful and faster than the one used in Ruby 1.8. It can be installed easily on Ruby 1.8.
The data is in CSV format, but if all you want to do is grab the last field in the string, then do just that:
text = <<EOT
name,option,price,URL
"x", "0,0,0,0,0,0", "123.40","http://domain.com/xym.jpg"
"x", "0,0,0,0,0,0", "111.34","http://domain.com/yum.jpg"
EOT
require 'pp'
text.lines.map{ |l| l.split(',').last }
If you want to clean up the double-quotes and trailing line-breaks:
text.lines.map{ |l| l.split(',').last.gsub('"', '').chomp }
# => ["URL", "http://domain.com/xym.jpg", "http://domain.com/yum.jpg"]
My client has a database of over 400,000 customers. Each customer is assigned a GUID. He wants me to select all the records, create a dynamic "short URL" which includes this GUID as a parameter, and then save this short URL to a field on each client's record.
The first question I have is: do any of the URL-shortening sites allow you to programmatically create short URLs on the fly like this?
TinyURL allows you to do it (this isn't widely documented); for example:
http://tinyurl.com/api-create.php?url=http://www.stackoverflow.com/
becomes http://tinyurl.com/6fqmtu
So you could have
http://tinyurl.com/api-create.php?url=http://mysite.com/user/xxxx-xxxx-xxxx-xxxx
to http://tinyurl.com/64dva66.
The GUID doesn't remain recognizable in the short URL, but the URLs should be unique.
Note that you'd have to pass this through an HTTPWebRequest and get the response.
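For illustration, here is a minimal sketch of that round trip in Ruby (the long URL below is a hypothetical stand-in for a real customer record):

require 'net/http'
require 'uri'

long_url = 'http://mysite.com/user/xxxx-xxxx-xxxx-xxxx'  # hypothetical GUID-based URL
api = URI('http://tinyurl.com/api-create.php')
api.query = URI.encode_www_form(:url => long_url)

short_url = Net::HTTP.get(api)  # the response body is the short URL itself
puts short_url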
You can use Google's URL shortener; they have an API.
Here are the docs for that: http://code.google.com/apis/urlshortener/v1/getting_started.html
Is this URL not sufficiently short?
http://www.clientsdomain.com/?customer=267E7DDD-8D01-4F38-A3D8-DCBAA2179609
NOTE: Personally, I think your client is asking for something strange. By asking you to create a URL field on each customer record (which will be based on the customer's GUID through a deterministic algorithm), he is in fact essentially asking you to denormalize the database.
The algorithm URL shortening sites use is very simple:
Store the URL and map it to its sequence number.
Convert the sequence number (id) to a fixed-length string.
Using just six lowercase letters for the second step will give you many more combinations (26^6) than the current application needs, and there's nothing preventing the use of a larger sequence at some point in time. You can use shorter sequences if you allow for numbers and/or uppercase letters.
The algorithm for the conversion is a base conversion (like when converting to hex), padding with whatever symbol represents zero. This is some Python code for the conversion:
LOWER = [chr(x + ord('a')) for x in range(26)]   # 'a'..'z'
DIGITS = [chr(x + ord('0')) for x in range(10)]  # '0'..'9'
MAP = DIGITS + LOWER                             # 36 symbols in total

def i2text(i, l):
    n = len(MAP)
    result = ''
    while i != 0:
        c = i % n
        result = MAP[c] + result  # prepend: most significant digit comes first
        i //= n
    padding = MAP[0] * l          # pad on the left with the zero symbol
    return (padding + result)[-l:]

print(i2text(0, 4))
print(i2text(1, 4))
print(i2text(12, 4))
print(i2text(36, 4))
print(i2text(400000, 4))
print(i2text(1600000, 4))
Results:
0000
0001
000c
0010
8kn4
yakg
Your URLs would then be of the form http://mydomain.com/myapp/short/8kn4.
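As an aside, if you happen to do the conversion in Ruby instead, base-36 encoding is already built into Integer#to_s, so the second step is a one-liner:

400000.to_s(36).rjust(4, '0')  # => "8kn4"
36.to_s(36).rjust(4, '0')      # => "0010"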