On today's Advent of Code I needed to parse strings into integers. The function I wrote for that was
function fd(s::String, fromto::UnitRange)::Bool
    try
        parse(UInt, s) in fromto
    catch ArgumentError
        false
    end
end
That function was called several times within an isvalid, which in turn was called for all inputs to count the number of valid entries.
The result was always 0 and the respective tests kept failing. Then I extracted one of the failing test cases and debugged into isvalid: it passed!
I rearranged a few things and tested more, but the same thing kept happening:
Running the code regularly, fd never returned true.
When stepping through with the debugger, I got true where expected.
After changing the parse call to tryparse:
function fd(s::String, fromto::UnitRange)::Bool
    parsed = tryparse(UInt, s)
    if isnothing(parsed)
        false
    else
        parsed in fromto
    end
end
it immediately worked every time and the exercise was solved.
Shouldn't these two versions of the function always return the same result? What happened here?
Update 1
This was the example that I used:
println("Sample is ", isvalid(Dict(
"hcl" => "#623a2f", # check with regex
"ecl" => "grn", # checked with set
"pid" => "087499704", # check with regex
"hgt" => "74in", # check with regex
"iyr" => "2012", # this was parsed
"eyr" => "2030", # this was parsed
"byr" => "1980", # this was parsed
)))
Update 2
This post only has a subset of the code. If you want to try it yourself, you can get the full file on GitHub. I also recorded a video showing the differing behavior with and without debugging.
The issue is demonstrated quite clearly by @assert fd("2010", 2000:2020) == fdt("2010", 2000:2020) failing in one scenario and not in the other.
Related
@some_instance_var = Concurrent::Hash.new
(0...some.length).each do |idx|
  fetch_requests[idx] = Concurrent::Promise.execute do
    response = HTTP.get(EXTDATA_URL)
    if response.status.success?
      ... # update @some_instance_var
    end
    # We're going to disregard GET failures here.
    puts "I'm here"
  end
end
Concurrent::Promise.all?(fetch_requests).execute.wait # let threads finish gathering all of the unique posts first
puts "how am i out already"
When I run this, the bottom line prints first, so it's not doing what I want: waiting for all the threads in the array to finish their work first. Hence I keep getting an empty @some_instance_var to work with below this code. What am I writing wrong?
Never mind, I fixed this. That setup is correct; I just had to use the splat operator * on my fetch_requests array inside all?().
Concurrent::Promise.all?(*fetch_requests).execute.wait
I guess it wanted multiple args instead of one array.
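For reference, a minimal runnable sketch of that pattern, assuming the concurrent-ruby gem (the promises below are made up for illustration): Concurrent::Promise.all? takes its promises as separate arguments, so an array has to be splatted.
require 'concurrent'

# Build a few promises that do some fake work concurrently.
promises = 3.times.map do |i|
  Concurrent::Promise.execute { sleep(0.1); i * i }
end

# all? wants varargs, hence the splat; the promise it returns is itself
# unscheduled, so execute it and wait for every child to settle.
Concurrent::Promise.all?(*promises).execute.wait

puts promises.map(&:value).inspect # => [0, 1, 4]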
I'm trying to get all Stripe::BalanceTransaction records except those that are already in my JsonStripeEvent.
What I did =>
def perform(*args)
  last_recorded_txt = REDIS.get('last_recorded_stripe_txn_last')
  txns = Stripe::BalanceTransaction.all(limit: 100, expand: ['data.source', 'data.source.application_fee'], ending_before: last_recorded_txt)
  REDIS.set('last_recorded_stripe_txn_last', txns.data[0].id) unless txns.data.empty?
  txns.auto_paging_each do |txn|
    if txn.type.eql?('charge') || txn.type.eql?('payment')
      begin
        JsonStripeEvent.create(data: txn.to_json)
      rescue StandardError => e
        Rails.logger.error "Error while saving data from stripe #{e}"
        REDIS.set('last_recorded_stripe_txn_last', txn.id)
        break
      end
    end
  end
end
But it doesn't get the new ones from the API.
Can anyone help me with this? :)
Thanks
I think it's because the way auto_paging_each works is almost opposite to what you expect :)
As you can see from its source, auto_paging_each calls Stripe::ListObject#next_page, which is implemented as follows:
def next_page(params={}, opts={})
  return self.class.empty_list(opts) if !has_more
  last_id = data.last.id

  params = filters.merge({
    :starting_after => last_id,
  }).merge(params)

  list(params, opts)
end
It simply takes the last (already fetched) item and adds its id as the starting_after filter.
So what happens:
You fetch 100 "latest" (let's say) records, ordered by descending date (default order for BalanceTransaction API according to Stripe docs)
When you then call auto_paging_each on this dataset, it takes the last record, adds its id as the starting_after filter, and repeats the query.
The repeated query returns nothing, because there is nothing newer (starting later) than the set you initially fetched.
Since no newer items are available, the iteration stops after the first step.
What you could do here:
First of all, ensure that my hypothesis is correct :) by putting breakpoint(s) inside Stripe::ListObject and checking. Then either 1) rewrite your code to use starting_after traversal logic instead of ending_before (it should then work fine with auto_paging_each), or 2) rewrite your code to control the fetching order manually.
Personally, I'd vote for (2): slightly more verbose, probably, but straightforward and "visible" control flow is better than poorly documented magic.
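A minimal sketch of option (2), reusing the Redis key, model, and Stripe call from the question; everything else (method name, loop shape) is illustrative rather than a definitive implementation, and it assumes Stripe's default newest-first ordering plus an already-populated bookmark:
def import_new_balance_transactions
  cursor = REDIS.get('last_recorded_stripe_txn_last')

  loop do
    # ending_before returns only items newer than the cursor, newest first.
    page = Stripe::BalanceTransaction.all(limit: 100, ending_before: cursor, expand: ['data.source'])
    break if page.data.empty?

    # Walk the page oldest-to-newest so the Redis bookmark always points at
    # the newest transaction that has actually been stored.
    page.data.reverse_each do |txn|
      next unless %w[charge payment].include?(txn.type)
      JsonStripeEvent.create(data: txn.to_json)
      REDIS.set('last_recorded_stripe_txn_last', txn.id)
    end

    # The newest id seen so far becomes the next cursor.
    cursor = page.data.first.id
  end
end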
I need to output some JSON for a customer in a somewhat unusual format. My app is written with Rails 5.
Desired JSON:
{
  "key": "\/Date(0000000000000)\/"
}
The timestamp value needs to have a \/ at both the start and end of the string. As far as I can tell, this seems to be a format commonly used in .NET services. I'm stuck trying to get the slashes to output correctly.
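For context, this legacy .NET ("Microsoft JSON date") format appears to wrap milliseconds since the Unix epoch in /Date(...)/; a hypothetical helper for producing the raw value, before worrying about how Rails escapes it, might look like:
def dotnet_json_date(time)
  # Literal \/ ... \/ around the millisecond timestamp.
  "\\/Date(#{(time.to_f * 1000).to_i})\\/"
end

dotnet_json_date(Time.utc(1970, 1, 1)) # => "\\/Date(0)\\/"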
I reduced the problem to a vanilla Rails 5 application with a single controller action. All the permutations of escapes I can think of have failed so far.
def index
  render json: {
    a: '\/Date(0000000000000)\/',
    b: "\/Date(0000000000000)\/",
    c: '\\/Date(0000000000000)\\/',
    d: "\\/Date(0000000000000)\\/"
  }
end
Which outputs the following:
{
  "a": "\\/Date(0000000000000)\\/",
  "b": "/Date(0000000000000)/",
  "c": "\\/Date(0000000000000)\\/",
  "d": "\\/Date(0000000000000)\\/"
}
For the sake of discussion, assume that the format cannot be changed since it is controlled by a third party.
I have uploaded a test app to Github to demonstrate the problem. https://github.com/gregawoods/test_app_ignore_me
After some brainstorming with coworkers (thanks @TheZanke), we came upon a solution that works with the native Rails JSON output.
WARNING: This code overrides some core behavior in ActiveSupport. Use at your own risk, and apply judicious unit testing!
We tracked this down to the JSON encoding in ActiveSupport. All strings are eventually encoded via ActiveSupport::JSON.encode. We needed to find a way to short-circuit that logic and simply return the unencoded string.
First we extended the EscapedString#to_json method found here.
module EscapedStringExtension
  def to_json(*)
    if starts_with?('noencode:')
      "\"#{self}\"".gsub('noencode:', '')
    else
      super
    end
  end
end

module ActiveSupport::JSON::Encoding
  class JSONGemEncoder
    class EscapedString
      prepend EscapedStringExtension
    end
  end
end
Then in the controller we add a noencode: flag to the json hash. This tells our version of to_json not to do any additional encoding.
def index
  render json: {
    a: '\/Date(0000000000000)\/',
    b: 'noencode:\/Date(0000000000000)\/',
  }
end
The rendered output shows that b gives us what we want, while a preserves the standard behavior.
$ curl http://localhost:3000/sales/index.json
{"a":"\\/Date(0000000000000)\\/","b":"\/Date(0000000000000)\/"}
Meditate on this:
Ruby treats forward-slashes the same in double-quoted and single-quoted strings.
"/" # => "/"
'/' # => "/"
In a double-quoted string "\/" means \ is escaping the following character. Because / doesn't have an escaped equivalent it results in a single forward-slash:
"\/" # => "/"
In a single-quoted string, in all cases but one, it means there's a backslash followed by the literal value of the character. That single case is when you want to represent a backslash itself:
'\/' # => "\\/"
"\\/" # => "\\/"
'\\/' # => "\\/"
This is one of the most confusing parts of dealing with strings, and it isn't restricted to Ruby; it dates from the early days of programming.
Knowing the above:
require 'json'
puts JSON[{ "key": "\/value\/" }]
puts JSON[{ "key": '/value/' }]
puts JSON[{ "key": '\/value\/' }]
# >> {"key":"/value/"}
# >> {"key":"/value/"}
# >> {"key":"\\/value\\/"}
you should be able to make more sense of what you're seeing in your results and in the JSON output above.
I think the rules for this were originally created for C, so "Escape sequences in C" might help.
Hi, I think this is the simplest way:
.gsub("/", '//').gsub('\/', '')
For the input {:key=>"\\/Date(0000000000000)\\/"} (as printed), the first gsub will produce
{"key":"\\//Date(0000000000000)\\//"}
and the second will get you
{"key":"\/Date(0000000000000)\/"}
as you needed.
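For clarity, a self-contained version of that hack (the payload hash is made up for illustration); the gsubs are applied to the JSON string produced by to_json:
require 'json'

payload = { key: '\/Date(0000000000000)\/' } # single quotes keep the literal \/
json = payload.to_json                       # => {"key":"\\/Date(0000000000000)\\/"}

# Double every forward slash, then delete one backslash+slash pair from each
# \\// group, leaving the \/ the consumer expects.
fixed = json.gsub('/', '//').gsub('\/', '')
puts fixed                                   # => {"key":"\/Date(0000000000000)\/"}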
In product.rb, I have:
def active_question_description_for_standard_element_and_index(standard, element, index)
  active_questions_for_standard_and_element(standard, element)[index - 1].try(:description)
end

def active_questions_for_standard_and_element(standard, element)
  Rails.cache.fetch([self, standard, element, "product_active_questions_for_standard_and_element"]) {
    questions_for_standard_and_element(standard, element).select { |q| q.active }
  }
end
The active_question_description_for_standard_element_and_index method is giving me this error: NoMethodError: undefined method 'description' for 0:Fixnum.
So, I start with self.touch to bypass the cache and then enter active_questions_for_standard_and_element(standard, element). This returns 0.
"Huh," I think, "that's meant to return an array, even if it's an empty array. It's not meant to return a Fixnum."
So, I try questions_for_standard_and_element(standard, element).select{|q| q.active}, and that returns [], just like you'd expect.
So, why is Rails converting [] to 0? And, how can I stop it?
UPDATE
It seems the issue has something to do with Rails.cache, because when I remove it from the method everything works. So far I don't know what the issue is, however, as writing [] to the cache works just fine; it does not convert to 0 when read back again.
[1] pry(main)> Rails.cache.write('foo', [])
Cache write: foo
Dalli::Server#connect 127.0.0.1:11211
=> true
[2] pry(main)> Rails.cache.read('foo')
Cache read: foo
=> []
UPDATE 2: FOUND THE ANSWER
Another method was writing to the same cache key. Leaving this here as a prompt to anyone else who has problems with Rails.cache, as this is certainly something worth checking in your debugging.
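A minimal sketch of that failure mode (the key below is made up for illustration): two methods that build the same cache key silently overwrite each other, so a later read can hand back a value of a completely different type.
# Written by one method, expecting to read an array back later.
Rails.cache.write(['product', 42, 'shared_key'], [])

# A different method reuses the same key with a numeric value.
Rails.cache.write(['product', 42, 'shared_key'], 0)

Rails.cache.read(['product', 42, 'shared_key']) # => 0, not []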
active_questions_for_standard_and_element may indeed return an array, but by applying [index-1] to it, you are getting an element from that array. Unless your array is meant to contain subarrays, you should not expect active_questions_for_standard_and_element(...)[index-1] to result in an array.
I am retrieving results from NCBI's online Blast tool with 'net/http' and 'uri'. To do this I have to search through an html page to check if one of the lines is "Status=WAITING" or "Status=READY". When the Blast tool has finished the status will change to ready and results will be posted on the html page.
I have a working version to check the status and then retrieve the information that I need, but it is inefficient and is broken into two methods when I believe that there could be some way to put them into one.
def waitForBlast(rid)
  get = Net::HTTP.post_form(URI.parse('http://www.ncbi.nlm.nih.gov/blast/Blast.cgi?'), {:RID => "#{rid}", :CMD => 'Get'})
  get.body.each_line { |line| (waitForBlast(rid) if line.strip == "Status=WAITING") if line[/Status=/] }
end

def returnBlast(rid)
  blast_array = Array.new
  get = Net::HTTP.post_form(URI.parse('http://www.ncbi.nlm.nih.gov/blast/Blast.cgi?'), {:RID => "#{rid}", :CMD => 'Get'})
  get.body.each_line { |line| blast_array.push(line[/<a href=#\d+>/][/\d+/]) if line[/<a href=#\d+>/] }
  return blast_array
end
The first method checks the status and is my main concern because it is recursive. I believe (and correct me if I'm wrong) that, designed as is, it takes too much computing power when all that I need is some way to recheck the results within the same method (adding in a time delay is a bonus). The second method is fine, but I would prefer if it were combined with the first somehow. Any help is appreciated.
Take a look at this implementation. This is what he does:
res = 'http://www.ncbi.nlm.nih.gov/blast/Blast.cgi?CMD=Get&FORMAT_OBJECT=SearchInfo&RID=' + @rid
while status = open(res).read.scan(/Status=(.*?)$/).to_s == 'WAITING'
  @logger.debug("Status=WAITING")
  sleep(3)
end
I think using the string scanner might be a bit more efficient than iterating over every line in the page, but I haven't looked at its implementation so I may be wrong.
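To address the original wish of folding both methods into one, here is a sketch that keeps the question's URL, parameters, and regexes; the method name and poll interval are made up, and a real version would also want to handle FAILED or UNKNOWN statuses:
require 'net/http'
require 'uri'

def fetch_blast_results(rid, poll_interval = 30)
  uri = URI.parse('http://www.ncbi.nlm.nih.gov/blast/Blast.cgi')
  body = nil

  # Poll with a delay instead of recursing until the report is ready.
  loop do
    body = Net::HTTP.post_form(uri, :RID => rid, :CMD => 'Get').body
    break if body[/Status=READY/]
    sleep(poll_interval)
  end

  # Same extraction as returnBlast: collect the numeric ids from the anchors.
  blast_array = []
  body.each_line do |line|
    blast_array.push(line[/<a href=#\d+>/][/\d+/]) if line[/<a href=#\d+>/]
  end
  blast_array
end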