With Ruby on Rails, I am running a Selenium service and stepping through a web page many times.
I'm having a bit of trouble because I sometimes need it to wait for the 'button' to become active (i.e. no longer in a disabled state).
# wait for a specific element to show up
wait = Selenium::WebDriver::Wait.new(:timeout => 10) # seconds
wait.until { driver.find_element(:id => "foo") }
So I am trying to use the wait.until method, but I can't get the syntax right. I tried this:
```
def css_removed_timeout(css)
  wait = Selenium::WebDriver::Wait.new(timeout: 10)
  wait.until { !find_css(css).displayed? }
end
```
Is there any way to detect 'not displayed'? (Is there an 'is_empty' equivalent in Ruby?)
There's an .empty? method on Arrays.
If you include ActiveSupport, there's a .blank? method which works on arrays, strings, and hashes.
More simply, you could check whether array.length == 0.
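Bringing this back to the original Selenium question: driver.find_elements returns an Array, so .empty? slots naturally into a wait block, and Element#enabled? covers the "not disabled" case. A minimal sketch, assuming driver is a Selenium::WebDriver instance; the selectors are made up:
```
wait = Selenium::WebDriver::Wait.new(timeout: 10)

# Wait until the button is no longer disabled (enabled? flips to true).
wait.until { driver.find_element(css: "#submit-button").enabled? }

# Wait until nothing matches the selector any more;
# find_elements returns an Array, so empty? works here.
wait.until { driver.find_elements(css: ".spinner").empty? }
```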
I was trying to DRY up a Rails controller by extracting a method that includes a guard clause to return prematurely from the controller method in the event of an error. I thought this may be possible using a to_proc, like this pure Ruby snippet:
def foo(string)
  processed = method(:breaker).to_proc.call(string)
  puts "This step should not be executed in the event of an error"
  processed
end

def breaker(string)
  begin
    string.upcase!
  rescue
    puts "Well you messed that up, didn't you?"
    return
  end
  string
end
My thinking was that having called to_proc on the breaker method, calling the early return statement in the rescue clause should escape the execution of foo. However, it didn't work:
2.4.0 :033 > foo('bar')
This step should not be executed in the event of an error
=> "BAR"
2.4.0 :034 > foo(2)
Well you messed that up, didn't you?
This step should not be executed in the event of an error
=> nil
Can anyone please:
1. explain why this doesn't work, and
2. suggest a way of achieving this effect?
Thanks in advance.
EDIT: as people are wondering why the hell I would want to do this, the context is that I'm trying to DRY up the create and update methods in a Rails controller. (I'm trying to be aggressive about it, as both methods are about 60 LoC. Yuck.) Both methods feature a block like this:
some_var = nil
if (some complicated condition)
  # do some stuff
  some_var = computed_value
elsif (some marginally less complicated condition)
  @error_msg = 'This message is the same in both actions.'
  render partial: "show_user_the_error" and return
end
# rest of controller actions ...
Hence, I wanted to extract this as a block, including the premature return from the controller action. I thought this might be achievable using a Proc, and when that didn't work I wanted to understand why (which I now do thanks to Marek Lipa).
What about
def foo(string)
  processed = breaker(string)
  puts "This step should not be executed in the event of an error"
  processed
rescue ArgumentError
end

def breaker(string)
  begin
    string.upcase!
  rescue
    puts "Well you messed that up, didn't you?"
    raise ArgumentError.new("could not call upcase! on #{string.inspect}")
  end
  string
end
After all, this is arguably a pretty good use case for an exception.
It seems part of the confusion is that a Proc (or a lambda, for that matter) is distinctly different from a closure (block).
Even if you could convert the result of Method#to_proc to a standard Proc (e.g. one built with Proc.new), this would simply result in a LocalJumpError, because the return would be invalid in that context.
You can use next to break out of a standard Proc, but the result would be identical to the lambda that you have now.
The reason Method#to_proc returns a lambda is that a lambda is far more representative of a method call than a standard Proc.
For example:
def foo(string)
  string
end

bar = ->(string) { string } # lambda
baz = Proc.new { |string| string }

foo
#=> ArgumentError: wrong number of arguments (given 0, expected 1)
bar.()
#=> ArgumentError: wrong number of arguments (given 0, expected 1)
baz.()
#=> nil
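A quick illustrative sketch of the return and next behaviour described above:
```
# `return` inside a lambda returns from the lambda itself.
lam = lambda { return 1 }
lam.call                  #=> 1

# `next` ends a standard Proc and supplies its value, much like a lambda's return.
prc = Proc.new { next 2 }
prc.call                  #=> 2

# `return` inside a standard Proc tries to return from the method that created it;
# once that method has finished, calling the proc raises LocalJumpError.
def make_proc
  Proc.new { return 3 }
end
make_proc.call rescue $!  #=> #<LocalJumpError: unexpected return>
```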
Since you are converting a method to a proc object, I am not sure why you would also want the behavior to change, as this could cause ambiguity and confusion. Please note that, for the same reason, you cannot go in the other direction either: lambda(&baz) does not result in a lambda, as mentioned here.
Now that we have explained all of this and why it shouldn't really be done, it is time to remember that nothing is impossible in Ruby, so this would technically work:
def foo(string)
  # place the assignment in the guard clause,
  # because the empty return will result in `nil`, a falsey value
  return unless processed = method(:breaker).to_proc.call(string)
  puts "This step should not be executed in the event of an error"
  processed
end

def breaker(string)
  begin
    string.upcase!
  rescue
    puts "Well you messed that up, didn't you?"
    return
  end
  string
end
Example
Edit: Fixed following toro2k's comment.
Range#include? and Range#cover? seem to be different, as can be seen in the source code (1, 2), and they differ in efficiency.
t = Time.now
500000.times do
  ("a".."z").include?("g")
end
puts Time.now - t # => 0.504382493

t = Time.now
500000.times do
  ("a".."z").cover?("g")
end
puts Time.now - t # => 0.454867868
Looking at the source code, Range#include? seems to be more complex than Range#cover?. Why can't Range#include? simply be an alias of Range#cover?, and what is the difference between them?
The two methods are designed to do two slightly different things on purpose. Internally they are implemented very differently too. You can take a look at the sources in the documentation and see that .include? is doing a lot more than .cover?
The .cover? method is related to the Comparable module, and checks whether an item would fit between the end points in a sorted list. It will return true even if the item is not in the set implied by the Range.
The .include? method is related to the Enumerable module, and checks whether an item is actually in the complete set implied by the Range. There is some finessing with numerics: Integer ranges are counted as including all the implied Float values (I'm not sure why).
These examples might help:
('a'..'z').cover?('yellow')
# => true
('a'..'z').include?('yellow')
# => false
('yellaa'..'yellzz').include?('yellow')
# => true
Additionally, if you try
('aaaaaa'..'zzzzzz').include?('yellow')
you should notice it takes a much longer time than
('aaaaaa'..'zzzzzz').cover?('yellow')
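And for the numeric finessing mentioned above, a quick illustrative check:
```
(1..10).include?(5.5)  # => true, numeric ranges are treated like cover?
(1..10).cover?(5.5)    # => true
```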
The main difference is that include? checks whether the object is one of the range's elements, while cover? returns whether the object lies between the edge elements. You can see that:
('a'..'z').include?('cc') #=> false
('a'..'z').cover?('cc') #=> true
date_range = {:start_date => (DateTime.now + 1.days).to_date, :end_date => (DateTime.now + 10.days).to_date}
date_range_to_check_for_coverage = {:start_date => (DateTime.now + 5.days).to_date, :end_date => (DateTime.now + 7.days).to_date}
(date_range[:start_date]..date_range[:end_date]).include?((DateTime.now + 5.days).to_date)
# => true
(date_range[:start_date]..date_range[:end_date]).cover?((DateTime.now + 5.days).to_date)
# => true
(date_range[:start_date]..date_range[:end_date]).include?(date_range_to_check_for_coverage[:start_date]..date_range_to_check_for_coverage[:end_date])
# => true
(date_range[:start_date]..date_range[:end_date]).cover?(date_range_to_check_for_coverage[:start_date]..date_range_to_check_for_coverage[:end_date])
# => false
Shouldn't the last line return true?
The reason I am asking is that RuboCop flags an offense when I use include? in place of cover?. And clearly, my logic (checking whether one range is included in another range) does not work with cover?.
There's a huge performance difference between cover? and include?, so take special care when using Date ranges.
For the reasons already explained: cover? just checks whether your argument is between the begin and the end of the range, whereas include? checks whether your argument is actually inside the range, which involves checking every single element of the range, not just the begin and end.
Let's run a simple benchmark.
date_range = Date.parse("1990-01-01")..Date.parse("2023-01-01");
target_date = Date.parse("2023-01-01");
iterations = 1000;
Benchmark.bmbm do |bm|
  bm.report("using include") { iterations.times { date_range.include?(target_date) } }
  bm.report("using cover") { iterations.times { date_range.cover?(target_date) } }
end
Results:
Rehearsal -------------------------------------------------
using include 5.466448 0.071381 5.537829 ( 5.578123)
using cover 0.000272 0.000003 0.000275 ( 0.000279)
---------------------------------------- total: 5.538104sec
user system total real
using include 5.498635 0.046663 5.545298 ( 5.557880)
using cover 0.000284 0.000000 0.000284 ( 0.000280)
As you can see, using #cover? is practically instantaneous; the result comes back in well under a millisecond.
Using #include?, however, takes almost 5.5 seconds for the same result.
Choose carefully.
I was testing some DB entries on our production server in the Rails console, where almost all the commands produced a huge number of lines of output, causing the SSH channel to hang.
Is there a way to suppress the console/irb screenfuls?
You can append ; nil to your statements.
Example:
users = User.all; nil
irb prints the return value of the last executed statement; thus in this case it'll print only nil since nil is the last executed valid statement.
While searching for a way to silence the irb/console output, I also found an answer at austinruby.com:
silence irb:
conf.return_format = ""
default output:
conf.return_format = "=> %s\n"
limit to eg 512 chars:
conf.return_format = "=> limited output\n %.512s\n"
Running the following within irb works for me:
irb_context.echo = false
irb --simple-prompt --noecho
--simple-prompt - Uses a simple prompt - just >>
--noecho - Suppresses the result of operations
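For illustration, a session started that way might look like this; only the explicit puts produces output:
```
$ irb --simple-prompt --noecho
>> 1 + 1
>> puts "only explicit output is shown"
only explicit output is shown
>>
```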
Here, add this to your ~/.irbrc:
require 'ctx'
require 'awesome_print'
module IRB
  class Irb
    ctx :ap do
      def output_value()
        ap(@context.last_value)
      end
    end
    ctx :puts do
      def output_value()
        puts(@context.last_value)
      end
    end
    ctx :p do
      def output_value()
        p(@context.last_value)
      end
    end
    ctx :quiet do
      def output_value()
      end
    end
  end
end

def irb_mode(mode)
  ctx(mode) { irb }
end
(Note: You must install the ctx gem first, though awesome_print is optional, of course.)
Now when you are on any console that uses irb, you can do the following:
Normal mode:
irb(main):001:0> { this:'is a complex object', that:[ { will:'probably'}, { be:'good to read' } ], in:{ some:{ formatted:'way'} } }
=> {:this=>"is a complex object", :that=>[{:will=>"probably"}, {:be=>"good to read"}], :in=>{:some=>{:formatted=>"way"}}}
...yep, just what you expect.
awesome_print mode:
irb(main):002:0> irb_mode(:ap)
irb#1(main):001:0> { this:'is a complex object', that:[ { will:'probably'}, { be:'good to read' } ], in:{ some:{ formatted:'way'} } }
=> {
:this => "is a complex object",
:that => [
[0] {
:will => "probably"
},
[1] {
:be => "good to read"
}
],
:in => {
:some => {
:formatted => "way"
}
}
}
...wow, now everything is printing out awesomely! :)
Quiet mode:
irb#1(main):002:0> irb_mode(:quiet)
irb#1(main):001:0> { this:'is a complex object', that:[ { will:'probably'}, { be:'good to read' } ], in:{ some:{ formatted:'way'} } }
irb#1(main):002:0>
... whoah, no output at all? Nice.
Anyway, you can add whatever mode you like, and when you're finished with that mode, just exit out of it and you'll be back in the previous mode.
Hope that was helpful! :)
Suppress Output, In General
Also, depending on your needs, have a look at using quietly or silence_stream for suppressing output in general, not just in the irb/console:
silence_stream(STDOUT) do
  users = User.all
end
NOTE: silence_stream was removed in Rails 5+.
NOTE: quietly will be deprecated in Ruby 2.2.0 and will eventually be removed. (Thanks BenMorganIO!)
More information can be found here.
Workaround for Rails 5+
As mentioned above, silence_stream is no longer available because it is not thread-safe, and there is no thread-safe alternative. But if you still want to use silence_stream, are aware that it is not thread-safe, and are not using it in a multithreaded manner, you can manually add it back as an initializer.
config/initializers/silence_stream.rb
# Re-implementation of `silence_stream` that was removed in Rails 5 due to it not being threadsafe.
# This is not threadsafe either so only use it in single threaded operations.
# See https://api.rubyonrails.org/v4.2.5/classes/Kernel.html#method-i-silence_stream.
#
def silence_stream(stream)
  old_stream = stream.dup
  stream.reopen(File::NULL)
  stream.sync = true
  yield
ensure
  stream.reopen(old_stream)
  old_stream.close
end
Adding nil as a fake return value to silence the output works fine, but I prefer to have some indication of what happened, and a simple count is often enough. A lot of the time that's easily done by tacking on a count call. So when I'm doing something to a bunch of Discourse topics, I don't want a printout of each topic object, so I add .count at the end of the loop:
Topic.where(...).each do |topic|
  ...
end.count
Same thing if I'm just assigning something:
(users = User.all).count
Silencing output altogether (or making it something static like nil) deprives me of useful feedback.
named_scope :with_country, lambda { |country_id| ...}
named_scope :with_language, lambda { |language_id| ...}
named_scope :with_gender, lambda { |gender_id| ...}
if params[:country_id]
  Event.with_country(params[:country_id])
elsif params[:language_id]
  Event.with_state(params[:language_id])
else
  ......
  # so many combinations
end
If I get both country and language then I need to apply both of them. In my real application I have 8 different named_scopes that could be applied, depending on the case. How can I apply named_scopes incrementally, or hold on to them somewhere and then apply them all in one shot?
I tried holding on to values like this
tmp = Event.with_country(1)
but that fires the SQL instantly.
I guess I could write something like
if !params[:country_id].blank? && !params[:language_id].blank? && !params[:gender_id].blank?
  Event.with_country(params[:country_id]).with_language(..).with_gender
elsif country && language
elsif country && gender
elsif language && gender
  .. you see the problem
Actually, the SQL does not fire instantly. Though I haven't bothered to look up how Rails pulls off this magic (now I'm curious), the query isn't fired until you actually inspect the result set's contents.
So if you run the following in the console:
wc = Event.with_country(Country.first.id);nil # line returns nil, so wc remains uninspected
wc.with_state(State.first.id)
you'll note that no Event query is fired for the first line, whereas one large Event query is fired for the second. As such, you can safely store Event.with_country(params[:country_id]) as a variable and add more scopes to it later, since the query will only be fired at the end.
To confirm that this is true, try the approach I'm describing, and check your server logs to confirm that only one query is being fired on the page itself for events.
Check Anonymous Scopes.
I had to do something similar, having many filters applied in a view. What I did was create named_scopes with conditions:
named_scope :with_filter, lambda{|filter| { :conditions => {:field => filter}} unless filter.blank?}
In the same class there is a method which receives the params from the action and returns the filtered records:
def self.filter(params)
  ClassObject
    .with_filter(params[:filter1])
    .with_filter2(params[:filter2])
end
Like that you can add all the filters using named_scopes and they are used depending on the params that are sent.
I took the idea from here: http://www.idolhands.com/ruby-on-rails/guides-tips-and-tutorials/add-filters-to-views-using-named-scopes-in-rails
Event.with_country(params[:country_id]).with_state(params[:language_id])
will work and won't fire the SQL until the end. (If you try it in the console, it will appear to run right away because the console echoes the results; in real life the SQL won't fire until the end.)
I suspect you also need to be sure each named_scope tests the existence of what is passed in:
named_scope :with_country, lambda { |country_id| country_id.nil? ? {} : {:conditions=>...} }
This will be easy with Rails 3:
products = Product.where("price = 100").limit(5) # No query executed yet
products = products.order("created_at DESC") # Adding to the query, still no execution
products.each { |product| puts product.price } # That's when the SQL query is actually fired
class Product < ActiveRecord::Base
  named_scope :pricey, where("price > 100")
  named_scope :latest, order("created_at DESC").limit(10)
end
The short answer is to simply shift the scope as required, narrowing it down depending on what parameters are present:
scope = Example

# Only apply parameters that are present and not empty
if (!params[:foo].blank?)
  scope = scope.with_foo(params[:foo])
end

if (!params[:bar].blank?)
  scope = scope.with_bar(params[:bar])
end

results = scope.all
A better approach would be to use something like Searchlogic (http://github.com/binarylogic/searchlogic) which encapsulates all of this for you.
I'm trying to pass a list of arguments to a backgroundrb worker.
The documentation says: MiddleMan.worker(:billing_worker).async_charge_customer(:arg => current_customer.id)
but that only works for a single argument. I tried these, but neither worked for me:
args => [1,2,3]
args => {:t=>"t", :r=> "r"}
Any ideas how to solve this?
What you are trying seems reasonable to me. I took a look at rails_worker_proxy.rb (from the GitHub source code). From reading the code, the async_* methods accept both :arg and :args:
arg,job_key,host_info,scheduled_at,priority = arguments && arguments.values_at(:arg,:job_key,:host,:scheduled_at, :priority)
# allow both arg and args
arg ||= arguments && arguments[:args]
# ...
if worker_method =~ /^async_(\w+)/
  method_name = $1
  worker_options = compact(:worker => worker_name, :worker_key => worker_key,
                           :worker_method => method_name, :job_key => job_key, :arg => arg)
  run_method(host_info, :ask_work, worker_options)
Can you share a code snippet? Have you added any debugging statements in your code and/or in the backgroundrb code itself? (I usually add a few puts statements and inspect things when things go wrong.)
Lastly, have you considered using delayed_job? It has more traction nowadays in the Rails community.
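If you do go the delayed_job route, multiple arguments are just ordinary method arguments. A rough sketch, assuming delayed_job is set up; the class and method names are made up:
```
class BillingWorker
  # Any class method can be deferred with delayed_job's `delay` proxy.
  def self.charge_customer(customer_id, customer_name)
    # ... do the actual billing work here ...
  end
end

# Enqueue the job; both arguments are serialized along with it.
BillingWorker.delay.charge_customer(current_customer.id, current_customer.name)
```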
Actually, the second method you've tried (args => {:t=>"t", :r=> "r"}) should work.
In your worker:
def charge_customer(arg)
  customer_id = arg[:customer_id]
  customer_name = arg[:customer_name]
  # do whatever you need to do with these arguments...
end
And then, you can call the worker like this:
MiddleMan.worker(:billing_worker).async_charge_customer(:arg => { :customer_id => current_customer.id, :customer_name => current_customer.name })
Basically, what you're doing here is passing a single Hash as the one argument the worker accepts. But since a Hash can contain multiple key-value pairs, you can access each of them individually inside your worker.