I have the following code:
list_entities = [{:phone => '0000000000', :name => 'Test', :"@i:type" => '1'}, {:phone => '1111111111', :name => 'Demo', :"@i:type" => '1'}]

list_entities.each do |list_entity|
  phone_contact = PhoneContact.create(list_entity.except(:"@i:type"))
  add_record_response = api.add_record_to_list(phone_contact, "API Test")
  if add_record_response[:add_record_to_list_response][:return][:list_records_inserted] != '0'
    phone_contact.update(:loaded_at => Time.now)
  end
end
This code takes an array of hashes and creates a new phone_contact for each one. It then makes an API call (add_record_response) to do something with that phone_contact. If the API call is successful, it updates the loaded_at attribute for that specific phone_contact, and then moves on to the next entry.
I am allowed roughly 7200 API calls per hour with this service; however, I'm only able to make about one API call every 4 seconds right now.
Any thoughts on how I could speed this code block up so it makes the API calls faster?
I would suggest using a thread pool. You define a unit of work to be done and the number of threads you want to process the work on; this way you get around the bottleneck of waiting for the server to respond to each request before starting the next one. Maybe try something like the following (disclaimer: it was adapted from http://burgestrand.se/code/ruby-thread-pool/):
require 'thread'

class Pool
  def initialize(size)
    @size = size
    @jobs = Queue.new
    @pool = Array.new(@size) do |i|
      Thread.new do
        Thread.current[:id] = i
        catch(:exit) do
          loop do
            job, args = @jobs.pop
            job.call(*args)
          end
        end
      end
    end
  end

  def schedule(*args, &block)
    @jobs << [block, args]
  end

  def shutdown
    @size.times do
      schedule { throw :exit }
    end
    @pool.map(&:join)
  end
end
p = Pool.new(4)

list_entities.each do |list_entity|
  p.schedule do
    phone_contact = PhoneContact.create(list_entity.except(:"@i:type"))
    add_record_response = api.add_record_to_list(phone_contact, "API Test")
    if add_record_response[:add_record_to_list_response][:return][:list_records_inserted] != '0'
      phone_contact.update(:loaded_at => Time.now)
    end
    puts "Job finished by thread #{Thread.current[:id]}"
  end
end

at_exit { p.shutdown }
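One caveat on top of that: 7200 calls per hour works out to roughly 2 requests per second, so once the pool removes the waiting bottleneck it can overshoot the limit. A crude way to stay under it, assuming the 4-thread pool above (the pause value is an assumption, not part of the original answer):

# Crude throttle: with 4 worker threads, pausing ~2 seconds after each call
# keeps overall throughput near 2 requests/second (7200/hour). Tune as needed.
PAUSE_PER_CALL = 2

list_entities.each do |list_entity|
  p.schedule do
    phone_contact = PhoneContact.create(list_entity.except(:"@i:type"))
    add_record_response = api.add_record_to_list(phone_contact, "API Test")
    if add_record_response[:add_record_to_list_response][:return][:list_records_inserted] != '0'
      phone_contact.update(:loaded_at => Time.now)
    end
    sleep PAUSE_PER_CALL # stay under the hourly API limit
  end
end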
So, I wrote a program that sends a GET request to HappyFox (a support ticket web app) and I get back a JSON file, Tickets.json.
I also wrote methods that parse the JSON and return a hash with the information I want, i.e. tickets with and without a response.
How do I integrate this with my Rails app? I want my HappyFox view (in Rails) to show the output of those methods, and give the user the ability to refresh the info whenever they want.
Ruby Code:
require 'httparty'

def happy_fox_call()
  auth = { :username => 'REDACTED',
           :password => 'REDACTED' }
  @tickets = HTTParty.get("http://avatarfleet.happyfox.com/api/1.1/json/tickets/?size=50&page=1",
                          :basic_auth => auth)
  tickets = File.new("Tickets.json", "w")
  tickets.puts @tickets
  tickets.close
end

puts "Calling API, please wait..."
happy_fox_call()
puts "Complete!"

require 'json'

$data = File.read('/home/joe/API/Tickets.json')
$tickets = JSON.parse($data)
$users = $tickets["data"][3]["name"]
Count each status in ONE method
def count_each_status(*statuses)
  status_counters = Hash.new(0)
  $tickets["data"].each do |tix|
    if statuses.include?(tix["status"]["name"])
      # puts status_counters # this is cool! Run this
      status_counters[tix["status"]["name"]] += 1
    end
  end
  return status_counters
end
Count tickets with and without a response
def count_unresponded(tickets)
  true_counter = 0
  false_counter = 0
  $tickets["data"].each do |tix|
    if tix["unresponded"] == false
      false_counter += 1
    else
      true_counter += 1
    end
  end
  puts "There are #{true_counter} tickets without a response"
  puts "There are #{false_counter} tickets with a response"
end
Make a function that creates a count of tickets by user
def user_count(users)
  user_count = Hash.new(0)
  $tickets["data"].each do |users|
    user_count[users["user"]["name"]] += 1
  end
  return user_count
end
puts count_each_status("Closed", "On Hold", "Open", "Unanswered",
"New", "Customer Review")
puts count_unresponded($data)
puts user_count($tickets)
Thank you in advance!
You could create a new module in your lib directory that handles the API call and JSON parsing, and include that module in whatever controller you want to interact with it from. From there it should be pretty straightforward to assign variables and display them however you wish.
https://www.benfranklinlabs.com/where-to-put-rails-modules/
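A minimal sketch of that layout, assuming HTTParty as in your script (the module, method, controller and ENV names here are illustrative, not from your code):

# lib/happy_fox.rb -- illustrative module; adapt names and credential handling
require 'httparty'
require 'json'

module HappyFox
  def self.tickets
    auth = { :username => ENV['HAPPYFOX_USER'], :password => ENV['HAPPYFOX_PASS'] }
    response = HTTParty.get(
      "http://avatarfleet.happyfox.com/api/1.1/json/tickets/?size=50&page=1",
      :basic_auth => auth
    )
    JSON.parse(response.body) # parse in memory instead of writing Tickets.json to disk
  end
end

# app/controllers/happy_fox_controller.rb -- hypothetical controller
class HappyFoxController < ApplicationController
  def index
    @tickets = HappyFox.tickets # reuse your counting methods on this hash here or in the view
  end
end

Refreshing is then just reloading the page (or hitting the action from a button), and the linked article covers how to get files in lib loaded by a Rails app.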
In my Rails app I am trying to fetch a number of currency exchange rates from an external service and store them in the cache:
require 'open-uri'

module ExchangeRate
  def self.all
    Rails.cache.fetch("exchange_rates", :expires_in => 24.hours) { load_all }
  end

  private

  def self.load_all
    hashes = {}
    CURRENCIES.each do |currency|
      begin
        hash = JSON.parse(open(URI("http://api.fixer.io/latest?base=#{currency}")).read) # what if not available?
        hashes[currency] = hash["rates"]
      rescue Timeout::Error
        puts "Timeout"
      rescue OpenURI::Error => e
        puts e.message
      end
    end
    hashes
  end
end
This works great in development but I am worried about the production environment. How can I prevent the whole thing from being cached if the external service is not available? How can I ensure ExchangeRate.all always contains data, even if it's old and can't be updated due to an external service failure?
I tried to add some basic error handling but I'm afraid it's not enough.
If you're worried about your external service not being reliable enough to keep up with caching every 24 hours, then you should disable the auto cache expiration, let users work with old data, and set up some kind of notification system to tell you if the load_all fails.
Here's what I'd do:
Assume ExchangeRate.all always returns a cached copy, with no expiration (this will return nil if no cache is found):
module ExchangeRate
  def self.all
    rates = Rails.cache.fetch("exchange_rates")
    UpdateCurrenciesJob.perform_later if rates.nil?
    rates
  end
end
Create an ActiveJob that handles the updates on a regular basis:
class UpdateCurrenciesJob < ApplicationJob
  queue_as :default

  def perform(*_args)
    hashes = {}
    CURRENCIES.each do |currency|
      begin
        hash = JSON.parse(open(URI("http://api.fixer.io/latest?base=#{currency}")).read) # what if not available?
        hashes[currency] = hash['rates'].merge('updated_at' => Time.current)
      rescue Timeout::Error
        puts 'Timeout'
      rescue OpenURI::Error => e
        puts e.message
      end

      if hashes[currency].blank? || hashes[currency]['updated_at'] < Time.current - 24.hours
        # send a mail saying "this currency hasn't been updated"
      end
    end
    Rails.cache.write('exchange_rates', hashes)
  end
end
Set the job up to run every few hours (4, 8, or 12, anything less than 24). This way the currencies load in the background, the clients always have data, and you will always know when currencies aren't updating.
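One way to schedule it, assuming the whenever gem (any cron-style scheduler works, and the 8-hour interval is only an example):

# config/schedule.rb (whenever gem) -- interval chosen for illustration
every 8.hours do
  runner "UpdateCurrenciesJob.perform_later"
end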
I am working on a Rails project where I generate Excel files from the User model and then upload them to Amazon S3. Everything so far works perfectly, but I also want to use delayed_job, and that is where the problem comes in:
when I call the method with delay, the job never completes, but I also don't get any error. The delayed job log shows this:
[Worker(host:pure pid:11063)] Starting job worker
[Worker(host:pure pid:11063)] Job Delayed::PerformableMethod (id=77) RUNNING
[Worker(host:pure pid:11063)] Job Delayed::PerformableMethod (id=78) RUNNING
[Worker(host:pure pid:11063)] Job Delayed::PerformableMethod (id=79) RUNNING
[Worker(host:pure pid:11063)] Job Delayed::PerformableMethod (id=80) RUNNING
It seems like the job is stuck and never finishes the process.
Here are all of my Methods:
User's Controller:
def export
  if params[:search]
    @users = User.all.order('created_at DESC')
  else
    @users = User.search(params[:search]).order('created_at DESC')
  end
  User.delay.export_users_to_xlsx(@users, current_user)
  redirect_to action: :index
end
User's Model:
def self.export_users_to_xlsx(users, user)
  file = User.create_excel(users)
  s3_upload = FileManager.upload_to_s3(file)
  link = s3_upload[:file].presigned_url(:get, expires_in: 10.days) if s3_upload
end

def self.create_excel(users)
  Spreadsheet.client_encoding = 'UTF-8'
  filename = "#{Rails.root}/tmp/#{ DateTime.now.strftime("%m%d%Y%H%M%S") }_users.xlsx"
  array = [["Name", "Balance", "State", "Address"]]
  users.each do |user|
    array.push([user.name, user.balance, user.state, user.address])
  end
  ExportXls.create_spreadsheet(array, filename)
end
and here is the lib for creating the spreadsheet:
class ExportXls
  def self.create_spreadsheet(content, filename)
    Spreadsheet.client_encoding = 'UTF-8'
    book = Spreadsheet::Workbook.new
    sheet1 = book.create_worksheet
    content.each_with_index do |row, index|
      row.each do |column|
        sheet1.row(index).push column
      end
    end
    book.write filename
    return filename
  end
end
And this is the method for uploading the file to S3, which is also in lib:
def self.upload_to_s3(temp_file)
  s3 = Aws::S3::Resource.new
  begin
    obj = s3.bucket(ENV['S3_BUCKET']).object(File.basename(temp_file))
    obj.upload_file(temp_file)
    File.delete(temp_file) if File.exists?(temp_file)
    { :result => 'success', :file => obj }
  rescue Aws::S3::Errors::ServiceError => error
    { :result => 'failed', :message => error.message }
  end
end
Any suggestions on why I can't get the delayed job to work?
I am using the delayed_job_active_record gem.
Sorry for my bad English!
I am using Puma server to achieve multithreading. Here is my controller:
class PhoneCallsController < ActionController::Base
  include ActionController::Live

  protect_from_forgery :except => :record_call

  # GET: Return messages to JavaScript pop-up
  def get_messages
    random_num = rand(65536)
    log "Start of get_messages, random id = #{random_num}"
    sleep 60
    #while session[:new_record_date] == session[:prev_record_date] do
    #  sleep 1
    #end
    session[:prev_record_date] = session[:new_record_date]
    render :status => :ok, :json => ['my message']
    log "End of get_messages, random id = #{random_num}"
  end

  # POST: Record a phone call. Called by PBX.
  def record_call
    log 'Start of record_call'
    data = Hash.from_xml(request.raw_post)
    pbx_data = data['PbxData']
    pc = PhoneCall.new({
      :command => pbx_data['cmd'].strip,
      :recipient_ip => pbx_data['RecipientIP'].strip,
      :phone_num => pbx_data['PhoneNum'].strip
    })
    pc.save!
    session[:new_record_date] = pc.created_at
    debug_text = "call received:\ncmd: #{pc.command}\nRecipientIP: #{pc.recipient_ip}\nPhoneNum: #{pc.phone_num}"
    log debug_text
    render :status => :ok, :text => debug_text
    log 'End of record_call'
  end

  # Test page to simulate PBX calls
  def call_test
  end

  # Test page to receive pop-up notifications
  def messages_test
  end

  private

  def log(msg)
    Rails.logger.info msg
    puts msg
  end
end
So, there are two main methods: get_messages is called by JavaScript and its task is not to return a result until a phone call arrives (the connection should be held open).
The second method, record_call, is called by a PBX script when a call is answered.
The problem is - when only
sleep 60
is in get_messages, then the record_call method can be called at any time and the whole application isn't locked.
But if I replace this sleep 60 with
while session[:new_record_date]==session[:prev_record_date] do
sleep 1
end
(as it should be), this while loop blocks the entire application.
What am I doing wrong? Why doesn't sleep block the application, while the same wait wrapped in a while loop (effectively an infinite loop here) locks it up?
I suspect that what I have isn't true multithreading and that sleep is treated in a special way.
I have a situation where I need to call something like this:
class Office
  attr_accessor :workers, :id

  def initialize
    @workers = []
  end

  def workers worker
    type = worker.type
    resp = Worker.post("/office/#{@id}/workers.json", :worker => {:type => type})
    worker = Worker.new()
    resp.to_hash.each_pair do |k,v|
      worker.send("#{k}=", v) if worker.respond_to?(k)
    end
    self.workers << worker
  end
end
Worker class
class Worker
  attr_accessor :office_id, :type, :id

  def initialize(options = {})
    @office_id = options[:office].nil? ? nil : options[:office].id
    @type = options[:type].nil? ? nil : options[:type].camelize
    if !@office_id.nil?
      resp = self.class.post("/office/#{@office_id}/workers.json", :worker => {:type => @type})
      @id = resp.id
      office = options[:office]
      office.workers = self
    end
  end

  def <<(worker)
    if worker
      type = worker.type
      resp = Worker.post("/office/#{office_id}/workers.json", :worker => {:type => type})
      debugger
      @id = resp.id
      resp.to_hash.each_pair do |k,v|
        self.send("#{k}=", v) if self.respond_to?(k)
      end
      debugger
      return self
    end
  end
end
I can do something like this just fine:
office = Office.new()
new_worker = Worker.new()
office.workers new_worker
But I need to do the same thing as above in the following way. Before that, I need to change the initialize method of Office so that it fires the def <<(worker) method on the worker instance.
class Office
  ...
  def initialize
    @workers = Worker.new
    @workers.office_id = self.id
  end
end

office = Office.new()
new_worker = Worker.new()
office.workers << new_worker
Now the problem is that this second implementation creates two instances of the worker. Why?
I'm not entirely sure, but I suppose you'd like to have this:
class Office
  attr_accessor :workers, :id

  # keep the plain reader generated by attr_accessor available under another name
  alias_method :return_worker_array, :workers

  def initialize
    @workers = []
  end

  def workers(worker = nil)
    unless worker
      return_worker_array
    else
      type = worker.type
      resp = Worker.post("/office/#{@id}/workers.json", :worker => {:type => type})
      worker = Worker.new()
      resp.to_hash.each_pair do |k,v|
        worker.send("#{k}=", v) if worker.respond_to?(k)
      end
      return_worker_array << worker
    end
  end
end
This way you can get rid of Worker#<< entirely, and you should also remove the line
office.workers = self
in Worker#initialize, since office.workers is supposed to be an array. It's a bad idea to change the type of an attribute back and forth (duck typing would be fine), because you are likely to lose track of its current state and will run into errors sooner or later.
To follow "Separation of Concerns", I would recommend doing the entire management of workers solely in Office; otherwise it gets confusing quickly and becomes much harder to maintain in the long run.
I'm not 100% certain why you aren't getting an error here, but since the last line of Office#workers is self.workers << worker, you are adding the new worker created inside Office#workers (on the third line of the method) and then returning the workers object, which then has #<< called on it again with the worker created outside the method.