It seems that Group and channel_session can persist across multiple messages and consumers. How does Channels achieve that?
@channel_session_user_from_http
def ws_connect(message):
    # Add them to the right group
    message.channel_session['room'] = 'room name'
    Group("chat-%s" % message.user.username[0]).add(message.reply_channel)

@channel_session_user
def ws_disconnect(message):
    if 'room' in message.channel_session:
        print('room====', message.channel_session['room'])
    Group("chat-%s" % message.user.username[0]).discard(message.reply_channel)
I would like to set up a long-lived object, much like a global object, that is accessible to every consumer.
You can use the @channel_session (or @channel_session_user) decorator to achieve this. Make sure your object is serializable, then just add it to the user's channel session like this:
@channel_session_user_from_http
def ws_connect(message):
    message.channel_session['myobject'] = {'test': True}

@channel_session_user
def ws_message(message):
    print(message.channel_session['myobject'])  # should output {'test': True}
Alternatively, just use your DB or Redis to persist things as you normally would in Django:
import redis

# assumes the redis-py package with default connection settings
redis_conn = redis.StrictRedis(host='localhost', port=6379)

@channel_session_user_from_http
def ws_connect(message):
    redis_conn.set('my-persisted-key', "{'test': True}")

@channel_session_user
def ws_message(message):
    print(redis_conn.get('my-persisted-key'))  # should output "{'test': True}"
I am trying to implement a search with multiple attributes for Address in my Rails API.
I want to search by state, city, and/or street, but the user doesn't need to send all attributes; they can search only by city if they want.
So I need something like this: if a condition is present, filter by it; otherwise return all results.
Example:
search request: street = 'some street', city = '', state = ''
How can I use Rails' where method to return everything when some condition is nil?
I was trying something like this, but I know that || :all doesn't work; it's just to illustrate what I have in mind:
def get_address
  address = Address.where(
    state: params[:state] || :all,
    city: params[:city] || :all,
    street: params[:street] || :all)
end
Is it possible to do something like that? Or maybe there is a better way to do it?
This is a more elegant solution using some simple hash manipulation:
def filter_addresses(scope = Address.all)
  # permit takes only the keys we want;
  # reject drops values that are nil or blank (your example sends empty strings)
  filters = params.permit(:state, :city, :street).to_h.reject { |_, v| v.blank? }
  scope = scope.where(filters) if filters.any?
  scope
end
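In a controller this might be used like so (a sketch; the action name and JSON rendering are assumptions):

def index
  @addresses = filter_addresses
  render json: @addresses
end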
Once you're passing a column to where, there isn't an option that means "on second thought, don't filter by this". Instead, you can build the relation up progressively:
def get_address
  addresses = Address.all
  # .present? also guards against the empty strings in your example request
  addresses = addresses.where(state: params[:state]) if params[:state].present?
  addresses = addresses.where(city: params[:city]) if params[:city].present?
  addresses = addresses.where(street: params[:street]) if params[:street].present?
  addresses
end
I highly recommend using the Searchlight gem. It solves precisely the problem you're describing. Instead of cluttering up your controllers, pass your search params to a Searchlight class. This will DRY up your code and keep your controllers skinny too. You'll not only solve your problem, but you'll have more maintainable code too. Win-win!
So in your case, you'd make an AddressSearch class:
class AddressSearch < Searchlight::Search
  # This is the starting point for any chaining we do, and it's what
  # will be returned if no search options are passed.
  # In this case, it's an ActiveRecord model.
  def base_query
    Address.all # or `.scoped` for ActiveRecord 3
  end

  # A search method.
  def search_state
    query.where(state: options[:state])
  end

  # Another search method.
  def search_city
    query.where(city: options[:city])
  end

  # Another search method.
  def search_street
    query.where(street: options[:street])
  end
end
Then in your controller you just need to pass your search params into the class above:
AddressSearch.new(params).results
One nice thing about this gem is that any extraneous parameters will be scrubbed automatically by Searchlight; only the state, city, and street params will be used.
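For example, a search that sends only one attribute applies only the matching search method (a sketch; Searchlight skips blank or missing options):

AddressSearch.new(street: 'some street').results
# runs only search_street, i.e. Address.all.where(street: 'some street')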
In my Rails application I have a very expensive function that fetches a bunch of conversion rates from an external service once per day:
require 'open-uri'

module Currency
  def self.all
    @all ||= fetch_all
  end

  def self.get_rate(from_curr = "EUR", to_curr = "USD")
    all[from_curr][to_curr]
  end

  private

  def self.fetch_all
    hashes = {}
    CURRENCIES.keys.each do |currency|
      hash = JSON.parse(open(URI("http://api.fixer.io/latest?base=#{currency}")).read)
      hashes[currency] = hash["rates"]
    end
    hashes
  end
end
Is there a way to store the result of this function (a hash) to speed things up? Right now I am memoizing it in the instance variable @all, which speeds it up a little, but it is not persisted across requests. How can I keep it across requests?
Create a file, say currency_rates.rb, in config/initializers with the following code:
require 'open-uri'

hashes = {}
CURRENCIES.keys.each do |currency|
  hashes[currency] = JSON.parse(open(URI("http://api.fixer.io/latest?base=#{currency}")).read)["rates"]
end
CURRENCY_RATES = hashes
Then write the following rake task, scheduled to run daily:
task update_currency_rates: :environment do
  require 'open-uri'
  hashes = {}
  CURRENCIES.keys.each do |currency|
    hashes[currency] = JSON.parse(open(URI("http://api.fixer.io/latest?base=#{currency}")).read)["rates"]
  end
  # replace the constant defined by the initializer
  # (const_set lives on Object; removing it first avoids an "already initialized" warning)
  Object.send(:remove_const, :CURRENCY_RATES) if Object.const_defined?(:CURRENCY_RATES)
  Object.const_set('CURRENCY_RATES', hashes)
end
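To actually run the task daily you need a scheduler. A sketch using the whenever gem (the gem choice and the time are assumptions; plain cron works too):

# config/schedule.rb (whenever gem)
every 1.day, at: '4:00 am' do
  rake 'update_currency_rates'
end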
The only drawback is that the initializer will run every time you deploy a new version of your app or restart it; you can go with this approach if you're OK with that.
You can avoid it if you use a cache store such as memcached (e.g. via MemCachier); then you can do something like:
def currency_rates
  Rails.cache.fetch('currency_rates', expires_in: 24.hours) do
    # extract the fetching code above into a method and call it here;
    # the returned hash will be cached for 24 hours
  end
end
I would initialize the hash lazily, like this:
require 'open-uri'
require 'json'

module Currency
  def self.get_rate(from_curr = "EUR", to_curr = "USD")
    @memoized_result ||= {}
    @memoized_result.fetch(from_curr) do |not_found_key|
      data = JSON.parse(open(URI("http://api.fixer.io/latest?base=#{not_found_key}")).read)
      @memoized_result[not_found_key] = data["rates"]
    end[to_curr]
  end
end
I don't think you need all the exchange rates all the time, so you can speed things up by fetching only the required base currency at a time; over time all the rates end up cached in memory.
Whether this persists between requests depends on your server. Unicorn, for instance, uses multiple processes, and every process has its own
@memoized_result variable, which needs to be filled separately.
If you want to share this data between multiple processes or servers, then you need a store for the fetched data that can be shared between processes.
If you need a time-to-live for your entries, then I would tweak @Md. Farhan Memon's Rails cache hint like this:
def get_rate(from_curr = "EUR", to_curr = "USD")
  Rails.cache.fetch("currency_rates_#{from_curr}_#{to_curr}", expires_in: 24.hours) do
    data = JSON.parse(open(URI("http://api.fixer.io/latest?base=#{from_curr}")).read)
    data["rates"][to_curr]
  end
end
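Called repeatedly, only the first call per currency pair makes an HTTP request:

get_rate("EUR", "USD") # hits fixer.io, caches the rate for 24 hours
get_rate("EUR", "USD") # served from Rails.cache, no HTTP request

Note that each cache miss downloads the full rates table for the base currency but stores only one pair; caching the whole data["rates"] hash per base currency instead would avoid refetching for other target currencies.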
How can I speed up the search results generated from pinging the Last.fm API?
Here is the code we're working with:
def self.search(term)
  LastfmAPI.artist_search(term).map { |a| Lastfm::Artist.new(a) }
end

# Name and lastfm_id are synonyms
def name
  self.lastfm_id
end

def past_events(geo = nil, options = {})
  events = self.events.past
  lastfm_count = LastfmAPI.artist_getPastEvents_count(self.lastfm_id)

  # Check if database is current
  if events.count == lastfm_count # TODO: && the first event itself matches entirely
    # TODO: extract above comparison to method
    # return only those in the correct radius
    events = events.in_radius(geo) if geo.present?
  else
    # if not current, make array of Lastfm::Event objects from API call
    events = LastfmAPI.artist_getPastEvents_all(self.lastfm_id, lastfm_count).map do |e|
      Saver::Events.perform_async(e) # send to worker to save to database
      Lastfm::Event.new(e)
    end
  end

  events
end
When you're depending on external services, there is not much you can do to speed up the actual execution of their service. The best you can do is cache things locally in your own app so that you're not making the round trip as often.
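For example, a minimal sketch that wraps the search call above in Rails.cache (the cache key and one-hour TTL are assumptions; the raw API response is cached rather than the wrapper objects so it serializes cleanly):

def self.search(term)
  raw = Rails.cache.fetch("lastfm/artist_search/#{term}", expires_in: 1.hour) do
    LastfmAPI.artist_search(term)
  end
  raw.map { |a| Lastfm::Artist.new(a) }
end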
n00b question. I'm trying to loop through every User record in my database. The pseudocode might look a little something like this:
def send_notifications
  render :nothing => true
  # Randomly select Message record from DB
  @message = Message.offset(rand(Message.count)).first
  random_message = @message.content
  @user = User.all.entries.each do
    @user = User.find(:id)
    number_to_text = ""
    @user.number = number_to_text # number is a User's phone number
    puts @user.number
  end
end
Can someone fill me in on the best approach for doing this? A little help with the syntax would be great too :)
Here is the correct syntax to iterate over all users:
User.all.each do |user|
  # the code here is called once for each user,
  # which is accessible via the `user` variable
  # WARNING: User.all performs poorly with large datasets
end
To improve performance and decrease load, use User.find_each (see the docs) instead of User.all, which loads records in batches. Note that find_each batches by primary key, so you lose the ability to sort.
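A sketch of the same loop with find_each (batch_size is shown for illustration; 1000 is the default):

User.find_each(batch_size: 1000) do |user|
  puts user.number
end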
Also a possible one-liner for the same purpose:
User.all.map { |u| u.number = ""; puts u.number }
I am creating a REST API in Rails, using RSpec for testing. I'd like to minimize the number of database calls, so I would like to add an automated test that verifies the number of database queries executed as part of a certain action.
Is there a simple way to add that to my test?
What I'm looking for is some way to monitor/record the calls that are being made to the database as a result of a single API call.
If this can't be done with RSpec but can be done with some other testing tool, that's also great.
The easiest thing in Rails 3 is probably to hook into the notifications API.
This subscriber
class SqlCounter < ActiveSupport::LogSubscriber
  def self.count=(value)
    Thread.current['query_count'] = value
  end

  def self.count
    Thread.current['query_count'] || 0
  end

  def self.reset_count
    result, self.count = self.count, 0
    result
  end

  def sql(event)
    self.class.count += 1
    puts "logged #{event.payload[:sql]}"
  end
end

SqlCounter.attach_to :active_record
will print every executed SQL statement to the console and count them. You could then write specs such as:

expect do
  # do stuff
end.to change(SqlCounter, :count).by(2)
You'll probably want to filter out some statements, such as the ones starting or committing transactions, or the ones Active Record emits to determine the structure of tables.
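A sketch of that filtering inside the subscriber (the patterns are assumptions; adjust them to whatever statements you see logged):

IGNORED_SQL = [/^BEGIN/i, /^COMMIT/i, /^ROLLBACK/i, /^SAVEPOINT/i, /^RELEASE/i].freeze

def sql(event)
  return if event.payload[:name] == 'SCHEMA' # table-structure queries
  return if IGNORED_SQL.any? { |pattern| event.payload[:sql] =~ pattern }
  self.class.count += 1
end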
You may be interested in using explain, but that won't be automatic: you will need to analyse each action manually. Maybe that is a good thing, though, since the important thing is not the number of DB calls but their nature. For example: are they using indexes?
Check this:
http://weblog.rubyonrails.org/2011/12/6/what-s-new-in-edge-rails-explain/
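For instance, calling explain on any relation prints the database's query plan (the query here is hypothetical):

User.where(email: 'foo@example.com').explain
# prints the EXPLAIN output for the generated SELECT, including index usage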
Use the db-query-matchers gem.
expect { subject.make_one_query }.to make_database_queries(count: 1)
Fredrick's answer worked great for me, but in my case, I also wanted to know the number of calls for each ActiveRecord class individually. I made some modifications and ended up with this in case it's useful for others.
class SqlCounter < ActiveSupport::LogSubscriber
  # Returns the number of database "Loads" for a given ActiveRecord class.
  def self.count(clazz)
    name = clazz.name + ' Load'
    Thread.current['log'] ||= {}
    Thread.current['log'][name] || 0
  end

  # Returns a list of ActiveRecord classes that were counted.
  def self.counted_classes
    log = Thread.current['log']
    loads = log.keys.select { |key| key =~ /Load$/ }
    loads.map { |key| Object.const_get(key.split.first) }
  end

  def self.reset_count
    Thread.current['log'] = {}
  end

  def sql(event)
    name = event.payload[:name]
    Thread.current['log'] ||= {}
    Thread.current['log'][name] ||= 0
    Thread.current['log'][name] += 1
  end
end

SqlCounter.attach_to :active_record
Specs can then assert the count for a specific class (note that count now takes the model class as an argument):

expect do
  # do stuff
end.to change { SqlCounter.count(User) }.by(2)
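And counted_classes shows which models were loaded at all (a sketch; User and Address stand in for your models):

SqlCounter.reset_count
# do stuff
SqlCounter.counted_classes # => e.g. [User, Address]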