Cache connection in Rails using graphql and graphql-fragment_cache

I have a Rails app using the graphql-ruby and graphql-fragment_cache gems, and a query for pages:
field :all_pages, PageType.connection_type, null: true
When I resolve and cache it using the graphql-fragment_cache gem, all the pages get cached together, which of course makes sense.
def all_pages
  cache_fragment {
    chain = Page
    chain = scope_status(chain, context[:stage])
    chain = lookahead_for_collection(chain, lookahead)
    chain = search_records(chain, search)
    chain = order_records(chain, order)
    chain.all
  }
end
I am using the API in a Gatsby website, so whenever there is an update in my Rails app a webhook gets posted to Gatsby Cloud to rebuild the site. This rebuild fetches all pages with all available fields, which is very heavy because there are a lot of them.
Is there an option to cache all pages but only invalidate a single one (similar to Rails.cache.fetch_multi), so that every page that did not change is served from the cache and only the changed one is rebuilt? This would speed up my app significantly.
Thank you!
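(For reference, the kind of per-record invalidation being asked about looks roughly like the following with plain Rails.cache.fetch_multi. This is only a sketch of the pattern, not a drop-in GraphQL resolver: serialize_page is a hypothetical per-page serializer, and whether graphql-fragment_cache offers a per-object equivalent would need checking against the gem's documentation.)

# Each page is cached under its own key; cache_key_with_version changes whenever
# that page is updated, so only the changed page gets rebuilt on the next request.
pages = Page.all
keys  = pages.index_by(&:cache_key_with_version)

cached = Rails.cache.fetch_multi(*keys.keys) do |key|
  serialize_page(keys[key]) # hypothetical per-page serializer
end

result = pages.map { |page| cached[page.cache_key_with_version] }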

Related

Heroku timing out when generating JSON needed for a view in a Ruby on Rails application

My view requires a JSON document that is generated in the controller, and the JSON takes a good amount of time to generate for larger information sets. My problem is that Heroku has a 30-second request limit, and for larger data sets the request times out.
I have already implemented Redis To Go and Sidekiq, which has helped me with different features of the application.
The show view is also paginated when there are larger data sets. Not sure what my best option is here.
I am thinking of creating an intermediate page while generating the JSON asynchronously with Sidekiq, alerting the user when the JSON is created, and then letting the user go to the show view once it is loaded. That seems like a good option, but then I think I'll have an issue with the pagination, for which I am using the Kaminari gem.
Like I said, I already use Sidekiq, so I think this will be my best option.
def show
  # @salesforce = true if @account.Name != nil
  @audit_report = audit_report
  @measures_updated = MeasuresUpdated.new(audit_report: @audit_report)
  @measures_updated = @measures_updated.check_measures_updated
  @page_title = "Report based on \"#{@audit_report.name}\""
  @context = ShowAuditReportContext.new(
    user: current_user,
    audit_report: @audit_report).audit_report_as_json
  @context_measure_selections = @context[:audit_report][:measure_selections]
  @context_measure_selections_array = @context[:audit_report][:measure_selections].to_a
  @paginatable_array = Kaminari.paginate_array(@context_measure_selections_array).page(params[:page]).per(15)
end
This is in the controller. The @context is the issue: there are a lot of calculations in the backend, spread across different classes, that take a very long time.
I expect the user to be able to visit the show page without timing out due to server restrictions.
You might want to think of this long JSON document as more like a report that the user is generating, rather than a view the user is requesting.
Paginating a JSON document will result in invalid JSON because opening and closing curly braces will (probably) not match up. So, perhaps it is more appropriate for the user to download the document. You could inform the user in the view of the location URL at which the document will be made available.
@filename = "Report#{@audit_report.id}.json"
CreateJsonReportJob.perform_async(@filename, @audit_report.id, current_user.id)
Then your Sidekiq job writes the file to that location:

json = ShowAuditReportContext.new(
  user: current_user,
  audit_report: audit_report
).audit_report_as_json

File.open(filename, "w+") do |f|
  f.write(json)
end
To get fancier, you could create a table of reports that the user can view and manage. You could use Active Storage to manage the attachments.

How do I cache external API responses per user in Rails?

I have a Rails 4.1 application with a bookings section. In this section I have multiple sub-sections such as hotels, vacation rentals, etc.
In each of these sections I am fetching data from the relevant APIs, which the user can sort and filter.
I would like to cache the response per user so that once I have the data, filtering and sorting are quick, and time is not wasted on another trip to the external site.
However, one issue is that users can do this even without logging in.
I have set up a memcached store which caches the data fine for the first user, but when a second user comes along the first user's data gets overwritten.
How can I cache data per user (logged in or not)? (I am willing to change the cache store, provided I don't have to spend anything extra for it to work.)
Rails actually supports per-user "caches" out of the box: it's called the session. By default Rails uses cookies as storage, but you can swap the storage to Memcached or Redis to bypass the ~4 kB size limit browsers impose on cookies.
To use memcached as the session storage backend, use the Dalli gem and point the Rails session store at the cache:

# Gemfile
gem 'dalli'

# config/environments/production.rb
config.cache_store = :dalli_store
config.session_store :cache_store # store sessions in the Rails cache (memcached via Dalli)
This lets you store pretty much anything with:
session[:foo] = bar
and the session values can be fetched as long as the user retains the cookie containing the session id.
An alternate approach, if you want to keep the performance benefits of using CookieStore for sessions, is to use the session id as part of the key used to cache the request in Memcached, which gives each user an individual cache.
You can get the session id by calling session.id in the controller.
require 'dalli'

options = { :namespace => "app_v1", :compress => true }
dc = Dalli::Client.new('localhost:11211', options)

cache_key = 'foo-' + session.id
@data = dc.fetch(cache_key) do
  do_some_api_call
end
You can do this with view fragments and the regular Rails low-level cache as well. Just note that models are not session-aware.
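For completeness, the same idea with the regular low-level cache might look like this; the "hotels" key prefix, the query parameter, and the expiry are illustrative, not from the question.

# Low-level cache keyed by the session id, so every visitor (logged in or not)
# gets their own copy of the API response.
def hotel_results
  Rails.cache.fetch("hotels/#{session.id}/#{params[:query]}", expires_in: 30.minutes) do
    fetch_hotels_from_api(params[:query]) # hypothetical API client call
  end
end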
See:
http://www.justinweiss.com/articles/how-rails-sessions-work/
http://guides.rubyonrails.org/caching_with_rails.html

Create dynamic sitemap from URL with Ruby on Rails

I am currently working on an application where I scrape information from a number of different sites. To get the deep link for the desired topic on a site (e.g. "Forum"), I rely on the sitemap that the site provides. As I expand, I have come across some sites that don't provide a sitemap themselves, so I was wondering whether there is any way to generate one within Rails, starting from the top-level domain?
I am using Nokogiri and Mechanize to retrieve data, so if either has any functionality that could help tackle this task, it would be easier to integrate.
This can be done with the Spidr gem like so:
url_map = Hash.new { |hash, key| hash[key] = [] }

Spidr.site('http://intranet.com/') do |spider|
  spider.every_link do |origin, dest|
    url_map[dest] << origin
  end
end
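Since the question already uses Nokogiri, the crawled URLs could then be written out as sitemap XML; a rough sketch (the output file name is arbitrary):

require 'nokogiri'

# Turn the crawled URLs collected in url_map into sitemap XML
builder = Nokogiri::XML::Builder.new do |xml|
  xml.urlset(xmlns: 'http://www.sitemaps.org/schemas/sitemap/0.9') do
    url_map.each_key do |page_url|
      xml.url { xml.loc page_url.to_s }
    end
  end
end

File.write('sitemap.xml', builder.to_xml)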

Multiple GET requests in Rails?

I'm developing an application, and on one page it requires approximately 12-15 GET requests to be made to an API in the background. My original intent was to make the requests using AJAX from jQuery, but it turns out that it is impossible to do so with the Steam Web API I am using.
Doing this in the Rails controller before the page loads is, for obvious reasons, very slow.
After I get the data from the API, I parse it and send it to the JavaScript using gon. The problem is that I don't know how to get and set the data after the page renders.
Here is what my controller would look like:
def index
  @title = "My Stats"
  if not session.key?(:current_user) then
    redirect_to root_path
  else
    gon.watch.winlossdata = GetMatchHistoryRawData(session[:current_user][:uid32])
  end
end
The GetMatchHistoryRawData function is a helper that makes the GET requests.
Using the whenever gem (possibly; see below)...
Set a value in a queue database table before rendering the page. Using a cron task (via the whenever gem) that monitors the queue table, you can make the requests to the Steam API and populate a queue result table. On the rendered page you can then run a periodic JavaScript/AJAX check against the queue result table and populate the page once the API returns a result.
Additional info:
I have not used the whenever gem yet, but from some further reading there might be an issue with the interval not being short enough to get close to real time. I currently do my job processing with a Java application that implements a timer, but I have wondered about moving to whenever and cron. So whenever might not work for you, but the gist of my answer is an asynchronous processor doing the work of contacting the API. If the payload from the Steam API is small and returned fast enough, then, as stated above, you could use a direct AJAX call to the controller and from there to the Steam API.
Regarding the Rails code, it should be pretty much standard.
Controller:

def index
  # Create a Steam API queue row in the database and save any pertinent
  # information needed for contacting the Steam API
  @unique_id = Model.id # some unique id created for the Steam API queue row
end

# AJAX calls START
def get_api_result
  # Check for a result using params[:unique_id]
  # render partial for <div>
end
# AJAX calls end

View: index

# Display your page
# Set up an intermittent AJAX call to "controller#get_api_result" with some unique
# id (#{@unique_id}, i.e. params[:unique_id]) to identify the Steam API queue table
# row, and populate the result of the call into a <div>
External processor code (whenever gem, Java implementation, some job processor, etc.):
Multiple threads should be available to process the Steam API queue table every few seconds, retrieve results, and populate the result table that the controller reads when requested via the AJAX call.
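A very rough sketch of what that external processor could look like as a Rails rake task scheduled with whenever; every model name and the SteamApi helper here are placeholders, not part of the question.

# lib/tasks/steam_queue.rake
namespace :steam do
  desc "Process pending Steam API queue rows and store the results"
  task process_queue: :environment do
    SteamApiQueue.where(processed: false).find_each do |row|
      data = SteamApi.match_history(row.uid32) # stand-in for the real Steam Web API call
      SteamApiResult.create!(queue_id: row.id, payload: data.to_json)
      row.update!(processed: true)
    end
  end
end

# config/schedule.rb (whenever gem); note cron granularity bottoms out at one minute
# every 1.minute do
#   rake "steam:process_queue"
# end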
Giving a complete example of this type of implementation would take some time, so I have briefly outlined it above at the conceptual level. There might be other, more efficient ways to do this as the technology evolves, so please do some investigation.
I hope this is helpful!

How should I cache Facebook call data in my database so Rails can use it locally?

I want to start out by generating a model to hold this information:
rails g model UserData UID:string birthday:string likes:string location:string \
  activities:string books:string movies:string music:string tv:string \
  interests:string post_count:string friend_count:string
Then I rake the DB.
Then I create the file $RAILS_ROOT/jobs/update_facebook.rb:
config = YAML::load(File.open("#{RAILS_ROOT}/config/facebook.yml"))
app_id = config['app_id']
fetching_app = FbGraph::Application.new(app_id)
access_token = fetching_app.get_access_token(config['production']['client_secret'])
# Don't know what else to put in here
This form would be a bit redundant... how would you know when to use the number of likes from the database instead of going to the API to fetch it? You'd probably query the API way more often than you need to, and querying the API is bad because it's slow.
I haven't implemented this kind of caching in my app yet, but as I understand it, the optimal way to do this is:
you cache all API responses
you subscribe to the object you're tracking and implement the callback handling for Facebook real-time updates (http://developers.facebook.com/docs/reference/api/realtime/)
with this, Facebook will ping your app (in the way described in the link) when there are changes to the object
you drop the cached API response when Facebook notifies you of the object change, so your app only hits the API when there are changes
That's the process, from what I understood... I'm sure implementing the Facebook real-time stuff will make any app a tad faster.
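As an illustration of the "drop the cache when Facebook pings you" step, the callback endpoint might look something like this; the route, payload fields, and cache key are assumptions, so check them against the real-time updates docs linked above.

class FacebookRealtimeController < ApplicationController
  skip_before_action :verify_authenticity_token

  # Facebook POSTs a change notification here when a subscribed object changes
  def update
    payload = JSON.parse(request.body.read)
    Array(payload["entry"]).each do |entry|
      # Drop the cached Graph API response for the user that changed,
      # so the next read goes back to the API for fresh data.
      Rails.cache.delete("facebook_data/#{entry['uid']}")
    end
    head :ok
  end
end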
This one was simpler than I made it out to be. Just added columns to the User model with a migration:
rails g migration AddDetailsToUsers likes:string books:string etc.
Then just rake the migration, and add the following to the controller:
@graph = Koala::Facebook::GraphAPI.new(current_user.token)
current_user.likes = @graph.get_connections("me", "likes")
# Ditto for the other permissions
current_user.save
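One caveat: get_connections returns an array of hashes rather than a string, so storing it in a string/text column needs some encoding. A minimal sketch, assuming the columns are text and a reasonably recent ActiveRecord:

# app/models/user.rb
class User < ActiveRecord::Base
  # Encode the array returned by the Graph API as JSON when saving the column
  serialize :likes, JSON
end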
