My (travel-related) web app has a lot of photos, and many of my users (but not all) will have very slow internet connections. I want to give users the option to access the app from, for example, https://slim.example.com. In that version, inline photos would be replaced by a link that says "Click to view Photos - Niagara Falls", which opens the photos directly from my S3 photo bucket instead of embedding them. I would also want to skip certain ads that might be slow to load, and maybe paginate 10 objects per page instead of 20, etc.
Can you please give me an idea of how I can go about doing this?
The app is interactive. The users will be creating content, etc. So if I create two separate web apps they will need to communicate with each other and rely on the same models/databases.
EDIT:
def slim_mode
  if request.subdomain == 'slim'
    @slim = true
  end
end
In the ApplicationController, in a before_action, you can determine whether the user is in slim mode (by using request.subdomain) and save it in an instance variable.
You can also define a helper method in ApplicationController which has access to this instance variable and conditionally delegates to the image_tag helper or the link_to helper.
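For example, a minimal sketch of that idea (the detect_slim_mode and adaptive_image names are just placeholders I've made up, not anything Rails provides):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :detect_slim_mode

  # Make the flag available to views and helpers.
  helper_method :slim_mode?

  private

  def detect_slim_mode
    @slim_mode = (request.subdomain == 'slim')
  end

  def slim_mode?
    @slim_mode
  end
end

# app/helpers/application_helper.rb
module ApplicationHelper
  # Inline image normally; plain link straight to the S3 URL in slim mode.
  def adaptive_image(url, caption)
    if slim_mode?
      link_to "Click to view Photos - #{caption}", url
    else
      image_tag(url, alt: caption)
    end
  end
end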
I am creating a new application that allows users to create content either in the local application database or directly on Facebook.
Imagine the user has a CRUD interface for a Post. I have created a model for Post that subclasses ActiveRecord::Base. Objects of this class have methods for saving the post to the local database.
However, the user is also able to tick an option in my application that says "connect to Facebook". When it is checked, the content will not be stored in my local database but will go directly to the Facebook Graph API.
The service layer and controller layer are not aware of where the data actually goes. At least this is my idea.
My question is whether I can use the same Post class for both local data and Facebook data. Some of the methods in the Post class would not make sense when the post object contains data from Facebook, such as the save method.
Does that design seem stupid? Should I create another Post class that is just a plain Ruby class without subclassing ActiveRecord::Base? Or are there other, better ways?
When designing a class you should keep it as lean as possible. A good way to look at it is to count the nouns and verbs included in your model. A post can be saved or deleted, but if your save and delete start containing logic related to Facebook, that's a good sign this behaviour belongs in a different class altogether.
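To illustrate that split, here is a rough sketch only; it assumes something like the koala gem, and the class and method names are made up:

# A plain Ruby object that owns all Facebook-specific behaviour,
# so the ActiveRecord Post never needs to know about the Graph API.
class FacebookPost
  def initialize(access_token)
    @graph = Koala::Facebook::API.new(access_token)
  end

  # "Saving" here means publishing to the user's feed via the Graph API.
  def save(message)
    @graph.put_connections('me', 'feed', :message => message)
  end
end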
One more note related to Facebook: the new guidelines don't allow posting 'pre-written' posts for a user, meaning you won't be able to post to a user's wall without prompting them through Facebook either way.
I don't see any problems with having Post < ActiveRecord::Base - that is the standard Rails way and it does look like you should implement the standard ways of storing data to your DB and look into the Facebook posting from another angle.
There are some definite problems with that approach - the first is that you will end up with a lot of tight coupling to the Facebook API. I have had tons of grief from the FB APIs changing without warning or not working as documented.
Another issue is performance - loading data from FB is often painfully slow, even compared to reading from a DB across the wire. It is also prone to outages (at least in my experience).
I would definitely use "proxy objects" which are stored in your database and regularly hydrated from Facebook instead.
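For example, a very rough sketch (the column names, the koala-style graph client and the hydrate method are all assumptions):

class Post < ActiveRecord::Base
  # Assumed columns: content, facebook_id, synced_at

  # Refresh the local "proxy" copy from the Graph API,
  # e.g. from a scheduled background job.
  def hydrate_from_facebook!(graph)
    data = graph.get_object(facebook_id) # koala-style call, shown only as an example
    self.content   = data['message']
    self.synced_at = Time.now
    save!
  end
end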
I'm new to RoR and I can't seem to grasp how to structure my app.
I have an app that pulls data from Google Analytics using garb. After doing some number crunching with the data, the app will populate a Report model and display the report to the user.
Right now, I'm separating the Google Analytics logic using concerns. In my concerns folder, I have a GoogleAnalytics module that is responsible for pulling the data. The Report model includes the GoogleAnalytics module. Before the number crunching in the Report model takes place, I need to clean up and reformat the data. Should this be a responsibility of the GoogleAnalytics module or maybe a helper?
Is there a better practice for including third party services?
The reformatting should go in whatever is responsible for pulling the data from Google Analytics. None of the rest of your app should have to know the format in which Google Analytics returns its data - the module should convert it into a sensible, standard interface and hide all of that from everyone else.
I would also strongly consider putting this stuff into a service object rather than a module. Including modules gets messy because when you call a method on an object you don't know where that method is defined. I would only use this pattern if you were including the same module in lots of other models and it was a true DRY play.
A service object would look something like (depending on what params you need to use to pull the data):
class GoogleAnalyticsDataFetcher
  attr_accessor :data

  def initialize(ga_id)
    @ga_id = ga_id
  end

  def fetch
    @data = do_some_stuff
  end
end
and then you could call it either from your controller or wrap it up inside the Report model somewhere. Then when you call GoogleAnalyticsDataFetcher.new(id).fetch it's incredibly obvious what is going on and where it is defined.
In my application, I want to notify a user, when he/she is mentioned in a comment or a post.
The user handle is @user_name, similar to Facebook.
The database table for mentions looks like:
Mention
  mentioned_by:   user_id (foreign key)
  user_mentioned: user_id (foreign key)
  comment_id:     (foreign key)
  post_id:        (foreign key)
I can't figure out a way to implement it though. How do Facebook / Twitter do it?
What I decided to go with was to use ActiveRecord callbacks / the Observer design pattern: whenever a new comment/post is saved to the database, I go through the contents of the post/comment, look for any mentions, and then execute the notifications as required.
I get the feeling that there are some missing pieces and I am not getting it right.
Is this the best way of doing it?
Facebook and Twitter are not mid-size Rails applications. They are companies. The tech that runs them is distributed and mostly custom, especially in the case of Facebook.
The part that you seem to be grasping for is how they determine who to notify in a performant and scalable way. This is where shit gets real. You can find a lot of information about the architecture behind each one of them, and there is certainly a lot of great stuff to help you think about these things, but ultimately none of it is going to be something you implement into whatever application you're building.
http://www.quora.com/What-is-Facebooks-architecture
Facebook Architecture
http://www.infoq.com/news/2009/06/Twitter-Architecture
http://engineering.twitter.com/2010/10/twitters-new-search-architecture.html
Plenty more juicy details over at Quora.
Unfortunately, none of this gets you closer to your goal. I think the most realistic thing for you to do to start out with would be to simply tie in a service like Pusher to send messages to clients without worrying about it, and use an ActiveRecord Observer to add notifications to a background queue where the workers actually send those notifications to Pusher. This is a day or less of work and it should scale well to at least 10k notifications a day. Once you start having performance problems, ditch the Observer and Pusher for something like Goliath that can handle both of the jobs locally.
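A rough sketch of that Observer -> background queue -> Pusher flow (the class names, the channel name and the mentioned_users association are assumptions, and Resque stands in for whatever queue you pick):

class CommentObserver < ActiveRecord::Observer
  def after_create(comment)
    # Push the heavy work onto a background queue instead of doing it inline.
    Resque.enqueue(MentionNotificationJob, comment.id)
  end
end

class MentionNotificationJob
  @queue = :notifications

  def self.perform(comment_id)
    comment = Comment.find(comment_id)
    comment.mentioned_users.each do |user|
      # Deliver the notification to the client over Pusher.
      Pusher["user-#{user.id}"].trigger('mention', :comment_id => comment.id)
    end
  end
end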
Bottom line, learn what you can about large, experienced systems that have been put through their paces, but mainly just to see what problems they ran into and how they solved them. These methods aren't the same even among the big guys, and they are going to vary for each implementation.
Hopefully that helps point you in a good direction. :)
You can use ActiveRecord callbacks that run when a record is saved (like before_save, after_save or before_create, after_create) to go through the comment content, find and create all mention models, and save them to the db.
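A minimal sketch of that idea, assuming the column names from the question's Mention table and a simple @handle regex (find_by_username is also an assumption about the User model):

class Comment < ActiveRecord::Base
  belongs_to :user
  has_many :mentions

  after_create :create_mentions

  private

  # Pull every @handle out of the comment body and record a Mention per match.
  def create_mentions
    content.scan(/@(\w+)/).flatten.uniq.each do |handle|
      mentioned = User.find_by_username(handle)
      next unless mentioned
      mentions.create(:mentioned_by => user_id, :user_mentioned => mentioned.id)
    end
  end
end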
I actually am interested in a concrete answer to this myself. I don't know how Facebook and Twitter do it, but I do know from general searches that the gem acts-as-taggable-on could get the job done. Check out https://github.com/mbleigh/acts-as-taggable-on.
Also, this question on stackoverflow might also provide you with some info: Implementing twitter-like hashtag on rails
Good luck. I encourage you to try to get more attention to this question and get a more solid answer than what I've said. :]
Tumblr uses a Redis-backed queuing system (like Resque), I believe, to handle the volume.
Do a callback (as you mentioned) and hand it off to Resque. (There was a Railscasts episode about Resque recently.)
There is no single recommended approach for this. At a high level, you may want to look at Comet programming, polling and WebSockets [HTML5], and then choose the right combination. There are a couple of great implementations to manage push notifications in Rails: Orbited, Juggernaut, PusherApp, Faye, etc. You'll have to dig deep to figure out which of them use WebSockets and fall back to a Flash option for full browser support.
Faye gives a Node.js configuration also, but I am not sure about others.
Tentatively the steps would look something like:
Save the content - queue it to the parser.
Parse the content to find out the involved users - use Nokogiri or equivalent.
Comet/poll it to the involved users in the current_session as a separate process, if you're looking at a Twitter-like approach.
// Do other things with the Post record
Push notifications to the involved users, and destroy() them when they come online later.
Hope that gives some direction.
I know this question is outdated, but I recently released a MentionSystem gem to rubygems.org that lets you create mentions between mentioner and mentionee objects. It can also detect mentions in Facebook / Twitter style, like @username1 and #hashtag, in order to create the mentions.
The gem is hosted at github: https://github.com/pmviva/mention_system
Let's say you have a Post that can mention users in the form of @username.
You have the class
class Post < ActiveRecord::Base
  act_as_mentioner
end
and
class User < ActiveRecord::Base
  act_as_mentionee
end
Then you define a custom mention processor:
class PostMentionProcessor < MentionSystem::MentionProcessor
  def extract_mentioner_content(post)
    post.content
  end

  def find_mentionees_by_handles(*handles)
    User.where(username: handles)
  end
end
Then in your Posts controller create action you have:
def create
  @post = Post.new(params[:post])
  if @post.save
    m = PostMentionProcessor.new
    m.add_after_callback Proc.new { |post, user| UserMailer.notify_mention(post, user) }
    m.process_mentions(@post)
  end
  respond_with @post
end
If your post has @user1, @user2 and @user3 in its content, the mention processor will parse user1, user2 and user3, find the users with usernames [user1, user2, user3], and then create the mentions in the database. After each mention is created, it will run the after callback, which in the example sends an email notifying the user of the mention between the post and the user.
I'm using the garb gem to pull some basic stats, like pageviews, from Google Analytics. Everything's working correctly but I can't figure out the best way to test my API calls. Here's a pared-down version of my Analytics class:
class Analytics
  extend Garb::Model

  metrics :pageviews
  dimensions :page_path

  Username      = 'username'
  Password      = 'password'
  WebPropertyId = 'XX-XXXXXXX-X'

  # Start a session with google analytics.
  Garb::Session.login(Username, Password)

  # Find the correct web property.
  Property = Garb::Management::Profile.all.detect { |p| p.web_property_id == WebPropertyId }

  # Returns the number of pageviews for a given page.
  def self.pageviews(path)
    Analytics.results(Property, :filters => { :page_path.eql => path }).first.pageviews.to_i
  end

  # ... a bunch of other methods to pull stats from google analytics ...
end
Pretty simple. But beyond ensuring that the constants are set, I haven't been able to write effective tests. What's the best way to test something like this? Here are some of the problems:
I'd prefer not to actually hit the API in my tests. It's slow and requires an internet connection.
The stats obviously change all the time, making it difficult to set an expectation even if I do hit the API when testing.
I think I want a mock class? But I've never used that pattern before. Any help would be awesome, even just some links to get me on the right path.
Fakeweb is a good place to start. It can isolate your SUT from the network so that slow connections don't affect your tests.
It's hard to know what else to say without knowing more about Garb. Obviously you'll need to know the format of the data to be sent and received from the API, so you can make the appropriate mocks/stubs.
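For example, something like this in your test setup (the ClientLogin URL, the URL pattern and the fixture file are assumptions about what garb actually requests):

require 'fakeweb'

# Refuse any real network access during tests.
FakeWeb.allow_net_connect = false

# Canned response for the session login garb performs.
FakeWeb.register_uri(
  :post,
  'https://www.google.com/accounts/ClientLogin',
  :body => 'Auth=fake-token'
)

# Canned report data for anything hitting the Analytics API.
FakeWeb.register_uri(
  :get,
  %r{\Ahttps://www\.googleapis\.com/analytics},
  :body => File.read('spec/fixtures/ga_pageviews.xml')
)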
I would suggest creating a testing interface that mimics the actual calls to the Google API. The other option would be to use mocks to create sample data.
I agree that it's best not to hit the actual API, since this does not gain you anything. A call to the actual API might succeed one day and fail the next because the API owners change the response format. Since GA probably won't change its versioned API, I think it's safe to create an interface that you can use in your test environments for faster testing.
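A stub-based sketch of that (old-style RSpec syntax; it assumes the class-level Garb::Session.login / Profile.all calls are already isolated, e.g. with FakeWeb, or moved out of the class body):

describe Analytics do
  it 'returns pageviews as an integer' do
    # Fake result row shaped like what Analytics.results is expected to return.
    fake_row = double('row', :pageviews => '42')
    Analytics.stub(:results).and_return([fake_row])

    Analytics.pageviews('/some/path').should == 42
  end
end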