I'm working on the search feature for my app and I would like to give users some options for sorting the recipes they view. I'm trying to build a "popular" sort, but I'm not entirely sure where to start or whether I need to make any modifications to my schema. I've read this post http://sorentwo.com/2013/12/30/let-postgres-do-the-work.html and get the gist of what it's doing. My main question is: how do I track when a page is viewed and use that in the calculations? And how do I track views over time?
Also, when using things like comments as a weight, is it better to count the number of comments dynamically (query the comment table and add them up) or to keep a column in the recipe table that gets incremented whenever a comment is added?
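One common approach is a daily view-count table plus Rails' built-in counter cache for the comment weight. A rough sketch, assuming Rails; the recipe_views table, the viewed_on/comments_count columns, and the 5x comment weight are all made-up names and numbers, not anything from the linked post:

    # Migration: one row per recipe per day, so recent views can be
    # weighted more heavily. Names here are assumptions.
    class CreateRecipeViews < ActiveRecord::Migration
      def change
        create_table :recipe_views do |t|
          t.integer :recipe_id, null: false
          t.date    :viewed_on, null: false
          t.integer :count,     null: false, default: 0
        end
        add_index :recipe_views, [:recipe_id, :viewed_on], unique: true
      end
    end

    class Recipe < ActiveRecord::Base
      has_many :recipe_views
      has_many :comments
    end

    # Comment weight: a counter cache keeps comments_count on recipes
    # up to date automatically (the column must exist on recipes).
    class Comment < ActiveRecord::Base
      belongs_to :recipe, counter_cache: true
    end

    # Called from the controller action that renders a recipe:
    def track_view(recipe)
      view = RecipeView.where(recipe_id: recipe.id, viewed_on: Date.today)
                       .first_or_create
      RecipeView.where(id: view.id).update_all("count = count + 1")
    end

    # A "popular" ordering can then combine the two; the 5x weight is
    # an arbitrary example you would tune:
    Recipe.joins(:recipe_views)
          .where("recipe_views.viewed_on > ?", 30.days.ago.to_date)
          .group("recipes.id")
          .order("SUM(recipe_views.count) + recipes.comments_count * 5 DESC")

On the comments question: the counter-cache column is usually the better trade-off for ranking, since counting dynamically puts an extra aggregate into exactly the query you're trying to keep fast, and Rails maintains the column for you.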
I'm using Firebase database for my application, and basically people post new chapters of different series into the app.
So, I have one parent called "series" which hosts information about the different series:
Then, I have one parent called "chapters" which contains many different children that are the different series keys, and under them are many chapters (so the chapters are under each series).
However, I also want to have a section of the app where the user can view all newly added chapters across all different series, so I made a "latestReleases" parent, which automatically gets added to whenever a new chapter is added to "chapters."
However, the way I am currently displaying latestReleases is to add the entirety of "latestReleases" to an array and then sort by date. Although this works fine with a small number of chapters, there are now thousands of chapters, so there are thousands of entries in latestReleases, and it takes forever to load. There must be a better way to do this, correct? I feel like a better way would be to load only part of latestReleases, and then the user can choose to load more incrementally. Is this possible? How else would I be able to achieve this? Would I need to create several "latestReleases" parents that get updated automatically? Thanks!
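Loading incrementally is possible: Firebase queries can combine an ordering with a limit, and push keys sort chronologically, so you can page through latestReleases by key instead of downloading all of it. Here's a rough Ruby sketch against the Realtime Database REST API (the database URL, page size, and helper name are placeholders; the client SDKs expose the same query as orderByKey/limitToLast/endAt):

    require "net/http"
    require "uri"
    require "json"

    FIREBASE_URL = "https://your-app.firebaseio.com" # placeholder

    # Fetch one page of latestReleases, newest first. Push keys sort
    # chronologically, so ordering by $key is ordering by creation time.
    def latest_releases(page_size: 20, before_key: nil)
      limit = before_key ? page_size + 1 : page_size # endAt is inclusive
      params = { "orderBy" => '"$key"', "limitToLast" => limit }
      params["endAt"] = %("#{before_key}") if before_key
      uri = URI("#{FIREBASE_URL}/latestReleases.json")
      uri.query = URI.encode_www_form(params)
      releases = JSON.parse(Net::HTTP.get(uri)) || {}
      releases.delete(before_key)  # drop the overlapping boundary key
      releases.sort.reverse        # [[key, chapter], ...], newest first
    end

    first_page = latest_releases
    # "Load more": pass the oldest key of the previous page.
    next_page = latest_releases(before_key: first_page.last.first)

This way each "load more" transfers only one page of chapters, so there's no need for several latestReleases parents.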
I am currently working on an application in Rails (though language/framework shouldn't matter for this question since it is more of a theoretical one). I'm working on wrapping my head around this problem:
Say I am tracking millions of blogs online and am plugged into their RSS feeds. My app pings these feeds every few minutes to see if there has been any new activity across any of these millions of blogs. If there is any new activity, I want to alert users of my application who have signed up to receive alerts for specific blogs.
Does it make sense to have a user_blog_alerts table (where a user can specify custom keywords to be alerted about) and continuously check this table against every new entry that comes in from my feed? And when there is a match, to add them to a queue (using Redis)?
What is the best, most efficient way to build and model this alerting system? Am I even thinking about this in the right way? Are there any good examples or tutorials on this when working with such large amounts of data?
I'm not sure what the right way to do this is, but the thought of continuously scanning a table over and over sounds exhausting (i.e. unscalable).
Off the top of my head, what if you created a LIST for every blog in Redis. The values would be the user IDs of those who wanted an alert. The key name would contain the blog id (ex: "user_blog_alerts:12345").
Then when you got a new post for blog 12345 it's a simple lookup to see if that key exists. If it does, then fire off alerts for each user in the list.
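In Ruby with the redis gem, the whole flow is only a few lines (the key names follow the scheme above; the alert queue name is made up):

    require "redis"
    require "json"

    redis = Redis.new

    # Subscribing a user: append their id to the blog's list.
    def subscribe(redis, blog_id, user_id)
      redis.rpush("user_blog_alerts:#{blog_id}", user_id)
    end

    # New post detected for a blog: queue one alert per subscriber.
    # A missing key yields an empty list, so no existence check is needed.
    def alert_subscribers(redis, blog_id, post_url)
      redis.lrange("user_blog_alerts:#{blog_id}", 0, -1).each do |user_id|
        redis.rpush("alert_queue", { user_id: user_id, url: post_url }.to_json)
      end
    end

A Redis SET (SADD/SMEMBERS) would work the same way and also deduplicate subscribers for free, which a LIST won't.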
Apologies in advance, as I'm sure this topic has been asked before, but I couldn't find any post that answers my specific query.
Bearing in mind that I'm new to MVC this is where I have got to. I've got a project developed under VS 2010 using the MVC 3 framework. I've got a search page which consists of 6 fields and a nested model which itself holds around 3 fields.
I can successfully post all this data back to itself, and the data is successfully passed as a model and back again, so the fields keep the data which the user has supplied.
Before I move on to actually using this search criteria on another view, a thought hit me: I want to keep this search criteria, and possibly even the search results, in memory for the duration of the user's session.
The reasoning behind this is simply to save my users time by:
a) negating the need to keep re-inputting their search criteria regardless of how they enter or leave the search page
b) speed up the user experience by presenting the search results more quickly
The latter isn't as important as the first requirement.
I've done some Google searches and had a look through this site for similar topics. From what I've read, using sessions (which I would typically use if developing a PHP site) is a no-no. The reasons I've read as to why you shouldn't use sessions seem valid, and I'm happy to go along with that.
But now I'm left in a place where I'm scratching my head wondering to myself what exactly is best practice to achieve this simple goal that could be applied to similar situations later down the line in the project.
I also looked at the OutputCache method and that didn't behave as I expected it to. In a test I set the timeout for 30 seconds. After submitting a search I clicked the link to my search page to see if the fields would auto-populate, they didn't. But then clicking the search button the values in the cache were retrieved. I thought I was making progress but when I tried to submit a new value the old value from the cache came back i.e. I couldn't actually change my search criteria with the cache enforced. So I've discounted this as an avenue to explore.
The last option seems to suggest the use of cookies as the most likely candidate, but rightly or wrongly I feel this isn't the best solution. I would have thought the MVC 3 design pattern would have an easier and recommended method of persisting values. I'm sure there is but I've just not discovered it yet.
I have started to use jQuery, and again this has been mentioned, but I'm not sure this is the right direction to take either.
So in summary, my question really comes down to what is considered by the wider community as best practice for persisting data in my situation. Efficiency, scalability, and resiliency are paramount, as I'll have a large global user base that will end up using this web app.
Thanks in advance!
Pete
I'd just use cookies. They're simple to use, you can persist them for as long as you want or have them expire when the user closes their browser, and it doesn't sound like you are storing anything sensitive in them.
I have created a rails application where users can create and apply for jobs.
As you can imagine, many of these jobs come from various countries/cities and have different salaries, industries, etc. I would like to create a system that will allow my users to filter through all the options to find what they're most interested in.
I would like to use a combination of radio buttons and a salary slider bar (probably jQuery) in my view to select the results that show. I would then like the page to update without a full refresh (i.e. via AJAX) when the user hits a button called "filter results".
A good example of the kind of filtering system I would like to achieve can be seen at WIWT.com if you just click on their top filters button they have an excellent filtering system.
It would be great to know where to get started on this and whether there are any easy-to-use gems already out there. Also, if anyone could point me in the direction of a thorough tutorial, that would be great, as much of what I have found has been fairly incomplete and based around has_scope.
Thanks!
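For what it's worth, plain model scopes plus a remote form will get you most of the way without a gem; Ransack is the usual ready-made option if you'd rather not hand-roll it. A rough sketch (the model, column, and param names are invented):

    # app/models/job.rb
    class Job < ActiveRecord::Base
      # A scope whose block returns nil falls back to the unfiltered
      # relation, so blank params are simply ignored.
      scope :in_country,  ->(c) { where(country: c)  if c.present? }
      scope :in_industry, ->(i) { where(industry: i) if i.present? }
      scope :salary_between, lambda { |min, max|
        where(salary: min.to_i..max.to_i) if min.present? && max.present?
      }
    end

    # app/controllers/jobs_controller.rb
    class JobsController < ApplicationController
      def index
        @jobs = Job.in_country(params[:country])
                   .in_industry(params[:industry])
                   .salary_between(params[:salary_min], params[:salary_max])
        respond_to do |format|
          format.html
          format.js # index.js.erb swaps the results partial in place
        end
      end
    end

Point the radio buttons and slider at a form_tag(jobs_path, method: :get, remote: true); Rails submits it via AJAX and runs index.js.erb, which is exactly the no-refresh update you're describing. has_scope, which you've already run into, just automates the params-to-scope wiring in the controller.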
I'm running a Rails app on Postgres through Heroku.
I'd like to implement something similar to Facebook "likes" on my site for various items, such as user comments. What's the smartest way to store these in my database so that it's efficient and fast?
The obvious one is just to have a like join table between users and items, something like this:
user_id int
item_id int
item_type string
created_at datetime
However, when being displayed, this would mean that every time I pull an item, I would have to do a join across the entire likes table, which could get very big.
The obvious response for this would be to store a counter in the items, for their ongoing like count. However, this won't work because who liked an item matters, both for display next to the item, and also to hide the like button for things a user has already liked.
My plan is to add to all likable items a text field in which I would store a serialized array. That way, every pull of an item would come with the complete list of who liked it. Is there a better way to do this, or is this the recommended approach?
Do you have reason to believe that your dataset is going to be so large that the join is going to be too expensive? Postgres, while not as fast as the fastest RDBMSs out there, is pretty fast these days. I used to run a website that got millions of pageviews a day, and required some pretty complicated queries to generate each page. By doing a bit of simple caching we were able to run it on very modest hardware.
You give up a lot of the benefits of using an RDBMS when you denormalize. I would only do so if I knew I had to. And if that were the case I would consider using something else, like a simple key/value database, for that data. But I think that that's only likely to be the case for you if you have an awful lot of data.
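To make the join-table route concrete, here's a minimal Rails sketch; the Like model, the composite index, and the counter-cache column are my additions, not something from the question:

    class CreateLikes < ActiveRecord::Migration
      def change
        create_table :likes do |t|
          t.integer  :user_id,   null: false
          t.integer  :item_id,   null: false
          t.string   :item_type, null: false
          t.datetime :created_at
        end
        # One like per user per item, and an index that turns
        # "who liked this?" into an index lookup rather than a scan.
        add_index :likes, [:item_id, :item_type, :user_id], unique: true
      end
    end

    class Like < ActiveRecord::Base
      belongs_to :user
      # counter_cache maintains a likes_count column on each likable
      # table, so displaying the count never touches the join table.
      belongs_to :item, polymorphic: true, counter_cache: true
    end

    # "Hide the button if already liked" is a single indexed row check:
    Like.where(user_id: user.id, item_id: comment.id,
               item_type: "Comment").exists?

That covers both requirements from the question, the count and the who, without resorting to a serialized array.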