Is there a plugin or a gem that I can use for this? I was thinking about just writing to a table whenever a view is called in the controller. Is this the best way? I see Stack Overflow has this functionality; how do they do it?
Google Analytics - Let Google or some other third-party analytics provider handle it for you for free. I don't think you want to do file writes on every page load - potentially costly. Another option is to store the information in memory and write to the database periodically instead of on every page load.
[EDIT] This is an interesting question. I asked for help on the issue of what's more efficient - db writes vs. file writes - and there's some good feedback there too.
If you just want to get something in there easily, you could use a real-time analytics provider like W3 Counter.
It gives you real-time data (as opposed to Google Analytics) and is relatively simple to deploy (a few lines in your global template), but it may not give you the granularity that you want. I guess it depends on whether you want this information programmatically, to display/use in the app, or for statistical purposes.
Obviously, there are third-party statistics services (Google Analytics, Mint, etc.), but if you must do it yourself, then doing a write each time someone hits a page will seriously impact your DB.
I'd write individual hits to an intermediate file on the filesystem or to memcached, then fire a task every 10-15 minutes that parses that data and inserts it into the database.
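A minimal sketch of that idea in Python (used here only to illustrate the buffering logic, since the question is about Rails; the flush interval and the save_counts_to_db helper are hypothetical placeholders):

```python
import threading
from collections import Counter

_hits = Counter()              # page_id -> number of hits since the last flush
_lock = threading.Lock()
FLUSH_INTERVAL_SECONDS = 600   # flush roughly every 10 minutes

def record_hit(page_id):
    """Called from the controller action on each page view; memory-only, no I/O."""
    with _lock:
        _hits[page_id] += 1

def save_counts_to_db(counts):
    # Hypothetical placeholder: a real app would run batched updates here,
    # e.g. UPDATE pages SET views = views + :n WHERE id = :page_id
    for page_id, n in counts.items():
        print(f"page {page_id}: +{n} views")

def flush_hits():
    """Persist the buffered counts, reset the buffer, and reschedule itself."""
    with _lock:
        counts = dict(_hits)
        _hits.clear()
    if counts:
        save_counts_to_db(counts)
    threading.Timer(FLUSH_INTERVAL_SECONDS, flush_hits).start()

flush_hits()   # kick off the periodic flush
```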
Related
I want to store data about all page views on my site in InfluxDB, and later I will need to get some analytics from it. For example, "How many views does post X have?"
The problem, as I understand it, is that if I have a page_views measurement with a url tag that is highly dynamic, I am going to run into memory problems because of the high series cardinality.
Right now I store this data in MySQL, but I am starting to face some problems, as I sometimes need to store 1,000+ records per second.
So is InfluxDB the right tool for this kind of job? Is there some smarter way of storing each page view without tagging the url?
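For context, this is roughly what the two options look like with the influxdb Python client. The trade-off is that a tag is indexed (so a highly dynamic url creates one series per distinct url, which is what drives up memory), while a field keeps cardinality low but cannot be grouped on efficiently. Host, database, and measurement names below are just illustrative assumptions.

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="analytics")

# Option 1: url as a tag - indexed and easy to query, but every distinct url
# becomes its own series, which is where the memory problem comes from.
url_as_tag = {
    "measurement": "page_views",
    "tags": {"url": "/posts/42"},
    "fields": {"count": 1},
}

# Option 2: url as a field - series cardinality stays low, at the cost of
# slower per-url queries (field values are not indexed).
url_as_field = {
    "measurement": "page_views",
    "tags": {"site": "blog"},                     # keep only low-cardinality tags
    "fields": {"url": "/posts/42", "count": 1},
}

client.write_points([url_as_field])
```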
I would like to sync my DB (tasks in my DB, each with a description, a date, a start time, an end time, and a user) with Google Calendar.
For the sync with Google I plan to use these components (of course I could write the whole thing on my own, but that is something I can plan for the future; right now I am short of time - or alternatively, can you suggest some working code that connects to Google Calendar to send/receive data?).
My main problem is not really linked to Delphi programming, but I must ask it as a Delphi-related question because my other questions go unviewed (like this one I asked).
So I wonder how to do the sync. Note: I do a one-way sync, and the generated calendar will be a read-only calendar.
I can set a maximum range in the past and future to be synced (for example, 10 days in the past and 100 in the future). Then the idea I have is this:
When I start the sync app I completely read the Google Calendar items in the range, compare them one by one with what I have in the DB, and then "merge" the changes. Then, on a timer, I check for differences in my DB and upload the changes.
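A rough sketch of that merge step, written in Python only to outline the logic (the actual calls would go through the Google Calendar components mentioned above; the task and event shapes are hypothetical):

```python
def plan_sync(db_tasks, google_events):
    """Given DB tasks and the Google events already in the synced range,
    both as dicts keyed by the task id, return what must be pushed to Google.
    One-way sync: the DB is the source of truth, the calendar is read-only."""
    inserts = [t for tid, t in db_tasks.items() if tid not in google_events]
    updates = [t for tid, t in db_tasks.items()
               if tid in google_events and t != google_events[tid]]
    deletes = [e for tid, e in google_events.items() if tid not in db_tasks]
    return inserts, updates, deletes

# Example with made-up data: task 2 changed, task 3 is new, event 9 no longer exists in the DB.
db_tasks = {1: {"title": "Call client"}, 2: {"title": "Send offer v2"}, 3: {"title": "New task"}}
google_events = {1: {"title": "Call client"}, 2: {"title": "Send offer"}, 9: {"title": "Old task"}}
print(plan_sync(db_tasks, google_events))
```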
But I am not sure that this is the best solution.
A simplification of the real case is this: imagine it is a CRM with some tasks assigned to every user. Since there is logic behind every task, I want to manage that logic only in my application, but the idea of publishing the calendar to Google is that it is then easily available from any mobile device. This is why it is a one-way sync. I could also let the calendar not be read-only; in that case, at every sync I would "download" the newly inserted tasks but ignore the deleted and edited ones. In this second case it is not enough to track changes in the DB; I would also have to track changes on Google, at least to "intercept" the newly added tasks.
I am aware this is a generic question, but I would like to trigger an answer that can be useful, either redirecting me to a sync algorithm, to Delphi sample code, or to anything that can help me progress on this issue. Thanks.
Google: "calendar sync algorithms"
https://wiki.mozilla.org/Calendar:Syncing_Algorithm
http://today.java.net/pub/a/today/2007/01/16/synchronizing-web-client-database.html
Synchronisation algorithms
The last one is actually funny because it leads right back to Stack Overflow ;) Point is: I think there is no need to reinvent the wheel. PS: The first link contains some useful thoughts similar to yours.
I have a project which provides users with a list of current tasks that need to be completed. Any user can complete any task, so to ensure that only one user is working on a task at a time, I need to be able to 'lock' it. I'm using SignalR for this, so a user requests a lock on a task, and if they are successful (i.e. if no one else has locked it) they will be able to access the further information that they need.
My problem is how to store the list of locked tasks. The original plan was simply to add an additional bit field 'IsLocked' to the Task table and update it when a user requested a lock and when the task was unlocked. However, we have about 300 concurrent users and a task takes only 3-4 minutes, meaning huge numbers of additional - and tiny - queries against the database. We were therefore wondering about in-memory storage: simply keeping a list of task ids in a 'lockedTasks' list.
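For illustration, a thread-safe version of that in-memory 'lockedTasks' idea might look like the sketch below. Python is used only to show the logic (a .NET equivalent would typically use a ConcurrentDictionary), and the expiry timeout is an added assumption so that an abandoned lock eventually frees itself:

```python
import threading
import time

class TaskLocks:
    """In-memory task locks: task_id -> (user_id, locked_at), guarded by a mutex."""

    def __init__(self, timeout_seconds=300):
        self._locks = {}
        self._mutex = threading.Lock()
        self._timeout = timeout_seconds

    def try_lock(self, task_id, user_id):
        """Return True if this user now holds (or already held) the lock."""
        with self._mutex:
            owner = self._locks.get(task_id)
            if owner and time.time() - owner[1] < self._timeout:
                return owner[0] == user_id        # locked and not yet expired
            self._locks[task_id] = (user_id, time.time())
            return True

    def unlock(self, task_id, user_id):
        """Release the lock, but only if this user is the one holding it."""
        with self._mutex:
            if self._locks.get(task_id, (None, 0))[0] == user_id:
                del self._locks[task_id]

locks = TaskLocks()
print(locks.try_lock("task-1", "alice"))   # True - alice gets the lock
print(locks.try_lock("task-1", "bob"))     # False - bob is rejected
```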
I had considered using caching, but am unsure of the best way to do this, or even whether better alternatives exist. If anyone has any experience with this, some advice would be great. Thanks.
I would avoid keeping this in process memory completely, as IIS is not great with it: if the Application Pool gets recycled for any reason, your list is simply gone!
Maybe a memcache system, if it doesn't lose data in that way, but...
I would advise something in the middle. File I/O is faster than requesting data from a database, especially if the database is not on the same machine (which, for security reasons, it should never be). So why not, just to hold your list, use one of the currently popular NoSQL databases?
MongoDB is a document database that has a .NET library and is easy to use. It is not as fast as memory, but it is much quicker than a traditional relational database for what you want.
Normally the NoSQL database will be hosted in the App_Data folder, so it will be extremely fast to access, and you can just keep the task_id and user_id of all locked tasks there.
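A sketch of that lock collection, shown with pymongo purely to illustrate the idea (the .NET driver exposes equivalent operations); the connection string, database, and collection names are assumptions. A unique index on task_id guarantees only one user can ever hold a lock on a given task:

```python
from pymongo import MongoClient, errors

client = MongoClient("mongodb://localhost:27017")
locks = client.crm.locked_tasks
locks.create_index("task_id", unique=True)      # at most one lock document per task

def try_lock(task_id, user_id):
    try:
        locks.insert_one({"task_id": task_id, "user_id": user_id})
        return True
    except errors.DuplicateKeyError:
        return False                             # somebody else already holds it

def unlock(task_id, user_id):
    locks.delete_one({"task_id": task_id, "user_id": user_id})
```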
Have you considered stateful filters?
Check out these links for more info:
ASP.NET MVC Filters and Statefulness
Brad Wilson: Advanced MVC 3 (Video)
Brad Wilson: Advanced MVC 3 (PDF)
I'm sorry, but if your app can't handle a single query every 3-4 minutes x 300 users, then you're doing something very wrong. Just browsing a site typically generates orders of magnitude more queries than that.
Let's imagine an app which is not just another way to post tweets, but something like an aggregator, and it needs to store / have access to tweets posted through it.
Since Twitter added a limit on API calls, the app should/may use some cache, and then it should periodically check whether a tweet has been deleted, etc.
How do you manage the limits? How do you think well-trafficked apps survive without being whitelisted?
To name a few (a sketch illustrating the caching and timer ideas follows the list):
Aggressive caching. Don't call out to the API unless you have to.
I generally pull down as much data as I can upfront and store it somewhere. Then I operate off the local store until it runs out and needs to be refreshed.
Avoid doing things in real time. Queue up requests and make them on a timer.
If you're on Linux, cronjobs are the easiest way to do this.
Combine requests as much as possible.
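A minimal sketch of the caching and timer ideas above (Python, with the actual Twitter call left as a placeholder; the refresh interval is an assumption sized to stay well under the hourly limit):

```python
import time

_cache = {"tweets": [], "fetched_at": 0.0}
REFRESH_SECONDS = 300     # 12 refreshes per hour leaves most of the quota untouched

def get_tweets(fetch_from_api):
    """Serve tweets from the local cache; only call the API when the cache is stale.
    `fetch_from_api` stands in for the real (ideally combined) Twitter request."""
    if time.time() - _cache["fetched_at"] > REFRESH_SECONDS:
        _cache["tweets"] = fetch_from_api()
        _cache["fetched_at"] = time.time()
    return _cache["tweets"]

# Usage: every caller goes through get_tweets, never straight to the API.
print(get_tweets(lambda: ["tweet 1", "tweet 2"]))   # first call hits the "API"
print(get_tweets(lambda: ["ignored"]))              # served from the cache
```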
Well, you have 100 requests per hour, so the question is how you balance them between the various types of requests. I think the best option is the way TweetDeck does it, which allows you to set a percentage for each type and saves the rest for posting (because that is important too).
For the caching, a database would be good, and I would ignore deleted tweets - once you have downloaded a tweet, it doesn't matter if it was later deleted. If you wanted to, you could in theory just try to open the page with the tweet, and if you get a 404 then it has been deleted. That means no cost against the API.
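Sketching both of those points: splitting the 100-requests-per-hour budget across request types TweetDeck-style, and checking for deleted tweets with a plain HTTP request so it never counts against the API. The percentages and the public tweet URL format are illustrative assumptions:

```python
import requests

HOURLY_LIMIT = 100
BUDGET = {"timeline": 0.50, "mentions": 0.20, "search": 0.10, "posting": 0.20}

def calls_allowed(request_type):
    """How much of this hour's API quota may be spent on the given request type."""
    return int(HOURLY_LIMIT * BUDGET[request_type])

def tweet_deleted(screen_name, tweet_id):
    """A 404 on the tweet's public page means it is gone - no API quota used."""
    url = f"https://twitter.com/{screen_name}/status/{tweet_id}"
    return requests.get(url).status_code == 404

print(calls_allowed("timeline"))   # 50 calls per hour reserved for the main timeline
```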
A bit of backstory: I am working on a web application that requires quite a bit of time to prep / crunch data before giving it to the user to edit / manipulate. The data request takes roughly 15-20 seconds to complete and a couple of seconds to process. Once it's there, the user can manipulate values on the fly. Any manipulation of values requires the data to be reprocessed completely.
Update: To avoid confusion, I am only making the data call once (the 15-second hit) and then want to keep the results in memory so that I will not have to call it again until the user is 100% done working with it. So the first pull will take a while, but, using Ajax, I will hit the in-memory data for the constant updates and keep the response time to around 2 seconds or so (I hope).
In order to make this efficient, I am moving the initial data into memory and using Ajax calls back to the server so that I can reduce the processing time needed to handle the recalculation that occurs with this user's updates.
Here is my question: with performance in mind, what would be the best way to store this data, assuming that only one user will be working with this data at any given moment?
Also, the user could potentially be working in this process for a few hours. When the user is working w/ the data, I will need some kind of failsafe to save the user's current data (either in a db or in a serialized binary file) should their session be interrupted in some way. In other words, I will need a solution that has an appropriate hook to allow me to dump out the memory object's data in the case that the user gets disconnected / distracted for too long.
So far, here are my musings:
Session State - Pros: locked to one user; has the Session End event, which will meet my failsafe requirements. Cons: slowest performance of my current options; the Session End event is sometimes tricky to get to fire properly.
Caching - Pros: good performance; has access to dependencies, which could be a bonus later down the line but is not really useful in the current scope. Cons: no easy failsafe step other than a write based on time intervals; global in scope, so I will have to ensure that users do not collide with each other's work.
Static - Pros: best performance; easiest to maintain, as I can directly leverage my current class structures. Cons: no easy failsafe step other than a write based on time intervals; global in scope, so I will have to ensure that users do not collide with each other's work.
Does anyone have any suggestions / comments on which option I should choose?
Thanks!
Update: Forgot to mention, I am using VB.NET, ASP.NET, and SQL Server 2005 for this task.
I'll vote for secret option #4: use the database for this. If you're talking about a 20+ second turnaround time on the data, you are not going to gain anything by trying to do this in-memory, given the limitations of the options you presented. You might as well set this up in the database (give it a table of its own, or even a separate database if the requirements are that large).
I'd go with the caching method for storing the data across page loads. You can name the cache key you store the data under to avoid conflicts.
For tracking user-made changes, I'd go with a more old-school approach: append to a text file each time the user makes a change, and then sweep that file at intervals to save the changes back to the DB. If you name the files based on the user/account or some other session-unique indicator, then there's no issue with conflicts, and the app (or some other support app, which might be a better idea in general) can sweep through all such files and update the DB even if the session is over.
The first part of this can be adjusted to stagger the writes further: save changes to the Session, write those to the file at intervals, then sweep the file at larger intervals. You can tune it for performance and choose what level of potential user-change loss is acceptable.
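A small sketch of that append-then-sweep flow (Python only to show the mechanics; the directory layout and the save_to_db callable are assumptions):

```python
import json
import os

CHANGE_DIR = "pending_changes"

def append_change(session_id, change):
    """Append one user change to a session-named log file; cheap and append-only."""
    os.makedirs(CHANGE_DIR, exist_ok=True)
    with open(os.path.join(CHANGE_DIR, f"{session_id}.log"), "a") as f:
        f.write(json.dumps(change) + "\n")

def sweep_changes(save_to_db):
    """Run at intervals (or by a support app): replay each pending file into the DB."""
    if not os.path.isdir(CHANGE_DIR):
        return
    for name in os.listdir(CHANGE_DIR):
        path = os.path.join(CHANGE_DIR, name)
        with open(path) as f:
            for line in f:
                save_to_db(json.loads(line))
        os.remove(path)

append_change("session-123", {"field": "price", "value": 19.99})
sweep_changes(print)    # here `print` stands in for the real DB write
```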
Use the Session, but don't rely on it.
Simply, let the user "name" the dataset, and make a point of actively persisting it for the user, either automatically, or through something as simple as a "save" button.
You cannot rely on the session simply because it is (typically) tied to the user's browser instance. If they accidentally close the browser (click the X button, their PC crashes, etc.), then they lose all of their work. Which would be nasty.
Once the user has that kind of control over the "persistent" state of the data, you can rely on the Session to keep it in memory and leverage that as a cache.
I think you've pretty much just answered your question with the pros/cons. But if you are looking for some peer validation, my vote is for the Session. Although the performance is slower (do you know by how much slower?), your processing is going to take a long time regardless. Do you think the user will know the difference between 15 seconds and 17 seconds? Both are "forever" in web terms, so go with the one that seems easiest to implement.
Perhaps a bit off topic, but I'd recommend putting those long processing calls in asynchronous pages (not to be confused with AJAX's asynchronous).
Take a look at this article and ping me back if it doesn't make sense.
http://msdn.microsoft.com/en-us/magazine/cc163725.aspx
I suggest creating a copy of the data in a new database table (let's call it EDIT) as you send the initial results to the user. If performance is an issue, do this in a background thread.
As the user edits the data, update the table (also in a background thread if performance becomes an issue). If you have to use threads, you must make sure that the first thread is finished before you start updating the rows.
This allows a user to walk away, come back, even restart the browser and commit whenever she feels satisfied with the result.
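A compact sketch of that flow, using sqlite3 just to show the steps (the real app would use SQL Server; table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data (id INTEGER PRIMARY KEY, value REAL);
    INSERT INTO data VALUES (1, 10.0), (2, 20.0);
""")

# 1. Copy the working set into an EDIT table as the initial results go to the user.
conn.execute("CREATE TABLE edit AS SELECT * FROM data")

# 2. As the user edits (possibly over hours), only the EDIT table is touched,
#    so nothing is lost if the browser or session dies.
conn.execute("UPDATE edit SET value = ? WHERE id = ?", (12.5, 1))

# 3. When the user commits, write the EDIT rows back to the real table.
conn.execute("UPDATE data SET value = (SELECT value FROM edit WHERE edit.id = data.id)")
conn.commit()
print(conn.execute("SELECT * FROM data").fetchall())   # [(1, 12.5), (2, 20.0)]
```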
One possible alternative to what the others mentioned is to store the data on the client.
Assuming the dataset is not too large and the code that manipulates it can run client side, you could store the data as an XML data island or a JSON object. This data could then be manipulated/processed entirely on the client with no round trips to the server. If you need to persist it back to the server, the resulting data could be posted via AJAX or a standard postback.
If this does not work with your requirements I'd go with just storing it on the SQL server as the other comment suggested.