OK, I looked around and couldn't find an answer to this. The Twitter API doesn't expose the date/time at which a follower started following you. If I wanted to figure out the number of new followers per day, I would somehow have to get the previous day's follower list and compare the two. Any idea how to do that effectively (e.g. save a snapshot every day to a DB, or save a snapshot, filter it each day, and save the results)?
I'd save a snapshot in the DB for comparison. If you store the name of each follower, you can also find out who unfollows you each day as well as who your new followers are.
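For example, a rough sketch of the snapshot-and-diff in Ruby (FollowerSnapshot is an assumed ActiveRecord-style model, and fetch_current_follower_ids stands in for whatever Twitter API client call you use to list followers):

# Sketch only: FollowerSnapshot is an assumed model with :taken_on (date)
# and :follower_ids (a serialized array of follower names or numeric IDs).
def record_daily_snapshot
  today_ids = fetch_current_follower_ids            # via your Twitter API client
  yesterday = FollowerSnapshot.find_by(taken_on: Date.yesterday)

  if yesterday
    new_followers  = today_ids - yesterday.follower_ids
    lost_followers = yesterday.follower_ids - today_ids
    puts "New today: #{new_followers.size}, lost today: #{lost_followers.size}"
  end

  FollowerSnapshot.create!(taken_on: Date.today, follower_ids: today_ids)
end

Run it once a day (cron or a scheduled job) and you get both counts and the actual names that changed.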
Hi guys, I'm trying to make my life a bit easier and figure out how to get the work logged against different tasks by the same person in one day, so I can get the sum of their hours logged in total. Currently I can do this via the browser filter option using:
worklogAuthor = currentUser() AND worklogDate = "2019/01/30"
The problem is that it returns entire tasks, not just the hours, so I need to click through each task and read the number against the work log. Is there a way I could limit the fields being returned so I just see the work logged and maybe the task ID? I see there is some documentation out there for doing that, but I haven't been able to get it quite right yet.
Given that it has been 2+ years, I hope you have since found another approach. The issue you are bumping into is simply that Advanced Search will only return a list of tasks; you can't modify that "select". What I would recommend is leveraging either the reporting or dashboarding functionality; the two widgets I would suggest are Issue Statistics or Workload Pie Chart. Worst case, you can always dump the data out to a CSV, or retrieve it from an API and aggregate it yourself, as in the sketch below.
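A rough sketch of the API route with Ruby's net/http (instance URL, credentials, and the author check are placeholders; also note that the worklog field returned by /search is truncated on issues with many worklogs, so you may have to page through /rest/api/2/issue/{key}/worklog for those):

require 'net/http'
require 'json'
require 'uri'

# Placeholders: point these at your own Jira instance and credentials.
BASE_URL = 'https://your-jira.example.com'
JQL      = 'worklogAuthor = currentUser() AND worklogDate = "2019/01/30"'

uri = URI("#{BASE_URL}/rest/api/2/search")
uri.query = URI.encode_www_form(jql: JQL, fields: 'summary,worklog')

request = Net::HTTP::Get.new(uri)
request.basic_auth('your-username', 'your-api-token')

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }
issues   = JSON.parse(response.body).fetch('issues', [])

total_seconds = 0
issues.each do |issue|
  worklogs = issue.dig('fields', 'worklog', 'worklogs') || []
  worklogs.each do |wl|
    # Each issue returns everyone's worklogs, so filter by author and date again.
    # (On Jira Cloud, match on accountId or displayName instead of name.)
    next unless wl.dig('author', 'name') == 'your-username'
    next unless wl['started'].to_s.start_with?('2019-01-30')
    total_seconds += wl['timeSpentSeconds'].to_i
  end
end

puts "#{issues.size} issues, #{(total_seconds / 3600.0).round(2)} hours logged"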
I'm working on an app that allows users to upload and download pics. Similar to Snapchat, users can view the pics of the people they follow. After 24 hours these pics are moved to an archive table so users can no longer see them. I'm accomplishing this with MySQL partitions.
However, on the client side I need to continuously update the MySQL query with the date of the last row retrieved from the photos table. I store this date in the iOS app. This becomes problematic if the user logs out and allows someone else to log in: I have to clear this data and then have no reference point for either user.
I have a possible way around this and I want to know how feasible it is. I would create a trigger that runs each time a user retrieves photos and updates a column on the users table holding the last date they viewed. That way, when any user logs back in, I have a reference to the last date they viewed. Is this a good idea? I'm open to any suggestions on how to improve this approach, given that I need to keep the pictures instead of just deleting them.
*Note: the partitions would work, but because I need to ensure photos last a minimum of 24 hours, some photos end up lasting more than 24 hours, which means a user could still see them.
Photos table:
* id: binary(16)
* users_id: foreign key, binary(16)
* filename: varchar
* created_at: datetime
The photos themselves are stored on Amazon S3.
Add a column:
visible int not null
Set visible to 1 on upload. Also put NOW() into the created_at datetime column.
Create an event; see:
http://www.mysqltutorial.org/mysql-triggers/working-mysql-scheduled-event/
Have the event run daily or every 6 hours, whatever works. It can be a cron task instead (see the sketch below).
When it runs, it sets visible to 0 for anything over a day old.
When users view someone else's profile pictures, they see only the visible=1 pictures.
The user who posted the pictures sees all of their own pictures.
So it is automated and you can be asleep at the switch.
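If you go the cron-task route, a rough sketch in Ruby with the mysql2 gem (connection settings are placeholders):

require 'mysql2'

# Run this from cron (e.g. every 6 hours). Connection settings are placeholders.
client = Mysql2::Client.new(
  host:     'localhost',
  username: 'app',
  password: 'secret',
  database: 'photo_app'
)

# Hide anything older than a day; the poster's own profile query ignores `visible`.
client.query(<<~SQL)
  UPDATE photos
  SET visible = 0
  WHERE visible = 1
    AND created_at < NOW() - INTERVAL 1 DAY
SQL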
I have an app I'm working on that is a credits system for a store. A customer brings in items and receives a credit, and can then turn around and use that credit towards certain goods in the store. I've set it up so every time a credit holder or credit is created, updated, or destroyed, the event is logged. I'm wondering if there is an easy way to use the event data from the logs to build a dashboard displaying things such as X credits created and Y credits used today. This may not be the right way to go about it at all, and if so feel free to point me in another direction. Thanks in advance!
You should save the information into a database (in addition to the log) and operate on it from there.
So, for example, maybe you have a User: it should be a model with a credits attribute, which should be an integer. You can modify this value every time a transaction happens.
You can also create an associated Transaction model that belongs_to the user; to find the transactions that happened on a certain day, you can pull up all of that user's transactions in a given time range, something like the sketch below.
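A rough sketch of what that could look like (model and column names here are illustrative, not taken from your app):

# app/models/credit_transaction.rb -- illustrative names, adjust to your schema.
class CreditTransaction < ApplicationRecord
  belongs_to :user

  # Assumed columns: user_id, amount (integer), kind ("created" or "used"), created_at.
  scope :on_day, ->(date) { where(created_at: date.beginning_of_day..date.end_of_day) }
end

# Dashboard numbers for today:
created_today = CreditTransaction.on_day(Date.current).where(kind: 'created').sum(:amount)
used_today    = CreditTransaction.on_day(Date.current).where(kind: 'used').sum(:amount)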
If your credits work similarly to money and your transactions are like orders, you may want to look into the Spree gem: https://github.com/spree/spree
You definitely do not want to be reading from the logs for routine actions like the ones you're describing.
This question is very similar to this one; however, there are no answers on that one. I posted this one with more clarity in hopes of receiving an answer.
According to this presentation, Twitter incorporates a fanout method to push Tweets to each individual user's timeline in Redis. Obviously, this fanout only takes place when a user you're following Tweets something.
Suppose a new user, who has never followed anyone before (and conversely has no Tweets in their timeline), decides to follow someone. Using just the above method, they would have to wait until the user they followed Tweeted something before anything showed up in their timeline. After some observation, this is not the case: Twitter pulls in the latest Tweets from that user.
Now suppose that a new user follows 5 users, how does Twitter organize and push those Tweets into the user's timeline in Redis?
Suppose a user already follows 5 users and has a fair number of Tweets from them in their timeline. When they follow another 5 users, how are those users' individual Tweets pushed into the initial user's timeline in Redis in the correct order? More importantly, how does Twitter calculate how many to bring in from each user (seeing that timelines are capped at 800 Tweets)?
Here is how I would try to implement this, if I understand your question correctly.
Store each tweet in a hash. The key of the hash could be something like: tweet:<tweetID>.
Store the IDs of a given user's tweets in a sorted set named user:<userID>:tweets. Set each tweet's score to its unix timestamp so they appear in the correct order. You can then get the 800 most recent tweet IDs for the user with the ZREVRANGEBYSCORE command:
ZREVRANGEBYSCORE user:<userID>:tweets +inf -inf LIMIT 0 800
When a user follows a new person, you copy the list of IDs returned by this command into the follower's timeline (either in application code or using a Lua script). This timeline is once again a sorted set, with unix timestamps as scores. If you do the copy in application code, which is perfectly acceptable with Redis, don't forget to use pipelining to perform the multiple writes to the sorted set in a single network round trip. It will greatly improve performance.
To get the timeline content, use pipelining too: request the tweet IDs with ZREVRANGEBYSCORE, using a LIMIT option and/or a timestamp as the lower bound if you don't want tweets posted before a certain date, then fetch the corresponding tweet hashes.
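A rough sketch of those steps with the redis-rb gem (key names follow the scheme above; assumes redis-rb 4.6+ for the block-argument pipeline form):

require 'redis'

TIMELINE_CAP = 800

# Store a new tweet: body in a hash, ID indexed in the author's sorted set.
def store_tweet(redis, user_id, tweet_id, text, posted_at)
  ts = posted_at.to_i
  redis.pipelined do |pipe|
    pipe.hset("tweet:#{tweet_id}", { 'user_id' => user_id, 'text' => text, 'posted_at' => ts })
    pipe.zadd("user:#{user_id}:tweets", ts, tweet_id)
  end
end

# Back-fill a follower's timeline when they follow someone new.
def backfill_timeline(redis, follower_id, followee_id)
  recent = redis.zrevrangebyscore("user:#{followee_id}:tweets", '+inf', '-inf',
                                  limit: [0, TIMELINE_CAP], with_scores: true)
  return if recent.empty?

  redis.pipelined do |pipe|
    recent.each { |tweet_id, ts| pipe.zadd("user:#{follower_id}:timeline", ts, tweet_id) }
    # Keep only the newest 800 entries after merging.
    pipe.zremrangebyrank("user:#{follower_id}:timeline", 0, -(TIMELINE_CAP + 1))
  end
end

Because both the source set and the timeline are scored by timestamp, following 5 new users is just 5 back-fills followed by the same trim, and the merged timeline comes out in the correct order.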
So I am working on a Rails application, and the person I am designing it for has what seem like extremely hefty data-volume requirements. They want to gather ALL posts by a user who logs into the application, plus all of the posts of each of their friends for the past year.
Before this particular level of detail was communicated to me, I built it using the fb_graph gem and paginated through posts. The problems are, first, that it takes a very long time, even when I change the number of posts requested per page, and second, that I frequently hit OAuth error #613 (more than 600 requests per 600 seconds). After increasing each request to 200 posts I hit this limit less often, but it still takes an incredibly long time to get all of this data.
I am not particularly familiar with the FQL alternative, but it seems we are going to have to prioritize either speed or volume of data. Is there a way that I am missing that would allow me to retrieve this much information quickly?
Edit: I do save all posts to the database as I retrieve them. What is required is to make one pass through and grab all of the posts from the past year for the user and their friends. This process takes a long time, and I am basically wondering whether there is any way it can be sped up.
One thing that I'd like to point out here:
You should implement some kind of local caching for users' posts. That is, instead of querying FB for the posts each time, save the posts in your local database and only check for new posts whenever needed.
This is faster and saves you many API requests.
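A rough sketch of that idea (the Post model and the fetch_posts_since helper are assumptions standing in for your own fb_graph pagination code):

# Sketch only: Post is an assumed ActiveRecord model (fb_id, author_fb_id,
# message, posted_at), and fetch_posts_since stands in for your existing
# fb_graph call that pages through a user's feed.
def sync_posts_for(author_fb_id)
  last_seen = Post.where(author_fb_id: author_fb_id).maximum(:posted_at)

  # Only ask Facebook for posts newer than what is already cached locally.
  fetch_posts_since(author_fb_id, last_seen || 1.year.ago).each do |remote|
    Post.find_or_create_by!(fb_id: remote[:id]) do |post|
      post.author_fb_id = author_fb_id
      post.message      = remote[:message]
      post.posted_at    = remote[:created_time]
    end
  end
end

The expensive year-long crawl then only has to happen once per person; after that you are paging through a few new posts at a time and staying well under the rate limit.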