Pagination in Microsoft Graph API to get users

I am using the Graph API to get the users of an organization. To implement pagination I used the $top parameter, which also gave me an @odata.nextLink to get the next page. Now I want the previous page when the user clicks a Previous button. I tried the $skip and $previous-page=true parameters, but they did not work.
The links I have used are:
https://graph.microsoft.com/v1.0/users?$top=10
https://graph.microsoft.com/v1.0/users?$skip=10 (tried to go back to the 2nd page from the 4th page, but this doesn't work)
https://graph.microsoft.com/v1.0/users?$previous-page=true&$top=10 (only returned the first 10 users and a nextLink)
Please help me navigate back to the previous page.

This isn't supported, nor is it what paging was intended for. Pagination is a performance optimization that works by reducing the amount of data transmitted with each call to the API. It is not designed to directly back a UI.
Your app should be pulling down the data as needed and caching it. When the user moves forward, you fetch data from the API. When the user moves backward, you fetch data from your cache.
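For illustration, a minimal sketch of that cache-backed pager in TypeScript. The class and field names are invented for this example; it assumes a valid bearer token and the @odata.nextLink property Graph returns:

// Minimal forward-pager with a client-side cache, so "Previous" never re-calls the API.
// Assumes a valid bearer token; error handling and typing kept deliberately small.
interface UserPage {
  value: { id: string; displayName: string }[];
  "@odata.nextLink"?: string;
}

class GraphUserPager {
  private pages: UserPage[] = [];      // cache of pages already fetched
  private index = -1;                  // which cached page is currently shown

  constructor(private token: string,
              private firstUrl = "https://graph.microsoft.com/v1.0/users?$top=10") {}

  private async fetchPage(url: string): Promise<UserPage> {
    const res = await fetch(url, { headers: { Authorization: `Bearer ${this.token}` } });
    if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
    return res.json();
  }

  // Move forward: serve from cache if we already have the page, otherwise follow nextLink.
  async next(): Promise<UserPage | undefined> {
    if (this.index + 1 < this.pages.length) return this.pages[++this.index];
    const url = this.index < 0
      ? this.firstUrl
      : this.pages[this.index]["@odata.nextLink"];
    if (!url) return undefined;        // no more pages
    const page = await this.fetchPage(url);
    this.pages.push(page);
    this.index++;
    return page;
  }

  // Move backward: purely a cache lookup, no API call.
  prev(): UserPage | undefined {
    return this.index > 0 ? this.pages[--this.index] : undefined;
  }
}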

Related

How does Instagram's or Facebook's feed work in iOS

I'm building an iOS client for a new social network service based on music. It will have a part where the user interacts with a 'feed' of news. What is the most convenient approach to building this feed? Should I ask the server for, say, the last 20 posts, and when the user scrolls to the end of the first 20, ask for another 20? Any help appreciated.
What you're looking to do is normally referred to as pagination.
On the back end the data is broken up into chunks. The iOS client will then make a GET request like
www.myAPI.com/feed?page=1
to get the initial chunk of 20 items. Then, before the user finishes scrolling to the bottom, make another call
www.myAPI.com/feed?page=2
and merge and append the results to the table view.
If you're interested in how Facebook creates such fast table views, consider looking into https://github.com/facebook/AsyncDisplayKit
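To sketch that flow (in TypeScript rather than Swift, purely to show the pagination logic; the endpoint is the hypothetical one above):

// Page-by-page feed loader: fetch a chunk, append it, and prefetch the next
// page before the user reaches the bottom. Endpoint shape is hypothetical.
interface Post { id: number; text: string }

const PAGE_SIZE = 20;
let currentPage = 0;
let exhausted = false;
const feed: Post[] = [];

async function loadNextPage(): Promise<void> {
  if (exhausted) return;
  currentPage++;
  const res = await fetch(`https://www.myAPI.com/feed?page=${currentPage}`);
  const posts: Post[] = await res.json();
  if (posts.length < PAGE_SIZE) exhausted = true; // short page => no more data
  feed.push(...posts);                            // append, then re-render the list
}

// Call from the scroll handler when the user nears the end of the list:
// if (visibleIndex > feed.length - 5) loadNextPage();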

Twitter - get number of shares using the new API

So, since we can no longer get share counts using Twitter's API, are there any workarounds to get the result, no matter how convoluted?
The only thing I'm thinking of is using the search/tweets endpoint to get tweets page by page and then using the next_results object to get the next page, counting each time.
Obviously this has massive flaws: for a popular search term the next iteration of the loop will probably contain duplicates, not to mention that too many API calls will invoke the rate limiter.
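For what it's worth, a sketch of that loop against the v1.1 search/tweets endpoint might look like the following in TypeScript; OAuth signing is omitted, and the dedupe Set only papers over the duplicate problem just described:

// Counts tweets matching a URL by walking search pages via
// search_metadata.next_results. Auth header construction is omitted;
// note v1.1 search also only covers roughly the last week of tweets.
async function countMentions(url: string, authHeader: string): Promise<number> {
  const seen = new Set<string>();            // dedupe by tweet id across pages
  let query = `?q=${encodeURIComponent(url)}&count=100`;
  while (query) {
    const res = await fetch(`https://api.twitter.com/1.1/search/tweets.json${query}`,
                            { headers: { Authorization: authHeader } });
    if (res.status === 429) break;           // rate limited: give up (or back off)
    const body = await res.json();
    for (const status of body.statuses) seen.add(status.id_str);
    query = body.search_metadata?.next_results ?? ""; // empty when no more pages
  }
  return seen.size;
}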
There's no good way to do it from the API. There are third party services which track shares.
I use http://newsharecounts.com/ - once you've signed up your domain, you can call a URL like:
http://public.newsharecounts.com/count.json?url=https://shkspr.mobi/blog/2015/03/this-is-what-a-graph-of-8000-fake-twitter-accounts-looks-like/
And get back a JSON count
{
  "url": "https://shkspr.mobi/blog/2015/03/this-is-what-a-graph-of-8000-fake-twitter-accounts-looks-like/",
  "count": 739,
  "tracked": 6,
  "historic": 733
}
There's also http://opensharecount.com/ which works in a similar way - although I've not had much success with it.
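If you go this route, reading the count is only a few lines; a hypothetical sketch in TypeScript, assuming the count.json endpoint shown above:

// Reads the "count" field from the NewShareCounts JSON shown above.
async function shareCount(url: string): Promise<number> {
  const res = await fetch(
    `http://public.newsharecounts.com/count.json?url=${encodeURIComponent(url)}`);
  const data = await res.json();
  return data.count; // e.g. 739 in the example response above
}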

How to show results while still performing a task in Ruby on Rails

I'm trying to implement a tool using Ruby on Rails which should crawl a website and search for hyperlinks. There is a problem: if the website has a huge number of links, the user has to wait a long time.
This is probably a naive question: how can I show results (for example, 10 results) while the crawling process is still running?
Then the user clicks "Next" and it shows the next 10 links, and so on.
Imagine that a page has a list of links.
Implement an action in your controller that, given a position in the links list, returns the next 10 links as JSON for display.
With JavaScript, call the action you just implemented with zero, get the JSON, parse it, and display it on the screen.
Repeat step 2, adding the number of links received so far as the parameter to the AJAX call, until it receives zero links (a client-side sketch of this loop follows below).
This will be much more efficient if you get all the links on a page in one call, show them to the user, and then repeat for each of those links. Like the following:
For a given page, add an action that returns all the links it has as JSON.
Do an AJAX call to that action, take the links, display them, and then use each of the given links as a parameter to crawl those pages in turn.
Do this until you have no more links. Keep a links blacklist to avoid cycles.
If you didn't get the AJAX part, check the definition of AJAX on Wikipedia and this question.
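A minimal client-side sketch of the polling loop from the first approach above, in TypeScript; the /links route and its offset parameter are invented for illustration:

// Repeatedly asks the (hypothetical) Rails action for the next batch of 10
// links, rendering each batch as it arrives, until an empty batch comes back.
async function streamLinks(render: (links: string[]) => void): Promise<void> {
  let offset = 0;
  while (true) {
    const res = await fetch(`/links?offset=${offset}`);  // Rails controller action
    const links: string[] = await res.json();
    if (links.length === 0) break;   // the crawler has nothing more for us
    render(links);                   // append the new links to the page
    offset += links.length;
  }
}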

How to avoid tracking 'clicks' from url shorteners themselves?

I'm using the Google URL shortener to shorten my link, which I post on Twitter from my PHP application.
To track visits to that page, I update a visit count in my PHP page whenever someone clicks the link. But when I pass my URL to the Google shortener API, it automatically pings my site, so my visit count increases, and Twitter does the same. Because of this I get 2 to 5 extra click counts in the DB. Can anyone help me handle this issue? I would like to track how many clicks were made by real users, not by the Google and Twitter shortener APIs.
The easy / lazy way is to do further testing and see if a consistent number of hits come from Google and Twitter. If so, you just pre-adjust the count to subtract that number of hits for every shortened URL.
The more rigorous way is to detect User-Agent headers for each page request. If it's from Google or Twitter, ignore it. If it's from anything else, increment your clicks count.
We had a similar kind of problem. When the hit is made by Google, the request header contains From: googlebot(at)googlebot.com. We used this header information and implemented custom logic to exclude hits from Google bots.
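A sketch of that header filtering, written in TypeScript for illustration (the question is PHP, but the logic is the same; the bot patterns are examples, not an exhaustive list):

// Returns true when a request looks like it came from a known crawler,
// based on the User-Agent and From headers mentioned in the answers above.
const BOT_PATTERNS = [/googlebot/i, /twitterbot/i];  // extend as needed

function isBotHit(headers: Record<string, string | undefined>): boolean {
  const fingerprint = `${headers["user-agent"] ?? ""} ${headers["from"] ?? ""}`;
  return BOT_PATTERNS.some(pattern => pattern.test(fingerprint));
}

// In the click handler: only count human visits.
// if (!isBotHit(req.headers)) incrementClickCount(url);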

Best way to display a Twitter feed (with history) on a Rails site

On a Rails site, I'd like to display a certain Twitter feed, with pagination so the visitor can see previous tweets (as far back as needed).
I implemented it using the Twitter gem, using the Search method, which has a nice pagination feature, but hit a limitation: Twitter will only return statuses from the last two weeks. So after going back a couple of pages, it won't fetch any more.
I could use the user_timeline method with max_id and then do my own pagination (passing the max_id of the last item viewed back to the controller to fetch the next batch).
Or, I could have a rake task that polls the Twitter feed frequently (with cron), and stores the tweets in the DB. The site would serve those up from the DB instead of querying Twitter at all.
Which would be the best or recommended method? I don't like having to store the tweets in the DB, but that would also take care of the latency of querying Twitter (though I could use fragment caching to overcome that, except that I haven't been able to get it working with Ajax).
Thanks for the advice.
I take the opposite view here: storing tweets in your database is not a good idea, for a range of reasons.
You can never be sure that you got all the recently added tweets, as a whole bunch could be added in quick succession. Sure, you can make the cron job run more frequently, but then we get to the next problem.
If tweets are deleted, for whatever reason, your app will still cache them, which to me is bad practice, as they would have been removed for a reason.
To be honest, I would not have your app serve the tweets, but instead have a 'widget' (jQuery or similar) on the page which would load them once the page has loaded, and look at implementing some form of pagination there instead.
I'd go for storing the tweets in the database.
That way, even if Twitter is offline you won't have long load times; you'll just rely on your database and the tweets will be displayed appropriately.
Only your background job will fail when Twitter is unavailable, but that's not really a problem.
We download and store the tweets in a local database. I recently wrote a blog post about how we achieved this:
http://www.arctickiwi.com/blog/16-download-you-twitter-feed-using-ruby-on-rails-with-oauth
You can then use will_paginate to handle your pagination and go back as far as you want.
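For reference, a rough sketch of the max_id paging the question mentions, which a download job like this would use to walk back through history. TypeScript for illustration; it assumes Twitter's v1.1 statuses/user_timeline endpoint, with OAuth header construction omitted:

// Walks backwards through a user's timeline with max_id cursoring
// (Twitter API v1.1). Each batch's lowest id, minus one, becomes the
// next max_id. Tweet ids exceed Number.MAX_SAFE_INTEGER, hence BigInt/id_str.
async function fetchOlderTweets(screenName: string, authHeader: string,
                                maxId?: bigint): Promise<{ id_str: string; text: string }[]> {
  const params = new URLSearchParams({ screen_name: screenName, count: "20" });
  if (maxId !== undefined) params.set("max_id", maxId.toString());
  const res = await fetch(
    `https://api.twitter.com/1.1/statuses/user_timeline.json?${params}`,
    { headers: { Authorization: authHeader } });
  return res.json();
}

// Usage: fetch page 1, then pass (lowest id - 1) to get the next page back.
// const page1 = await fetchOlderTweets("user", auth);
// const lowest = BigInt(page1[page1.length - 1].id_str);
// const page2 = await fetchOlderTweets("user", auth, lowest - 1n);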
