I'm on a MEAN stack using passportjs and linkedin-passport to authorize users and retrieve information. However, I want to limit the number of connections that a user has to 100. Is this possible, and if so, how do I specify this limit?
Your question is a little ambiguous. It reads like you want to limit the total number of connections a user can ever have, which is not possible via the API.
However, what I really think you mean is how to limit the number of connection results returned from an API request, in which case, the documentation on LinkedIn's developer site is quite clear about this:
https://developer.linkedin.com/documents/connections-api
There are start and count parameters you can include in your query for paging through result sets.
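For example, here is a minimal TypeScript sketch against the v1 Connections API documented above, assuming accessToken was captured by your passport LinkedIn strategy's verify callback:

    // A paging-limited fetch: request only the first 100 connections.
    declare const accessToken: string; // assumed: from your passport verify callback

    const url = "https://api.linkedin.com/v1/people/~/connections" +
      `?start=0&count=100&format=json&oauth2_access_token=${accessToken}`;
    const res = await fetch(url);
    if (!res.ok) throw new Error(`LinkedIn API error: ${res.status}`);
    const connections = await res.json(); // at most 100 entries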
I'm designing a web application that needs to retrieve and organize emails from at least 100 mailboxes from our company's domain. Something like:
abc@company.com
cde@company.com
efg@company.com
...
My web app needs to check each of these mailboxes every couple of seconds to retrieve new emails and index them. However, I don't want to hit the API limit for them.
Reading the official documentation, it seems like I can have a single app ID and use it to retrieve all this information without hitting the API limit.
If I have 100 mailboxes and, say, check each mailbox every 10 seconds (with at most 4 concurrent threads), is it safe to say that I won't hit any kind of rate limit?
It might be worth mentioning that I'm going to use the Delta Link feature to check for new emails. This will make things faster and I'm not sure if it has any effect on the rate limits.
You really need to decrease the frequency of direct polling, and instead fetch a mailbox only once you receive a change notification for that particular mailbox/folder.
See https://learn.microsoft.com/en-us/graph/api/subscription-post-subscriptions?view=graph-rest-1.0&tabs=http for details on Graph event subscriptions.
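As a rough illustration, here is a minimal TypeScript sketch of creating one such subscription for a single mailbox; the notificationUrl is a hypothetical endpoint you would host, and accessToken is assumed to be an application token with the appropriate Mail permission:

    declare const accessToken: string; // assumed: app token with Mail.Read

    // One subscription per mailbox: Graph calls us when a message arrives,
    // so we only query a mailbox when there is actually something new.
    const res = await fetch("https://graph.microsoft.com/v1.0/subscriptions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        changeType: "created",
        notificationUrl: "https://yourapp.example.com/graph/notify", // hypothetical endpoint you host
        resource: "users/abc@company.com/mailFolders('inbox')/messages",
        expirationDateTime: new Date(Date.now() + 60 * 60 * 1000).toISOString(),
        clientState: "secret-echoed-back-so-you-can-verify-the-sender",
      }),
    });
    const subscription = await res.json(); // renew it before expirationDateTime

When a notification arrives you can then run your delta query against just that mailbox, so the request volume tracks actual mail traffic rather than the number of mailboxes.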
I'm using Firebase in my iOS app but I want to ensure a value is never sent from the server to the client.
Users in the app are shown to each other based on a score they have. So a user with a score of 5 will see other users who have a score of 5. I don't want to include this value in the request/response to Firebase.
Where I manage the server myself, I can handle this with server-side logic: look up the user on the server, call a function that determines who has the same score, and return the relevant users without the client ever receiving the score.
With Firebase my understanding is I'd have to send the value to Firebase in a query i.e. get all users with this user's score.
How can I do this without exposing the user's score? I want something along the lines of a user_scores node where I can query the current user's score, then use that result to query another users node for the relevant users, without having to nest the query on the client and thus expose the score in the request/response.
Many thanks!
Your understanding is pretty much on point: there is no way to make a "dynamic" query like this without actually exposing the varying parameter to the client.
Here are two ideas you could try to use as a workaround:
A variation of "security by obscurity": instead of exposing a single number, obfuscate that value in a way that makes guessing its purpose and other values an unpleasant experience, and share the obfuscated key with the client (a sketch follows after the second idea).
If you keep your users grouped by this key, not just as a flat list where this is a child node, you can use security rules to enforce that the user cannot read any other group than theirs.
(Note that this is also true for numerical values. Security rules are not filters.)
In a much more involved strategy, you could make the query static: store and maintain a list of matching users per user, so each client can load its own personal list without any varying parameter other than its own UID.
(This is probably not really feasible if there is a lot of movement involved. But it might work in some edge cases.)
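To make the first idea concrete, here is a minimal sketch, shown with the Firebase web SDK for brevity (the same structure applies from the iOS SDK); user_group_keys and user_groups are hypothetical node names, and the rules excerpt in the comment is what actually enforces the restriction:

    // Rules sketch (Realtime Database): each group is readable only by its
    // own members. Rules are enforced server-side and are not filters:
    //
    //   { "rules": { "user_groups": { "$groupKey": {
    //       ".read": "root.child('user_group_keys').child(auth.uid).val() === $groupKey"
    //   } } } }
    //
    import { initializeApp } from "firebase/app";
    import { getAuth } from "firebase/auth";
    import { getDatabase, ref, get } from "firebase/database";

    const app = initializeApp({ /* your Firebase config */ });
    const uid = getAuth(app).currentUser!.uid;
    const db = getDatabase(app);

    // 1. Read this user's opaque group key (the obfuscated stand-in for the score).
    const groupKey = (await get(ref(db, `user_group_keys/${uid}`))).val();

    // 2. Read that one group; the rules reject reads on any other group.
    const matchingUsers = (await get(ref(db, `user_groups/${groupKey}`))).val();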
Is there a way to get ALL the responses for a single day in one transaction for a specific survey? In the API docs I see there is the /surveys/{id}/responses/bulk option, and I can even send the start_created_at variable.
But I think that the API response has a maximum number of records it can send; in that case, what could the solution be? Paging through the results?
I'm using the .NET API, found at this site, but I can build my own wrapper if necessary.
Reference link to API doc: /Surveys/SURVEY_ID/responses/bulk
Yes, you're right: the /surveys/{id}/responses/bulk endpoint is what you're looking for, and you can use start_created_at and end_created_at to filter the data to a date range.
The SurveyMonkey API doesn't allow a full dump of all your data; it will always be paginated. By default it paginates 50 at a time, but you can change that with the per_page GET parameter.
The maximum per_page varies by endpoint; for the responses bulk endpoint it is 100. So you'll have to fetch 100 at a time, looping through the pages to get all your data.
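For example, a minimal TypeScript sketch of that paging loop, assuming a v3 bearer token and following the links.next field the API returns to advance through pages:

    declare const token: string;    // assumed: your v3 bearer token
    declare const surveyId: string; // assumed: the survey you're exporting

    // Pull one day's responses, 100 per request, following links.next.
    let url: string | null =
      `https://api.surveymonkey.com/v3/surveys/${surveyId}/responses/bulk` +
      "?per_page=100" +
      "&start_created_at=2019-01-01T00:00:00" + // example date range
      "&end_created_at=2019-01-02T00:00:00";
    const allResponses: unknown[] = [];
    while (url) {
      const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
      const page = await res.json();
      allResponses.push(...page.data);
      url = page.links?.next ?? null; // no next link on the last page
    }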
One alternative is to use webhooks and set up a subscriber; that way you get new responses in real time and can fetch them one by one, keeping your data up to date on your side as responses come in, rather than running a script to bulk-dump everything. But this depends on your use case: if you're building something like an export feature, you'll have to go the paginated route.
I'm working on a D2L add-on right now and trying to retrieve all the courses the current user is enrolled in. The only way I've found so far is the GET /d2l/api/lp/(version)/enrollments/myenrollments/ call. This works perfectly for a small number of courses but is extremely slow for more than approximately 50. Is there any better way to retrieve all the enrollments?
Thanks in advance
For end users, this call is indeed the one intended to address this need. Since a portion of the performance drop may come from having to process a series of data pages (requiring several calls), there are a few techniques you can try to recover a bit of performance here:
You can pre-filter the call based on org unit type: this likely requires you, as the app developer, to know the org unit type IDs for the org units of interest to your end users. For example, if your main use case is "student wants to see all the course offerings she's enrolled in", then you can provide the appropriate org unit type ID for course-offering org units to your API call. This becomes more difficult if your app must address several different back-end services, or you don't know the org unit type ID used by the back-end service for the relevant org unit types. (See the sketch after these suggestions.)
You can try using an HTTP library that can pool connections, and batch together the calls that fetch all the data pages you need to build the complete list of enrollments. This reduces the per-call overhead somewhat, but the performance benefit will likely only be marginal.
Currently, this API route does not allow the caller to request a particular data page size, and allowing that would improve the overall latency involved in this use case: for example, requesting a page size of 500 records could conceivably fetch back all the enrollments in a single call. I would judge page-size requesting to be a completely reasonable feature enhancement to request, and I would encourage you to request it on D2L's Product Idea Exchange; in fact, I'd be rather surprised if someone hasn't already done so.
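To tie the first suggestion to the paging loop it is meant to shorten, here is a minimal TypeScript sketch; the org unit type ID, host, version, and signed-request helper are all assumptions standing in for your app's real configuration:

    declare const host: string;    // assumed: your Brightspace host
    declare const version: string; // assumed: a supported LP API version, e.g. "1.9"
    // assumed: a helper that signs the request per the Valence auth scheme
    declare function valenceFetch(url: string): Promise<any>;

    // Walk the bookmark-paged result set, pre-filtered to one org unit type.
    // Org unit type 3 is only an example value for course offerings.
    let bookmark = "";
    const enrollments: unknown[] = [];
    do {
      const page = await valenceFetch(
        `https://${host}/d2l/api/lp/${version}/enrollments/myenrollments/` +
        `?orgUnitTypeId=3` + (bookmark ? `&bookmark=${bookmark}` : ""));
      enrollments.push(...page.Items);
      bookmark = page.PagingInfo.HasMoreItems ? page.PagingInfo.Bookmark : "";
    } while (bookmark);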
If I want to download a list of all of my followers by calling the twitter API, how many calls is it? Is it one call or is it the number of followers I have?
Thanks!
Sriram
If you just need the IDs of your followers, you can specify:
http://api.twitter.com/1/followers/ids.json?screen_name=yourScreenName&cursor=-1
The documentation for this call is here. This call will return up to 5,000 follower IDs per call, and you'll have to keep track of the cursor value on each call. If you have fewer than 5,000 followers, you can omit the cursor parameter.
If, however, you need to get the full details for all your followers, you will need to make some additional API calls.
I recommend using statuses/followers to fetch the follower profiles since you can request up to 100 profiles per API call.
When using statuses/followers, you just specify which user's followers you wish to fetch. The results are returned in the order that the followers followed the specified user. This method does not require authentication; however, it does use a cursor, so you'll need to manage the cursor ID for each call. Here's an example:
http://api.twitter.com/1/statuses/followers.json?screen_name=yourScreenName&cursor=-1
Alternatively, you can use users/lookup to fetch the follower profiles by specifying a comma-separated list of user IDs. You must authenticate in order to make this request, but you can fetch any user profiles you want -- not just those that are following the specified user. An example call would be:
http://api.twitter.com/1/users/lookup.json?user_id=123123,5235235,456243,4534563
So, if you had 2,000 followers, you would use just one call to obtain all of your follower IDs via followers/ids, if that was all you needed. If you needed the full profiles, you would burn 20 calls using statuses/followers, and you would use 21 calls when alternatively using users/lookup due to the additional call to followers/ids necessary to fetch the IDs.
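A minimal TypeScript sketch of that 21-call variant, with oauthFetch standing in for whatever authenticated JSON transport you already use:

    // assumed: a helper that performs the (optionally authenticated) GET
    // and parses the JSON body.
    declare function oauthFetch(url: string): Promise<any>;

    // 1. Collect all follower IDs, up to 5,000 per call, following the cursor.
    let cursor = "-1";
    const ids: number[] = [];
    while (cursor !== "0") {
      const page = await oauthFetch(
        `http://api.twitter.com/1/followers/ids.json?screen_name=yourScreenName&cursor=${cursor}`);
      ids.push(...page.ids);
      cursor = page.next_cursor_str; // becomes "0" on the last page
    }

    // 2. Hydrate full profiles, 100 at a time, via users/lookup.
    const profiles: unknown[] = [];
    for (let i = 0; i < ids.length; i += 100) {
      const chunk = ids.slice(i, i + 100).join(",");
      profiles.push(...await oauthFetch(
        `http://api.twitter.com/1/users/lookup.json?user_id=${chunk}`));
    }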
Note that for all Twitter API calls, I recommend using JSON since it is a much more lightweight document format than XML. You will typically transfer only about 1/3 to 1/2 as much data over the wire, and I find that (in my experience) Twitter times-out less often when serving JSON.
http://dev.twitter.com/doc/get/followers/ids
Reading this, it looks like it should only be 1 call, since you're just pulling back an XML or JSON page. Unless you have more than 5,000 followers, in which case you would have to make a call for each page of the paginated values.