Is there a way to get ALL the responses for a single day in one transaction for a specific survey? From the API doc, I know there is the /surveys/{id}/responses/bulk option, and I can even send the start_created_at parameter.
But I think the API response has a maximum number of records it can send; in that case, what would the solution be? Paging through the results?
I'm using the .NET API wrapper found at this site, but I can build my own wrapper if necessary.
Reference link to API doc: /Surveys/SURVEY_ID/responses/bulk
Yes, you're right: the /surveys/{id}/responses/bulk endpoint is what you're looking for, and you can use start_created_at and end_created_at to filter the data to a date range.
The SurveyMonkey API doesn't allow a full dump of all your data; it will always be paginated. By default it paginates 50 records at a time, but you can change that with the per_page GET parameter.
The maximum per_page varies by endpoint; for the responses bulk endpoint it is 100. So you'll have to fetch 100 at a time, looping through the pages to get all your data.
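A minimal sketch of that paging loop in Python with requests (the access token is a placeholder, and the loop assumes the v3 bulk endpoint's usual response shape of a data array plus a links.next URL for the following page; verify against the docs):

```python
import requests

API_BASE = "https://api.surveymonkey.com/v3"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token

def fetch_responses_for_day(survey_id, day_start, day_end):
    """Page through /surveys/{id}/responses/bulk, 100 records at a time."""
    url = f"{API_BASE}/surveys/{survey_id}/responses/bulk"
    params = {
        "per_page": 100,                # max page size for this endpoint
        "start_created_at": day_start,  # e.g. "2023-05-01T00:00:00"
        "end_created_at": day_end,      # e.g. "2023-05-02T00:00:00"
    }
    responses = []
    while url:
        page = requests.get(url, headers=HEADERS, params=params).json()
        responses.extend(page.get("data", []))
        # the API hands back a ready-made URL for the next page when one exists
        url = page.get("links", {}).get("next")
        params = None  # the next link already carries the query string
    return responses
```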
One alternative is to use webhooks and set up a subscriber, so that you're notified of new responses in real time and can fetch them one by one. That way you keep your data updated on your side as new responses come in, rather than running a script or endpoint to bulk-dump all your data. But this depends on your use case: if you're building something like an export feature, then you'll have to go the paginated route.
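If you go the webhook route, the subscriber is just an HTTP endpoint that SurveyMonkey POSTs to when a response completes. A rough Flask sketch follows; the payload field names (object_id, resources.survey_id) and the verification ping are assumptions from memory, so check the webhook documentation before relying on them:

```python
from flask import Flask, request
import requests

app = Flask(__name__)
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token

@app.route("/surveymonkey/webhook", methods=["POST", "HEAD"])
def webhook():
    if request.method == "HEAD":
        return "", 200  # SurveyMonkey pings the URL when the subscriber is created
    event = request.get_json()
    # Assumed payload shape: object_id is the response id, resources holds the survey id.
    response_id = event["object_id"]
    survey_id = event["resources"]["survey_id"]
    detail = requests.get(
        f"https://api.surveymonkey.com/v3/surveys/{survey_id}/responses/{response_id}/details",
        headers=HEADERS,
    ).json()
    store(detail)  # persist on your side
    return "", 200

def store(response_detail):
    pass  # e.g. write to your database
```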
I am ideally looking for an API that returns all the messages posted (including replies) since a given timestamp.
conversations.history is supposed to be the one I should be using, but it does not return replies; it only returns the parent message, and only if the timestamp of the parent message satisfies the "oldest" param I supply (i.e. if the oldest supplied in the query is later than the parent's ts but earlier than the replies, neither parent nor replies will be returned).
Hence, I am trying to find out if there is any other API that returns threads based on an "oldest" timestamp, i.e. all the threads which have replies since a given timestamp.
Note: I looked at conversations.replies; it is only useful if you know which thread's replies you are fetching.
Currently there is no API that does what you're trying to do.
The best workaround is manually fetching all thread data in memory and then applying the filter.
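A rough sketch of that workaround with the Python slack_sdk client (the token is a placeholder): walk conversations.history without an oldest bound, pull conversations.replies for each thread parent, and filter by your cutoff. It is expensive by design, which is exactly the complaint in the follow-up below.

```python
from slack_sdk import WebClient

client = WebClient(token="xoxb-your-token")  # placeholder token

def messages_since(channel_id, oldest_ts):
    """Collect parents and replies newer than oldest_ts (a Slack ts string)."""
    collected, cursor = [], None
    while True:
        # Deliberately no `oldest` here: a parent older than the cutoff can still
        # have replies that are newer, which is the gap described in the question.
        history = client.conversations_history(
            channel=channel_id, cursor=cursor, limit=200
        ).data
        for msg in history["messages"]:
            if float(msg["ts"]) >= float(oldest_ts):
                collected.append(msg)
            if msg.get("reply_count"):
                # Thread parent: fetch its replies (also paginated and rate limited,
                # which is why this gets costly on busy channels).
                replies = client.conversations_replies(
                    channel=channel_id, ts=msg["ts"]
                ).data
                collected.extend(
                    r for r in replies["messages"]
                    if r["ts"] != msg["ts"] and float(r["ts"]) >= float(oldest_ts)
                )
        if not history.get("has_more"):
            break
        cursor = history["response_metadata"]["next_cursor"]
    return collected
```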
Did you find an alternative solution to this question? I have the same use case, and when contacting Slack support I received the same response: that we need to use the combination of conversations.history and conversations.replies. That means an intensive and continuously growing number of calls if we need to call conversations.replies for every threaded message just to filter out the timestamps that fit the date range, which would be catastrophic in the long run.
Ideally, Slack needs to update the conversations.replies API to support getting all replies between the oldest and latest parameters, just like conversations.history.
Another alternative I am considering is to change the implementation to use the Events API instead of the Web API, with queueing to store all incoming messages; that ensures every message is captured and stored, and the required filters can then be applied.
I was reading the Microsoft Graph API documentation on batching queries right here and did not find what I need.
Basically, I need to combine two or more requests, but one depends on a value from the other. I know there is a "dependsOn" feature to wait for the other request; it is not what I am looking for.
Request one: GET '/me/joinedTeams';
Request two: GET 'teams/{groupId}/channels';
The "Request one" returns an array of groups and inside these array values there's an id property. Can I batch these two requests using the value of ther first one to get the second?
I am searching a way to do a GET and return all values of one teant like the Microsoft Teams Application does, returning all teams, all chats, etc. Batching requests is the more closer we can get it I think.
Or there is another way to generate the token to https://chatsvcagg.teams.microsoft.com/api/v1/teams/users/me url like Microsoft does?
@Gaspar, multiple API calls can be batched using JSON batching, but a batch cannot handle interdependent calls.
If one request depends on a value from another request's response, you have to make separate calls.
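For what it's worth, a batch is just a POST to /$batch carrying up to 20 sub-requests. Because the channel calls need the team IDs from the first response, the sketch below (Python with requests, token acquisition omitted, endpoint paths as in the question) issues one batch for /me/joinedTeams and then a second batch with one sub-request per team:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {
    "Authorization": "Bearer YOUR_ACCESS_TOKEN",  # placeholder token
    "Content-Type": "application/json",
}

# First call: the teams the signed-in user has joined.
teams = requests.post(
    f"{GRAPH}/$batch",
    headers=HEADERS,
    json={"requests": [{"id": "1", "method": "GET", "url": "/me/joinedTeams"}]},
).json()["responses"][0]["body"]["value"]

# Second batch: one sub-request per team (a batch holds at most 20 requests).
channel_requests = [
    {"id": str(i), "method": "GET", "url": f"/teams/{team['id']}/channels"}
    for i, team in enumerate(teams[:20], start=1)
]
channels = requests.post(
    f"{GRAPH}/$batch", headers=HEADERS, json={"requests": channel_requests}
).json()["responses"]
```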
Until now we used a custom SharePoint on-premises web service which delivered all users (approx. 15,000) including properties like aboutMe, skills, etc. and the direct manager. That job took approx. 15 minutes.
All the data was stored in a Lucene search index.
Now we have to switch to O365.
I am able to get all the desired information from Microsoft Graph but it would take way too long (3 - 5 hours):
Fetch all users via /v1.0/users (with paging)
Iterate through the collection and
get the manager for a given user via /v1.0/users/[user-id]/manager
get properties like aboutMe, skills for a given user via /v1.0/users/[user-id]?$select=aboutMe,skills
Is there any efficient way to do that task?
Ideally, you should just call Microsoft Graph for the data you want on-demand rather than attempting to sync it to your own database.
Assuming you can't do that, you can reduce the time this takes using the /delta endpoint (Get incremental changes for users). When you use a delta token, you will only get back resources that have changed (adds, deletes, edits) since your previous request. So your first pass might take a few hours, but subsequent passes should take seconds.
You can control which properties you're "tracking changes" against using the $select query parameter. For example, if you only care about changes to the displayName then using /v1.0/users/delta?$select=displayName will ensure you only receive changes to that property. From the documentation:
If a $select query parameter is used, the parameter indicates that the client prefers to only track changes on the properties or relationships specified in the $select statement. If a change occurs to a property that is not selected, the resource for which that property changed does not appear in the delta response after a subsequent request.
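A minimal sketch of that delta loop in Python (requests only; token acquisition omitted, and displayName/jobTitle stand in for whatever properties you track): follow @odata.nextLink until the service returns an @odata.deltaLink, persist that link, and replay it on the next run to get only the changes since then.

```python
import requests

HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token

def sync_users(delta_link=None):
    """Full sync on the first run, incremental sync on later runs."""
    url = delta_link or (
        "https://graph.microsoft.com/v1.0/users/delta?$select=displayName,jobTitle"
    )
    changed = []
    while url:
        page = requests.get(url, headers=HEADERS).json()
        changed.extend(page.get("value", []))
        if "@odata.deltaLink" in page:
            # Store this and pass it in next time to receive only the changes.
            return changed, page["@odata.deltaLink"]
        url = page.get("@odata.nextLink")
    return changed, None
```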
Also, consider batching requests to improve your processes' overall performance. Batching allows you to send multiple queries to Microsoft Graph in a single request and get the complete results back in a single response.
If I want to download a list of all of my followers by calling the Twitter API, how many calls is it? Is it one call or is it the number of followers I have?
Thanks!
Sriram
If you just need the IDs of your followers, you can specify:
http://api.twitter.com/1/followers/ids.json?screen_name=yourScreenName&cursor=-1
The documentation for this call is here. This call will return up to 5,000 follower IDs per call, and you'll have to keep track of the cursor value on each call. If you have less than 5,000 followers, you can omit the cursor parameter.
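Cursoring through followers/ids looks roughly like this (a Python sketch against the v1 endpoint cited above; the same next_cursor pattern carries over to later API versions):

```python
import requests

def all_follower_ids(screen_name):
    """Walk the cursor until Twitter returns next_cursor == 0."""
    ids, cursor = [], -1
    while cursor != 0:
        page = requests.get(
            "http://api.twitter.com/1/followers/ids.json",
            params={"screen_name": screen_name, "cursor": cursor},
        ).json()
        ids.extend(page["ids"])
        cursor = page["next_cursor"]  # 0 means there are no more pages
    return ids
```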
If, however, you need to get the full details for all your followers, you will need to make some additional API calls.
I recommend using statuses/followers to fetch the follower profiles since you can request up to 100 profiles per API call.
When using statuses/followers, you just specify which user's followers you wish to fetch. The results are returned in the order that the followers followed the specified user. This method does not require authentication; however, it does use a cursor, so you'll need to manage the cursor ID for each call. Here's an example:
http://api.twitter.com/1/statuses/followers.json?screen_name=yourScreenName&cursor=-1
Alternatively, you can use users/lookup to fetch the follower profiles by specifying a comma-separated list of user IDs. You must authenticate in order to make this request, but you can fetch any user profiles you want -- not just those that are following the specified user. An example call would be:
http://api.twitter.com/1/users/lookup.json?user_id=123123,5235235,456243,4534563
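Since users/lookup accepts at most 100 IDs per call, you chunk the ID list. A sketch continuing from the one above (authentication omitted, as it depends on your OAuth setup):

```python
import requests

def follower_profiles(follower_ids):
    """Fetch full profiles 100 at a time via users/lookup (authenticated call)."""
    profiles = []
    for start in range(0, len(follower_ids), 100):
        chunk = follower_ids[start:start + 100]
        page = requests.get(
            "http://api.twitter.com/1/users/lookup.json",
            params={"user_id": ",".join(str(i) for i in chunk)},
            # auth=...  # this endpoint requires authentication
        ).json()
        profiles.extend(page)
    return profiles
```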
So, if you had 2,000 followers, you would use just one call to obtain all of your follower IDs via followers/ids, if that was all you needed. If you needed the full profiles, you would burn 20 calls using statuses/followers, and you would use 21 calls when alternatively using users/lookup due to the additional call to followers/ids necessary to fetch the IDs.
Note that for all Twitter API calls, I recommend using JSON since it is a much more lightweight format than XML. You will typically transfer only about 1/3 to 1/2 as much data over the wire, and I find that (in my experience) Twitter times out less often when serving JSON.
http://dev.twitter.com/doc/get/followers/ids
Reading this, it looks like it should only be 1 call, since you're just pulling back an XML or JSON page, unless you have more than 5,000 followers, in which case you would have to make a call for each page of the paginated values.
I am building a Ruby on Rails application where I need to consume a REST API to fetch some data in (Atom) feed format. The REST API has a limit on the number of calls made per second as well as per day, and considering the amount of traffic my application may have, I would easily exceed the limit.
The solution would be to cache the REST API response feed locally and expose a local service (Sinatra) that serves the cached feed as it was received from the REST API. And of course a sweeper would periodically refresh the cached feed.
There are 2 problems here.
1) One of the REST APIs is a search API where search results are returned as an Atom feed. The API takes several parameters, including the search query. What should my caching strategy be so that a cached feed can be uniquely identified by its parameters? That is, for example, if I search for, say,
/search?q=Obama&page=3&per_page=25&api_version=4
and I get a feed response for these parameters. How do I cache the feed so that when the exact same parameters are passed in a call some time later, the cached feed is returned, and when the parameters change, a new call is made to the REST API?
2) The other problem is the sweeper. I don't want to sweep a cached feed that is rarely used. That is, the search query Best burgers in Somalia would obviously be requested far less often than, say, Barack Obama. I do have data on how many consumers have subscribed to each feed. The strategy here should be: given the number of subscribers to a search query, sweep its cached feed based on how large that number is. Since the caching needs to happen in the Sinatra application, how would one go about implementing this kind of sweeping strategy? Some code would help.
I am open to any ideas here. I want these mechanisms to be very good on performance. Ideally I would want to do this without database and by pure page caching. However, I am open to possibility of trying other things.
Why would you want to replicate the REST service as a Sinatra app? You could easily just make a model inside your existing Rails app to cache the Atom feeds (storing the whole feed as a string inside it, for example): a CachedFeed model which is updated whenever its updated_at is far enough in the past to warrant renewal.
You could even use static caching for your CachedFeed controller to reduce the strain on your system.
Having the cache inside your Rails app would greatly reduce complexity in terms of deciding when to renew your cache, and would even let you count the requests performed against the REST API you query.
You could have model logic that directs your limited call budget toward the most popular feeds. The search parameters could just be attributes of your model so you can easily find and distinguish the feeds.
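The mechanics are independent of Rails or Sinatra; here is a rough sketch (in Python rather than Ruby, purely for illustration) of the two pieces described above: a cache key derived from the normalized search parameters, and a staleness rule that refreshes popular feeds more often. The subscriber thresholds and TTLs are made-up numbers.

```python
import hashlib
import time

cache = {}  # key -> {"feed": str, "fetched_at": float}

def cache_key(params):
    """Same parameters (in any order) map to the same key."""
    canonical = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return hashlib.sha1(canonical.encode()).hexdigest()

def ttl_for(subscriber_count):
    """Popular feeds go stale quickly; rarely used ones are kept much longer."""
    if subscriber_count > 100:
        return 5 * 60        # refresh every 5 minutes
    if subscriber_count > 10:
        return 60 * 60       # hourly
    return 24 * 60 * 60      # daily; a sweeper could also simply evict these

def get_feed(params, subscriber_count, fetch_from_upstream):
    key = cache_key(params)
    entry = cache.get(key)
    if entry and time.time() - entry["fetched_at"] < ttl_for(subscriber_count):
        return entry["feed"]
    feed = fetch_from_upstream(params)  # the rate-limited REST call
    cache[key] = {"feed": feed, "fetched_at": time.time()}
    return feed
```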