Pagination is not working for calls resource in Twilio

Problem: I want the API to serve all the calls received by any given Twilio number. It works just fine initially, while the call logs number in the 50s, but as the count grows the call logs API becomes very slow, since there are too many call logs to fetch and process on our end.
Expected Result: I want to paginate the call logs to fetch only 20 call logs at a time.
I tried using the List All Calls API:
// Download the helper library from https://www.twilio.com/docs/node/install
// Find your Account SID and Auth Token at twilio.com/console
// and set the environment variables. See http://twil.io/secure
const accountSid = process.env.TWILIO_ACCOUNT_SID;
const authToken = process.env.TWILIO_AUTH_TOKEN;
const client = require('twilio')(accountSid, authToken);
client.calls.list({limit: 3})
.then(calls => calls.forEach(c => console.log(c.sid)));
The expected result contains the following:
"calls": [1.., 2.., 3..],
"end": 1,
"first_page_uri": "/2010-04-01/Accounts/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Calls.json?Status=completed&To=%2B123456789&From=%2B987654321&StartTime=2008-01-02&ParentCallSid=CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&EndTime=2009-01-02&PageSize=2&Page=0",
"next_page_uri": "/2010-04-01/Accounts/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Calls.json?Status=completed&To=%2B123456789&From=%2B987654321&StartTime=2008-01-02&ParentCallSid=CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&EndTime=2009-01-02&PageSize=2&Page=1&PageToken=PACAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa0",
"page": 0,
"page_size": 2,
"previous_page_uri": "/2010-04-01/Accounts/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Calls.json?Status=completed&To=%2B123456789&From=%2B987654321&StartTime=2008-01-02&ParentCallSid=CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&EndTime=2009-01-02&PageSize=2&Page=0",
"start": 0,
"uri": "/2010-04-01/Accounts/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Calls.json?Status=completed&To=%2B123456789&From=%2B987654321&StartTime=2008-01-02&ParentCallSid=CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&EndTime=2009-01-02&PageSize=2&Page=0"
But in my case it only returns the following, even though there are more call logs present:
"calls": []
Now since I am not able to get next_page_uri, I am not able to paginate.
How can I get next_page_uri?

If you are using the latest version of the Twilio Node module then you can get all your calls in a couple of ways:
The library automatically handles paging for you. Collections, such as
calls, have list and each methods that page under the hood. With
both list and each, you can specify the number of records you want
to receive (limit) and the maximum size you want each page fetch to
be (pageSize). The library will then handle the task for you.
list eagerly fetches all records and returns them as a list, whereas
each streams records and lazily retrieves pages of records as you
iterate over the collection. You can also page manually using the
page method.
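For example, here is a minimal sketch of streaming with each, assuming the option names (pageSize, limit) used by recent versions of twilio-node:
// Sketch: stream calls 20 per page, stopping after 100 records total.
// pageSize controls how many records each HTTP request fetches;
// limit is the upper bound on the total records emitted.
client.calls.each({
  pageSize: 20,
  limit: 100
}, call => console.log(call.sid));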
For more information about these methods, see the auto-generated library documentation: https://www.twilio.com/docs/libraries/reference/twilio-node/
If you want to page manually anyway, I'd recommend the page method:
client.calls.page({ pageSize: 10 }, function pageReceived(page) {
  page.instances.forEach(function(call) {
    console.log(call);
  });
  if (page.nextPage) {
    page.nextPage().then(pageReceived);
  }
});

Related

How to paginate (get all) Appwrite using listDocuments?

I can list documents from Appwrite database using: https://appwrite.io/docs/client/database#databaseListDocuments
const sdk = new Appwrite();
sdk
.setEndpoint('https://[HOSTNAME_OR_IP]/v1') // Your API Endpoint
.setProject('5df5acd0d48c2') // Your project ID
;
let promise = sdk.database.listDocuments('[COLLECTION_ID]');
promise.then(function (response) {
  console.log(response); // Success
}, function (error) {
  console.log(error); // Failure
});
This function supports a limit, but it is capped at a maximum of 100. What if I have 500 documents? How can I get all documents using this method?
Include an offset parameter with your request, too (keep reading for details).
Rather than returning an entire database (which could bring all network traffic to a crawl), AppWrite's API is paginated: it returns only a single page of results with each request. The limit parameter indicates the maximum number of items allowed on each page (not the maximum number of items available in total).
Including a limit of 100 tells the API server you want a single page of results with up to 100 documents, and by default AppWrite will always return the first page, unless you also tell it where to start. Fortunately, the listDocuments() function allows you to do just that via its offset parameter, which is the number of documents to skip before the page begins.
To retrieve more items, simply move to the next page:
For example, begin with a limit of 100 (or less) and an offset of 0 (the first page). If the server returns a full page of results (100 items in this case), increase the offset by the limit (here, 100) and make another request; this second request will return documents 100 to 199. Continue increasing the offset and making requests until the server returns fewer results than the limit allows. At that point, you'll know you've reached the end of the documents, without forcing the server to choke on all of the results at once.
This is a common procedure with any system returning paginated results, although the parameter names can vary (eg, "page" vs "offset").
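A minimal sketch of that loop, assuming the positional listDocuments(collectionId, filters, limit, offset) signature used elsewhere in this thread (adapt it to your SDK version):
// Hedged sketch: page through a whole collection with offset pagination.
async function getAllDocuments(collectionId) {
  const limit = 100; // maximum page size the API allows
  let offset = 0;
  let all = [];
  while (true) {
    const response = await sdk.database.listDocuments(collectionId, [], limit, offset);
    all = all.concat(response.documents);
    if (response.documents.length < limit) break; // short page => last page
    offset += limit; // skip the documents we already have
  }
  return all;
}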
As of Appwrite 0.12.0, Appwrite offers two forms of pagination: offset and cursor.
Offset pagination is only supported up to 5000 documents, because the higher the offset, the longer it takes to fetch the data. This is an example of offset-based pagination:
import { Databases } from "appwrite";
const databases = new Databases(client, "[DATABASE_ID]"); // 'client' comes from setup
// Page 1
const page1 = await databases.listDocuments('movies', [], 25, 0);
// Page 2
const page2 = await databases.listDocuments('movies', [], 25, 25);
Cursor based pagination is much more efficient and, as such, does not have a limit. This is an example of cursor based pagination:
import { Databases } from "appwrite";
const databases = new Databases(client, "[DATABASE_ID]"); // 'client' comes from setup
// Page 1
const page1 = await databases.listDocuments('movies', [], 25, 0);
const lastId = page1.documents[page1.documents.length - 1].$id;
// Page 2
const page2 = await databases.listDocuments('movies', [], 25, 0, lastId);
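To walk an entire collection rather than just two pages, the cursor hand-off generalizes to a loop. A hedged sketch, using the same positional signature as the snippets above (adapt to your SDK version):
// Hedged sketch: fetch every document using cursor pagination.
async function getAllDocumentsByCursor(collectionId) {
  const pageSize = 25;
  let cursor; // undefined on the first request
  const all = [];
  while (true) {
    const page = await databases.listDocuments(collectionId, [], pageSize, 0, cursor);
    all.push(...page.documents);
    if (page.documents.length < pageSize) break; // short page => done
    cursor = page.documents[page.documents.length - 1].$id; // continue after the last id
  }
  return all;
}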
For more information on offset vs cursor based pagination, refer to this article. For more information on pagination in Appwrite, refer to the Appwrite Pagination docs.

$batch request resulting in error "Default changeset implementation allows only one operation"

I am making a worklist application using SAPUI5. The problem is that when I create an entry and then create another one right after that, I get the following error:
Default changeset implementation allows only one operation.
I checked the $batch header and I see that there is a MERGE and a POST, with the MERGE updating the previous entry for some reason. Can anyone shed some light? Could it be a backend error and not a UI5 error?
Creating the new entry:
_onMetadataLoaded: function() {
    var oModel = this.getView().getModel();
    var that = this;
    // ...
    oModel.read("/USERS_SET", {
        success: function(oData) {
            var oProperties = {
                Qmnum: "0",
                Otherstuff: "cool"
            };
            that._oContext = that._oView.getModel().createEntry("/ENTITYSET", {
                properties: oProperties
            });
            that.getView().setBindingContext(that._oContext);
            // ...
        }
    });
},

handleSavePress: function(oEvent) {
    // ...
    this.getView().getModel().submitChanges({
        success: function(oData) {
            // ...
        },
        error: function(oError) {
            // ...
        }
    });
},
tl;dr: Apparently you must be using SAP Gateway. If you do not need to process those requests in one transaction, then send them in different changesets. If you do not need batch calls at all, consider turning batch off by supplying your model with "useBatch": false upon instantiation (a sketch follows below). However, if you need to process the requests together in one transaction, then you have to read the details below.
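A hedged sketch of disabling batch on a v2 ODataModel (the service URL below is a placeholder):
// Turn batch off entirely so each request goes out on its own.
var oModel = new sap.ui.model.odata.v2.ODataModel("/sap/opu/odata/sap/ZMY_SRV/", {
  useBatch: false
});

// Or toggle it later on an existing model:
oModel.setUseBatch(false);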
In order to understand the problem you have to understand how the gateway and the batch and changeset requests work.
Batch requests consist of multiple requests bundled together. The purpose is to open only one connection and to group related requests so that the overhead is minimized. Changesets form smaller blocks inside batch requests, where modification requests can be bundled and processed together in order to guarantee an all-or-nothing outcome.
So on the gateway side: there are two relevant classes for your OData service, assuming that you used the SAP Gateway Service Builder (transaction SEGW). There is one ending in ...DPC and one ending in ...DPC_EXT. Don't touch the former; it is regenerated every time you update your service in the Service Builder. The latter is the one we need in this example. You will have to redefine at least two methods:
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_PROCESS
By default, the changeset_begin method only allows processing of changesets whose number of requests equals one. Only that case can be handled automatically, which is why the limitation exists: if there were more requests, the framework could not ensure their processing automatically, as they could have business dependencies on each other.
So make sure to allow bundled (deferred-mode) processing of changesets under the desired conditions:
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN: first call the super->/iwbep/if_mgw_appl_srv_runtime~changeset_begin method in a try-catch block, then loop at it_operation_info to narrow processing down to the selected cases, and set cv_defer_mode only for those cases; otherwise throw a /iwbep/cx_mgw_tech_exception=>changeset_not_supported exception.
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_PROCESS: all requests will be available in it_changeset_request. Make sure to fill the ct_changeset_response table with the responses.
METHOD /iwbep/if_mgw_appl_srv_runtime~changeset_process.

  DATA:
    lv_operation_counter  TYPE i VALUE 0,
    lr_context            TYPE REF TO /iwbep/cl_mgw_request,
    lr_entry_provider     TYPE REF TO /iwbep/if_mgw_entry_provider,
    lr_message_container  TYPE REF TO /iwbep/if_message_container,
    lr_entity_data        TYPE REF TO data,
    ls_context_details    TYPE /iwbep/if_mgw_core_srv_runtime=>ty_s_mgw_request_context,
    ls_changeset_response LIKE LINE OF ct_changeset_response.

  FIELD-SYMBOLS:
    <fs_ls_changeset_request> LIKE LINE OF it_changeset_request.

  LOOP AT it_changeset_request ASSIGNING <fs_ls_changeset_request>.

    lr_context          ?= <fs_ls_changeset_request>-request_context.
    lr_entry_provider    = <fs_ls_changeset_request>-entry_provider.
    lr_message_container = <fs_ls_changeset_request>-msg_container.
    ls_context_details   = lr_context->get_request_details( ).

    CASE ls_context_details-target_entity.
      WHEN 'SomeEntity'.
        "Do the processing here
      WHEN OTHERS.
    ENDCASE.

  ENDLOOP.

ENDMETHOD.
From the error I can tell you must be using SAP GW :-) This happens only for batch requests containing more than one create/delete/update call, and it's related to transaction security ("all or nothing"). What you have to do is redefine the corresponding GW method; I think it was CHANGESET_BEGIN. See https://archive.sap.com/discussions/thread/3562720 for some details (can't offer more for now...).

How to access Cache.addAll() request array

I have a simple service worker that updates its cache when it receives a message, as below. This works fine and the cache is updated. But the next step is to leave one of the files inaccessible and get some notification that it is failing. How can I list the results of the requests?
Looking here:
https://developer.mozilla.org/en-US/docs/Web/API/Cache/addAll
it states:
"The request objects created during retrieval become keys to the stored response operations."
How is this to be interpreted, and how is it accessed in code?
self.addEventListener('message', event => {
  console.log('EventListener message ' + event.data);
  event.waitUntil(
    caches.open('minimal_sw').then(cache => {
      return cache.addAll(fileList).then(function(responseArray) {
        console.log('EventListener responseArray ' + responseArray);
        self.skipWaiting();
      });
    })
  );
});
Earlier this year, the addAll() behavior was changed so that it will reject if any of the underlying requests return responses that do not have a 2xx status code. This new behavior is implemented in the current version of all browsers that support the Cache Storage API.
So, if you want to detect when one (or more) of the requests fail, you can chain a .catch() to the end of your addAll() call.
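If you also need to know which request failed, one workaround (a sketch, not the only option) is to give up addAll()'s atomicity and cache the files individually with add(), collecting the per-URL outcomes yourself:
// Sketch: cache files one by one so failures can be reported per URL.
// Unlike addAll(), this is not atomic; successfully fetched files stay cached.
self.addEventListener('message', event => {
  event.waitUntil(
    caches.open('minimal_sw').then(cache =>
      Promise.all(fileList.map(url =>
        cache.add(url).then(
          () => ({ url: url, ok: true }),
          () => ({ url: url, ok: false })
        )
      ))
    ).then(results => {
      results.filter(r => !r.ok)
             .forEach(r => console.warn('Failed to cache: ' + r.url));
      self.skipWaiting();
    })
  );
});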
But, to answer your question more generally, when you pass an array of strings to addAll(), they're implicitly converted (section 6.4.4.4.1) to Request objects using all of the defaults implied by the Request constructor.
Those Request objects that are created are ephemeral, though, and aren't saved anywhere for use in the subsequent then(). If, for some reason, you really need the actual underlying Request object that was used to make the network request inside of the then(), you can explicitly construct an array of Request objects and pass that to addAll():
var requests = urls.map(url => new Request(url));

caches.open('my-cache').then(cache => {
  return cache.addAll(requests).then(() => {
    // At this point, `cache` will be populated with `Response` objects,
    // and `requests` contains the `Request` objects that were used.
  }).catch(error => console.error(`Oops! ${error}`));
});
Of course, if you have a Cache object and want to get a list of the keys (which correspond to the request URLs), you can do that at any point via the keys() method:
caches.open('my-cache')
  .then(cache => cache.keys())
  .then(keys => console.log(`All the keys: ${keys}`));
There's no need to keep a reference to the original Requests that were used to populate the cache.

How to create a method query that works for an Infinite Scroll loading behavior

I'm creating a page that outputs a list of 1000-3000 records. The current flow is:
User loads a page
jQuery hits the server for all the records and injects them into the page.
The problem here is that for some users those records can take 3+ seconds to return, which is a horrible UX.
What I would like to do is the following:
1. User loads a page
2. jQuery hits the server and gets at most 100 records. Then keeps hitting the server in a loop until the records loaded equal the max records.
Idea here is the user gets to see records quickly and doesn't think something broke.
So it's not really an infinite scroll as I don't care about the scroll position but it seems like a similar flow.
How can I hit the server in a loop with jQuery? And how can I query with an offset and limit in Rails?
Thank you
You can simply query the server for a batch of data over and over again.
There are numerous APIs you can implement. Like:
client: GET request /url/
server: {
  data: [ ... ],
  rest: resturl
}
client: GET request resturl
repeat.
Or you can get the client to pass in parameters saying you want resources 1-100, then 101-200, and do this in a loop.
All the while you will render the data as it comes in.
Your server either needs to let you pass in parameters saying you want records i to i + n, or your server needs to fetch all the data, store it somewhere, and return a chunk of it along with some kind of unique id or URL for requesting the next chunk, repeating until the data runs out.
// pseudo jQuery code
function next(data) {
  render(data.records);
  $.when(getData(data.uniqueId)).then(next);
}

function getData(id) {
  return $.ajax({
    type: "GET",
    url: ...,
    data: {
      // when id is undefined, get the server to load all data
      // when id is defined, get the server to send the subset of data stored at id
      id: id
    },
    ...
  });
}

$.when(getData()).then(next);
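And a sketch of the offset/limit variant; the /records endpoint and its { records: [...], total: n } response shape are hypothetical names, so adapt them to your Rails route:
var LIMIT = 100;

function loadBatch(offset) {
  $.getJSON('/records', { offset: offset, limit: LIMIT }, function (data) {
    render(data.records); // show this batch immediately
    if (offset + data.records.length < data.total) {
      loadBatch(offset + LIMIT); // fetch the next batch
    }
  });
}

loadBatch(0);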

Is there a way to get the twitter share count for a specific URL?

I looked through the API documentation but couldn't find it. It would be nice to grab that number to see how popular a URL is. Engadget uses the Twitter share button on articles, if you're looking for an example. I'm attempting to do this through JavaScript. Any help is appreciated.
You can use the following API endpoint:
http://cdn.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com
(Note that the http://urls.api.twitter.com/ endpoint is not public.)
The endpoint will return a JSON string similar to:
{"count":27438,"url":"http:\/\/stackoverflow.com\/"}
On the client, if you are making a request to get the URL share count for your own domain (the one the script is running from), then an AJAX request will work (e.g. jQuery.getJSON). Otherwise, issue a JSONP request by appending callback=?:
jQuery.getJSON('https://cdn.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com/&callback=?', function (data) {
  jQuery('#so-url-shares').text(data.count);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="so-url-shares">Calculating...</div>
Update:
As of 21 November 2015, this way of getting the Twitter share count no longer works. Read more at: https://blog.twitter.com/2015/hard-decisions-for-a-sustainable-platform
This is not possible anymore as of today; you can read more here:
https://twitter.com/twitterdev/status/667836799897591808
And there are no plans to bring it back, unfortunately.
Upvoting so users do not lose time trying it out.
Update:
It is, however, possible via http://opensharecount.com; they provide a drop-in replacement for the old private JSON URL, based on searches made via the API (so you don't need to do all that work).
It's based on the REST API Search endpoints. It's still a new system, so we should see how it goes. In the future we can expect more systems like it, because there is huge demand.
This is the version for URLs with https (for Brodie):
https://cdn.api.twitter.com/1/urls/count.json?url=YOUR_URL
No.
How do I access the count API to find out how many Tweets my URL has had?
In this early stage of the Tweet Button the count API is private. This means you need to use either our javascript or iframe Tweet Button to be able to render the count. As our systems scale we will look to make the count API public for developers to use.
http://dev.twitter.com/pages/tweet_button_faq#custom-shortener-count
Yes,
https://share.yandex.ru/gpp.xml?url=http://www.web-technology-experts-notes.in
Replace "http://www.web-technology-experts-notes.in" with "your full web page URL".
Check the Sharing count of Facebook, Twitter, LinkedIn and Pinterest
http://www.web-technology-experts-notes.in/2015/04/share-count-and-share-url-of-facebook-twitter-linkedin-and-pininterest.html
Update:
As of 21 November 2015, Twitter has removed the tweet count endpoint.
Read More: https://twitter.com/twitterdev/status/667836799897591808
The accepted answer is the right one. There are other versions of the same endpoint, used internally by Twitter.
For example, the official share button with count uses this one:
https://cdn.syndication.twitter.com/widgets/tweetbutton/count.json?url=[URL]
JSONP is supported by adding &callback=func.
I know this is an old question, but for me the URL http://cdn.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com did not work in AJAX calls due to cross-origin issues.
I solved it using PHP cURL: I made a custom route and called it through AJAX.
/* Other Code */
$options = array(
    CURLOPT_RETURNTRANSFER => true,  // return web page
    CURLOPT_HEADER => false,         // don't return headers
    CURLOPT_FOLLOWLOCATION => true,  // follow redirects
    CURLOPT_MAXREDIRS => 10,         // stop after 10 redirects
    CURLOPT_ENCODING => "",          // handle compressed
    CURLOPT_USERAGENT => "test",     // name of client
    CURLOPT_AUTOREFERER => true,     // set referrer on redirect
    CURLOPT_CONNECTTIMEOUT => 120,   // time-out on connect
    CURLOPT_TIMEOUT => 120,          // time-out on response
);

$url = $_POST["url"]; // whatever you need

if ($url !== "") {
    $curl = curl_init("http://urls.api.twitter.com/1/urls/count.json?url=" . $url);
    curl_setopt_array($curl, $options);
    $result = curl_exec($curl);
    curl_close($curl);
    echo json_encode(json_decode($result)); // whatever response you need
}
It is important to use a POST, because passing the URL in a GET request causes issues.
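The matching client-side call could look like this (the /twitter-count route is a hypothetical name for the custom route above):
// Hedged sketch of the AJAX side; /twitter-count is a placeholder
// for whatever route runs the PHP above.
jQuery.post('/twitter-count', { url: 'http://stackoverflow.com/' }, function (data) {
  console.log('Share count: ' + data.count);
}, 'json');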
Hope it helped.
This answer https://stackoverflow.com/a/8641185/1118419 proposes to use the Topsy API. I am not sure that API is correct:
Twitter response for www.e-conomic.dk:
http://urls.api.twitter.com/1/urls/count.json?url=http://www.e-conomic.dk
shows a count of 10.
Topsy response for www.e-conomic.dk:
http://otter.topsy.com/stats.json?url=http://www.e-conomic.dk
shows a count of 18.
This way you can get it with jQuery. The div with id="twitterCount" will be populated automatically when the page is loaded.
function getTwitterCount(url) {
  var tweets;
  $.getJSON('http://urls.api.twitter.com/1/urls/count.json?url=' + url + '&callback=?', function(data) {
    tweets = data.count;
    $('#twitterCount').html(tweets);
  });
}

var urlBase = 'http://stackoverflow.com';
getTwitterCount(urlBase);
Cheers!
Yes, there is. As long as you do the following:
Issue a JSONP request to one of these URLs:
http://cdn.api.twitter.com/1/urls/count.json?url=[URL_IN_REQUEST]&callback=[YOUR_CALLBACK]
http://urls.api.twitter.com/1/urls/count.json?url=[URL_IN_REQUEST]&callback=[YOUR_CALLBACK]
Make sure that the request you are making is from the same domain as the [URL_IN_REQUEST]. Otherwise, it will not work.
Example:
Making requests from example.com to request the count of example.com/page/1: should work.
Making requests from another-example.com to request the count of example.com/page/1: will NOT work.
I just read the contents into a JSON object via PHP, then parse it out:
<script>
<?php
    $tweet_count_url = 'http://urls.api.twitter.com/1/urls/count.json?url=' . $post_link;
    $tweet_count_open = fopen($tweet_count_url, "r");
    $tweet_count_read = fread($tweet_count_open, 2048);
    fclose($tweet_count_open);
?>
var obj = jQuery.parseJSON('<?=$tweet_count_read;?>');
jQuery("#tweet-count").html("(" + obj.count + ") ");
</script>
Simple enough, and it serves my purposes perfectly.
This JavaScript class will let you fetch share information from Facebook, Twitter and LinkedIn.
Example of usage:
<p>Facebook count: <span id="facebook_count"></span>.</p>
<p>Twitter count: <span id="twitter_count"></span>.</p>
<p>LinkedIn count: <span id="linkedin_count"></span>.</p>
<script type="text/javascript">
var smStats=new SocialMediaStats('https://google.com/'); // Replace with your desired URL
smStats.facebookCount('facebook_count'); // 'facebook_count' refers to the ID of the HTML tag where the result will be placed.
smStats.twitterCount('twitter_count');
smStats.linkedinCount('linkedin_count');
</script>
Download
https://404it.no/js/blog/SocialMediaStats.js
More examples and documentation
Javascript Class For Getting URL Shares On Facebook, Twitter And LinkedIn
