Can I bulk purchase numbers from the Twilio API?

I am trying to purchase multiple numbers using C# with the Twilio API. Currently we can only purchase one number at a time, and it takes a lot of time to purchase 10-15 numbers in a loop.
How can I pass a list of numbers through the API so it takes less time to buy numbers from Twilio?

Twilio evangelist here.
Today there is no way to buy numbers in bulk via the API. You have to make one API request per number that you want to buy.
If the library is not performing fast enough for you, first I'd love to know what kind of performance you are seeing and what you expect so I can work on improving the library.
Second, I'd suggest looking at just using the built-in .NET HTTP client libraries instead of using the Twilio library. The library is pretty general purpose and tuned more for ease of use than performance. If you can use .NET 4 or higher, you can use the TPL to get some good perf gains. I've built samples using the HttpClient library and TPL that resulted in substantially higher requests/sec than the library gives me today.
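For example, here's a rough sketch of that approach: still one POST per number (as the API requires), but with the requests overlapped via Task.WhenAll. The account SID, auth token, and phone numbers below are placeholders, and error handling is omitted:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class BulkPurchase
    {
        static async Task Main()
        {
            // Placeholder credentials -- substitute your own.
            const string accountSid = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
            const string authToken = "your_auth_token";
            var numbersToBuy = new[] { "+15005550006", "+15005550007" };

            using var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic",
                Convert.ToBase64String(Encoding.ASCII.GetBytes($"{accountSid}:{authToken}")));

            var url = $"https://api.twilio.com/2010-04-01/Accounts/{accountSid}/IncomingPhoneNumbers.json";

            // Still one API request per number, but issued concurrently.
            var tasks = numbersToBuy.Select(number =>
                client.PostAsync(url, new FormUrlEncodedContent(
                    new Dictionary<string, string> { ["PhoneNumber"] = number })));

            foreach (var response in await Task.WhenAll(tasks))
                Console.WriteLine(response.StatusCode);
        }
    }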
Hope that helps.

Related

Improving Twilio Speech Recognition of Proper Nouns

I am working on an application that gathers a user's voice input for an IVR. The input we're capturing is a limited set of proper nouns, but even though we have added hints for all of the possible options, we very frequently get back unintelligible results, possibly because our users have various accents from all parts of the world. I'm looking for a way to further improve the speech recognition results beyond just using hints. The available Google adaptive classes will not be useful, as there are none that match the type of input we're gathering. I see that Twilio recently added something called experimental_utterances that may help, but I'm finding little technical documentation on what it does or how to implement it.
Any guidance on how to improve our speech recognition results?
Google does a decent job recognizing proper names, but only asynchronously, not in real time. I've not seen a PaaS tool that can do this in real time. I recommend you change your approach: identify callers based on ANI or account number, or have them record their name for manual transcription.
david
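For reference, the record-their-name fallback might look like this; a minimal sketch using the Twilio C# helper library's TwiML builder (the prompt wording and the five-second limit are illustrative, not from the answer):

    using System;
    using Twilio.TwiML;

    class RecordName
    {
        static void Main()
        {
            // Instead of real-time speech recognition, prompt the caller to
            // record their name, then transcribe the recording afterwards.
            var response = new VoiceResponse();
            response.Say("After the beep, please say your full name.");
            response.Record(maxLength: 5, transcribe: true);
            Console.WriteLine(response.ToString());
        }
    }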

Twilio: loop through 5-6 numbers until somebody picks up

I am on a team that has to be on-call 24/7. Our team is made up of 5-6 members and we each take a week. If the business calls our dedicated on-call number (a Twilio number), I would like it to place outbound calls down a sequential list until somebody on that list answers the phone.
Is this possible using either C# or Python along with Twilio of course? I am not a developer, but if I can be pointed in the right direction I think I can figure it out. It appears Twilio has voicemail detection so I'd imagine I would have to utilize that feature.
Target has an open-source project you can look at that may already meet your needs, powered by Twilio.
https://github.com/target/goalert
GoAlert provides on-call scheduling, automated escalations and notifications (like SMS or voice calls) to automatically engage the right person, the right way, and at the right time.

Can a group of 3 researchers share/pool Twitter API tokens to accelerate/improve data collection on a sentiment analysis project?

Our group is working on a sentiment analysis research project. We are trying to use the Twitter API to collect tweets. Our target dataset involves a lot of query terms and filters. However, since each of us has a developer account, we were wondering if we could pool API access tokens to accelerate the data collection. For example, we would make an app that reads a configuration file containing a list of our access tokens, which the app would use in turn to search for tweets. This app would run on our local computers. Since the app uses our individual access tokens, we believe we are not actually bypassing or changing any Twitter limit, as the record is kept for each access token. Are there any legal/technical problems that may arise from this methodology? Thank you! =D
Here is pseudocode for what we are trying to do:
1. Define a list of search terms such as 'apple', 'banana' and 'oranges' (we have 100 of these search terms; we are okay with the 100 limit per tweet).
2. Define a list of frequent emotional adjectives such as 'happy', 'sad', 'crazy', etc. (we have 100 of these), selected using TF-IDF.
3. Take the product of the search terms and the emotional adjectives: in total we have 10,000 query terms, and we have computed from the rate limit rules that we would need at least 55 runs of 15-minute sessions at 180 requests per 15-minute window. 55 * 15 = 825 minutes, or ~14 hours, to collect this amount of tweets. (This arithmetic is sketched in code after this list.)
4. We were thinking of improving the data collection by pooling access tokens so that we can trim the collection time from 14 hours down to ~4 hours, e.g. by dividing the query terms into subsets and letting each access token work on a subset.
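For concreteness, here is the step-3 arithmetic, plus the hypothetical 3-token split from step 4, as a small sketch:

    using System;

    class RateLimitEstimate
    {
        static void Main()
        {
            const int queries = 100 * 100;   // search terms x adjectives = 10,000
            const int perWindow = 180;       // search requests per 15-min window, per token
            const double windowMinutes = 15.0;

            // Windows needed with a single token, rounded up:
            // 56 windows (the question's 55 ignores the final partial window).
            int windows = (queries + perWindow - 1) / perWindow;
            Console.WriteLine($"1 token:  {windows * windowMinutes} min (~{windows * windowMinutes / 60:F1} h)");

            // Hypothetical pooling: split the queries across 3 tokens.
            int pooled = (queries / 3 + perWindow - 1) / perWindow;
            Console.WriteLine($"3 tokens: {pooled * windowMinutes} min (~{pooled * windowMinutes / 60:F1} h)");
        }
    }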
We are pushing for this because we think it would be efficient if it's possible and permitted, and it might help future research as well.
The question is: are we actually breaking any Twitter rules or policies by doing this? By each of us three sharing an access token, and by creating apps named as clones of the research project, we believe we are also giving something up: the headroom for one more app that we fully control.
I can't find a specific Twitter rule about this so far. Our concern is that we will publish a paper, along with the app we program and use, for documentation. Disclaimer: only the app's source code will be published, not the dataset, because of Twitter's explicit rules about datasets.
This is absolutely not allowed under the Twitter Developer Policy and Agreement.
Twitter developer policy 5a:
Do not do any of the following:
Use a single application API key for multiple use cases or multiple application API keys for the same use case.
Feel free to check with Twitter directly via the developer forums. StackOverflow is not really the best place for this question since it is not specifically a coding question.

About data mining using Twitter data

I plan to write a thesis about using sentiment information to enhance the predictive power of a financial trading model for currencies.
The sentiment data would be Twitter threads that include some keyword, like "EUR.USD". I will then filter for sentiment words to identify the sentiment. Simple idea. Then we try to see whether there is any relation between the degree of sentiment and the movement of EUR.USD.
My big concern is the Twitter data. As we all know, Twitter limits access to historical data: you can only browse back about 5 days. That is not enough, since our strategy is based on daily sentiment.
I noticed that Google has a fantastic timeline feature for Twitter updates: http://www.readwriteweb.com/archives/googles_twitter_timeline_lets_you_explore_the_past.php
But first of all, I am in Switzerland, and it seems I don't have this function; Google is smart enough to identify my location and may block US-only features like this. Secondly, even if I could see the fancy interactive Google timeline control in my Firefox, how could I dig the data out of my query and save it? Does Google supply such an API?
The Google service you mentioned was shut down recently, so you won't be able to use it. (http://www.searchenginejournal.com/google-realtime-shuts-down-as-twitter-deal-expires/31007/)
If you need a longer timespan of data to analyze I see the following options:
pay for historical data :) (https://dev.twitter.com/docs/twitter-data-providers)
if you don't want to pay, you need to fetch tweets containing EUR/USD or whatever else (you could use the streaming API for this, as sketched below) and store them somehow. Run this service for a while (if possible) and you'll have more than just 5 days of data.
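A rough sketch of such a collector against the v1.1 statuses/filter streaming endpoint. The OAuth 1.0a signing the endpoint requires is omitted (the Authorization header below is a placeholder), and tweets are appended raw to a file for offline analysis:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    class TweetCollector
    {
        static async Task Main()
        {
            using var client = new HttpClient();
            // Placeholder: the streaming API requires a signed OAuth 1.0a
            // Authorization header; use an OAuth library to build it.
            client.DefaultRequestHeaders.TryAddWithoutValidation(
                "Authorization", "OAuth <signed header here>");

            var url = "https://stream.twitter.com/1.1/statuses/filter.json?track=EURUSD";
            using var stream = await client.GetStreamAsync(url);
            using var reader = new StreamReader(stream);
            using var log = File.AppendText("tweets.jsonl");

            // The stream delivers one JSON object per line; store them as-is
            // and run the sentiment filtering offline later.
            string line;
            while ((line = await reader.ReadLineAsync()) != null)
            {
                if (line.Length > 0) log.WriteLine(line);
            }
        }
    }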

Reverse geocoding services

I'm working on a project that returns information based on the user's location. I also want to display the user's town in text (no map) so they can change it if it's not accurate.
If things go well I hope this will be more than a small experiment, so can anyone recommend a good reverse geocoding service with the least restrictions? I notice that Google/Yahoo have a limit to the number of daily queries along with other usage terms. I basically need to take latitude and longitude and convert them to a city/town (which I presume cannot be done using the HTML5 Geolocation API).
Geocoda just launched a geocoding and spatial database service and offers up to 1K queries a month free, with paid plans starting at $49 for 25,000 queries/month. SimpleGeo just closed their Context API so you may want to look at Geocoda or other alternatives.
You're correct, the browser geolocation API only provides coordinates.
I use SimpleGeo a lot and recommend them. They offer 10K queries a day free, then 0.25 USD per 1K calls after that. Their Context API is what you're going to want; it pretty much does what it says on the tin. Works server-side and client-side (without requiring you to draw a map, like Google does).
GeoNames can also do this and allows up to 30K "credits" a day; different queries expend different credit amounts. The free service has highly variable performance; the paid service is more consistent. I've used them in the past, but don't use them much anymore because of the difficulty of automatically dealing with their data, which is more "pure" but less meaningful to most people.
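To illustrate the coordinates-to-town step, a minimal sketch against GeoNames' findNearbyPlaceNameJSON endpoint (the coordinates are hard-coded stand-ins for what the browser's geolocation API would give you, and "demo" must be replaced with your own registered GeoNames username):

    using System;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    class ReverseGeocode
    {
        static async Task Main()
        {
            // Coordinates would come from the browser's geolocation API;
            // hard-coded here for illustration (central London).
            double lat = 51.5074, lng = -0.1278;

            // "demo" is a placeholder -- register a free GeoNames username.
            var url = $"http://api.geonames.org/findNearbyPlaceNameJSON?lat={lat}&lng={lng}&username=demo";

            using var client = new HttpClient();
            using var doc = JsonDocument.Parse(await client.GetStringAsync(url));

            // The nearest populated place; its "name" is the town to display.
            var place = doc.RootElement.GetProperty("geonames")[0];
            Console.WriteLine(place.GetProperty("name").GetString());
        }
    }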