I am trying to get tweets that either contain the word "python" or are posted around my city.
This is my code:
StatusListener listener = new MyStatusListener(twitter);
twitterStream.addListener(listener);
FilterQuery query = new FilterQuery();
String[] arr = { "python" };
double lat = 18.5203;
double lon = 73.8567;
double[][] locations = { { lat, lon } }; // for Pune city
query.track(arr);
query.locations(locations);
twitterStream.filter(query);
When I run this, I get the following exception:
Returned by the Streaming API when one or more of the parameters are not suitable for the resource. The track parameter, for example, would throw this error if:
The track keyword is too long or too short.
The bounding box specified is invalid.
No predicates defined for filtered resource, for example, neither track nor follow parameter defined.
Follow userid cannot be read.
Location track items must be given as pairs of comma separated lat/longs: [Ljava.lang.String;@405ef8c2
[Thu Jun 26 19:06:58 GMT+05:30 2014]Parameter not accepted with the role. 406:Returned by the Search API when an invalid format is specified in the request.
Returned by the Streaming API when one or more of the parameters are not suitable for the resource. The track parameter, for example, would throw this error if:
The track keyword is too long or too short.
The bounding box specified is invalid.
No predicates defined for filtered resource, for example, neither track nor follow parameter defined.
Follow userid cannot be read.
Location track items must be given as pairs of comma separated lat/longs: [Ljava.lang.String;@405ef8c2
I get the same message twice. If I remove the locations condition, the code works fine. I am not sure what the issue is here. Can someone help, please?
That was my bad. I was assuming that if I don't provide a pair, Twitter will assume a radius (like it does on the web UI with "near a city"). But it turns out you have to provide a bounding box. Supplying a bounding box worked for me.
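For reference, a rough sketch of the change that worked: the Streaming API expects each bounding box as two longitude/latitude pairs, south-west corner first and north-east corner second. The coordinates below are only an approximate box around Pune, so treat them as placeholders.
double[][] boundingBox = {
    { 73.70, 18.40 },  // south-west corner: longitude, latitude (approximate)
    { 74.05, 18.65 }   // north-east corner: longitude, latitude (approximate)
};
query.track(arr);
query.locations(boundingBox);
twitterStream.filter(query);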
I get a daily email that lists upcoming appointments and their length. The number of appointments varies from day to day.
The emails go like this:
================
Today's Schedule
9:30 AM
3h
Brazilian Blowout
[Client #1 name]
12:30 PM
1h
Women's Cut
[Client 2 name]
6:00 PM
45m
Men's Cut
[Client #3 name]
Projected Revenue
===================
I want to create an event in a Google Calendar for each appointment, and it seems like Zapier might be able to do this, but all the help resources I can find are very general in nature.
Is this do-able on Zapier? If so, any nudges in the right direction would be awesome.
Any thoughts greatly appreciated.
I had some time to kill and I enjoy the odd challenge, so I have put together a solution that should do what you are looking for. I will break it down step by step.
TEMPLATE
Zapier Trigger - Step 1
Type: Trigger
Module: Gmail
Criteria: User Dependent
Comments: For the trigger zap you will want to use a Gmail-specific trigger, something to the effect of "execute trigger on emails titled 'xyz'", or "emails labeled 'xyz'" if you set up a filter in your inbox.
Zapier Action - Step 2
Type: Action
Module: Code (Python 3)
Comments: The Code step offered by Zapier executes whatever (properly written) code you place in its container. It is especially handy because it lets you use data from previous steps through a dictionary variable named input_data. Zapier offers the Code module in two languages, JavaScript and Python; as I am most familiar with Python, my solution for this step is written in Python, and I append the code to the end of this answer. Using the body of the email retrieved in step 1, we can run some string manipulation and datetime conversions to break the email into its component parts and pass those on to the following action step: Create Calendar Event.
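For context, here is a minimal, hypothetical illustration of how the Code step sees the mapped data. The key name email_body is just whatever field name you map in the step's setup, and input_data is stubbed here only so the sketch runs standalone.
# In a real Zapier Code step, `input_data` is provided by Zapier from the
# fields you map in the step's setup; it is stubbed here for illustration.
input_data = {"email_body": "Today's Schedule\n\n9:30 AM\n\n3h\n\nBrazilian Blowout\n\n[Client #1 name]"}

email_body = input_data.get("email_body")   # the raw email body as one string
print(email_body.splitlines())              # quick sanity check of the line breaks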
Zapier Action - Step 3
Type: Action
Module: Google Calendar - Create Event
Comments: Using the data output by the previous code step, we can fill out the required fields for creating a new appointment.
PYTHON CODE
from datetime import timedelta, date, datetime

'''
Goal: Extract individual appointment details from variable length email
Steps:
    Remove all extraneous and new line characters.
    Isolate each individual appointment and group its relevant details.
    Derive appointment start and end times using appointment time and duration.
    Return all appointments in a list.
'''

def format_appt_times(appt_dict):
    appt_start_str = appt_dict.get("appt_start")
    appt_dur_str = appt_dict.get("appt_length")
    # isolate hour and minutes from appointment time
    appt_s_hour = int(appt_start_str[:appt_start_str.find(":")])
    if ("pm" in appt_start_str.lower()):
        appt_s_hour = 12 if appt_s_hour + 12 >= 24 else appt_s_hour + 12
    appt_s_min = int(appt_start_str[appt_start_str.find(":") + 1 :
                                    appt_start_str.find(":") + 3])
    # isolate hour and minutes from duration time
    appt_d_hour = 0
    appt_d_min = 0
    if ("h" in appt_dur_str):
        appt_d_hour = int(appt_dur_str[:appt_dur_str.find("h")])
    if ("m" in appt_dur_str):
        appt_d_min = int(appt_dur_str[appt_dur_str.find("m") - 2 : appt_dur_str.find("m")])
    # NOTE: adjust timedelta hours depending on your relation to UTC
    # create datetime objects for appointment start and end times
    time_zone = timedelta(hours=0)
    tdy = date.today() - time_zone
    duration = timedelta(hours=appt_d_hour, minutes=appt_d_min)
    appt_start_dto = datetime(year=tdy.year,
                              month=tdy.month,
                              day=tdy.day,
                              hour=appt_s_hour,
                              minute=appt_s_min)
    appt_end_dto = appt_start_dto + duration
    # return properly formatted datetime as string for use in next step.
    return (appt_start_dto.strftime("%Y-%m-%dT%H:%M"),
            appt_end_dto.strftime("%Y-%m-%dT%H:%M"))

def partition_list(target, part_size):
    for data in range(0, len(target), part_size):
        yield target[data : data + part_size]

def main():
    # Remove all extraneous and new line characters.
    email_body = input_data.get("email_body")
    head, delin, *email_body, delin, foot = [text for text in email_body.splitlines() if text != ""]
    appointment_list = []
    # Isolate each individual appointment and group its relevant details.
    for text in partition_list(email_body, 4):
        template = {
            "appt_start" : text[0],
            "appt_end" : None,
            "appt_length" : text[1],
            "appt_title" : text[2],
            "appt_client" : text[3]
        }
        appointment_list.append(template)
    for appt in appointment_list:
        appt["appt_start"], appt["appt_end"] = format_appt_times(appt)
    return appointment_list

return main()
I am not sure of your familiarity with Python, or programming more generally, but the comments in the code explain what each section is doing. If you have any specific questions regarding aspects of the code, let me know. Assuming your email template does not change, this setup should work exactly as needed. Let me know if anything is unclear.
UPDATE
I thought it best to address your follow-up question in the original answer, in case anyone else has a similar one:
"explaining how this code is removing the extra characters"
There is actually a fair bit going on in the first line, so I will do my best to break it down, and provide resources where necessary.
The code in question:
head,delin,*email_body,delin,foot = [text for text in email_body.splitlines() if text != ""]
First step here was to break the text into manageable chunks. I did so with email_body.splitlines(), which breaks the string into a list at each line boundary (if you need to split on a custom delimiter, use str.split() instead).
If we were to inspect the list at this moment its contents would be something of the following:
["================", "", "Today's Schedule", "", "9:30 AM", "", "3h", ..., "[Client #3 name]", "", "Projected Revenue", "", "==================="]
You will notice there is a fair amount of information in there that we really don't want.
First let's look at the "" elements. These are left over from the blank lines between each line of text, which, even though they are blank, still end with newline characters. There are a number of ways you could address this within Python. We could simply write a for-loop to copy every element that is not "" to a new list.
To me this felt like additional work, and besides, Python offers list comprehension for just such a scenario. I won't go too deep into list comprehension as there is a lot that can be said about it, and in more insightful ways than I could muster, but it essentially allows you to provide logic against a set of 'data' to form a list. In this case, I specifically wanted to filter out the "" elements returned from the call to splitlines().
And so you will see I address this with the following line:
[text for text in email_body.splitlines() if text != ""]
With that we have the list above, less the "" elements. Now we must turn our attention to the more 'dynamic' garbage strings. Again there are a number of ways to do this. A not particularly flexible option would be to store the strings we want to remove in variables, something to the effect of:
garb_1 = "==================="
garb_2 = "Projected Revenue"
garb_3 = ...
and once again filter the list with yet another for-loop. I instead chose to leverage Python's list unpacking idiom, which allows us to 'unpack' lists (and tuples) into variables. As an example:
one, two, three = ["a", "b", "c"]
I'm sure you can guess what is happening above: as long as we provide the same number of variables as there are elements in the list, we can 'unpack' it in this fashion. But wait! In our case we don't know how long the list is going to be, as it is entirely dependent on the number of appointments you have on any given day. This is where star unpacking comes in. Using my code as the example:
head,delin,*email_body,delin,foot = [text for text in email_body.splitlines() if text != ""]
The *, in plain English, is saying "I don't know how many elements to expect, just give me all of them in a list". As we know there will always be two lines of garbage at the beginning and end of the email, we can assign them to throwaway variables and capture everything in between with our variable-length *email_body container.
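As a short standalone illustration of that variable-length capture:
first, *middle, last = ["head", "b", "c", "d", "tail"]
# first == "head", last == "tail", middle == ["b", "c", "d"]
# middle absorbs however many elements are left over, so the same pattern
# works for lists of any length (with at least two elements).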
With all of this complete we now have a list containing only the data we are looking to capture. If, as you say, there are additional lines of garbage before or after the email body, you can simply add more throwaway variables to account for them.
Once again feel free to ask any follow up questions.
Michael
Resources
List Comprehension
Star Unpacking
Google Places API for iOS version: 2.2.30010.0
Code:
let filter = GMSAutocompleteFilter()
filter.type = .address
filter.country = "us"
return filter
When searching for, for example, "Montrose" with a filter of type address and country "us", the country filter works, but the type filter still returns results of type route. Is this the intended behavior?
The Place Autocomplete docs specify:
address instructs the Place Autocomplete service to return only geocoding results with a precise address. Generally, you use this request when you know the user will be looking for a fully specified address.
Maybe I'm misunderstanding what a precise address is, but it seems like the query should only return results with a building number, e.g. 22 Montrose Ave.
Is it possible to return only places that have building numbers?
You just need to remove filter.type = .address
I have a list of international phone numbers and a list of country calling codes.
I would like to identify the country from the numbers, but I can't find a fast and elegant way to do it.
Any idea? The only one I have is a hardcoded check (e.g. "look at the first digit, then at the second digit: if it's X then check the third digit. If the second digit is Y then the country is Foo", etc.).
I'm using PHP and a DB (MySQL) for the lists, but I think that any pseudocode will help.
Alternatively, you could use a tool like Twilio Lookup.
The CountryCode property is always returned when you make an API request with Lookup.
https://www.twilio.com/docs/api/lookups#lookups-instance-properties
[Disclosure: I work for Twilio]
I was after something similar to this, but I also wanted to determine the region/state, if available. In the end I hacked up something based on a tree of the leading digits (spurred on by the description on Wikipedia).
My implementation is available as a gist.
I'm currently using an implementation of Google's libphonenumber in Node, which works fairly well. I suppose you could try a PHP implementation, e.g. libphonenumber-for-php.
The hard-coded check can be turned into a decision tree generated automatically from the list of calling codes. Each node of the tree defines the 'current' character and either the list of possible following characters (child nodes) or a country, in case it is a terminal node. The root node is for the leading '+' sign.
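As an illustrative sketch of that idea in Python (the calling-code table below is a tiny hypothetical sample, not the full list):
# Build a digit-by-digit decision tree (a trie) from calling codes, then walk
# it with a longest-match rule to identify the country of a phone number.
calling_codes = {"1": "NANP (US/Canada)", "33": "France", "39": "Italy", "44": "United Kingdom"}

def build_tree(codes):
    root = {}
    for code, country in codes.items():
        node = root
        for digit in code:
            node = node.setdefault(digit, {})
        node["country"] = country  # a terminal node carries the country name
    return root

def lookup(tree, phone):
    node, found = tree, None
    for digit in phone.lstrip("+"):
        if digit not in node:
            break
        node = node[digit]
        found = node.get("country", found)
    return found  # None means no known calling code matched

tree = build_tree(calling_codes)
print(lookup(tree, "+390612345678"))  # Italy
print(lookup(tree, "+15145551234"))   # NANP (US/Canada)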
The challenge here is that some countries share the same phone country code. E.g. both Canada and the US have phone numbers starting with +1.
I'm using https://github.com/giggsey/libphonenumber-for-php as follows:
/**
 * Get country
 * @param string $phone
 * @param string $defaultCountry
 * @return string Country code, e.g. 'CA', 'US', 'DE', ...
 */
public static function getCountry($phone, $defaultCountry) {
    try {
        $PhoneNumberUtil = \libphonenumber\PhoneNumberUtil::getInstance();
        $PhoneNumber = $PhoneNumberUtil->parse($phone, $defaultCountry);
        $country = $PhoneNumberUtil->getRegionCodeForNumber($PhoneNumber);
        return $country;
    } catch (\libphonenumber\NumberParseException $e) {
    }
    return $defaultCountry;
}
You can easily do a simple lookup starting with the first digit, then the second, and so on until you find a match. This will work correctly because no calling code is a prefix of another code, i.e. the international calling codes form a "prefix code" (the phone system relies on this property).
I'm not any good at PHP, so here is a simple Python implementation; hopefully it is easy to follow:
>>> phone_numbers = ["+12345", "+23456", "+34567", "+45678"]
>>> country_codes = { "+1": "USA", "+234": "Nigeria", "+34" : "Spain" }
>>> for number in phone_numbers:
...     for i in [2, 3, 4]:
...         if number[:i] in country_codes:
...             print country_codes[number[:i]]
...             break
...     else:
...         print "Unknown"
...
USA
Nigeria
Spain
Unknown
Essentially you have an associative array between prefixes and countries (which I assume you can easily generate from that Wikipedia article). You try looking up the first digit of the phone number in the associative array. If it's not in the array, you try the first two digits, then the first three. If there is no match after three digits, then the number doesn't start with a valid international calling code.
I'm using Riak to store JSON documents right now, and I want to sort them based on some attribute. Let's say there's a key, i.e.
{
    "someAttribute": "whatever",
    "order": 1
}
so I want to sort the documents based on "order".
I am currently retrieving the documents from Riak with the Erlang interface. I can retrieve a document back as a string, but I don't really know what to do after that. I'm thinking the map function just returns the JSON document itself, and in the reduce function I'd check whether the item I'm looking at has a higher "order" than the head of the rest of the list, and if so append it to the beginning, then return lists:reverse.
Despite my ideas above I've had zero results after almost an entire day; I'm so confused by the Erlang interface in Riak. Can someone provide insight on how to write this map/reduce function, or just how to parse the JSON document?
As far as I know, you do not have access to the input list in Map. You emit a document from Map as a one-element list.
Inputs (all the docs to handle, as {Bucket, Key}) -> Map (handles a single doc) -> Reduce (the whole list emitted from Map).
Maps are executed per document on many nodes, whereas Reduce is done once on the so-called coordinator node (the one where the query was called).
Solution:
Define Inputs (as a list or a bucket)
Retrieve the value in Map and emit the whole doc or {Id, Val_to_sort_by}
Sort in Reduce (using regular lists:keysort); see the sketch below
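A rough, untested sketch of what named Erlang phase functions could look like, assuming the stored values are JSON, that mochijson2 (bundled with Riak) is available, and that the compiled module is on the code path of every Riak node:
-module(order_sort).
-export([map_order/3, reduce_sort/2]).

%% Map phase: runs once per object; emits a one-element list of {Order, Fields}.
map_order(RiakObject, _KeyData, _Arg) ->
    {struct, Fields} = mochijson2:decode(riak_object:get_value(RiakObject)),
    Order = proplists:get_value(<<"order">>, Fields),
    [{Order, Fields}].

%% Reduce phase: receives the combined list emitted by the map phase
%% (possibly in several batches) and sorts it by the first tuple element.
reduce_sort(Values, _Arg) ->
    lists:keysort(1, Values).
If I remember the phase spec format correctly, these would then be referenced from the client with the usual {modfun, Module, Function} form, e.g. {map, {modfun, order_sort, map_order}, none, false} and {reduce, {modfun, order_sort, reduce_sort}, none, true}.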
This is not a map reduce solution but you should check out Riak Search.
so i "solved" the problem using javascript, still can't do it using erlang.
here is my query
{"inputs":"test",
"query":[{"map":{"language":"javascript",
"source":"function(value, keyData, arg){ var data = Riak.mapValuesJson(value)[0]; var obj = {}; obj[data.order] = data; return [ obj ];}"}},
{"reduce":{"language":"javascript",
"source":"function(values, arg){ return [ values.reduce(function(acc, item){ for(var order in item){ acc[order] = item[order]; } return acc; }) ];}",
"keep":true}}
]
}
So in the map phase, all I do is create a new object, obj, with the key as the order and the value as the data itself. So visually, obj looks like this:
{"1":{"firstName":"John","order":1}}
In the reduce phase, I'm just putting everything into the accumulator, so basically that is the sort if you think about it, because when you're done everything will be in order for you. I put in 2 JSON documents for testing: one is above, the other is just firstName: Billie, order 2. Here is my result for the query above:
[{"1":{"firstName":"John","order":1},"2":{"firstName":"Billie","order":2}}]
So it works! But I still need to do this in Erlang. Any insights?
I have a Rails application which deals with some Google Maps related stuff. The problem is, I have a table which contains latitude and longitude columns. The column types are "float". On some occasions I need to generate a query like the following:
clients.find_all_by_lat_and_lng(latvalue, lngvalue)
I gave correct, existing lat and lng values to fetch the record, but I only get an empty result.
I tried different dynamic finders like find_by_lat and find_by_lng; I get only empty results from the query, but the data exists in the table.
I guess the Float column type is the culprit here; I don't know how to overcome this and get the values. Can someone advise me on this?
NOTE: if I run the same query on Windows I get the values. My box is Ubuntu, and here I can't get the values.
clients.find(:all, :conditions => ["lat = ? and lng = ?",latvalue, lngvalue])
Try this. Is your model name "clients" or Client?
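If exact equality still returns nothing, it is likely a floating-point precision issue rather than the finder itself. Here is a hedged sketch of a workaround that compares within a small tolerance instead of requiring exact equality (Client, latvalue and lngvalue are taken from the question; the delta value is an arbitrary choice):
# Match rows whose lat/lng fall within a small tolerance of the given values,
# instead of relying on exact FLOAT equality.
delta = 0.000001
Client.find(:all, :conditions => [
  "lat BETWEEN ? AND ? AND lng BETWEEN ? AND ?",
  latvalue - delta, latvalue + delta,
  lngvalue - delta, lngvalue + delta
])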