Find County name for a Lat/Long [closed] - geolocation

I'm trying to find a webservice that will allow me to get a County name (not Country) for a specific Lat/Long. I would be performing the lookup within a server application (likely a Java application). It doesn't have to be a webservice if there is some library out there, I suppose, but I would like up-to-date information. I have looked around quite a bit for an API that supports this, but so far I haven't been able to find one that works. I have tried the Yahoo APIs like so:
http://where.yahooapis.com/geocode?q=39.76144296429947,%20-104.8011589050293
But it doesn't populate the address information. I've tried some of the "flags" options there too, to no avail.
I've also looked at Google's APIs, but I've read in multiple places that they don't populate the County.
So does anyone know of any APIs that will take a Lat/Long and return the County associated with that location? And if you have any examples, that would be great.
I'd also like to know which APIs allow for use in a commercial application. A lot of the data I've found says that you can't use the data to make money. I might be reading those wrong, but I'm looking to build a service that I'd likely charge for that would use this data. So I'd need options: maybe free services while I'm exploring, and paid services down the road.

Just for completeness, I found another API to get this data that is quite simple, so I thought I'd share.
https://geo.fcc.gov/api/census/
The FCC provides a Block API for exactly this problem, and it uses census data to perform the lookup.
Their usage limit policy (from developer@fcc.gov) is:
We do not have any usage limits for the block conversion API, but we do ask that you try to spread out your requests over time if you can.
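For anyone wanting to wire this up, here's a minimal Java sketch of calling the Block API; the endpoint path and the County.name field match the API's documented JSON response, but verify both against the current docs:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FccCountyLookup {
    public static void main(String[] args) throws Exception {
        double lat = 39.76144296429947, lon = -104.8011589050293;
        // Block conversion endpoint; format=json returns Block, County, and State objects
        String url = "https://geo.fcc.gov/api/census/block/find?latitude=" + lat
                + "&longitude=" + lon + "&format=json";
        HttpClient client = HttpClient.newHttpClient();
        String body = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString()).body();
        // Crude extraction of County.name; use a real JSON parser (Jackson, Gson) in practice
        Matcher m = Pattern.compile("\"County\"[^}]*\"name\"\\s*:\\s*\"([^\"]+)\"").matcher(body);
        System.out.println(m.find() ? m.group(1) : "no county found");
    }
}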

Google does populate the county for your example,
http://maps.googleapis.com/maps/api/geocode/json?latlng=39.76144296429947,-104.8011589050293&sensor=false
In the response, look under the key address_components, which contains this object representing Adams County:
{
  "long_name": "Adams",
  "short_name": "Adams",
  "types": [
    "administrative_area_level_2",
    "political"
  ]
}
Here's the relevant passage from the Geocoding API's docs:
administrative_area_level_2 indicates a second-order civil entity below the country level. Within the United States, these administrative levels are counties. Not all nations exhibit these administrative levels.
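To make that concrete, here's a rough Java sketch that fetches the geocode response and pulls out the administrative_area_level_2 component (the regex is a stand-in for a proper JSON parser, and note that current versions of the API require a key parameter):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GoogleCountyLookup {
    public static void main(String[] args) throws Exception {
        // The URL from the answer above; newer API versions need &key=YOUR_API_KEY instead of sensor
        String url = "https://maps.googleapis.com/maps/api/geocode/json"
                + "?latlng=39.76144296429947,-104.8011589050293&key=YOUR_API_KEY";
        HttpClient client = HttpClient.newHttpClient();
        String body = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString()).body();
        // Find the address component whose types include administrative_area_level_2;
        // [^}]* keeps the match inside a single component object
        Matcher m = Pattern.compile(
                "\"long_name\"\\s*:\\s*\"([^\"]+)\"[^}]*administrative_area_level_2").matcher(body);
        System.out.println(m.find() ? "County: " + m.group(1) : "no county in response");
    }
}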

Another option:
1. Download the cities database from http://download.geonames.org/export/dump/
2. Add each city as a lat/long -> county mapping (in the GeoNames dump, the admin2 code is the county for US entries) to a spatial index such as an R-Tree (some DBs also have this functionality)
3. Use nearest-neighbour search to find the county corresponding to the closest human settlement for any given point (see the sketch below)
Advantages:
- Does not depend on an external server being available
- Much faster (easily does thousands of lookups per second)
Disadvantages:
- May give wrong answers close to borders, especially in sparsely populated areas
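A minimal Java sketch of the idea, assuming the column layout from the GeoNames readme (latitude in column 4, longitude in column 5, admin2 in column 11, zero-indexed) and using a brute-force haversine scan in place of the R-Tree; swap in a real spatial index for production volumes:
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class NearestCityLookup {
    record City(String name, double lat, double lon, String admin2) {}

    private final List<City> cities = new ArrayList<>();

    // Load a GeoNames dump (e.g. cities1000.txt): tab-separated, 19 columns per line
    public NearestCityLookup(String geonamesFile) throws IOException {
        try (BufferedReader r = Files.newBufferedReader(Paths.get(geonamesFile))) {
            String line;
            while ((line = r.readLine()) != null) {
                String[] f = line.split("\t");
                cities.add(new City(f[1], Double.parseDouble(f[4]),
                        Double.parseDouble(f[5]), f[11])); // admin2 = county in the US
            }
        }
    }

    // Brute-force nearest neighbour; an R-Tree or k-d tree replaces this loop at scale
    public City nearest(double lat, double lon) {
        City best = null;
        double bestDist = Double.MAX_VALUE;
        for (City c : cities) {
            double d = haversineKm(lat, lon, c.lat(), c.lon());
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }

    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1), dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 6371 * 2 * Math.asin(Math.sqrt(a));
    }
}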

You may want to have a look at TIGER data and see if it has polygons containing the county name in an attribute. If it does, the Java GeoTools API lets you work with this data. You will be performing point-in-polygon queries against the county polygons, followed by a feature attribute look-up.
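If you go this route, the lookup might look roughly like the sketch below. The shapefile name and the NAME attribute follow the TIGER/Line county files, but treat the imports and factory calls as assumptions to check against your GeoTools version (package names for JTS and the factories have moved between releases):
import java.io.File;
import org.geotools.data.FileDataStore;
import org.geotools.data.FileDataStoreFinder;
import org.geotools.data.simple.SimpleFeatureCollection;
import org.geotools.data.simple.SimpleFeatureIterator;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.factory.CommonFactoryFinder;
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.Point;
import org.opengis.filter.Filter;
import org.opengis.filter.FilterFactory2;

public class TigerCountyLookup {
    public static void main(String[] args) throws Exception {
        // TIGER/Line county shapefile, e.g. tl_2021_us_county.shp
        FileDataStore store =
                FileDataStoreFinder.getDataStore(new File("tl_2021_us_county.shp"));
        SimpleFeatureSource source = store.getFeatureSource();

        // Build the query point; shapefile coordinates are in lon/lat order
        GeometryFactory gf = new GeometryFactory();
        Point p = gf.createPoint(new Coordinate(-104.8011589050293, 39.76144296429947));

        // Point-in-polygon filter, then read the county name attribute
        FilterFactory2 ff = CommonFactoryFinder.getFilterFactory2();
        Filter filter = ff.contains(ff.property("the_geom"), ff.literal(p));
        SimpleFeatureCollection matches = source.getFeatures(filter);
        try (SimpleFeatureIterator it = matches.features()) {
            while (it.hasNext()) {
                System.out.println(it.next().getAttribute("NAME"));
            }
        }
        store.dispose();
    }
}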

Maybe this is a great solution. It returns JSON, and I always use it in my projects.
http://maps.google.com/maps/geo?ll=10.345561,123.896932
Then simply extract the information using PHP:
// Fetch the geocoding response for the given lat/long and decode the JSON payload
$x = file_get_contents("http://maps.google.com/maps/geo?ll=10.345561,123.896932");
$j_decodex = json_decode($x);
print_r($j_decodex);

Related

Human annotation tool for corpora in NLP [closed]

I am trying to build my own training corpus for Named Entity Recognition, but I don't know if there is already an existing tool for this or if I have to implement one myself.
Basically, what I need to do is take a corpus and manually tag it word by word, which is pretty tedious, but it has to be done.
Can anyone tell me if there is already an existing one and where to get it?
I had a good experience working with BRAT.
GATE is also a very complex annotation tool, with a steeper learning curve.
We had a nice experience using DataTurks. They provide a nice, intuitive UI that lets you add collaborators, and it offers insights into your data, a leaderboard for annotators, and some other funky features.
https://dataturks.com
For online annotation of a text or HTML corpus of relatively short documents, I also recommend BRAT. You will have to go under the hood of the Python web application if you want to do anything custom. It also failed to work for me on large HTML documents (100 or so pages).
I have also used stand-alone apps:
- Protege + Knowtator: a bit cumbersome to set up and use, but it works;
- GATE: also cumbersome, and it somewhat works. Back up your annotations at regular intervals, as you might get surprised by a stacktrace that wipes or corrupts your annotated corpus (which is just serialized Java objects).
If you are dealing with PDF documents, we built a web-based PDF Annotation Tool: NOTA. It accepts anything printed to PDF, including scans. We do commercial OCR on our end to recover text from images. There is a REST API to create color-coded annotation schemas and pre-populate documents with annotations, as well as a REST API for exporting formatted text and annotation offsets. There is also a JS API you can use to customize any annotation workflows, add metadata to annotations, etc. Relationships are not supported out of the box. Large documents, 200+ pages are supported. Email us and we can give you an API key to try it out. Details and documentation links can be found here. It is free for small research projects.
I am a co-developer of the web-based text annotation tool tagtog.net.
There is nothing to install, and you can define the types of entities you want to annotate. Additionally, you can annotate relationships, document labels, and much more. You can upload your documents in many different formats, including PDF or markdown, and annotate together with your team collaboratively. We have put great care into making the interface easy and beautiful.
You can start right away with a free account. Also, I would be happy to help you with any doubt or issue you may have; just ping me or write us an email at the address shown on the website, tagtog.net.
Our annotation tool Prodigy is very scriptable, and is designed for active learning. It integrates especially well with our NLP library spaCy.
We've paid particular attention to the Named Entity Recognition (NER) annotation workflows, as entity recognition can otherwise be very slow. I have a tutorial video on this:
https://www.youtube.com/watch?v=l4scwf8KeIA
There is this tool called Dataturks that is super simple to use: a fully online NLP annotation tool. I can even easily push my teammates to complete datasets for our projects.
Try TagEditor.
It is a desktop application designed to annotate text for training with the spaCy library.
You can tag named entities, dependencies, parts of speech, and text categories, and export a JSON file.

App Store keywords: using "trademark" keywords [closed]

I understand the App Store rules:
3.9: Developers are responsible for assigning appropriate keywords for their apps. Inappropriate keywords may be changed/deleted by Apple.
8.5: Use of protected 3rd-party material (trademarks, copyrights, trade secrets, otherwise proprietary content) requires a documented rights check, which must be provided upon request.
I want to use the name of a competitor's app as a keyword for my app, but the competitor is complaining that I am doing so (though many other people are using it too).
For instance (not the real case here): my app is a photo app, and I have put "instagram" as a keyword.
I believe it's fair to use a competitor's name as a keyword as long as I don't outrank it. Apple also seems to suggest using trademarked names as "descriptors".
What would you suggest?
You are bound to US rules and those of your own country. Assuming the rules of the USA and the UK (I am from the UK, which is why I include it here), you can use their trademark as long as you are only using it to draw a comparison or to describe your product, which you are doing here. You cannot use it in such a way that it causes confusion between your product and theirs. By the sound of things, you should be okay.
The only thing to be worried about is that if the other company complains, there will be a period while the complaint is reviewed; during that time your app may be blocked from the App Store, so be prepared to wait while that is resolved.
After writing this I did a quick Google search and found the link below; it does a better job of explaining this than I have. Be aware that the information below and in my own answer may not apply in your country. I am not a lawyer, only an app developer myself.
http://www.insidecounsel.com/2011/11/08/ip-using-a-competitors-trademark-in-marketing
I don't think it's fair to use a competitor's name, especially trademarks, copyrights, and the like. Apple enforces this to protect your competitor's interests. And the fact that many other people succeed in using it doesn't imply that you can.

Geolocation: Is it possible to get latitude and longitude from an address and store them locally in my database?

I want to be able to run queries locally comparing the latitude and longitude of locations, so I can query the addresses I've captured by distance.
I found a free database that has this information for zip codes, but I want it for more specific addresses. I've looked at Google's geolocation service, and it appears it's against the TOS to store these values in my database or to use them for anything other than doing stuff with Google Maps. (If somebody's looked deeper into this and I'm incorrect, let me know.)
Am I likely to find any (free or pay) service that will let me store these lat/lon values locally? The number of addresses I need is currently pretty small but if my site becomes popular it could expand quite a bit over time to a large number. I just need to get the coordinates of each address entered once though.
This question hasn't received enough attention...
You're correct -- it can't be done with Google's service and still conform to the TOS. Cheers to you for honestly seeking to comply with the TOS.
I work at a company called SmartyStreets, where we process and verify addresses -- and geocode them, too. Google's terms don't allow you to store the data returned from the API, and there are pretty strict usage limits before they throttle or cut off your access.
Screen scraping presents many challenges and problems which are both technical and ethical, and I don't suppose I'll get into them here. The Microsoft library linked to by Giorgio is for .NET only.
If you're still serious about doing this, we have a service called LiveAddress which is accessible from any platform or language. It's a RESTful API which can be called using GET or POST for example, and the output is JSON which is easy to parse in pretty much every common language/platform.
Our terms allow you to store the data you collect as long as you don't re-manufacture our product or build your own database in an attempt to duplicate ours (or something along those lines). For what you've described, though, it shouldn't be a problem.
Let me know if you have further questions about address geocoding; I'll be happy to help.
By the way, there's some sample code at our GitHub repo: https://github.com/smartystreets/LiveAddressSamples
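To give a feel for the kind of call involved, here's a Java sketch of an authenticated GET against a REST geocoding endpoint; the host, path, and parameter names below are placeholders, not the actual LiveAddress contract, so lift the real ones from the samples repo:
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class GeocodeClient {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and parameters -- consult the LiveAddress docs/samples for the real ones
        String street = URLEncoder.encode("1 Infinite Loop", StandardCharsets.UTF_8);
        String url = "https://api.example.com/street-address?auth-token=YOUR_TOKEN"
                + "&street=" + street + "&city=Cupertino&state=CA";
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // The JSON response carries the geocoded latitude/longitude to store locally
        System.out.println(resp.body());
    }
}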
You could use a screen scraper against http://www.zip-info.com/cgi-local/zipsrch.exe?ll=ll&zip=13206&Go=Go if you just need to get them once.
Microsoft also provides this service. Check if this can help you: http://msdn.microsoft.com/en-us/library/cc966913.aspx

News API for use in application? [closed]

I want to include the latest news as part of my application. Does anybody know of any news APIs, or sites that say you can use their news in your app?
Use RSS. You can choose feeds from wherever you want, they're widely available. (Just make sure the content owners are okay with it if you are only reading specific feeds.) It's in XML, so there are plenty of libraries for reading that. In fact an RSS reader is typically one of the types of things (next to twitter clients) used to teach iPhone programming, so a little googling will find you lots of sample code.
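As a sketch of how little code a feed needs, here's a minimal Java example that prints the item titles of any RSS 2.0 feed using the standard DOM parser (the feed URL is just an example):
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class RssTitles {
    public static void main(String[] args) throws Exception {
        // Any RSS 2.0 feed URL works here; this one is only an example
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("https://news.ycombinator.com/rss");
        // RSS 2.0 layout: <rss><channel><item><title>...</title></item>...
        NodeList items = doc.getElementsByTagName("item");
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            System.out.println(
                    item.getElementsByTagName("title").item(0).getTextContent());
        }
    }
}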
There are plenty of RSS reader samples on the internet, for Mac OS X or iPhone. You can use a custom XML parser or JSON. Check GitHub for samples.
Financial news only:
MyAllies Companies Breaking News, with JSON output.
Example:
http://app.myallies.com/api/news for all companies' recent news.
Or:
http://app.myallies.com/api/news/amzn for Amazon news.
Free service, no limits (at least for now).
Why not consider a news search API? Newsriver (https://newsriver.io) mines hundreds of thousands of online sources and aggregates their news articles. Its API allows you to retrieve structured online news articles via simple search queries.
Not sure if you or anyone else is still looking for a news API but you could get curated financial and business news with relevant financial asset and other tags from here - http://www.cityfalcon.com/financial-news-api. You could also get financial tweets.
If you want stock news specifically, I suggest IEX, Alphavantage, or stocknewsapi. I personally use stocknewsapi, as it is the most relevant and the images are of great quality. Hope this helps!
Both ContextualWeb's News API and newsapi.org offer very generous free tiers (10,000 requests/month on ContextualWeb and 500 requests/day on newsapi.org), and both are very easy to embed in your app or website. You can easily filter by news outlet or topic. ContextualWeb also uses Natural Language Processing to extract the topics from each article, which might make results easier to parse.
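For example, fetching headlines from newsapi.org is a single authenticated GET in Java; the endpoint shape below follows their v2 top-headlines documentation, but double-check it (and supply your own key) before relying on it:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TopHeadlines {
    public static void main(String[] args) throws Exception {
        // Top headlines for the US; apiKey is your own newsapi.org key
        String url = "https://newsapi.org/v2/top-headlines?country=us&apiKey=YOUR_API_KEY";
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // Response is JSON: { "status": "ok", "articles": [ { "title": ..., "url": ... }, ... ] }
        System.out.println(resp.body());
    }
}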
I used datanews.io. Its REST API worked well in my case.
It can also monitor different news queries; be sure to check that out as well.

Music analysis software [closed]

Greetings
I may have imagined this, but does anyone know if Last.fm previously used some form of open-source project to perform analysis on music to determine similar music?
As it's now moved to a pay version, I'd like to make something that can add known music to my playlist. (I hate scanning my computer for similar music manually.)
Failing that, does anyone know of any system that I could use to replace this? Ideally I'd like some form of API / source code that I can use to automate the whole process into batch jobs.
Thanks,
[edit]
Ideally I was looking for something more along the lines of content matching. I'm the type of person who just throws all my music into one unorganized location. Then, being lazy, I would ideally expect a playlist to be generated for me, filled with similar music.
Last.fm uses http://www.audioscrobbler.net/ - it also provides access to its database via an API.
[/edit]
Music similarity is not an easy problem.
There are two general approaches to solving this problem.
Approach 1.
Throw data at the problem. This is the approach LastFM and Pandora take. It's basically one huge database maintained by either a community or a group of experts. Note that to use this approach you will need clean metadata or some kind of audio fingerprinting solution like MusicBrainz. Once you have the feature database, you can use algorithms such as the Pearson correlation coefficient to find similar items (a sketch follows below).
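As a sketch of that last step, the Pearson correlation of two equal-length rating (or feature) vectors is only a few lines of Java; real systems have to cope with sparse, partially overlapping vectors, which this ignores:
public class Pearson {
    // Pearson correlation coefficient of two equal-length vectors
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0;
        for (int i = 0; i < n; i++) { sx += x[i]; sy += y[i]; }
        double mx = sx / n, my = sy / n;
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < n; i++) {
            cov += (x[i] - mx) * (y[i] - my);
            vx += (x[i] - mx) * (x[i] - mx);
            vy += (y[i] - my) * (y[i] - my);
        }
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        // Two users' ratings of the same four songs
        double[] a = {5, 3, 4, 4}, b = {4, 2, 4, 5};
        System.out.println(pearson(a, b)); // ~0.65: fairly similar taste
    }
}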
Approach 2.
Throw algorithms at the problem, in particular computer audition algorithms. This means you calculate vectors of the various features a song contains, and, using neural nets and a variety of other techniques, you find other songs with similar vectors. This approach has been used successfully for automatic genre classification and query-by-example.
If you are looking for open-source software for music analysis, Marsyas can do pretty much everything the commercial stuff can do. It's the brainchild of George Tzanetakis, and on his website you can find many papers about the state of affairs in computer audition.
There's a web API at The Echo Nest that includes a get_similar web service that allows you to retrieve artists similar to a set of seed artists. You can use this to help build playlists. The Echo Nest also has a set of web APIs that will perform a detailed analysis of a track (similar to the aforementioned Marsyas) that one could use as the basis for an acoustic-based song similarity method. (Caveat: I work at The Echo Nest.) Of course, if you use iTunes, there are some canned solutions. iTunes now has a music recommender / playlist generator that will build playlists of songs from similar artists. Similarly, the company Mufin has an iTunes add-on which will perform acoustic analysis of your tracks and use this analysis to build playlists.
If you are interested in building your own music similarity system, I suggest that you take a look at the proceedings for ISMIR (the International Society of Music Information Retrieval). There's quite a bit of research around music similarity and playlisting that you'll find helpful. You can find the proceedings at ismir.net
Wouldn't it be simpler/more efficient to query (or build?) some internet database based on genre/style/etc.? I used Last.fm and similar sites but never felt they did anything more than this (at least the results weren't indicating that) ;)
I am not very sure what exactly you want, but how about MusicBrainz?
To be clear, AudioScrobbler is the tech built by Last.fm to run their service. They collect stats on the tracks which people listen to (also 'Like's of tracks and artists).
So Last.fm does social similarity... users who listened to X also listened to Y - you like X so maybe you will also like Y.
Given a large enough user base submitting stats, social similarity is likely to provide better results than computer-analysis approaches. For example, try querying the Last.fm API for artists similar to someone you know: it probably comes up with some good matches and a few obscure or oddball ones, which nonetheless reflect real people's listening habits. The more obscure the artist you search for, the more likely you are to get weird matches.
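That similar-artists query is a single GET against the Last.fm API; the method and parameter names below follow their artist.getSimilar documentation, and you need your own API key:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SimilarArtists {
    public static void main(String[] args) throws Exception {
        // artist.getsimilar returns a ranked list of similar artists with match scores
        String url = "https://ws.audioscrobbler.com/2.0/"
                + "?method=artist.getsimilar&artist=Radiohead"
                + "&api_key=YOUR_API_KEY&format=json";
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
    }
}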
Even if you could get the automatic genre classification method described by George Tzanetakis to work well, you would be missing out on the subjective judgements of quality supplied by real people. E.g., two tracks may both look like 'Jazz', but there are many different kinds of Jazz... and I might be interested in non-Jazz albums that a favourite jazz musician has played on. Social similarity would be more likely to capture that info.
I used to use Predixis Magic Mixer. It will perform a brief analysis of the audio in a file, produce a "fingerprint", and compare it to fingerprints in a central database. If listed, it sets an identification code, which is the result of the analysis of the entire file, in the client copy. If not, it does a full analysis on the client computer (takes a while), uploads that to the central database, and keeps the local copy as well. From that information it can set up a playlist that relates tunes to one another depending on the actual sounds. I have not used it for a few years, so I don't know if the central database servers are still in operation, but a web search says no. It should still work, but every file will require a full analysis.
