I want to include the latest news as part of my application. Does anybody know of any news APIs, or sites that say you can use their news in your app?
Use RSS. You can choose feeds from wherever you want; they're widely available. (Just make sure the content owners are okay with it if you are reading specific feeds.) It's XML, so there are plenty of libraries for reading it. In fact, an RSS reader is one of the typical projects (alongside Twitter clients) used to teach iPhone programming, so a little googling will find you lots of sample code.
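For instance, here's a minimal sketch of reading an RSS 2.0 feed with Python's standard library; the feed URL is a placeholder you would swap for a feed whose owner permits reuse.

import urllib.request
import xml.etree.ElementTree as ET

# Placeholder URL -- substitute a feed whose owner permits reuse.
FEED_URL = "https://example.com/news/rss"

with urllib.request.urlopen(FEED_URL) as resp:
    root = ET.fromstring(resp.read())

# RSS 2.0 nests items under channel/item; title and link are children.
for item in root.iterfind("./channel/item"):
    print(item.findtext("title", default="(untitled)"))
    print(" ", item.findtext("link", default=""))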
There are plenty of RSS reader samples on the internet for Mac OS X and iPhone. You can use a custom XML parser or JSON. Check GitHub for samples.
Financial news only: MyAllies Companies Breaking News, with JSON output.
Example:
http://app.myallies.com/api/news for all companies' recent news, or
http://app.myallies.com/api/news/amzn for Amazon news.
It's a free service with no limit (at least for now).
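A minimal sketch of calling the endpoint above from Python; the response schema isn't documented in this answer, so the sketch just pretty-prints whatever JSON comes back.

import json
import urllib.request

# Endpoint from the answer above; the response layout is unknown here,
# so simply pretty-print the returned JSON.
with urllib.request.urlopen("http://app.myallies.com/api/news/amzn") as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))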
Why not consider a news search API? Newsriver (https://newsriver.io) mines hundreds of thousands of online sources and aggregates their news articles. Its API lets you retrieve structured online news articles via simple search queries.
Not sure if you or anyone else is still looking for a news API, but you can get curated financial and business news, tagged with the relevant financial assets and other tags, from here: http://www.cityfalcon.com/financial-news-api. You can also get financial tweets.
If you want stock news specifically, I suggest IEX, Alpha Vantage, or Stock News API. I personally use Stock News API, as it is the most relevant and the images are great quality. Hope this helps!
Both ContextualWeb's News API and newsapi.org offer very generous free tiers (10,000 requests/month on ContextualWeb and 500 requests/day on newsapi.org), and both are very easy to embed in your app or website. You can easily filter by news outlet or topic. ContextualWeb also uses Natural Language Processing to extract the topics from each article, which might make results easier to parse.
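For instance, a minimal sketch against newsapi.org's "everything" endpoint; the query parameters follow its public documentation, and API_KEY is a placeholder for your own key.

import json
import urllib.request

# newsapi.org v2 "everything" endpoint; API_KEY is a placeholder.
API_KEY = "YOUR_API_KEY"
url = ("https://newsapi.org/v2/everything"
       "?q=technology&pageSize=5&apiKey=" + API_KEY)

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for article in data.get("articles", []):
    print(article["title"], "-", article["source"]["name"])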
I used datanews.io; its REST API worked well in my case. It can also monitor different news queries, so be sure to check that out as well.
I am trying to build my own training corpus for Named Entity Recognition, but I don't know if there is already an existing tool for this or if I have to implement one myself.
Basically, what I need to do is take a corpus and manually tag it word by word, which is pretty tedious, but it has to be done.
Can anyone tell me if there is already an existing one and where to get it?
I had a good experience working with BRAT.
GATE is also a very capable annotation tool, but a more complex one, with a steeper learning curve.
We had a nice experience using DataTurks (https://dataturks.com). They provide a nice, intuitive UI that lets you add collaborators, get insights into your data, see a leaderboard for annotators, and use some other handy features.
For online annotation of a text or HTML corpus of relatively short documents, I also recommend BRAT. You will have to go under the hood of the Python web application if you want to do anything custom. It also failed to work for me on large HTML documents (100 or so pages).
I have also used stand-alone apps:
Protégé + Knowtator: a bit cumbersome to set up and use, but it works.
GATE: also cumbersome, and it somewhat works. Back up your annotations at regular intervals, as you might get surprised by a stack trace that has also wiped or corrupted your annotated corpus (which is just serialized Java objects).
If you are dealing with PDF documents, we built a web-based PDF annotation tool: NOTA. It accepts anything printed to PDF, including scans. We do commercial OCR on our end to recover text from images. There is a REST API to create color-coded annotation schemas and pre-populate documents with annotations, as well as a REST API for exporting formatted text and annotation offsets. There is also a JS API you can use to customize annotation workflows, add metadata to annotations, etc. Relationships are not supported out of the box. Large documents (200+ pages) are supported. Email us and we can give you an API key to try it out. Details and documentation links can be found here. It is free for small research projects.
I myself co-develop the web-based text annotation tool tagtog.net.
There is nothing to install, and you can define the types of entities you want to annotate. Additionally, you can annotate relationships, document labels, and much more. You can upload your documents in many different formats, including PDF and markdown, and annotate together with your team collaboratively. We have put great care into making the interface easy and beautiful.
You can start right away with a free account. Also, I would be happy to help you with any doubt or issue you may have; just ping me or write to the email address shown on the website, tagtog.net.
Our annotation tool Prodigy is very scriptable, and is designed for active learning. It integrates especially well with our NLP library spaCy.
We've paid particular attention to the Named Entity Recognition (NER) annotation workflows, as entity recognition can otherwise be very slow. I have a tutorial video on this:
https://www.youtube.com/watch?v=l4scwf8KeIA
There is a tool called DataTurks that is super simple to use: a fully online NLP annotation tool, so I can easily push my teammates to complete datasets for our projects.
Try TagEditor. It is a desktop application designed to annotate text for training with the spaCy library. You can tag named entities, dependencies, parts of speech, and text categories, and export a JSON file.
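If you end up targeting spaCy (as the Prodigy and TagEditor answers do), its classic v2-style NER training data is just text plus character-offset entity spans; a minimal hand-made example, with an invented sentence and labels:

# Classic spaCy v2-style NER training data: (text, annotations) pairs
# with character-offset entity spans. Sentence and labels are invented.
TRAIN_DATA = [
    (
        "Apple acquired Shazam in 2018.",
        {"entities": [(0, 5, "ORG"), (15, 21, "ORG"), (25, 29, "DATE")]},
    ),
]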
I understand the App Store rules:
3.9: Developers are responsible for assigning appropriate keywords for their apps. Inappropriate keywords may be changed/deleted by Apple.
8.5: Use of protected 3rd-party material (trademarks, copyrights, trade secrets, otherwise proprietary content) requires a documented rights check, which must be provided upon request.
I want to use the name of a competitor's app as a keyword for my app, but the competitor is complaining that I am doing so (though many other people are doing the same).
For instance (not the real case here): my app is a photo app, and I have put "instagram" as a keyword.
I believe it's fair to use a competitor's name as a keyword as long as I don't outrank it. Apple also seems to suggest that people use trademarked names as "descriptors".
What would you suggest?
You are bound by US rules and those of your own country. Assuming the rules of the USA and of the UK (I am from the UK, which is why I include it here), you can use their trademark as long as you are only using it to draw a comparison or to describe your product, which is what you are doing here. You cannot use it in such a way that it causes confusion between your product and theirs. By the sound of things, you should be okay.
The only thing to be worried about is that if the other company complains, there will be a period while the complaint is reviewed; during that time your app may be blocked from the App Store, so be prepared to wait while that is resolved.
After writing this I did a quick google and found the link below; it does a better job of explaining this than I have. Be aware that the information below, and in my own answer, may not apply in your country. I am not a lawyer, only an app developer myself.
http://www.insidecounsel.com/2011/11/08/ip-using-a-competitors-trademark-in-marketing
I don't think it's fair to use a competitor's name, especially trademarks, copyrights, etc. Apple does this to protect your competitor's interests. And the fact that many other people succeed in using it doesn't imply that you can.
I'm trying to find a webservice that will allow me to get a county name (not country) for a specific lat/long. I would be performing the lookup within a server application (likely a Java application). It doesn't have to be a webservice if there is some library out there, I suppose, but I would like up-to-date information. I have looked around quite a bit for an API that supports this, but so far I haven't been able to find one that works. I have tried the Yahoo APIs like so:
http://where.yahooapis.com/geocode?q=39.76144296429947,%20-104.8011589050293
But it doesn't populate the address information. I've tried some of the "flags" options there too, to no avail.
I've also looked around at Google's APIs, but I've read in multiple places that they don't populate the county.
So does anyone know of any APIs that will take a Lat/Long and return the County associated with that location? And if you have any examples, that would be great.
I'd also like to know which APIs allow for use in a commercial application. A lot of the data I've found says that you can't use the data to make money. I might be reading those wrong, but I'm looking to build a service that I'd likely charge for that would use this data. So I'd need options. Maybe free services while I'm exploring options, and pay services down the road.
Just for completeness, I found another API to get this data that is quite simple, so I thought I'd share.
https://geo.fcc.gov/api/census/
The FCC provides a Block API for exactly this problem; it uses census data to perform the lookup (a sketch of calling it follows the quoted policy below).
Their usage-limit policy is (from developer#fcc.gov):
We do not have any usage limits for the block conversion API, but we do ask that you try to spread out your requests over time if you can.
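A minimal sketch of calling the FCC Block API from Python; the exact response field names below ("County", "name") are assumptions, so inspect the raw JSON if they differ.

import json
import urllib.request

# FCC Census Block API (see https://geo.fcc.gov/api/census/). The
# response field names used below are assumptions; inspect the raw
# JSON if they differ.
lat, lon = 39.76144296429947, -104.8011589050293
url = (f"https://geo.fcc.gov/api/census/block/find"
       f"?latitude={lat}&longitude={lon}&format=json")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(data["County"]["name"])  # e.g. "Adams"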
Google does populate the county for your example,
http://maps.googleapis.com/maps/api/geocode/json?latlng=39.76144296429947,-104.8011589050293&sensor=false
In the response, look under the key address_components, which contains this object representing Adams County:
{
  "long_name": "Adams",
  "short_name": "Adams",
  "types": [
    "administrative_area_level_2",
    "political"
  ]
}
Here's what the Geocoding API's docs say:
administrative_area_level_2 indicates a second-order civil entity below the country level. Within the United States, these administrative levels are counties. Not all nations exhibit these administrative levels.
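A minimal sketch of pulling the county out of that response; note that current versions of the Geocoding API require an API key, so YOUR_KEY below is a placeholder.

import json
import urllib.request

# Google Geocoding API; current versions require an API key (YOUR_KEY
# is a placeholder).
url = ("https://maps.googleapis.com/maps/api/geocode/json"
       "?latlng=39.76144296429947,-104.8011589050293&key=YOUR_KEY")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Scan the address components for administrative_area_level_2 -- the
# county, within the United States.
for result in data.get("results", []):
    for comp in result.get("address_components", []):
        if "administrative_area_level_2" in comp["types"]:
            print(comp["long_name"])  # e.g. "Adams"
            break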
Another option:
Download the cities database from http://download.geonames.org/export/dump/
Add each city as a lat/long -> country mapping to a spatial index such as an R-tree (some databases also have this functionality)
Use a nearest-neighbour search to find the country corresponding to the closest human settlement for any given point (see the sketch after this list)
Advantages:
Does not depend on an external server being available
Much faster (easily does thousands of lookups per second)
Disadvantages:
May give wrong answers close to borders, especially in sparsely populated areas
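A sketch of that nearest-settlement lookup in Python, using scipy's k-d tree; it assumes you have extracted cities500.txt from the GeoNames dump, and the column indices (latitude=4, longitude=5, country code=8) follow the GeoNames readme.

from scipy.spatial import cKDTree

# Load (lat, lon) -> country code pairs from the GeoNames cities dump.
points, labels = [], []
with open("cities500.txt", encoding="utf-8") as f:
    for line in f:
        cols = line.rstrip("\n").split("\t")
        points.append((float(cols[4]), float(cols[5])))
        labels.append(cols[8])  # ISO country code

tree = cKDTree(points)

def country_for(lat, lon):
    # Euclidean distance on raw lat/long is a rough approximation; it
    # misbehaves near the poles and the antimeridian, but is fine for a
    # "closest settlement" heuristic at most latitudes.
    _, idx = tree.query((lat, lon))
    return labels[idx]

print(country_for(39.7614, -104.8012))  # e.g. "US"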
You may want to have a look at TIGER data and see if it has polygons containing the county name in an attribute. If it does, the Java GeoTools API lets you work with this data. You would be performing point-in-polygon queries against the county polygons, followed by a feature-attribute lookup (a sketch of the idea follows).
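GeoTools itself is Java; as an illustration of the same point-in-polygon idea, here is a sketch using Python's shapely (my substitution, not what this answer used), with a toy rectangle standing in for a real TIGER county boundary.

from shapely.geometry import Point, Polygon

# Toy stand-in for a county boundary; real polygons would come from
# TIGER data. Note shapely uses (lon, lat) coordinate order.
counties = {
    "Adams": Polygon([(-105.05, 39.56), (-104.60, 39.56),
                      (-104.60, 40.00), (-105.05, 40.00)]),
}

point = Point(-104.8012, 39.7614)
for name, poly in counties.items():
    if poly.contains(point):
        print(name)  # "Adams"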
Maybe this is a great solution; it returns JSON, and I always use it in my projects. (Note that maps.google.com/maps/geo is Google's old v2 geocoder, which has since been deprecated in favour of the v3 endpoint shown above.)
http://maps.google.com/maps/geo?ll=10.345561,123.896932
Then simply extract the information using PHP:
// Fetch the geocoder response and decode the JSON into a PHP object.
$x = file_get_contents("http://maps.google.com/maps/geo?ll=10.345561,123.896932");
$j_decodex = json_decode($x);
print_r($j_decodex); // dump the decoded structure for inspection
Greetings
I may have imagined this, but does anyone know if Last.fm previously used some form of open-source project to perform analysis on music to determine similar music?
As it has now moved to a pay version, I'd like to make something that can add known music to my playlist. (I hate scanning my computer for similar music manually.)
Failing that, does anyone know of any system that I could use to replace this? Ideally I'd like some form of API/source code that I can use to automate the whole process into batch jobs.
Thanks,
[edit]
Ideally I was looking for something more along the lines of content matching. I'm the type of person who just throws all my music into one unorganized location. Then, being lazy, I would ideally expect a playlist to be generated for me, giving a similar-music type of playlist.
Last.fm uses http://www.audioscrobbler.net/ - it also provides access to its database via an API.
[/edit]
Music similarity is not an easy problem.
There are two general approaches to solving this problem.
Approach 1.
Throw data at the problem. This is the approach Last.fm and Pandora take. It's basically one huge database, maintained by either a community or a group of experts. Note that to use this approach you will need clean metadata or some kind of audio-fingerprinting solution like MusicBrainz. Once you have the feature database, you can use algorithms such as the Pearson correlation coefficient to find similar items (see the sketch below).
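A toy illustration of the data-driven approach: given per-song play counts from a handful of users, the Pearson correlation between songs ranks their similarity. The numbers are invented; numpy's corrcoef does the work.

import numpy as np

# Rows are users, columns are songs; play counts are invented.
plays = np.array([
    [12,  3, 11,  0],   # user A
    [ 9,  1, 10,  2],   # user B
    [ 2,  8,  1,  9],   # user C
    [ 0, 11,  1, 12],   # user D
], dtype=float)
songs = ["song_w", "song_x", "song_y", "song_z"]

sim = np.corrcoef(plays.T)  # song-by-song Pearson correlation matrix
i = songs.index("song_w")
for name, score in sorted(zip(songs, sim[i]), key=lambda t: -t[1]):
    print(name, round(score, 2))  # song_w itself first, then song_y, ...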
Approach 2.
Throw algorithms at the problem, in particular computer-audition algorithms. This means you calculate vectors of the various features a song contains and, using neural nets and a variety of other techniques, you find other songs with similar vectors. This approach has been used successfully for automatic genre classification and query-by-example (a content-based sketch follows below).
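A minimal content-based sketch of that idea: summarize each track as a mean MFCC vector and compare with cosine similarity. This uses the third-party librosa library (my choice for illustration) and hypothetical file names.

import numpy as np
import librosa  # third-party; my choice for illustration

def feature_vector(path):
    # Mean MFCC vector: a crude but common content-based descriptor.
    y, sr = librosa.load(path, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file names, for illustration only.
v1 = feature_vector("track1.mp3")
v2 = feature_vector("track2.mp3")
print(cosine(v1, v2))  # closer to 1.0 suggests more similar timbre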
If you are looking for open-source software for music analysis, Marsyas can do pretty much everything the commercial stuff can do. It's the brainchild of George Tzanetakis, and on his website you can find many papers about the state of affairs in computer audition.
There's a web API at The Echo Nest that includes a get_similar web service, which allows you to retrieve artists similar to a set of seed artists. You can use this to help build playlists. The Echo Nest also has a set of web APIs that will perform a detailed analysis of a track (similar to the aforementioned Marsyas), which one could use as the basis for an acoustic song-similarity method. (Caveat: I work at the Echo Nest.) Of course, if you use iTunes, there are some canned solutions: iTunes now has a music recommender / playlist generator that will build playlists of songs from similar artists. Similarly, the company Mufin has an iTunes add-on which will perform acoustic analysis of your tracks and use this analysis to build playlists.
If you are interested in building your own music similarity system, I suggest that you take a look at the proceedings for ISMIR (the International Society of Music Information Retrieval). There's quite a bit of research around music similarity and playlisting that you'll find helpful. You can find the proceedings at ismir.net
Wouldn't it be simpler/more efficient to query (or build?) some internet database based on genre/style/etc.? I used Last.fm and similar sites but never felt they did anything more than this (at least the results weren't indicating that). ;)
I am not very sure what exactly you want, but how about MusicBrainz?
To be clear, AudioScrobbler is the tech built by Last.fm to run their service. They collect stats on the tracks people listen to (and also "likes" of tracks and artists).
So Last.fm does social similarity... users who listened to X also listened to Y - you like X so maybe you will also like Y.
Given a large enough user base submitting stats, social similarity is likely to provide better results than computer analysis approaches. For example, try querying the Last.fm API for similar artists to someone you know - probably comes up with some good matches and a few obscure or oddball ones, which nonetheless reflect real people's listening habits. The more obscure the artist you search for the more likely you'll get weird matches.
Even if you could get the automatic genre classification method described by George Tzanetakis to work well, you would be missing out on the subjective judgements of quality supplied by real people. E.g., two tracks may both look like "Jazz", but there are many different kinds of Jazz... and I might be interested in non-Jazz albums that a favourite jazz musician has played on. Social similarity is more likely to capture that info. (A sketch of such a query follows.)
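For instance, a minimal sketch of the similar-artists query mentioned above, against the Last.fm API; API_KEY is a placeholder, and the response keys are my assumptions, so verify against the Last.fm API docs.

import json
import urllib.request

# Last.fm artist.getSimilar call; API_KEY is a placeholder and the
# response keys below are assumptions -- verify against the API docs.
API_KEY = "YOUR_API_KEY"
url = ("http://ws.audioscrobbler.com/2.0/"
       "?method=artist.getsimilar&artist=Miles+Davis"
       "&api_key=" + API_KEY + "&format=json&limit=5")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for artist in data["similarartists"]["artist"]:
    print(artist["name"], artist["match"])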
I used to use Predixis Magic Mixer. It performs a brief analysis of the audio in a file, produces a "fingerprint", and compares it to fingerprints in a central database. If the file is listed, it sets an identification code (the result of analyzing the entire file) in the client copy. If not, it does a full analysis on the client computer (which takes a while), uploads that to the central database, and keeps the local copy as well. From that information it can set up a playlist that relates tunes to one another, depending upon the actual sounds. I have not used it for a few years, so I don't know if the central database servers are still in operation; a web search says no. It should still work, but every file will require full analysis.
What do you use to capture webpages, diagram/pictures and code snippets for later reference?
Evernote http://www.evernote.com and delicious http://www.delicious.com
Evernote
Notepad2's clipboard feature (Notepad2.exe /c as a link in Launchy)
Windows Clippings or PrintKey
Firefox extension Page Saver
Delicious
Microsoft OneNote.
I just have an Emacs instance running on my home machine, under screen. Wherever I am (and have network access) I can connect to it remotely. I stick all useful URLs, birthday present ideas, future dates, code snippets, ideas for docs, etc. in there.
I rarely have doodles/diagrams I need to capture; I tend to draw them in ASCII in my file if needed.
I must admit I'm a bit stuck if I have no network/wifi somewhere, but that's rarely the case.
I find Google Notebook very good for drive-by code snippeting, and Google Bookmarks (especially when used with the Google Toolbar) for web pages.
The benefit of these tools is that they are available from any PC on the web, though making good use of semantic organisation via labels is recommended.
Here's my response to a similar question:
The combination of OneNote with a tablet PC is awesome! I was a bit of a skeptic at first. I used the trial version and then forgot about it. A year later I had an unruly collection of files, project related emails, notebooks and scraps of paper all scattered throughout my life. I went back to OneNote and all my problems went away. Some highlights:
Everything is searchable. The character recognition is good enough that my chicken-scratch meeting notes can be searched. Text within images is searchable.
OneNote syncs with Outlook so finding meeting notes is a breeze.
I now embed all files into OneNote - PDFs, spreadsheets, Word docs, images, web clippings.
OneNote is constantly saving all changes so, combined with a scheduled automated backup, everything is in one place and is safe.
There are some built-in collaboration tools I have yet to try but that look useful.
It is SO worth the price. It allows you to get started on a project and avoid all that time spent deciding how to organize things.
Zotero is a nice plugin for Firefox.
SnagIt captures everything you could want, and lets you annotate it.
I prefer to use good old Delicious.
Apart from that, I use the Scrapbook extension in Firefox when I want to save something to disk. It's possible to tag the page, edit it, and remove those stupid ads before saving it.
I also have a wiki-on-a-stick that I carry around on a USB key for code snippets that should go to other clients when I'm travelling around.
Mostly, my code snippets are embedded into projects I carry on the same USB key, which allows me to demonstrate some technologies right off to the client and get their advice based on a demonstration, not a listing of code...
For screenshots, I use a mix of ScrapBook and ScreenGrab. They are both Firefox plugins that are pretty amazing when you need to get a screenshot of a page for editing. Works great for consulting.
https://addons.mozilla.org/en-US/firefox/addon/427
https://addons.mozilla.org/en-US/firefox/addon/1146
Delicious Bookmarks extension for Firefox
It's a little primitive, but I've been using TiddlyWiki (a self-contained, single-file wiki: http://www.tiddlywiki.com/), which works well for basic text and markup. I combine it with a plugin to sync it with Outlook's notes (http://syncoutlooknotes.tiddlyspot.com/#SyncOutlookNotes) so that I can then sync it to my BlackBerry using the standard Outlook-BlackBerry sync mechanism. This has the significant advantage that I can look at my notes, and even write new ones, when I'm out and about, away from my laptop, or just don't feel like lugging the laptop around to a meeting I don't really need it for.
I'd prefer using something more advanced like OneNote, but being able to take my notes with me on the little BlackBerry has turned out to be a significant advantage.
Google Notebook is a very convenient tool. You can clip and save any part of a web page without leaving your browser tab. The Notebook plug-in automatically saves clippings as separate notes in your notebooks and keeps links back to the original web pages. You can organize your clippings later by moving them between notebooks and/or tagging them. Very good for code snippets and references.