Has anyone noticed that Quandl WIKI or EOD data differ from Yahoo or Bloomberg? I noticed this while comparing data providers, using AAPL as a test case. AAPL split its stock 7-for-1 on June 9, 2014, so I think it is an ideal candidate for comparing data.
Here is a picture of the data comparison:
Do you know why they are different and which one I should trust? If I should trust neither, is there any other free data provider I can trust?
It depends on whether you adjust for dividends too (the series starting at 52 is adjusted for dividends, the series starting at 58 is not).
For reference, Bloomberg data:
Date         Unadjusted price   Adjusted for split   Adjusted for split & div
27/12/2011   406.53             58.0757              52.4533
28/12/2011   402.64             57.52                51.9514
29/12/2011   405.12             57.8743              52.2714
etc.
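To reproduce these adjusted series yourself, here is a minimal Python/pandas sketch of the usual back-adjustment arithmetic (the split-ratio and dividend series are assumed inputs; Bloomberg's exact rounding conventions may differ slightly):

import pandas as pd

def back_adjust(close: pd.Series, splits: pd.Series, dividends: pd.Series) -> pd.DataFrame:
    """Back-adjust a daily close series for splits and cash dividends.

    splits:    split ratio on each split date (e.g. 7.0 on 2014-06-09), 1.0 elsewhere.
    dividends: cash dividend on each ex-date, 0.0 elsewhere.
    """
    # Every date before a split is divided by the product of all later split ratios.
    split_factor = splits[::-1].cumprod()[::-1].shift(-1).fillna(1.0)
    adj_split = close / split_factor

    # Every ex-dividend date scales all earlier prices by (1 - dividend / prior close).
    div_factor = (1.0 - dividends / close.shift(1)).fillna(1.0)
    cum_div_factor = div_factor[::-1].cumprod()[::-1].shift(-1).fillna(1.0)
    adj_split_div = adj_split * cum_div_factor

    return pd.DataFrame({"adj_split": adj_split, "adj_split_div": adj_split_div})

# Sanity check against the table: 406.53 / 7 = 58.0757, the split-adjusted
# value shown for 27/12/2011.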
I am writing an algorithm to extract keywords such as rent, deposit, liabilities, etc. from a rent agreement document. I used a Naive Bayes classifier, but it is not producing the desired output.
My training data looks like this:
train = [
    ("refundable security deposit Rs 50000 numbers equal 5 months", "deposit"),
    ("Lessee pay one month's advance rent Lessor", "security"),
    ("eleven (11) months commencing 1st march 2019", "duration"),
    ("commence 15th feb 2019 valid till 14th jan 2020", "startdate"),
]
The code below is not returning the desired keyword:
classifier.classify(test_data_features)
Please share any NLP libraries that could help accomplish this.
It seems like you need to build your own NER (Named Entity Recognizer) for parsing your unstructured documents. You tag every word of each sentence with a label; based on the surrounding words and the context window, your trained NER will be able to give you the results you are looking for.
Check the Stanford CoreNLP implementation of NER.
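Stanford CoreNLP is a Java toolkit; as a rough illustration of the same tagging idea in Python, here is a sketch using spaCy's rule-based EntityRuler. The labels and patterns below are made up for this example; a production solution would instead train a statistical NER on annotated agreements.

import spacy

# Blank English pipeline with a rule-based entity ruler; the patterns are illustrative.
nlp = spacy.blank("en")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([
    {"label": "DEPOSIT", "pattern": [{"LOWER": "security"}, {"LOWER": "deposit"}]},
    {"label": "RENT", "pattern": [{"LOWER": "advance"}, {"LOWER": "rent"}]},
    {"label": "DURATION", "pattern": [{"IS_DIGIT": True}, {"LOWER": "months"}]},
])

doc = nlp("Lessee shall pay a refundable security deposit and one month's advance rent "
          "for a term of 11 months.")
for ent in doc.ents:
    print(ent.text, "->", ent.label_)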
I've been researching CLDR and IANA in order to find a centralized mapping of UN/LOCODEs to Olson (IANA) time zones.
Ideally I would like to have for example:
+--------------+--------------------+
|un_locode |timezone |
+--------------+--------------------+
|USLAX | America/Los_Angeles|
+--------------+--------------------+
for every UN/LOCODE.
Are my newbie skills failing me in understanding how to use these sources to reach my goal? (If so, please point me towards the scripting that would allow me to automate producing these mappings.)
Or do these sources lack the data correlation that I'm looking for? (If so, please let me know if you have a reliable source.)
We faced the exact same problem and ended up building a solution.
The solution involves linking the UN/LOCODE database with a geolocation/timezone database.
There are a few caveats to this approach that were captured by Matt Johnson's answer and the accompanying comments.
Namely:
the UN/LOCODE database of coordinates is not complete [1] and sometimes has inaccurate data [2]
in some cases, a 1-to-1 mapping between a UN/LOCODE and a timezone is impossible due to the political nature of timezones
the two points above are made worse by the inaccuracy of free coordinates-to-timezone databases. It helps to get a dataset that also includes territorial waters, so that ports' timezones can be properly linked to the country they belong to.
The following repository, https://github.com/Portchain/un_locodes_sql, contains the code to extract and link the data. It outputs a SQL file that can be imported into a PostgreSQL DB.
The geolocation/timezone data is based on the geo-tz [3] module, which seems to source its data from timezone-boundary-builder [4].
Again, the list provided by our repository is of course incomplete and inaccurate. If you see any error in the data, please open a GitHub issue and let's build an accurate, open-source list of UN/LOCODE, coordinate and timezone information.
[1] For example, both Los Angeles and San Francisco, USA (USLAX & USSFO) are missing coordinates in the UN/LOCODE database.
[2] The petroleum port of Abu al Bukhoosh (AEABU) is situated in Abu Dhabi (UAE). Its coordinates in the UN/LOCODE database position the port right in the middle of the Persian Gulf (https://www.port-directory.com/ports/abu_al_bukhoosh/). When resolved, this causes the timezone to be unknown.
[3] https://github.com/evansiroky/node-geo-tz
[4] https://github.com/evansiroky/timezone-boundary-builder
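For reference, here is a minimal Python sketch of the coordinate-to-timezone lookup step. geo-tz is a Node module, so timezonefinder is used here as a comparable Python library, and the parser assumes the UN/LOCODE coordinate column's degrees-and-minutes format (e.g. "3354N 11824W"); both choices are substitutions, not what the repository above actually uses.

from timezonefinder import TimezoneFinder

tf = TimezoneFinder()

def parse_unlocode_coords(raw):
    """Convert e.g. '3354N 11824W' to decimal (lat, lon); return None if blank."""
    if not raw or not raw.strip():
        return None  # e.g. USLAX and USSFO have no coordinates in the database
    lat_s, lon_s = raw.split()
    lat = int(lat_s[:2]) + int(lat_s[2:4]) / 60.0
    lon = int(lon_s[:3]) + int(lon_s[3:5]) / 60.0
    if lat_s[-1] == "S":
        lat = -lat
    if lon_s[-1] == "W":
        lon = -lon
    return lat, lon

def timezone_for(raw_coords):
    parsed = parse_unlocode_coords(raw_coords)
    if parsed is None:
        return None
    lat, lon = parsed
    return tf.timezone_at(lat=lat, lng=lon)

print(timezone_for("3354N 11824W"))  # -> "America/Los_Angeles"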
The GeoNames free database of cities (which is available for download) provides city names, latitude/longitude and, most importantly, timezone information. You can fairly quickly build your own database by connecting this information with the UN/LOCODE code lists based on name/country/coordinates.
I've not seen such a source. You could try to create one by mapping the lat/lon coordinates for those entries that have them, and correlating to IANA time zone by one of the methods listed here.
However, be sure to read Wikipedia's article about UN/LOCODE, especially the part describing errors with coordinates. Also note that many of the coordinates are simply not in the data at all; why, I don't know.
The list of UN/LOCODEs for the US is here, and it shows Los Angeles to be US LAX (not UNLAX). Its coordinates field is blank.
If you can find some other reliable source of UN/LOCODE to lat/lon, then you are in business. A quick search found that GeoNames claims to have this in their premium data subscription, but I haven't investigated further.
CLDR's map is here: https://unicode.org/reports/tr35/#Time_Zone_Identifiers
I saw CLDR tagged but not mentioned.
First-time user of this forum here; guidance on how to provide enough information is very much appreciated.

I am trying to replicate a presentation of data used in the medical education field. This will help improve the quality of examiners' marking of trainees in a clinical exam. What I would like to communicate is similar to what the College of General Practitioners already publishes about one of its own exams; please see www.gp10.com.au/slides/thursday/slide29.pdf to get a sense of what I want to present. I have access to Excel, SPSS and R, so any help with any of these would be great.

As a first attempt I used SPSS and created three variables: a dummy variable, a "station score" (ST) and a "global rating score" (GRS). The station score is a value between 0 and 10 (non-integer) and sits on the y-axis, like "Candidate Final Marks" in the PDF. The x-axis is the global rating scale, an integer from 1 to 6, represented in the PDF as the "Overall Performance Scale". When I use SPSS's boxplot I get a boxplot as depicted.
What I would like to do is overlay a single examiner's own scoring of X examinees. So, for example, one examiner (examiner A) provided the following marks:
ST: 5.53,7.38,7.38,7.44,6.81
GRS: 3,4,4,5,3
(this is transposed into two columns).
Whether in SPSS, Excel or R, how would I overlay the box-and-whisker plots with the individual data points provided by the one examiner? This would help show the degree to which an examiner's marking style is in concordance with the expected distribution of ST scores across GRS. Any help greatly appreciated!

I like Excel graphics, but I have found it very difficult to add the examiner's data as a separate series; somehow the examiner's GRS scores do not line up nicely on the x-axis. I am very new to R but very interested in it, and would put in the time to get a good result in R if a good result is viable. I understand JMP may be preferable for this type of thing, but access to it may not be possible.
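For what it's worth, here is a minimal sketch of the overlay idea in Python/seaborn; this is not one of the three tools listed, but the same pattern carries over to R's ggplot2 (geom_boxplot plus geom_jitter). The cohort numbers below are made up, while examiner A's marks are taken from the question.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical scores from all examiners, used to draw the boxplots.
cohort = pd.DataFrame({
    "GRS": [1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 6],
    "ST":  [2.1, 3.4, 3.9, 5.0, 5.6, 6.2, 6.8, 7.1, 7.5, 8.0, 8.4, 9.1],
})

# Examiner A's marks from the question.
examiner_a = pd.DataFrame({
    "GRS": [3, 4, 4, 5, 3],
    "ST":  [5.53, 7.38, 7.38, 7.44, 6.81],
})

order = [1, 2, 3, 4, 5, 6]  # keep both layers on the same categorical x positions
ax = sns.boxplot(data=cohort, x="GRS", y="ST", order=order, color="lightgrey")
sns.stripplot(data=examiner_a, x="GRS", y="ST", order=order, color="red", size=8, ax=ax)
plt.show()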
I'm interested in BTDigg.org, which is called a "DHT search engine". According to this article, it doesn't store any content and even has no database. Then how does it work? Doesn't it need to gather meta info and store it in a database like other normal search engines? After a user submits a query, does it scan the DHT network and return the results in "real time"? Is this possible?
I don't have specific insight into BTDigg, but I believe the claim that there is no database (or something that acts like a database) is false. The author of that article might have been referring to something more specific that you would encounter in a traditional torrent site, where actual .torrent files are stored, for instance.
This is how a BTDigg-like site works:
Run a bunch of DHT nodes, specifically for the purpose of "eavesdropping" on DHT traffic, to be introduced to info-hashes that people talk about.
Join those swarms and download the metadata (.torrent file) using the ut_metadata extension.
Index the information you find in there, mapped to the info-hash.
Provide a front-end for that index.
If you want to luxury it up a bit you can also periodically scrape the info-hashes you know about to gather stats over time and maybe also figure out when swarms die out and should be removed from the index.
So, the claim that you don't store .torrent files nor any content is true.
It is not realistic to search the DHT in real time, because the DHT is not organized around keyword searches; you need to build and maintain the index continuously, "in the background".
EDIT:
Since this answer, an optimization (BEP 51) has been implemented in some DHT clients that lets you query which info-hashes they are hosting, significantly reducing the cost of indexing.
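To make the "index the information and map it to the info-hash" step concrete, here is a minimal, self-contained sketch of the kind of index such a site maintains. The DHT crawling and ut_metadata fetching are out of scope here; seen_metadata stands in for whatever the crawler produces, and the hashes and names are made up.

from collections import defaultdict

# info-hash (hex) -> metadata pulled from the torrent's "info" dictionary
seen_metadata = {
    "aabbcc...": {"name": "Some Linux ISO", "files": ["some-linux.iso"]},
    "ddeeff...": {"name": "Open dataset dump", "files": ["data.csv", "README"]},
}

inverted_index = defaultdict(set)  # keyword -> set of info-hashes

def tokenize(text):
    return [t for t in text.lower().replace(".", " ").split() if t]

for infohash, meta in seen_metadata.items():
    for token in tokenize(meta["name"] + " " + " ".join(meta["files"])):
        inverted_index[token].add(infohash)

def search(query):
    """Return the info-hashes matching every keyword in the query."""
    tokens = tokenize(query)
    if not tokens:
        return set()
    result = inverted_index[tokens[0]].copy()
    for token in tokens[1:]:
        result &= inverted_index[token]
    return result

print(search("linux iso"))  # -> {"aabbcc..."}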
For a deep understanding of the DHT and its applications, see Scott Wolchok's paper and presentation "Crawling BitTorrent DHTs for Fun and Profit". He presents the autonomous search-engine idea as a side note to his study of DHT security:
PDF of his paper:
https://www.usenix.org/legacy/event/woot10/tech/full_papers/Wolchok.pdf
His presentation at DEFCON 18 (parts 1 & 2)
http://www.youtube.com/watch?v=v4Q_F4XmNEc
http://www.youtube.com/watch?v=mO3DfLtKPGs
The method used in Section 3 seems to require a database to store all the torrent data. While its performance is better, it may not be a true DHT search engine.
Section 8, while less efficient, seems to describe a true DHT search engine, as long as the keywords are the stored values.
From Section 3, "Bootstrapping BitTorrent Search":
"The system handles user queries by treating the
concatenation of each torrent's filenames and description as a
document in the typical information retrieval model and using an
inverted index to match keywords to torrents. This has the advantage
of being well supported by popular open-source relational DBMSs. We
rank the search results according to the popularity of the torrent,
which we can infer from the number of peers listed in the DHT"
From Section 8, Related Work:
"The usual approach to distributing search using a DHT is with an inverted index, by storing each (keyword, list of matching documents) pair as a key-value pair in the DHT. Joung et al. [17] describe this approach and point out its performance problems: the Zipf distribution of keywords among files results in very skewed load balance, document information is replicated once for each keyword in the document, and it is difficult to rank documents in a distributed environment."
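As a toy illustration of the Section 8 approach (and of the load-skew problem the quote points out), here is a sketch in which the inverted index itself is stored as (keyword, list of matching info-hashes) key-value pairs. dht_put and dht_get are hypothetical stand-ins for a DHT client's store/lookup calls, with a plain dict playing the part of the network.

import hashlib

fake_dht = {}  # stands in for the distributed key-value store

def dht_put(key, value):
    fake_dht.setdefault(key, []).append(value)

def dht_get(key):
    return fake_dht.get(key, [])

def publish(infohash, name):
    # One store per keyword: a popular keyword like "linux" ends up with a huge
    # value list on whichever node owns that key -- the skewed load balance the
    # quoted passage warns about.
    for keyword in name.lower().split():
        dht_put(hashlib.sha1(keyword.encode()).digest(), infohash)

def keyword_search(keyword):
    return dht_get(hashlib.sha1(keyword.lower().encode()).digest())

publish("aabbcc...", "Some Linux ISO")
print(keyword_search("linux"))  # -> ["aabbcc..."]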
It can be divided into two steps.
First, use the bep_0005 protocol to collect info-hashes. You do not need to implement the whole protocol; find_node (request), get_peers (response) and announce_peer (response) are enough. Here is my open-source dhtspider.
Second, use the bep_0009 protocol to fetch the metainfo and index it. Here is my own BitTorrent search engine; it collects more than 3 million unique info-hashes and 500,000 usable metainfo records per day.
I am using the NWS REST API as my weather service for an app I am making. I was initially reluctant to use NWS because of its bad documentation, but I couldn't resist as it is offered completely free.
Now that I am trying to use it, I am running into some difficulty. When making a request for multiple days, the minimum temperature appears nil for several days.
(EDIT: As I have been testing the API more, I have found that it is not always the minimum temperatures that are nil. It can be a max temp or a precipitation value; it seems completely random. If you would like to make test calls using their web interface, you can do so here: http://graphical.weather.gov/xml/sample_products/browser_interface/ndfdBrowserByDay.htm
and here: http://graphical.weather.gov/xml/sample_products/browser_interface/ndfdXML.htm)
Here is an example of a request where the minimum temperatures are empty: http://graphical.weather.gov/xml/sample_products/browser_interface/ndfdBrowserClientByDay.php?listLatLon=40.863235,-73.714780&format=24%20hourly&numDays=7
Surprisingly, on their website, the minimum temperatures are available:
http://forecast.weather.gov/MapClick.php?textField1=40.83&textField2=-73.70
You'll see that the minimum temperatures section contains about 5 (sometimes fewer; it is inconsistent) blank fields that say <value xsi:nil="true"/>
If anybody can help me it would be greatly appreciated, using the NWS API can be a little overwhelming at times.
Thanks,
The nil values, from what I can understand of the documentation, here and here, simply indicate that the data is unavailable.
Without making assumptions about NOAA's data architecture, it's conceivable that the information available via the API may differ from what their website displays.
Missing values are represented by an empty element and xsi:nil="true" (R2.2.1).
The nil values being returned seem to be related to the time period. Notice the difference between the time-layout keys (see section 5.3.2 of the documentation) in these requests:
k-p24h-n7-1
k-p24h-n6-1
The data times are different.
<layout-key> element
The key is derived using the following convention:
“k” stands for key.
“p24h” implies a data period length of 24 hours.
“n7” means that the number of data times is 7.
“1” is a sequential number used to keep the layout keys unique.
Here, startDate is the deciding factor. Leaving it off widens the time window and might account for some of the requested data not yet being available.
Per documentation:
The beginning day for which you want NDFD data. If the string is empty, the start date is assumed to be the earliest available day in the database. This input is only needed if one wants to shorten the time window data is to be retrieved for (less than entire 7 days worth), e.g. if user wants data for days 2-5.
I'm not experiencing the randomness you mention. The folks on NOAA's Yahoo! Groups forum might be able to tell you more.
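If it helps with debugging, here is a minimal Python sketch that fetches the request from the question and lists which temperature values come back nil. The element and attribute names (temperature, type, time-layout, value) follow the DWML response quoted above, but treat them as assumptions rather than a definitive client.

import requests
import xml.etree.ElementTree as ET

URL = ("http://graphical.weather.gov/xml/sample_products/browser_interface/"
       "ndfdBrowserClientByDay.php")
params = {
    "listLatLon": "40.863235,-73.714780",
    "format": "24 hourly",
    "numDays": "7",
}

resp = requests.get(URL, params=params, timeout=30)
root = ET.fromstring(resp.content)

XSI = "{http://www.w3.org/2001/XMLSchema-instance}"

# Each temperature element carries a time-layout key and a list of <value>s;
# a missing datum is an empty <value> flagged with xsi:nil="true".
for temp in root.iter("temperature"):
    kind = temp.get("type")           # e.g. "minimum" or "maximum"
    layout = temp.get("time-layout")  # e.g. "k-p24h-n7-1"
    values = [
        None if v.get(XSI + "nil") == "true" else v.text
        for v in temp.findall("value")
    ]
    print(kind, layout, values)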