Neo4j : Enterprise version 3.2
I see a tremendous difference between the following two calls in terms of speed. Here are the settings and the query/API.
Page Cache : 16g | Heap : 16g
Number of rows/nodes -> 600K
Cypher code | Time taken: 50 sec.
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///xyx.csv' AS row
CREATE (n:ObjectTension) SET n = row
From Java (session pool, with 15 sessions at a time, as an example):
Thread_1 : Time Taken : 8 sec / 10K
Map<String, Object> pList = new HashMap<String, Object>();
Map<String, Object> params = new HashMap<String, Object>();
try (Transaction tx = Driver.session().beginTransaction()) {
    for (int i = 0; i < 10000; i++) {
        pList.put(Integer.toString(i), i * i);
        params.put("props", pList);
        String query = "CREATE (n:Label {props})";
        // String query = "CREATE (n:Label) SET n = {props}";
        tx.run(query, params);
    }
}
Thread_2 : Time taken is 9 sec / 10K
// identical code to Thread_1
...
Thread_3 : Basically the above code is reused. It's just an example.
Thread_N where N = (600K / 10K)
Hence, the overall time taken is around 2 to 3 minutes.
My questions are the following:
How does the CSV load work internally? Does it open a single session and run multiple transactions within it?
Or does it create multiple sessions based on the USING PERIODIC COMMIT 10000 parameter, i.e. 600K / 10000 = 60 sessions?
What's the best way to write via Java?
The idea is to achieve the same write performance via Java as the CSV load, which loads 12,000 nodes in ~5 seconds, or even better.
Your Java code is doing something very different from your Cypher code, so it really makes no sense to compare processing times.
You should change your Java code to read from the same CSV file. File IO is fairly expensive, but your Java code is not doing any.
Also, whereas your pure Cypher query creates nodes with a fixed (and presumably relatively small) number of properties, your Java pList grows with every loop iteration, so each Java iteration creates a node with anywhere from 1 to 10K properties! This may be the main reason why your Java code is much slower.
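For illustration, here is a rough, untested sketch of what reading the same CSV in Java might look like, assuming the Neo4j Java driver 1.x, a local bolt URI with placeholder credentials, and a simple comma-separated file with no quoted fields; it commits every 10K rows to mirror USING PERIODIC COMMIT 10000:
import org.neo4j.driver.v1.*;
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

public class CsvLoader {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust to your environment.
        Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
        try (BufferedReader reader = new BufferedReader(new FileReader("xyx.csv"));
             Session session = driver.session()) {
            String[] header = reader.readLine().split(","); // naive split; no quoted fields
            Transaction tx = session.beginTransaction();
            int count = 0;
            String line;
            while ((line = reader.readLine()) != null) {
                String[] cols = line.split(",");
                Map<String, Object> props = new HashMap<>();
                for (int i = 0; i < header.length; i++) {
                    props.put(header[i], cols[i]);
                }
                Map<String, Object> params = new HashMap<>();
                params.put("props", props);
                tx.run("CREATE (n:ObjectTension) SET n = {props}", params);
                if (++count % 10000 == 0) { // commit every 10K rows
                    tx.success();
                    tx.close();
                    tx = session.beginTransaction();
                }
            }
            tx.success();
            tx.close();
        }
        driver.close();
    }
}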
[UPDATE 1]
If you want to ignore the performance difference between using and not using a CSV file, the following (untested) code should give you an idea of what similar logic would look like in Java. In this example, the i loop assumes that your CSV file has 10 columns (you should adjust the loop to use the correct column count). Also, this example gives all the nodes the same properties, which is OK as long as you have not created a contrary uniqueness constraint.
Session session = Driver.session();
Map<String, Object> pList = new HashMap<String, Object>();
for (int i = 0; i < 10; i++) {
    pList.put(Integer.toString(i), i * i);
}
Map<String, Object> params = new HashMap<String, Object>();
params.put("props", pList);
String query = "CREATE (n:Label) SET n = {props}";
for (int j = 0; j < 60; j++) {
    try (Transaction tx = session.beginTransaction()) {
        for (int k = 0; k < 10000; k++) {
            tx.run(query, params);
        }
        tx.success();
    }
}
[UPDATE 2 and 3, copied from chat and then fixed]
Since the Cypher planner is able to optimize, the actual internal logic is probably a lot more efficient than the Java code I provided above. If you want to also optimize your Java code (which may be closer to the code that Cypher actually generates), try the following (untested) code. It sends 10000 rows of data in a single run() call, and uses the UNWIND clause to break them up into individual rows on the server.
Session session = Driver.session();
Map<String, Integer> pList = new HashMap<String, Integer>();
for (int i = 0; i < 10; i++) {
    pList.put(Integer.toString(i), i * i);
}
List<Map<String, Integer>> rows = Collections.nCopies(10000, pList);
Map<String, Object> params = new HashMap<String, Object>();
params.put("rows", rows);
String query = "UNWIND {rows} AS row CREATE (n:Label) SET n = row";
for (int j = 0; j < 60; j++) {
    try (Transaction tx = session.beginTransaction()) {
        tx.run(query, params);
        tx.success();
    }
}
You can try creating the nodes using the Java API, instead of relying on Cypher:
createNode - http://neo4j.com/docs/java-reference/current/javadocs/org/neo4j/graphdb/GraphDatabaseService.html#createNode-org.neo4j.graphdb.Label...-
setProperty - http://neo4j.com/docs/java-reference/current/javadocs/org/neo4j/graphdb/PropertyContainer.html#setProperty-java.lang.String-java.lang.Object-
Also, as the previous answer mentioned, the props variable has different values in your two cases.
Additionally, notice that you are performing query parsing on every iteration (String query = "CREATE (n:Label {props})";), unless it is optimized out by Neo4j itself.
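For example, a minimal (untested) sketch of hoisting the query string out of the loop, so that only the parameter values change per call:
Session session = Driver.session(); // as in the snippets above
String query = "CREATE (n:Label) SET n = {props}"; // built and parsed once, outside the loop
try (Transaction tx = session.beginTransaction()) {
    for (int i = 0; i < 10000; i++) {
        Map<String, Object> props = new HashMap<>();
        props.put("value", i * i); // fixed, small property set per node
        Map<String, Object> params = new HashMap<>();
        params.put("props", props);
        tx.run(query, params); // only the parameter values change per call
    }
    tx.success();
}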
I have been using the following URL to fetch historical data from yahoo finance for quite some time now but it stopped working as of yesterday.
https://ichart.finance.yahoo.com/table.csv?s=SPY
When browsing to this site it says:
Will be right back...
Thank you for your patience.
Our engineers are working quickly to resolve the issue.
However, since this issue has persisted since yesterday, I am starting to think that they have discontinued this service.
My SO search only pointed me to this topic, which was related to HTTPS though...
Is anyone else experiencing this issue?
How can I resolve this problem? Do they offer a different access to their historical data?
Yahoo has gone to a React front end, which means that if you analyze the request headers from the client to the backend, you can get the actual JSON they use to populate the client-side stores.
Hosts:
query1.finance.yahoo.com HTTP/1.0
query2.finance.yahoo.com HTTP/1.1
(difference between HTTP/1.0 & HTTP/1.1)
If you plan to use a proxy or persistent connections, use query2.finance.yahoo.com. But for the purposes of this post, the host used for the example URLs is not meant to imply anything about the path it's being used with.
Fundamental Data
(substitute your symbol for: AAPL)
/v10/finance/quoteSummary/AAPL?modules=
Inputs for the ?modules= query:
[
'assetProfile',
'summaryProfile',
'summaryDetail',
'esgScores',
'price',
'incomeStatementHistory',
'incomeStatementHistoryQuarterly',
'balanceSheetHistory',
'balanceSheetHistoryQuarterly',
'cashflowStatementHistory',
'cashflowStatementHistoryQuarterly',
'defaultKeyStatistics',
'financialData',
'calendarEvents',
'secFilings',
'recommendationTrend',
'upgradeDowngradeHistory',
'institutionOwnership',
'fundOwnership',
'majorDirectHolders',
'majorHoldersBreakdown',
'insiderTransactions',
'insiderHolders',
'netSharePurchaseActivity',
'earnings',
'earningsHistory',
'earningsTrend',
'industryTrend',
'indexTrend',
'sectorTrend']
Example URL: querying for all of the above modules
https://query2.finance.yahoo.com/v10/finance/quoteSummary/AAPL?modules=assetProfile%2CsummaryProfile%2CsummaryDetail%2CesgScores%2Cprice%2CincomeStatementHistory%2CincomeStatementHistoryQuarterly%2CbalanceSheetHistory%2CbalanceSheetHistoryQuarterly%2CcashflowStatementHistory%2CcashflowStatementHistoryQuarterly%2CdefaultKeyStatistics%2CfinancialData%2CcalendarEvents%2CsecFilings%2CrecommendationTrend%2CupgradeDowngradeHistory%2CinstitutionOwnership%2CfundOwnership%2CmajorDirectHolders%2CmajorHoldersBreakdown%2CinsiderTransactions%2CinsiderHolders%2CnetSharePurchaseActivity%2Cearnings%2CearningsHistory%2CearningsTrend%2CindustryTrend%2CindexTrend%2CsectorTrend
The %2C is the hex (URL-encoded) representation of , and needs to be inserted between each module you request (details about the hex encoding, if you care).
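If it helps, here is a rough Java sketch (untested; uses an arbitrary subset of the module names listed above) of building and fetching such a quoteSummary URL:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class QuoteSummary {
    public static void main(String[] args) throws Exception {
        String[] modules = {"assetProfile", "summaryDetail", "price"}; // any subset of the list above
        // %2C is the URL-encoded comma separating the requested modules.
        String url = "https://query2.finance.yahoo.com/v10/finance/quoteSummary/AAPL?modules="
                + String.join("%2C", modules);
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON response
            }
        }
    }
}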
Options contracts
/v7/finance/options/AAPL (current expiration)
/v7/finance/options/AAPL?date=1679011200 (March 17, 2023 expiration)
Example URL:
https://query2.finance.yahoo.com/v7/finance/options/AAPL (current expiration)
https://query2.finance.yahoo.com/v7/finance/options/AAPL?date=1679011200 (March 17, 2023 expiration)
Any valid future expiration represented as a UNIX timestamp can be used in the ?date= query. If you query for the current expiration the JSON response will contain a list of all the valid expirations that can be used in the ?date= query. (here is a post explaining converting human-readable dates to UNIX timestamp in Python)
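For instance, a small Java sketch of producing the ?date= value (assuming, as the example timestamp above suggests, that expirations are represented as midnight UTC of the expiration date):
import java.time.LocalDate;
import java.time.ZoneOffset;

public class OptionExpiration {
    public static void main(String[] args) {
        // Midnight UTC of the expiration date, as a UNIX timestamp.
        long ts = LocalDate.of(2023, 3, 17).atStartOfDay(ZoneOffset.UTC).toEpochSecond();
        System.out.println("https://query2.finance.yahoo.com/v7/finance/options/AAPL?date=" + ts);
        // prints ...?date=1679011200
    }
}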
Price
/v8/finance/chart/AAPL?symbol=AAPL&period1=0&period2=9999999999&interval=3mo
Possible inputs for &interval=: 1m, 5m, 15m, 30m, 90m, 1h, 1d, 5d, 1wk, 1mo, 3mo
m (minute) intervals are limited to 30 days, with period1 and period2 spanning a maximum of 7 days per request. Exceeding either of these limits will result in an error and will not round.
The h (hour) interval is limited to 730 days, with no limit on the span. Exceeding this will result in an error and will not round.
period1=: UNIX timestamp representation of the date you wish to start at.
d (day), wk (week), mo (month) intervals with values less than the initial trading date will be rounded up to the initial trading date.
period2=: UNIX timestamp representation of the date you wish to end at.
For all intervals: values greater than the last trading date will be rounded down to the most recent timestamp available.
Add pre & post market data
&includePrePost=true
Add dividends & splits
&events=div%7Csplit
%7C is hex for |. A , will work too, but internally Yahoo uses the pipe.
Example URL:
https://query1.finance.yahoo.com/v8/finance/chart/AAPL?symbol=AAPL&period1=0&period2=9999999999&interval=1d&includePrePost=true&events=div%7Csplit
The above request will return all price data for ticker AAPL on a 1-day interval including pre and post-market data as well as dividends and splits.
Note: the values used in the price example URL for period1= & period2= are to demonstrate the respective rounding behavior of each input.
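As a quick illustration, a minimal Java sketch (untested) that assembles a chart URL like the one above from epoch seconds and an interval:
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class ChartUrl {
    public static void main(String[] args) {
        String symbol = "AAPL";
        long period1 = Instant.now().minus(365, ChronoUnit.DAYS).getEpochSecond(); // one year back
        long period2 = Instant.now().getEpochSecond(); // now
        String url = "https://query1.finance.yahoo.com/v8/finance/chart/" + symbol
                + "?symbol=" + symbol
                + "&period1=" + period1
                + "&period2=" + period2
                + "&interval=1d"
                + "&includePrePost=true"
                + "&events=div%7Csplit"; // %7C = '|', separating the div and split events
        System.out.println(url);
    }
}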
It looks like they have started adding a required cookie, but you can retrieve this fairly easily, for example:
GET https://uk.finance.yahoo.com/quote/AAPL/history
Responds with the header in the form:
set-cookie:B=xxxxxxxx&b=3&s=qf; expires=Fri, 18-May-2018 00:00:00 GMT; path=/; domain=.yahoo.com
You should be able to read this and attach it to your .csv request:
GET https://query1.finance.yahoo.com/v7/finance/download/AAPL?period1=1492524105&period2=1495116105&interval=1d&events=history&crumb=tO1hNZoUQeQ
cookie: B=xxxxxxxx&b=3&s=qf;
Note the crumb query parameter, this seems to correspond to your cookie in some way. Your best bet is to scrape this from the HTML response to your initial GET request. Within that response, you can do a regex search for: "CrumbStore":\{"crumb":"(?<crumb>[^"]+)"\} and extract the crumb matched group.
It looks like once you have that crumb value though you can use it with the same cookie on any symbol/ticker for the next year meaning you shouldn't have to do the scrape too frequently.
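Putting the cookie and crumb together, a minimal, untested Java sketch of the download request might look like this (the cookie and crumb values are the placeholder examples from above; substitute the ones you scraped):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class YahooDownload {
    public static void main(String[] args) throws Exception {
        String cookie = "B=xxxxxxxx&b=3&s=qf"; // placeholder; from the Set-Cookie header
        String crumb = "tO1hNZoUQeQ";          // placeholder; scraped from the HTML
        String url = "https://query1.finance.yahoo.com/v7/finance/download/AAPL"
                + "?period1=1492524105&period2=1495116105&interval=1d&events=history"
                + "&crumb=" + crumb;
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.setRequestProperty("Cookie", cookie); // attach the B cookie from the scrape
        try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // CSV rows: Date,Open,High,Low,Close,Adj Close,Volume
            }
        }
    }
}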
To get current quotes just load:
https://query1.finance.yahoo.com/v8/finance/chart/AAPL?interval=2m
With:
AAPL substituted with your stock ticker
interval one of [1m, 2m, 5m, 15m, 30m, 60m, 90m, 1h, 1d, 5d, 1wk, 1mo, 3mo]
optional period1 query param with your epoch range start date e.g. period1=1510340760
optional period2 query param with your epoch range end date e.g. period2=1510663712
I managed to work out a .NET class to obtain a valid token (cookie and crumb) from Yahoo Finance.
For a complete API library for fetching historical data from the new Yahoo Finance, you may visit YahooFinanceAPI on GitHub.
Here is the class to grab the cookie and crumb
Token.cs
using System;
using System.Diagnostics;
using System.Net;
using System.IO;
using System.Text.RegularExpressions;
namespace YahooFinanceAPI
{
/// <summary>
/// Class for fetching token (cookie and crumb) from Yahoo Finance
/// Copyright Dennis Lee
/// 19 May 2017
///
/// </summary>
public class Token
{
public static string Cookie { get; set; }
public static string Crumb { get; set; }
private static Regex regex_crumb;
/// <summary>
/// Refresh cookie and crumb value from Yahoo Finance
/// </summary>
/// <param name="symbol">Stock ticker symbol</param>
/// <returns></returns>
public static bool Refresh(string symbol = "SPY")
{
try
{
Token.Cookie = "";
Token.Crumb = "";
string url_scrape = "https://finance.yahoo.com/quote/{0}?p={0}";
//url_scrape = "https://finance.yahoo.com/quote/{0}/history"
string url = string.Format(url_scrape, symbol);
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url);
request.CookieContainer = new CookieContainer();
request.Method = "GET";
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
string cookie = response.GetResponseHeader("Set-Cookie").Split(';')[0];
string html = "";
using (Stream stream = response.GetResponseStream())
{
html = new StreamReader(stream).ReadToEnd();
}
if (html.Length < 5000)
return false;
string crumb = getCrumb(html);
html = "";
if (crumb != null)
{
Token.Cookie = cookie;
Token.Crumb = crumb;
Debug.Print("Crumb: '{0}', Cookie: '{1}'", crumb, cookie);
return true;
}
}
}
catch (Exception ex)
{
Debug.Print(ex.Message);
}
return false;
}
/// <summary>
/// Get crumb value from HTML
/// </summary>
/// <param name="html">HTML code</param>
/// <returns></returns>
private static string getCrumb(string html)
{
string crumb = null;
try
{
//initialize on first time use
if (regex_crumb == null)
regex_crumb = new Regex("CrumbStore\":{\"crumb\":\"(?<crumb>.+?)\"}",
RegexOptions.CultureInvariant | RegexOptions.Compiled, TimeSpan.FromSeconds(5));
MatchCollection matches = regex_crumb.Matches(html);
if (matches.Count > 0)
{
crumb = matches[0].Groups["crumb"].Value;
}
else
{
Debug.Print("Regex no match");
}
//prevent regex memory leak
matches = null;
}
catch (Exception ex)
{
Debug.Print(ex.Message);
}
GC.Collect();
return crumb;
}
}
}
Updated 1 Jun 17
Credits to @Ed0906: modified the crumb regex pattern to Regex("CrumbStore\":{\"crumb\":\"(?<crumb>.+?)\"}")
For the Python lovers out there, I've updated yahooFinance.py in the tradingWithPython library.
There is also an example notebook based on the tips by Ed0906, demonstrating how to get the data step by step.
In this forum: https://forums.yahoo.net/t5/Yahoo-Finance-help/Is-Yahoo-Finance-API-broken/td-p/250503/page/3
Nixon said:
Hi All - This feature was discontinued by the Finance team and they will not be reintroducing that functionality.
The URL for downloading historical data is now something like this:
https://query1.finance.yahoo.com/v7/finance/download/SPY?period1=1492449771&period2=1495041771&interval=1d&events=history&crumb=9GaimFhz.WU
Note the above URL will not work for you or anyone else. You'll get something like this:
{
"finance": {
"error": {
"code": "Unauthorized",
"description": "Invalid cookie"
}
}
}
It seems that Yahoo is now using some hashing to prevent people from accessing the data like you did. The URL varies with each session, so it's very likely that you can't do this with a fixed URL anymore.
You'll need to do some scraping to get the correct URL from the main page, for example:
https://finance.yahoo.com/quote/SPY/history?p=SPY
I found another Yahoo site that does not require cookies, but generates JSON output: https://query1.finance.yahoo.com/v7/finance/chart/YHOO?range=2y&interval=1d&indicators=quote&includeTimestamps=true
It was pointed out from here: https://www.stock-data-solutions.com/kb/how-to-load-historical-prices-from-yahoo-finance-to-excel.htm
As it turned out, they seem to support 'period1' and 'period2' (in UNIX time) parameters, which can be used instead of 'range'.
String quoteSite = "https://query1.finance.yahoo.com/v7/finance/chart/"
+ symbolName + "?"
+ "period1=" + period1
+ "&period2=" + period2
+ "&interval=1d&indicators=quote&includeTimestamps=true";
And the following parses the JSON for me:
JSONObject topObj = new JSONObject(inp);
Object error = topObj.getJSONObject("chart").get("error");
if (!error.toString().equals("null")) {
System.err.println(error.toString());
return null;
}
JSONArray results = topObj.getJSONObject("chart").getJSONArray("result");
if (results == null || results.length() != 1) {
return null;
}
JSONObject result = results.getJSONObject(0);
JSONArray timestamps = result.getJSONArray("timestamp");
JSONObject indicators = result.getJSONObject("indicators");
JSONArray quotes = indicators.getJSONArray("quote");
if (quotes == null || quotes.length() != 1) {
return null;
}
JSONObject quote = quotes.getJSONObject(0);
JSONArray adjcloses = indicators.getJSONArray("adjclose");
if (adjcloses == null || adjcloses.length() != 1) {
return null;
}
JSONArray adjclose = adjcloses.getJSONObject(0).getJSONArray("adjclose");
JSONArray open = quote.getJSONArray("open");
JSONArray close = quote.getJSONArray("close");
JSONArray high = quote.getJSONArray("high");
JSONArray low = quote.getJSONArray("low");
JSONArray volume = quote.getJSONArray("volume");
I'm in the same boat. Getting there slowly. The download link on the historical prices page still works, so I added the export-cookies extension to Firefox, logged in to Yahoo, and dumped the cookies. I used the crumb value from an interactive session and was able to retrieve values. Here's part of a test Perl script that worked.
use Time::Local;
# create unix time variables for start and end date values: 1/1/2014 thru 12/31/2017
$p1= timelocal(0,0,0,1,0,114);
$p2= timelocal(0,0,0,31,11,117);
$symbol = 'AAPL';
# create variable for string to be executed as a system command
# cookies.txt exported from firefox
# crumb variable retrieved from yahoo download data link
$task = "wget --load-cookies cookies.txt --no-check-certificate -T 30 -O $symbol.csv \"https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$p1&period2=$p2&interval=1d&events=history&crumb=7WhHVu5N4e3\" ";
#show what we're executing
print $task;
# execute system command using backticks
`$task`;
#output is AAPL.csv
It'll take a while to automate what I do. Hopefully yahoo will simplify or give some guidance on it if they really intend for people to use it.
Fully working PHP example, based on this post and related sources:
function readYahoo($symbol, $tsStart, $tsEnd) {
preg_match('"CrumbStore\":{\"crumb\":\"(?<crumb>.+?)\"}"',
file_get_contents('https://uk.finance.yahoo.com/quote/' . $symbol),
$crumb); // can contain \uXXXX chars
if (!isset($crumb['crumb'])) return 'Crumb not found.';
$crumb = json_decode('"' . $crumb['crumb'] . '"'); // \uXXXX to UTF-8
foreach ($http_response_header as $header) {
if (0 !== stripos($header, 'Set-Cookie: ')) continue;
$cookie = substr($header, 14, strpos($header, ';') - 14); // after 'B='
} // cookie looks like "fkjfom9cj65jo&b=3&s=sg"
if (!isset($cookie)) return 'Cookie not found.';
$fp = fopen('https://query1.finance.yahoo.com/v7/finance/download/' . $symbol
. '?period1=' . $tsStart . '&period2=' . $tsEnd . '&interval=1d'
. '&events=history&crumb=' . $crumb, 'rb', FALSE,
stream_context_create(array('http' => array('method' => 'GET',
'header' => 'Cookie: B=' . $cookie))));
if (FALSE === $fp) return 'Can not open data.';
$buffer = '';
while (!feof($fp)) $buffer .= implode(',', fgetcsv($fp, 5000)) . PHP_EOL;
fclose($fp);
return $buffer;
}
Usage:
$csv = readYahoo('AAPL', mktime(0, 0, 0, 6, 2, 2017), mktime(0, 0, 0, 6, 3, 2017));
Python
I used this code to get the cookie (copied from fix-yahoo-finance):
import re
import requests

def get_yahoo_crumb_cookie():
    """Get Yahoo crumb cookie value."""
    res = requests.get('https://finance.yahoo.com/quote/SPY/history')
    yahoo_cookie = res.cookies['B']
    yahoo_crumb = None
    pattern = re.compile('.*"CrumbStore":\{"crumb":"(?P<crumb>[^"]+)"\}')
    for line in res.text.splitlines():
        m = pattern.match(line)
        if m is not None:
            yahoo_crumb = m.groupdict()['crumb']
    return yahoo_cookie, yahoo_crumb
then this code to get the response:
import time

cookie, crumb = get_yahoo_crumb_cookie()
params = {
'symbol': stock.symbol,
'period1': 0,
'period2': int(time.time()),
'interval': '1d',
'crumb': crumb,
}
url_price = 'https://query1.finance.yahoo.com/v7/finance/download/{symbol}'
response = requests.get(url_price, params=params, cookies={'B': cookie})
This looks nice as well http://blog.bradlucas.com/posts/2017-06-03-yahoo-finance-quote-download-python/
For Java lovers.
You can access your cookies from a URLConnection this way.
// "https://finance.yahoo.com/quote/SPY";
URLConnection con = url.openConnection();
...
for (Map.Entry<String, List<String>> entry : con.getHeaderFields().entrySet()) {
if (entry.getKey() == null
|| !entry.getKey().equals("Set-Cookie"))
continue;
for (String s : entry.getValue()) {
// store your cookie
...
}
}
Now you can search for the crumb in the Yahoo site:
String crumb = null;
InputStream inStream = con.getInputStream();
InputStreamReader irdr = new InputStreamReader(inStream);
BufferedReader rsv = new BufferedReader(irdr);
Pattern crumbPattern = Pattern.compile(".*\"CrumbStore\":\\{\"crumb\":\"([^\"]+)\"\\}.*");
String line = null;
while (crumb == null && (line = rsv.readLine()) != null) {
Matcher matcher = crumbPattern.matcher(line);
if (matcher.matches())
crumb = matcher.group(1);
}
rsv.close();
and finally, setting the cookie
String quoteUrl = "https://query1.finance.yahoo.com/v7/finance/download/IBM?period1=1493425217&period2=1496017217&interval=1d&events=history&crumb="
+ crumb
...
List<String> cookies = cookieStore.get(key);
if (cookies != null) {
for (String c: cookies)
con.setRequestProperty("Cookie", c);
}
...
con.connect();
I used a PHP script using fopen() to access the financial data; here are the snippets that I modified to get it back to work:
Creating the timestamps for start date and end date:
$timestampStart = mktime(0,0,0,$startMonth,$startDay,$startYear);
$timestampEnd = mktime(0,0,0,$endMonth,$endDay,$endYear);
Force fopen() to send the required cookie with hard coded values:
$cookie="YourCookieTakenFromYahoo";
$opts = array(
'http'=>array(
'method'=>"GET",
'header'=>"Accept-language: en\r\n" .
"Cookie: B=".$cookie."\r\n"
)
);
$context = stream_context_create($opts);
Use fopen() to get the csv file:
$ticker="TickerSymbol";
$crumb="CrumbValueThatMatchesYourCookieFromYahoo";
$handle = fopen("https://query1.finance.yahoo.com/v7/finance/download/".$ticker."?period1=".$timestampStart."&period2=".$timestampEnd."&interval=1d&events=history&crumb=".$crumb."", "r", false, $context);
Now you can do all the magic you did before inside this while loop:
while (!feof($handle) ) {
$line_of_text = fgetcsv($handle, 5000);
}
Make sure to set your own values for $ticker, $crumb and $cookie in the snippets above.
Follow Ed0906's approach on how to retrieve $crumb and $cookie.
I am the author of this service.
Basic info here.
Daily prices
You need to be familiar with RESTful services.
https://quantprice.herokuapp.com/api/v1.1/scoop/day?tickers=MSFT&date=2017-06-09
Historical prices
You have to provide a date range:
https://quantprice.herokuapp.com/api/v1.1/scoop/period?tickers=MSFT&begin=2012-02-19&end=2012-02-20
If you don't provide begin or end it will use the earliest or current date:
https://quantprice.herokuapp.com/api/v1.1/scoop/period?tickers=MSFT&begin=2012-02-19
Multiple tickers
You can just comma separate tickers:
https://quantprice.herokuapp.com/api/v1.1/scoop/period?tickers=IBM,MSFT&begin=2012-02-19
Rate limit
All requests are rate limited to 10 requests per hour. If you want to register for full API access, send me a DM on Twitter. You will receive an API key to add to the URL.
We are setting up a PayPal account for paid subscriptions without rate limits.
List of tickers available
https://github.com/robomotic/valueviz/blob/master/scoop_tickers.csv
I am also working to provide fundamental data and company data from EDGAR.
Cheers.
VBA
Here are some VBA functions that download and extract the cookie / crumb pair and return these in a Collection, and then use these to download the csv file contents for a particular code.
The containing project should have a reference to the 'Microsoft XML, v6.0' library added (other versions might be fine too, with some minor changes to the code).
Sub Test()
Dim X As Collection
Set X = FindCookieAndCrumb()
Debug.Print X!cookie
Debug.Print X!crumb
Debug.Print YahooRequest("AAPL", DateValue("31 Dec 2016"), DateValue("30 May 2017"), X)
End Sub
Function FindCookieAndCrumb() As Collection
' Tools - Reference : Microsoft XML, v6.0
Dim http As MSXML2.ServerXMLHTTP60
Dim cookie As String
Dim crumb As String
Dim url As String
Dim Pos1 As Long
Dim X As String
Set FindCookieAndCrumb = New Collection
Set http = New MSXML2.ServerXMLHTTP60
url = "https://finance.yahoo.com/quote/MSFT/history"
http.Open "GET", url, False
' http.setProxy 2, "https=127.0.0.1:8888", ""
' http.setRequestHeader "Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
' http.setRequestHeader "Accept-Encoding", "gzip, deflate, sdch, br"
' http.setRequestHeader "Accept-Language", "en-ZA,en-GB;q=0.8,en-US;q=0.6,en;q=0.4"
http.setRequestHeader "User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
http.send
X = http.responseText
Pos1 = InStr(X, "CrumbStore")
X = Mid(X, Pos1, 44)
X = Mid(X, 23, 44)
Pos1 = InStr(X, """")
X = Left(X, Pos1 - 1)
FindCookieAndCrumb.Add X, "Crumb"
'======================================
X = http.getResponseHeader("set-cookie")
Pos1 = InStr(X, ";")
X = Left(X, Pos1 - 1)
FindCookieAndCrumb.Add X, "Cookie"
End Function
Function YahooRequest(ShareCode As String, StartDate As Date, EndDate As Date, CookieAndCrumb As Collection) As String
' Tools - Reference : Microsoft XML, v6.0
Dim http As MSXML2.ServerXMLHTTP60
Dim cookie As String
Dim crumb As String
Dim url As String
Dim UnixStartDate As Long
Dim UnixEndDate As Long
Dim BaseDate As Date
Set http = New MSXML2.ServerXMLHTTP60
cookie = CookieAndCrumb!cookie
crumb = CookieAndCrumb!crumb
BaseDate = DateValue("1 Jan 1970")
If StartDate = 0 Then StartDate = BaseDate
UnixStartDate = (StartDate - BaseDate) * 86400
UnixEndDate = (EndDate - BaseDate) * 86400
url = "https://query1.finance.yahoo.com/v7/finance/download/" & ShareCode & "?period1=" & UnixStartDate & "&period2=" & UnixEndDate & "&interval=1d&events=history&crumb=" & crumb
http.Open "GET", url, False
http.setRequestHeader "Cookie", cookie
http.send
YahooRequest = http.responseText
End Function
For Excel/VBA users, I have used the suggestions above to develop a VBA method to extract historical prices from the updated Yahoo website. The key code snippets are listed below, and I have also provided my testing workbook.
First, a request to get the crumb and cookie values set before attempting to extract the price data from Yahoo.
Dim strUrl As String: strUrl = "https://finance.yahoo.com/lookup?s=%7B0%7D" 'Symbol lookup used to set the values
Dim objRequest As WinHTTP.WinHttpRequest
Set objRequest = New WinHttp.WinHttpRequest
With objRequest
.Open "GET", strUrl, True
.setRequestHeader "Content-Type", "application/x-www-form-urlencoded; charset=UTF-8"
.send
.waitForResponse
strCrumb = strExtractCrumb(.responseText)
strCookie = Split(.getResponseHeader("Set-Cookie"), ";")(0)
End With
See the following Yahoo Historical Price Extract link to my website for a sample file and more details on the method I have used to extract historical security prices from the Yahoo website
I was in the same boat. I managed to get the CSV downloaded from Yahoo with some VB.NET frankencode I made from bits and pieces off Google, SO, and some head-scratching.
However, I discovered Intrinio (look it up), signed up, and my free account gets me 500 historical data API calls a day, with much more data and much more accuracy than Yahoo. I rewrote my code for the Intrinio API, and I'm happy as a clam.
BTW, I don't work for or have anything to do with Intrinio, but they saved my butt big time...
You actually don't need to make two requests to get Yahoo data. I use this link: https://ca.finance.yahoo.com/quote/AAAP/history?period1=1474000669&period2=1505536669&interval=1d&filter=history&frequency=1d
You could grab the cookie from this, but instead the page already includes the data for your historical quote in JSON format. After I download the page, I scrape the JSON data out of it. This saves a URL request.
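For example, a rough Java sketch of pulling the embedded JSON out of the downloaded history page. Note that the store name "HistoricalPriceStore" is my assumption about how the React page embedded its data at the time, and the regex assumes the prices array contains only flat objects:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EmbeddedJson {
    // "HistoricalPriceStore" is an assumption about the embedded store's name;
    // adjust it if the page layout changes.
    private static final Pattern PRICES =
            Pattern.compile("\"HistoricalPriceStore\":\\{\"prices\":(\\[.*?\\])", Pattern.DOTALL);

    // Returns the raw JSON array of price rows, or null if not found.
    public static String extractPrices(String html) {
        Matcher m = PRICES.matcher(html);
        return m.find() ? m.group(1) : null; // non-greedy match ends at the first ']'
    }
}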
Javascript
Find the cookie:
match = document.cookie.match(new RegExp('B=([^;]+)'));
alert (match[1]);
Find the crumb:
i=document.body.innerHTML.search("CrumbStore")
if (i>=0) alert (document.body.innerHTML.substr(i+22,11))
Find the crumb for mobile:
i=document.body.innerHTML.search('USER={\"crumb\":');
if (i>=0) alert(document.body.innerHTML.substr(i+15,11));
and it's probably best to wait for the page (e.g. https://finance.yahoo.com/quote/goog) to load up first; you can check it with:
document.readyState
An alternative approach to those mentioned so far (Yahoo, Google, and Intrinio) is to get the historical data from Alpha Vantage for free. Their web service delivers intraday, daily, and adjusted stock prices, plus 50+ technical indicators. They even deliver straight to Excel, also for free, through Deriscope. (I am the author of the latter.)
If you are trying to connect to the yahooFinance API with Java, just add the following dependency:
<dependency>
<groupId>com.yahoofinance-api</groupId>
<artifactId>YahooFinanceAPI</artifactId>
<version>3.13.0</version>
</dependency>
For Python 3 users, change
url='https://chartapi.finance.yahoo.com/instrument/1.0/AAAP/chartdata;type=quote;range=10d/csv/'
to
url='https://query1.finance.yahoo.com/v7/finance/download/AAAP?period1=1494605670&period2=1495815270&interval=1d&events=history&crumb=IJ.ilcJlkrZ'
and
response = request.urlopen(url)
to
response = requests.get(url, cookies={'B': cookie})
The data is in response.text. The data format is totally different, but at least it's working fine for now.
There is a fix that I have found to work well. Please see my post:
Yahoo Finance API / URL not working: Python fix for Pandas DataReader, where I followed the steps in https://pypi.python.org/pypi/fix-yahoo-finance to run $ pip install fix_yahoo_finance --upgrade --no-cache-dir (and also upgraded pandas_datareader to be sure) and tested OK:
from pandas_datareader import data as pdr
import fix_yahoo_finance
data = pdr.get_data_yahoo('BHP.AX', start='2017-04-23', end='2017-05-24')
Also note that the order of the last two data columns is 'Adj Close' and 'Volume', so for my purposes I have reset the columns to the original order:
cols = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume', 'Adj Close']
data = data.reindex(columns=cols)
I've combined some of the above ideas that handle the crumb/cookie refresh, specifically from @Dennis, and created a VB.NET class that can be called like this:
Dim f = Await YahooFinanceFactory.CreateAsync
Dim items1 = Await f.GetHistoricalDataAsync("SPY", #1/1/2018#)
Dim items2 = Await f.GetHistoricalDataAsync("^FTSE", #1/1/2018#)
The class itself is here:
Imports System.Net
Imports System.Net.Http
Imports System.Text.RegularExpressions
Namespace YahooFinance
Public Class YahooHistoryPrice
Public Property [Date] As DateTime
Public Property Open As Double
Public Property High As Double
Public Property Low As Double
Public Property Close As Double
Public Property Volume As Double
Public Property AdjClose As Double
End Class
Public Class YahooFinanceFactory
Public Property Cookie As String
Public Property Crumb As String
Public Property CrumbUrl As String = "https://finance.yahoo.com/quote/{0}?p={0}"
Public Property DownloadUrl As String = "https://query1.finance.yahoo.com/v7/finance/download/{0}?period1={1}&period2={2}&interval=1d&events={3}&crumb={4}"
Public Property Timeout As Integer = 5
Public Property NoRefreshRetries As Integer = 10
Public Property NoDownloadRetries As Integer = 10
Private Property Regex_crumb As Regex
Public Shared Async Function CreateAsync(Optional noRefreshRetries As Integer = 10, Optional noDownloadRetries As Integer = 10, Optional timeout As Integer = 5, Optional crumbUrl As String = "https://finance.yahoo.com/quote/{0}?p={0}", Optional downloadUrl As String = "https://query1.finance.yahoo.com/v7/finance/download/{0}?period1={1}&period2={2}&interval=1d&events={3}&crumb={4}") As Task(Of YahooFinanceFactory)
Return Await (New YahooFinanceFactory With {
.NoRefreshRetries = noRefreshRetries,
.NoDownloadRetries = noDownloadRetries,
.Timeout = timeout,
.CrumbUrl = crumbUrl,
.DownloadUrl = downloadUrl
}).RefreshAsync()
End Function
Public Async Function GetHistoricalDataAsync(symbol As String, dateFrom As Date) As Task(Of IEnumerable(Of YahooHistoryPrice))
Dim count As Integer = 0
If Not IsValid Then
Throw New Exception("Invalid YahooFinanceFactory instance")
End If
Dim csvData = Await GetRawAsync(symbol, dateFrom, Now).ConfigureAwait(False)
If csvData IsNot Nothing Then
Return ParsePrice(csvData)
End If
Return Array.Empty(Of YahooHistoryPrice)
End Function
Public Async Function GetRawAsync(symbol As String, start As DateTime, [end] As DateTime) As Task(Of String)
Dim count = 0
While count < NoDownloadRetries
Try
Dim cookies = New CookieContainer
cookies.Add(New Cookie("B", If(Cookie.StartsWith("B="), Cookie.Substring(2), Cookie), "/", ".yahoo.com"))
Using handler = New HttpClientHandler With {.CookieContainer = cookies}
Using client = New HttpClient(handler) With {.Timeout = TimeSpan.FromSeconds(Timeout)}
Dim httpResponse = Await client.GetAsync(GetDownloadUrl(symbol, start)).ConfigureAwait(False)
Return Await httpResponse.Content.ReadAsStringAsync
End Using
End Using
Catch ex As Exception
If count >= NoDownloadRetries - 1 Then
Throw
End If
End Try
count += 1
End While
Throw New Exception("Retries exhausted")
End Function
Private Function ParsePrice(ByVal csvData As String) As IEnumerable(Of YahooHistoryPrice)
Dim lst = New List(Of YahooHistoryPrice)
Dim rows = csvData.Split(Convert.ToChar(10))
For i = 1 To rows.Length - 1
Dim row = rows(i)
If String.IsNullOrEmpty(row) Then
Continue For
End If
Dim cols = row.Split(","c)
If cols(1) = "null" Then
Continue For
End If
Dim itm = New YahooHistoryPrice With {.Date = DateTime.Parse(cols(0)), .Open = Convert.ToDouble(cols(1)), .High = Convert.ToDouble(cols(2)), .Low = Convert.ToDouble(cols(3)), .Close = Convert.ToDouble(cols(4)), .AdjClose = Convert.ToDouble(cols(5))}
If cols(6) <> "null" Then
itm.Volume = Convert.ToDouble(cols(6))
End If
lst.Add(itm)
Next
Return lst
End Function
Public ReadOnly Property IsValid() As Boolean
Get
Return Not String.IsNullOrWhiteSpace(Cookie) And Not String.IsNullOrWhiteSpace(Crumb)
End Get
End Property
Public Function GetDownloadUrl(symbol As String, dateFrom As Date, Optional eventType As String = "history") As String
Return String.Format(DownloadUrl, symbol, Math.Round(DateTimeToUnixTimestamp(dateFrom), 0), Math.Round(DateTimeToUnixTimestamp(Now.AddDays(-1)), 0), eventType, Crumb)
End Function
Public Function GetCrumbUrl(symbol As String) As String
Return String.Format(Me.CrumbUrl, symbol)
End Function
Public Function DateTimeToUnixTimestamp(dateTime As DateTime) As Double
Return (dateTime.ToUniversalTime() - New DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds
End Function
Private Async Function RefreshAsync(Optional symbol As String = "SPY") As Task(Of YahooFinanceFactory)
Dim count = 0
While count < NoRefreshRetries And Not IsValid
Try
Using client = New HttpClient With {.Timeout = TimeSpan.FromSeconds(Timeout)}
Dim httpResponse = Await client.GetAsync(GetCrumbUrl(symbol)).ConfigureAwait(False)
Me.Cookie = httpResponse.Headers.First(Function(f) f.Key = "Set-Cookie").Value.FirstOrDefault?.Split(";"c)(0)
Dim html = Await httpResponse.Content.ReadAsStringAsync
Me.Crumb = GetCrumb(html)
If Crumb IsNot Nothing Then
Return Me
End If
End Using
Catch ex As Exception
If count >= NoRefreshRetries - 1 Then
Cookie = ""
Crumb = ""
Throw
End If
End Try
count += 1
End While
Cookie = ""
Crumb = ""
Throw New Exception("Could not refresh YahooFinanceFactory")
End Function
Private Function GetCrumb(html As String) As String
Dim crumb As String = Nothing
If Regex_crumb Is Nothing Then
Regex_crumb = New Regex("CrumbStore"":{""crumb"":""(?<crumb>.+?)""}", RegexOptions.CultureInvariant Or RegexOptions.Compiled, TimeSpan.FromSeconds(5))
End If
Dim matches As MatchCollection = Regex_crumb.Matches(html)
If matches.Count > 0 Then
crumb = matches(0).Groups("crumb").Value
crumb = System.Text.RegularExpressions.Regex.Unescape(crumb)
Else
Throw New Exception("Regex no match")
End If
Return crumb
End Function
End Class
End Namespace
Why not use the ready-made one, which provides full access without malfunctioning:
import pandas as pd
from pandas_datareader import data as wb

tickers = ['AAPL']
new_data = pd.DataFrame()
for t in tickers:
    new_data[t] = wb.DataReader(t, data_source='yahoo', start='2004-1-1')['Adj Close']
    a = new_data[t]
It's possible to get current and historical data from the Google Finance API. It works very well for me.
I am trying to extract topics from 7 million Twitter posts. I have treated each tweet as a document, so I stored all the tweets in a file where each line (or tweet) is treated as a document. I used this file as the input file for the Mallet API.
public static void LDAModel(int numofK,int numbofIteration,int numberofThread,String outputDir,InstanceList instances) throws Exception
{
// Create a model with 100 topics, alpha_t = 0.01, beta_w = 0.01
// Note that the first parameter is passed as the sum over topics, while
// the second is the parameter for a single dimension of the Dirichlet prior.
int numTopics = numofK;
ParallelTopicModel model = new ParallelTopicModel(numTopics, 1.0, 0.01);
model.addInstances(instances);
// Use two parallel samplers, which each look at one half the corpus and combine
// statistics after every iteration.
model.setNumThreads(numberofThread);
// Run the model for 50 iterations and stop (this is for testing only,
// for real applications, use 1000 to 2000 iterations)
model.setNumIterations(numbofIteration);
model.estimate();
// Show the words and topics in the first instance
// The data alphabet maps word IDs to strings
Alphabet dataAlphabet = instances.getDataAlphabet();
FeatureSequence tokens = (FeatureSequence) model.getData().get(0).instance.getData();
LabelSequence topics = model.getData().get(0).topicSequence;
Formatter out = new Formatter(new StringBuilder(), Locale.US);
for (int position = 0; position < tokens.getLength(); position++) {
// out.format("%s-%d ", dataAlphabet.lookupObject(tokens.getIndexAtPosition(position)), topics.getIndexAtPosition(position));
out.format("%s-%d ", dataAlphabet.lookupObject(tokens.getIndexAtPosition(position)), topics.getIndexAtPosition(position));
}
System.out.println(out);
// Estimate the topic distribution of the first instance,
// given the current Gibbs state.
double[] topicDistribution = model.getTopicProbabilities(0);
// Get an array of sorted sets of word ID/count pairs
ArrayList<TreeSet<IDSorter>> topicSortedWords = model.getSortedWords();
// Show top 10 words in topics with proportions for the first document
String topicsoutput="";
for (int topic = 0; topic < numTopics; topic++) {
Iterator<IDSorter> iterator = topicSortedWords.get(topic).iterator();
out = new Formatter(new StringBuilder(), Locale.US);
out.format("%d\t%.3f\t", topic, topicDistribution[topic]);
int rank = 0;
while (iterator.hasNext() && rank < 10) {
IDSorter idCountPair = iterator.next();
out.format("%s (%.0f) ", dataAlphabet.lookupObject(idCountPair.getID()), idCountPair.getWeight());
//out.format("%s ", dataAlphabet.lookupObject(idCountPair.getID()));
rank++;
}
System.out.println(out);
topicsoutput += out + System.lineSeparator(); // accumulate per-topic output for the file written below
}
// Create a new instance with high probability of topic 0
StringBuilder topicZeroText = new StringBuilder();
Iterator<IDSorter> iterator = topicSortedWords.get(0).iterator();
int rank = 0;
while (iterator.hasNext() && rank < 10) {
IDSorter idCountPair = iterator.next();
topicZeroText.append(dataAlphabet.lookupObject(idCountPair.getID()) + " ");
rank++;
}
// Create a new instance named "test instance" with empty target and source fields.
InstanceList testing = new InstanceList(instances.getPipe());
testing.addThruPipe(new Instance(topicZeroText.toString(), null, "test instance", null));
TopicInferencer inferencer = model.getInferencer();
double[] testProbabilities = inferencer.getSampledDistribution(testing.get(0), 10, 1, 5);
System.out.println("0\t" + testProbabilities[0]);
File pathDir = new File(outputDir + File.separator+ "NumofTopics"+numTopics); //FIXME replace all strings with constants
pathDir.mkdir();
String DirPath = pathDir.getPath();
String stateFile = DirPath+File.separator+"output_state.gz";
String outputDocTopicsFile = DirPath+File.separator+"output_doc_topics.txt";
String topicKeysFile = DirPath+File.separator+"output_topic_keys";
PrintWriter writer=null;
String topicKeysFile_fromProgram = DirPath+File.separator+"output_topic";
try {
writer = new PrintWriter(topicKeysFile_fromProgram, "UTF-8");
writer.print(topicsoutput);
writer.close();
} catch (Exception e) {
e.printStackTrace();
}
model.printTopWords(new File(topicKeysFile), 11, false);
model.printDocumentTopics(new File (outputDocTopicsFile));
model.printState(new File (stateFile));
}
public static void main(String[] args) throws Exception{
// Begin by importing documents from text to feature sequences
ArrayList<Pipe> pipeList = new ArrayList<Pipe>();
// Pipes: lowercase, tokenize, remove stopwords, map to features
pipeList.add( new CharSequenceLowercase() );
pipeList.add( new CharSequence2TokenSequence(Pattern.compile("\\p{L}[\\p{L}\\p{P}]+\\p{L}")) );
pipeList.add( new TokenSequenceRemoveStopwords(new File("H:\\Data\\stoplists\\en.txt"), "UTF-8", false, false, false) );
pipeList.add( new TokenSequence2FeatureSequence() );
InstanceList instances = new InstanceList (new SerialPipes(pipeList));
Reader fileReader = new InputStreamReader(new FileInputStream(new File("E:\\Thesis Data\\DataForLDA\\freshnewData\\cleanTweets.txt")), "UTF-8");
instances.addThruPipe(new CsvIterator (fileReader, Pattern.compile("^(\\S*)[\\s,]*(\\S*)[\\s,]*(.*)$"),
3, 2, 1)); // data, label, name fields
int numberofTopic=5;
int numberofIteration=50;
int numberofThread=6;
String outputDir="J:\\Topics\\";
//int numberofTopic=5;
LDAModel(numberofTopic,numberofIteration,numberofThread,outputDir,instances);
TimeUnit.SECONDS.sleep(30);
numberofTopic=10; }
I have got three files from the above program:
1. the state file
2. the topic proportion file
3. the key topic list
I would like to find out the number of documents allocated to each topic.
For example, I got the following output from the key topic list file:
0.004 obama (5471) canada (5283) woman (5152) vote (4879) police (3965)
where the first column is the topic serial number, the second column is the topic weight, and the third column contains the words under this topic (with the word counts).
Here, I got the number of words under this topic, but I would also like to show the number of documents where I got this topic. It would be helpful to show this output in a separate file, for example:
Topic 1: doc1 (80%) doc2 (70%) .......
Could anyone please give some idea or any source code for this?
Thanks.
The information you are looking for is contained in the file "2. topic proportion" you mentioned. Note that every document contains each topic with some percentage (although the percentages may be large for one topic and extremely small for others). You will have to decide what you want to extract from the file: the dominant topic (it is in column 3), or the dominant topic only when its percentage is at least 50% (sometimes two topics have almost the same percentage), ...
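As a rough, untested sketch (assuming the doc-topics layout where each line is doc# source topic proportion topic proportion ... with the pairs sorted by decreasing proportion), counting documents per dominant topic might look like:
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

public class DominantTopicCounter {
    public static void main(String[] args) throws Exception {
        Map<Integer, Integer> docsPerTopic = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(new FileReader("output_doc_topics.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith("#") || line.trim().isEmpty()) continue; // skip header
                String[] cols = line.trim().split("\\s+");
                int dominantTopic = Integer.parseInt(cols[2]);   // 3rd column: best topic
                double proportion = Double.parseDouble(cols[3]); // its proportion
                if (proportion >= 0.5) {                         // optional 50% threshold
                    docsPerTopic.merge(dominantTopic, 1, Integer::sum);
                }
            }
        }
        docsPerTopic.forEach((topic, count) ->
                System.out.println("Topic " + topic + ": " + count + " documents"));
    }
}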