IBrokers R API and same day intraday prices

I think this is an IB API question more than an IBrokers R package one.
I am using reqHistoricalData to get 30-minute intraday historical data. The market is open, but I am not getting the same day's data; I only get yesterday's data.
Is it possible to get the same day's intraday bar data?
Here is the code I am using; it only returns data for the previous day, not the same day.
library(tidyverse)
library(IBrokers)
tws = twsConnect()
contract <- twsEquity('VOD','SMART')
VOD_intraday = IBrokers::reqHistoricalData(tws, Contract = contract, endDateTime = "20210408 13:24:28", barSize = "1 min", duration = "1 D")
VOD_intraday %>% as.data.frame() %>% rownames_to_column(var = "time") %>% arrange(desc(time)) %>% head()
It's 13:27 GMT on 2021-04-08 and London is open. And here is the response; it only gives data from 2021-04-07:
> contract <- twsEquity('VOD','SMART')
> VOD_intraday = IBrokers::reqHistoricalData(tws, Contract = contract, endDateTime = "20210408 13:24:28", barSize = "1 min", duration = "1 D")
waiting for TWS reply on VOD .... done.
> VOD_intraday %>% as.data.frame() %>% rownames_to_column(var = "time") %>% arrange(desc(time)) %>% head()
time VOD.Open VOD.High VOD.Low VOD.Close VOD.Volume VOD.WAP VOD.hasGaps VOD.Count
1 2021-04-07 20:59:00 18.96 18.98 18.95 18.98 1131 18.958 0 265
2 2021-04-07 20:58:00 18.96 18.96 18.95 18.96 90 18.957 0 42
3 2021-04-07 20:57:00 18.96 18.97 18.95 18.95 258 18.960 0 72
4 2021-04-07 20:56:00 18.96 18.96 18.95 18.95 124 18.959 0 58
5 2021-04-07 20:55:00 18.96 18.96 18.95 18.96 56 18.958 0 34
6 2021-04-07 20:54:00 18.95 18.96 18.95 18.95 26 18.951 0 12
Instead of VOD, you can use SPY, MSFT or any US security while the US market is open.
Edit: It turns out you need a real-time market data subscription to get same-day data. The answer below works.

One has to specify the ending time, or leave it blank to get the most recent data available.
Try this:
VOD_intraday = IBrokers::reqHistoricalData(tws, Contract = contract, endDateTime = "", barSize = "1 min", duration = "1 D")
Here's the execution when I run it:
> library(tidyverse)
> library(IBrokers)
IBrokers version 0.9-10. Implementing API Version 9.64
IBrokers comes with NO WARRANTY. Not intended for production use!
See ?IBrokers for details.
> tws = twsConnect()
> contract <- twsEquity('SPY','SMART')
> VOD_intraday = IBrokers::reqHistoricalData(tws, Contract = contract, endDateTime = "", barSize = "1 min", duration = "1 D")
waiting for TWS reply on SPY ........... done.
> head(VOD_intraday)
SPY.Open SPY.High SPY.Low SPY.Close SPY.Volume SPY.WAP SPY.hasGaps SPY.Count
2021-04-08 07:30:00 407.93 407.98 407.68 407.80 5042 407.846 0 1709
2021-04-08 07:31:00 407.81 408.00 407.74 407.98 1615 407.844 0 1065
2021-04-08 07:32:00 407.99 408.05 407.81 407.90 2451 407.932 0 1560
2021-04-08 07:33:00 407.89 407.98 407.88 407.95 2353 407.932 0 1300
2021-04-08 07:34:00 407.95 407.97 407.81 407.81 1708 407.907 0 1012
2021-04-08 07:35:00 407.82 407.86 407.61 407.67 2729 407.726 0 1458
And for symbol VOD:
> contract <- twsEquity('VOD','SMART')
> VOD_intraday = IBrokers::reqHistoricalData(tws, Contract = contract, endDateTime = "", barSize = "1 min", duration = "1 D")
waiting for TWS reply on VOD .... done.
> head(VOD_intraday)
VOD.Open VOD.High VOD.Low VOD.Close VOD.Volume VOD.WAP VOD.hasGaps VOD.Count
2021-04-08 07:30:00 18.95 18.95 18.91 18.92 246 18.921 0 49
2021-04-08 07:31:00 18.91 18.91 18.90 18.90 69 18.905 0 31
2021-04-08 07:32:00 18.90 18.90 18.87 18.87 237 18.881 0 44
2021-04-08 07:33:00 18.87 18.87 18.86 18.87 45 18.870 0 20
2021-04-08 07:34:00 18.87 18.87 18.85 18.86 173 18.860 0 57
2021-04-08 07:35:00 18.86 18.87 18.85 18.86 39 18.859 0 19

Related

Print and save to excel issues

I have the following script and I have problems with printing and saving. Any ideas or help are welcome.
import pandas as pd
import numpy as np
df = pd.DataFrame({'A': [3513, 3514, 3517],
                   'B': ['lname1', 'lname2', 'lname3'],
                   'C': ['fname1', 'fname2', 'fname3'],
                   }, index=np.arange(3, dtype=int))
def vamos(df):
    for x in df.index:
        s = df.loc[x, 'A']
        digits = list(map(int, str(s)))
        Sum = sum(digits)
        df = df.assign(column_2=Sum)
        df['column_3'] = 20 - Sum
        print(df)
    df.to_excel("book_Sum.xlsx")
if __name__ == '__main__':
    vamos(df)
This is what I get with print(df):
A B C column_2 column_3
0 3513 lname1 fname1 12 8
1 3514 lname2 fname2 12 8
2 3517 lname3 fname3 12 8
A B C column_2 column_3
0 3513 lname1 fname1 13 7
1 3514 lname2 fname2 13 7
2 3517 lname3 fname3 13 7
A B C column_2 column_3
0 3513 lname1 fname1 16 4
1 3514 lname2 fname2 16 4
2 3517 lname3 fname3 16 4
And this is what I get when I save to Excel with df.to_excel("book_Sum.xlsx"):
A B C column_2 column_3
0 3513 lname1 fname1 16 4
1 3514 lname2 fname2 16 4
2 3517 lname3 fname3 16 4
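If the intent is one digit sum per row (which the loop suggests), the print shows three different frames because df is rebound with a single scalar Sum on every pass, and only the last state reaches to_excel. A vectorized version, shown below as a sketch based on the column names above rather than code from the original post, gives each row its own sum and writes the same values to Excel:
import pandas as pd

df = pd.DataFrame({'A': [3513, 3514, 3517],
                   'B': ['lname1', 'lname2', 'lname3'],
                   'C': ['fname1', 'fname2', 'fname3']})

# digit sum of column A, computed row by row
df['column_2'] = df['A'].astype(str).apply(lambda s: sum(int(d) for d in s))
df['column_3'] = 20 - df['column_2']

print(df)
df.to_excel("book_Sum.xlsx")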

Define a function: Fibonacci Sequence

Define a function to implement the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, 21, 34. Please use the function to output the first 20 numbers of the Fibonacci sequence.
Here is a Python implementation:
def fib(n):
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a+b
    print()
fib(5000)
Output
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181
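Since the task asks for the first 20 numbers by count rather than by value, a count-based variant (a sketch, not part of the original answer) stops after a fixed number of terms:
def fib_count(count):
    # print the first `count` Fibonacci numbers, starting from 0
    a, b = 0, 1
    for _ in range(count):
        print(a, end=' ')
        a, b = b, a + b
    print()

fib_count(20)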
A recursive implementation with memoization:
memo = [-1] * 21
memo[0] = 0
memo[1] = 1
print(memo[0], end=' ')
print(memo[1], end=' ')
def fibrec(n):
    if memo[n] == -1:
        memo[n] = fibrec(n-2) + fibrec(n-1)
        print(memo[n], end=' ')
    return memo[n]
fibrec(20)
Output
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765
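A functools.lru_cache variant (again a sketch, not from the original answer) gives the same memoization without sizing the memo table by hand:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # memoized recursion; each value is computed only once
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(' '.join(str(fib_memo(i)) for i in range(21)))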

Extract text from wrk output

I'm running a load test with wrk2 as a job on Jenkins. I'd like to send the results of the load test to Graylog but I only want to store the Requests/Sec and average latency.
Here's what the output looks like:
Running 30s test @ https://example.com
1 threads and 100 connections
Thread calibration: mean lat.: 8338.285ms, rate sampling interval: 19202ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 16.20s 6.17s 29.64s 65.74%
Req/Sec 5.00 0.00 5.00 100.00%
Latency Distribution (HdrHistogram - Recorded Latency)
50.000% 15.72s
75.000% 20.81s
90.000% 24.58s
99.000% 29.13s
99.900% 29.66s
99.990% 29.66s
99.999% 29.66s
100.000% 29.66s
Detailed Percentile spectrum:
Value Percentile TotalCount 1/(1-Percentile)
4497.407 0.000000 1 1.00
7561.215 0.100000 11 1.11
11100.159 0.200000 22 1.25
12582.911 0.300000 33 1.43
14565.375 0.400000 44 1.67
15720.447 0.500000 54 2.00
16416.767 0.550000 60 2.22
17301.503 0.600000 65 2.50
18464.767 0.650000 71 2.86
19185.663 0.700000 76 3.33
20807.679 0.750000 81 4.00
21479.423 0.775000 84 4.44
22347.775 0.800000 87 5.00
22527.999 0.825000 90 5.71
23216.127 0.850000 93 6.67
23478.271 0.875000 95 8.00
23805.951 0.887500 96 8.89
24723.455 0.900000 98 10.00
25067.519 0.912500 99 11.43
25395.199 0.925000 101 13.33
26525.695 0.937500 102 16.00
26525.695 0.943750 102 17.78
26705.919 0.950000 103 20.00
28065.791 0.956250 104 22.86
28065.791 0.962500 104 26.67
28377.087 0.968750 105 32.00
28377.087 0.971875 105 35.56
28475.391 0.975000 106 40.00
28475.391 0.978125 106 45.71
28475.391 0.981250 106 53.33
29130.751 0.984375 107 64.00
29130.751 0.985938 107 71.11
29130.751 0.987500 107 80.00
29130.751 0.989062 107 91.43
29130.751 0.990625 107 106.67
29655.039 0.992188 108 128.00
29655.039 1.000000 108 inf
#[Mean = 16199.756, StdDeviation = 6170.105]
#[Max = 29638.656, Total count = 108]
#[Buckets = 27, SubBuckets = 2048]
----------------------------------------------------------
130 requests in 30.02s, 13.44MB read
Socket errors: connect 0, read 0, write 0, timeout 1192
Requests/sec: 4.33
Transfer/sec: 458.47KB
Does anyone know how I could go about extracting Requests/sec (at the bottom) and the latency average to send as JSON parameters?
The expected output would be: "latency": 16.2, "requests_per_second": 4.33
You didn't provide the expected output, so your question isn't clear, but is this what you want?
$ awk 'BEGIN{a["Latency"]; a["Requests/sec:"]} ($1 in a) && ($2 ~ /[0-9]/){print $1, $2}' file
Latency 16.20s
Requests/sec: 4.33
Updated based on your adding the expected output to your question:
$ awk '
BEGIN { map["Latency"]="latency"; map["Requests/sec:"]="requests_per_second" }
($1 in map) && ($2 ~ /[0-9]/) { printf "%s\"%s\": %s", ofs, map[$1], $2+0; ofs=", " }
END { print "" }
' file
"latency": 16.2, "requests_per_second": 4.33

Parsing Blocks of Data in REBOL

I have (games scores) data in this format:
Hotspurs Giants 356 6 275 4 442 3
Fierce Lions Club 371 3 2520 5 0 4
Mountain Tigers 2519 2 291 6 342 1
Shooting Stars Club 2430 5 339 1 2472 2
Gun Tooters 329 4 2512 2 2470 6
Banshee Wolves 301 1 2436 3 412 5
The first two/three words represent the club's name; thereafter follow 6 blocks of data per row, which represent the club's round-by-round scores and opponent index (starting from 1). In the data above, 3 rounds have been played by each team. Hotspurs Giants (index 1) played Banshee Wolves (6) in the 1st round, scoring 356 to Banshee's 301; in round 2 Hotspurs Giants played Shooting Stars Club (4), scoring 275 - 339; and in round 3 played Mountain Tigers (3), scoring 442 to the Tigers' 342.
My question is how to parse these blocks of data in the most efficient way possible, such that each club's data will be in the format below, considering that a club's name may comprise two or more words.
Viz
[Club Round Score Opponent Opponent-Score] for each club
Assuming data is:
data: [
Hotspurs Giants 356 6 275 4 442 3
Fierce Lions Club 371 3 2520 5 0 4
Mountain Tigers 2519 2 291 6 342 1
Shooting Stars Club 2430 5 339 1 2472 2
Gun Tooters 329 4 2512 2 2470 6
Banshee Wolves 301 1 2436 3 412 5
]
I think this solves the problem; please check the result:
clubs: copy []
parse data [
    some [
        copy club some word!
        copy numbers some number!
        (append clubs reduce [form club numbers])
        |
        skip
    ]
]
new-line/all/skip clubs yes 2
list: copy []
parse clubs [
    some [
        set club string! into [
            copy numbers some number! (
                i: 1
                foreach [score index] numbers [
                    append list reduce [
                        club score
                        pick clubs index * 2 - 1
                        pick pick clubs index * 2 i
                    ]
                    i: i + 2
                ]
            )
        ]
        | skip
    ]
]
new-line/all/skip list yes 4
Afterwards if you probe clubs you should get:
CLUBS is a block of value: [
"Hotspurs Giants" [356 6 275 4 442 3]
"Fierce Lions Club" [371 3 2520 5 0 4]
"Mountain Tigers" [2519 2 291 6 342 1]
"Shooting Stars Club" [2430 5 339 1 2472 2]
"Gun Tooters" [329 4 2512 2 2470 6]
"Banshee Wolves" [301 1 2436 3 412 5]
]
And if you probe list the output is:
LIST is a block of value: [
"Hotspurs Giants" 356 "Banshee Wolves" 301
"Hotspurs Giants" 275 "Shooting Stars Club" 339
"Hotspurs Giants" 442 "Mountain Tigers" 342
"Fierce Lions Club" 371 "Mountain Tigers" 2519
"Fierce Lions Club" 2520 "Gun Tooters" 2512
"Fierce Lions Club" 0 "Shooting Stars Club" 2472
"Mountain Tigers" 2519 "Fierce Lions Club" 371
"Mountain Tigers" 291 "Banshee Wolves" 2436
"Mountain Tigers" 342 "Hotspurs Giants" 442
"Shooting Stars Club" 2430 "Gun Tooters" 329
"Shooting Stars Club" 339 "Hotspurs Giants" 275
"Shooting Stars Club" 2472 "Fierce Lions Club" 0
"Gun Tooters" 329 "Shooting Stars Club" 2430
"Gun Tooters" 2512 "Fierce Lions Club" 2520
"Gun Tooters" 2470 "Banshee Wolves" 412
"Banshee Wolves" 301 "Hotspurs Giants" 356
"Banshee Wolves" 2436 "Mountain Tigers" 291
"Banshee Wolves" 412 "Gun Tooters" 2470
]
Here is one example (using Rebol 3) showing how this could be done:
club-data: map [] ; storing the data in a hash map is one option
foreach line read/lines %games-scores.txt [
    fields: split line space
    ; let's take the last 6 columns of data
    scores: reverse collect [loop 6 [keep to-integer take/last fields]]
    ; and what's left is the club name
    club-name: form fields
    ; build the club data
    club-data/(club-name): scores
]
The above assumes the data is in the file games-scores.txt and returns a MAP! (hash map) called club-data, where your club data would look like this:
make map! [
"Hotspurs Giants" [356 6 275 4 442 3]
"Fierce Lions Club" [371 3 2520 5 0 4]
"Mountain Tigers" [2519 2 291 6 342 1]
"Shooting Stars Club" [2430 5 339 1 2472 2]
"Gun Tooters" [329 4 2512 2 2470 6]
"Banshee Wolves" [301 1 2436 3 412 5]
]
One caveat... READ/LINES will load the whole file into memory, so if games-scores.txt is big you should look at using OPEN instead and read one line at a time.
Update, re: your comment, here is the same example in Rebol 2 [tested in REBOL/Core 2.7.8.2.5 (2-Jan-2011)]:
club-data: make hash! [] ; of course doesn't have to be hash!
foreach line read/lines %games-scores.txt [
    fields: parse line none
    scores: reverse collect [loop 6 [keep to-integer take/last fields]]
    club-name: form fields
    append club-data reduce [club-name scores]
]

Trying to simulate a neural network in MATLAB by myself

I tried to create a neural network to estimate y = x ^ 2. So I created a fitting neural network and gave it some samples of input and output. I tried to build this network in C++, but the result is different from what I expected.
With the following inputs:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 -1
-2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71
and the following outputs:
0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400
441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296
1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500
2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096
4225 4356 4489 4624 4761 4900 5041 1 4 9 16 25 36 49 64 81 100 121 144
169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841
900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849
1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249
3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041
I used the fitting tool network with matrix rows. Training is 70%, validation is 15%, and testing is 15%. The number of hidden neurons is two. Then on the command line I wrote this:
purelin(net.LW{2}*tansig(net.IW{1}*inputTest+net.b{1})+net.b{2})
Other information:
My net.b[1] is: -1.16610230053776 1.16667147712026
My net.b[2] is: 51.3266249426358
And net.IW(1) is: 0.344272596370387 0.344111217766824
net.LW(2) is: 31.7635369693519 -31.8082184881063
When my inputTest is 3, the result of this command is 16, while it should be about 9. Have I made an error somewhere?
I found the Stack Overflow post Neural network in MATLAB that contains a problem like mine, but there is a small difference: in that problem the ranges of the input and output are the same, but in my problem they are not. That solution says I need to scale the results, but how can I scale my result?
You are right about scaling. As was mentioned in the linked answer, the neural network by default scales the input and output to the range [-1,1]. This can be seen in the network processing functions configuration:
>> net = fitnet(2);
>> net.inputs{1}.processFcns
ans =
'removeconstantrows' 'mapminmax'
>> net.outputs{2}.processFcns
ans =
'removeconstantrows' 'mapminmax'
The second preprocessing function applied to both input/output is mapminmax with the following parameters:
>> net.inputs{1}.processParams{2}
ans =
ymin: -1
ymax: 1
>> net.outputs{2}.processParams{2}
ans =
ymin: -1
ymax: 1
to map both into the range [-1,1] (prior to training).
This means that the trained network expects input values in this range, and outputs values also in the same range. If you want to manually feed input to the network, and compute the output yourself, you have to scale the data at input, and reverse the mapping at the output.
One last thing to remember is that each time you train the ANN, you will get different weights. If you want reproducible results, you need to fix the state of the random number generator (initialize it with the same seed each time). Read the documentation on functions like rng and RandStream.
You also have to pay attention that if you are dividing the data into training/testing/validation sets, you must use the same split each time (probably also affected by the randomness aspect I mentioned).
Here is an example to illustrate the idea (adapted from another post of mine):
%%# data
x = linspace(-71,71,200); %# 1D input
y_model = x.^2; %# model
y = y_model + 10*randn(size(x)).*x; %# add some noise
%%# create ANN, train, simulate
net = fitnet(2); %# one hidden layer with 2 nodes
net.divideFcn = 'dividerand';
net.trainParam.epochs = 50;
net = train(net,x,y);
y_hat = net(x);
%%# plot
plot(x, y, 'b.'), hold on
plot(x, x.^2, 'Color','g', 'LineWidth',2)
plot(x, y_hat, 'Color','r', 'LineWidth',2)
legend({'data (noisy)','model (x^2)','fitted'})
hold off, grid on
%%# manually simulate network
%# map input to [-1,1] range
[~,inMap] = mapminmax(x, -1, 1);
in = mapminmax('apply', x, inMap);
%# propagate values to get output (scaled to [-1,1])
hid = tansig( bsxfun(@plus, net.IW{1}*in, net.b{1}) ); %# hidden layer
outLayerOut = purelin( net.LW{2}*hid + net.b{2} ); %# output layer
%# reverse mapping from [-1,1] to original data scale
[~,outMap] = mapminmax(y, -1, 1);
out = mapminmax('reverse', outLayerOut, outMap);
%# compare against MATLAB output
max( abs(out - y_hat) ) %# this should be zero (or in the order of `eps`)
I opted to use the mapminmax function, but you could have done that manually as well. The formula is a pretty simple linear mapping:
y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
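For instance, with the inputs above spanning [-71, 71] and ymin = -1, ymax = 1, an inputTest of 3 maps to (1 - (-1))*(3 - (-71))/(71 - (-71)) + (-1) ≈ 0.042 before it reaches tansig, and the purelin output then has to be reverse-mapped from [-1, 1] back to the output range (roughly [0, 5041] for this data) before it can be compared with the expected 9.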
