rCharts - highcharts speed

A general question on the speed of (rCharts) Highcharts rendering.
Given the following code:
rm(list = ls())
require(rCharts)
set.seed(2)

time_stamp <- seq(from = as.POSIXct("2014-05-20 01:00", tz = ""),
                  to = as.POSIXct("2014-05-22 20:00", tz = ""),
                  by = "1 min")
Data1 <- abs(rnorm(length(time_stamp)) * 50)
Data2 <- rnorm(length(time_stamp))
time <- as.numeric(time_stamp) * 1000  # Highcharts datetime axes expect epoch milliseconds
CombData <- data.frame(time, Data1, Data2)
CombData$Data1 <- round(CombData$Data1, 2)
CombData$Data2 <- round(CombData$Data2, 2)

HCGraph <- Highcharts$new()
HCGraph$yAxis(list(list(title = list(text = 'Data1')),
                   list(title = list(text = 'Data2'), opposite = TRUE)))
HCGraph$series(data = toJSONArray2(CombData[, c('time', 'Data1')], json = F, names = F),
               enableMouseTracking = FALSE, shadow = FALSE, name = "Data1", type = "line")
HCGraph$series(data = toJSONArray2(CombData[, c('time', 'Data2')], json = F, names = F),
               enableMouseTracking = FALSE, shadow = FALSE, name = "Data2", type = "line", yAxis = 1)
HCGraph$xAxis(type = "datetime")
HCGraph$chart(zoomType = "x")
HCGraph$plotOptions(column = list(animation = FALSE), shadow = FALSE,
                    line = list(marker = list(enabled = FALSE)))
HCGraph
This produces a Highcharts graph of two series, each 4021 points long, and it renders immediately.
However, if I increase the timespan to, say, 10 days (8341 points), the resulting plot can take several minutes to generate.
I'm aware there are several modifications that can be made on the Highcharts side for better performance (see Highcharts Performance Enhancement Method?), but are there any changes I can make from an R / rCharts perspective to speed up rendering?
Cheers
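One thing worth trying from R (a hedged sketch, not a confirmed fix): rCharts passes plotOptions straight through to Highcharts, so the usual performance switches can be set once at the top "series" level, where they apply to every series type, instead of per chart type as in the code above. These are all documented Highcharts options; whether they remove the slowdown for 8000+ points here is untested.
# Sketch: apply the standard Highcharts performance switches globally.
HCGraph$plotOptions(series = list(
  animation = FALSE,              # skip the initial draw animation
  shadow = FALSE,
  enableMouseTracking = FALSE,    # no per-point hover handling
  marker = list(enabled = FALSE)  # no per-point markers
))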

Related

How do I derive a statistical power value using Microsoft Excel or Google Sheets?

I am trying to reverse-engineer hypothesis test results from an online calculator using Microsoft Excel or Google Sheets. The inputs/outputs for the online calculator are shown in the screenshot.
I have used the following Excel functions to replicate the conversion rates and p-value:
control_exposures = 917
control_conversions = 126
variant_exposures = 1002
variant_conversions = 142
control_cvr_rate = control_conversions/control_exposures = 13.74%
variant_cvr_rate = variant_conversions/variant_exposures = 14.17%
control_std_error = sqrt((control_cvr_rate*(1-control_cvr_rate)/control_exposures)) = 1.14%
variant_std_error = sqrt((variant_cvr_rate*(1-variant_cvr_rate)/variant_exposures)) = 1.10%
z_score = (control_cvr_rate - variant_cvr_rate)/sqrt(power(control_std_error,2) + power(variant_std_error,2)) = -0.2724
p_value = normdist(z_score,0,1,TRUE) = 0.3927
Based on this, how do I derive the statistical power value of 5.9% shown in the screenshot?
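One derivation that reproduces 5.9% (a sketch, assuming the calculator reports post-hoc power for a two-sided test at alpha = 0.05, treating the observed difference as the true effect):
\[
\text{power} = \Phi(|z| - z_{0.975}) + \Phi(-|z| - z_{0.975})
             = \Phi(0.2724 - 1.96) + \Phi(-0.2724 - 1.96)
             \approx 0.0457 + 0.0128 \approx 0.059
\]
In the same Excel style as above (normsinv(0.975) = 1.96): power = normdist(abs(z_score)-normsinv(0.975),0,1,TRUE) + normdist(-abs(z_score)-normsinv(0.975),0,1,TRUE) ≈ 5.9%.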

How do I use TimeSeriesSplit in xgb.cv?

I am trying to run XGBoost for time-series analysis. This is my code, which I have used elsewhere:
xgb1 = xgb.XGBRegressor(learning_rate=0.1, n_estimators=n_estimators,
                        max_depth=max_depth, min_child_weight=min_child_weight,
                        gamma=0, subsample=0.8, colsample_bytree=0.8,
                        reg_alpha=reg_alpha, objective='reg:squarederror',
                        nthread=4, scale_pos_weight=1, seed=27)
xgb_param = xgb1.get_xgb_params()
dmatrix = xgb.DMatrix(data=X_train, label=y_train)
cv_folds = 5
early_stopping_rounds = 50
cvresults = xgb.cv(dtrain=dmatrix, params=xgb_param,
                   num_boost_round=xgb1.get_params()['n_estimators'],
                   nfold=cv_folds, metrics='rmse',
                   early_stopping_rounds=early_stopping_rounds)
The obvious issue here is that I want to cross-validate time-series data, and hence can't use the random folds produced by nfold=cv_folds.
(How) can I use the TimeSeriesSplit function within xgb.cv?
Thanks,
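A sketch of one way to do it: xgb.cv accepts precomputed splits through its folds argument, so you can hand it the (train, test) index pairs that TimeSeriesSplit generates instead of letting it build nfold random folds. (X_train, xgb_param, dmatrix, xgb1 and early_stopping_rounds are taken from the code above.)
import xgboost as xgb
from sklearn.model_selection import TimeSeriesSplit

# Build expanding-window folds over the (time-ordered) training rows.
tscv = TimeSeriesSplit(n_splits=5)
folds = list(tscv.split(X_train))   # list of (train_idx, test_idx) pairs

# Pass the custom folds instead of nfold; each boosting round is then
# evaluated on these time-respecting splits.
cvresults = xgb.cv(dtrain=dmatrix, params=xgb_param,
                   num_boost_round=xgb1.get_params()['n_estimators'],
                   folds=folds, metrics='rmse',
                   early_stopping_rounds=early_stopping_rounds)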

Sklearn SVR shows worse results after scaling

The following code works quite well without scaling, but when scaling is applied the results are far from the actual values. Here is the code:
import numpy as np
from sklearn.svm import SVR

data = (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63)
model = SVR(kernel='poly', C=1e3, degree=3)

# min-max scale the inputs to [0, 1]
data_min = min(data)
data_max = max(data)
diff = data_max - data_min
data_scaled = []
for i in range(len(data)):
    data_scaled.append((data[i] - data_min) / diff)
data_scaled = np.array(data_scaled).reshape(-1, 1)

y = (1,8,27,64,125,216,343,512,729,1000,1331,1728,2197,2744,3375,4096,4913,5832,6859,8000,9261,10648,12167,13824,15625,17576,19683,21952,24389,27000,29791,32768,35937,39304,42875,46656,50653,54872,59319,64000,68921,74088,79507,85184,91125,97336,103823,110592,117649,125000,132651,140608,148877,157464,166375,175616,185193,195112,205379,216000,226981,238328,250047)
model.fit(data_scaled, y)
predicted = model.predict(data_scaled)
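A sketch of one likely explanation and fix (an assumption, not a verified diagnosis): only x is scaled here, while y still ranges up to 250047, so SVR's default epsilon (0.1) and C=1e3 are tiny relative to the target scale once x is squeezed into [0, 1]. Scaling y the same way and then inverting the transform on the predictions usually restores sensible behaviour:
import numpy as np
from sklearn.svm import SVR

X = np.arange(1, 64, dtype=float).reshape(-1, 1)
y = X.ravel() ** 3                    # same data as above: y = x^3

# Min-max scale both the inputs and the target.
X_scaled = (X - X.min()) / (X.max() - X.min())
y_min, y_max = y.min(), y.max()
y_scaled = (y - y_min) / (y_max - y_min)

# (epsilon=0.1 is now 10% of the scaled target range; shrinking it may help)
model = SVR(kernel='poly', C=1e3, degree=3)
model.fit(X_scaled, y_scaled)

# Undo the target scaling to get predictions on the original scale.
predicted = model.predict(X_scaled) * (y_max - y_min) + y_min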

How to display a 3-level wavelet decomposition?

I want to display a 3-level wavelet decomposition. Can anyone give me a MATLAB function to display it?
[cA,cH,cV,cD] = dwt2(a,waveletname);
out = [cA cH; cV cD];
figure; imshow(out,[]);
That only works for the first level. What I actually want is the square-mode representation that wavemenu in MATLAB produces.
[image: example of the desired decomposition view]
I am fairly new to this. Thanks.
You should use the function wavedec2(Image,numberOfLevels,'wname') with the number of levels that you need.
For more information, see http://www.mathworks.com/help/wavelet/ref/wavedec2.html
Example code with db1:
clear all
im = imread('cameraman.tif');

% 3-level 2-D decomposition with the db1 (Haar) wavelet
[c,s] = wavedec2(im,3,'db1');

% extract approximation and detail coefficients at each level
A1 = appcoef2(c,s,'db1',1);
[H1,V1,D1] = detcoef2('all',c,s,1);
A2 = appcoef2(c,s,'db1',2);
[H2,V2,D2] = detcoef2('all',c,s,2);
A3 = appcoef2(c,s,'db1',3);
[H3,V3,D3] = detcoef2('all',c,s,3);

% rescale every coefficient block to the 0-255 range for display
V1img = wcodemat(V1,255,'mat',1);
H1img = wcodemat(H1,255,'mat',1);
D1img = wcodemat(D1,255,'mat',1);
A1img = wcodemat(A1,255,'mat',1);
V2img = wcodemat(V2,255,'mat',1);
H2img = wcodemat(H2,255,'mat',1);
D2img = wcodemat(D2,255,'mat',1);
A2img = wcodemat(A2,255,'mat',1);
V3img = wcodemat(V3,255,'mat',1);
H3img = wcodemat(H3,255,'mat',1);
D3img = wcodemat(D3,255,'mat',1);
A3img = wcodemat(A3,255,'mat',1);

% nest the levels square-mode style: the coarsest level sits top-left
mat3 = [A3img,V3img;H3img,D3img];
mat2 = [mat3,V2img;H2img,D2img];
mat1 = [mat2,V1img;H1img,D1img];
imshow(uint8(mat1))
[image: the final result]

PC-Stable from pcalg

I am using PC-stable from the package 'pcalg' (version 2.0-10) to learn the structure. As I understand it, this algorithm should not be affected by the order of the input data, because it is order-independent. Yet when I run it with different column orders, I get different graphs. Can anyone help me with this issue? This is my code:
library(pracma)
library(pcalg)   # for pc() and disCItest

randindexMatrix <- matrix(0, 10, ncol(TrainData))
numberUnique_val_col <- vector()
pdf("Graph for Test PC Stable with random order.pdf")
par(mfrow = c(2, 1))
for (i in 1:10) {
  randindex <- randperm(1:ncol(TrainData))
  randindexMatrix[i, ] <- randindex
  TrainDataRandOrder <- TrainData[, randindex]   # note: the original indexed `data`; TrainData is presumably intended
  V <- colnames(TrainDataRandOrder)
  UD <- data.frame(TrainDataRandOrder)
  # number of levels of each (discrete) variable, one entry per column
  numberUnique_val_col <- sapply(UD, function(x) length(unique(x)))
  suffStat <- list(dm = TrainDataRandOrder, nlev = numberUnique_val_col, adaptDF = FALSE)
  pc.fit <- pc(suffStat, indepTest = disCItest, alpha = 0.01, labels = V,
               fixedGaps = NULL, fixedEdges = NULL, NAdelete = TRUE, m.max = Inf,
               skel.method = "stable", conservative = TRUE, solve.confl = TRUE,
               verbose = TRUE)
  plot(pc.fit@graph)   # plotting step implied by the pdf()/par() setup above
}
dev.off()
The "Stable" part of PC-Stable only affects the Skeleton phase of the algorithm. The Orientation phase is still order-dependent. Do the two graphs have identical "skeletons"? That is, if you convert all directed edges into undirected edges, are the two graphs identical?
If not, you may have uncovered a bug in pcalg! Please post a sample dataset and two orderings of the columns that produce graphs with different skeletons.
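A quick way to run that check (a sketch; pc.fit1 and pc.fit2 stand for fits obtained under two different column orders, and ugraph() comes from the 'graph' package that pcalg builds on):
library(graph)

# Drop edge directions, then compare adjacency matrices with the
# variables sorted into a common order.
m1 <- as(ugraph(pc.fit1@graph), "matrix")
m2 <- as(ugraph(pc.fit2@graph), "matrix")
nodes <- sort(rownames(m1))
identical(m1[nodes, nodes], m2[nodes, nodes])   # TRUE if the skeletons match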
