Rrd4j not able to retrieve samples from MongoDB - rrd4j

I have integrated rrd4j (3.1) with MongoDB (3.2), but the sample data doesn't seem to be persistent after closing the RrdDb and then re-initializing the object. I see the binary data getting updated in MongoDB when rrdDb.close() is called, and on open the data is queried from MongoDB and loaded into the byte buffer. But on dumping the data after the reconnect, all the sample data is replaced by NaN. Could someone please help me with this?
Adding the code for the RrdDb initialization, the Mongo factory creation, and the dump outputs before close and after reconnect.
RrdDb initialization
RrdDef rrdDef = new RrdDef(rrdPath, (System.currentTimeMillis() - TimeUnit.DAYS.toMillis(300)) / 1000, 10);
rrdDef.addDatasource(dsName, DsType.GAUGE, 20, 1d, Double.MAX_VALUE);
// 6 base steps (of 10 s each) per archive row, 60 rows = one hour of data
int secondUnitStep = (int) (TimeUnit.MINUTES.toSeconds(1) / 10);                          // = 6
int secondUnitRows = (int) (TimeUnit.HOURS.toSeconds(1) / TimeUnit.MINUTES.toSeconds(1)); // = 60
rrdDef.addArchive(ConsolFun.TOTAL, 0.5, secondUnitStep, secondUnitRows);
RrdDb rrdDB = new RrdDb(rrdDef, rrdMongoFactory);
RrdMongoFactory class
package mx.july.jmx.proximity.util;

import java.io.IOException;

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;

import org.rrd4j.core.*;

public class RrdMongoFactory extends RrdBackendFactory {

    private String name;
    private final DBCollection rrdCollection;

    public RrdMongoFactory(String name, DBCollection rrdCollection) {
        this.name = name;
        this.rrdCollection = rrdCollection;
        this.rrdCollection.createIndex(new BasicDBObject("path", 1), "path_idx");
        RrdBackendFactory.registerAndSetAsDefaultFactory(this);
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    /** {@inheritDoc} */
    @Override
    protected RrdBackend open(String path, boolean readOnly) throws IOException {
        return new RrdMongoDBBackend(path, rrdCollection);
    }

    /** {@inheritDoc} */
    @Override
    protected boolean exists(String path) throws IOException {
        BasicDBObject query = new BasicDBObject();
        query.put("path", path);
        return rrdCollection.findOne(query) != null;
    }

    /** {@inheritDoc} */
    @Override
    protected boolean shouldValidateHeader(String path) throws IOException {
        return false;
    }
}
RrdDb dump before close
== HEADER ==
signature:RRD4J, version 0.2 lastUpdateTime:1488536144 step:10 dsCount:1 arcCount:3
== DATASOURCE ==
DS:320:GAUGE:20:1.0:1.7976931348623157E308
lastValue:8.0 nanSeconds:0 accumValue:32.0
== ARCHIVE ==
RRA:TOTAL:0.5:6:60
interval [1488532560, 1488536100]
accumValue:117.0 nanSteps:0
Robin 5/60: NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN +1.7860000000E02 NaN +1.7130000000E02 +1.6600000000E02
RrdDb dump after reconnect
== HEADER ==
signature:RRD4J, version 0.2 lastUpdateTime:1462616268 step:10 dsCount:1 arcCount:3
== DATASOURCE ==
DS:320:GAUGE:20:1.0:1.7976931348623157E308
lastValue:NaN nanSeconds:8 accumValue:0.0
== ARCHIVE ==
RRA:TOTAL:0.5:6:60
interval [1462612680, 1462616220]
accumValue:NaN nanSteps:4
Robin 0/60: NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN

Got the solution for this from another group; thought of posting it here too.
Basically, for each run you need to open the RrdDb connection at the start and close it at the end of the run.
When creating the RrdDb object for the very first time, you need to define the RrdDef and the back-end factory (code snippet posted earlier). At the end of the run, once the data has been collected, you need to close the RrdDb connection (rrdDb.close()); only then is the data written to the backend (in this case MongoDB). On the next open, rather than defining the RrdDef, just pass the rrdPath to the RrdDb constructor (syntax below); the factory uses it to query MongoDB and retrieve the corresponding record.
rrdDB = new RrdDb(rrdPath, rrdMongoShardFactory);
Please note that if there is no entry for the path in MongoDB, the above constructor throws a FileNotFoundException. A sample snippet handling this is given below.
public RRDInfo openRrdDB() throws IOException {
    try {
        // Reopen an existing RRD stored in MongoDB
        rrdDB = new RrdDb(rrdPath, rrdMongoShardFactory);
    } catch (FileNotFoundException e) {
        // No entry for this path yet: define and create the RRD
        rrdDef = new RrdDef(rrdPath, (System.currentTimeMillis() - TimeUnit.DAYS.toMillis(300)) / 1000, Configurator.RRD_STEP_INTERVAL);
        rrdDef.addDatasource(dsName, DsType.GAUGE, Configurator.RRD_STEP_INTERVAL * 2, 1d, Double.MAX_VALUE);
        int secondUnitStep = (int) (TimeUnit.MINUTES.toSeconds(1) / Configurator.RRD_STEP_INTERVAL);
        int secondUnitRows = (int) (TimeUnit.HOURS.toSeconds(1) / TimeUnit.MINUTES.toSeconds(1));
        int hourUnitStep = (int) (TimeUnit.HOURS.toSeconds(1) / Configurator.RRD_STEP_INTERVAL);
        int hourUnitRows = (int) (TimeUnit.DAYS.toSeconds(2) / TimeUnit.HOURS.toSeconds(1));
        int dayUnitStep = (int) (TimeUnit.DAYS.toSeconds(1) / Configurator.RRD_STEP_INTERVAL);
        int dayUnitRows = (int) (TimeUnit.DAYS.toSeconds(365) / TimeUnit.DAYS.toSeconds(1));
        rrdDef.addArchive(ConsolFun.TOTAL, 0.5, secondUnitStep, secondUnitRows);
        rrdDef.addArchive(ConsolFun.TOTAL, 0.5, hourUnitStep, hourUnitRows);
        rrdDef.addArchive(ConsolFun.TOTAL, 0.5, dayUnitStep, dayUnitRows);
        rrdDB = new RrdDb(rrdDef, rrdMongoShardFactory);
    }
    return this;
}

Related

How to properly use MeanEncoder for categorical encoding in a k-fold loop

I want to use MeanEncoder from feature-engine in my k-fold loop for encoding categorical data. It seems that after the transform step the encoder introduces NaN values for certain columns in my dataset. The code is as follows:
from sklearn.model_selection import KFold
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
# imports below assume feature-engine >= 1.0 module paths
from feature_engine.imputation import RandomSampleImputer
from feature_engine.encoding import MeanEncoder

kf = KFold(n_splits=2)
linear_reg = linear_model.LinearRegression()
kfold_rmse = []

X = housing.drop(columns=['Price'], axis=1)
y = housing['Price']

# categorical_var and descrete_var are lists of column names defined elsewhere
for train_index, test_index in kf.split(X):
    X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    X_train.drop(columns=['BuildingArea', 'YearBuilt', 'Rooms'], axis=1, inplace=True)
    X_test.drop(columns=['BuildingArea', 'YearBuilt', 'Rooms'], axis=1, inplace=True)

    random_imputer = RandomSampleImputer(variables=['Car', 'CouncilArea'])
    random_imputer.fit(X_train)
    X_train = random_imputer.transform(X_train)
    X_test = random_imputer.transform(X_test)

    X_train[descrete_var] = X_train[descrete_var].astype('O')
    X_test[descrete_var] = X_test[descrete_var].astype('O')

    mean_encoder = MeanEncoder(variables=categorical_var + descrete_var)
    mean_encoder.fit(X_train, y_train)
    print(X_test.isnull().mean())  # <--------- no NaN columns
    X_train = mean_encoder.transform(X_train)
    X_test = mean_encoder.transform(X_test)
    print(X_test.isnull().mean())  # <--------- NaN columns introduced

    # Fit the model
    # linear_reg_model = linear_reg.fit(X_train, y_train)
    # y_pred_linear_reg = linear_reg_model.predict(X_test)
    # Calculate the RMSE for each fold and append it
    # rmse = mean_squared_error(y_test, y_pred_linear_reg, squared=False)
    # kfold_rmse.append(rmse)
For further context, here is the output I get:
...
Suburb 0.0
Type 0.0
Method 0.0
SellerG 0.0
Distance 0.0
Postcode 0.0
Bedroom2 0.0
Bathroom 0.0
Car 0.0
Landsize 0.0
CouncilArea 0.0
Regionname 0.0
Propertycount 0.0
Month_name 0.0
day 0.0
Year 0.0
dtype: float64
Suburb 0.000000
Type 0.000000
Method 0.000000
SellerG 0.014138
Distance 0.000000
Postcode 0.000000
Bedroom2 0.000000
Bathroom 0.000295
...
Month_name 0.000000
day 0.191605
Year 0.000000
This obviously causes problems for the model prediction because LinearRegression cannot accept NaN values. I think this may be an issue with how I'm using MeanEncoder in the k-fold loop. Is there something I'm doing wrong or not understanding about either the k-fold process or MeanEncoder?
Your test folds contain categories unseen at training time, and the encoder by default encodes those as NaN. From the documentation:
errors: string, default=’ignore’
Indicates what to do, when categories not present in the train set are encountered during transform. If ‘raise’, then rare categories will raise an error. If ‘ignore’, then rare categories will be set as NaN and a warning will be raised instead.
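If you want the pipeline to keep running instead, one simple workaround is to back-fill the NaNs the encoder produces for unseen categories, for instance with the training-fold target mean. A minimal sketch reusing the variable names from your loop (the global mean is just one possible fallback):
mean_encoder = MeanEncoder(variables=categorical_var + descrete_var)
mean_encoder.fit(X_train, y_train)
X_train = mean_encoder.transform(X_train)
X_test = mean_encoder.transform(X_test)
# Categories unseen during fit come out as NaN in the test fold;
# fall back to the training-fold target mean for those cells.
X_test = X_test.fillna(y_train.mean())
Alternatively, constructing the encoder with errors='raise' (per the documentation quoted above) makes the transform fail loudly, so you can group rare categories before encoding.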

Showing the latest value in a line chart with Aspose.Words .NET

Following the Working with Charts tutorial, I've been trying to show the latest value of a chart line series for the current or previous year. I receive the two matrices shown below, whose Count is always one or two: the DateTime matrix may hold one or two lists, each spanning at least 365 days, and the double matrix holds the corresponding value for each day. (In the code the variables keep their Portuguese names: eixoX, eixosY and nomeSeries correspond to AxisX, AxisY and seriesName.)
List<List<DateTime>> AxisX, List<List<double>> AxisY, List<string> seriesName
shape = (Shape)doc.Range.Bookmarks[bookmarkName.GetDescription()].BookmarkStart.NextSibling;
Chart chart = shape.Chart;
chart.Legend.Position = LegendPosition.TopRight;
chart.Legend.Overlay = true;
chart.Series.Clear();
chart.AxisX.CategoryType = AxisCategoryType.Time;
chart.AxisX.BaseTimeUnit = AxisTimeUnit.Days;
chart.AxisX.MajorUnitScale = AxisTimeUnit.Months;
chart.AxisX.MajorUnit = 1;
chart.AxisX.TickLabelAlignment = ParagraphAlignment.Center;
if (eixoX.Any() && eixoX.ElementAt(0).Any())
{
chart.AxisX.Scaling.Maximum = new AxisBound(eixoX.ElementAt(0).Last());
chart.AxisX.Scaling.Minimum = new AxisBound(eixoX.ElementAt(0).First());
}
Until now, I was able to show the latest value in the chart when AxisY[1] has continuous values, like this:
chart.AxisX.NumberFormat.FormatCode = "mmm";
eixosY.ForEach((eixoY, i) =>
{
if (eixoX.ElementAt(i).Any() && eixoY.Any())
{
var x = eixoX.ElementAt(i).ToArray();
var y = eixoY.ToArray();
var s = nomeSeries.ElementAt(i);
chart.Series.Add(s, x, y);
}
});
And then I iterate through AxisY[1] and get the latest value as follows:
var array = chart.Series.Count - 1;
var serie = eixosY[array].ToArray();
var last = 0;
for (int i = serie.Length - 1; i >= 0; i--)
{
if (Double.IsNaN(serie[i])) continue;
last = i;
break;
}
var labels = chart.Series[array].DataLabels;
ChartDataLabel l = labels.Add(last);
l.ShowValue = true;
That produces the expected result: the data label with the latest value is shown at the end of the line.
Now the problem: when AxisY[1] is non-continuous I can't get the same result, and the label is not shown as before.
I just can't show the X and Y value pair as in the continuous case. I want the chart to show the value 57.1, since it is the latest value in AxisY[1]:
- y {double[366]} double[]
[0] 29.9338 double
[1] 29.5862 double
[2] NaN double
[3] NaN double
[4] NaN double
[5] NaN double
[6] NaN double
[7] NaN double
[8] NaN double
[9] NaN double
[10] NaN double
[11] NaN double
[12] NaN double
[13] NaN double
[14] NaN double
[15] NaN double
[16] NaN double
[17] NaN double
[18] NaN double
[19] NaN double
[20] NaN double
[21] NaN double
[22] NaN double
[23] NaN double
[24] NaN double
[25] NaN double
[26] NaN double
[27] NaN double
[28] NaN double
[29] NaN double
[30] NaN double
[31] NaN double
[32] NaN double
[33] NaN double
[34] 23.282 double
[35] NaN double
[36] NaN double
[37] NaN double
.
.
.
[147] NaN double
[148] 16.5327 double
.
.
.
.
[254] 57.1 double
Any help will be much appreciated. Thanks in advance.
In your case you do not need to use the DataLabels.Add method, since it is obsolete. It is enough to set the ChartSeries.HasDataLabels property to true and then access the required DataLabel by index. I created a simple code example where the Y values have NaN values in the middle and at the end; the data label on the last real point is displayed correctly.
Document doc = new Document();
DocumentBuilder builder = new DocumentBuilder(doc);
Shape shape = builder.InsertChart(ChartType.Line, 500, 300);
Chart chart = shape.Chart;
chart.Series.Clear();
double[] xValues = new double[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
double[] yValues = new double[] { 1, 2, double.NaN, double.NaN, 5, 6, 7, 8, 9, 10, 11, double.NaN };
ChartSeries series = chart.Series.Add("Test", xValues, yValues);
// Determine the index of the last non-NaN value.
int last = yValues.Length - 1;
while (double.IsNaN(yValues[last]) && last > 0)
    last--;
// Enable data labels.
series.HasDataLabels = true;
// Display the value of the last non-NaN point.
series.DataLabels[last].ShowValue = true;
doc.Save(@"C:\Temp\out.docx");

DecisionTreeClassifier: Input contains NaN, infinity or a value too large for dtype('float32')

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# train_data, target and k_fold are defined earlier in the script
clf = DecisionTreeClassifier()
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, error_score='raise')
print(score)
After running this code I get the error:
ValueError: Input contains NaN, infinity or a value too large for
dtype('float32').
How can I fix it?
Decision trees don't accept NaN or infinity values.
Try doing (assuming that train_data is a Pandas DataFrame):
train_data.fillna(0, inplace = True)
This will replace all NaN values with 0.
If you don't want that, the only alternative is to delete the rows containing NaN data:
train_data.dropna(inplace = True)
If this is not a DataFrame, try adding this line before the fillna method:
train_data = pd.DataFrame(train_data)
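Note that fillna only handles NaN, while the error message also mentions infinity. If your data contains infinite values, convert them to NaN first; a minimal sketch, assuming train_data can be wrapped in a DataFrame as above:
import numpy as np
import pandas as pd

train_data = pd.DataFrame(train_data)
# +/-inf is not caught by fillna, so map it to NaN first,
# then replace every NaN with 0 in one pass
train_data = train_data.replace([np.inf, -np.inf], np.nan).fillna(0)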

How does this binary encoder function work?

I'm trying to understand the logic behind this binary encoder.
It automatically takes categorical variables and dummy-codes them (similar to one-hot encoding in sklearn), but reduces the number of output columns to roughly log2 of the number of unique values.
Basically, when I used this library, I noticed that my dummy variables are limited to only a few of the unique values. Upon further investigation I noticed this @staticmethod, which takes log2 of the number of unique values in a categorical variable.
My question is WHY? I realize that this reduces the dimensionality of the output data, but what is the logic behind doing this? How does taking the log2 determine how many digits are needed to represent the data?
def calc_required_digits(X, col):
    """
    figure out how many digits we need to represent the classes present
    """
    return int(np.ceil(np.log2(len(X[col].unique()))))
Full source code:
"""Binary encoding"""
import copy
import pandas as pd
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from category_encoders.ordinal import OrdinalEncoder
from category_encoders.utils import get_obj_cols, convert_input
__author__ = 'willmcginnis'
[docs]class BinaryEncoder(BaseEstimator, TransformerMixin):
"""Binary encoding for categorical variables, similar to onehot, but stores categories as binary bitstrings.
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array)
impute_missing: bool
boolean for whether or not to apply the logic for handle_unknown, will be deprecated in the future.
handle_unknown: str
options are 'error', 'ignore' and 'impute', defaults to 'impute', which will impute the category -1. Warning: if
impute is used, an extra column will be added in if the transform matrix has unknown categories. This can causes
unexpected changes in dimension in some cases.
Example
-------
>>>from category_encoders import *
>>>import pandas as pd
>>>from sklearn.datasets import load_boston
>>>bunch = load_boston()
>>>y = bunch.target
>>>X = pd.DataFrame(bunch.data, columns=bunch.feature_names)
>>>enc = BinaryEncoder(cols=['CHAS', 'RAD']).fit(X, y)
>>>numeric_dataset = enc.transform(X)
>>>print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 506 entries, 0 to 505
Data columns (total 16 columns):
CHAS_0 506 non-null int64
RAD_0 506 non-null int64
RAD_1 506 non-null int64
RAD_2 506 non-null int64
RAD_3 506 non-null int64
CRIM 506 non-null float64
ZN 506 non-null float64
INDUS 506 non-null float64
NOX 506 non-null float64
RM 506 non-null float64
AGE 506 non-null float64
DIS 506 non-null float64
TAX 506 non-null float64
PTRATIO 506 non-null float64
B 506 non-null float64
LSTAT 506 non-null float64
dtypes: float64(11), int64(5)
memory usage: 63.3 KB
None
"""
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True, impute_missing=True, handle_unknown='impute'):
self.return_df = return_df
self.drop_invariant = drop_invariant
self.drop_cols = []
self.verbose = verbose
self.impute_missing = impute_missing
self.handle_unknown = handle_unknown
self.cols = cols
self.ordinal_encoder = None
self._dim = None
self.digits_per_col = {}
[docs] def fit(self, X, y=None, **kwargs):
"""Fit encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
# if the input dataset isn't already a dataframe, convert it to one (using default column names)
# first check the type
X = convert_input(X)
self._dim = X.shape[1]
# if columns aren't passed, just use every string column
if self.cols is None:
self.cols = get_obj_cols(X)
# train an ordinal pre-encoder
self.ordinal_encoder = OrdinalEncoder(
verbose=self.verbose,
cols=self.cols,
impute_missing=self.impute_missing,
handle_unknown=self.handle_unknown
)
self.ordinal_encoder = self.ordinal_encoder.fit(X)
for col in self.cols:
self.digits_per_col[col] = self.calc_required_digits(X, col)
# drop all output columns with 0 variance.
if self.drop_invariant:
self.drop_cols = []
X_temp = self.transform(X)
self.drop_cols = [x for x in X_temp.columns.values if X_temp[x].var() <= 10e-5]
return self
[docs] def transform(self, X):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError('Unexpected input dimension %d, expected %d' % (X.shape[1], self._dim, ))
if not self.cols:
return X
X = self.ordinal_encoder.transform(X)
X = self.binary(X, cols=self.cols)
if self.drop_invariant:
for col in self.drop_cols:
X.drop(col, 1, inplace=True)
if self.return_df:
return X
else:
return X.values
[docs] def binary(self, X_in, cols=None):
"""
Binary encoding encodes the integers as binary code with one column per digit.
"""
X = X_in.copy(deep=True)
if cols is None:
cols = X.columns.values
pass_thru = []
else:
pass_thru = [col for col in X.columns.values if col not in cols]
bin_cols = []
for col in cols:
# get how many digits we need to represent the classes present
digits = self.digits_per_col[col]
# map the ordinal column into a list of these digits, of length digits
X[col] = X[col].map(lambda x: self.col_transform(x, digits))
for dig in range(digits):
X[str(col) + '_%d' % (dig, )] = X[col].map(lambda r: int(r[dig]) if r is not None else None)
bin_cols.append(str(col) + '_%d' % (dig, ))
X = X.reindex(columns=bin_cols + pass_thru)
return X
[docs] #staticmethod
def calc_required_digits(X, col):
"""
figure out how many digits we need to represent the classes present
"""
return int( np.ceil(np.log2(len(X[col].unique()))) )
[docs] #staticmethod
def col_transform(col, digits):
"""
The lambda body to transform the column values
"""
if col is None or float(col) < 0.0:
return None
else:
col = list("{0:b}".format(int(col)))
if len(col) == digits:
return col
else:
return [0 for _ in range(digits - len(col))] + col
My question is WHY? I realize that this reduces the dimensionality of
the output data, but what is the logic behind doing this?
Basically, the point of categorical encoding is to make your algorithm able to deal with categorical features. Several methods are available for doing this, including binary encoding. Its logic is actually close to the logic of one-hot encoding (OHE), if you understood that one.
For binary encoding, each unique label in your categorical vector is randomly associated with a number between 0 and (the number of unique labels - 1). You then write this number in base 2 and "transcribe" it into 0s and 1s across the newly created columns.
As an example, let's say your dataset has three different labels: 'A', 'B' & 'C'.
The following correspondence is randomly built:
'A' -> 1 -> 01;
'B' -> 2 -> 10;
'C' -> 0 -> 00.
Therefore, an example encoding of a given dataset is:
index  my_category  enc_category_0  enc_category_1
0      A            1               0
1      B            0               1
2      C            0               0
3      A            1               0
Regarding its utility, as you said, it reduces the dimensionality. Besides, I guess it helps avoid having as many zeros in the encoded columns as with OHE. Here is an interesting post: https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931
How does taking the log2 determine how many digits are needed to represent the data?
If you understood the working principle, you understand the use of log2: taking log2 of a number gives the number of binary digits needed to represent that number. Example: ceil(log2(10)) = ceil(3.32) = 4, so 4 digits are needed to binary-encode 10.
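As a quick sanity check of that formula, here is a standalone numpy snippet mirroring the calc_required_digits method quoted above (the helper name is mine):
import numpy as np

def required_digits(n_unique):
    # ceil(log2(n)) binary digits are enough to give each of the
    # n distinct classes its own bit pattern
    return int(np.ceil(np.log2(n_unique)))

print(required_digits(3))    # 2: 'A', 'B', 'C' fit in patterns 00..10
print(required_digits(10))   # 4: ten classes fit in 0000..1001
print(required_digits(506))  # 9, since 2**9 = 512 >= 506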
For more info about the implementation and code example: http://contrib.scikit-learn.org/categorical-encoding/_modules/category_encoders/binary.html#BinaryEncoder
Hope I was clear,
Tchau

torch7: Filtering out NaN values

Given any general float torch.Tensor, possibly containing some NaN values, I am looking for an efficient method to either replace all the NaN values in it with zero, or remove them altogether and filter out the "useful" values in another new Tensor.
I am aware that a trivial way to do this is to manually iterate through all the values in the given tensor (and correspondingly replace them with zero or reject them for the new tensor).
Is there a predefined Torch function, or a combination of functions, that can achieve this more efficiently, relying on Torch's inherent CPU-GPU optimisations?
Well, it looks like there is no built-in torch function for checking a tensor for NaNs. But since NaN != NaN, there's a workaround:
a = torch.rand(4, 5)
a[2][3] = tonumber('nan')
nan_mask = a:ne(a)
notnan_mask = a:eq(a)
print(a)
0.2434 0.1731 0.3440 0.3340 0.0519
0.0932 0.4067 nan 0.1827 0.5945
0.3020 0.1035 0.5415 0.3329 0.7881
0.6108 0.9498 0.0406 0.9335 0.3582
[torch.DoubleTensor of size 4x5]
print(nan_mask)
0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0
[torch.ByteTensor of size 4x5]
Having these masks, you can efficiently extract NaN/not NaN values and replace them with whatever you want:
print(a[notnan_mask])
...
[torch.DoubleTensor of size 19]
a[nan_mask] = 42
print(a)
0.2434 0.1731 0.3440 0.3340 0.0519
0.0932 0.4067 42.0000 0.1827 0.5945
0.3020 0.1035 0.5415 0.3329 0.7881
0.6108 0.9498 0.0406 0.9335 0.3582
[torch.DoubleTensor of size 4x5]
