I am trying to understand some surprising results I see when implementing a TensorFlow graph.
The graph I am working with is just a forest (a bunch of trees), used purely for forward inference, nothing related to training. I am sharing snippets for two implementations.
code snippet 1:
with tf.name_scope("main"):
    def get_tree_output(offset):
        loop_vars = (offset,)
        leaf_indice = tf.while_loop(cond,
                                    body,
                                    loop_vars,
                                    back_prop=False,
                                    parallel_iterations=1,
                                    name="while_loop")
        return leaf_indice

    leaf_indices = tf.map_fn(get_tree_output,
                             tree_offsets_tensor,
                             dtype=INT_TYPE,
                             parallel_iterations=n_trees,
                             back_prop=False,
                             name="tree-scores")

    tree_scores = tf.gather(score_tensor, leaf_indices, name="tree-scores")
    output = tf.reduce_sum(tree_scores, name="sum-output")
    output = tf.sigmoid(output, name="sigmoid-output")
code snippet 2:
with tf.name_scope("main"):
    tree_offsets_tensor = tf.constant(tree_offsets, dtype=INT_TYPE, name="tree_offsets_tensor")
    loop_vars = (tree_offsets_tensor,)
    leaf_indices = tf.while_loop(cond,
                                 body,
                                 loop_vars,
                                 back_prop=False,
                                 parallel_iterations=n_trees,
                                 name="while_loop")
    tree_scores = tf.gather(score_tensor, leaf_indices, name="tree-scores")
    output = tf.reduce_sum(tree_scores, name="sum-output")
    output = tf.sigmoid(output, name="sigmoid-output")
The rest of the code is exactly the same: the constant tensors, the variables, and the condition and body of the while loop. The thread and parallelism settings were also the same in both cases.
code snippet 2: takes about 500 microseconds to do inference
code snippet 1: takes about 12 milliseconds to do inference
The difference is that in snippet 1 I use map_fn to operate on tree_offsets_tensor, whereas in snippet 2 I get rid of the map_fn and use the tensor directly. As I understand it, in snippet 1 get_tree_output is called with one element from tree_offsets_tensor, so we end up with a separate while_loop for each individual offset value, whereas in snippet 2 there is just one while_loop that takes all the offset values at once (basically the whole offsets tensor).
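To make the snippet 2 idea concrete, here is a minimal, self-contained toy (my real cond and body walk the tree nodes; this stand-in just keeps doubling values below a limit) showing a single while_loop whose loop variable is the whole vector of offsets, with tf.where masking out the elements that are already finished:

import tensorflow as tf

offsets = tf.constant([0, 3, 7], dtype=tf.int32)
limit = tf.constant(100, dtype=tf.int32)

def cond(x):
    # keep iterating while any element still has work left
    return tf.reduce_any(tf.less(x, limit))

def body(x):
    # advance every unfinished element; leave finished elements untouched
    return [tf.where(tf.less(x, limit), x * 2 + 1, x)]

result = tf.while_loop(cond, body, [offsets],
                       back_prop=False, parallel_iterations=1, name="while_loop")

with tf.Session() as sess:
    print(sess.run(result))  # all offsets finish inside the same single while_loop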
I also tried another variation of snippet 1: instead of using map_fn, I wrote a hand-written Python for loop.
code snippet 1 (for-loop variation):
output = 0
with tf.name_scope("main"):
    for offset in tree_offsets:
        loop_vars = (offset,)
        leaf_indice = tf.while_loop(cond,
                                    body,
                                    loop_vars,
                                    back_prop=False,
                                    parallel_iterations=1,
                                    name="while_loop")
        tree_score = tf.gather(score_tensor, leaf_indice, name="tree-scores")
        output = tf.add(tree_score, output)
    # leaf_indices = tf.map_fn(get_tree_output,
    #                          tree_offsets_tensor, dtype=INT_TYPE,
    #                          parallel_iterations=n_trees, back_prop=False,
    #                          name="tree-scores")
    # tree_scores = tf.gather(score_tensor, leaf_indices, name="tree-scores")
    # output = tf.reduce_sum(tree_scores, name="sum-output")
    output = tf.sigmoid(output, name="sigmoid-output")
This gives a minor improvement: about 9 milliseconds.
I wrote a script using xgboost to predict soil class for a certain area using field data and satellite images. The script is below:
rm(list=ls())
library(xgboost)
library(caret)
library(raster)
library(sp)
library(rgeos)
library(ggplot2)
setwd("G:/DATA")
data <- read.csv('96PointsClay02finalone.csv')
head(data)
summary(data)
dim(data)
ras <- stack("Allindices04TIFF.tif")
names(ras) <- c("b1", "b2", "b3", "b4", "b5", "b6", "b7", "b10", "b11","DEM",
"R1011", "SCI", "SAVI", "NDVI", "NDSI", "NDSandI", "MBSI",
"GSI", "GSAVI", "EVI", "DryBSI", "BIL", "BI","SRCI")
set.seed(27) # set seed for generating random data.
# use createDataPartition() from the caret package to split the data into training (80%) and testing (20%) sets
parts = createDataPartition(data$Clay, p = .8, list = F)
train = data[parts, ]
test = data[-parts, ]
#define predictor and response variables in training set
train_x = data.matrix(train[, -1])
train_y = train[,1]
#define predictor and response variables in testing set
test_x = data.matrix(test[, -1])
test_y = test[, 1]
#define final training and testing sets
xgb_train = xgb.DMatrix(data = train_x, label = train_y)
xgb_test = xgb.DMatrix(data = test_x, label = test_y)
#defining a watchlist
watchlist = list(train=xgb_train, test=xgb_test)
#fit XGBoost model and display training and testing data at each iteration
model = xgb.train(data = xgb_train, max.depth = 3, watchlist=watchlist, nrounds = 100)
#define final model
model_xgboost = xgboost(data = xgb_train, max.depth = 3, nrounds = 86, verbose = 0)
summary(model_xgboost)
#use model to make predictions on test data
pred_y = predict(model_xgboost, xgb_test)
# performance metrics on the test data
mean((test_y - pred_y)^2) #mse - Mean Squared Error
caret::RMSE(test_y, pred_y) #rmse - Root Mean Squared Error
y_test_mean = mean(test_y)
rmseE<- function(error)
{
sqrt(mean(error^2))
}
y = test_y
yhat = pred_y
rmseresult=rmseE(y-yhat)
(r2 = R2(yhat , y, form = "traditional"))
cat('The R-square of the test data is ', round(r2,4), ' and the RMSE is ', round(rmseresult,4), '\n')
#use model to make predictions on satellite image
result <- predict(model_xgboost, ras[1:(nrow(ras)*ncol(ras))])
#create a result raster
res <- raster(ras)
#fill in results and add a "1" to them (to get back to initial class numbering! - see above "Prepare data" for more information)
res <- setValues(res,result+1)
#Save the output .tif file into saved directory
writeRaster(res, "xgbmodel_output", format = "GTiff", overwrite=T)
The script works well until it reaches
result <- predict(model_xgboost, ras[1:(nrow(ras)*ncol(ras))])
It takes some time and then gives this error:
Error in predict.xgb.Booster(model_xgboost, ras[1:(nrow(ras) * ncol(ras))]) :
Feature names stored in `object` and `newdata` are different!
I realize I am doing something wrong in that line, but I do not know how to apply the xgboost model to a raster image that represents my study area.
It would be highly appreciated if someone could give me a hand and help me solve this problem.
My data (CSV and raster image) can be found here.
Finally, I found the reason for this error.
It was my mistake: the number of columns in the training data was not the same as the number of layers in the satellite image.
I get the following error when trying to use the preProcess function from the caret package. The predict function works for knn and median imputation, but gives an error for bagging. How should I edit my call to the predict function?
Reproducible example:
data = iris
set.seed(1)
data = as.data.frame(lapply(data, function(cc) cc[ sample(c(TRUE, NA), prob = c(0.8, 0.2), size = length(cc), replace = TRUE) ]))
preprocess_values = preProcess(data, method = c("bagImpute"), verbose = TRUE)
data_new = predict(preprocess_values, data)
This gives the following error:
> data_new = predict(preprocess_values, data)
Error in UseMethod("predict") :
no applicable method for 'predict' applied to an object of class "NULL"
The preprocessing/imputation functions in caret work only on numerical variables.
As stated in the help for preProcess:
x: a matrix or data frame. Non-numeric predictors are allowed but will be ignored.
You most likely found a bug in the part that should ignore the non-numeric variables: it throws an uninformative error instead of ignoring them.
If you remove the factor variable, the imputation works:
library(caret)
df <- iris
set.seed(1)
df <- as.data.frame(lapply(df, function(cc) cc[ sample(c(TRUE, NA), prob = c(0.8, 0.2), size = length(cc), replace = TRUE) ]))
df <- df[,-5] #remove factor variable
preprocess_values <- preProcess(df, method = c("bagImpute"), verbose = TRUE)
data_new <- predict(preprocess_values, df)
The last line of code works but results in a bunch of warnings:
In cprob[tindx] + pred :
  longer object length is not a multiple of shorter object length
These warnings are not from caret but from ipred::bagging, which caret::preProcess calls internally. The cause is rows in the data that contain 3 NA values; when those rows are removed
df <- df[rowSums(sapply(df, is.na)) != 3,]
preprocess_values <- preProcess(df, method = c("bagImpute"), verbose = TRUE)
data_new <- predict(preprocess_values, df)
the warnings disappear.
You should check out the recipes package, and specifically step_bagimpute, to overcome the above-mentioned limitations.
I am working on a binary classification machine learning problem and am trying to balance the training set, since I have an imbalanced target class variable. I am using PySpark to build the model.
Below is the code that works to balance the data:
train_initial, test = new_data.randomSplit([0.7, 0.3], seed = 2018)
train_initial.groupby('label').count().toPandas()
label count
0 0.0 712980
1 1.0 2926
train_new = train_initial.sampleBy('label', fractions={0: 2926./712980, 1: 1.0}).cache()
The above code performs under-sampling, but I think this might lead to a loss of information. However, I am not sure how to perform up-sampling. I also tried to use the sample function as below:
train_up = train_initial.sample(True, 10.0, seed = 2018)
Although it increases the count of 1s in my dataset, it also increases the count of 0s and gives the result below:
label count
0 0.0 7128722
1 1.0 29024
Can someone please help me achieve up-sampling in PySpark?
Thanks a lot in advance!
The problem is that you are oversampling the whole data frame. You should filter the data for the two classes separately:
import pandas as pd

df_class_0 = df_train[df_train['label'] == 0]
df_class_1 = df_train[df_train['label'] == 1]
count_class_0 = df_class_0.shape[0]  # size of the majority class
df_class_1_over = df_class_1.sample(count_class_0, replace=True)
df_test_over = pd.concat([df_class_0, df_class_1_over], axis=0)
The example comes from: https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets
Please note that there are better ways to perform oversampling (e.g. SMOTE).
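As an illustration of the SMOTE suggestion, here is a minimal sketch using the imbalanced-learn package. Note the assumptions: it operates on an in-memory pandas frame rather than a distributed Spark DataFrame, and it reuses the df_train and 'label' names from the snippet above.

import pandas as pd
from imblearn.over_sampling import SMOTE

# split features and target; 'label' is the target column as in the question
X = df_train.drop(columns=['label'])
y = df_train['label']

# SMOTE synthesizes new minority-class rows instead of duplicating existing ones
X_res, y_res = SMOTE(random_state=2018).fit_resample(X, y)

df_train_balanced = pd.concat([pd.DataFrame(X_res, columns=X.columns),
                               pd.Series(y_res, name='label')], axis=1)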
For anyone trying to do random oversampling on an imbalanced dataset in PySpark, the following code will get you started (in this snippet 0 is the majority class and 1 is the class to be oversampled):
df_a = df.filter(df['label'] == 0)
df_b = df.filter(df['label'] == 1)

a_count = df_a.count()
b_count = df_b.count()
ratio = a_count / b_count  # oversampling fraction for the minority class

df_b_oversampled = df_b.sample(withReplacement=True, fraction=ratio, seed=1)
df = df_a.unionAll(df_b_oversampled)
I might be quite late to the rescue here. But this is what I would recommend:
Step 1. Sample only for label = 1
train_1= train_initial.where(col('label')==1).sample(True, 10.0, seed = 2018)
Step 2. Merge this data with the label = 0 data
train_0=train_initial.where(col('label')==0)
train_final = train_0.union(train_1)
PS: please import the col with
from pyspark.sql.functions import col
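As a quick sanity check, you can reuse the groupby call from the question to confirm that the classes are roughly balanced after the union:

# class counts after up-sampling; the two labels should now have similar counts
train_final.groupby('label').count().toPandas()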
The code below takes only 32*32 input, but I want to feed in 128*128 images. How do I go about it? The code is from this tutorial: https://github.com/awjuliani/TF-Tutorials/blob/master/DCGAN.ipynb
def generator(z):
    zP = slim.fully_connected(z, 4*4*256, normalizer_fn=slim.batch_norm,
                              activation_fn=tf.nn.relu, scope='g_project',
                              weights_initializer=initializer)
    zCon = tf.reshape(zP, [-1, 4, 4, 256])

    gen1 = slim.convolution2d_transpose(zCon, num_outputs=64, kernel_size=[5, 5], stride=[2, 2],
                                        padding="SAME", normalizer_fn=slim.batch_norm,
                                        activation_fn=tf.nn.relu, scope='g_conv1',
                                        weights_initializer=initializer)
    gen2 = slim.convolution2d_transpose(gen1, num_outputs=32, kernel_size=[5, 5], stride=[2, 2],
                                        padding="SAME", normalizer_fn=slim.batch_norm,
                                        activation_fn=tf.nn.relu, scope='g_conv2',
                                        weights_initializer=initializer)
    gen3 = slim.convolution2d_transpose(gen2, num_outputs=16, kernel_size=[5, 5], stride=[2, 2],
                                        padding="SAME", normalizer_fn=slim.batch_norm,
                                        activation_fn=tf.nn.relu, scope='g_conv3',
                                        weights_initializer=initializer)
    g_out = slim.convolution2d_transpose(gen3, num_outputs=1, kernel_size=[32, 32], padding="SAME",
                                         biases_initializer=None, activation_fn=tf.nn.tanh,
                                         scope='g_out', weights_initializer=initializer)
    return g_out

def discriminator(bottom, reuse=False):
    dis1 = slim.convolution2d(bottom, 16, [4, 4], stride=[2, 2], padding="SAME",
                              biases_initializer=None, activation_fn=lrelu,
                              reuse=reuse, scope='d_conv1', weights_initializer=initializer)
    dis2 = slim.convolution2d(dis1, 32, [4, 4], stride=[2, 2], padding="SAME",
                              normalizer_fn=slim.batch_norm, activation_fn=lrelu,
                              reuse=reuse, scope='d_conv2', weights_initializer=initializer)
    dis3 = slim.convolution2d(dis2, 64, [4, 4], stride=[2, 2], padding="SAME",
                              normalizer_fn=slim.batch_norm, activation_fn=lrelu,
                              reuse=reuse, scope='d_conv3', weights_initializer=initializer)
    d_out = slim.fully_connected(slim.flatten(dis3), 1, activation_fn=tf.nn.sigmoid,
                                 reuse=reuse, scope='d_out', weights_initializer=initializer)
    return d_out
Below is the error I get when I feed in 128*128 images:
Trying to share variable d_out/weights, but specified shape (1024, 1) and found shape (16384, 1).
The generator is producing 32*32 images, so when you feed any other size into the discriminator you get this error: with a 32*32 input the flattened features at d_out have size 4*4*64 = 1024, while with a 128*128 input they have size 16*16*64 = 16384, which is exactly the mismatch reported in the message.
The solution is to make the generator produce 128*128 images, either by
1. adding more layers (2 more stride-2 transposed convolutions in this case), or
2. changing the input to the generator:
zP = slim.fully_connected(z, 16*16*256, normalizer_fn=slim.batch_norm,
                          activation_fn=tf.nn.relu, scope='g_project',
                          weights_initializer=initializer)
zCon = tf.reshape(zP, [-1, 16, 16, 256])
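Putting the second option together, here is a hedged sketch of what the full 128*128 generator could look like: it keeps the tutorial's three stride-2 transposed convolutions (so the spatial size doubles 16 -> 32 -> 64 -> 128) and only changes the initial projection; slim, initializer and the tanh output layer are assumed to be the same as in the tutorial code above.

def generator(z):
    # project and reshape z to a 16x16x256 feature map instead of 4x4x256
    zP = slim.fully_connected(z, 16*16*256, normalizer_fn=slim.batch_norm,
                              activation_fn=tf.nn.relu, scope='g_project',
                              weights_initializer=initializer)
    zCon = tf.reshape(zP, [-1, 16, 16, 256])

    # three stride-2 transposed convolutions: 16 -> 32 -> 64 -> 128
    gen1 = slim.convolution2d_transpose(zCon, num_outputs=64, kernel_size=[5, 5], stride=[2, 2],
                                        padding="SAME", normalizer_fn=slim.batch_norm,
                                        activation_fn=tf.nn.relu, scope='g_conv1',
                                        weights_initializer=initializer)
    gen2 = slim.convolution2d_transpose(gen1, num_outputs=32, kernel_size=[5, 5], stride=[2, 2],
                                        padding="SAME", normalizer_fn=slim.batch_norm,
                                        activation_fn=tf.nn.relu, scope='g_conv2',
                                        weights_initializer=initializer)
    gen3 = slim.convolution2d_transpose(gen2, num_outputs=16, kernel_size=[5, 5], stride=[2, 2],
                                        padding="SAME", normalizer_fn=slim.batch_norm,
                                        activation_fn=tf.nn.relu, scope='g_conv3',
                                        weights_initializer=initializer)

    # final stride-1 layer producing the single-channel 128x128 image
    g_out = slim.convolution2d_transpose(gen3, num_outputs=1, kernel_size=[32, 32], padding="SAME",
                                         biases_initializer=None, activation_fn=tf.nn.tanh,
                                         scope='g_out', weights_initializer=initializer)
    return g_out

The real training images fed to the discriminator then also need to be 128*128, so that d_out/weights is built once with the (16384, 1) shape and the variable-sharing error goes away.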
We are trying to use Learning2Search from Vowpal Wabbit for NER.
We are using the ATIS dataset.
In ATIS there are 127 entities (including the Others category).
The training set has 4978 sentences and the test set has 893.
However, when we run it on the test set, it maps everything to either class 1 (airline name) or class 2 (airport code), which is weird.
We tried another dataset (https://github.com/glample/tagger/tree/master/dataset) and saw the same behavior.
It looks like I am not using it the right way. Any pointers would be of great help.
Code snippet :
with open("/tweetsdb/ner/datasets/atis.pkl") as f:
train, test, dicts = cPickle.load(f)
idx2words = {v: k for k, v in dicts['words2idx'].iteritems()}
idx2labels = {v: k for k, v in dicts['labels2idx'].iteritems()}
idx2tables = {v: k for k, v in dicts['tables2idx'].iteritems()}
#Convert the dataset into a format compatible with Vowpal Wabbit
training_set = []
for i in xrange(len(train[0])):
zip_label_ent_idx = zip(train[2][i], train[0][i])
label_ent_actual = [(int(i[0]), idx2words[i[1]]) for i in zip_label_ent_idx]
training_set.append(label_ent_actual)
# Do like wise to get test chunk
class SequenceLabeler(pyvw.SearchTask):
def __init__(self, vw, sch, num_actions):
pyvw.SearchTask.__init__(self, vw, sch, num_actions)
sch.set_options( sch.AUTO_HAMMING_LOSS | sch.AUTO_CONDITION_FEATURES )
def _run(self, sentence):
output = []
for n in range(len(sentence)):
pos,word = sentence[n]
with self.vw.example({'w': [word]}) as ex:
pred = self.sch.predict(examples=ex, my_tag=n+1, oracle=pos, condition=[(n,'p'), (n-1, 'q')])
output.append(pred)
return output
vw = pyvw.vw("--search 3 --search_task hook --ring_size 1024")
Code for training the model:
# Training
sequenceLabeler = vw.init_search_task(SequenceLabeler)
for i in xrange(3):
    sequenceLabeler.learn(training_set[:10])
Code for Prediction:
pred = []
for i in random.sample(xrange(len(test_set)), 10):
test_example = [ (999, word[1]) for word in test_set[i] ]
test_labels = [ label[0] for label in test_set[i] ]
print 'input sentence:', ' '.join([word[1] for word in test_set[i]])
print 'actual labels:', ' '.join([str(label) for label in test_labels])
print 'predicted labels:', ' '.join([str(pred) for pred in sequenceLabeler.predict(test_example)])
To see the full code, please refer to this notebook:
https://github.com/nsanthanam/ner/blob/master/vowpal_wabbit_atis.ipynb
I am also new to this algorithm, but I did some pilot studies recently.
For your problem, the answer is that you set a wrong parameter in
vw = pyvw.vw("--search 3 --search_task hook --ring_size 1024")
Here, --search should be set to 127 so that vw uses all 127 of your tags:
vw = pyvw.vw("--search 127 --search_task hook --ring_size 1024")
Also, my feeling is that vw doesn't work very well with this many tags. I might be wrong; please let me know your results :)