Is there a way to plot both the training losses and validation losses on the same graph?
It's easy to have two separate scalar summaries for each of them individually, but this puts them on separate graphs. If both are displayed on the same graph it's much easier to see the gap between them and whether or not they have begun to diverge due to overfitting.
Is there a built in way to do this? If not, a work around way? Thank you much!
The work-around I have been using is two SummaryWriters with different log directories, one for the training set and one for the cross-validation set. You will then see something like this:
Rather than displaying the two lines separately, you can instead plot the difference between validation and training losses as its own scalar summary to track the divergence.
This doesn't give as much information on a single plot (compared with adding two summaries), but it makes it easier to compare multiple runs (without adding multiple summaries per run).
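A minimal sketch of this idea, using torch.utils.tensorboard with placeholder loss values (the tag name, log directory, and dummy numbers are illustrative assumptions, not part of the original answer):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('./logs/gap')
for step in range(100):
    # Dummy values for illustration; substitute your real losses here.
    train_loss = 1.0 / (step + 1)
    val_loss = 1.2 / (step + 1) + 0.005 * step
    # Log only the gap, so a single curve per run tracks the divergence.
    writer.add_scalar('val_minus_train_loss', val_loss - train_loss, step)
writer.close()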
Just for anyone coming across this via a search: the current best practice to achieve this goal is to just use the SummaryWriter.add_scalars method from torch.utils.tensorboard. From the docs:
import numpy as np
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
r = 5
for i in range(100):
    writer.add_scalars('run_14h', {'xsinx': i * np.sin(i / r),
                                   'xcosx': i * np.cos(i / r),
                                   'tanx': np.tan(i / r)}, i)
writer.close()
# This call adds three values to the same scalar plot with the tag
# 'run_14h' in TensorBoard's scalar section.
Expected result:
Many thanks to niko for the tip on Custom Scalars.
I was confused by the official custom_scalar_demo.py because there's so much going on, and I had to study it for quite a while before I figured out how it worked.
To show exactly what needs to be done to create a custom scalar graph for an existing model, I put together the following complete example:
# + <
# We need these to make a custom protocol buffer to display custom scalars.
# See https://developers.google.com/protocol-buffers/
from tensorboard.plugins.custom_scalar import layout_pb2
from tensorboard.summary.v1 import custom_scalar_pb
# >

import tensorflow as tf
from time import time
import re

# Initial values
(x0, y0) = (-1, 1)

# This is useful only when re-running code (e.g. Jupyter).
tf.reset_default_graph()

# Set up variables.
x = tf.Variable(x0, name="X", dtype=tf.float64)
y = tf.Variable(y0, name="Y", dtype=tf.float64)

# Define loss function and give it a name.
loss = tf.square(x - 3*y) + tf.square(x + y)
loss = tf.identity(loss, name='my_loss')

# Define the op for performing gradient descent.
minimize_step_op = tf.train.GradientDescentOptimizer(0.092).minimize(loss)

# List quantities to summarize in a dictionary
# with (key, value) = (name, Tensor).
to_summarize = dict(
    X=x,
    Y_plus_2=y + 2,
)

# Build scalar summaries corresponding to to_summarize.
# This should be done in a separate name scope to avoid name collisions
# between summaries and their respective tensors. The name scope also
# gives a title to a group of scalars in TensorBoard.
with tf.name_scope('scalar_summaries'):
    my_var_summary_op = tf.summary.merge(
        [tf.summary.scalar(name, var)
         for name, var in to_summarize.items()]
    )

# + <
# This constructs the layout for the custom scalar, and specifies
# which scalars to plot.
layout_summary = custom_scalar_pb(
    layout_pb2.Layout(category=[
        layout_pb2.Category(
            title='Custom scalar summary group',
            chart=[
                layout_pb2.Chart(
                    title='Custom scalar summary chart',
                    multiline=layout_pb2.MultilineChartContent(
                        # regex to select only summaries which
                        # are in "scalar_summaries" name scope:
                        tag=[r'^scalar_summaries\/']
                    )
                )
            ])
    ])
)
# >

# Create session.
with tf.Session() as sess:

    # Initialize session.
    sess.run(tf.global_variables_initializer())

    # Create writer.
    with tf.summary.FileWriter(f'./logs/session_{int(time())}') as writer:

        # Write the session graph.
        writer.add_graph(sess.graph)  # (not necessary for scalars)

        # + <
        # Define the layout for creating custom scalars in terms
        # of the scalars.
        writer.add_summary(layout_summary)
        # >

        # Main iteration loop.
        for i in range(50):
            current_summary = sess.run(my_var_summary_op)
            writer.add_summary(current_summary, global_step=i)
            writer.flush()
            sess.run(minimize_step_op)
The above consists of an "original model" augmented by three blocks of code indicated by
# + <
[code to add custom scalars goes here]
# >
My "original model" has these scalars:
and this graph:
My modified model has the same scalars and graph, together with the following custom scalar:
This custom scalar chart is simply a layout which combines the original two scalar charts.
Unfortunately the resulting graph is hard to read because both values have the same color. (They are distinguished only by marker.) This is however consistent with TensorBoard's convention of having one color per log.
Explanation
The idea is as follows. You have some group of variables which you want to plot inside a single chart. As a prerequisite, TensorBoard should be plotting each variable individually under the "SCALARS" heading. (This is accomplished by creating a scalar summary for each variable, and then writing those summaries to the log. Nothing new here.)
To plot multiple variables in the same chart, we tell TensorBoard which of these summaries to group together. The specified summaries are then combined into a single chart under the "CUSTOM SCALARS" heading. We accomplish this by writing a "Layout" once at the beginning of the log. Once TensorBoard receives the layout, it automatically produces a combined chart under "CUSTOM SCALARS" as the ordinary "SCALARS" are updated.
Assuming that your "original model" is already sending your variables (as scalar summaries) to TensorBoard, the only modification necessary is to inject the layout before your main iteration loop starts. Each custom scalar chart selects which summaries to plot by means of a regular expression. Thus for each group of variables to be plotted together, it can be useful to place the variables' respective summaries into a separate name scope. (That way your regex can simply select all summaries under that name scope.)
Important Note: The op which generates the summary of a variable is distinct from the variable itself. For example, if I have a variable ns1/my_var, I can create a summary ns2/summary_op_for_myvar. The custom scalars chart layout cares only about the summary op, not the name or scope of the original variable.
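A tiny sketch of that distinction, reusing the names from the note above (these are illustrative placeholders, not code from the original answer):

import tensorflow as tf

# The variable lives under one name scope...
with tf.name_scope('ns1'):
    my_var = tf.Variable(0.0, name='my_var')

# ...but its summary op can live under another. The custom scalars layout
# regex must match the summary op's tag ('ns2/summary_op_for_myvar'),
# not the variable's name ('ns1/my_var').
with tf.name_scope('ns2'):
    tf.summary.scalar('summary_op_for_myvar', my_var)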
Here is an example that creates two tf.summary.FileWriters sharing the same root directory, with a single tf.summary.scalar shared by both. At every time step, the merged summary is evaluated and written by each tf.summary.FileWriter.
import os

import tqdm
import tensorflow as tf


def tb_test():
    sess = tf.Session()

    x = tf.placeholder(dtype=tf.float32)
    summary = tf.summary.scalar('Values', x)
    merged = tf.summary.merge_all()

    sess.run(tf.global_variables_initializer())

    writer_1 = tf.summary.FileWriter(os.path.join('tb_summary', 'train'))
    writer_2 = tf.summary.FileWriter(os.path.join('tb_summary', 'eval'))

    for i in tqdm.tqdm(range(200)):
        # train
        summary_1 = sess.run(merged, feed_dict={x: i - 10})
        writer_1.add_summary(summary_1, i)
        # eval
        summary_2 = sess.run(merged, feed_dict={x: i + 10})
        writer_2.add_summary(summary_2, i)

    writer_1.close()
    writer_2.close()


if __name__ == '__main__':
    tb_test()
Here is the result:
The orange line shows the result of the evaluation stage, and correspondingly, the blue line illustrates the data of the training stage.
Also, there is a very useful post by the TF team to which you can refer.
For completeness: since TensorBoard 1.5.0 this is now possible.
You can use the custom scalars plugin. For this, you first need to create a TensorBoard layout configuration and write it to the event file. From the TensorBoard example:
import tensorflow as tf
from tensorboard import summary
from tensorboard.plugins.custom_scalar import layout_pb2

# The layout has to be specified and written only once, not at every step
layout_summary = summary.custom_scalar_pb(layout_pb2.Layout(
    category=[
        layout_pb2.Category(
            title='losses',
            chart=[
                layout_pb2.Chart(
                    title='losses',
                    multiline=layout_pb2.MultilineChartContent(
                        tag=[r'loss.*'],
                    )),
                layout_pb2.Chart(
                    title='baz',
                    margin=layout_pb2.MarginChartContent(
                        series=[
                            layout_pb2.MarginChartContent.Series(
                                value='loss/baz/scalar_summary',
                                lower='baz_lower/baz/scalar_summary',
                                upper='baz_upper/baz/scalar_summary'),
                        ],
                    )),
            ]),
        layout_pb2.Category(
            title='trig functions',
            chart=[
                layout_pb2.Chart(
                    title='wave trig functions',
                    multiline=layout_pb2.MultilineChartContent(
                        tag=[r'trigFunctions/cosine', r'trigFunctions/sine'],
                    )),
                # The range of tangent is different. Let's give it its own chart.
                layout_pb2.Chart(
                    title='tan',
                    multiline=layout_pb2.MultilineChartContent(
                        tag=[r'trigFunctions/tangent'],
                    )),
            ],
            # This category we care less about. Let's make it initially closed.
            closed=True),
    ]))

writer = tf.summary.FileWriter(".")
writer.add_summary(layout_summary)
# ...
# Add any summary data you want to the file
# ...
writer.close()
A Category is a group of Charts. Each Chart corresponds to a single plot which displays several scalars together. The Chart can plot simple scalars (MultilineChartContent) or filled areas (MarginChartContent, e.g. when you want to plot the deviation of some value). The tag member of MultilineChartContent must be a list of regexes which match the tags of the scalars that you want to group in the Chart. For more details check the proto definitions of the objects in https://github.com/tensorflow/tensorboard/blob/master/tensorboard/plugins/custom_scalar/layout.proto. Note that if you have several FileWriters writing to the same directory, you need to write the layout in only one of the files. Writing it to a separate file also works.
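Applied to the original question, a minimal layout could look like the sketch below (the tags 'loss/train' and 'loss/validation' and the log directory are assumed placeholders; use whatever tags your scalar summaries actually have):

import tensorflow as tf
from tensorboard import summary
from tensorboard.plugins.custom_scalar import layout_pb2

# One Category with one multiline Chart whose regexes match both loss tags,
# so the train and validation curves share a single plot under CUSTOM SCALARS.
layout = summary.custom_scalar_pb(layout_pb2.Layout(category=[
    layout_pb2.Category(
        title='losses',
        chart=[
            layout_pb2.Chart(
                title='train vs. validation',
                multiline=layout_pb2.MultilineChartContent(
                    tag=[r'loss/train', r'loss/validation'])),
        ]),
]))

writer = tf.summary.FileWriter('./logs')
writer.add_summary(layout)  # write the layout once, then log the scalars as usual
writer.close()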
To view the data in TensorBoard, you need to open the Custom Scalars tab. Here is an example image of what to expect https://user-images.githubusercontent.com/4221553/32865784-840edf52-ca19-11e7-88bc-1806b1243e0d.png
The solution in PyTorch 1.5, using the approach of two writers:
import os
from torch.utils.tensorboard import SummaryWriter

LOG_DIR = "experiment_dir"
train_writer = SummaryWriter(os.path.join(LOG_DIR, "train"))
val_writer = SummaryWriter(os.path.join(LOG_DIR, "val"))

# while in the training loop
for k, v in train_losses.items():
    train_writer.add_scalar(k, v, global_step)

# in the validation loop
for k, v in val_losses.items():
    val_writer.add_scalar(k, v, global_step)

# at the end
train_writer.close()
val_writer.close()
Keys in the train_losses dict have to match those in the val_losses to be grouped on the same graph.
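For example (hypothetical keys and values, just to illustrate the matching rule):

# Both dicts use the key 'cross_entropy', so TensorBoard overlays the
# "train" and "val" runs on the same 'cross_entropy' chart; 'l2_reg'
# appears only for the train run.
train_losses = {'cross_entropy': 0.41, 'l2_reg': 0.03}
val_losses = {'cross_entropy': 0.57}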
TensorBoard is a really nice tool, but its declarative nature can make it difficult to get it to do exactly what you want.
I recommend you checkout Losswise (https://losswise.com) for plotting and keeping track of loss functions as an alternative to Tensorboard. With Losswise you specify exactly what should be graphed together:
import losswise
losswise.set_api_key("project api key")
session = losswise.Session(tag='my_special_lstm', max_iter=10)
loss_graph = session.graph('loss', kind='min')
# train an iteration of your model...
loss_graph.append(x, {'train_loss': train_loss, 'validation_loss': validation_loss})
# keep training model...
session.done()
And then you get something that looks like:
Notice how the data is fed to a particular graph explicitly via the loss_graph.append call; it then appears in your project's dashboard.
In addition, for the above example Losswise would automatically generate a table with columns for min(training_loss) and min(validation_loss) so you can easily compare summary statistics across your experiments. Very useful for comparing results across a large number of experiments.
Let me contribute a code sample to go with the answer given by @Lifu Huang. First download logger.py from here, and then:
from logger import Logger

def train_model(parameters...):
    N_EPOCHS = 15
    # Set the logger
    train_logger = Logger('./summaries/train_logs')
    test_logger = Logger('./summaries/test_logs')
    for epoch in range(N_EPOCHS):
        # Code to get train_loss and test_loss
        # ============ TensorBoard logging ============#
        # Log the scalar values
        train_info = {
            'loss': train_loss,
        }
        test_info = {
            'loss': test_loss,
        }
        for tag, value in train_info.items():
            train_logger.scalar_summary(tag, value, step=epoch)
        for tag, value in test_info.items():
            test_logger.scalar_summary(tag, value, step=epoch)
Finally you run tensorboard --logdir=summaries/ --port=6006 and you get:
Related
So my goal is basically implementing global top-k subsampling. Gradient sparsification is quite simple and I have already done this, building on the stateful clients example, but now I would like to use encoders as you have recommended here at page 28. Additionally, I would like to average only the non-zero gradients: say we have 10 clients but only 4 have non-zero gradients at a given position for a communication round, then I would like to divide the sum of these gradients by 4, not 10. I am hoping to achieve this by summing gradients in the numerator and masks (1s and 0s) in the denominator. Also, moving forward I will add randomness to the gradient selection, so it is imperative that I create those masks concurrently with the gradient selection. The code I have right now is:
import tensorflow as tf
from tensorflow_model_optimization.python.core.internal import tensor_encoding as te


@te.core.tf_style_adaptive_encoding_stage
class GrandienrSparsificationEncodingStage(te.core.AdaptiveEncodingStageInterface):
    """An example custom implementation of an `EncodingStageInterface`.

    Note: This is likely not what one would want to use in practice. Rather, this
    serves as an illustration of how a custom compression algorithm can be
    provided to `tff`.

    This encoding stage is expected to be run in an iterative manner, and
    alternatively zeroes out values corresponding to odd and even indices. Given
    the determinism of the non-zero indices selection, the encoded structure does
    not need to be represented as a sparse vector, but only the non-zero values
    are necessary. In the decode method, the state (i.e., params derived from the
    state) is used to reconstruct the corresponding indices.

    Thus, this example encoding stage can realize representation saving of 2x.
    """

    ENCODED_VALUES_KEY = 'stateful_topk_values'
    INDICES_KEY = 'indices'
    SHAPES_KEY = 'shapes'
    ERROR_COMPENSATION_KEY = 'error_compensation'

    def encode(self, x, encode_params):
        shapes_list = [tf.shape(y) for y in x]
        flattened = tf.nest.map_structure(lambda y: tf.reshape(y, [-1]), x)
        gradients = tf.concat(flattened, axis=0)
        error_compensation = encode_params[self.ERROR_COMPENSATION_KEY]
        gradients_and_error_compensation = tf.math.add(gradients, error_compensation)

        percentage = tf.constant(0.1, dtype=tf.float32)
        k_float = tf.multiply(percentage, tf.cast(tf.size(gradients_and_error_compensation), tf.float32))
        k_int = tf.cast(tf.math.round(k_float), dtype=tf.int32)

        values, indices = tf.math.top_k(tf.math.abs(gradients_and_error_compensation), k=k_int, sorted=False)
        indices = tf.expand_dims(indices, 1)
        sparse_gradients_and_error_compensation = tf.scatter_nd(indices, values, tf.shape(gradients_and_error_compensation))
        new_error_compensation = tf.math.subtract(gradients_and_error_compensation, sparse_gradients_and_error_compensation)
        state_update_tensors = {self.ERROR_COMPENSATION_KEY: new_error_compensation}

        encoded_x = {self.ENCODED_VALUES_KEY: values,
                     self.INDICES_KEY: indices,
                     self.SHAPES_KEY: shapes_list}

        return encoded_x, state_update_tensors

    def decode(self,
               encoded_tensors,
               decode_params,
               num_summands=None,
               shape=None):
        del num_summands, decode_params, shape  # Unused.
        flat_shape = tf.math.reduce_sum([tf.math.reduce_prod(shape) for shape in encoded_tensors[self.SHAPES_KEY]])
        sizes_list = [tf.math.reduce_prod(shape) for shape in encoded_tensors[self.SHAPES_KEY]]
        scatter_tensor = tf.scatter_nd(
            indices=encoded_tensors[self.INDICES_KEY],
            updates=encoded_tensors[self.ENCODED_VALUES_KEY],
            shape=[flat_shape])
        nonzero_locations = tf.nest.map_structure(lambda x: tf.cast(tf.where(tf.math.greater(x, 0), 1, 0), tf.float32), scatter_tensor)
        reshaped_tensor = [tf.reshape(flat_tensor, shape=shape) for flat_tensor, shape in
                           zip(tf.split(scatter_tensor, sizes_list), encoded_tensors[self.SHAPES_KEY])]
        reshaped_nonzero = [tf.reshape(flat_tensor, shape=shape) for flat_tensor, shape in
                            zip(tf.split(nonzero_locations, sizes_list), encoded_tensors[self.SHAPES_KEY])]
        return reshaped_tensor, reshaped_nonzero

    def initial_state(self):
        return {self.ERROR_COMPENSATION_KEY: tf.constant(0, dtype=tf.float32)}

    def update_state(self, state, state_update_tensors):
        return {self.ERROR_COMPENSATION_KEY: state_update_tensors[self.ERROR_COMPENSATION_KEY]}

    def get_params(self, state):
        encode_params = {self.ERROR_COMPENSATION_KEY: state[self.ERROR_COMPENSATION_KEY]}
        decode_params = {}
        return encode_params, decode_params

    @property
    def name(self):
        return 'gradient_sparsification_encoding_stage'

    @property
    def compressible_tensors_keys(self):
        return False

    @property
    def commutes_with_sum(self):
        return False

    @property
    def decode_needs_input_shape(self):
        return False

    @property
    def state_update_aggregation_modes(self):
        return {}
I have run some simple tests manually, following the steps you outlined here at page 45. It works, but I have some questions/problems.
When I use a list of tensors of the same shape (e.g. two 2x25 tensors) as the input x of encode, it works without any issues, but when I try to use a list of tensors of different shapes (2x20 and 6x10) it gives an error saying:
InvalidArgumentError: Shapes of all inputs must match: values[0].shape = [2,20] != values[1].shape = [6,10] [Op:Pack] name: packed
How can I resolve this issue? As I said, I want to use global top-k, so it is essential that I encode the entire set of trainable model weights at once. Take the CNN model used here: all the tensors have different shapes.
How can I do the averaging I described at the beginning? For example, here you have done:
mean_factory = tff.aggregators.MeanFactory(
    tff.aggregators.EncodedSumFactory(mean_encoder_fn),  # numerator
    tff.aggregators.EncodedSumFactory(mean_encoder_fn))  # denominator
Is there a way to repeat this with one output of decode going to the numerator and the other going to the denominator? How can I handle dividing 0 by 0? TensorFlow has a divide_no_nan function; can I use it somehow, or do I need to add eps to each?
How is partitioning handled when I use encoders? Does each client get a unique encoder holding a unique state for it? As you have discussed here at page 6, client states are used in cross-silo settings, yet what happens if the client ordering changes?
Here you have recommended using the stateful clients example. Can you explain this a bit further? I mean, in run_one_round, where exactly do the encoders go, and how are they used/combined with the client update and aggregation?
I have some additional information such as sparsity I want to pass to encode. What is the suggested method for doing that?
Here are some answers, hope it helps:
If you want to treat all of the aggregated structure just as a single tensor, use concat_factory as the outermost aggregator. That will concatenate the entire structure into a rank-1 Tensor at the clients, and then unpack it back to the original structure at the end. Example use: tff.aggregators.concat_factory(tff.aggregators.MeanFactory(...))
Note that the encoding stage objects are meant to work with a single tensor, so what you describe with identically shaped tensors probably works only accidentally.
There are two options.
a. Modify the client training code such that the weights being passed to the weighted aggregator are already what you want them to be (the zero/one mask). In the stateful clients example you link, that would be here. You will then get what you need by default (by summing the numerator).
b. Modify UnweightedMeanFactory to do exactly the variant of averaging you describe and use that. A starting point would be modifying this.
(and 4.) I think that is what you would need to implement. In the same way existing client states are initialized in the example here, you would need to extend it to contain the aggregator states and make sure those are sampled together with the clients, as done here. Then, to integrate the aggregators in the example, you would need to replace this hard-coded tff.federated_mean. An example of such integration is in the implementation of tff.learning.build_federated_averaging_process, primarily here.
I am not sure what the question is. Perhaps get the previous parts working (that seems like a prerequisite to me), and then clarify and ask in a new post?
I use TensorFlow to build a super-resolution convolutional neural network for enhancing image resolution. The network accepts a low-resolution image as input and produces a high-resolution image as output.
For training, I use tf.estimator.Estimator
def get_estimator(run_config=None, params=None):
    """Return the model as a Tensorflow Estimator object.
    Args:
        run_config (RunConfig): Configuration for Estimator run.
        params (HParams): hyperparameters.
    """
    return tf.estimator.Estimator(
        model_fn=model_fn,  # First-class function
        params=params,  # HParams
        config=run_config  # RunConfig
    )
wrapped by tf.contrib.learn.Experiment
def experiment_fn(run_config, params):
    """Create an experiment to train and evaluate the model.
    Args:
        run_config (RunConfig): Configuration for Estimator run.
        params (HParam): Hyperparameters
    Returns:
        (Experiment) Experiment for training the mnist model.
    """
    # You can change a subset of the run_config properties as
    run_config = run_config.replace(save_checkpoints_steps=params.min_eval_frequency)
    estimator = get_estimator(run_config, params)
    # Setup data loaders
    train_input_fn = get_input_fn(params.filenames, params.epoch, True, params.batch_size)
    eval_input_fn = get_input_fn(params.filenames, 1, False, params.batch_size)
    # Define the experiment
    experiment = tf.contrib.learn.Experiment(
        estimator=estimator,  # Estimator
        train_input_fn=train_input_fn,  # First-class function
        eval_input_fn=eval_input_fn,  # First-class function
        train_steps=params.train_steps,  # Minibatch steps
        min_eval_frequency=params.min_eval_frequency,  # Eval frequency
        eval_steps=params.eval_steps  # Minibatch steps
    )
    return experiment
And I run it via tf.contrib.learn.learn_runner as follows:
def run_experiment(config, session):
    assert os.path.exists(config.tfrecord_dir)
    assert os.path.exists(os.path.join(config.tfrecord_dir, config.dataset, config.subset))
    save_config(config.summaries_dir, config)
    filenames = get_tfrecord_files(config)
    batch_number = min(len(filenames), config.train_size) // config.batch_size
    logging.info('Total number of batches %d' % batch_number)
    params = tf.contrib.training.HParams(
        learning_rate=config.learning_rate,
        device=config.device,
        epoch=config.epoch,
        batch_size=config.batch_size,
        min_eval_frequency=100,
        train_steps=None,  # Use train feeder until its empty
        eval_steps=1,  # Use 1 step of evaluation feeder
        filenames=filenames
    )
    run_config = tf.contrib.learn.RunConfig(model_dir=config.checkpoint_dir)
    learn_runner.run(
        experiment_fn=experiment_fn,  # First-class function
        run_config=run_config,  # RunConfig
        schedule="train_and_evaluate",  # What to run
        hparams=params  # HParams
    )
The Experiment class provides a train_and_evaluate method that evaluates during training.
My question is: how can I get an evaluation result (an output image) while training the CNN? I want to see intermediate training results.
My project on github
I think you're looking to add an image summary to your model using tf.summary.image.
It makes it easy to visualize images during training in Tensorboard:
def model_fn(...):
    ...
    # max_outputs controls the number of images in the batch you want to display
    tf.summary.image("train_images", images, max_outputs=3)

    # ...

    return tf.estimator.EstimatorSpec(...)
During evaluation, I don't think there is an easy way to display an image inside tf.estimator. The issue is that during evaluation, only integer or float values can be displayed.
In more detail, at eval time you return eval_metric_ops containing, for instance, your accuracy. TensorFlow will display every integer or float value from this dict in TensorBoard, but will give you a warning if you try to display anything else (e.g. images). (Source code: function _write_dict_to_summary)
WARNING:tensorflow:Skipping summary for eval_images, must be a float, np.float32, np.int64, np.int32 or int.
A workaround could be to get back the value of the images outside of tf.estimator and display them manually in TensorBoard.
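A rough sketch of that workaround, assuming TF 1.x, that you fetch predicted images via estimator.predict(), and that they come back as HxWxC arrays in [0, 1] under a hypothetical 'output_image' key (none of these names come from the original answer):

import io

import numpy as np
import tensorflow as tf
from PIL import Image

def write_image_summary(writer, tag, image_array, global_step):
    # Encode the uint8 HxWxC array as PNG and wrap it in an image summary proto.
    buf = io.BytesIO()
    Image.fromarray(image_array).save(buf, format='PNG')
    image_proto = tf.Summary.Image(
        height=image_array.shape[0],
        width=image_array.shape[1],
        colorspace=image_array.shape[2],
        encoded_image_string=buf.getvalue())
    summary = tf.Summary(value=[tf.Summary.Value(tag=tag, image=image_proto)])
    writer.add_summary(summary, global_step)

writer = tf.summary.FileWriter('./logs/eval_images')
for step, pred in enumerate(estimator.predict(input_fn=eval_input_fn)):
    image = (np.clip(pred['output_image'], 0.0, 1.0) * 255).astype(np.uint8)
    write_image_summary(writer, 'eval_images', image, step)
writer.close()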
Edit: there is another related question on stackoverflow, and two GitHub issues here and here to track progress on this.
From what I understand, they will try to make it easy to return an image summary in eval_metric_ops that will automatically appear in TensorBoard.
I am basically clustering some of my documents using the mini_batch_kmeans and kmeans algorithms. I simply followed the tutorial on the scikit-learn website; the link for that is given below:
http://scikit-learn.org/stable/auto_examples/text/document_clustering.html
They use several methods for vectorizing, one of which is HashingVectorizer. With the HashingVectorizer they make a pipeline with the TfidfTransformer() method.
# Perform an IDF normalization on the output of HashingVectorizer
hasher = HashingVectorizer(n_features=opts.n_features,
                           stop_words='english', non_negative=True,
                           norm=None, binary=False)
vectorizer = make_pipeline(hasher, TfidfTransformer())
Once I do so, the vectorizer I get from that does not have the get_feature_names() method. But since I am using it for clustering, I need to get the "terms" using get_feature_names():
terms = vectorizer.get_feature_names()
for i in range(true_k):
    print("Cluster %d:" % i, end='')
    for ind in order_centroids[i, :10]:
        print(' %s' % terms[ind], end='')
    print()
How do I solve this error?
My whole code is shown below:
X_train_vecs, vectorizer = vector_bow.count_tfidf_vectorizer(_contents)
mini_kmeans_batch = MiniBatchKmeansTechnique()
# MiniBatchKmeans without the LSA dimensionality reduction
mini_kmeans_batch.mini_kmeans_technique(number_cluster=8, X_train_vecs=X_train_vecs,
                                        vectorizer=vectorizer, filenames=_filenames,
                                        contents=_contents, is_dimension_reduced=False)
The count vectorizer piped with tfidf:
def count_tfidf_vectorizer(self, contents):
    count_vect = CountVectorizer()
    vectorizer = make_pipeline(count_vect, TfidfTransformer())
    X_train_vecs = vectorizer.fit_transform(contents)
    print("The count of bow : ", X_train_vecs.shape)
    return X_train_vecs, vectorizer
and the mini_batch_kmeans class is as below:
class MiniBatchKmeansTechnique():
    def mini_kmeans_technique(self, number_cluster, X_train_vecs, vectorizer,
                              filenames, contents, svd=None, is_dimension_reduced=True):
        km = MiniBatchKMeans(n_clusters=number_cluster, init='k-means++', max_iter=100, n_init=10,
                             init_size=1000, batch_size=1000, verbose=True, random_state=42)
        print("Clustering sparse data with %s" % km)
        t0 = time()
        km.fit(X_train_vecs)
        print("done in %0.3fs" % (time() - t0))
        print()
        cluster_labels = km.labels_.tolist()
        print("List of the cluster names is : ", cluster_labels)
        data = {'filename': filenames, 'contents': contents, 'cluster_label': cluster_labels}
        frame = pd.DataFrame(data=data, index=[cluster_labels], columns=['filename', 'contents', 'cluster_label'])
        print(frame['cluster_label'].value_counts(sort=True, ascending=False))
        print()
        grouped = frame['cluster_label'].groupby(frame['cluster_label'])
        print(grouped.mean())
        print()
        print("Top Terms Per Cluster :")
        if is_dimension_reduced:
            if svd != None:
                original_space_centroids = svd.inverse_transform(km.cluster_centers_)
                order_centroids = original_space_centroids.argsort()[:, ::-1]
        else:
            order_centroids = km.cluster_centers_.argsort()[:, ::-1]
        terms = vectorizer.get_feature_names()
        for i in range(number_cluster):
            print("Cluster %d:" % i, end=' ')
            for ind in order_centroids[i, :10]:
                print(' %s' % terms[ind], end=',')
            print()
            print("Cluster %d filenames:" % i, end='')
            for file in frame.ix[i]['filename'].values.tolist():
                print(' %s,' % file, end='')
            print()
Pipeline doesn't have a get_feature_names() method, as it is not straightforward to implement this method for Pipeline - one needs to consider all pipeline steps to get feature names. See https://github.com/scikit-learn/scikit-learn/issues/6424, https://github.com/scikit-learn/scikit-learn/issues/6425, etc. - there are a lot of related tickets and several attempts to fix it.
If your pipeline is simple (TfidfVectorizer followed by MiniBatchKMeans) then you can get feature names from TfidfVectorizer.
If you want to use HashingVectorizer, it is more complicated, as HashingVectorizer doesn't provide feature names by design. HashingVectorizer doesn't store a vocabulary and uses hashes instead - this means it can be applied in an online setting and that it doesn't require any RAM - but the tradeoff is exactly that you don't get feature names.
It is still possible to get feature names from HashingVectorizer though; to do this, you need to apply it to a sample of documents, store which hashes correspond to which words, and this way learn what these hashes mean, i.e. what the feature names are. There may be collisions, so it is not possible to be 100% sure the feature names are correct, but usually this approach works ok. This approach is implemented in the eli5 library; see http://eli5.readthedocs.io/en/latest/tutorials/sklearn-text.html#debugging-hashingvectorizer for an example. You will have to do something like this, using InvertableHashingVectorizer:
from eli5.sklearn import InvertableHashingVectorizer

ivec = InvertableHashingVectorizer(vec)  # vec is a HashingVectorizer instance
# content_sample is a sample from contents; you can use the
# whole contents array, or just e.g. every 10th element
ivec.fit(content_sample)
hashing_feat_names = ivec.get_feature_names()
Then you can use hashing_feat_names as your feature names, as TfidfTransformer doesn't change input vector size and just scales the same features.
From the make_pipeline documentation:
This is a shorthand for the Pipeline constructor; it does not require, and
does not permit, naming the estimators. Instead, their names will be set
to the lowercase of their types automatically.
So, in order to access the feature names after you have fitted the data, you can:
# Perform an IDF normalization on the output of HashingVectorizer
from sklearn.feature_extraction.text import HashingVectorizer, TfidfVectorizer
from sklearn.pipeline import make_pipeline

hasher = HashingVectorizer(n_features=10,
                           stop_words='english', non_negative=True,
                           norm=None, binary=False)
tfidf = TfidfVectorizer()
vectorizer = make_pipeline(hasher, tfidf)

# ...
# fit to the data
# ...

# use the instance's class name to lower
terms = vectorizer.named_steps[tfidf.__class__.__name__.lower()].get_feature_names()
# or to be more precise, as used in `_name_estimators`:
# terms = vectorizer.named_steps[type(tfidf).__name__.lower()].get_feature_names()
# btw TfidfTransformer and HashingVectorizer do not have get_feature_names afaik
Hope this helps, good luck!
Edit: After seeing your updated question with the example you follow, @Vivek Kumar is correct; this code, terms = vectorizer.get_feature_names(), will not run for the pipeline but only when:
vectorizer = TfidfVectorizer(max_df=0.5, max_features=opts.n_features,
                             min_df=2, stop_words='english',
                             use_idf=opts.use_idf)
I need to be able to plot e.g. the cost function values as a function of some parameter (for example the bias b below). If e.g. my graph is something like (pseudocode)
y = g(W x + b),
cost = sum(y ** 2),
where W and b are tf.Variables, I'd like to change b from, say, 0 to 1 and plot the values of cost.
Please note that I do not want to call eval or session.run after each change of b because of the overhead! E.g. for 100 plot points that would take forever.
I know of the existence of tf.assign, but doing something like [assign, cost, assign, cost, ...] and evaluating that doesn't seem to work
I guess I could update the value of b inside the graph and call cost after each update, but I wouldn't really want to change the graph
So how could I do this in an efficient manner? Thank you in advance!
EDIT: actually this is probably impossible to do without calling eval/run between the iterations... oh well...
In TensorFlow, if you use variables, you can only evaluate them after initialization, so you probably cannot evaluate them without a session.
But you can change the parameters in the following way:
import tensorflow as tf

my_var = tf.Variable(10)

with tf.Session() as sess:
    sess.run(my_var.initializer)
    print(sess.run(my_var.assign_sub(2)))  # >> 8
    print(sess.run(my_var.assign_sub(2)))  # >> 6
This sounds like a use case for feeding a different value at each step. Assuming b is a scalar variable, you could code your loop with something like the following:
import numpy as np

sess = tf.Session()

# Vary `b_val` from 0 to 1 in 100 steps.
for b_val in np.linspace(0, 1, 100):
    # Evaluate `cost` using `b = b_val`.
    cost_val = sess.run(cost, feed_dict={b: b_val})
    # Do something with `cost_val`....
How can one add a batching mechanism to the input_fn in the TensorFlow Wide & Deep Learning Tutorial, given that some features are represented as tf.SparseTensor objects?
I have made many attempts around adding tf.train.slice_input_producer and tf.train.batch to the original code (see below), but have failed miserably so far.
I would like to keep the global structure of that input_fn, as it is handy while training and evaluating the model.
Can someone help, please?
def input_fn(df):
    # Creates a dictionary mapping from each continuous feature column name (k) to
    # the values of that column stored in a constant Tensor.
    continuous_cols = {k: tf.constant(df[k].values)
                       for k in CONTINUOUS_COLUMNS}
    # Creates a dictionary mapping from each categorical feature column name (k)
    # to the values of that column stored in a tf.SparseTensor.
    categorical_cols = {k: tf.SparseTensor(indices=[[i, 0] for i in range(df[k].size)],
                                           values=df[k].values,
                                           shape=[df[k].size, 1]) for k in CATEGORICAL_COLUMNS}
    # Merges the two dictionaries into one.
    feature_cols = dict(continuous_cols.items() + categorical_cols.items())
    # Converts the label column into a constant Tensor.
    labels = tf.constant(df[LABEL_COLUMN].values)
    '''
    Changes from here:
    '''
    features_slices, labels_slices = tf.train.slice_input_producer([feature_cols, labels], ...)
    features_batches, labels_batches = tf.train.batch([features_slices, labels_slices], ...)
    # Returns the feature and label batches.
    return features_batches, labels_batches