What does graph argument in tf.Session() do? - machine-learning

I am having trouble understanding the graph argument in tf.Session(). I tried looking it up on the TensorFlow website (:link) but couldn't understand much.
I am trying to find out the difference between tf.Session() and tf.Session(graph=some_graph_inserted_here).
Question Context
Code A (Not Working):
def predict():
    with tf.name_scope("predict"):
        with tf.Session() as sess:
            saver = tf.train.import_meta_graph("saved_models/testing.meta")
            saver.restore(sess, "saved_models/testing")
            loaded_graph = tf.get_default_graph()
            output_ = loaded_graph.get_tensor_by_name('loss/network/output_layer/BiasAdd:0')
            _x = loaded_graph.get_tensor_by_name('x:0')
            print sess.run(output_, feed_dict={_x: np.array([12003]).reshape([-1, 1])})
This code fails at saver = tf.train.import_meta_graph("saved_models/testing.meta") with the following error: ValueError: cannot add op with name hidden_layer1/kernel/Adam as that name is already used.
Code B (Working):
def predict():
    with tf.name_scope("predict"):
        loaded_graph = tf.Graph()
        with tf.Session(graph=loaded_graph) as sess:
            saver = tf.train.import_meta_graph("saved_models/testing.meta")
            saver.restore(sess, "saved_models/testing")
            output_ = loaded_graph.get_tensor_by_name('loss/network/output_layer/BiasAdd:0')
            _x = loaded_graph.get_tensor_by_name('x:0')
            print sess.run(output_, feed_dict={_x: np.array([12003]).reshape([-1, 1])})
The code does not work if I replace loaded_graph = tf.Graph() with loaded_graph = tf.get_default_graph(). Why?
Full Code if it helps:
(https://gist.github.com/duemaster/f8cf05c0923ebabae476b83e895619ab)

The TensorFlow Graph is an object which contains your various tf.Tensor and tf.Operation objects.
When you create these tensors (e.g. using tf.Variable or tf.constant) or operations (e.g. tf.matmul), they are added to the default graph (look at the graph member of these objects to see which graph they belong to). If you haven't specified anything, this is the graph you get when calling tf.get_default_graph.
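For instance, a quick sketch (TF 1.x) showing that a freshly created tensor lands in the default graph:
import tensorflow as tf

c = tf.constant(1.0)
# The tensor's graph member points at the default graph.
assert c.graph is tf.get_default_graph()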
But you can also work with multiple graphs using a context manager:
g = tf.Graph()
with g.as_default():
    [your code]
If you have created several graphs in your code, you need to pass the graph you want to run as an argument to tf.Session to tell TensorFlow which one to use.
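For example, a minimal sketch (with made-up tensor names) where each session is bound to exactly one graph:
import tensorflow as tf

g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(1.0, name="a")

g2 = tf.Graph()
with g2.as_default():
    b = tf.constant(2.0, name="b")

# Each session only sees the graph it was given.
with tf.Session(graph=g1) as sess:
    print(sess.run(a))  # 1.0; sess.run(b) would fail because b lives in g2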
In Code A, you:
- work with the default graph,
- try to import the meta graph into it (which fails because it already contains some of the nodes), and
- would then restore the model into it,
while in Code B, you:
- create a fresh new graph,
- import the meta graph into it (which succeeds because it is an empty graph), and
- restore the model.
Useful link:
tf.Graph API
Edit:
This piece of code makes Code A work (I reset the default graph to a fresh one, and I removed the predict name_scope):
def predict():
    tf.reset_default_graph()
    with tf.Session() as sess:
        saver = tf.train.import_meta_graph("saved_models/testing.meta")
        saver.restore(sess, "saved_models/testing")
        loaded_graph = tf.get_default_graph()
        output_ = loaded_graph.get_tensor_by_name('loss/network/output_layer/BiasAdd:0')
        _x = loaded_graph.get_tensor_by_name('x:0')
        print(sess.run(output_, feed_dict={_x: np.array([12003]).reshape([-1, 1])}))

In Tensorflow, you are constructing graphs. By default, Tensorflow creates a default (sorry for the tautology) graph, which you can access using tf.get_default_graph(). By default, any new Session object uses this default graph.
In your case, you already have a graph (the default one), and you also saved exactly this graph into the meta file. Then you try to recover this graph using tf.train.import_meta_graph(). However, since your session uses the default graph and you are trying to recover an identical one, you get an error: the operation would duplicate the nodes, which is forbidden.
When you explicitly create a new graph object by calling tf.Graph() and create a Session object using this graph (not the default one), everything is fine, since the nodes are created in another graph.

The function tf.train.import_meta_graph("saved_models/testing.meta") adds all the nodes from the meta file to the current graph. In the first code, the current graph is the default graph, which already has those ops defined, hence the error. In the second case, you load the nodes into a new graph, so it works fine.

When you create a Session you're placing a graph onto a specified device.
If no graph is specified, the Session constructor uses the default one (which you can get using tf.get_default_graph).
Your Code A doesn't work because the current session already has a graph, and that graph already contains the exact nodes you're trying to import.
Your Code B works because you're placing into the Session a new empty graph (created with tf.Graph()): when you import the graph definition there's no collision between the existing nodes in the current session (there are zero, because the graph is empty) and the ones you're importing.

Related

PyDrake CollisionFilterManager Not Applying Filter

I have a system that consists of a robotic manipulator and an object. I want to evaluate signed distances between all collision geometries in the system while excluding collisions between the fingertips of the end-effector and the convex geometries that make up the collision geometry of the object. However, when I use the CollisionFilterManager to try to apply the relevant exclusions, my code still computes signed distances between fingertips and object geometries when calling ComputeSignedDistancePairClosestPoints() much later on downstream.
I have a container class that has the plant and its SceneGraph as attributes. When initializing this class, I try to filter the collisions. Here are the relevant parts of the code:
class SimplifiedClass:
    def __init__(self, ...):
        # initializing plant, contexts, and query object port
        self.diagram = function_generating_diagram_with_plant(...)
        self.plant = self.diagram.GetSubsystemByName("plant")
        self.scene_graph = self.diagram.GetSubsystemByName("scene_graph")
        _diag_context = self.diagram.CreateDefaultContext()
        self.plant_context = self.plant.GetMyMutableContextFromRoot(_diag_context)
        self.sg_context = self.scene_graph.GetMyMutableContextFromRoot(_diag_context)
        self.qo_port = self.scene_graph.get_query_output_port()

        # applying filters
        cfm = self.scene_graph.collision_filter_manager()
        inspector = self.query_object.inspector()
        fingertip_geoms = []
        obj_collision_geoms = []
        gids = inspector.GetAllGeometryIds()
        for g in gids:
            name = inspector.GetName(g)
            # if name.endswith("ds_collision") or name.endswith("collision_1"):
            if name.endswith("tip_collision_1") and "algr" in name:
                fingertip_geoms.append(g)
            elif name.startswith("obj_collision"):
                obj_collision_geoms.append(g)
        ftip_set = GeometrySet(fingertip_geoms)
        obj_set = GeometrySet(obj_collision_geoms)
        cfm.Apply(
            CollisionFilterDeclaration()
            .ExcludeBetween(ftip_set, obj_set)
            .ExcludeWithin(obj_set)
        )

    @property
    def query_object(self):
        return self.qo_port.Eval(self.sg_context)

    def function_that_computes_signed_distances(self, ...):
        # this function calls ComputeSignedDistancePairClosestPoints(), which
        # computes signed distances even between filtered geometry pairs
I've confirmed that the correct geometry IDs are being retrieved during initialization (at least the names are correct). But if I do a simple test counting how many SignedDistancePairs are returned from ComputeSignedDistancePairClosestPoints() before and after the attempt to apply the filter, the same number of pairs is returned, which implies the filter had no effect even immediately after declaring it. I also confirm that the geometries that should be filtered are not, by examining the names associated with the signed distance pairs during my downstream function call.
Is there an obvious bug in my code? If not, where else could the bug be located besides here?
The problem is an age-old problem: model vs context.
In short, SceneGraph stores an interior model so you can construct as you go. When you create a context a copy of that model is placed in the context. That copy is independent. If you continue to modify SceneGraph's model, you'll only observe a change in future contexts you allocate.
In your code above, you've already allocated a context. You acquire a collision filter manager using cfm = self.scene_graph.collision_filter_manager(). This is the SceneGraph model version. You want the other one where you get the manager from the context: self.scene_graph.collision_filter_manager(self.sg_context) as documented here.
Alternatively, you can modify the collision filters before you allocate a context. Or throw out the old context and reallocate. All are viable choices.
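A minimal sketch of that context-based variant, reusing the attribute names from the question (assuming the rest of the initialization stays as it is):
# Get the manager bound to the already-allocated context rather than
# SceneGraph's internal model, so the filter affects the context that is
# actually queried later.
cfm = self.scene_graph.collision_filter_manager(self.sg_context)
cfm.Apply(
    CollisionFilterDeclaration()
    .ExcludeBetween(ftip_set, obj_set)
    .ExcludeWithin(obj_set)
)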

How to know the shape of sparse tensor in tensorflow 2.8

I am trying to understand the code given here by Google. It has a line as below in the function def build_model(ratings, embedding_dim=3, init_stddev=1.)
U = tf.Variable(tf.random_normal(
    [A_train.dense_shape[0], embedding_dim], stddev=init_stddev))
It's assigning random values to the user vector U. What is not clear is where A_train.dense_shape[0] gets its value from. All the online documentation states that without session.run we can't get values out of a tensor; since I am using TensorFlow 2.8, I was hoping to get the values without session.run. The problem is that when I try to print it, inside or outside the function, I don't get a satisfactory result, even with TensorFlow 2.x.
Below are all the prints that I have tried:
tf.print(A_train.dense_shape[0])
print(A_train.dense_shape[0])
Any suggestions on what I am doing wrong here? My TensorFlow version is 2.8.2.
When we write tf.print(A_train.dense_shape[0]), the calculation is still in the graph; this graph must then be executed, which we can do with the code below:
trr, ter = split_dataframe(ratings)       ## this function is defined in the colab notebook given by Google
A_trr = build_rating_sparse_tensor(trr)   ## this function is defined in the colab notebook given by Google
A_trr_shape = tf.print(A_trr.dense_shape[1])  ## print the output
with tf.Session() as sess:
    sess.run(A_trr_shape)  ## execute the shape graph
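Alternatively, if eager execution is enabled (the TF 2.x default, assuming the notebook has not disabled it), dense_shape is a concrete tensor you can read directly; a small sketch with a made-up sparse tensor:
import tensorflow as tf

# Hypothetical sparse tensor standing in for A_train.
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[1.0, 2.0],
                            dense_shape=[3, 4])
print(st.dense_shape.numpy()[0])  # 3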

When are placeholders necessary?

Every TensorFlow example I've seen uses placeholders to feed data into the graph. But my applications work fine without placeholders. According to the documentation, using placeholders is the "best practice", but they seem to make the code unnecessarily complex.
Are there any occasions when placeholders are absolutely necessary?
According to the documentation, using placeholders is the "best practice"
Hold on, this quote is out of context and could be misinterpreted. Placeholders are the best practice when feeding data through feed_dict.
Using a placeholder makes the intent clear: this is an input node that needs feeding. TensorFlow even provides a placeholder_with_default that does not need feeding, but again, the intent of such a node is clear. For all purposes, a placeholder_with_default does the same thing as a constant: you can indeed feed the constant to change its value, but is the intent clear? Would that not be confusing? I doubt it.
There are other ways to input data than feeding and AFAICS all have their uses.
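To make the placeholder_with_default point concrete, here is a minimal sketch (TF 1.x style, hypothetical values):
import tensorflow as tf

# Behaves like a constant unless explicitly fed.
x = tf.placeholder_with_default(tf.constant([1.0]), shape=[1])

with tf.Session() as sess:
    print(sess.run(x))                        # [1.] -- the default value
    print(sess.run(x, feed_dict={x: [5.0]}))  # [5.] -- overridden by feeding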
A placeholder is a promise to provide a value later.
A simple example is to define two placeholders a and b and then an operation on them, like below.
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
a and b are not initialized and contain no data, because they were defined as placeholders.
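Continuing that snippet, the values only arrive when the graph is run with a feed_dict (a minimal sketch):
with tf.Session() as sess:
    # a and b receive concrete values only at run time, via the feed_dict.
    print(sess.run(adder_node, feed_dict={a: 3.0, b: 4.5}))  # 7.5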
Another approach is to define variables with tf.Variable; in this case you have to provide an initial value when you declare them, and then run an initializer such as:
tf.global_variables_initializer()
or (in older versions)
tf.initialize_all_variables()
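As a quick illustration of the variable route (a sketch, TF 1.x style with made-up values):
import tensorflow as tf

W = tf.Variable([0.3], dtype=tf.float32)  # initial value supplied at declaration
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)       # the extra initialization step mentioned below
    print(sess.run(W))   # [0.3]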
This solution has two drawbacks:
- Performance-wise, you need to do one extra step of calling the initializer (these variables are, however, updatable afterwards).
- In some cases you do not know the initial values for these variables, so you have to define them as placeholders.
Conclusion:
- Use tf.Variable for trainable variables such as weights (W) and biases (B) of your model, or when initial values are required in general.
- tf.placeholder allows you to create operations and build the computation graph without needing the data. In TensorFlow terminology, we then feed data into the graph through these placeholders.
I really like Ahmed's answer and I upvoted it, but I would like to provide an alternative explanation that might or might not make things a bit clearer.
One of the significant features of Tensorflow is that its operation graphs are compiled and then executed outside of the original environment used to build them. This allows Tensorflow to do all sorts of tricks and optimizations, like distributed, platform-independent calculations, graph interoperability, GPU computations, etc. But all of this comes at the price of complexity. Since your graph is being executed inside its own VM of some sort, you have to have a special way of feeding data into it from the outside, for example from your python program.
This is where placeholders come in. One way of feeding data into your model is to supply it via a feed dictionary when you execute a graph op, and to indicate where inside the graph this data is supposed to go, you use placeholders. This way, as Ahmed said, a placeholder is a sort of promise for data supplied in the future. It is literally a placeholder for things you will supply later. To use an example similar to Ahmed's:
import numpy as np
import tensorflow as tf

# define graph to do matrix multiplication
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
# this is the actual operation we want to do,
# but since we want to supply x and y at runtime
# we will use placeholders
model = tf.matmul(x, y)
# now let's supply the data and run the graph
init = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(init)
    # generate some data for our graph
    data_x = np.random.randint(0, 10, size=[5, 5])
    data_y = np.random.randint(0, 10, size=[5, 5])
    # do the work
    result = session.run(model, feed_dict={x: data_x, y: data_y})
There are other ways of supplying data into the graph, but arguably, placeholders with feed_dict are the most comprehensible way, and they provide the most flexibility.
If you want to avoid placeholders, other ways of supplying data are either loading the whole dataset into constants at graph build time, or moving the whole process of loading and pre-processing the data into the graph by using input pipelines. You can read up on all of this in the TF documentation.
https://www.tensorflow.org/programmers_guide/reading_data
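For completeness, a rough sketch of the input-pipeline alternative mentioned above (tf.data, TF 1.x graph mode; names are made up):
import numpy as np
import tensorflow as tf

data = np.random.randint(0, 10, size=[5, 5]).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices(data)
next_row = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    # No feed_dict needed: the pipeline produces rows inside the graph.
    print(sess.run(next_row))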

save binarizer together with sklearn model

I'm trying to build a service that has 2 components. In component 1, I train a machine learning model using sklearn by creating a Pipeline. This model gets serialized using joblib.dump (really numpy_pickle.dump). Component 2 runs in the cloud, loads the model trained by (1), and uses it to label text that it gets as input.
I'm running into an issue where, during training (component 1) I need to first binarize my data since it is text data, which means that the model is trained on binarized input and then makes predictions using the mapping created by the binarizer. I need to get this mapping back when (2) makes predictions based on the model so that I can output the actual text labels.
I tried adding the binarizer to the pipeline like this, thinking that the model would then have the mapping itself:
p = Pipeline([
    ('binarizer', MultiLabelBinarizer()),
    ('vect', CountVectorizer(min_df=min_df, ngram_range=ngram_range)),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(clf))
])
But I get the following error:
model = p.fit(training_features, training_tags)
*** TypeError: fit_transform() takes 2 positional arguments but 3 were given
My goal is to make sure the binarizer and model are tied together so that the consumer knows how to decode the model's output.
What are some existing paradigms for doing this? Should I be serializing the binarizer together with the model in some other object that I create? Is there some other way of passing the binarizer to Pipeline so that I don't have to do that, and would I be able to get the mappings back from the model if I did that?
Your intuition that you should add the MultiLabelBinarizer to the pipeline was the right way to solve this problem. It would have worked, except that MultiLabelBinarizer.fit_transform does not take the fit_transform(self, X, y=None) method signature which is now standard for sklearn estimators. Instead, it has a unique fit_transform(self, y) signature which I had never noticed before. As a result of this difference, when you call fit on the pipeline, it tries to pass training_tags as a third positional argument to a function with two positional arguments, which doesn't work.
The solution to this problem is tricky. The cleanest way I can think of to work around it is to create your own MultiLabelBinarizer that overrides fit_transform and ignores its third argument. Try something like the following.
class MyMLB(MultiLabelBinarizer):
    def fit_transform(self, X, y=None):
        # Call MultiLabelBinarizer's own fit_transform, dropping the extra y.
        return super(MyMLB, self).fit_transform(X)
Try adding this to your pipeline in place of the MultiLabelBinarizer and see what happens. If you're able to fit() the pipeline, the last problem that you'll have is that your new MyMLB class has to be importable on any system that will de-pickle your now trained, pickled pipeline object. The easiest way to do this is to put MyMLB into its own module and place a copy on the remote machine that will be de-pickling and executing the model. That should fix it.
I misunderstood how the MultiLabelBinarizer worked. It is a transformer of outputs, not of inputs. Not only does this explain the alternative fit_transform() method signature for that class, but it also makes it fundamentally incompatible with the idea of inclusion in a single classification pipeline which is limited to transforming inputs and making predictions of outputs. However, all is not lost!
Based on your question, you're already comfortable with serializing your model to disk as [some form of] a .pkl file. You should be able to also serialize a trained MultiLabelBinarizer, and then unpack it and use it to unpack the outputs from your pipeline. I know you're using joblib, but I'll write up this sample code as if you're using pickle. I believe the idea will still apply.
X = <training_data>
y = <training_labels>

# Perform multi-label classification on class labels.
mlb = MultiLabelBinarizer()
multilabel_y = mlb.fit_transform(y)

p = Pipeline([
    ('vect', CountVectorizer(min_df=min_df, ngram_range=ngram_range)),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(clf))
])

# Use multilabel classes to fit the pipeline.
p.fit(X, multilabel_y)

# Serialize both the pipeline and binarizer to disk.
with open('my_sklearn_objects.pkl', 'wb') as f:
    pickle.dump((mlb, p), f)
Then, after shipping the .pkl file to the remote box...
# Hydrate the serialized objects.
with open('my_sklearn_objects.pkl', 'rb') as f:
    mlb, p = pickle.load(f)

X = <input data>  # Get your input data from somewhere.

# Predict the classes using the pipeline.
mlb_predictions = p.predict(X)

# Turn those classes into labels using the binarizer.
classes = mlb.inverse_transform(mlb_predictions)

# Do something with predicted classes.
<...>
Is this the paradigm for doing this? As far as I know, yes. Not only that, but if you want to keep them together (which is a good idea, I think) you can serialize them as a tuple, as I did in the example above, so they stay in a single file. No need to serialize a custom object or anything like that.
Model serialization via pickle et al. is the sklearn-approved way to save estimators between runs and move them between computers. I've used this process successfully many times before, including in production systems.
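If you prefer to stay with joblib (as in the question), the same tuple trick applies; a short sketch:
from joblib import dump, load

# Save the binarizer and pipeline together in one file.
dump((mlb, p), 'my_sklearn_objects.joblib')

# ...and later, on the serving box:
mlb, p = load('my_sklearn_objects.joblib')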

How to use Tensorflow inference models to generate deepdream like images

I am using a custom image set to train a neural network with the Tensorflow API. After a successful training process I get checkpoint files containing the values of the different training variables. I now want to get an inference model from these checkpoint files; I found this script which does that, which I can then use to generate deepdream images as explained in this tutorial. The problem is when I load my model using:
import numpy as np
import tensorflow as tf

model_fn = 'export'

graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input')
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input - imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input': t_preprocessed})
I get this error:
graph_def.ParseFromString(f.read())
self.MergeFromString(serialized)
raise message_mod.DecodeError('Unexpected end-group tag.')
google.protobuf.message.DecodeError: Unexpected end-group tag.
The script expects a protocol buffer file, and I am not sure whether the script I am using to generate inference models gives me protocol buffer files or not.
Can someone please suggest what I am doing wrong, or whether there is a better way to achieve this? I simply want to convert checkpoint files generated by TensorFlow into a protocol buffer.
Thanks
The link to the script you ran is broken, but in any case the recommended thing is not to try to generate an inference model from a checkpoint, but rather to embed code at the end of your training program that will emit a "SavedModel" export (which is not the same thing as a checkpoint).
Please see [1], and in particular the heading "Building a Saved Model". Note that a Saved Model constitutes multiple files, one of which is indeed a protocol buffer (which directly answers your question I hope); the others are variable files and (optional) asset files.
[1] https://www.tensorflow.org/programmers_guide/saved_model
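As a rough illustration of that approach (a sketch only, assuming a TF 1.x training script where sess still holds the trained variables and the graph is already built), the export might look something like this:
import tensorflow as tf

# At the end of training, while `sess` still holds the trained variables:
builder = tf.saved_model.builder.SavedModelBuilder("export_dir")
builder.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.SERVING],
)
builder.save()  # writes saved_model.pb (a protocol buffer) plus a variables/ folder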
