TensorFlow - Invalid argument: Reshape:0 is both fed and fetched

Is there a way to both feed and fetch the same variable in TensorFlow? If not, why is this not allowed?
I'm getting this error:
StatusNotOK: Invalid argument: Reshape:0 is both fed and fetched.

You cannot have a Tensor that is both fed and fetched. The workaround is to add a tf.identity op and fetch that instead:
import tensorflow as tf  # TensorFlow 1.x

tf.reset_default_graph()
a = tf.placeholder(tf.int32)
a_copy = tf.identity(a)  # a distinct op whose output can be fetched
sess = tf.InteractiveSession()
sess.run(a_copy, feed_dict={a: 1})
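For contrast, a minimal sketch (assuming TensorFlow 1.x, reusing the session above) of the direct fetch that triggers the error:
# Feeding and fetching the same tensor is rejected:
# sess.run(a, feed_dict={a: 1})
# -> InvalidArgumentError: ... is both fed and fetched.
sess.run(a_copy, feed_dict={a: 1})  # fetching the identity copy succeeds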

I just realized that my error was occurring because I was running a deprecated version of TensorFlow. I'm still interested to hear how a tensor could appear in both the feed and the fetch, though!

Related

How to know the shape of a sparse tensor in TensorFlow 2.8

I am trying to understand the code given here by Google. It has the line below in the function def build_model(ratings, embedding_dim=3, init_stddev=1.):
U = tf.Variable(tf.random_normal(
    [A_train.dense_shape[0], embedding_dim], stddev=init_stddev))
It's assigning random values to the user vector U. What is not clear to me is where A_train.dense_shape[0] gets its value from. All the online documentation states that we can't get a value out of a tensor without using session.run; since I am using TensorFlow 2.8, I hoped we would get values without session.run. The problem is that when I try to print it, inside or outside the function, I don't get a satisfactory result, even with TensorFlow 2.x.
Below are all the prints that I have tried:
tf.print(A_train.dense_shape[0])
print(A_train.dense_shape[0])
Any suggestions on what I am doing wrong here? My TensorFlow version is 2.8.2.
When we write tf.print(A_train.dense_shape[0]), the calculation is still a node in a graph (the Colab code uses the TF1-style graph API rather than eager execution), so the graph must be executed before anything is printed. We can do that with the code below:
trr, ter = split_dataframe(ratings)          ## this function is defined in the colab notebook given by Google
A_trr = build_rating_sparse_tensor(trr)      ## this function is defined in the colab notebook given by Google
A_trr_shape = tf.print(A_trr.dense_shape[1]) ## build the print op
with tf.compat.v1.Session() as sess:         ## tf.Session no longer exists in TF 2.x; use the compat alias
    sess.run(A_trr_shape)                    ## execute the graph, which prints the shape
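For comparison, when eager execution is actually active (the TensorFlow 2.x default), no session is needed at all. A minimal sketch with a hypothetical sparse tensor standing in for A_train:
import tensorflow as tf

# hypothetical 3x4 sparse tensor standing in for A_train
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[10.0, 20.0],
                            dense_shape=[3, 4])

print(st.dense_shape[0])          # tf.Tensor(3, shape=(), dtype=int64)
print(st.dense_shape[0].numpy())  # 3, as a plain value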

Invalid order in Rapidsai cuml ARIMA

I'm trying to find the right parameters for ARIMA but am not able to use parameters higher than 4. Here is the code:
from cuml.tsa.arima import ARIMA
p = 5
q = 0
P = 1
Q = 0
model = ARIMA(train, order=(p, 0, q), seasonal_order=(P, 0, Q, 24), simple_differencing=False)
model.fit()
forecast_df = model.forecast(10)
forecast_df
Error message:
ValueError: ERROR: Invalid order. Required: p,q,P,Q <= 4
Is there any way to use parameters higher than 4? I have used higher parameters with the statsmodels library, but as my data is large I need the GPU support provided by this library.
I am the main contributor to this model.
Unfortunately, it is currently impossible to use values greater than 4 for these parameters due to implementation reasons.
I see that you have opened a GitHub issue, thanks for that. We will consider adding support for higher parameter values and keep you updated on the GitHub issue.
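In the meantime, the orders have to stay within the documented limit. A sketch of the same call with every order clamped to at most 4 (train is assumed to be the same series as in the question):
from cuml.tsa.arima import ARIMA

p, q = 4, 0   # p, q, P, Q must each be <= 4 in the current cuML implementation
P, Q = 1, 0

model = ARIMA(train, order=(p, 0, q), seasonal_order=(P, 0, Q, 24),
              simple_differencing=False)
model.fit()
forecast_df = model.forecast(10)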

How to fetch values using Permutation Feature Importance

I have a dataset with 5K records (and 60 features) for binary classification.
Please note that this solution doesn't work here.
I am trying to generate feature importances using Permutation Feature Importance. However, I get the error below. Can you please look at my code and let me know whether I am making a mistake?
import eli5
from eli5.sklearn import PermutationImportance
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression()
model = logreg.fit(X_train_std, y_train)
perm = PermutationImportance(model, random_state=1)
eli5.show_weights(perm, feature_names=X.columns.tolist())
I get the error shown below:
AttributeError: 'PermutationImportance' object has no attribute 'feature_importances_'
Can you help me resolve this error?
If you look at the attributes of your PermutationImportance object via
dir(perm)
you can see all attributes and methods, but only AFTER you fit your PI object: feature_importances_ is only set by fit. That means you need to do:
perm = PermutationImportance(model, random_state=1).fit(X_train_std, y_train)
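Put together, a minimal end-to-end sketch (X_train_std, y_train, and X assumed as in the question):
import eli5
from eli5.sklearn import PermutationImportance
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression()
model = logreg.fit(X_train_std, y_train)

# fit() is what computes feature_importances_; show_weights needs it
perm = PermutationImportance(model, random_state=1).fit(X_train_std, y_train)
eli5.show_weights(perm, feature_names=X.columns.tolist())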

Can I get a solution using "timeout" when using Optimize.minimize()?

I'm trying to minimize a variable, but z3 takes too long to give me a solution.
I would like to know whether it's possible to get a solution when the timeout is triggered.
If yes, how can I do that?
Thanks in advance!
If by "solution" you mean the latest approximation of the optimal value, then you may be able to retrieve it, provided that the optimization algorithm being used finds any intermediate solution along the way. (Some optimization algorithms --like, e.g., maxres-- don't find any intermediate solution).
Example:
import z3
o = z3.Optimize()
o.add(...very hard problem...)
cf = z3.Int('cf')
o.add(cf == ...)    # note ==, not =, to build a z3 equality constraint
obj = o.minimize(cf)
o.set(timeout=...)  # the timeout is given in milliseconds
res = o.check()
print(res)
print(obj.upper())
Even when res = unknown because of a timeout, the objective instance contains the latest approximation of the optimum value found by z3 before the timeout.
Unfortunately, I am not sure whether it is also possible to retrieve the corresponding sub-optimal model with o.model() (or any other method).
For OptiMathSAT, I show how to retrieve the latest approximation of the optimum value and the corresponding model in the unit-test timeout.py.
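For z3, a concrete runnable variant of the sketch above (the linear constraints here are a hypothetical stand-in for the "very hard problem"):
import z3

o = z3.Optimize()
x, y, cf = z3.Ints('x y cf')

# hypothetical stand-in constraints
o.add(x >= 0, y >= 0, x + 2*y >= 14, 3*x + y >= 18)
o.add(cf == 2*x + 3*y)

obj = o.minimize(cf)
o.set(timeout=1000)   # milliseconds

res = o.check()
print(res)            # sat, or unknown if the timeout fired first
print(obj.upper())    # best value for cf found so far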

Torch: luajit out of memory on simple task

I am trying to load the MNIST dataset in the th REPL and do mean subtraction with the following:
file = torch.load('data/mnist.t7/train_32x32.t7', 'ascii')
data = file.data:type(torch.getdefaulttensortype())
mean = data:mean()
data:add(-mean)
The last line causes the following error:
.../torch/install/bin/luajit: not enough memory
I am running this on a laptop with 16GB of RAM. Also, MNIST has already been loaded into data, so I am not sure why doing data:add(-mean) would cause this issue. Any ideas?
Thanks
The problem was that the REPL was trying to print the whole result matrix (which is large) to the console.
This can be overcome by doing either
data = data:add(-mean)
or
data:add(-mean); -- notice the semicolon, which suppresses printing the result
Answer provided by Soumith Chintala on the torch gitter.
