ColumnTransformer with remainder='passthrough' gives me an error

I have a small dataset, like this:
import numpy as np
import pandas as pd

_d = pd.DataFrame([
    [1, 2.0, 'a', 'mango', '2017-07-07', 1],
    [2, 2.55, 'b', 'apple', '2017-08-07', 0],
    [3, 5.7, np.nan, 'bannan', np.nan, 1],
    [4, np.nan, 'd', 'grpaes', '2017-09-07', 1],
    [5, 5.7, 'e', 'pineapple', '2017-10-07', 0],
    [6, 8.3, np.nan, 'orange', '2017-01-07', 0],
    [5, 5.7, 'e', np.nan, '2017-10-07', 1],
    [6, np.nan, 'f', np.nan, np.nan, 0],
    [7, 6.8, 'g', 'pomegranate', '2017-02-07', 1],
    [np.nan, 55.5, 'h', 'water melon', '2017-03-07', 0],
    [9, 6.8, 'i', 'mango', np.nan, 1],
    [10, 3.5, np.nan, 'orange', '2017-06-07', 1],
    [11, 2.78, 'k', 'pomegranate', '2017-09-07', 0]
], columns=['ind', 'score', 'grade', 'group', 'da', 'target'])
To handle the NaN values and encode the categorical features, I used this code:
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

y = _d['target']
x = _d.drop(['target'], axis=1)

int_columns = _d.select_dtypes(['float64', 'int64']).columns
obj_columns = _d.select_dtypes(['object', 'category']).columns

int_pipeline = Pipeline([
    ('impute_values', SimpleImputer(missing_values=np.nan, strategy='mean')),
    ('scaling', StandardScaler())
])
cat_pipeline = Pipeline([
    ('cat_impute', SimpleImputer(strategy='constant', fill_value='missing')),
    ('encoding', OneHotEncoder(drop='first'))
])

column_trans = ColumnTransformer(transformers=[
    ('int_p', int_pipeline, ['ind', 'score']),
    ('cat_p', cat_pipeline, ['grade', 'group'])
], remainder='passthrough')

mdl_pipeline = Pipeline([
    ('value_transform', column_trans)
])

transformed_data = mdl_pipeline.fit_transform(x, y)
When I run this code, I get the following error
ValueError: could not convert string to float: '2017-07-07'
The above exception was the direct cause of the following exception:
ValueError                                Traceback (most recent call last)
Input In [253], in <cell line: 27>()
18 column_trans=ColumnTransformer(transformers=[
19 ('int_p',int_pipeline,['ind', 'score']),
20 ('cat_p',cat_pipeline,['grade', 'group'])
21 ],remainder='passthrough')
23 mdl_pipeline=Pipeline([
24 ('value_transform',column_trans)
25 # ,('mdl',LogisticRegression())
26 ])
---> 27 transformed_data=mdl_pipeline.fit_transform(x,y)
File ~\Anaconda3\lib\site-packages\sklearn\pipeline.py:434, in Pipeline.fit_transform(self, X, y, **fit_params)
432 fit_params_last_step = fit_params_steps[self.steps[-1][0]]
433 if hasattr(last_step, "fit_transform"):
--> 434 return last_step.fit_transform(Xt, y, **fit_params_last_step)
435 else:
436 return last_step.fit(Xt, y, **fit_params_last_step).transform(Xt)
File ~\Anaconda3\lib\site-packages\sklearn\compose\_column_transformer.py:699, in ColumnTransformer.fit_transform(self, X, y)
696 self._validate_output(Xs)
697 self._record_output_indices(Xs)
--> 699 return self._hstack(list(Xs))
File ~\Anaconda3\lib\site-packages\sklearn\compose\_column_transformer.py:783, in ColumnTransformer._hstack(self, Xs)
778 converted_Xs = [
779 check_array(X, accept_sparse=True, force_all_finite=False)
780 for X in Xs
781 ]
782 except ValueError as e:
--> 783 raise ValueError(
784 "For a sparse output, all columns should "
785 "be a numeric or convertible to a numeric."
786 ) from e
788 return sparse.hstack(converted_Xs).tocsr()
789 else:
ValueError: For a sparse output, all columns should be a numeric or convertible to a numeric.
The value error
ValueError: could not convert string to float: '2017-07-07'
doesn't make sense to me, as I have set remainder='passthrough' in the ColumnTransformer. Why is my code not working?

Set sparse_threshold=0 in your ColumnTransformer. Otherwise, according to the docs:
If the output of the different transformers contains sparse matrices, these will be stacked as a sparse matrix if the overall density is lower than this value. Use sparse_threshold=0 to always return dense. When the transformed output consists of all dense data, the stacked result will be dense, and this keyword will be ignored.
So the ColumnTransformer is trying to stack the results, including the passed-through date strings, into a sparse matrix, but it can't, since sparse matrices require numeric values; hence the attempt to convert the strings to something numeric.
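A minimal sketch of that fix, reusing the pipelines from the question (only the sparse_threshold argument is new):
column_trans = ColumnTransformer(transformers=[
    ('int_p', int_pipeline, ['ind', 'score']),
    ('cat_p', cat_pipeline, ['grade', 'group'])
], remainder='passthrough', sparse_threshold=0)  # always return a dense array

transformed_data = Pipeline([('value_transform', column_trans)]).fit_transform(x, y)
Alternatively, making the encoder itself dense with OneHotEncoder(drop='first', sparse=False) should also keep the output out of the sparse stacking path.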

Related

Why does max_samples not accept a float type?

I am doing machine learning. I want to find the best triple (max_samples, n_trees, threshold) that gives the greatest performance in terms of area under the ROC curve and area under the precision-recall curve.
Here is the code:
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def meilleur_triplet(x, classes):
    for n_trees in np.arange(100, 160, 10):
        for sample_size in np.arange(0.1, 1, 0.1):
            for threshold in np.arange(0.4, 1, 0.1):
                model = IforestLocal(sample_size, n_trees)  # custom isolation forest
                model.fit(x)
                y_pred, y_score = model.predict(x, threshold)
                auc = roc_auc_score(classes, y_pred)
                auc_pr = average_precision_score(classes, y_pred)
Now when I use max_samples with a range of ints I don't get an error; however, if it's a float I get the following error:
TypeError Traceback (most recent call last)
Input In [201], in <cell line: 1>()
----> 1 meilleur_triplet(X_glass,y_glass)
Input In [200], in meilleur_triplet(x, classes)
6 for threshold in np.arange(0.4,1,0.1):#(0.4,1,0.1)
8 model=IforestLocal(sample_size,n_trees)
----> 9 model.fit(x)
File ~\Desktop\THESE\Maurras\Code_Maurras\iforest_D.py:45, in IsolationForest.fit(self, X)
42 self.sample_size = len_x
44 for i in range(self.n_trees):
---> 45 sample_idx = random.sample(list(range(len_x)), self.sample_size)
46 # TODO: Must be deleted before compute the memory consumption of the methods
47 self.samples.append(sample_idx)
File ~\anaconda3\lib\random.py:450, in Random.sample(self, population, k, counts)
448 if not 0 <= k <= n:
449 raise ValueError("Sample larger than population or is negative")
--> 450 result = [None] * k
451 setsize = 21 # size of a small set minus size of an empty list
452 if k > 5:
TypeError: can't multiply sequence by non-int of type 'numpy.float64'
This is where I called the function:
meilleur_triplet(X_glass, y_glass)
Thank you, please help me.
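The traceback points at random.sample being called with k = self.sample_size, and k must be an integer; a numpy float produced by np.arange(0.1, 1, 0.1) fails at result = [None] * k. A minimal sketch of the idea, assuming a float sample_size is meant as a fraction of the data (this is an assumption about IforestLocal, not code from the question):
import random

len_x = 100            # illustrative dataset size
sample_size = 0.5      # float, as produced by np.arange(0.1, 1, 0.1)

# random.sample(list(range(len_x)), sample_size)  # TypeError: k must be an int

k = int(sample_size * len_x) if isinstance(sample_size, float) else sample_size
sample_idx = random.sample(list(range(len_x)), k)  # works: k is an integer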

Decision Tree - Exporting image via Graphviz error

I'm trying to build a Decision Tree using gridsearch and a pipeline, but I get an error when I try to export the image using graphviz. I looked online and couldn't find anything; one potential problem would've been if I didn't use the best_estimator_ instance, but I did in this case.
Everything works (getting accuracy and other metrics) except the exporting graph part.
from sklearn import tree
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import export_graphviz

def TreeOpt(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

    std_scl = StandardScaler()
    dec_tree = tree.DecisionTreeClassifier()
    pipe = Pipeline(steps=[('std_slc', std_scl),
                           ('dec_tree', dec_tree)])

    criterion = ['gini', 'entropy']
    max_depth = list(range(1, 15))
    parameters = dict(dec_tree__criterion=criterion,
                      dec_tree__max_depth=max_depth)

    tree_gs = GridSearchCV(pipe, parameters)
    tree_gs.fit(X_train, y_train)

    export_graphviz(
        tree_gs.best_estimator_,
        out_file="dec_tree.dot",
        feature_names=None,
        class_names=None,
        filled=True)
But I get
<ipython-input-2-bb91ec6ba0d9> in <module>
37 filled=True)
38
---> 39 DecTreeOptimizer(X = df.drop(['quality'], axis=1), y = df.quality)
40
<ipython-input-2-bb91ec6ba0d9> in DecTreeOptimizer(X, y)
30 print("Best score: " + str(tree_GS.best_score_))
31
---> 32 export_graphviz(
33 tree_GS.best_estimator_,
34 out_file=("dec_tree.dot"),
~\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
61 extra_args = len(args) - len(all_args)
62 if extra_args <= 0:
---> 63 return f(*args, **kwargs)
64
65 # extra_args > 0
~\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\tree\_export.py in export_graphviz(decision_tree, out_file, max_depth, feature_names, class_names, label, filled, leaves_parallel, impurity, node_ids, proportion, rotate, rounded, special_characters, precision)
767 """
768
--> 769 check_is_fitted(decision_tree)
770 own_file = False
771 return_string = False
~\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
61 extra_args = len(args) - len(all_args)
62 if extra_args <= 0:
---> 63 return f(*args, **kwargs)
64
65 # extra_args > 0
~\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py in check_is_fitted(estimator, attributes, msg, all_or_any)
1096
1097 if not attrs:
-> 1098 raise NotFittedError(msg % {'name': type(estimator).__name__})
1099
1100
NotFittedError: This Pipeline instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.
After a long search, I finally found the answer here: Plot best decision tree with pipeline and GridSearchCV
The best_estimator_ attribute returns the whole pipeline rather than the fitted tree, so I just had to index into it like this: best_estimator_[1] (and then I found a whole lot of other problems with my code, but that's part 2).
I will leave this here in case anyone else is going to need it. Cheers!
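A minimal sketch of that indexing (it assumes the fitted tree_gs from the question; the step can be addressed by position or by its name in the pipeline):
from sklearn.tree import export_graphviz

fitted_tree = tree_gs.best_estimator_['dec_tree']   # equivalent to best_estimator_[1]
export_graphviz(
    fitted_tree,
    out_file="dec_tree.dot",
    filled=True)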

ValueError: 'mean_squared_error' is not a valid scoring value

So, I have been working on my first ML project, and as part of that I have been trying out various models from scikit-learn. I wrote this piece of code for a random forest model:
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

# Random Forest
reg = RandomForestRegressor(random_state=0, criterion='mse')

# Apply grid search for best parameters
params = {'randomforestregressor__n_estimators': range(100, 500, 200),
          'randomforestregressor__min_samples_split': range(2, 10, 3)}
pipe = make_pipeline(reg)
grid = GridSearchCV(pipe, param_grid=params, scoring='mean_squared_error', n_jobs=-1, iid=False, cv=5)
reg = grid.fit(X_train, y_train)
print('Best MSE: ', grid.best_score_)
print('Best Parameters: ', grid.best_estimator_)

y_train_pred = reg.predict(X_train)
y_test_pred = reg.predict(X_test)
tr_err = mean_squared_error(y_train_pred, y_train)
ts_err = mean_squared_error(y_test_pred, y_test)
print(tr_err, ts_err)
results_train['random_forest'] = tr_err
results_test['random_forest'] = ts_err
But, when I run this code, I get the following error:
KeyError Traceback (most recent call last)
~\anaconda3\lib\site-packages\sklearn\metrics\_scorer.py in get_scorer(scoring)
359 else:
--> 360 scorer = SCORERS[scoring]
361 except KeyError:
KeyError: 'mean_squared_error'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-149-394cd9e0c273> in <module>
5 pipe = make_pipeline(reg)
6 grid = GridSearchCV(pipe, param_grid = params, scoring='mean_squared_error', n_jobs=-1, iid=False, cv=5)
----> 7 reg = grid.fit(X_train, y_train)
8 print('Best MSE: ', grid.best_score_)
9 print('Best Parameters: ', grid.best_estimator_)
~\anaconda3\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
71 FutureWarning)
72 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 73 return f(**kwargs)
74 return inner_f
75
~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params)
652 cv = check_cv(self.cv, y, classifier=is_classifier(estimator))
653
--> 654 scorers, self.multimetric_ = _check_multimetric_scoring(
655 self.estimator, scoring=self.scoring)
656
~\anaconda3\lib\site-packages\sklearn\metrics\_scorer.py in _check_multimetric_scoring(estimator, scoring)
473 if callable(scoring) or scoring is None or isinstance(scoring,
474 str):
--> 475 scorers = {"score": check_scoring(estimator, scoring=scoring)}
476 return scorers, False
477 else:
~\anaconda3\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
71 FutureWarning)
72 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 73 return f(**kwargs)
74 return inner_f
75
~\anaconda3\lib\site-packages\sklearn\metrics\_scorer.py in check_scoring(estimator, scoring, allow_none)
403 "'fit' method, %r was passed" % estimator)
404 if isinstance(scoring, str):
--> 405 return get_scorer(scoring)
406 elif callable(scoring):
407 # Heuristic to ensure user has not passed a metric
~\anaconda3\lib\site-packages\sklearn\metrics\_scorer.py in get_scorer(scoring)
360 scorer = SCORERS[scoring]
361 except KeyError:
--> 362 raise ValueError('%r is not a valid scoring value. '
363 'Use sorted(sklearn.metrics.SCORERS.keys()) '
364 'to get valid options.' % scoring)
ValueError: 'mean_squared_error' is not a valid scoring value. Use sorted(sklearn.metrics.SCORERS.keys()) to get valid options.
So, I tried running it after removing scoring='mean_squared_error' from GridSearchCV(pipe, param_grid=params, scoring='mean_squared_error', n_jobs=-1, iid=False, cv=5). When I do that, the code runs perfectly and gives a decent enough training and testing error.
Regardless, I can't figure out why passing the scoring='mean_squared_error' parameter to GridSearchCV throws that error. What am I doing wrong?
According to the documentation:
All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as neg_mean_squared_error which return the negated value of the metric.
This means that you have to pass scoring='neg_mean_squared_error' in order to evaluate the grid search results with Mean Squared Error.
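A minimal sketch of the corrected call (same pipe and params as above; best_score_ is then the negated MSE, so flip the sign to report MSE; dropping iid is an assumption for newer scikit-learn versions, where that argument was removed):
grid = GridSearchCV(pipe, param_grid=params,
                    scoring='neg_mean_squared_error', n_jobs=-1, cv=5)
grid.fit(X_train, y_train)
print('Best MSE: ', -grid.best_score_)  # negate to get the actual MSE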

xarray: rolling mean of dask array conflicting sizes for data and coordinate in rolling operation

I am trying to apply a rolling mean to a dask array within xarray. My issue may lie in the rechunking before the rolling mean. I am getting a ValueError about conflicting sizes between the data and a coordinate. However, it arises within the rolling operation; I don't think there are any conflicts between the data and coords of the array before going into the rolling operation.
Apologies for not creating data to test but my project data is quick to play with:
import xarray as xr
remote_data = xr.open_dataarray('http://iridl.ldeo.columbia.edu/SOURCES/.Models'\
'/.SubX/.RSMAS/.CCSM4/.hindcast/.zg/dods',
chunks={'L': 1, 'S': 1})
da = remote_data.isel(P=0,L=0,M=0,X=0,Y=0)
da_day_clim = da.groupby('S.dayofyear').mean('S')
print(da_day_clim)
#<xarray.DataArray 'zg' (dayofyear: 366)>
#dask.array<shape=(366,), dtype=float32, chunksize=(1,)>
#Coordinates:
# L timedelta64[ns] 12:00:00
# Y float32 -90.0
# M float32 1.0
# X float32 0.0
# P int32 500
# * dayofyear (dayofyear) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ...
# Do a 31-day rolling mean
# da_day_clim.rolling(dayofyear=31, center=True).mean()
# This brings up:
#ValueError: The overlapping depth 30 is larger than your
#smallest chunk size 1. Rechunk your array
#with a larger chunk size or a chunk size that
#more evenly divides the shape of your array.
# Read http://xarray.pydata.org/en/stable/dask.html
# and found http://xarray.pydata.org/en/stable/generated/xarray.Dataset.chunk.html#xarray.Dataset.chunk
# I could make a little PR to add the .chunk() into the ValeError message. Thoughts?
# Rechunk. Played around with a few values but decided on
# the len of dayofyear
da_day_clim2 = da_day_clim.chunk({'dayofyear': 366})
print(da_day_clim2)
#<xarray.DataArray 'zg' (dayofyear: 366)>
#dask.array<shape=(366,), dtype=float32, chunksize=(366,)>
#Coordinates:
# L timedelta64[ns] 12:00:00
# Y float32 -90.0
# M float32 1.0
# X float32 0.0
# P int32 500
# * dayofyear (dayofyear) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ...
# Rolling mean on this
da_day_clim_smooth = da_day_clim2.rolling(dayofyear=31, center=True).mean()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-57-6acf382cdd3d> in <module>()
4 da_day_clim = da.groupby('S.dayofyear').mean('S')
5 da_day_clim2 = da_day_clim.chunk({'dayofyear': 366})
----> 6 da_day_clim_smooth = da_day_clim2.rolling(dayofyear=31, center=True).mean()
~/anaconda/envs/SubXNAO/lib/python3.6/site-packages/xarray/core/rolling.py in wrapped_func(self, **kwargs)
307 if self.center:
308 values = values[valid]
--> 309 result = DataArray(values, self.obj.coords)
310
311 return result
~/anaconda/envs/SubXNAO/lib/python3.6/site-packages/xarray/core/dataarray.py in __init__(self, data, coords, dims, name, attrs, encoding, fastpath)
224
225 data = as_compatible_data(data)
--> 226 coords, dims = _infer_coords_and_dims(data.shape, coords, dims)
227 variable = Variable(dims, data, attrs, encoding, fastpath=True)
228
~/anaconda/envs/SubXNAO/lib/python3.6/site-packages/xarray/core/dataarray.py in _infer_coords_and_dims(shape, coords, dims)
79 raise ValueError('conflicting sizes for dimension %r: '
80 'length %s on the data but length %s on '
---> 81 'coordinate %r' % (d, sizes[d], s, k))
82
83 if k in sizes and v.shape != (sizes[k],):
ValueError: conflicting sizes for dimension 'dayofyear': length 351 on the data but length 366 on coordinate 'dayofyear'
The length 351 is related to the window: 366 - 351 = 15, which is half of the 31-day window.
This turned out to be a bug in Xarray and was fixed in https://github.com/pydata/xarray/pull/2122
The fix will be in Xarray 0.10.4 which is slated for imminent release.
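As a minimal check (a synthetic sketch, not the SubX data from the question, and it assumes an xarray version with the fix, i.e. 0.10.4 or later, plus dask installed), the same single-chunk rolling mean keeps the full 366-element dayofyear coordinate:
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(366),
                  dims='dayofyear',
                  coords={'dayofyear': np.arange(1, 367)}).chunk({'dayofyear': 366})
smooth = da.rolling(dayofyear=31, center=True).mean()
print(smooth.sizes)  # should report dayofyear: 366, not 351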

Tensorflow. Switching from BasicRNNCell to LSTMCell

I have built an RNN with BasicRNNCell; now I want to use an LSTMCell, but the switch does not seem trivial. What should I change?
First I define all the placeholders and variables:
X_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length, embedding_size])
Y_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])
init_state = tf.placeholder(tf.float32, [batch_size, state_size])
W = tf.Variable(np.random.rand(state_size, num_classes),dtype=tf.float32)
b = tf.Variable(np.zeros((batch_size, num_classes)), dtype=tf.float32)
W2 = tf.Variable(np.random.rand(state_size, num_classes),dtype=tf.float32)
b2 = tf.Variable(np.zeros((batch_size, num_classes)), dtype=tf.float32)
Then I unstack the labels:
labels_series = tf.transpose(batchY_placeholder)
labels_series = tf.unstack(batchY_placeholder, axis=1)
inputs_series = X_placeholder
Then I define my RNN:
cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple = False)
states_series, current_state = tf.nn.dynamic_rnn(cell, inputs_series, initial_state = init_state)
The error that I get is:
InvalidArgumentError Traceback (most recent call last)
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, debug_python_shape_fn, require_shape_fn)
669 node_def_str, input_shapes, input_tensors, input_tensors_as_shapes,
--> 670 status)
671 except errors.InvalidArgumentError as err:
/home/deepnlp2017/anaconda3/lib/python3.5/contextlib.py in __exit__(self, type, value, traceback)
65 try:
---> 66 next(self.gen)
67 except StopIteration:
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
468 compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 469 pywrap_tensorflow.TF_GetCode(status))
470 finally:
InvalidArgumentError: Dimensions must be equal, but are 50 and 100 for 'rnn/while/basic_lstm_cell/mul' (op: 'Mul') with input shapes: [32,50], [32,100].
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-19-2ac617f4dde4> in <module>()
4 #cell = tf.contrib.rnn.BasicRNNCell(state_size)
5 cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple = False)
----> 6 states_series, current_state = tf.nn.dynamic_rnn(cell, inputs_series, initial_state = init_state)
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in dynamic_rnn(cell, inputs, sequence_length, initial_state, dtype, parallel_iterations, swap_memory, time_major, scope)
543 swap_memory=swap_memory,
544 sequence_length=sequence_length,
--> 545 dtype=dtype)
546
547 # Outputs of _dynamic_rnn_loop are always shaped [time, batch, depth].
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in _dynamic_rnn_loop(cell, inputs, initial_state, parallel_iterations, swap_memory, sequence_length, dtype)
710 loop_vars=(time, output_ta, state),
711 parallel_iterations=parallel_iterations,
--> 712 swap_memory=swap_memory)
713
714 # Unpack final output if not using output tuples.
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name)
2624 context = WhileContext(parallel_iterations, back_prop, swap_memory, name)
2625 ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, context)
-> 2626 result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
2627 return result
2628
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py in BuildLoop(self, pred, body, loop_vars, shape_invariants)
2457 self.Enter()
2458 original_body_result, exit_vars = self._BuildLoop(
-> 2459 pred, body, original_loop_vars, loop_vars, shape_invariants)
2460 finally:
2461 self.Exit()
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py in _BuildLoop(self, pred, body, original_loop_vars, loop_vars, shape_invariants)
2407 structure=original_loop_vars,
2408 flat_sequence=vars_for_body_with_tensor_arrays)
-> 2409 body_result = body(*packed_vars_for_body)
2410 if not nest.is_sequence(body_result):
2411 body_result = [body_result]
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in _time_step(time, output_ta_t, state)
695 skip_conditionals=True)
696 else:
--> 697 (output, new_state) = call_cell()
698
699 # Pack state if using state tuples
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in <lambda>()
681
682 input_t = nest.pack_sequence_as(structure=inputs, flat_sequence=input_t)
--> 683 call_cell = lambda: cell(input_t, state)
684
685 if sequence_length is not None:
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py in __call__(self, inputs, state, scope)
182 i, j, f, o = array_ops.split(value=concat, num_or_size_splits=4, axis=1)
183
--> 184 new_c = (c * sigmoid(f + self._forget_bias) + sigmoid(i) *
185 self._activation(j))
186 new_h = self._activation(new_c) * sigmoid(o)
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py in binary_op_wrapper(x, y)
882 if not isinstance(y, sparse_tensor.SparseTensor):
883 y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
--> 884 return func(x, y, name=name)
885
886 def binary_op_wrapper_sparse(sp_x, y):
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py in _mul_dispatch(x, y, name)
1103 is_tensor_y = isinstance(y, ops.Tensor)
1104 if is_tensor_y:
-> 1105 return gen_math_ops._mul(x, y, name=name)
1106 else:
1107 assert isinstance(y, sparse_tensor.SparseTensor) # Case: Dense * Sparse.
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_math_ops.py in _mul(x, y, name)
1623 A `Tensor`. Has the same type as `x`.
1624 """
-> 1625 result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name)
1626 return result
1627
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py in apply_op(self, op_type_name, name, **keywords)
761 op = g.create_op(op_type_name, inputs, output_types, name=scope,
762 input_types=input_types, attrs=attr_protos,
--> 763 op_def=op_def)
764 if output_structure:
765 outputs = op.outputs
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
2395 original_op=self._default_original_op, op_def=op_def)
2396 if compute_shapes:
-> 2397 set_shapes_for_outputs(ret)
2398 self._add_op(ret)
2399 self._record_op_seen_by_control_dependencies(ret)
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in set_shapes_for_outputs(op)
1755 shape_func = _call_cpp_shape_fn_and_require_op
1756
-> 1757 shapes = shape_func(op)
1758 if shapes is None:
1759 raise RuntimeError(
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in call_with_requiring(op)
1705
1706 def call_with_requiring(op):
-> 1707 return call_cpp_shape_fn(op, require_shape_fn=True)
1708
1709 _call_cpp_shape_fn_and_require_op = call_with_requiring
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py in call_cpp_shape_fn(op, input_tensors_needed, input_tensors_as_shapes_needed, debug_python_shape_fn, require_shape_fn)
608 res = _call_cpp_shape_fn_impl(op, input_tensors_needed,
609 input_tensors_as_shapes_needed,
--> 610 debug_python_shape_fn, require_shape_fn)
611 if not isinstance(res, dict):
612 # Handles the case where _call_cpp_shape_fn_impl calls unknown_shape(op).
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, debug_python_shape_fn, require_shape_fn)
673 missing_shape_fn = True
674 else:
--> 675 raise ValueError(err.message)
676
677 if missing_shape_fn:
ValueError: Dimensions must be equal, but are 50 and 100 for 'rnn/while/basic_lstm_cell/mul' (op: 'Mul') with input shapes: [32,50], [32,100].
You should consider giving the error trace. Otherwise it is hard (or impossible) to help.
I reproduced the situation and found that the issue was coming from state unpacking, i.e. line c, h = state.
Try to set state_is_tuple to false i.e.
cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=False)
I'm not sure why this is happening. Are you loading a previous model? What is your tensorflow version?
More information on TensorFlow RNN Cells:
I would suggest you to take a look at: WildML post, section "RNN CELLS, WRAPPERS AND MULTI-LAYER RNNS".
It states that:
BasicRNNCell – A vanilla RNN cell.
GRUCell – A Gated Recurrent Unit cell.
BasicLSTMCell – An LSTM cell based on Recurrent Neural Network Regularization. No peephole connection or cell clipping.
LSTMCell – A more complex LSTM cell that allows for optional peephole connections and cell clipping.
MultiRNNCell – A wrapper to combine multiple cells into a multi-layer cell.
DropoutWrapper – A wrapper to add dropout to input and/or output connections of a cell.
Given this, I would suggest you switch from BasicRNNCell to BasicLSTMCell. "Basic" here means "use it unless you know what you are doing". If you want to try LSTMs without going into details, that's the way to go. It may be straightforward: just replace it and voilà!
If not, share some of your code + error.
Hope it helps.
The problem seems to be with the init_state variable.
A basic RNN cell has only one state variable, while an LSTM has both a cell state and a hidden state. Specifying state_is_tuple=False concatenates the two state variables into one, therefore doubling the size compared with what you specified in the init_state declaration.
To avoid this, you can use the built-in zero_state method of the LSTM cell to initialize the state correctly, without worrying about size differences.
So it would simply be:
init_state = cell.zero_state(batch_size, dtype)
Of course, this will have to be placed after the line where the cell is built.
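A minimal sketch of how that fits together (TF 1.x contrib API as in the question; the dimensions are illustrative, not taken from the question's full script):
import tensorflow as tf

batch_size, truncated_backprop_length, embedding_size, state_size = 32, 10, 8, 50

X_placeholder = tf.placeholder(tf.float32,
                               [batch_size, truncated_backprop_length, embedding_size])

# Let the cell build its own initial state: an LSTMStateTuple (c, h),
# each of shape [batch_size, state_size], so the sizes always match.
cell = tf.contrib.rnn.BasicLSTMCell(state_size)
init_state = cell.zero_state(batch_size, tf.float32)

states_series, current_state = tf.nn.dynamic_rnn(cell, X_placeholder,
                                                 initial_state=init_state)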
