I have the following line of code, which fails:
contact_forces = contact_forces / (np.linalg.norm(contact_forces, axis=-1) + 1e-5)[..., np.newaxis]
*** TypeError: loop of ufunc does not support argument 0 of type pydrake.symbolic.Variable which has no callable conjugate method
Below are various debugging outputs from PDB. In short, the norm fails only when the axis argument is specified:
(Pdb) contact_forces
array([[[Variable('x(0)', Continuous), Variable('x(1)', Continuous)],
[Variable('x(2)', Continuous), Variable('x(3)', Continuous)],
[Variable('x(4)', Continuous), Variable('x(5)', Continuous)]]],
dtype=object)
(Pdb) np.linalg.norm(contact_forces, axis=-1)
*** TypeError: loop of ufunc does not support argument 0 of type pydrake.symbolic.Variable which has no callable conjugate method
(Pdb) np.linalg.norm(contact_forces)
<Expression "sqrt((pow(x(0), 2) + pow(x(1), 2) + pow(x(2), 2) + pow(x(3), 2) + pow(x(4), 2) + pow(x(5), 2)))">
(Pdb) contact_forces.shape
(1, 3, 2)
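A workaround, since np.linalg.norm(..., axis=...) internally calls np.conj (which the symbolic Variable type does not implement, per the error above), is to build the norm by hand. A minimal sketch, assuming elementwise squaring and np.sqrt work on object arrays of Drake expressions, as they do in recent pydrake:
# Sum squares manually so NumPy never calls conjugate on the
# symbolic elements; every operation stays symbolic.
norms = np.sqrt(np.sum(contact_forces**2, axis=-1))
contact_forces = contact_forces / (norms + 1e-5)[..., np.newaxis]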
Related:
The motivation is this: I want to evaluate some expressions in Drake using my initial guess.
(Pdb) sum_torques[0][0]
<Expression "(0.22118181414246227 * F(0) + 0.025169594403141499 * F(1) + 0.24812114211450653 * F(3) + 0.11159816296412806 * F(6) - 0.58491827687679454 * F(10))">
(Pdb) self.F
array([[Variable('F(0)', Continuous)],
[Variable('F(1)', Continuous)],
[Variable('F(2)', Continuous)],
[Variable('F(3)', Continuous)],
[Variable('F(4)', Continuous)],
[Variable('F(5)', Continuous)],
[Variable('F(6)', Continuous)],
[Variable('F(7)', Continuous)],
[Variable('F(8)', Continuous)],
[Variable('F(9)', Continuous)],
[Variable('F(10)', Continuous)],
[Variable('F(11)', Continuous)],
[Variable('F(12)', Continuous)],
[Variable('F(13)', Continuous)],
[Variable('F(14)', Continuous)]], dtype=object)
How can I easily evaluate the first expression? I thought Evaluate would work, but it doesn't:
(Pdb) sum_torques[0][0].Evaluate(prog.GetInitialGuess(self.F))
*** TypeError: Evaluate(): incompatible function arguments. The following argument types are supported:
1. (self: pydrake.symbolic.Expression, env: Dict[pydrake.symbolic.Variable, float] = {}, generator: pydrake.common._module_py.RandomGenerator = None) -> float
2. (self: pydrake.symbolic.Expression, generator: pydrake.common._module_py.RandomGenerator) -> float
Invoked with: <Expression "(0.22118181414246227 * F(0) + 0.025169594403141499 * F(1) + 0.24812114211450653 * F(3) + 0.11159816296412806 * F(6) - 0.58491827687679454 * F(10))">, array([10.6457846 , 10.32468297, 10.51971119, 10.00005536, 10.31186022,
10.42545154, 10.88533766, 10.67987946, 10.45612977, 10.48340862,
10.78873943, 10.22944183, 10.8802976 , 10.31369239, 10.95745086])
I could of course rewrite the sum_torques expression using prog.GetInitialGuess for every expression I want to evaluate, but this is extremely cumbersome. I thought there would be some easy way to evaluate the expressions without doing that.
To evaluate a symbolic::Expression, you need to provide a dictionary that maps each variable to its value. Here is the code:
env = {self.F[i][0] : prog.GetInitialGuess(self.F[i][0]) for i in range(self.F.shape[0])}
sum_torques[0][0].Evaluate(env)
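If you need to evaluate many expressions at once, pydrake also exposes a vectorized pydrake.symbolic.Evaluate that maps a whole array of Expressions through a single environment. A sketch, assuming that function is available in your Drake version:
from pydrake.symbolic import Evaluate

# One environment for all decision variables, then evaluate the
# whole ndarray of Expressions in a single call.
env = {var: prog.GetInitialGuess(var) for var in self.F.flatten()}
values = Evaluate(sum_torques, env)  # ndarray of floats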
I am using TFF 0.12.0. I have 4 clients; each client has 38 training images and 16 test images. I wrote a simple piece of federated learning code:
.....
def create_compiled_keras_model():
    base_model = tf.keras.applications.resnet.ResNet50(
        include_top=False, weights='imagenet', input_shape=(224, 224, 3))
    global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
    prediction_layer = tf.keras.layers.Dense(2, activation='softmax')
    model = tf.keras.Sequential([
        base_model,
        global_average_layer,
        prediction_layer
    ])
    return model

def model_fn():
    keras_model = create_compiled_keras_model()
    return tff.learning.from_keras_model(
        keras_model, sample_batch,
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.CategoricalAccuracy()])

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.001),
    client_weight_fn=None)
state = iterative_process.initialize()
I can't understand why these lines appear during execution:
2020-11-05 15:00:16.642666: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 32 in the outer inference context.
2020-11-05 15:00:16.642724: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 23 in the outer inference context.
2020-11-05 15:00:16.643344: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 51 in the outer inference context.
2020-11-05 15:00:16.643400: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 41 in the outer inference context.
2020-11-05 15:00:16.643545: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 69 in the outer inference context.
2020-11-05 15:00:16.643589: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 60 in the outer inference context.
2020-11-05 15:00:16.643696: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 97 in the outer inference context.
2020-11-05 15:00:16.643756: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 86 in the outer inference context.
2020-11-05 15:00:16.643923: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 106 in the outer inference context.
2020-11-05 15:00:16.643988: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 116 in the outer inference context.
2020-11-05 15:00:16.644071: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 79 in the outer inference context.
2020-11-05 15:00:16.644213: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 134 in the outer inference context.
Note that if I replace ResNet50 with VGG16, these lines disappear. What do they mean?
I have been working on my first ML project, and as part of it I have been trying out various models from scikit-learn. I wrote this piece of code for a random forest model:
# Random Forest
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error

reg = RandomForestRegressor(random_state=0, criterion='mse')

# Apply grid search for best parameters
params = {'randomforestregressor__n_estimators': range(100, 500, 200),
          'randomforestregressor__min_samples_split': range(2, 10, 3)}
pipe = make_pipeline(reg)
grid = GridSearchCV(pipe, param_grid=params, scoring='mean_squared_error',
                    n_jobs=-1, iid=False, cv=5)
reg = grid.fit(X_train, y_train)
print('Best MSE: ', grid.best_score_)
print('Best Parameters: ', grid.best_estimator_)
y_train_pred = reg.predict(X_train)
y_test_pred = reg.predict(X_test)
tr_err = mean_squared_error(y_train_pred, y_train)
ts_err = mean_squared_error(y_test_pred, y_test)
print(tr_err, ts_err)
results_train['random_forest'] = tr_err
results_test['random_forest'] = ts_err
But, when I run this code, I get the following error:
KeyError Traceback (most recent call last)
~\anaconda3\lib\site-packages\sklearn\metrics\_scorer.py in get_scorer(scoring)
359 else:
--> 360 scorer = SCORERS[scoring]
361 except KeyError:
KeyError: 'mean_squared_error'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-149-394cd9e0c273> in <module>
5 pipe = make_pipeline(reg)
6 grid = GridSearchCV(pipe, param_grid = params, scoring='mean_squared_error', n_jobs=-1, iid=False, cv=5)
----> 7 reg = grid.fit(X_train, y_train)
8 print('Best MSE: ', grid.best_score_)
9 print('Best Parameters: ', grid.best_estimator_)
~\anaconda3\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
71 FutureWarning)
72 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 73 return f(**kwargs)
74 return inner_f
75
~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params)
652 cv = check_cv(self.cv, y, classifier=is_classifier(estimator))
653
--> 654 scorers, self.multimetric_ = _check_multimetric_scoring(
655 self.estimator, scoring=self.scoring)
656
~\anaconda3\lib\site-packages\sklearn\metrics\_scorer.py in _check_multimetric_scoring(estimator, scoring)
473 if callable(scoring) or scoring is None or isinstance(scoring,
474 str):
--> 475 scorers = {"score": check_scoring(estimator, scoring=scoring)}
476 return scorers, False
477 else:
~\anaconda3\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
71 FutureWarning)
72 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 73 return f(**kwargs)
74 return inner_f
75
~\anaconda3\lib\site-packages\sklearn\metrics\_scorer.py in check_scoring(estimator, scoring, allow_none)
403 "'fit' method, %r was passed" % estimator)
404 if isinstance(scoring, str):
--> 405 return get_scorer(scoring)
406 elif callable(scoring):
407 # Heuristic to ensure user has not passed a metric
~\anaconda3\lib\site-packages\sklearn\metrics\_scorer.py in get_scorer(scoring)
360 scorer = SCORERS[scoring]
361 except KeyError:
--> 362 raise ValueError('%r is not a valid scoring value. '
363 'Use sorted(sklearn.metrics.SCORERS.keys()) '
364 'to get valid options.' % scoring)
ValueError: 'mean_squared_error' is not a valid scoring value. Use sorted(sklearn.metrics.SCORERS.keys()) to get valid options.
So I tried removing scoring='mean_squared_error' from the GridSearchCV call. When I do that, the code runs perfectly and gives decent training and testing errors. Still, I can't figure out why passing scoring='mean_squared_error' to GridSearchCV throws that error. What am I doing wrong?
According to the documentation:
All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as neg_mean_squared_error which return the negated value of the metric.
This means that you have to pass scoring='neg_mean_squared_error' in order to evaluate the grid search results with Mean Squared Error.
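For example, a minimal sketch of the corrected call, keeping the rest of the pipeline unchanged; note that best_score_ is then the negated MSE, so flip its sign when reporting:
grid = GridSearchCV(pipe, param_grid=params, scoring='neg_mean_squared_error',
                    n_jobs=-1, iid=False, cv=5)
grid.fit(X_train, y_train)
# best_score_ is the negative MSE; negate it to report a positive error
print('Best MSE: ', -grid.best_score_)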
I am trying to use direct transcription to solve my trajectory optimization problem, which involves some trigonometric functions.
I have the following variable types:
a[i] = array([<Expression "(state_0(0) + 0.001 * state_0(4))">,
<Expression "(state_0(1) + 0.001 * state_0(5))">,
<Expression "(state_0(2) + 0.001 * state_0(6))">,
<Expression "(state_0(3) + 0.001 * state_0(7))">,
<Expression "...omitted...">,
<Expression "...omitted...">,
<Expression "...omitted...">,
<Expression "...omitted...">], dtype=object)
b[i] = array([Variable('state_1(0)', Continuous),
Variable('state_1(1)', Continuous),
Variable('state_1(2)', Continuous),
Variable('state_1(3)', Continuous),
Variable('state_1(4)', Continuous),
Variable('state_1(5)', Continuous),
Variable('state_1(6)', Continuous),
Variable('state_1(7)', Continuous)], dtype=object)
I'm trying to create a constraint as follows:
mp.AddConstraint(b[i] <= a[i])
But I get the following error:
RuntimeError: You should not call `__bool__` / `__nonzero__` on `Formula`. If you are trying to make a map with `Variable`, `Expression`, or `Polynomial` as keys (and then access the map in Python), please use pydrake.common.containers.EqualToDict`.
That's correct, although you can also use the function names, like eq(a, b):
https://github.com/RobotLocomotion/drake/issues/8315
It appears that the constraint must be specified per element, i.e.
for j in range(8):
    mp.AddConstraint(b[i][j] <= a[i][j])
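Alternatively, the whole vector constraint can be added in one call with the elementwise comparison helpers. A sketch, assuming your Drake version exports eq/le/ge from pydrake.math:
from pydrake.math import le

# le(b[i], a[i]) builds an array of elementwise Formulas,
# which AddConstraint accepts directly.
mp.AddConstraint(le(b[i], a[i]))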
I have been getting this error when I call std on a frozen exponweib distribution.
Here is the code:
import scipy.stats as st

d = st.exponweib
params = d.fit(y)
arg = params[:-2]
loc = params[-2]
scale = params[-1]
rv1 = d(arg, loc, scale)
print rv1.std()
The parameters after fitting are:
arg: (3.445136651705262, 0.10885378466279112)
loc: 11770.05
scale: 3.87424773976
Here is the error:
ValueError Traceback (most recent call last)
<ipython-input-637-4394814bbb8c> in <module>()
11 rv1 = d(arg,loc,scale)
12
---> 13 print rv1.std()
.../anaconda/lib/python2.7/site-packages/scipy/stats/_distn_infrastructure.pyc in std(self)
487
488 def std(self):
--> 489 return self.dist.std(*self.args, **self.kwds)
490
491 def moment(self, n):
.../anaconda/lib/python2.7/site-packages/scipy/stats/_distn_infrastructure.pyc in std(self, *args, **kwds)
1259 """
1260 kwds['moments'] = 'v'
-> 1261 res = sqrt(self.stats(*args, **kwds))
1262 return res
1263
.../anaconda/lib/python2.7/site-packages/scipy/stats/_distn_infrastructure.pyc in stats(self, *args, **kwds)
1032 mu = self._munp(1, *goodargs)
1033 mu2 = mu2p - mu * mu
-> 1034 if np.isinf(mu):
1035 # if mean is inf then var is also inf
1036 mu2 = np.inf
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Please let me know what is wrong with what I'm doing, or how to avoid this.
The exponweib distribution has two required parameters, a and c, and two optional ones, loc and scale. When you call d(arg, loc, scale), arg is interpreted as a, loc is interpreted as c, and scale is interpreted as loc. And since your arg is a tuple of two elements, you end up with a pair of random variables, neither of which is what you want.
Solution: unpack the tuple: d(*arg, loc=loc, scale=scale) (on Python 2, positional arguments cannot follow a *arg unpacking, so pass loc and scale by name). Or even simpler, use
rv1 = d(*params)
which unpacks all the parameters for you, without you having to extract and name them.
By the way, when you want to provide your own loc and scale for a random variable, it's better to pass them as named arguments, like d(3, 5, loc=90, scale=0.3). This avoids the situation you encountered, where some parameters get interpreted as something else because an argument was off. In your example, d(arg, loc=loc, scale=scale) would immediately throw an error, "missing 1 required positional argument: 'c'", instead of silently using loc as c.
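Putting it together, a minimal sketch (the synthetic y below is an assumption, just to make the example self-contained):
import scipy.stats as st

# Synthetic data standing in for the real y.
y = st.exponweib.rvs(3.0, 1.5, loc=0, scale=2.0, size=1000)

params = st.exponweib.fit(y)  # returns (a, c, loc, scale)
rv1 = st.exponweib(*params)   # unpack all four parameters in order
print(rv1.std())              # a scalar now, no ValueError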