Does np.linalg.solve() not work with AutoDiff? I use it to solve the manipulator equations. The error message is shown below.
I tried a similar version of the code with plain double scalars and it runs without issue. Please tell me how to fix it, thanks!
### Here is the error message
vdot_ad = np.linalg.solve(M_,ggg_ad)
File "<__array_function__ internals>", line 5, in solve
File "/usr/local/lib/python3.8/site-packages/numpy/linalg/linalg.py", line 394, in solve
r = gufunc(a, b, signature=signature, extobj=extobj)
TypeError: No loop matching the specified signature and casting was found for ufunc solve1
#### Here is the code
plant = MultibodyPlant(time_step= 0.01)
parser = Parser(plant)
parser.AddModelFromFile("double_pendulum.sdf")
plant.Finalize()
plant_autodiff = plant.ToAutoDiffXd()
####### <AutoDiff> this is where I get the error message
xu = np.hstack((x, u))
xu_ad = initializeAutoDiff(xu)[:,0]
x_ad = xu_ad[:4]
q_ad = x_ad[:2]
v_ad = x_ad[2:4]
u_ad = xu_ad[4:]
(M_, Cv_, tauG_, B_, tauExt_) = ManipulatorDynamics(plant_autodiff, q_ad, v_ad)
vdot_ad = np.linalg.solve(M_,tauG_ + np.dot(B_,u_ad) - np.dot(Cv_,v_ad))
Note that in pydrake, AutoDiffXd scalars are exposed to NumPy using dtype=object.
There are some drawbacks to this approach, like what you have run into now.
This is not necessarily an issue with Drake, but a limitation of NumPy itself, given the ufuncs implemented in the (super old) version that ships with 18.04.
To illustrate, here is what I see on Ubuntu 18.04, CPython 3.6.9, NumPy 1.13.3:
>>> import numpy as np
>>> A = np.eye(2)
>>> b = np.array([1, 2])
>>> np.linalg.solve(A, b)
array([ 1., 2.])
>>> A = A.astype(object)
>>> b = b.astype(object)
>>> np.linalg.solve(A, b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 375, in solve
r = gufunc(a, b, signature=signature, extobj=extobj)
TypeError: No loop matching the specified signature and casting
was found for ufunc solve1
The most direct solution would be to expose an analogous routine in pydrake, and have users leverage that.
That is what we had to do for np.linalg.inv as well:
https://github.com/RobotLocomotion/drake/pull/11173/files
Not the best solution :( However, it's simple enough!
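In the meantime, one possible workaround (our sketch, not Drake API): since the failure happens only inside NumPy's compiled gufunc, a pure-Python elimination that stays in object dtype avoids it entirely. This assumes a square, well-conditioned M with nonzero pivots:

import numpy as np

def solve_object(A, b):
    # Plain-Python Gaussian elimination (no pivoting), so it works on
    # dtype=object arrays such as AutoDiffXd that the compiled gufunc rejects.
    # Assumes square A with nonzero pivots -- a sketch, not a robust solver.
    A = np.array(A, dtype=object)
    x = np.array(b, dtype=object)
    n = A.shape[0]
    for k in range(n):                      # forward elimination
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] = A[i, k:] - A[k, k:] * f
            x[i] = x[i] - x[k] * f
    for k in range(n - 1, -1, -1):          # back substitution
        x[k] = (x[k] - np.dot(A[k, k + 1:], x[k + 1:])) / A[k, k]
    return x

# e.g. vdot_ad = solve_object(M_, tauG_ + np.dot(B_, u_ad) - np.dot(Cv_, v_ad))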
I downloaded the ERA5 U and V wind components and I am using xarray to read the .nc files and select several lat/lon points from the data. Afterwards I need to calculate wind speed and direction using metpy.calc functions:
ds = xr.open_mfdataset('ERA5_u10_*.nc',combine='by_coords').metpy.parse_cf()
dv = xr.open_mfdataset('ERA5_v10_*.nc',combine='by_coords').metpy.parse_cf()
#direction =mpcalc.wind_direction(ds['u10'],dv['v10'])
#dd=mpcalc.wind_direction(ds.u10,dv.v10)
locations = []
k = 0
u10 = []
v10 = []
speed = []
direction = []
u100 = []
v100 = []
for i, j in zip(lats, lons):
    stations_u = ds.sel(longitude=j, latitude=i, method='nearest')
    stations_v = dv.sel(longitude=j, latitude=i, method='nearest')
    u10.append(stations_u.u10)  # change variable as needed
    v10.append(stations_v.v10)  # change variable as needed
    speed.append(mpcalc.wind_speed(stations_u.u10, stations_v.v10))
    direction.append(mpcalc.wind_direction(stations_u.u10, stations_v.v10))
Wind speed works with no problem, but wind direction raises an error:
runfile('/mnt/data1/ERA5/wind/untitled0.py', wdir='/mnt/data1/ERA5/wind')
Found valid latitude/longitude coordinates, assuming latitude_longitude for projection grid_mapping variable
Found valid latitude/longitude coordinates, assuming latitude_longitude for projection grid_mapping variable
Traceback (most recent call last):
File "/usr/local/python/anaconda3/envs/my_env/lib/python3.6/site-packages/dask/array/core.py", line 1615, in __setitem__
y = where(key, value, self)
File "/usr/local/python/anaconda3/envs/my_env/lib/python3.6/site-packages/dask/array/routines.py", line 1382, in where
return elemwise(np.where, condition, x, y)
File "/usr/local/python/anaconda3/envs/my_env/lib/python3.6/site-packages/dask/array/core.py", line 4204, in elemwise
broadcast_shapes(*shapes)
File "/usr/local/python/anaconda3/envs/my_env/lib/python3.6/site-packages/dask/array/core.py", line 4165, in broadcast_shapes
"shapes {0}".format(" ".join(map(str, shapes)))
ValueError: operands could not be broadcast together with shapes (262968,) (nan,) (262968,)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/data1/ERA5/wind/untitled0.py", line 40, in <module>
direction.append(mpcalc.wind_direction(stations_u.u10,stations_v.v10))
File "/usr/local/python/anaconda3/envs/my_env/lib/python3.6/site-packages/metpy/xarray.py", line 1206, in wrapper
result = func(*bound_args.args, **bound_args.kwargs)
File "/usr/local/python/anaconda3/envs/my_env/lib/python3.6/site-packages/metpy/units.py", line 246, in wrapper
return func(*args, **kwargs)
File "/usr/local/python/anaconda3/envs/my_env/lib/python3.6/site-packages/metpy/calc/basic.py", line 104, in wind_direction
wdir[mask] += 360. * units.deg
File "/usr/local/python/anaconda3/envs/my_env/lib/python3.6/site-packages/pint/quantity.py", line 1868, in __setitem__
self._magnitude[key] = factor.magnitude
File "/usr/local/python/anaconda3/envs/my_env/lib/python3.6/site-packages/dask/array/core.py", line 1622, in __setitem__
) from e
ValueError: Boolean index assignment in Dask expects equally shaped arrays.
Example: da1[da2] = da3 where da1.shape == (4,), da2.shape == (4,) and da3.shape == (4,).
If I try to calculate wind direction with only one element in U and in V, surprisingly it works:
mpcalc.wind_direction(stations_u.u10[0],stations_v.v10[0]).values
Out[12]: array(173.41553, dtype=float32)
metpy.calc.wind_direction is unfortunately known not to work with Dask arrays; in fact, many places in MetPy don't work well with Dask, though we definitely want them to. For now, to use wind_direction you'll need to turn stations_u, etc. into standard numpy arrays using e.g. .compute().
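For illustration, here is a minimal sketch of that workaround; the synthetic u10/v10 series, chunk size, and units below are made-up stand-ins for the ERA5 data in the question:

import numpy as np
import xarray as xr
import metpy.calc as mpcalc
from metpy.units import units

# Synthetic, dask-backed stand-ins for stations_u.u10 / stations_v.v10
u10 = xr.DataArray(np.random.randn(1000).astype('float32'), dims='time').chunk(250)
v10 = xr.DataArray(np.random.randn(1000).astype('float32'), dims='time').chunk(250)

# .compute() materializes the dask array; .values gives a plain numpy array
direction = mpcalc.wind_direction(u10.compute().values * units('m/s'),
                                  v10.compute().values * units('m/s'))
print(direction[:5])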
Thanks for your reply! Yeah, it makes sense...
The DataArrays I am working with are very big, so I need to keep them as DataArrays; copying them to a normal np array would use more memory than what I have.
I eventually tried the calculation with stations_u as both u and v, and surprisingly it works:
mpcalc.wind_direction(stations_u.u10, stations_u.u10)
Out[7]:
<xarray.DataArray 'sub-404afe706bcfce1f36810c70766c0647' (time: 262968)>
<Quantity(dask.array<sub, shape=(262968,), dtype=float32, chunksize=(87672,), chunktype=numpy.ndarray>, 'degree')>
Coordinates:
    longitude  float32 -7.1
    latitude   float32 43.8
  * time       (time) datetime64[ns] 1990-01-01 ... 2019-12-31T23:00:00
Why does it work in this case? Isn't metpy.calc.wind_direction using a dask array in this case too?
I feel that there's something wrong with attribute names, units and pint, but I can't understand what it is...
I was learning to do classification with the MNIST dataset, and I got an error which I am not able to figure out. I have done a lot of Google searches and have not been able to fix it; maybe you are an expert and can help me. Here is the code:
>>> from sklearn.datasets import fetch_openml
>>> mnist = fetch_openml('mnist_784', version=1)
>>> mnist.keys()
output:
dict_keys(['data', 'target', 'frame', 'categories', 'feature_names', 'target_names', 'DESCR', 'details', 'url'])
>>> X, y = mnist["data"], mnist["target"]
>>> X.shape
output: (70000, 784)
>>> y.shape
output: (70000,)
>>> X[0]
output: KeyError Traceback (most recent call last)
c:\users\khush\appdata\local\programs\python\python39\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2897 try:
-> 2898 return self._engine.get_loc(casted_key)
2899 except KeyError as err:
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 0
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
<ipython-input-10-19c40ecbd036> in <module>
----> 1 X[0]
c:\users\khush\appdata\local\programs\python\python39\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2904 if self.columns.nlevels > 1:
2905 return self._getitem_multilevel(key)
-> 2906 indexer = self.columns.get_loc(key)
2907 if is_integer(indexer):
2908 indexer = [indexer]
c:\users\khush\appdata\local\programs\python\python39\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2898 return self._engine.get_loc(casted_key)
2899 except KeyError as err:
-> 2900 raise KeyError(key) from err
2901
2902 if tolerance is not None:
KeyError: 0
Please answer; it could be a silly mistake because I am a beginner in ML. It would be really helpful if you gave me some hint as well.
The API of fetch_openml changed between versions. In earlier versions, it returned a numpy.ndarray. Since 0.24.0 (December 2020), the as_frame argument of fetch_openml defaults to 'auto' (instead of False, the earlier default), which gives you a pandas.DataFrame for the MNIST data. You can force the data to be read as a numpy.ndarray by setting as_frame=False. See the fetch_openml reference.
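A minimal sketch of that fix, using the same call as in the question with the one extra argument:

from sklearn.datasets import fetch_openml

# as_frame=False restores the numpy.ndarray return type of older versions
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
X, y = mnist["data"], mnist["target"]
some_digit = X[0]  # positional indexing works again on an ndarray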
I was also facing the same problem.
scikit-learn: 0.24.0
matplotlib: 3.3.3
Python: 3.9.1
I used the code below to resolve the issue.
import matplotlib as mpl
import matplotlib.pyplot as plt
# instead of some_digit = X[0]
some_digit = X.to_numpy()[0]
some_digit_image = some_digit.reshape(28,28)
plt.imshow(some_digit_image,cmap="binary")
plt.axis("off")
plt.show()
You don't need to downgrade your scikit-learn library if you follow the code below:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version= 1, as_frame= False)
mnist.keys()
Alternatively, if you load the dataset as a dataframe, you have two ways to access the images:
Transform the dataframe to an Array
# Transform the dataframe into an array. Check the first value
some_digit = X.to_numpy()[0]
# Reshape it to (28, 28). Note: 28 x 28 = 784; if the array does not
# contain 784 values, the reshape fails and you cannot show the image
some_digit_image = some_digit.reshape(28,28)
plt.imshow(some_digit_image,cmap="binary")
plt.axis("off")
plt.show()
Transform the row
# Transform the row of your choosing into an array
some_digit = X.iloc[0,:].values
# Reshape it to (28, 28). Note: 28 x 28 = 784; if the array does not
# contain 784 values, the reshape fails and you cannot show the image
some_digit_image = some_digit.reshape(28,28)
plt.imshow(some_digit_image,cmap="binary")
plt.axis("off")
plt.show()
I am porting a Keras, TensorFlow, and OpenCV script to TF2 and Keras 2 and have run into a problem. I am getting an error on K.learning_phase(): 0.
The error happens in this code section.
def detect_image(self, image):
    if self.model_image_size != (None, None):
        assert self.model_image_size[0] % 32 == 0, 'Multiples of 32 required'
        assert self.model_image_size[1] % 32 == 0, 'Multiples of 32 required'
        boxed_image = image_preporcess(np.copy(image), tuple(reversed(self.model_image_size)))
        image_data = boxed_image

    out_boxes, out_scores, out_classes = self.sess.run(
        [self.boxes, self.scores, self.classes],
        feed_dict={
            self.yolo_model.input: image_data,
            self.input_image_shape: [image.shape[0], image.shape[1]],
            tf.keras.learning_phase(): 0})
Here is a gist with the full code:
https://gist.github.com/robisen1/31976de17af9e752c6ba8d1dd0e08906
Traceback (most recent call last):
File "webcam_detect.py", line 188, in <module>
r_image, ObjectsList = yolo.detect_image(frame)
File "webcam_detect.py", line 110, in detect_image
K.learning_phase(): 0
File "C:\Anaconda3\envs\simplecv\lib\site-packages\tensorflow_core\python\framework\ops.py", line 705, in __hash__
raise TypeError("Tensor is unhashable if Tensor equality is enabled. "
TypeError: Tensor is unhashable if Tensor equality is enabled. Instead, use tensor.experimental_ref() as the key.
(simplecv) PS C:\dev\lacv\yolov3\yolov3ct>
I am not sure what is going on. I would appreciate any insights.
You are trying to use TensorFlow 1.x code, which works in graph mode, whereas TensorFlow 2.x works in eager mode. TensorFlow 1.x requires users to manually stitch together an abstract syntax tree (the graph) by making tf.* API calls, and then to manually compile it by passing a set of output and input tensors to a session.run() call. TensorFlow 2.0 executes eagerly (like Python normally does); in 2.0, graphs and sessions should feel like implementation details.
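As a minimal illustration of the difference (our snippet, not from the question): in TF2, ops run immediately and return concrete values, with no graph building or Session.run involved:

import tensorflow as tf

# Eager mode: the matmul executes immediately and returns a concrete value
x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
print(tf.matmul(x, w))  # tf.Tensor([[11.]], shape=(1, 1), dtype=float32)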
The error is due to the version. If you are using a Session in TF2 then you need to use the compatibility module, and the same goes for other operations. Also, in TF2 it is tf.keras.backend.learning_phase.
I would recommend going through the guide: Migrate your TensorFlow 1 code to TensorFlow 2.
For example, the code below throws an error similar to the one you are facing:
import tensorflow as tf
print(tf.__version__)
x = tf.constant(5)
y = tf.constant(10)
z = tf.constant(20)
# This will show same error.
tensor_set = {x, y, z}
tensor_dict = {x: 'five', y: 'ten', z: 'twenty'}
Output -
2.2.0
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-509b2d8d7ab1> in <module>()
6
7 # This will show same error.
----> 8 tensor_set = {x, y, z}
9 tensor_dict = {x: 'five', y: 'ten', z: 'twenty'}
10
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in __hash__(self)
724 if (Tensor._USE_EQUALITY and executing_eagerly_outside_functions() and
725 (g is None or g.building_function)):
--> 726 raise TypeError("Tensor is unhashable. "
727 "Instead, use tensor.ref() as the key.")
728 else:
TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.
But the code below fixes the issue:
import tensorflow as tf
print(tf.__version__)
x = tf.constant(5)
y = tf.constant(10)
z = tf.constant(20)
#This solves the issue
tensor_set = {x.experimental_ref(), y.experimental_ref(), z.experimental_ref()}
tensor_dict = {x.experimental_ref(): 'five', y.experimental_ref(): 'ten', z.experimental_ref(): 'twenty'}
Output -
2.2.0
WARNING:tensorflow:From <ipython-input-4-05e379e669d9>:12: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
If you are still facing the error, then kindly share the reproducible code for the error like above. Will be happy to help you.
Hope this answers your question. Happy Learning.
Try disabling eager execution with tf.compat.v1.disable_eager_execution():
from tensorflow.compat.v1 import disable_eager_execution
disable_eager_execution()
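A small sanity check (our addition): the call must run before any ops are created, and you can verify the mode afterwards:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # must run before building any ops
print(tf.executing_eagerly())  # False: TF1-style graph mode is active again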
I have a large dataset (207989, 23), and I am trying to apply hierarchical clustering on just one column for now, to test whether it's suitable for the task at hand.
What I have tried:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
data = pd.read_csv('gpmd.csv', header = 0)
X = data.loc[:, ['ContextID', 'BacksGas_Flow_sccm']]
min_max_scaler = preprocessing.MinMaxScaler()
X_minmax = min_max_scaler.fit_transform(X.values[:,[1]])
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X_minmax, method = 'ward'))
after doing this, I am getting the following error:
dendrogram = sch.dendrogram(sch.linkage(X_minmax, method = 'ward'))
Traceback (most recent call last):
File "<ipython-input-4-429f42b68112>", line 1, in <module>
dendrogram = sch.dendrogram(sch.linkage(X_minmax, method = 'ward'))
File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\scipy\cluster\hierarchy.py", line 708, in linkage
y = distance.pdist(y, metric)
File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\scipy\spatial\distance.py", line 1877, in pdist
dm = np.empty((m * (m - 1)) // 2, dtype=np.double)
MemoryError
Can someone explain what exactly the problem is here?
Thanks in advance.
Hierarchical clustering in most variants needs O(n²) memory.
Because of this, most implementations will fail at around 65,535 instances, when they hit the 32-bit mark (some may fail at 32k already). But just do the math: n * n * 8 bytes for double precision. How much memory would you need?
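To make that concrete (our arithmetic, matching the np.empty call in the traceback, which allocates the condensed distance matrix of n*(n-1)/2 doubles):

# Memory pdist tries to allocate for the condensed distance matrix
n = 207_989
entries = n * (n - 1) // 2       # ~2.16e10 pairwise distances
print(entries * 8 / 1e9, "GB")   # ~173 GB of float64 -- hence the MemoryError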
I am learning machine learning using the Titanic dataset from Kaggle. I am using sklearn's LabelEncoder to transform text data into numeric labels. The following code works fine for "Sex" but not for "Embarked".
encoder = preprocessing.LabelEncoder()
features["Sex"] = encoder.fit_transform(features["Sex"])
features["Embarked"] = encoder.fit_transform(features["Embarked"])
This is the error I got
Traceback (most recent call last):
File "../src/script.py", line 20, in <module>
features["Embarked"] = encoder.fit_transform(features["Embarked"])
File "/opt/conda/lib/python3.6/site-packages/sklearn/preprocessing/label.py", line 131, in fit_transform
self.classes_, y = np.unique(y, return_inverse=True)
File "/opt/conda/lib/python3.6/site-packages/numpy/lib/arraysetops.py", line 211, in unique
perm = ar.argsort(kind='mergesort' if return_index else 'quicksort')
TypeError: '>' not supported between instances of 'str' and 'float'
I solved it myself. The problem was that this particular feature had NaN values. Filling them with a numerical value would still throw an error, since the column would then mix string and float datatypes, so I replaced them with a character value:
features["Embarked"] = encoder.fit_transform(features["Embarked"].fillna('0'))
Try this function; you'll need to pass it a pandas DataFrame. It looks at the type of each column and encodes the object ones, so you won't even need to check the types yourself.
def encoder(data):
    '''Map the categorical variables to numbers to work with scikit-learn.'''
    for col in data.columns:
        if data.dtypes[col] == "object":  # only text columns need encoding
            le = preprocessing.LabelEncoder()
            le.fit(data[col])
            data[col] = le.transform(data[col])
    return data
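A quick usage sketch (our example; note that this helper still needs NaNs filled first, for the reason discussed above):

import pandas as pd
from sklearn import preprocessing

df = pd.DataFrame({"Sex": ["male", "female", "female"],
                   "Embarked": ["S", "C", None]})
df["Embarked"] = df["Embarked"].fillna("0")  # LabelEncoder cannot sort str and NaN
print(encoder(df))  # both object columns are now integer labels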