Julia MLJ not loading correctly

I'm at the install/testing phase of Julia and attempting to run the Iris example from the MLJ documentation, training a model using Julia's MLJ package. I think I have it installed properly, but am getting this error. What have I done incorrectly? Thx. J
julia> Tree = @load DecisionTreeClassifier pkg=DecisionTree
DecisionTreeClassifier(
max_depth = -1,
min_samples_leaf = 1,
min_samples_split = 2,
min_purity_increase = 0.0,
n_subfeatures = 0,
post_prune = false,
merge_purity_threshold = 1.0,
pdf_smoothing = 0.0,
display_depth = 5,
rng = Random._GLOBAL_RNG()) @726
julia> tree = Tree()
ERROR: MethodError: no method matching (::MLJDecisionTreeInterface.DecisionTreeClassifier)()
Closest candidates are:
(::Supervised)(::Tuple{AbstractMatrix{T} where T, Any}) at /home/name/.julia/packages/MLJBase/8HFqb/src/composition/learning_networks/arrows.jl:25
(::Supervised)(::Tuple{AbstractNode, AbstractNode}) at /home/name/.julia/packages/MLJBase/8HFqb/src/composition/learning_networks/arrows.jl:21
(::Supervised)(::Tuple{AbstractNode, Any}) at /home/name/.julia/packages/MLJBase/8HFqb/src/composition/learning_networks/arrows.jl:22
...
Stacktrace:
[1] top-level scope
@ REPL[24]:1

I found the problem: old versions of some packages that could not be updated. I moved copies of ~/.julia/environments/v1.6/Manifest.toml and Project.toml to another directory for storage, restarted Julia, and reinstalled the packages MLJ and MLJDecisionTreeInterface; this time I got the most recent versions. Now the code runs fine.

Related

Type Error for Image generation GAN model

I am trying to visualize the final images from the generator model of a GAN image generator; everything else worked perfectly. When trying to visualize the images after training the model, I am getting a TypeError.
This is what I was using for image plotting:
imgs = test_model.predict(tf.random.normal((16, 128, 1)))
^ this ran fine
but the one below gave me an error:
fig, ax = plt.subplots(ncols = 4, nrows = 4, figsize = (20, 20))
for r in range(4):
    for c in range(4):
        ax[r][c].imshow(imgs[(r+1)*(c+1)-1])
This is the error.
TypeError: Invalid shape (28, 28, 1) for image data
Can anyone tell me how to fix this?
I checked for all possible errors and couldn't find it. What am I missing?

Is np.linalg.solve() not working for AutoDiff?

Does np.linalg.solve() not work with AutoDiff? I use it to solve the manipulator equation. The error message is shown below.
A similar version of the code using plain "double" scalars has no issue. Please tell me how to fix it, thanks!
### here is the error message
vdot_ad = np.linalg.solve(M_,ggg_ad)
File "<__array_function__ internals>", line 5, in solve
File "/usr/local/lib/python3.8/site-packages/numpy/linalg/linalg.py", line 394, in solve
r = gufunc(a, b, signature=signature, extobj=extobj)
TypeError: No loop matching the specified signature and casting was found for ufunc solve1
#### here is the code
plant = MultibodyPlant(time_step= 0.01)
parser = Parser(plant)
parser.AddModelFromFile("double_pendulum.sdf")
plant.Finalize()
plant_autodiff = plant.ToAutoDiffXd()
####### <AutoDiff> get the error message
xu = np.hstack((x, u))
xu_ad = initializeAutoDiff(xu)[:,0]
x_ad = xu_ad[:4]
q_ad = x_ad[:2]
v_ad = x_ad[2:4]
u_ad = xu_ad[4:]
(M_, Cv_, tauG_, B_, tauExt_) = ManipulatorDynamics(plant_autodiff, q_ad, v_ad)
vdot_ad = np.linalg.solve(M_,tauG_ + np.dot(B_,u_ad) - np.dot(Cv_,v_ad))
Note that in pydrake, AutoDiffXd scalars are exposed to NumPy using dtype=object.
There are some drawbacks to this approach, like what you have run into now.
This is not necessarily an issue with Drake, but a limitation of NumPy itself, given the ufuncs implemented in the (super old) version that ships with Ubuntu 18.04.
To illustrate, here is what I see on Ubuntu 18.04, CPython 3.6.9, NumPy 1.13.3:
>>> import numpy as np
>>> A = np.eye(2)
>>> b = np.array([1, 2])
>>> np.linalg.solve(A, b)
array([ 1., 2.])
>>> A = A.astype(object)
>>> b = b.astype(object)
>>> np.linalg.solve(A, b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 375, in solve
r = gufunc(a, b, signature=signature, extobj=extobj)
TypeError: No loop matching the specified signature and casting
was found for ufunc solve1
The most direct solution would be to expose an analogous routine in pydrake, and have users leverage that.
That is what we had to do for np.linalg.inv as well:
https://github.com/RobotLocomotion/drake/pull/11173/files
Not the best solution :( However, it's simple enough!
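In the meantime, one stopgap is to sidestep the ufunc entirely with a hand-rolled solve that uses only Python-level arithmetic, which dtype=object arrays do support. A minimal sketch (naive Gaussian elimination without pivoting, assuming a square M_ with nonzero pivots; this is not a Drake or NumPy API):
import numpy as np

def solve_object(A, b):
    # Gaussian elimination written so every operation is an elementwise
    # Python-level +, -, *, / -- these work on dtype=object arrays, so
    # scalar types like pydrake's AutoDiffXd propagate through the solve.
    A = np.array(A, dtype=object)  # copies, so the caller's arrays survive
    b = np.array(b, dtype=object)
    n = len(b)
    for k in range(n):                      # forward elimination
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] = A[i, k:] - m * A[k, k:]
            b[i] = b[i] - m * b[k]
    x = np.empty(n, dtype=object)
    for i in range(n - 1, -1, -1):          # back substitution
        s = b[i]
        for j in range(i + 1, n):
            s = s - A[i, j] * x[j]
        x[i] = s / A[i, i]
    return x

# usage at the question's call site:
# vdot_ad = solve_object(M_, tauG_ + np.dot(B_, u_ad) - np.dot(Cv_, v_ad))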

TypeError("Tensor is unhashable if Tensor equality is enabled. " K.learning_phase(): 0

I am porting a Keras, Tensorflow, and OpenCV script to TF2 and Keras 2 and have run into a problem. I am getting an error on K.learning_phase(): 0.
The error happens in this code section.
def detect_image(self, image):
    if self.model_image_size != (None, None):
        assert self.model_image_size[0] % 32 == 0, 'Multiples of 32 required'
        assert self.model_image_size[1] % 32 == 0, 'Multiples of 32 required'
        boxed_image = image_preporcess(np.copy(image), tuple(reversed(self.model_image_size)))
        image_data = boxed_image
    out_boxes, out_scores, out_classes = self.sess.run(
        [self.boxes, self.scores, self.classes],
        feed_dict={
            self.yolo_model.input: image_data,
            self.input_image_shape: [image.shape[0], image.shape[1]],
            K.learning_phase(): 0})
Here is a gist with the full code:
https://gist.github.com/robisen1/31976de17af9e752c6ba8d1dd0e08906
Traceback (most recent call last):
File "webcam_detect.py", line 188, in <module>
r_image, ObjectsList = yolo.detect_image(frame)
File "webcam_detect.py", line 110, in detect_image
K.learning_phase(): 0
File "C:\Anaconda3\envs\simplecv\lib\site-packages\tensorflow_core\python\framework\ops.py", line 705, in __hash__
raise TypeError("Tensor is unhashable if Tensor equality is enabled. "
TypeError: Tensor is unhashable if Tensor equality is enabled. Instead, use tensor.experimental_ref() as the key.
(simplecv) PS C:\dev\lacv\yolov3\yolov3ct>
I am not sure what is going on. I would appreciate any insights.
You are trying to use TensorFlow 1.x code, which works in graph mode, whereas TensorFlow 2.x works in eager mode. TensorFlow 1.x requires users to manually stitch together an abstract syntax tree (the graph) by making tf.* API calls, and then to manually compile it by passing a set of output and input tensors to a session.run() call. TensorFlow 2.0 executes eagerly (like Python normally does); in 2.0, graphs and sessions should feel like implementation details.
The error is due to the version difference. If you are using a session in TF2 then you need to use the compatible (tf.compat.v1) version, and the same goes for other operations. Also, in TF2 it is tf.keras.backend.learning_phase.
I would recommend going through the guide Migrate your TensorFlow 1 code to TensorFlow 2.
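To make the compat route concrete, here is a minimal sketch of what the session call could look like under TF 2.x; yolo_model, boxes, scores, classes, image_data, and input_image_shape are hypothetical stand-ins for the question's self.* attributes:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # opt back into TF1-style graph mode
K = tf.compat.v1.keras.backend          # v1-compatible Keras backend

sess = K.get_session()
out_boxes, out_scores, out_classes = sess.run(
    [boxes, scores, classes],                 # hypothetical output tensors
    feed_dict={
        yolo_model.input: image_data,         # hypothetical model and input
        input_image_shape: [image.shape[0], image.shape[1]],
        K.learning_phase(): 0,                # 0 = inference mode
    })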
For example, the code below throws an error similar to the one you are facing -
import tensorflow as tf
print(tf.__version__)
x = tf.constant(5)
y = tf.constant(10)
z = tf.constant(20)
# This will show same error.
tensor_set = {x, y, z}
tensor_dict = {x: 'five', y: 'ten', z: 'twenty'}
Output -
2.2.0
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-509b2d8d7ab1> in <module>()
6
7 # This will show same error.
----> 8 tensor_set = {x, y, z}
9 tensor_dict = {x: 'five', y: 'ten', z: 'twenty'}
10
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in __hash__(self)
724 if (Tensor._USE_EQUALITY and executing_eagerly_outside_functions() and
725 (g is None or g.building_function)):
--> 726 raise TypeError("Tensor is unhashable. "
727 "Instead, use tensor.ref() as the key.")
728 else:
TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.
But the code below will fix the issue -
import tensorflow as tf
print(tf.__version__)
x = tf.constant(5)
y = tf.constant(10)
z = tf.constant(20)
#This solves the issue
tensor_set = {x.experimental_ref(), y.experimental_ref(), z.experimental_ref()}
tensor_dict = {x.experimental_ref(): 'five', y.experimental_ref(): 'ten', z.experimental_ref(): 'twenty'}
Output -
2.2.0
WARNING:tensorflow:From <ipython-input-4-05e379e669d9>:12: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
If you are still facing the error, kindly share reproducible code like the example above; I will be happy to help.
Hope this answers your question. Happy learning.
Try disabling eager execution with tf.compat.v1.disable_eager_execution():
from tensorflow.compat.v1 import disable_eager_execution
disable_eager_execution()

Planet NDVI calculation: ModuleNotFoundError: No module named 'rasterio'

I'm performing NDVI calculation on a Planet Scope 4 band image as per Planet's documentation
The following block of code is what I wrote:
Extract band data from original image in working directory

import rasterio
import numpy

image_file = "20170430_194027_0c82_3B_AnalyticMS"

with rasterio.open(image_file) as src:
    band_red = src.read(3)
with rasterio.open(image_file) as src:
    band_nir = src.read(4)

from xml.dom import minidom

xmldoc = minidom.parse("20170430_194027_0c82_3B_AnalyticMS_metadata")
nodes = xmldoc.getElementsByTagName("ps:bandSpecificMetadata")

Extract TOA correction coefficients from metadata file in directory

TOA_coeffs = {}
for node in nodes:
    bn = node.getElementsByTagName("ps:bandNumber")[0].firstChild.data
    if bn in ['1', '2', '3', '4']:
        i = int(bn)
        value = node.getElementsByTagName("ps:ReflectanceCoefficient")[0].firstChild.data
        TOA_coeffs[i] = float(value)

Calculate NDVI and save file

band_red = band_red * TOA_coeffs[3]
band_nir = band_nir * TOA_coeffs[4]
numpy.seterr(divide='ignore', invalid='ignore')
NDVI = (band_nir.astype(float) - band_red.astype(float)) / (band_nir + band_red)
numpy.nanmin(NDVI), numpy.nanmax(NDVI)
kwargs = src.meta
kwargs.update(dtype=rasterio.float32, count=1)
with rasterio.open('ndvi.tif', 'w', **kwargs) as dst:
    dst.write_band(1, NDVI.astype(rasterio.float32))

Add symbology and plot color bar

import matplotlib.pyplot as plt
import matplotlib.colors as colors

class MidpointNormalize(colors.Normalize):
    def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
        self.midpoint = midpoint
        colors.Normalize.__init__(self, vmin, vmax, clip)
    def __call__(self, value, clip=None):
        x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
        return numpy.ma.masked_array(numpy.interp(value, x, y), numpy.isnan(value))

min = numpy.nanmin(NDVI)
max = numpy.nanmax(NDVI)
mid = 0.1

fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111)
cmap = plt.cm.RdYlGn
cax = ax.imshow(NDVI, cmap=cmap, clim=(min, max),
                norm=MidpointNormalize(midpoint=mid, vmin=min, vmax=max))
ax.axis('off')
ax.set_title('NDVI_test', fontsize=18, fontweight='bold')
cbar = fig.colorbar(cax, orientation='horizontal', shrink=0.65)
fig.savefig("output/NDVI_test.png", dpi=200, bbox_inches='tight', pad_inches=0.7)
plt.show()

Plot histogram for NDVI pixel value distribution

fig2 = plt.figure(figsize=(10, 10))
ax = fig2.add_subplot(111)
plt.title("NDVI Histogram", fontsize=18, fontweight='bold')
plt.xlabel("NDVI values", fontsize=14)
plt.ylabel("# pixels", fontsize=14)
x = NDVI[~numpy.isnan(NDVI)]
numBins = 20
ax.hist(x, numBins, color='green', alpha=0.8)
fig2.savefig("output/ndvi-histogram.png", dpi=200, bbox_inches='tight', pad_inches=0.7)
plt.show()
Alas, the execution of the script is cut short at the beginning of the code:
File "C:/Users/David/Desktop/ArcGIS files/Planet Labs/2017.6_Luis_Bedin_Bolivia/planet_order_58311/20170430_194027_0c82/TOA_correction_NDVI.py", line 8, in <module>
import rasterio
ModuleNotFoundError: No module named 'rasterio'
So I decide to install rasterio, that should solve the problem:
C:\Users\David\Desktop\ArcGIS files\Planet Labs\2017.6_Luis_Bedin_Bolivia\planet_order_58311\20170430_194027_0c82>pip install rasterio
Collecting rasterio
Using cached rasterio-0.36.0.tar.gz
Requirement already satisfied: affine in c:\users\david\anaconda3\lib\site-packages (from rasterio)
Requirement already satisfied: cligj in c:\users\david\anaconda3\lib\site-packages (from rasterio)
Requirement already satisfied: numpy in c:\users\david\anaconda3\lib\site-packages (from rasterio)
Requirement already satisfied: snuggs in c:\users\david\anaconda3\lib\site-packages (from rasterio)
Requirement already satisfied: click-plugins in c:\users\david\anaconda3\lib\site-packages (from rasterio)
What I interpret from this is that rasterio is already installed. How can this be, if the Python console tells me there's no module named rasterio? The output from the console also says Microsoft Visual C++ is required. Upon further research I found this user's solution. I tried it, but the console also tells me that rasterio is already installed:
(envpythonfs) C:\Users\David\Desktop\ArcGIS files\Planet Labs\2017.6_Luis_Bedin_Bolivia\planet_order_58311\20170430_194027_0c82>conda install rasterio gdal
Fetching package metadata .............
Solving package specifications: .
# All requested packages already installed.
# packages in environment at C:\Users\David\Anaconda3\envs\envpythonfs:
#
I'm creating the script using Spyder 3.1.2 with Python 3.6 on a Windows 10 64-bit machine.
I think pip is not the best way to go for making sure dependencies are handled appropriately. Since you're already using Anaconda, I would suggest:
conda install rasterio -c conda-forge/label/dev
Note that installing from the dev labeled version is not the long term solution (see https://github.com/conda-forge/rasterio-feedstock/pull/36).
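As a quick sanity check that Spyder and the install commands are even talking to the same Python (a common cause of "already installed" yet "no module named" symptoms), one can run this in the Spyder console:
import sys
print(sys.executable)  # the interpreter Spyder is actually running
print(sys.prefix)      # the environment that interpreter belongs to
If that prefix is not the environment where rasterio was installed (for example, envpythonfs versus the base Anaconda), the import will fail no matter how many times the package is reinstalled elsewhere.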

Invalid indexing operation error when trying to draw epipolar lines

I'm creating a stereo image processing project modeled on MATLAB's examples. Code copy-pasted from one of them doesn't work.
I1 = rgb2gray(imread('viprectification_deskLeft.png'));
I2 = rgb2gray(imread('viprectification_deskRight.png'));
points1 = detectHarrisFeatures(I1);
points2 = detectHarrisFeatures(I2);
[features1, valid_points1] = extractFeatures(I1, points1);
[features2, valid_points2] = extractFeatures(I2, points2);
indexPairs = matchFeatures(features1, features2);
matchedPoints1 = valid_points1(indexPairs(:, 1),:);
matchedPoints2 = valid_points2(indexPairs(:, 2),:);
figure; showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
load stereoPointPairs
[fLMedS, inliers] = estimateFundamentalMatrix(matchedPoints1,matchedPoints2,'NumTrials',4000);
figure;
subplot(121); imshow(I1);
title('Inliers and Epipolar Lines in First Image'); hold on;
plot(matchedPoints1(inliers,1), matchedPoints1(inliers,2), 'go');
An error:
Error using epilineTest (line 24)
Invalid indexing operation.
Best regards
Looks like you have an older version of MATLAB. Try doing this:
[fLMedS, inliers] = estimateFundamentalMatrix(...
matchedPoints1.Location, matchedPoints2.Location,'NumTrials',4000);
Generally, look at the example in your own local MATLAB documentation, rather than the one on the website. The website has the doc for the latest release (currently R2014a), and the examples may be using new features that do not exist in the older versions.
