Issue using the patchify library to open image files - opencv

I was trying to stitch smaller patches of images into one large image using the patchify library and the code used by DigitalSreeni on YouTube in episode 208 on multiclass semantic segmentation. However, when using the piece of code below, I wasn't able to open the image files from the very beginning. The error asked me to check the file path or the file itself, but I knew the directory was correct. The code and error are attached below.
from patchify import patchify, unpatchify
large_image = cv2.imread("Users/anish/largeimages/largeimage.png", 0)
#This will split the image into small images of shape [3,3]
patches = patchify(large_image, (128, 128), step=1)
Error shown on command prompt:
Traceback (most recent call last):
File "C:\Users\anish\AppData\Local\Temp\ipykernel_23432\463661116.py", line 5, in <module>
patches = patchify(large_image, (128, 128), step=1)
File "C:\Users\anish\anaconda3\envs\py37gpu\lib\site-packages\patchify\__init__.py", line 32, in patchify
return view_as_windows(image, patch_size, step)
File "C:\Users\anish\anaconda3\envs\py37gpu\lib\site-packages\patchify\view_as_windows.py", line 21, in view_as_windows
raise TypeError("`arr_in` must be a numpy ndarray")
TypeError: `arr_in` must be a numpy ndarray
[ WARN:0@9.588] global D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp (239) cv::findDecoder imread_('Users/anish/largeimages/largeimage.png'): can't open/read file: check file path/integrity
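The warning on the last line is the root cause: cv2.imread could not open the file at that path, so it silently returned None, and patchify then rejected the non-ndarray input. A minimal sketch of a guard that makes this explicit (the absolute path below is only an assumption about where the file actually lives):
import cv2
from patchify import patchify

image_path = r"C:\Users\anish\largeimages\largeimage.png"   # hypothetical absolute path
large_image = cv2.imread(image_path, 0)                     # 0 = load as grayscale
if large_image is None:
    # imread does not raise on a bad path; it returns None, which is what
    # later triggers "`arr_in` must be a numpy ndarray" inside patchify.
    raise FileNotFoundError("Could not read image at " + image_path)

patches = patchify(large_image, (128, 128), step=1)         # 128x128 patches, step 1
print(patches.shape)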

Related

Converting h5 to Core ML (iOS)

I'm currently working in a collaboration. My task is to convert an .h5 file, which was generated by a neural network with TensorFlow, to Core ML. Additionally, I should integrate it into my Xcode project.
The input is a two-dimensional array of 21 floats:
input = [[0.5, 0.4, ...]]
The output should be a float between 0 and 1.
I've tried a lot, but as far as I know the main issue is that Core ML only supports image classification. I didn't find any clue how to convert an .h5 file to Core ML with this specific type of input and output. Can anybody help?
Thanks a lot!
Edit
This is my code. I'm confused because I once read that I just have to name the input and output instead of defining the variable as an MLMultiArray. I guess this is my main issue, but I haven't figured out how to define the input as an MLMultiArray.
from keras.models import load_model
import coremltools

coreml_model = coremltools.converters.keras.convert(
    'modelv.h5',
    input_names=['data'],
    output_names=['output'],
)
coreml_model.save('PredictionModel.mlmodel')
When I run the code, I get the following message from the compiler.
runfile('/Path/Neuronal Network')
Traceback (most recent call last):
File "/Path/ Neuronal Network/Converter.py", line 20, in <module>
output_names='output',
File "/path/", line 804, in convert
use_float_arraytype=use_float_arraytype)
File "/Path/opt/anaconda3/lib/python3.7/site-packages/coremltools/converters/keras/_keras_converter.py", line 585, in convertToSpec
use_float_arraytype=use_float_arraytype)
File "/Path/opt/anaconda3/lib/python3.7/site-packages/coremltools/converters/keras/_keras2_converter.py", line 328, in _convert
graph.build()
File "/Path/opt/anaconda3/lib/python3.7/site-packages/coremltools/converters/keras/_topology2.py", line 740, in build
self.make_input_layers()
File "/Path/opt/anaconda3/lib/python3.7/site-packages/coremltools/converters/keras/_topology2.py", line 169, in make_input_layers
if isinstance(kl, InputLayer) and kl.input == ts:
File "/Path/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 765, in __bool__
self._disallow_bool_casting()
File "/Path/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 534, in _disallow_bool_casting
self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
File "/Path/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 523, in _disallow_in_graph_mode
" this function with #tf.function.".format(task))
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with #tf.function.
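For context, the final OperatorNotAllowedInGraphError is not specific to Core ML: it is what graph-mode TensorFlow 2.x raises whenever the result of a symbolic tensor comparison (here, the converter's kl.input == ts check) is used as a Python bool. A rough, hedged sketch of that same failure, assuming a TF 2.x install like the tensorflow_core paths above suggest; this only illustrates the mechanism, it is not a fix for the conversion:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()   # build a graph, as the converter does
a = tf.constant([1.0])
b = tf.constant([1.0])
try:
    if a == b:                            # `a == b` is a symbolic Tensor here, not a bool
        pass
except Exception as exc:
    print(type(exc).__name__, exc)        # OperatorNotAllowedInGraphError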

How can I run these prediction.py and evaluate_performance.py files?

I am a medical student and I am using Google Colab to learn fastai. In this project, https://github.com/QinglingGo/Classification-of-Objects-using-Deep-Learning-Model, I can get output from the model, but I don't know how to run the prediction.py and evaluate_performance.py files.
When I run evaluate_performance.py, the following message appears:
python3: can't open file 'prediction.py': [Errno 2] No such file or directory
/usr/local/lib/python3.6/dist-packages/IPython/utils/traitlets.py:5: UserWarning: IPython.utils.traitlets has moved to a top-level traitlets package.
  warn ("IPython.utils.traitlets has moved to a top-level traitlets package.")
1. Loading Data ...
ImageDataBunch;
Train: LabelList (942 items)
x: SegmentationItemList
Image (3, 256, 256), Image (3, 256, 256), Image (3, 256, 256), Image (3, 256, 256), Image (3, 256, 256)
y: SegmentationLabelList
ImageSegment (1, 256, 256), ImageSegment (1, 256, 256), ImageSegment (1, 256, 256), ImageSegment (1, 256, 256), ImageSegment (1, 256, 256)
Path: /content/drive/My Drive/Colab Notebooks/bbc_train/images;
Valid: LabelList (0 items)
x: SegmentationItemList
y: SegmentationLabelList
Path: /content/drive/My Drive/Colab Notebooks/bbc_train/images;
Test: None
2. Instantiating Model ...
Traceback (most recent call last):
  File "evaluate_preformance.py", line 66, in <module>
    combined_accuracy, classification_accuracy, bbox_score, segmentation_accuracy = evaluate()
  File "evaluate_preformance.py", line 29, in evaluate
    M = Model(path=model_dir, file='export.pkl')
NameError: name 'Model' is not defined.
Also, I don't understand the meaning of "from sample_student import Model" on line 6 of the .py file. Can anyone help me?
Thanks in advance!
I don't know whether this will solve your problem completely, but these are the basic things to keep in mind when you use such projects.
From your terminal, go to the directory that contains the evaluate_performance.py script and run the command python evaluate_performance.py. The deep learning model is most likely also defined in one of the Python scripts there. Set all the paths to your dataset properly, and if everything is correct you will be able to run the code successfully.
Note: keep all the Python scripts in the same directory so that they can find each other. Hope this helps.
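For example, in a Colab notebook cell this might look roughly like the following (the repository location under Drive is an assumption based on the paths printed above; adjust it to wherever the scripts actually live):
from google.colab import drive
drive.mount('/content/drive')

# Change into the folder that holds the scripts so that imports such as
# "from sample_student import Model" can resolve, then run the script.
%cd "/content/drive/My Drive/Colab Notebooks/bbc_train"
!python evaluate_performance.py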
In a new cell of your Jupyter notebook, run the command below.
%run /path_to_file/filename.py
This will execute the Python file inside the Jupyter notebook.
Note: make sure you give the correct path. If the path is wrong, it will raise an error saying the file was not found.

ZeroDivisionError: float division by zero during net_segment inference patch aggregation

I ran (on Ubuntu 16.04 in a Google Cloud VM Instance):
net_segment inference -c <path-to-config>
for a binary segmentation problem using unet_2d with softmax and a (96,96,1) spatial window.
This was after I trained my model for 10 epochs and saved the checkpoint. I'm not sure why it's raising a ZeroDivisionError from windows_aggregator_resize.py. What is the cause of this issue and what can I do to fix it?
Here are some inference settings and the corresponding error:
pixdim: (1.0, 1.0, 1.0)
[NETWORK]
batch_size: 1
cutoff: (0.01, 0.99)
name: unet_2d
normalisation: False
volume_padding_size: (96, 96, 0)
reg_type: L2
window_sampling: resize
multimod_foreground_type: and
[INFERENCE]
border = (96,96,0)
inference_iter = -1
output_interp_order = 0
spatial_window_size = (96,96,2)
INFO:niftynet: Accessing /home/xchaosfailx1/niftynet/models/MSD/heart_la_seg/models/model.ckpt-10 ...
INFO:niftynet: Restoring parameters from /home/xchaosfailx1/niftynet/models/MSD/heart_la_seg/models/model.ckpt-10
INFO:niftynet: Cleaning up...
WARNING:niftynet: stopped early, incomplete loops
INFO:niftynet: stopping sampling threads
INFO:niftynet: SegmentationApplication stopped (time in second 17.07).
Traceback (most recent call last):
File "/home/xchaosfailx1/.local/bin/net_segment", line 11, in <module>
sys.exit(main())
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/__init__.py", line 139, in main
app_driver.run_application()
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/application_driver.py", line 275, in run_application
self._inference_loop(session, loop_status)
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/application_driver.py", line 493, in _inference_loop
self._loop(iter_generator(itertools.count(), INFER), sess, loop_status)
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/application_driver.py", line 442, in _loop
iter_msg.current_iter_output[NETWORK_OUTPUT])
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/application/segmentation_application.py", line 390, in interpret_output
batch_output['window'], batch_output['location'])
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/windows_aggregator_resize.py", line 55, in decode_batch
self._save_current_image(window[batch_id, ...], resize_to_shape)
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/windows_aggregator_resize.py", line 82, in _save_current_image
[float(p) / float(d) for p, d in zip(window_shape, image_shape)]
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/windows_aggregator_resize.py", line 82, in <listcomp>
[float(p) / float(d) for p, d in zip(window_shape, image_shape)]
ZeroDivisionError: float division by zero
To reproduce the error:
I changed the padding in niftynet.network.unet_2d.py from valid to same
dataset [Task2_Heart] : https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2
updated config:
https://drive.google.com/open?id=1RI111BZLv4Lhf9cGvHo_sAHRt_k5Xt0I
I didn't check the inference data, but I think spatial_window_size under [INFERENCE] should be (96, 96, 1), as that's what you set in training.
The mistake I made was that I set the border (96,96,0) under [INFERENCE] to essentially the same size as my spatial window (96,96,1), so when the batch was cropped in decode_batch, the cropped image had a shape containing zeros. Hence, when the zoom ratio was calculated in _save_current_image, it led to a ZeroDivisionError. The temporary fix was to remove volume padding and change the border to (0,0,0).
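To make that failure mode concrete, here is a minimal sketch (with made-up shapes) of the zoom-ratio computation the traceback points at in windows_aggregator_resize.py; a zero in the cropped image shape is exactly what makes it blow up:
window_shape = (96, 96, 1)   # shape of the window being written out
image_shape = (96, 96, 0)    # cropped shape once an over-sized border is removed
try:
    zoom_ratio = [float(p) / float(d) for p, d in zip(window_shape, image_shape)]
except ZeroDivisionError as exc:
    print(exc)               # float division by zero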

TypeError: len() of unsized object in Python Extreme Learning Machine (ELM) library

I have installed the elm library for Python. There is an example provided at this link: http://elm.readthedocs.io/en/latest/usage.html. The code is:
import elm
# download an example dataset from
# https://github.com/acba/elm/tree/develop/tests/data
# load dataset
data = elm.read("iris.data")
# create a classifier
elmk = elm.ELMKernel()
# search for best parameter for this dataset
# define "kfold" cross-validation method, "accuracy" as a objective function
# to be optimized and perform 10 searching steps.
# best parameters will be saved inside 'elmk' object
elmk.search_param(data, cv="kfold", of="accuracy", eval=10)
# split data in training and testing sets
# use 80% of dataset to training and shuffle data before splitting
tr_set, te_set = elm.split_sets(data, training_percent=.8, perm=True)
#train and test
# results are Error objects
tr_result = elmk.train(tr_set)
te_result = elmk.test(te_set)
print(te_result.get_accuracy)
When I run the code I get the error below. It would be a great help if someone could point out what is causing the problem. I have downloaded the dataset from the URL given in the link. My elm package version is 0.1.1 and my Python version is 3.5.2. Thanks in advance.
Error is:
Traceback (most recent call last):
File "F:\7th semester\machine language\thesis work\python\Applying ELM in iris dataset\elm1.py", line 17, in <module>
elmk.search_param(data, cv="kfold", of="accuracy", eval=10)
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\elm\elmk.py", line 489, in search_param
param_kernel=param_ranges[1])
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\optunity\api.py", line 212, in minimize
pmap=pmap)
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\optunity\api.py", line 245, in optimize
solution, report = solver.optimize(f, maximize, pmap=pmap)
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\optunity\solvers\CMAES.py", line 139, in optimize
sigma=self.sigma)
File "C:\Users\maisha\AppData\Local\Programs\Python\Python35\lib\site-packages\deap\cma.py", line 90, in __init__
self.dim = len(self.centroid)
TypeError: len() of unsized object
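This doesn't pinpoint where elm or optunity goes wrong, but the deap line in the traceback (self.dim = len(self.centroid)) only produces this exact message when the centroid it receives is a 0-d, unsized object rather than a 1-d array. A minimal illustration of the message itself (the value used here is made up):
import numpy as np

centroid = np.array(1.0)     # a 0-d array has no length
try:
    dim = len(centroid)      # same call as deap/cma.py line 90 above
except TypeError as exc:
    print(exc)               # len() of unsized object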

Multi-hot encoding in TensorFlow (Google Cloud Machine Learning, tf.estimator API)

I have a feature like a post tag. So for each observation the post_tag feature might be a selection of tags like "oscars,brad-pitt,awards". I'd like to be able to pass this as a feature to a TensorFlow model built using the Estimator API and running on Google Cloud Machine Learning (as per this example, but adapted for my own problem).
I'm just not sure how to transform this into a multi-hot encoded feature in TensorFlow. Ideally I'm trying to get something similar to sklearn's MultiLabelBinarizer.
I think this is sort of related, but not quite what I need.
So say I have data like:
id,post_tag
1,[oscars,brad-pitt,awards]
2,[oscars,film,reviews]
3,[matt-damon,bourne]
I want to featurize it, as part of preprocessing within tensorflow, as:
id,post_tag_oscars,post_tag_brad_pitt,post_tag_awards,post_tag_film,post_tag_reviews,post_tag_matt_damon,post_tag_bourne
1,1,1,1,0,0,0,0
2,1,0,0,1,1,0,0
3,0,0,0,0,0,1,1
Update
If I have post_tag_list as a string like "oscars,brad-pitt,awards" in the input CSV, and I then try:
INPUT_COLUMNS = [
    ...
    tf.contrib.lookup.HashTable(
        tf.contrib.lookup.KeyValueTensorInitializer('post_tag_list',
                                                    tf.range(0, 10, dtype=tf.int64),
                                                    tf.string, tf.int64),
        default_value=10, name='post_tag_list'),
    ...]
I get this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/andrew_maguire/localDev/codeBase/pmc-analytical-data-mart/clickmodel/trainer/task.py", line 4, in <module>
import model
File "trainer/model.py", line 49, in <module>
default_value=10, name='post_tag_list'),
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 276, in __init__
super(HashTable, self).__init__(table_ref, default_value, initializer)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 162, in __init__
self._init = initializer.initialize(self)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 348, in initialize
table.table_ref, self._keys, self._values, name=scope)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_lookup_ops.py", line 205, in _initialize_table_v2
values=values, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2632, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1911, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1861, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 595, in call_cpp_shape_fn
require_shape_fn)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 659, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Shape must be rank 1 but is rank 0 for 'key_value_init' (op: 'InitializeTableV2') with input shapes: [], [], [10].
If I were to pad each post_tag_list to be like "oscars,brad-pitt,awards,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER" so it's always 10 long, would that be a potential solution here?
Or do I need to somehow know the set of all post tags I might ever pass in here (which is somewhat ill-defined, as new ones are created all the time)?
Have you tried tf.contrib.lookup.HashTable?
Here is an example from my own usage: https://github.com/TensorLab/tensorfx/blob/master/src/data/_transforms.py#L160, and a made-up example snippet based on that:
import tensorflow as tf

session = tf.InteractiveSession()

entries = ['red', 'blue', 'green']
table = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(entries,
                                                tf.range(0, len(entries), dtype=tf.int64),
                                                tf.string, tf.int64),
    default_value=len(entries), name='entries')
tf.tables_initializer().run()

value = tf.constant([['blue', 'red'], ['green', 'red']])
print(table.lookup(value).eval())
I believe lookup works for both regular tensors and SparseTensors (you might end up with the latter given your variable-length list of values).
There are a couple of issues to tackle here. The first is the question of a tag set which keeps growing. You would also like to know how to parse variable-length data from the CSV.
To handle a growing tag set, you'll need to use an OOV bucket or feature hashing. Nikhil showed the latter, so I'll show the former.
How to parse variable-length data from CSV
Let's suppose the column with variable length data uses | as a separator, e.g.
csv = [
    "1,oscars|brad-pitt|awards",
    "2,oscars|film|reviews",
    "3,matt-damon|bourne",
]
You can use code like this to convert those to a SparseTensor.
import tensorflow as tf

# Purposefully omitting "bourne" to demonstrate OOV mappings.
TAG_SET = ["oscars", "brad-pitt", "awards", "film", "reviews", "matt-damon"]
NUM_OOV = 1

def sparse_from_csv(csv):
    ids, post_tags_str = tf.decode_csv(csv, [[-1], [""]])
    table = tf.contrib.lookup.index_table_from_tensor(
        mapping=TAG_SET, num_oov_buckets=NUM_OOV, default_value=-1)
    split_tags = tf.string_split(post_tags_str, "|")
    return ids, tf.SparseTensor(
        indices=split_tags.indices,
        values=table.lookup(split_tags.values),
        dense_shape=split_tags.dense_shape)

# Optionally create an embedding for this.
TAG_EMBEDDING_DIM = 3

ids, tags = sparse_from_csv(csv)

embedding_params = tf.Variable(tf.truncated_normal([len(TAG_SET) + NUM_OOV, TAG_EMBEDDING_DIM]))
embedded_tags = tf.nn.embedding_lookup_sparse(embedding_params, sp_ids=tags, sp_weights=None)

# Test it out
with tf.Session() as s:
    s.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(s.run([ids, embedded_tags]))
You'll see output like so (since the embedding is random, exact numbers will change):
[array([1, 2, 3], dtype=int32), array([[ 0.16852427, 0.26074541, -0.4237918 ],
[-0.38550434, 0.32314634, 0.858069 ],
[ 0.19339906, -0.24429649, -0.08393878]], dtype=float32)]
You can see that each column in the CSV is represented as an ndarray, where the tags are now 3-dimensional embeddings.
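If a plain 0/1 multi-hot matrix (closer to sklearn's MultiLabelBinarizer) is wanted instead of embeddings, one possible sketch under the same TF 1.x assumptions, reusing the tags SparseTensor from the snippet above, is to convert the sparse ids into a dense indicator:
# Each row gets a 1.0 at the index of every tag present in that post,
# including the OOV bucket; shape is [num_rows, len(TAG_SET) + NUM_OOV].
multi_hot = tf.cast(
    tf.sparse_to_indicator(tags, vocab_size=len(TAG_SET) + NUM_OOV),
    tf.float32)

with tf.Session() as s:
    s.run(tf.tables_initializer())
    print(s.run(multi_hot))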
