How can I run this prediction.py and evaluate_performance.py file? - machine-learning

I am a medical student and I am using Google Colab to learn fastai. In this project, https://github.com/QinglingGo/Classification-of-Objects-using-Deep-Learning-Model,
I can achieve the output of the model, but I don't know how to run the prediction.py and evaluate_performance.py files.
When I run evaluate_performance.py, the following message appears:
python3: can't open file 'prediction.py': [Errno 2] No such file or directory
/usr/local/lib/python3.6/dist-packages/IPython/utils/traitlets.py:5: UserWarning: IPython.utils.traitlets has moved to a top-level traitlets package.
  warn ("IPython.utils.traitlets has moved to a top-level traitlets package.")
1. Loading Data ...
ImageDataBunch;
Train: LabelList (942 items)
x: SegmentationItemList
Image (3, 256, 256), Image (3, 256, 256), Image (3, 256, 256), Image (3, 256, 256), Image (3, 256, 256)
y: SegmentationLabelList
ImageSegment (1, 256, 256), ImageSegment (1, 256, 256), ImageSegment (1, 256, 256), ImageSegment (1, 256, 256), ImageSegment (1, 256, 256)
Path: /content/drive/My Drive/Colab Notebooks/bbc_train/images;
Valid: LabelList (0 items)
x: SegmentationItemList
y: SegmentationLabelList
Path: /content/drive/My Drive/Colab Notebooks/bbc_train/images;
Test: None
2. Instantiating Model ...
Traceback (most recent call last):
  File "evaluate_preformance.py", line 66, in <module>
    combined_accuracy, classification_accuracy, bbox_score, segmentation_accuracy = evaluate()
  File "evaluate_preformance.py", line 29, in evaluate
    M = Model(path=model_dir, file='export.pkl')
NameError: name 'Model' is not defined.
Also, I don't understand the meaning of "from sample_student import Model" on line 6 of the .py file. Can anyone help me?
Thanks in advance!

I don't know whether this will solve your problem completely, but here are the basic things to keep in mind when using such projects.
From your terminal, go to the directory that contains the script evaluate_performance.py and run the command python evaluate_performance.py. The line "from sample_student import Model" simply imports the Model class from a script called sample_student.py, so the deep learning model is presumably defined there. Set all the paths to your dataset properly, and if everything is correct you will be able to run the code successfully.
Note: keep all the Python scripts in the same directory so that they can find and import each other (see the sketch below). Hope this helps.
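For instance, a minimal Colab sketch, assuming the scripts live in the cloned repository (the repo URL is from the question; adjust paths to wherever you keep the project and your exported model):

# Clone the project so prediction.py, evaluate_performance.py and
# sample_student.py all sit in the same directory.
!git clone https://github.com/QinglingGo/Classification-of-Objects-using-Deep-Learning-Model
%cd Classification-of-Objects-using-Deep-Learning-Model
# Run the evaluation script from inside that directory so that
# "from sample_student import Model" can resolve.
!python3 evaluate_performance.py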

In a new cell of your Jupyter notebook, run the command below.
%run /path_to_file/filename.py
This will execute the Python file inside the Jupyter notebook.
Note: make sure you give the correct directory. If the path is wrong, it will raise an error that the file was not found.
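For example, a sketch for this particular project (the path is hypothetical; use whatever folder actually holds the scripts):

# Change into the folder that holds the scripts first, so that sibling
# imports such as "from sample_student import Model" resolve correctly.
%cd "/content/drive/My Drive/Colab Notebooks/project_folder"
%run evaluate_performance.py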

Related

TypeError for image generation GAN model

I am trying to visualize the final images from the generator model of a GAN image generator; everything else worked perfectly.
When trying to visualize the images after training the model, I get a TypeError.
This is what I was using for the image plotting:
imgs = test_model.predict(tf.random.normal((16, 128, 1)))
^ this ran fine
but the one below gave me an error:
fig, ax = plt.subplots(ncols=4, nrows=4, figsize=(20, 20))
for r in range(4):
    for c in range(4):
        ax[r][c].imshow(imgs[(r+1)*(c+1)-1])
This is the error.
TypeError: Invalid shape (28, 28, 1) for image data
Can anyone tell me how to fix this?
I checked for all possible errors but couldn't find it.
What am I missing?
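For what it's worth, a likely cause and a hedged sketch of a fix: matplotlib's imshow accepts (M, N), (M, N, 3) or (M, N, 4) arrays but not a trailing channel of size 1, so squeezing the last axis should resolve this exact TypeError. Note also that the index expression (r+1)*(c+1)-1 repeats indices (e.g. r=0, c=1 and r=1, c=0 both give 1); r * 4 + c visits all 16 images:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(ncols=4, nrows=4, figsize=(20, 20))
for r in range(4):
    for c in range(4):
        # Drop the trailing (28, 28, 1) channel axis that imshow rejects.
        ax[r][c].imshow(np.squeeze(imgs[r * 4 + c]), cmap="gray")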

Issue using patchify library to open image files

I was trying to stitch smaller patches of images into one large image using the patchify library and the code used by DigitalSreeni on YouTube in episode 208 of multiclass semantic segmentation. However, when using the piece of code below, I wasn't able to open the image files from the very beginning. It asked me to take a look at the directory or the file itself, but I knew the directory was correct. The code and error are attached below.
from patchify import patchify, unpatchify
large_image = cv2.imread("Users/anish/largeimages/largeimage.png", 0)
# This will split the image into small patches of shape (128, 128)
patches = patchify(large_image, (128, 128), step=1)
Error shown on command prompt:
Traceback (most recent call last):
File "C:\Users\anish\AppData\Local\Temp\ipykernel_23432\463661116.py", line 5, in <module>
patches = patchify(large_image, (128, 128), step=1)
File "C:\Users\anish\anaconda3\envs\py37gpu\lib\site-packages\patchify\__init__.py", line 32, in patchify
return view_as_windows(image, patch_size, step)
File "C:\Users\anish\anaconda3\envs\py37gpu\lib\site-packages\patchify\view_as_windows.py", line 21, in view_as_windows
raise TypeError("`arr_in` must be a numpy ndarray")
TypeError: `arr_in` must be a numpy ndarray
[ WARN:0#9.588] global D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp (239) cv::findDecoder imread_('Users/anish/largeimages/largeimage.png'): can't open/read file: check file path/integrity
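For what it's worth, the OpenCV warning on the last line is the real clue: cv2.imread does not raise on a bad path, it silently returns None, and patchify then rejects the None. A hedged sketch of a fix (the absolute path is an assumption based on the traceback):

import cv2
from patchify import patchify

# Use an absolute path: "Users/anish/..." is relative and lacks the drive.
large_image = cv2.imread(r"C:\Users\anish\largeimages\largeimage.png", 0)

# cv2.imread returns None instead of raising when the file cannot be read,
# so check before handing the result to patchify.
if large_image is None:
    raise FileNotFoundError("Could not read largeimage.png; check the path")

patches = patchify(large_image, (128, 128), step=1)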

How can I save each image to a folder from this array on jupyter notebook?

I have an array of 25 images, with a shape of (25, 64, 128, 1). I'm trying to save each image as a jpg into a folder.
Thank you!
Install PIL using pip install Pillow. Let's assume arr is the array of shape (25, 64, 128, 1).
from PIL import Image
for i in range(arr.shape[0]):
    im = Image.fromarray(arr[i, :, :, 0])
    im.save(f"image_{i}.jpeg")

TFF: Custom input spec with custom data set - TypeError: object of type 'TensorSpec' has no len()

1: problem:
I need to use a custom data set in a TFF simulation. I have built on the tff/python/research/compression example "run_experiment.py".
The error:
File "B:\tools and software\Anaconda\envs\bookProjects\lib\site-packages\IPython\core\interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-47998fd56829>", line 1, in <module>
runfile('B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py', args=['--experiment_name=temp', '--client_batch_size=20', '--client_optimizer=sgd', '--client_learning_rate=0.2', '--server_optimizer=sgd', '--server_learning_rate=1.0', '--total_rounds=200', '--rounds_per_eval=1', '--rounds_per_checkpoint=50', '--rounds_per_profile=0', '--root_output_dir=B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/logs/fed_out/'], wdir='B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection')
File "B:\tools and software\PyCharm 2020.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "B:\tools and software\PyCharm 2020.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py", line 292, in <module>
app.run(main)
File "B:\tools and software\Anaconda\envs\bookProjects\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "B:\tools and software\Anaconda\envs\bookProjects\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py", line 285, in main
train_main()
File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py", line 244, in train_main
input_spec=input_spec),
File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py", line 193, in model_builder
metrics=[tf.keras.metrics.Accuracy()]
File "B:\tools and software\Anaconda\envs\bookProjects\lib\site-packages\tensorflow_federated\python\learning\keras_utils.py", line 125, in from_keras_model
if len(input_spec) != 2:
TypeError: object of type 'TensorSpec' has no len()
highlighting: TypeError: object of type 'TensorSpec' has no len()
2: have tried:
I have looked at the response to: TensorFlow Federated: How can I write an Input Spec for a model with more than one input,
which describes what would be needed to produce a custom input spec.
I might be misunderstanding input_spec.
If I don't need to do this and there is a better way, please tell me.
3: source:
df = get_train_data(sysarg)
x_train, x_opt, x_test = np.split(
    df.sample(frac=1, random_state=17),
    [int(1 / 3 * len(df)), int(2 / 3 * len(df))])
x_train, x_opt, x_test = create_scalar(x_opt, x_test, x_train)
input_spec = tf.nest.map_structure(tf.TensorSpec.from_tensor, tf.convert_to_tensor(x_train))
TFF's models declare a slightly different input specification than you may be expecting; they generally expect both the x and the y values as parameters (i.e., data and labels). It is unfortunate that you're hitting that TypeError, as the ValueError TFF would otherwise raise is probably more helpful in this case. Inlining the operative parts of the message here:
The top-level structure in `input_spec` must contain exactly two elements,
as it must specify type information for both inputs to and predictions from the model.
The TLDR in your particular example is: if you have access to the labels as well (y_train below), simply change your input_spec definition to:
input_spec = tf.nest.map_structure(
    tf.TensorSpec.from_tensor,
    [tf.convert_to_tensor(x_train), tf.convert_to_tensor(y_train)])
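For context, a minimal sketch of how that two-element spec is consumed (the model, shapes, and loss are hypothetical, and the exact location of from_keras_model varies across TFF versions):

import tensorflow as tf
import tensorflow_federated as tff

# Hypothetical shapes: 115 features and a scalar label per example.
input_spec = (
    tf.TensorSpec(shape=(None, 115), dtype=tf.float32),  # x: data
    tf.TensorSpec(shape=(None, 1), dtype=tf.float32),    # y: labels
)

def model_fn():
    keras_model = tf.keras.Sequential(
        [tf.keras.layers.Dense(1, input_shape=(115,))])
    # from_keras_model checks len(input_spec) == 2: one element for the
    # inputs and one for the labels/predictions, which is exactly the
    # check that fails in the traceback above.
    return tff.learning.from_keras_model(
        keras_model,
        loss=tf.keras.losses.MeanSquaredError(),
        input_spec=input_spec)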

ZeroDivisionError: float division by zero during net_segment inference patch aggregation

I ran (on Ubuntu 16.04 in a Google Cloud VM Instance):
net_segment inference -c <path-to-config>
for a binary segmentation problem using unet_2d with softmax and a (96,96,1) spatial window.
This was after I trained my model for 10 epochs and saved the checkpoint. I'm not sure why it's raising a zero division error
from windows_aggregator_resize.py. What is the cause of this issue and what can I do to fix it?
Here are some inference settings and the corresponding error:
pixdim: (1.0, 1.0, 1.0)
[NETWORK]
batch_size: 1
cutoff: (0.01, 0.99)
name: unet_2d
normalisation: False
volume_padding_size: (96, 96, 0)
reg_type: L2
window_sampling: resize
multimod_foreground_type: and
[INFERENCE]
border = (96,96,0)
inference_iter = -1
output_interp_order = 0
spatial_window_size = (96,96,2)
INFO:niftynet: Accessing /home/xchaosfailx1/niftynet/models/MSD/heart_la_seg/models/model.ckpt-10 ...
INFO:niftynet: Restoring parameters from /home/xchaosfailx1/niftynet/models/MSD/heart_la_seg/models/model.ckpt-10
INFO:niftynet: Cleaning up...
WARNING:niftynet: stopped early, incomplete loops
INFO:niftynet: stopping sampling threads
INFO:niftynet: SegmentationApplication stopped (time in second 17.07).
Traceback (most recent call last):
File "/home/xchaosfailx1/.local/bin/net_segment", line 11, in <module>
sys.exit(main())
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/__init__.py", line 139, in main
app_driver.run_application()
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/application_driver.py", line 275, in run_application
self._inference_loop(session, loop_status)
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/application_driver.py", line 493, in _inference_loop
self._loop(iter_generator(itertools.count(), INFER), sess, loop_status)
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/application_driver.py", line 442, in _loop
iter_msg.current_iter_output[NETWORK_OUTPUT])
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/application/segmentation_application.py", line 390, in interpret_output
batch_output['window'], batch_output['location'])
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/windows_aggregator_resize.py", line 55, in decode_batch
self._save_current_image(window[batch_id, ...], resize_to_shape)
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/windows_aggregator_resize.py", line 82, in _save_current_image
[float(p) / float(d) for p, d in zip(window_shape, image_shape)]
File "/home/xchaosfailx1/.local/lib/python3.5/site-packages/niftynet/engine/windows_aggregator_resize.py", line 82, in <listcomp>
[float(p) / float(d) for p, d in zip(window_shape, image_shape)]
ZeroDivisionError: float division by zero
For reproducing the error:
changed the padding in niftynet.network.unet_2d.py from valid to same
dataset [Task2_Heart] : https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2
updated config:
https://drive.google.com/open?id=1RI111BZLv4Lhf9cGvHo_sAHRt_k5Xt0I
I didn't check the inference data, but I think spatial_window_size in [INFERENCE] should be (96, 96, 1), as that's what you set in training.
The mistake I made was that I set the border (96,96,0) under [INFERENCE] to the same shape as my spatial window (96,96,1) in the first two dimensions, so when the batch was cropped in decode_batch, the cropped image had an image shape with 0s in it. Hence, when the zoom ratio was calculated in _save_current_image, it led to a ZeroDivisionError. The temporary fix was to remove volume padding and change the border to (0,0,0).
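For reference, a sketch of the corrected [INFERENCE] section implied by those two answers (values follow the posters' descriptions and are not verified against the dataset):

[INFERENCE]
border = (0, 0, 0)
inference_iter = -1
output_interp_order = 0
spatial_window_size = (96, 96, 1)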
