Visualize Neural Net Layers in Torch, without itorch - lua

I have a neural net with several convolution layers and I'd like to visualize the feature maps I'm creating.
I've seen this post:
Visualize images in intermediate layers in torch (lua)
which suggests using itorch, but that requires running my code in an itorch notebook, which I would like to avoid.
Are there any other Torch packages which can be used to visualize convolution layers?
EDIT (with detailed solution):
Since I found so few resources online about how to go about doing this, I documented my full solution and put it on GitHub. Anyone who wants to visualize neural nets in Torch can just go here to get started!
https://github.com/egaebel/torch-neural-network-visualization
Much thanks again to YuTse for the gnuplot tip!

Use gnuplot. This works both in the itorch console (itorch) and in the plain Torch REPL (th):
require 'image';
a = image.lena();        -- built-in 3xHxW RGB test image
require 'gnuplot';
gnuplot.figure(1);
gnuplot.imagesc(a[1])    -- render a single channel (a 2D tensor) as a heatmap

Related

How to input images in rllib

I recently came across the RLlib library: https://docs.ray.io/en/latest/rllib/index.html.
It has amazing features for reinforcement learning, but unfortunately I couldn't find a way to use images as observations without flattening them (I basically want to use a convolutional neural network). Is there any way to feed image observations to models using the RLlib library?
RLlib is compatible with OpenAI Gym: you can create a custom environment (https://docs.ray.io/en/latest/rllib/rllib-env.html#configuring-environments) and declare a Box observation space for the image, as in https://stackoverflow.com/a/69602365/4994352. A minimal sketch is shown below.
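As a minimal sketch, assuming the classic gym API, such an environment could look like this (the class name, the 84x84x3 shape, and the reward logic are placeholders, not from the linked answer):
import gym
import numpy as np
from gym import spaces

class ImageObsEnv(gym.Env):
    def __init__(self, env_config=None):
        # HxWxC uint8 image observation; RLlib's default model catalog picks a conv model for image-like spaces
        self.observation_space = spaces.Box(low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)
        self.action_space = spaces.Discrete(2)

    def reset(self):
        return np.zeros(self.observation_space.shape, dtype=np.uint8)

    def step(self, action):
        obs = np.zeros(self.observation_space.shape, dtype=np.uint8)
        return obs, 0.0, True, {}  # obs, reward, done, info
The class should then be usable as the "env" entry of an RLlib trainer config, or registered via tune.register_env.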

How does MTCNN perform vs DLIB for face detection?

I saw MTCNN being recommended but haven't seen a direct comparison of DLIB and MTCNN.
I assume that since MTCNN uses a neural network it might work better for more use cases, but also have some surprisingly bad edge cases?
Has anyone done an analysis of error rate, performance under different conditions (GPU and CPU), and general eyeball observations of the two?
You can have a look at this excellent Kaggle notebook by timesler, which compares facenet-pytorch, DLIB, and MTCNN:
https://www.kaggle.com/timesler/comparison-of-face-detection-packages
"Each package is tested for its speed in detecting the faces in a set of 300 images (all frames from one video), with GPU support enabled. Detection is performed at 3 different resolutions.
Any one-off initialization steps, such as model instantiation, are performed prior to performance testing."
You can test it easily within deepface. My experiments show that MTCNN outperforms dlib.
#!pip install deepface
from deepface import DeepFace
# available detector backends; select one by index, e.g. backends[2] = 'dlib', backends[3] = 'mtcnn'
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
DeepFace.detectFace("img.jpg", detector_backend=backends[0])
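For a rough side-by-side check, a timing sketch along these lines can be used (this is my own illustration, not from the answer above; "img.jpg" is a placeholder path, and speed is only one part of the comparison):
import time
from deepface import DeepFace

for backend in ['dlib', 'mtcnn']:
    start = time.time()
    face = DeepFace.detectFace("img.jpg", detector_backend=backend)  # returns the cropped face as a numpy array
    print("%s: %.2fs, face shape %s" % (backend, time.time() - start, face.shape))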

Are there any references to Tensorflow MNIST example

Looking for scientific article references for the network architecture presented in the Deep MNIST for Experts tutorial (https://www.tensorflow.org/versions/r0.9/tutorials/mnist/pros/index.html).
I have similar image-processing data and I'm looking for a good vanilla architecture; any recommendations?
Currently the best solutions for this kind of problem are wavelet-transform-based approaches.
You probably don't want to look at Deep MNIST for Experts as an example of a good architecture for MNIST or as a scientific baseline. It's more an example of basic TensorFlow building blocks and a nice introduction to convolutional models.
That is, you should be able to get equal or better results with a model that has 5% of the free parameters and fewer layers; a compact baseline along those lines is sketched below.
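For example, here is a compact vanilla convnet sketch in Keras, assuming a current TensorFlow install (this is my own illustrative baseline, not the tutorial's architecture; the layer sizes are arbitrary):
import tensorflow as tf
from tensorflow.keras import layers, models

# small two-conv-layer network for 28x28 grayscale digits
model = models.Sequential([
    layers.Conv2D(16, 3, activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dim, scale to [0, 1]
x_test = x_test[..., None] / 255.0
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))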

Is there an implementation of Convolutional Neural Network available in OpenCV or similar?

Is there a Convolutional Neural Network implementation in OpenCV? Is it feasible to use it for image or video processing?
Yes and No.
There is no convnet library bundled directly into OpenCV; however, Caffe (one of the leading convolutional neural network packages) interoperates with it rather well.
If you install Caffe, one of its prerequisites is OpenCV, and you can then use OpenCV through Caffe's C++ or Python APIs. See the main Caffe website.
If you install a recent enough version of OpenCV, you can use the new opencv_dnn module to load and run Caffe models directly (a minimal sketch follows below). See the opencv_dnn tutorial.
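A minimal sketch of loading a Caffe model through the opencv_dnn Python bindings; the prototxt/caffemodel file names, input size, and mean values below are placeholders, not from the answer:
import cv2

net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")  # hypothetical model files
img = cv2.imread("input.jpg")
# preprocess into a 4D NCHW blob with the mean subtraction typical for ImageNet-trained models
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(224, 224), mean=(104, 117, 123))
net.setInput(blob)
out = net.forward()   # output shape depends on the network's final layer
print(out.shape)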

How do I create a custom haar classifier?

I am struggling to create a custom Haar classifier. I have found a couple of tutorials on the web, but they do not specify which version of OpenCV they are using. What I need is a very concise and simplified example of the required steps, along with a simple dataset of images. I also need to know the OpenCV version and the OS platform so I can get it running. I have tried a matrix of OpenCV versions on both Windows and Linux and have run into memory error after memory error. I would like to start with a known-good set of data and simple commands before expanding it to fit my problem.
Thanks for your help,
Chris
OpenCV provides two utility commands, createsamples.exe and haartraining.exe, which can generate the XML files used by Haar classifiers. That is, with the XML file output by haartraining.exe, you can plug your XML file straight into the face-detection sample to detect your own custom objects (a short usage sketch follows below).
For the detailed procedure for using the commands, you may consult pages 513-516 of the book "Learning OpenCV", or this tutorial.
For the internal mechanism of how the classifier works, you may consult the paper "Rapid Object Detection using a Boosted Cascade of Simple Features", which has been cited 5500+ times.
