How to input images in rllib - machine-learning

I recently came across the RLlib library: https://docs.ray.io/en/latest/rllib/index.html.
It has amazing features for reinforcement learning, but unfortunately I couldn't find a way to pass images as observations without flattening them (I basically want to use a convolutional neural network). Is there any way to feed image observations to models using the RLlib library?

RLlib is compatible with OpenAI's Gym: you can create a custom env (https://docs.ray.io/en/latest/rllib/rllib-env.html#configuring-environments) and return a Box as the observation space, as in https://stackoverflow.com/a/69602365/4994352.
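For concreteness, here is a minimal sketch of such an env (the class name, sizes and the classic gym API are illustrative assumptions; RLlib's default model catalog generally builds a convolutional net for 3D Box observations, though unusual shapes may require setting conv_filters in the model config):

    import gym
    import numpy as np
    from gym.spaces import Box, Discrete

    class ImageEnv(gym.Env):
        """Toy env that emits 84x84 RGB frames; all names are illustrative."""

        def __init__(self, config=None):
            # A 3D Box (height, width, channels) is what lets RLlib treat
            # the observation as an image instead of a flat vector.
            self.observation_space = Box(low=0, high=255,
                                         shape=(84, 84, 3), dtype=np.uint8)
            self.action_space = Discrete(2)

        def reset(self):
            return self.observation_space.sample()

        def step(self, action):
            obs = self.observation_space.sample()
            return obs, 0.0, False, {}  # obs, reward, done, info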

Related

What is the term for using a Neural Network to create new data based on training data?

I have a large set of training data consisting of various texts. They should be the input for my neural network. I have no output, or rather I don't know what to use as the output.
Anyway, after the learning phase I want the neural network to create new texts based on the training data.
I have read about this in the form of "I made a bot watch 1000 hours of X and asked it to write a new X".
Now my question is: what kind of machine learning is this? I am not looking for instructions on how to write it, just a hint on how to find some keywords or tutorials. My Google searches so far have been useless.
Your problem can usually be solved by an encoder-decoder architecture. This architecture learns a set of latent vectors from your input and then tries to produce output in whatever form you want. It can be built with RNNs, LSTMs or CNNs; nowadays, attention-based models like Transformers are more common among the big names. If you want to do text generation, you can also start by reading about Generative Adversarial Networks (GANs).
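For a concrete picture, a minimal sketch of an LSTM encoder-decoder in Keras (the vocabulary size and dimensions are illustrative assumptions, not recommendations):

    from tensorflow.keras import layers, Model

    vocab_size, latent_dim = 10000, 256  # illustrative sizes

    # Encoder: compress the input token sequence into latent state vectors.
    enc_in = layers.Input(shape=(None,))
    enc_emb = layers.Embedding(vocab_size, latent_dim)(enc_in)
    _, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(enc_emb)

    # Decoder: generate the output sequence conditioned on those states.
    dec_in = layers.Input(shape=(None,))
    dec_emb = layers.Embedding(vocab_size, latent_dim)(dec_in)
    dec_seq = layers.LSTM(latent_dim, return_sequences=True)(
        dec_emb, initial_state=[state_h, state_c])
    dec_out = layers.Dense(vocab_size, activation="softmax")(dec_seq)

    model = Model([enc_in, dec_in], dec_out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")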

Feeding image features to tensorflow for training

Is it possible to feed image features, say SIFT features, to a convolutional neural network model in TensorFlow? I am trying a TensorFlow implementation of this project, in which a grayscale image is colourized. Would image features be a better choice than feeding the images as-is to the model?
PS: I am a novice to machine learning and am not familiar with creating neural network models.
You can feed a TensorFlow neural net almost anything.
If you have extra features for each pixel, then instead of using one channel (intensity) you would use multiple channels.
If you have extra features that describe the whole image, you can make a separate input and merge the features at some upper layer.
As for which performs better, you should try both approaches.
The general intuition is that extra features help if you don't have many samples; their effect diminishes if you have many samples, since the network can then learn the features by itself.
One more point: if you are a novice, I strongly recommend using a higher-level framework like Keras (keras.io), which is a layer on top of TensorFlow, instead of raw TensorFlow.
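A hedged sketch of both ideas using the Keras functional API (all shapes and layer sizes are illustrative assumptions): extra per-pixel features stacked as channels, plus whole-image features merged at an upper layer:

    from tensorflow.keras import layers, Model

    # Grayscale image stacked with one extra per-pixel feature map -> 2 channels.
    img_in = layers.Input(shape=(64, 64, 2))
    x = layers.Conv2D(32, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)

    # Whole-image features (e.g. a SIFT descriptor histogram) as a second input.
    feat_in = layers.Input(shape=(128,))
    merged = layers.Concatenate()([x, feat_in])
    out = layers.Dense(3, activation="softmax")(merged)

    model = Model([img_in, feat_in], out)
    model.compile(optimizer="adam", loss="categorical_crossentropy")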

Using OpenCV to create a Neural network?

I am working on creating a real-time image processor for a self-driving small-scale car project for university. It uses a Raspberry Pi to gather various information to send to the program, on which to base a decision.
The only stage I have left is to create a neural network which will view the image from the camera (I already have the code to send the array of CV_32F values between 0-255, etc.).
I have been scouring the internet and cannot find any example code related to my specific issue, or to this kind of task in general (how to implement a neural network of this kind). So my question is: is it possible to create a NN of this size in C++ without hard-coding it (i.e. utilising OpenCV's capabilities)? It will need 400 input nodes, one per value (from a 20x20 image), and produce 4 outputs for left, right, forward or backwards respectively.
How would one create a neural network in OpenCV?
Does OpenCV provide a backpropagation (training) interface/function, or would I have to write this myself?
Once it is trained, am I correct in assuming I can load the neural network using ANN_MLP load etc., then pass the live stream frame (as an array of values) to it, and it should be able to produce the correct output?
Edit: I have found OpenCV image recognition - setting up ANN MLP. It is very simple in comparison to what I want to do, and I am not sure how to adapt it to my problem.
OpenCV is not a neural network framework, so you won't find any advanced features in it. It's far more common to use a dedicated ANN library and combine it with OpenCV. Caffe is a great choice as a computer-vision-focused deep learning framework (with a C++ API), and it can be combined with OpenCV.
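That said, OpenCV's ml module does ship a basic multilayer perceptron with built-in backpropagation training, which covers the train/save/load/predict cycle described in the question. A minimal sketch, using the Python bindings for brevity (the C++ cv::ml::ANN_MLP API is analogous; the hidden-layer size and training data here are dummy assumptions):

    import cv2
    import numpy as np

    # 400 inputs (flattened 20x20 image), one hidden layer, 4 outputs
    # (left / right / forward / backward). Sizes match the question.
    ann = cv2.ml.ANN_MLP_create()
    ann.setLayerSizes(np.array([400, 64, 4], dtype=np.int32))
    ann.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
    ann.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
    ann.setTermCriteria((cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                         1000, 1e-4))

    # Dummy training data for illustration only.
    samples = np.random.rand(100, 400).astype(np.float32)
    labels = np.zeros((100, 4), dtype=np.float32)
    labels[np.arange(100), np.random.randint(0, 4, 100)] = 1.0

    ann.train(samples, cv2.ml.ROW_SAMPLE, labels)
    ann.save("ann_mlp.xml")

    # Later: load the trained net and classify one live frame (1x400 row).
    loaded = cv2.ml.ANN_MLP_load("ann_mlp.xml")
    _, response = loaded.predict(samples[:1])
    print(response.argmax())  # index of the predicted action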

Text detection on images

I am a very new student of machine learning. I just wanted to ask: what are possible ways to improve a method (Naive Bayes, for example) to get better results when classifying images into text and non-text images, rather than just inputting a number of images and telling the system which have text and which do not?
Thanks in advance.
The state of the art for such problems is deep neural networks with several convolutional layers. See this article for an example of image classification using deep convolutional nets. Your problem (just determining whether an image has text or not) is much easier than the general image classification problem the authors consider, so you could probably get away with a much simpler network architecture.
Nowadays you don't need to implement these things yourself; there are efficient, GPU-accelerated implementations freely available, for instance Caffe, Torch7, Keras...
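For illustration, a minimal sketch of such a text/non-text convolutional classifier in Keras (the input size and layer widths are assumptions):

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(64, 64, 3)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(image contains text)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])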

Face recognition with a small number of samples

Can anyone advise me on a way to build an effective face classifier that is able to classify many different faces (~1000)?
I have only 1-5 examples of each face.
I know about the OpenCV face classifier, but it works badly for my task (many classes, few samples).
It works all right for classifying one face with a small number of samples, but I think that 1000 separate classifiers is not a good idea.
I have read a few articles about face recognition, but the methods in these articles require a lot of samples of each class to work.
PS: Sorry for my writing mistakes; English is not my native language.
Actually, to give you a proper answer, I'd need to know some details of your task and your data. Face recognition is a non-trivial problem, and there is no general solution for all sorts of image acquisition.
First of all, you should define how many sources of variation (pose, emotions, illumination, occlusions, time lapse) you have in your sample and testing sets. Then you should choose an appropriate algorithm and, very importantly, preprocessing steps suited to those types.
If you don't have any significant variations, then for a small training set it is a good idea to consider one of the Discrete Orthogonal Moments as a feature extraction method. They have a very strong ability to extract features without redundancy. Some of them (Hahn and Racah moments) can also work in two modes: local and global feature extraction. The topic is relatively new, and there are still few articles about it, although these moments are thought to become a very powerful tool in image recognition. They can be computed in near real time using recurrence relationships. For more information, have a look here and here.
If the pose of the individuals varies significantly, you may first try pose correction with an Active Appearance Model.
If there are lots of occlusions (glasses, hats) then using one of the local feature extractors may help.
If there is a significant time lapse between the training and probe images, local facial features can change with age; in that case it's a good option to try one of the algorithms that use graphs for face representation, so as to preserve the face topology.
I believe that none of the above are implemented in OpenCV, but for some of them you can find MATLAB implementations.
I'm not a native speaker either, so sorry for the grammar.
Coming to your problem: it is unique in its own way. Since, as you said, there are only a few images per class, the model you train should either have an excellent architecture that can extract good features from within an image itself, or you need a different approach that can achieve this task.
I have four suggestions I can share as of now:
Do data pre-processing to create a bigger dataset, and then ideally train a neural network on it. Here, we can apply transformations like:
- image rotation
- image shearing
- image scaling
- image blurring
- image stretching
- image translation
and create at least 200 images per class; a minimal sketch of this kind of augmentation is shown below. Please check out the OpenCV documentation, which provides many more methods for increasing the size of your dataset.
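A hedged sketch using Keras' ImageDataGenerator (the parameter values, image size and dummy data are illustrative assumptions):

    import numpy as np
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Covers rotation, shearing, scaling/stretching and translation from the
    # list above; blurring would need a custom preprocessing_function.
    datagen = ImageDataGenerator(
        rotation_range=20,
        shear_range=0.2,
        zoom_range=0.2,
        width_shift_range=0.1,
        height_shift_range=0.1,
    )

    faces = np.random.rand(5, 96, 96, 3)  # dummy stand-in for one class's images
    augmented = []
    for batch in datagen.flow(faces, batch_size=5, shuffle=False):
        augmented.extend(batch)
        if len(augmented) >= 200:  # stop once we have ~200 images for the class
            break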
Once you have enough data, you can apply transfer learning, which is a better approach than training a neural network from scratch. Transfer learning is a method where we take a network that is already pre-trained on thousands of classes and train it on our own custom classes. Since our data here is very limited, I would prefer transfer learning. I have written a blog on how you can approach this using transfer learning once you have the required amount of data; it is linked here. Face recognition is itself a classification task, where each person is a separate class, so follow the instructions given in the blog; it may help you create your own powerful classifier.
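A minimal transfer-learning sketch in Keras (the MobileNetV2 backbone, input size and class count are illustrative assumptions, not necessarily what the blog uses):

    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import MobileNetV2

    n_people = 1000  # one class per person; placeholder count

    # Backbone pre-trained on ImageNet; frozen so only the new head trains.
    base = MobileNetV2(include_top=False, weights="imagenet",
                       input_shape=(96, 96, 3), pooling="avg")
    base.trainable = False

    inp = layers.Input(shape=(96, 96, 3))
    out = layers.Dense(n_people, activation="softmax")(base(inp))
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")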
Another suggestion: after creating a dataset, encode the images properly. Such an encoding helps preserve the features in an image and can help you train better networks. VLAD, Fisher vectors and Bag of Words are a few encoding techniques. You can find repositories online that have already implemented these on the ORL database. Once you encode the images and train the network on the encodings, you will see noticeably better performance.
Also check out Siamese networks, which I feel are meant for exactly this purpose. They compare two images with a pair of networks sharing the same weights, and thereby achieve better classification accuracy with few samples per class. The Git repository is here.
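A minimal sketch of the Siamese idea in Keras (all sizes are illustrative assumptions): twin encoders with shared weights whose embeddings are compared by L1 distance:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def encoder():
        inp = layers.Input(shape=(96, 96, 1))
        x = layers.Conv2D(32, 3, activation="relu")(inp)
        x = layers.MaxPooling2D()(x)
        x = layers.Flatten()(x)
        x = layers.Dense(128, activation="relu")(x)
        return Model(inp, x)

    base = encoder()  # shared weights: the same model encodes both inputs
    a_in = layers.Input(shape=(96, 96, 1))
    b_in = layers.Input(shape=(96, 96, 1))
    emb_a, emb_b = base(a_in), base(b_in)

    # L1 distance between embeddings -> probability the two faces match.
    dist = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
    out = layers.Dense(1, activation="sigmoid")(dist)

    siamese = Model([a_in, b_in], out)
    siamese.compile(optimizer="adam", loss="binary_crossentropy")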
Another standard approach would be to use SVMs or random forests, since the data is limited. If you still prefer neural networks, the above methods should serve the purpose. If you intend to go with encodings, then I would suggest random forests, as they learn well on small data and are flexible too.
Hopefully this answer helps you proceed in the right direction.
You might want to take a look at OpenFace, a Python and Torch implementation of face recognition with deep neural networks: https://cmusatyalab.github.io/openface/
