On replacing the LJ-Speech dataset with your own - machine-learning

In most GitHub repositories for machine-learning-based text-to-speech, the LJ-Speech dataset is the one being used and optimized for.
Having unsuccessfully tried to use my own WAV files with one of them, I am interested in the right approach to preparing my own dataset so that a framework optimized for LJ-Speech can be adapted to it.

With Mozilla TTS, you can have a look at the LJ-Speech script used to prepare the data to get an idea of what is needed for your own dataset:
https://github.com/erogol/TTS_recipes/blob/master/LJSpeech/DoubleDecoderConsistency/train_model.sh
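For orientation, the layout that LJ-Speech-style loaders expect is a wavs/ folder of audio clips plus a metadata.csv file with pipe-separated fields (clip id, raw transcription, normalized transcription). A rough sketch of arranging your own recordings into that layout could look like this; the input paths are placeholders and this is not part of the linked recipe:

    # Sketch: arrange custom recordings into an LJ-Speech-style layout.
    # The (wav_path, transcription) pairs below are hypothetical inputs.
    import csv
    import shutil
    from pathlib import Path

    recordings = [
        ("/data/my_voice/clip_0001.wav", "Hello world."),
        ("/data/my_voice/clip_0002.wav", "A second sentence."),
    ]

    out_dir = Path("MyDataset")
    (out_dir / "wavs").mkdir(parents=True, exist_ok=True)

    with open(out_dir / "metadata.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="|")
        for i, (wav_path, text) in enumerate(recordings, start=1):
            clip_id = f"MY{i:04d}"
            shutil.copy(wav_path, out_dir / "wavs" / f"{clip_id}.wav")
            # LJ-Speech stores (id | raw transcription | normalized transcription);
            # here the raw text is reused as the normalized field.
            writer.writerow([clip_id, text, text])

LJ-Speech clips are also 22.05 kHz mono WAV files, so you may need to resample your recordings (for example with sox or librosa) to match whatever sample rate the training config expects.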

Related

Tensorflow Object Detection API

I decided to take a dip into ML and, with a lot of trial and error, was able to create a model using TF's Inception.
To take this a step further, I want to use their Object Detection API. But their input preparation instructions reference the Pascal VOC 2012 dataset, and I want to do the training on my own dataset.
Does this mean I need to set up my dataset in either the Pascal VOC or Oxford-IIIT format? If yes, how do I go about doing this?
If no (my instinct says this is the case), what are the alternatives for using TF object detection with my own datasets?
Side note: I know that my trained Inception model can't be used for localization because it's a classifier.
Edit:
For those still looking to achieve this, here is how I went about doing it.
The training jobs in the TensorFlow Object Detection API expect TFRecord files with certain fields populated with ground-truth data.
You can either set up your data in the same format as the Pascal VOC or Oxford-IIIT examples, or you can just directly create the TFRecord files ignoring the XML formats.
In the latter case, the create_pet_tf_record.py or create_pascal_tf_record.py scripts are likely to still be useful as a reference for which fields the API expects to see and what format they should take. Currently we do not provide a tool that creates these TFRecord files generally, so you will have to write your own.
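As a rough illustration of what such a conversion script produces (the field names follow the sample scripts shipped with the API; the image file, box coordinates, and label id below are made-up placeholders), a single example can be written like this:

    # Sketch: write one detection example to a TFRecord file.
    # Field names follow the Object Detection API sample scripts; the image,
    # boxes, and labels below are hypothetical placeholders.
    import tensorflow as tf

    def bytes_feature(value):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

    def float_list_feature(values):
        return tf.train.Feature(float_list=tf.train.FloatList(value=values))

    def int64_list_feature(values):
        return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

    with open("image_0001.jpg", "rb") as f:   # hypothetical image
        encoded_jpg = f.read()

    height, width = 480, 640
    # Box coordinates are normalized to [0, 1].
    xmins, xmaxs = [100 / width], [300 / width]
    ymins, ymaxs = [50 / height], [200 / height]
    class_names, class_ids = [b"dog"], [1]    # ids must match your label map

    example = tf.train.Example(features=tf.train.Features(feature={
        "image/height": int64_list_feature([height]),
        "image/width": int64_list_feature([width]),
        "image/filename": bytes_feature(b"image_0001.jpg"),
        "image/source_id": bytes_feature(b"image_0001.jpg"),
        "image/encoded": bytes_feature(encoded_jpg),
        "image/format": bytes_feature(b"jpeg"),
        "image/object/bbox/xmin": float_list_feature(xmins),
        "image/object/bbox/xmax": float_list_feature(xmaxs),
        "image/object/bbox/ymin": float_list_feature(ymins),
        "image/object/bbox/ymax": float_list_feature(ymaxs),
        "image/object/class/text": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=class_names)),
        "image/object/class/label": int64_list_feature(class_ids),
    }))

    with tf.io.TFRecordWriter("train.record") as writer:
        writer.write(example.SerializeToString())

The class ids have to stay consistent with the label map you point the training config at, and for a real dataset you would loop this over all images and typically shard the output into several record files.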
Besides the TF Object Detection API, you may look at OpenCV Haar cascades. I started my object detection work from that point, and if you provide a well-prepared dataset it works quite well.
There are also many articles and tutorials about creating your own cascades, so it's easy to get started.
I was using this blog; it helped me a lot.
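For a sense of how a trained cascade is then used, here is a minimal sketch with OpenCV's Python bindings; the cascade XML and test image are placeholders, and training the cascade itself is done separately with OpenCV's cascade-training tools:

    # Sketch: run a trained Haar cascade over an image with OpenCV.
    # "my_cascade.xml" and "test.jpg" are hypothetical placeholders.
    import cv2

    cascade = cv2.CascadeClassifier("my_cascade.xml")
    image = cv2.imread("test.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # scaleFactor and minNeighbors usually need tuning for your own data.
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in detections:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("detections.jpg", image)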

How to "teach" TensorFlow to compute expected output?

So as a fun project, I've been messing with the TensorFlow API (in Java, unfortunately... but I should be able to get some results out anyway). My first goal is to develop a model for 2D point cloud filtering. So I have written code that generates random clouds at 224x172 resolution, computes the result of a neighbor-density filter, and stores both (see images below).
So basically I have generated data for both an input and expected output, which can be done as much as needed for a massive dataset.
I have both the input and output stored as 224x172 binary arrays (0 for no point at an index, 1 for a point at that index), so my input and output are both 224x172. At this point, I'm not sure how to translate my input into my expected result. I'm not sure how to weight each "pixel" of my cloud, or how to "teach" the program the expected result. Any suggestions/guidance on whether this is even possible for my given scenario would be appreciated!
Please don't be too hard on me... I'm a complete noob when it comes to machine learning.
Imagine that TensorFlow is a set of building blocks (like LEGO) that allows you to construct machine learning models. After the model is constructed, it can be trained and evaluated.
So basically your question could be divided into three steps:
1. I'm new to machine learning. Please guide me in choosing a model that fits the task.
2. I'm new to TensorFlow. I have an idea of the model (see 1) and I want to construct it with TensorFlow.
3. I'm new to TensorFlow's Java API. I know how to build the model using TensorFlow (2), but I'm stuck on the Java side.
This sounds scary, but it's really not too bad. I'd suggest the following plan:
1. You need to look through machine learning models to find one that suits your case. So you need to ask yourself: what models could be used for cloud filtering? And, more basically, do you really need a machine learning model at all? Why not use simple math formulas?
Those are the questions to ask yourself.
OK, assume you've found some model. For example, you've found a paper describing a neural network able to solve your task. Then go to the next step.
2. TensorFlow has a lot of examples and code snippets, so you may even find code with your model already implemented.
The bad news is that most code examples and the API are Python-based. But since you want to get into machine learning, I'd suggest studying Python. It's easy to pick up and very common in the scientific world, because it lets you avoid wasting time on wrappers, configuration, etc. (as Java requires); you just start solving the task from the first line of the script.
Picking up the TensorFlow-and-LEGO analogy from the start, I'd add that there are higher-level additions to TensorFlow, so that you work not with individual building blocks but with whole layers of blocks.
Something like tflearn. It's very good, especially if you don't have a deep math or machine learning background. It lets you build machine learning models in a very simple and understandable way: need another neural network layer? Here you are. And all without complex low-level tensor operations.
The disadvantage is that you won't be able to load a tflearn model from Java.
Anyway, assume that by the end of this step you are able to build your model, train it, and evaluate the model and its prediction quality.
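The paragraphs above mention tflearn; purely as an illustration of what the end of step 2 could look like for the 224x172 binary-grid task from the question, here is a hedged sketch using the Keras API that ships with TensorFlow (the architecture and hyperparameters are guesses, not a recommendation):

    # Sketch: a small image-to-image model mapping a 224x172 binary grid
    # (raw point cloud) to a 224x172 binary grid (filtered cloud).
    # Architecture and hyperparameters are illustrative guesses.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                               input_shape=(224, 172, 1)),
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        # Sigmoid output: per-pixel probability that a point survives the filter.
        tf.keras.layers.Conv2D(1, 1, padding="same", activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Placeholder data standing in for the generated clouds and filter results.
    x_train = np.random.randint(0, 2, size=(100, 224, 172, 1)).astype("float32")
    y_train = np.random.randint(0, 2, size=(100, 224, 172, 1)).astype("float32")

    model.fit(x_train, y_train, epochs=5, batch_size=8)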
3. So you have your machine learning model and you understand TensorFlow's mechanics; if you still need to work from Java, that should now be much easier.
Note that you won't be able to load a tflearn model from Java. You could try using Jython to call Python functions directly from Java, though I haven't tried it.
Along the way (1-3) you will definitely have more questions. So welcome to SO.

Annotated images classification

I've got a bunch of images (~3000) which have been manually classified (approved/rejected) based on some business criteria. I've processed these images with Google Cloud Platform, obtaining annotations and SafeSearch results, for example (CSV format):
file name; approved/rejected; adult; spoof; medical; violence; annotations
A.jpg;approved;VERY_UNLIKELY;VERY_UNLIKELY;VERY_UNLIKELY;UNLIKELY;boat|0.9,vehicle|0.8
B.jpg;rejected;VERY_UNLIKELY;VERY_UNLIKELY;VERY_UNLIKELY;UNLIKELY;text|0.9,font|0.8
I want to use machine learning to be able to predict if a new image should be approved or rejected (second column in the csv file).
Which algorithm should I use?
How should I format the data, especially the annotations column? Should I first collect all the available annotation types and use each one as a numerical feature (0 if it doesn't apply)? Or would it be better to just process the annotation column as text?
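For concreteness, here is a hedged sketch of the first option (one numeric feature per annotation type, filled with the annotation confidence and 0 when absent); the file name is a placeholder and the parsing follows the CSV sample above:

    # Sketch: turn the "annotations" column (e.g. "boat|0.9,vehicle|0.8")
    # into one numeric feature per annotation type. The column layout follows
    # the CSV sample above; "images.csv" is a hypothetical file name.
    import csv

    rows = []
    with open("images.csv", newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter=";")
        next(reader)  # skip the header row
        for row in reader:
            name, label, adult, spoof, medical, violence, annotations = row
            # The SafeSearch columns (adult, spoof, ...) could also be mapped
            # to ordinal numbers and appended as extra features.
            scores = {}
            for item in annotations.split(","):
                tag, score = item.split("|")
                scores[tag] = float(score)
            rows.append((label, scores))

    # Collect the full vocabulary of annotation types, then build dense vectors.
    vocab = sorted({tag for _, scores in rows for tag in scores})
    features = [[scores.get(tag, 0.0) for tag in vocab] for _, scores in rows]
    labels = [label for label, _ in rows]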
I would suggest you try convolutional neural networks.
Maybe the fastest way to test whether your idea will work (a possible problem is the number of images you have, which is quite low) is to use transfer learning with TensorFlow. There are great tutorials by Magnus Erik Hvass Pedersen, published on YouTube.
I suggest you go through all the videos, but the important ones are #7 and #8.
Transfer learning lets you reuse the models built at Google for classifying images, but trained on your own data with your own labels.
Using this approach you will be able to see if this is suitable for your problem. Then you can dive into convolutional neural networks and create the pipeline that will work the best for your problem.
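As a very rough sketch of the transfer-learning idea described above, using the Keras API bundled with TensorFlow (the linked tutorials use a different workflow, and the base model, image size, and directory layout here are assumptions):

    # Sketch: transfer learning for a binary approved/rejected classifier.
    # Assumes images are sorted into train/approved and train/rejected folders.
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "train", image_size=(224, 224), batch_size=32)

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # keep the pretrained features frozen at first

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 preprocessing
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # approved vs. rejected
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)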

How to process XML files using Rapidminer for classification

I am new to RapidMiner. I have many XML files and I want to classify these files manually based on keywords. Then I would like to train a classifier like Naive Bayes or SVM on this data and evaluate its performance using cross-validation.
Could you please let me know different steps for this?
Do I need to use text-processing steps like tokenization, TF-IDF, etc.?
The steps would go something like this:
Loop over files - i.e. iterate over all files in a folder and read each one in turn.
For each file:
Read it in as a document.
Tokenize it using operators like Extract Information or Cut Document with suitable XPath queries, to output a row corresponding to the information extracted from the document.
Create a document vector from all the rows. This is where TF-IDF or other approaches come in; the choice depends on the problem at hand, with TF-IDF being the usual choice when it is important to give more weight to tokens that appear often in a relatively small number of the documents.
Build the model and use cross validation to get an estimate of the performance on unseen data.
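RapidMiner builds this pipeline out of operators rather than code, but as a language-agnostic illustration of the same steps (read XML, extract text with XPath, build TF-IDF vectors, cross-validate a classifier), a hedged Python sketch might look like this; the file paths, XPath expression, and labelling rule are placeholders:

    # Sketch of the equivalent pipeline in Python, purely to illustrate the
    # steps above (RapidMiner does this with operators, not code).
    import glob
    from lxml import etree
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def assign_label_by_keywords(text):
        # Placeholder labelling rule; replace with your own keyword criteria.
        return "sport" if "football" in text.lower() else "other"

    texts, labels = [], []
    for path in glob.glob("xml_files/*.xml"):          # loop over files
        tree = etree.parse(path)
        text = " ".join(tree.xpath("//body//text()"))  # XPath extraction
        texts.append(text)
        labels.append(assign_label_by_keywords(text))

    # Document vectors (TF-IDF) feeding a Naive Bayes classifier,
    # evaluated with cross-validation.
    pipeline = make_pipeline(TfidfVectorizer(), MultinomialNB())
    scores = cross_val_score(pipeline, texts, labels, cv=10)
    print(scores.mean())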
I have included a link to a process that you could use as the basis for this. It reads the RapidMiner repository, which contains XML files, so it is a good example of processing XML documents with text-processing techniques. Obviously, you would have to make some significant modifications for your case.
Hope it helps.
Probably it is too late to reply, but it could help other people. There is an extension called the Text Mining extension (I am using version 6.1.0); go to RapidMiner > Help > Update and install it. It can read all the files from one directory and provides various text mining operators that you can use.
Also, I found this tutorial video, which could be of some help to you as well:
https://www.youtube.com/watch?v=oXrUz5CWM4E

CRF++ or CRFSuite

I'm starting to work with CRF++ and CRFsuite (both use a very similar file format). I want to do things related to images (segmentation, activity recognition, etc.). My main problem is how to build the training file. Has anybody worked with CRFs and images? Could anybody explain it to me or share an example file to learn from?
Thanks in advance.
CRFsuite is faster than CRF++ and it can deal with huge training data. I tried both of them. They work perfectly on a reasonable amount of data, but when my dataset grew to more than 100,000 sentences, CRF++ could not cope and suddenly stopped working.
Look at the following link:
CRFsuite - CRF Benchmark test
There is a comparison between many CRF packages on several criteria.
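If you go with CRFsuite, a minimal sketch of how training data is fed in through its python-crfsuite bindings might look like the following; the features, labels, and file names are made-up, and note that CRFsuite models linear chains, so image data has to be linearised first (for example, rows of pixels or sequences of superpixels):

    # Sketch: training a CRFsuite model through the python-crfsuite bindings.
    # The features and labels here are placeholders; for images each item
    # would typically be a pixel/superpixel with colour or texture features.
    import pycrfsuite

    # One training sequence: a list of per-item feature lists and a label list.
    xseq = [["mean_intensity=0.8", "edge=1"],
            ["mean_intensity=0.2", "edge=0"],
            ["mean_intensity=0.3", "edge=0"]]
    yseq = ["foreground", "background", "background"]

    trainer = pycrfsuite.Trainer(verbose=False)
    trainer.append(xseq, yseq)          # call once per training sequence
    trainer.set_params({"c1": 1.0, "c2": 1e-3, "max_iterations": 50})
    trainer.train("segmentation.crfsuite")

    tagger = pycrfsuite.Tagger()
    tagger.open("segmentation.crfsuite")
    print(tagger.tag(xseq))             # predicted labels for a sequence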
I used CRF++ before and it worked very well.
But my field is natural language processing, and I use CRF++ for named entity recognition or POS tagging. CRF++ is easy to install on Linux but has some minor issues when compiling on Windows.
You can just follow its documentation for the training data format: each row represents one item of a sequence, the columns hold its features, and the last column holds the label.
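For reference, a CRF++ training file typically looks like the snippet below: one item per line, whitespace-separated feature columns, the gold label in the last column, and a blank line separating sequences (the tokens and tags are an NLP-style illustration, since that is what the documentation shows):

    He         PRP  B-NP
    reckons    VBZ  B-VP
    the        DT   B-NP
    current    JJ   I-NP
    account    NN   I-NP

    Confidence NN   B-NP
    in         IN   B-PP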
Or, you can also consider Mallet, which has a CRF component.
Probably you should start with the DGM library (https://github.com/Project-10/DGM), which is the best choice for those who have never worked with CRFs before. It includes a number of ready-to-go demo projects which will classify/segment your images out of the box. It is also well documented.
I have just come across this one for Windows:
http://crfsharp.codeplex.com/
Maybe you also want to try the CRF component in the Mallet package.
