CVAT annotation tool: including your own DL model

I work with CVAT, but I want to include my own trained model for automatic segmentation. When I tried to find out how to do this in the new version, I could not find any tutorial or explanation; the documentation does not cover this situation either.
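For what it's worth, recent CVAT versions integrate custom models as serverless functions (via nuclio) that CVAT calls frame by frame. Below is a minimal sketch of the shape such a handler takes; the model loading and prediction calls are placeholders for your own code, not a working integration.

    # main.py of a CVAT serverless (nuclio) function -- sketch only
    import base64
    import io
    import json

    from PIL import Image

    def init_context(context):
        # Runs once at startup: load your trained segmentation model here.
        context.user_data.model = None  # e.g. torch.load('model.pth')

    def handler(context, event):
        # CVAT posts each frame as a base64-encoded image in the request body.
        data = event.body
        image = Image.open(io.BytesIO(base64.b64decode(data['image'])))

        # results = run_inference(context.user_data.model, image)  # your code
        results = []  # list of {'label': ..., 'points': [...], 'type': 'polygon'}

        return context.Response(body=json.dumps(results),
                                content_type='application/json',
                                status_code=200)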

Related

Image annotation tool that supports annotation using an existing CNN

I have trained a YoloV4 CNN. It's pretty good already. I want more images as training data, but there is no point in manually annotating most of them because the CNN can do it for me; I could then review and correct any issues. Is there an image annotation tool/service that can do that? I'm currently using Supervisely. I also tried CVAT and VoTT, but couldn't find such a feature.
I created a Python project that generates a Supervisely project using darknet. It's available on GitHub:
https://github.com/s1n7ax/partially-annotate
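The general idea is simple: run the trained network over the unlabeled images and dump the detections as pre-annotations for review. A rough sketch of that loop using OpenCV's dnn module (the paths are examples, and the exact JSON schema Supervisely expects is documented on their side):

    import json
    import cv2

    # Load the trained YoloV4 with OpenCV's dnn module.
    net = cv2.dnn.readNetFromDarknet('yolov4.cfg', 'yolov4.weights')
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255)

    img = cv2.imread('unlabeled.jpg')
    class_ids, scores, boxes = model.detect(img, confThreshold=0.5)

    # Write the detections out as pre-annotations to be reviewed by hand.
    objects = [{'class_id': int(c), 'score': float(s),
                'bbox': [int(x), int(y), int(w), int(h)]}
               for c, s, (x, y, w, h)
               in zip(class_ids.flatten(), scores.flatten(), boxes)]
    with open('unlabeled.json', 'w') as f:
        json.dump({'objects': objects}, f, indent=2)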

How to "teach" TensorFlow to compute expected output?

So as a fun project, I've been messing with the TensorFlow API (in Java, unfortunately, but I should be able to get some results out anyway). My first goal is to develop a model for 2D point cloud filtering. I have written code that generates random clouds at 224x172 resolution, computes the result of a neighbor-density filter, and stores both.
So I have generated data for both the input and the expected output, and this can be repeated as much as needed for a massive dataset.
I have both the input and output stored as 224x172 binary arrays (0 for no point at an index, 1 for a point at that index), so my input and output are both 224x172. At this point, I'm not sure how to map my input to my expected result: how to weight each "pixel" of my cloud, or how to "teach" the program the expected result. Any suggestions or guidance on whether this is even possible for my scenario would be appreciated!
Please don't be too hard on me... I'm a complete noob when it comes to machine learning.
Imagine that TensorFlow is a set of building blocks (like LEGO) for constructing machine learning models. Once a model is constructed, it can be trained and evaluated.
So basically your question can be divided into three steps:
1. I'm new to machine learning. Please guide me in choosing a model that fits the task.
2. I'm new to TensorFlow. I have an idea of the model (see 1) and I want to construct it with TensorFlow.
3. I'm new to TensorFlow's Java API. I know how to build a model using TensorFlow (see 2), but I'm stuck in Java.
This sounds scary, but it's really not that bad. I'd suggest the following plan:
1. You need to look through machine learning models to find one that suits your case. So ask yourself: which models could be used for cloud filtering? And do you really need a machine learning model at all? Why not use simple math formulas?
Those are the questions to ask yourself.
OK, assume you've found some model. For example, you've found a paper describing a neural network able to solve your task. Then go to the next step.
2. TensorFlow has a lot of examples and code snippets, so you may even find code with your model already implemented.
The bad news is that most code examples and the API are Python-based. But since you want to get into machine learning anyway, I'd suggest studying Python. It's easy to pick up, and it's very common in the scientific world because it lets you avoid wasting time on wrappers, configuration, etc. (as Java requires); you just start solving your task from the first line of the script.
Returning to the TensorFlow-and-LEGO analogy: there are higher-level additions to TensorFlow, so you work not with individual building blocks but with whole layers of blocks. Something like tflearn. It's very good, especially if you don't have a deep math or machine learning background, because it lets you build machine learning models in a very simple and understandable way. Need to add a neural network layer? Here you are. And all of that without complex low-level tensor operations.
The disadvantage is that you won't be able to load a tflearn model from Java.
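For the 224x172 task above, a minimal tflearn sketch might look like this; the layer sizes, loss, and dummy data are placeholders for illustration, not a tuned design:

    import numpy as np
    import tflearn

    # Dummy stand-ins for the generated clouds: (n, 224, 172, 1) arrays of 0/1.
    X = np.random.randint(0, 2, size=(100, 224, 172, 1)).astype(np.float32)
    Y = X  # in reality: the filtered clouds corresponding to X

    net = tflearn.input_data(shape=[None, 224, 172, 1])
    net = tflearn.conv_2d(net, 16, 3, activation='relu')
    net = tflearn.conv_2d(net, 16, 3, activation='relu')
    # One sigmoid channel: per-pixel probability that the point survives.
    net = tflearn.conv_2d(net, 1, 3, activation='sigmoid')
    net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
                             loss='mean_square')

    model = tflearn.DNN(net)
    model.fit(X, Y, n_epoch=10, batch_size=16, validation_set=0.1)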
Anyway, by the end of this step you should be able to build your model, train it, and evaluate the model and its prediction quality.
3. So you have your machine learning model and you understand TensorFlow mechanics; if you still need to work with Java, that should now be much easier.
Note again that you won't be able to load a tflearn model from Java. You could try Jython to call Python functions directly from Java, though I haven't tried it.
Along the way (steps 1-3) you will definitely have more questions, so welcome to SO.

Should I implement a content-based recommender from scratch or use a machine learning library like Mahout?

I am new to Apache Mahout, but I read an article which said that Apache Mahout 1.0 provides content-based recommendation (http://mahout.apache.org/users/algorithms/intro-cooccurrence-spark.html). It turns out, however, that it does not; it gives recommendations based on different user actions on a website instead.
Amazon and Netflix might be using content-based recommenders, and they probably implemented them from scratch, but my question is:
Is there any machine learning library that provides content-based recommendation, or do I have to implement it myself?
By content-based recommendation I mean: there is a feature vector for each item and a behaviour vector for each user, and multiplying them gives a recommendation score for a particular user.
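For concreteness, a minimal NumPy sketch of that multiplication (the vectors are made up):

    import numpy as np

    # 4 items described by 3 content features (e.g. genre flags),
    # and one user's behaviour/preference vector over the same features.
    item_features = np.array([[1, 0, 1],
                              [0, 1, 1],
                              [1, 1, 0],
                              [0, 0, 1]])
    user_profile = np.array([0.9, 0.1, 0.5])

    # One score per item; higher means a better match for this user.
    scores = item_features @ user_profile
    print(scores)                    # [1.4 0.6 1.0 0.5]
    print(np.argsort(scores)[::-1])  # items ranked best-first: [0 2 1 3]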
Please recommend something to me.
Thanks in advance.

How do I create a custom Haar classifier?

I am struggling to create a custom Haar classifier. I have found a couple of tutorials on the web, but they do not specify which version of OpenCV they use. What I need is a very concise, simplified example of the required steps, along with a simple dataset of images. I also need to know the OpenCV version and OS platform so I can get it running. I have tried a matrix of OpenCV versions on both Windows and Linux and have run into memory error after memory error. I would like to start with a known-good set of data and simple commands before expanding to fit my problem.
Thanks for your help,
Chris
OpenCV provides two utility commands, createsamples.exe and haartraining.exe, which can generate the XML files used by Haar classifiers. That is, with the XML file output by haartraining.exe, you can directly use the face-detection sample with your own XML file to detect any customized object.
For the detailed procedure for using the commands, you may consult pages 513-516 of the book "Learning OpenCV", or this tutorial.
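Once training has produced the XML file, using it is the easy part. A minimal sketch from Python (the cascade and image file names are just examples):

    import cv2

    # 'my_cascade.xml' stands in for the file produced by haartraining.
    cascade = cv2.CascadeClassifier('my_cascade.xml')
    img = cv2.imread('test.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # The same call the bundled face-detection sample uses.
    objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in objects:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite('result.jpg', img)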
About the internal mechanism of how the classifier works, you may consult the paper "Rapid Object Detection using a Boosted Cascade of Simple Features", which has been cited 5500+ times.

How to use OpenCV for document recognition with OCR?

I'm a beginner in computer vision, but I know how to use some OpenCV functions. I'm trying to use OpenCV for document recognition, and I would like help finding the steps for it.
I'm thinking of using the OpenCV example find_obj.cpp, but documents such as passports have variable fields: name, birthdate, pictures. So I need help defining the steps and, if possible, which functions to use at each step.
I'm not asking for complete code, but an example link or a short walkthrough would be of great help.
There are two very different steps involved here. One is detecting your object, and the other is analyzing it.
For object detection, you're just trying to figure out whether the object is in the frame and approximately where it's located. The OpenCV features framework is great for this. For tutorials and comprehensive sample code, see the OpenCV features2d tutorials and especially the feature matching tutorial.
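As a rough sketch of that approach, here is ORB feature matching in Python (the template and scene paths are examples; the match-distance threshold is arbitrary):

    import cv2

    template = cv2.imread('passport_template.jpg', cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(template, None)
    kp2, des2 = orb.detectAndCompute(frame, None)

    # Brute-force Hamming matching suits ORB's binary descriptors.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

    # Many close matches suggests the document is present in the frame.
    good = [m for m in matches if m.distance < 50]
    print('good matches:', len(good))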
For analysis, you need to dig into optical character recognition (OCR). OpenCV does not include OCR libraries, but I recommend checking out tesseract-ocr, which is a great OCR library. If your documents have a fixed structure (a consistent layout of text fields), then tesseract-ocr is all you need. For more advanced analysis, check out ocropus, which uses tesseract-ocr but adds layout analysis.
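With a fixed layout, the analysis step can be as simple as cropping each field by known coordinates and handing it to the OCR engine. A minimal sketch using the pytesseract wrapper (the coordinates and path are made up):

    import cv2
    import pytesseract

    img = cv2.imread('passport.jpg')

    # Each field sits at known coordinates when the layout is fixed.
    name_region = img[100:140, 200:600]
    name_text = pytesseract.image_to_string(name_region)
    print('name:', name_text.strip())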
