Tiny YOLOv3 (Darknet) trains "too quickly" and produces different output

I am pretty new to YOLO/Darknet and have been going in circles looking for a solution. I have looked at the GitHub and Stack Exchange pages covering similar issues, but none seems to directly address this output issue (i.e. the region IOU line is missing). Here is my output (training/testing):
Here is my directory structure:
Other details:
I am using the AlexeyAB fork.
6 classes in total (following this convention of annotating occluded and truncated items, so two "items" with three classes each)
I'm using 200+ training images (definitely too few, but I don't know if this is the root cause of my troubles).
There is no predictions.png, just predictions.jpg. However, I don't think this should be an issue.
I followed this tutorial.
Any help is very much appreciated; thank you in advance!

If training finishes too soon, try adding -clear 1 at the end of your training command.
EDIT:
This is the correct answer (hence why I accepted it), but it lacks an explanation. The -clear 1 flag, according to this answer, clears past training statistics: the weights file records how many images the network has already seen, so resuming from such weights can make Darknet think it has already reached max_batches and stop almost immediately.
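For context, a minimal example of where the flag goes on the command line (the .data, .cfg, and pretrained-weights file names are placeholders, not taken from the question):

./darknet detector train obj.data yolov3-tiny-obj.cfg yolov3-tiny.conv.15 -clear 1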

Related

Kaggle: TrackML Particle Tracking Challenge

I'm new to ML and Kaggle. I was going through the solution of a Kaggle Challenge.
Challenge: https://www.kaggle.com/c/trackml-particle-identification
Solution: https://www.kaggle.com/outrunner/trackml-2-solution-example
While going through the code, I noticed that the author has used only the train_1 file (not train_2, 3, …).
I know there is some strategy behind using only the train_1 file. Can someone please explain why that is? Also, what is the use of the blacklist_training.zip, train_sample.zip, and detectors.zip files?
I'm one of the organisers of the challenge. The train_1, 2, 3, … files are all equivalent. Outrunner has probably seen that there was no improvement from using more data.
train_sample.zip is a small dataset, equivalent to train_1, 2, 3, …, provided for convenience.
blacklist_training.zip is a list of particles to be ignored due to a small bug in the simulator (not very important).
detectors.zip is the list of the geometrical surfaces where the x y z measurements are made.
David
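For illustration, a minimal sketch of reading one event from an extracted train_1 folder, assuming the official trackml helper library from the challenge organisers (the event id is an arbitrary placeholder):

# Load one event's hits/cells/particles/truth tables; requires the
# "trackml" helper package provided with the challenge.
from trackml.dataset import load_event

hits, cells, particles, truth = load_event('train_1/event000001000')
# The x, y, z columns of `hits` are the measurements made on the
# detector surfaces described by detectors.zip.
print(hits[['x', 'y', 'z']].head())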

encogmodel selectmethod configuration

Could someone point me to examples of how to configure EncogModel with selectMethod? This is an overloaded method: the first overload takes just a dataset and a method type as inputs. The second, however, allows the following:
dataset
methodType
methodArgs
trainingType
trainingArgs
I am unable to get this working, as the following error appears: "Layer can't have zero neurons, Unknown architecture element:". Any help is appreciated; thank you.
Also, could someone give some insight on how to dump the weights in this approach? When the model is built by constructing the network directly (BasicNetwork), it is possible to dump the weights via the network.flat approach. In this EncogModel-driven approach, how do we dump the weights, gradients, etc.? Thank you.
There are three examples for EncogModel; you can find them here:
If that does not help, let me know more specifically what you are trying to do, or provide some code that is not working, and I will update this to a more specific answer.
The weights can be accessed directly with BasicNetwork.dumpWeights, BasicNetwork.dumpWeightsVerbose(), or, more directly, with BasicNetwork.getWeight.
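For illustration, a hedged Java sketch (Encog 3.x) of the five-argument selectMethod overload, followed by dumping the weights; the CSV file, column names, and architecture string are illustrative placeholders, not taken from the question:

import java.io.File;
import org.encog.ml.MLRegression;
import org.encog.ml.data.versatile.VersatileMLDataSet;
import org.encog.ml.data.versatile.columns.ColumnDefinition;
import org.encog.ml.data.versatile.columns.ColumnType;
import org.encog.ml.data.versatile.sources.CSVDataSource;
import org.encog.ml.factory.MLMethodFactory;
import org.encog.ml.factory.MLTrainFactory;
import org.encog.ml.model.EncogModel;
import org.encog.neural.networks.BasicNetwork;
import org.encog.util.csv.CSVFormat;

public class EncogModelSketch {
    public static void main(String[] args) {
        // Illustrative CSV: two continuous inputs, one continuous target.
        VersatileMLDataSet data = new VersatileMLDataSet(
                new CSVDataSource(new File("mydata.csv"), true, CSVFormat.DECIMAL_POINT));
        data.defineSourceColumn("x1", 0, ColumnType.continuous);
        data.defineSourceColumn("x2", 1, ColumnType.continuous);
        ColumnDefinition target = data.defineSourceColumn("y", 2, ColumnType.continuous);
        data.analyze();
        data.defineSingleOutputOthersInput(target);

        EncogModel model = new EncogModel(data);
        // The overload in question: dataset, methodType, methodArgs,
        // trainingType, trainingArgs. The "?" placeholders in the
        // architecture string are resolved from the dataset; a malformed
        // architecture string is one way to get "Layer can't have zero neurons".
        model.selectMethod(data, MLMethodFactory.TYPE_FEEDFORWARD,
                "?:B->SIGMOID->4:B->SIGMOID->?",
                MLTrainFactory.TYPE_RPROP, "");

        data.normalize();
        model.holdBackValidation(0.3, true, 1001);
        MLRegression best = (MLRegression) model.crossvalidate(5, true);

        // Dumping the trained weights, as described above.
        BasicNetwork network = (BasicNetwork) best;
        System.out.println(network.dumpWeights());
    }
}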

error detection on food packaging using OpenCV

I am trying to determine whether a food packaging has an error or not. For example,
whether the "McDonald's" logo has misprints, such as the wrong label or the wrong color (I cannot post a picture).
What should I do? Please help me!
It's not a trivial task by any stretch of the imagination. Two images of the same identical object will always be different according to lighting conditions, perspective, shooting angle, etc.
Basically you need to:
1. Process the two images into "digested" data - dominant color, shapes, etc.
2. Design and run your own similarity algorithm between the two objects
You may want to look at feature detectors in OpenCV: SURF, SIFT, etc.
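For illustration, a minimal Python sketch of the two steps above using SIFT feature matching (assumes OpenCV >= 4.4 for cv2.SIFT_create; the file names are placeholders):

# Compare a reference logo against a production photo using SIFT
# features and Lowe's ratio test.
import cv2

reference = cv2.imread('reference_logo.png', cv2.IMREAD_GRAYSCALE)
candidate = cv2.imread('production_photo.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(reference, None)
kp2, des2 = sift.detectAndCompute(candidate, None)

# Keep only matches that are clearly better than the runner-up.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# A crude similarity score: the fraction of reference features recovered.
# A low score may indicate a misprint (wrong label, wrong color, ...).
score = len(good) / max(len(kp1), 1)
print('similarity score:', score)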
I just found your question, so I think I come too late.
If not, I think your problem can easily be resolved; a tool for this has existed for years and is called Sikuli.
While it is meant for testing purposes, I have been using it in the same way you need: comparing a reference and a production image. Based on OpenCV, it does this very well.

Conjoint analysis based on an orthogonal design

I'm having some issues regarding a conjoint analysis. Excuse me if some of the terms I use are wrong, but it has been some time since I last worked with SPSS - and my teacher was Danish.
Task objective
I am to make a series of concept travel packages (attributes and attribute notes/levels).
So far I've got things under control - I've reduced the number of packages from 81 to 9, with the help of an orthogonal design.
These 9 packages have been rated by some people (1-10), on a questionnaire.
Then I've been asked to write a syntax which evaluates my conjoint plan:
CONJOINT PLAN= 'C:\Users\MYNAME\DROBBOXFOLDER\Conjoint_cards.SAV'
/DATA='C:\Users\MYNAME\DROBBOXFOLDER\Respondents.SAV'
/SCORE=Card_1 TO Card_9
/SUBJECT=ID
/FACTORS= SMS Minutter Data Tryghed
/PRINT=ALL
/PLOT=ALL.
However I keep getting this error:
SUBJECT SUBCOMMAND -- Subject variable is not on data file.
Execution of this command stops.
At this point I've been to the dark pages of Google and back for an answer to what I am doing wrong, but have found nothing so far. The answer is probably staring me in the face, but I will appreciate any help or pointers.
Problem solved:
So apparently one shouldn't follow a guide to the letter. My data file didn't contain an ID variable, so removing the /SUBJECT=ID line from my syntax solved the problem.
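For reference, the working syntax is then the same as above with the /SUBJECT subcommand dropped:

CONJOINT PLAN= 'C:\Users\MYNAME\DROBBOXFOLDER\Conjoint_cards.SAV'
/DATA='C:\Users\MYNAME\DROBBOXFOLDER\Respondents.SAV'
/SCORE=Card_1 TO Card_9
/FACTORS= SMS Minutter Data Tryghed
/PRINT=ALL
/PLOT=ALL.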

upper bound - display

This is an idea that came to my mind:
All display devices (screens with pixels, etc.) have an upper bound on the number of distinct images they can generate.
For example, a 1024*768 display with 32-bit pixels can only show (2^32)^(1024*768) distinct frames without ever duplicating a scene (view).
The funny thing is, it's as if we could pre-generate all the films and all the windows we have ever seen in our lives through screens.
The question here is: can anybody use this abstract idea to create something useful? :D
You're talking about a number of roughly
(2^32)^(1024*768) ≈ ((2^4)^8)^(10^6) ≈ (10^8)^(10^6) = 10^(8*10^6) = 10^8000000,
approximating 2^4 = 16 ≈ 10 and 1024*768 ≈ 10^6.
The number of atoms in universe is about
10^80 // http://en.wikipedia.org/wiki/Observable_universe#Matter_content
I think that there is no way we could pre-generate all the screens in our life.
Let me formulate another question: starting from a number this big, what can we do to reduce it? How could we aggregate similar pictures in order to reduce the complexity?
Another nice question is: what kind of data structure would we need to store all this information? Suppose we reduce the number of images, by merging similar ones, to 10^10. What kind of structure can handle that many different pictures in an efficient way?
So, given some extra information about the scenes you could generate, you might be able to pull out the scenes that no one has ever seen.
If you could take all the pictures out on the internet, along with statistics about what was popular or viewed a lot, and then compute all possible screens, you could pull out whatever was not viewed much.
With some basic rules about the complexity of the image, you might be able to come up with images that have never been seen before. Think: 80% flesh tones, coupled with enough variance to show range, might render people naked. :-)
Of course, the computation of such an idea is vastly outside our potential. (2^32)^(1024*768) is in the superexponential range, which is outside the bounds of reality. I tried to compute it in Ruby, and it just died. It would have been fun if it had actually worked. :-)
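For what it's worth, the size estimate itself is cheap if you work with logarithms instead of materializing the integer (which is why a naive computation dies); a small Python sketch:

# Count the decimal digits of (2^32)^(1024*768) via log10,
# rather than computing the integer itself.
import math

pixels = 1024 * 768
bits_per_pixel = 32
digits = pixels * bits_per_pixel * math.log10(2)
print(f'about 10^{digits:,.0f} distinct frames')  # ~10^7,575,668, i.e. the 10^8000000 ballpark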
