Illumination Normalization for Face Recognition [closed] - opencv

I am doing a project on face recognition. However, when I implemented illumination normalization, I didn't get the expected results.
I applied the idea from the paper below:
http://lear.inrialpes.fr/pubs/2007/TT07/Tan-amfg07a.pdf
But the output images have different grayscale averages. I am not sure, but I suspect this will significantly reduce the PCA-based face recognition rate.
What should I do to get similar grayscale averages across the output images?
Thanks a lot.
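For reference, here is a minimal sketch of the Tan & Triggs preprocessing chain (gamma correction, difference-of-Gaussians filtering, contrast equalization) in Python/OpenCV; the parameter defaults follow the paper, while the file names and the final mean/std standardization step are my own additions intended to force a common grayscale average:

```python
import cv2
import numpy as np

def tan_triggs(gray, gamma=0.2, sigma0=1.0, sigma1=2.0, alpha=0.1, tau=10.0):
    # Gamma correction: compress the dynamic range of the raw intensities
    x = np.float32(gray) ** gamma
    # Difference of Gaussians (DoG) filtering: suppress slow shading gradients
    x = cv2.GaussianBlur(x, (0, 0), sigma0) - cv2.GaussianBlur(x, (0, 0), sigma1)
    # Two-stage contrast equalization from the paper
    x = x / np.mean(np.abs(x) ** alpha) ** (1.0 / alpha)
    x = x / np.mean(np.minimum(np.abs(x), tau) ** alpha) ** (1.0 / alpha)
    # Final nonlinear compression of extreme values
    return tau * np.tanh(x / tau)

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # assumed input file
out = tan_triggs(img)
# Standardize every output to the same mean/std before rescaling to 8-bit,
# so all preprocessed faces share the same grayscale average
out = (out - out.mean()) / (out.std() + 1e-8)
out = np.clip(out * 32 + 128, 0, 255).astype(np.uint8)
cv2.imwrite("normalized.png", out)
```

Note that the paper's chain deliberately normalizes contrast rather than mean, so if your PCA step is sensitive to the residual mean differences, standardizing each output image to a fixed mean and standard deviation as above is a simple fix.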

Related

What are the good practices to building your own custom facial recognition? [closed]

I am working on building a custom facial recognition system for our office.
I am planning to use Google FaceNet.
I know you can find or create your own version of the FaceNet model in Keras or PyTorch; there is no issue with that. My question is about creating the dataset: what are the best practices for capturing photos of a person when I don't have any prior photo of that person, and all I have is a camera and the person? Should I create variance by changing the lighting conditions, orientation, or face size?
A properly trained FaceNet model should already be somewhat invariant to lighting conditions, pose, and other features that should not be part of identifying a face. At least, that is what is claimed in a draft of the FaceNet paper. If you only intend to compare feature vectors generated by the network, and to recognize a small group of people, your own dataset likely does not have to be particularly large.
Personally, I have done something quite similar to what you are trying to achieve for a group of around 100 people. The dataset consisted of one image per person, and I used a 1-NN (nearest-neighbor) classifier on the generated feature vectors. While I do not remember the exact results, it worked quite well. The pretrained network's architecture was different from FaceNet's, but the overall idea was the same.
The only way to truly answer your question, though, is to experiment and see how well things work out in practice.
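For illustration, a minimal sketch of that 1-NN matching step, assuming you already have an `embed` function (any Keras/PyTorch FaceNet port) that returns L2-normalized embeddings; the distance threshold is a placeholder you would tune on your own data:

```python
import numpy as np

# gallery: one embedding per known person, shape (N, D); labels: list of N names.
# embed() is a placeholder for your FaceNet forward pass.
def identify(face_img, gallery, labels, embed, threshold=1.1):
    query = embed(face_img)                          # (D,) embedding of the probe face
    dists = np.linalg.norm(gallery - query, axis=1)  # Euclidean distance to each person
    best = int(np.argmin(dists))
    # Verification step: accept the nearest neighbor only if it is close enough
    return labels[best] if dists[best] < threshold else "unknown"
```

With one gallery image per person, enrolling a new colleague is just appending one embedding and one label.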

Why are there no dropout layers in inception net? [closed]

I was recently implementing InceptionNet and came across the scenario where dropout layers are not used at all in the early or middle stages of the network. Is there any particular reason for this?
You can see the model posted in the paper.
The Inception module itself has a slight regularization effect that is similar to dropout. With dropout, we keep each node with a certain probability, so we are effectively training the architecture over many randomly sampled sub-networks. The situation in an Inception module is similar, except that all the parallel paths are applied at once rather than sampled. Hence, the Inception modules themselves help prevent the parameters from overfitting, so learning still happens throughout the network. (Note that the original GoogLeNet does still apply dropout just before the final classifier, only not in the early or middle stages.) For a deeper understanding, please check the original paper; this is an observation, not a proof.
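To make the "all parallel paths applied at once" point concrete, a minimal sketch of a GoogLeNet-style Inception module in Keras; the default filter counts are those of the paper's 3a module:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, f1=64, f3r=96, f3=128, f5r=16, f5=32, proj=32):
    # Four parallel branches applied to the same input; no dropout between them
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3r, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b3)
    b5 = layers.Conv2D(f5r, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b5)
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(proj, 1, padding="same", activation="relu")(bp)
    # Concatenate the branches along the channel axis
    return layers.Concatenate()([b1, b3, b5, bp])
```

Because the four branches see the input at different receptive fields and are then concatenated, the module behaves like a small built-in ensemble, which is where the dropout-like regularization intuition comes from.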

Can Tensorflow Object Detect With a Small Data Set? [closed]

I am hoping that TensorFlow can turn this input into this output.
Input: A floorplan PNG, and 1 - 5 images of a symbol
Output: The same floorplan, but with all matching symbols highlighted
I can do the hard work of figuring out HOW to do it, but I don't want to waste two weeks just to find out it isn't possible. I know I'd need to train it with multiple images, but I won't have more than 5 examples of a given symbol.
Does TensorFlow have these capabilities?
Thanks!
Yes, it is possible to use TensorFlow to create a machine learning model that does this for you, but I would bet that is not how you want to do it. First off, in order to do this in TensorFlow, you would need to manually create a large number of training samples and spend a significant amount of time figuring out how to define the network and train it. Sure, you could do it, but with only 1-5 examples per symbol I definitely wouldn't advise it.
If you have a specific set of symbols that you want to highlight, it would probably be better to use OpenCV to find and highlight them. For example, you could use template matching to find every occurrence of a specific symbol in the floor plan and then highlight the matches by modifying the pixel colors.
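A minimal sketch of that OpenCV approach; the file names and the 0.8 score threshold are assumptions you would adapt to your symbols:

```python
import cv2
import numpy as np

plan = cv2.imread("floorplan.png")                      # assumed input files
gray = cv2.cvtColor(plan, cv2.COLOR_BGR2GRAY)
symbol = cv2.imread("symbol.png", cv2.IMREAD_GRAYSCALE)
h, w = symbol.shape

# Normalized cross-correlation: every location scoring above the threshold
# is treated as an occurrence of the symbol
scores = cv2.matchTemplate(gray, symbol, cv2.TM_CCOEFF_NORMED)
for y, x in zip(*np.where(scores >= 0.8)):
    cv2.rectangle(plan, (int(x), int(y)), (int(x) + w, int(y) + h), (0, 0, 255), 2)

cv2.imwrite("highlighted.png", plan)
```

Keep in mind that plain template matching is neither rotation- nor scale-invariant, so if symbols appear rotated or resized on the floor plan, you would also need to match against rotated and scaled versions of the template.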

How to write a program that outputs source code [closed]

This might not be the right place to ask, but I am interested in artificial neural networks and want to learn more.
How do you design a network and train it on source code so it can come up with programs for, for example, easy number theory problems?
What's the general name of this research field?
This is a hugely interesting, and very hard, problem area; the field is generally called program synthesis (with neural networks, often neural program synthesis or program induction). It will probably take you months of reading to even understand how to attack the problem. Here are a few things that might help you get started; they are more to show the problems you will face than to provide solutions:
Start with Andrej Karpathy's post on the effectiveness of character-level RNNs, which includes generating code-like text:
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Then read the Neural Turing Machines paper, and related papers:
https://arxiv.org/pdf/1410.5401v2.pdf
Next, you probably want to read the classic papers on program synthesis and generation at the parse tree/AST level (mostly out of MIT, I think, in the early 90s).
Best of luck. This is not trivial.
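As a starting point in the spirit of the char-rnn post above, a minimal sketch of a character-level language model trained on a corpus of source code; the corpus file name and the hyperparameters are placeholders:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# "programs.txt" is an assumed text file containing source code to learn from
corpus = open("programs.txt").read()
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}
seq_len = 64

# Slice the corpus into (input sequence, next character) training pairs
X = np.array([[idx[c] for c in corpus[i:i + seq_len]]
              for i in range(len(corpus) - seq_len)])
y = np.array([idx[corpus[i + seq_len]] for i in range(len(corpus) - seq_len)])

# Predict the next character from the previous seq_len characters
model = tf.keras.Sequential([
    layers.Embedding(len(chars), 64),
    layers.LSTM(256),
    layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, batch_size=128, epochs=10)
```

Sampling characters from such a model produces syntactically plausible code, which is exactly the gap the links above address: plausibility is very far from programs that actually solve number theory problems.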

Image Segmentation applications [closed]

I want to experiment with the k-means clustering method on different kinds of images, so I am trying to find the different kinds of images used in image segmentation, such as MRI images. I want to gather some more categories.
Any suggestions would be greatly appreciated.
Although this is not the correct place for your question: image segmentation has a wide range of applications, including segmenting satellite imagery and medical images (such as MRI scans), texture recognition, facial recognition systems, automatic number plate recognition, and many other machine vision applications.
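Since the question is about experimenting with k-means itself, a minimal OpenCV sketch of k-means color segmentation that works on any of those image categories; the file name and the choice of k are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("image.png")             # assumed input file
pixels = np.float32(img.reshape(-1, 3))   # one row per pixel, BGR features

k = 4                                     # number of segments to experiment with
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Paint each pixel with its cluster center to visualize the segmentation
segmented = np.uint8(centers)[labels.flatten()].reshape(img.shape)
cv2.imwrite("segmented.png", segmented)
```

Clustering on raw BGR values segments by color only; adding pixel coordinates or texture features to each row changes what "similar" means and is a natural next experiment across image categories.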
