Face recognition: OpenCV + Python + ffnet

I'm Alexander Mashkovtsev, a 15-year-old student at the "Akademy" gymnasium in Kyiv, Ukraine.
I'd like to build a face recognition program using OpenCV.
I'm also writing a science paper about face recognition.
It's very interesting to me, so I'm looking for a team.
I'd like to demonstrate the work at the Kyiv High-Technology Center to get help with this.
Are there people who are ready to help me create this program?
I would be grateful, and I'm also ready to reward the person who helps me.
Thanks!

Have a look at the OpenCV FaceRecognizer docs, or here for a small Python demo (yes, I've seen your other questions here, that's why I'm posting the latter).
But of course you want to write your own; if I understood that right, that's great!
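In case a concrete starting point helps, here is a minimal sketch of the LBPH recognizer from that module. It assumes the opencv-contrib-python build (which provides cv2.face), and the image file names are placeholders:

import cv2  # needs the opencv-contrib-python package for cv2.face
import numpy as np

# Hypothetical training data: grayscale face crops plus an integer label per person.
faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["alex_1.png", "alex_2.png"]]
labels = np.array([0, 0])

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)

# Predict who is in a new image; a lower confidence value means a closer match.
probe = cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE)
label, confidence = recognizer.predict(probe)
print(label, confidence)

LBPH is a friendly first choice because it can be trained on just a few images per person; the same docs also cover the Eigenfaces and Fisherfaces recognizers.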

It seems that the Face++ SDKs are easier to use than OpenCV.
You can refer to the Face++ website and look through their API docs overview.
Good luck!
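As a rough illustration, a Face++ detect call is just an HTTP POST. The endpoint and parameter names below follow their v3 REST docs, but verify them against the current API overview; the credentials and image URL are placeholders:

import requests

resp = requests.post(
    "https://api-us.faceplusplus.com/facepp/v3/detect",
    data={
        "api_key": "YOUR_API_KEY",          # placeholder credentials
        "api_secret": "YOUR_API_SECRET",
        "image_url": "https://example.com/face.jpg",
    },
)
print(resp.json())  # on success, contains a "faces" list with bounding boxes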

Related

Fuzzy logic for RPL objective function experiment

I intend to develop a new OF (objective function) for RPL in the Cooja simulator, but I can't find any tutorial or example on how to do so.
There are hundreds of published papers on this topic, yet no guidance on how to conduct your own experiments. Is there any help or tutorial I can follow?
Furthermore, I need to know which tools are required, such as MATLAB, Python, or C++ libraries.
I'm very confused and can't figure out where to start. I have been searching and reading a lot, but I have found nothing except journal papers that discuss things theoretically.
Please help!

SwiftUI ARKit measurements

Sorry, I am pretty inexperienced with ARKit. I am working on an app that will have more features later, but the first step would basically be recreating the Measure app that is included with iOS. I have looked at the documentation that Apple provides, and most of it is for things like face tracking, object detection, or image tracking. I wasn't sure exactly where to start. The existing code I have is written in SwiftUI, if that matters. Thank you!
I understand that it can be quite confusing in the beginning. I would recommend walking through the tutorial at raywenderlich.com. This tutorial from Codestars on YouTube is also very good if you prefer to listen and watch instead of reading. Both walk through a lot of important parts of ARKit, so I really recommend them. After that you will probably have a great understanding, and you could watch Apple's WWDC 2019 talk "What's New in ARKit 3".
Hope I understood your question correctly; please reach out if you have any questions or other concerns.

Core ML Vision producing incorrect answers

I'm using Apple's Core ML to visually recognize items in an image, but it sometimes returns incorrect answers, identifying shoes as a knife, etc. Is there a way to provide feedback about Core ML and hopefully guide it towards correctly identifying the items in an image?
You're probably giving the Core ML model inputs that it does not expect. I wrote a blog post about the most common mistakes: http://machinethink.net/blog/help-core-ml-gives-wrong-output/
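One quick sanity check along those lines: you can inspect what inputs a model actually declares using Apple's coremltools package in Python (the .mlmodel file name here is a placeholder):

import coremltools as ct

# Load the model and print its declared inputs, e.g. the expected
# image size and color layout, to compare against what you're feeding it.
model = ct.models.MLModel("MyClassifier.mlmodel")
print(model.get_spec().description.input)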
I would open a feedback ticket at https://developer.apple.com/bug-reporting/
Apple is really glad to get developer feedback; try to make yours as detailed as possible :)
EDIT: I would also suggest trying another Core ML model! I had a few tries with Inception V3, which worked like a charm in my apps. https://developer.apple.com/machine-learning/

CvBGCodeBookModel: a good explanation?

I have been digging through OpenCV 2.4.3 trying to figure out the extra functions and parameters that can be used with CvBGCodeBookModel-based background subtraction. The documentation is not very helpful; does anyone know of a resource or tutorial that explains the CvBGCodeBookModel implementation in OpenCV along with some of its functions?
Guidance much appreciated.
There is a sample in the OpenCV source (samples/c/bgfg_codebook.cpp) that uses CvBGCodeBookModel; it might be a good place to look.
It says the code is adapted from the book "Learning OpenCV", published by O'Reilly, so that would be another resource.
There is also this paper that describes the theory; not sure if that would be helpful to you or not.
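CvBGCodeBookModel itself lives in the legacy C API and isn't exposed to Python, but if you just want a baseline to compare its output against, here is a sketch of a different background subtractor, cv2's MOG2 model (assumes a modern OpenCV build and a webcam at index 0):

import cv2

cap = cv2.VideoCapture(0)                     # default webcam
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = subtractor.apply(frame)          # 255 = foreground, 0 = background
    cv2.imshow("foreground mask", fgmask)
    if cv2.waitKey(30) & 0xFF == 27:          # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()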

How to compare a webcam image/video to a specific image/video?

I am basically just starting out in computer programming; I'm mostly fluent in basic Java. I have an idea for an ASL (American Sign Language) to English translator, and my initial problem is how to identify hand movement from a webcam and then compare it to signs that are already stored as images or other videos. If the problem is a bit too advanced for me, then please list any major concepts that I can learn. Please and thank you.
You clearly have a challenging problem ^^. Trying to explain everything you need to solve it would be very hard, mainly because there are many ways to do this. I advise you to read a good book about image processing (Gonzalez's book is a nice choice) and the OpenCV documentation (it's implemented in C and C++ with Python bindings, and it covers a lot of image processing techniques). Maybe you should focus your study on feature detection, motion analysis, and object tracking. Since sign language uses not just hand shapes (static state) but also hand movements (dynamic state) to express something, object tracking may be a good way to describe the signs; there's a small sketch of the feature-matching route below. I hope this information helps you, at least a little. Bye bye.
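For example, here is a hedged sketch of the feature-detection route: matching ORB features between a stored sign image and a single webcam frame. The file name and match threshold are placeholders you would need to tune:

import cv2

reference = cv2.imread("sign_a.png", cv2.IMREAD_GRAYSCALE)  # stored sign image

# Grab one frame from the webcam and convert it to grayscale.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect keypoints and compute binary descriptors in both images.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(reference, None)
kp2, des2 = orb.detectAndCompute(gray, None)

# A Hamming-distance brute-force matcher suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
good = [m for m in matches if m.distance < 40]  # arbitrary threshold
print(f"{len(good)} good matches")  # more matches => more similar images

This only compares static frames; for the dynamic part of signs you would combine it with the motion analysis and tracking topics mentioned above.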
Look at OpenCV. They have a lot of libraries that you might find handy.
http://opencv.willowgarage.com/wiki/
