How to implement a thumb and forefinger interface - OpenCV

I am developing a simple multi-touch table using only a projector and a webcam. I found out that I could use this thumb and forefinger interface technique, but I don't have any clue how to implement it. I think it can be implemented with OpenCV or openFrameworks and used with CCV. Can anybody help me?
Thanks,

You could go to the researcher's website and download his recent papers on the subject; the core parts of his technique should be explained there.

I found something helpful here. Thanks, everyone, and if you have any coding examples, let me know.
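For anyone looking for a concrete starting point, here is a minimal OpenCV (Python) sketch of one plausible approach; it is not the paper's exact method. It segments the hand by skin colour, uses convexity defects to approximate the thumb and forefinger tips, and reports a "pinch" when the two tips converge. The HSV thresholds are placeholders that must be tuned to your lighting:

```python
# A rough sketch only: skin-colour segmentation plus convexity defects.
# The HSV range below is a placeholder and must be tuned to your setup.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))  # rough skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)  # assume biggest blob is the hand
        hull = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull)
        if defects is not None:
            # The deepest defect approximates the valley between thumb and
            # forefinger; its two neighbouring hull points are the fingertips.
            s, e, _, _ = max(defects[:, 0], key=lambda r: r[3])
            thumb = tuple(int(v) for v in hand[s][0])
            fore = tuple(int(v) for v in hand[e][0])
            cv2.circle(frame, thumb, 8, (0, 255, 0), -1)
            cv2.circle(frame, fore, 8, (0, 0, 255), -1)
            # Report a "pinch" (a click, in interface terms) when the tips converge.
            if np.hypot(thumb[0] - fore[0], thumb[1] - fore[1]) < 30:
                cv2.putText(frame, "pinch", (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)

    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```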

Related

SwiftUI ARKit measurements

Sorry, I am pretty inexperienced with ARKit. I am working on an app that will have more features later, but the first step would basically be recreating the Measure app that is included with iOS. I have looked at the documentation that Apple provides, and most of it is for things like face tracking, object detection, or image tracking. I wasn't sure exactly where to start. The rest of my existing code is written in SwiftUI, if that matters. Thank you!
I understand that it can be quite confusing in the beginning. I would recommend walking through the tutorial at raywenderlich.com. The tutorial from Codestars on YouTube is also very good if you prefer to listen and watch instead of reading. Both go through a lot of the important parts of ARKit, so I really recommend them. After that you will probably have a good understanding, and you could watch Apple's WWDC 2019 talk, What's New in ARKit 3.
I hope I understood your question correctly; please reach out if you have any other questions or concerns.
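To make the measuring part concrete, here is a minimal sketch (an assumed starting point, not a full clone of the Measure app) that raycasts two taps against detected planes and prints the distance between the hit points:

```swift
import UIKit
import ARKit

// A rough starting point: tap twice on a detected surface and print the
// distance between the two hit points, like a bare-bones Measure app.
final class MeasureViewController: UIViewController {
    private let sceneView = ARSCNView()
    private var points: [SIMD3<Float>] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        // Track the world and look for flat surfaces to raycast against.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(config)

        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap)))
    }

    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        let location = gesture.location(in: sceneView)
        // Raycast from the tap into the scene, snapping to any detected plane.
        guard let query = sceneView.raycastQuery(from: location,
                                                 allowing: .estimatedPlane,
                                                 alignment: .any),
              let hit = sceneView.session.raycast(query).first else { return }

        let t = hit.worldTransform.columns.3
        points.append(SIMD3<Float>(t.x, t.y, t.z))

        // Once we have two points, the measurement is just their distance.
        if points.count == 2 {
            let meters = simd_distance(points[0], points[1])
            print(String(format: "distance: %.2f m", meters))
            points.removeAll()
        }
    }
}
```

In a SwiftUI project you would wrap a controller like this in UIViewControllerRepresentable, and place labels or sphere nodes at the hit points instead of printing.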

Is there an equivalent of Android's ShowcaseView for iOS?

There is a project for Android on Github:
https://github.com/amlcurran/Showcaseview
According to the readme:
The ShowcaseView library is designed to highlight and showcase specific parts of apps to the user with a distinctive and attractive overlay. This library is great for pointing out points of interest for users, gestures, or obscure but useful items.
I would like to know if a functionally equivalent one exists for iOS. It would be useful to give users a quick tour of an app. Typically app intros are handled with a few swipe screens. Think Uber and Duolingo.
Googling and searching Stack Overflow return nothing meaningful. If I had time, I'd work on this as a side project.
Edit: I've ended up using github.com/IFTTT/RazzleDazzle which works for both Swift and Objective-C.
You can also try https://github.com/rahuliyer95/iShowcase, a similar implementation of Android's ShowcaseView for iOS.
You can also check out my implementation at https://github.com/scihant/CTShowcase. It was developed using Swift 2.0 and can also draw animated highlights.
Update: it's now updated for Swift 3.0.
You can check out MaterialShowcase on GitHub, a small library that I created while developing my company's app.
There's a framework we've been working on that might be useful, BubbleShowCase. Check it out and don't hesitate to leave any feedback.
Try WSCoachMarksView. It is very easy to use.
DDCoachMarks is a simple and flexible iOS alternative.
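For reference, the core technique all of these libraries implement is a dimmed overlay with a clear "spotlight" hole cut over the view being showcased. Here is a minimal sketch of that technique; every name in it is illustrative, not taken from any of the libraries above:

```swift
import UIKit

// A minimal "spotlight" overlay: dim the screen, cut a clear hole around
// the highlighted view, show a caption, and dismiss on tap.
final class SpotlightOverlay: UIView {
    init(over parent: UIView, highlighting target: UIView, caption: String) {
        super.init(frame: parent.bounds)
        backgroundColor = .clear

        // Build a compound path: full-screen rect plus the hole rect.
        // The even-odd fill rule leaves the hole transparent.
        let dimmer = CAShapeLayer()
        let path = UIBezierPath(rect: bounds)
        let hole = target.convert(target.bounds, to: parent).insetBy(dx: -8, dy: -8)
        path.append(UIBezierPath(roundedRect: hole, cornerRadius: 8))
        dimmer.path = path.cgPath
        dimmer.fillRule = .evenOdd
        dimmer.fillColor = UIColor.black.withAlphaComponent(0.7).cgColor
        layer.addSublayer(dimmer)

        // Caption placed just below the highlighted view.
        let label = UILabel(frame: CGRect(x: 20, y: hole.maxY + 16,
                                          width: bounds.width - 40, height: 60))
        label.text = caption
        label.textColor = .white
        label.numberOfLines = 0
        addSubview(label)

        // Tap anywhere to dismiss, like the libraries above do.
        addGestureRecognizer(UITapGestureRecognizer(target: self,
                                                    action: #selector(dismiss)))
        parent.addSubview(self)
    }

    @objc private func dismiss() { removeFromSuperview() }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}
```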

Face recognition. OpenCV+Python+ffnet

I'm Alexander Mashkovtsev, a 15-year-old student at the "Akademy" gymnasium in Kyiv, Ukraine.
I'd like to build a face recognition program using OpenCV.
I am also writing a science paper about face recognition.
It's very interesting to me, so I am looking for a team.
I'd like to demonstrate the work at the Kyiv High-Technology Center to get help with this.
Are there people who are ready to help me create this program?
I would be grateful, and I am also ready to reward the person who helps me.
Thanks!
Have a look at the OpenCV facereco docs, or here for a small Python demo (yes, I've seen your other questions here; that's why I'm posting the latter).
But of course you want to write your own; if I understood that right, that's great!
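To make the pipeline from those docs concrete, here is a minimal sketch of the classic approach: Haar-cascade face detection plus an LBPH recognizer. It assumes the opencv-contrib-python package is installed and that each placeholder image file contains exactly one detectable face:

```python
# A sketch of the classic pipeline: Haar-cascade detection + LBPH recognition.
# Requires opencv-contrib-python; file names and labels are placeholders.
import cv2
import numpy as np

# Detect faces with the Haar cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(path):
    """Return the first detected face in the image, grayscale, 100x100."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    x, y, w, h = cascade.detectMultiScale(img, 1.3, 5)[0]
    return cv2.resize(img[y:y + h, x:x + w], (100, 100))

# Train LBPH on a few labelled example images (placeholder file names).
faces = [crop_face("alice1.jpg"), crop_face("alice2.jpg"), crop_face("bob1.jpg")]
labels = np.array([0, 0, 1])  # 0 = Alice, 1 = Bob
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)

# Predict the identity of a new image; lower "confidence" means a closer match.
label, confidence = recognizer.predict(crop_face("unknown.jpg"))
print("predicted label:", label, "confidence:", confidence)
```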
It seems that the Face++ SDKs are easier to use than OpenCV.
You can refer to the Face++ website and look through their API docs overview.
Good luck!

How to compare a webcam image/video to a specific image/video?

I am basically just starting out in computer programming, mostly fluent in basic Java. I have an idea of creating an ASL (American Sign Language) to English translator, and my initial problem is how to identify hand movements from a webcam and then compare them to signs that are already stored as an image or another video. If the problem is a bit too advanced for me, then please list any major concepts that I can learn. Please and thank you.
You clearly have a challenging problem ^^. Trying to explain everything you need to solve it would be very hard, mainly because there are many ways to do this. I advise you to read a good book on image processing (Gonzalez's book is a nice choice) and the OpenCV documentation (OpenCV is a library that implements a lot of image processing techniques; it is written in C and C++ and has Python bindings). Maybe you should focus your study on feature detection, motion analysis, and object tracking. Since sign language uses not just hand signs (static state) but also hand movements (dynamic state) to express something, object tracking may be a good way to describe the signs. I hope this information helps you, at least a little -^.^- Bye bye.
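To make the feature-detection part concrete, here is a minimal sketch that scores webcam frames against a single stored sign image using ORB keypoint matching. The file name is a placeholder, and real ASL recognition would also need the dynamic (motion) side described above:

```python
# A sketch of static comparison only: ORB keypoints + brute-force matching.
# "sign_a.jpg" is a placeholder reference image of one sign.
import cv2

reference = cv2.imread("sign_a.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create()
ref_kp, ref_des = orb.detectAndCompute(reference, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)
    if des is not None:
        # Count "good" matches; more matches = frame looks more like the sign.
        matches = [m for m in matcher.match(ref_des, des) if m.distance < 40]
        cv2.putText(frame, f"matches: {len(matches)}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("compare", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```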
Look at OpenCV; it has a lot of functionality that you might find handy.
http://opencv.willowgarage.com/wiki/

Particle effects in iOS

I've hired a developer to work on an iPhone & iPad application and as part of the application we would like to have a particle effect.
How do you implement particle effects using Core Graphics?
Please note that I've referred the developer to the following link, but he's told me it doesn't work well due to leaks:
http://www.clingmarks.com/generate-particles-along-a-path/822
If I may, I'd suggest this tutorial: http://www.raywenderlich.com/6063/uikit-particle-systems-in-ios-5-tutorial for implementing your particles; maybe you will have better luck with it. I haven't personally used it, but I've used tons of other tutorials from Ray's site, and all of them are fantastic.
Additionally, if you just want to debug your current implementation, this one seems to be up your alley: http://www.raywenderlich.com/2696/how-to-debug-memory-leaks-with-xcode-and-instruments-tutorial.
Hope that helps!
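For what it's worth, the approach the first tutorial teaches is CAEmitterLayer (the UIKit particle system added in iOS 5), which sidesteps hand-rolled Core Graphics drawing and its leak pitfalls. A minimal sketch, assuming a "spark.png" image in the app bundle:

```swift
import UIKit

// Falling sparks across the top of the screen via CAEmitterLayer.
final class ParticleViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .black

        // The emitter is a line along the top edge of the screen.
        let emitter = CAEmitterLayer()
        emitter.emitterPosition = CGPoint(x: view.bounds.midX, y: 0)
        emitter.emitterShape = .line
        emitter.emitterSize = CGSize(width: view.bounds.width, height: 1)

        let cell = CAEmitterCell()
        cell.contents = UIImage(named: "spark.png")?.cgImage
        cell.birthRate = 20          // particles spawned per second
        cell.lifetime = 5            // seconds each particle lives
        cell.velocity = 100          // initial speed
        cell.velocityRange = 50
        cell.emissionLongitude = .pi // emit downwards
        cell.scale = 0.1
        cell.scaleRange = 0.05
        cell.alphaSpeed = -0.2       // fade out over the lifetime

        emitter.emitterCells = [cell]
        view.layer.addSublayer(emitter)
    }
}
```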
