Core ML Vision producing incorrect answers - iOS

I'm using Apple's Core ML to visually recognize items in an image, but it sometimes returns incorrect answers, identifying shoes as a knife, etc. Is there a way to provide feedback to Core ML and hopefully guide it towards correctly identifying the items in an image?

You're probably giving the Core ML model inputs that it does not expect. I wrote a blog post about the most common mistakes: http://machinethink.net/blog/help-core-ml-gives-wrong-output/
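A common example of such a mismatch is image preprocessing: if the original network expected pixels scaled to, say, [-1, 1] but the converted .mlmodel receives raw 0-255 values, you get confidently wrong labels. If you converted the model yourself, a minimal coremltools sketch looks roughly like this (the model path, input format, and scale/bias values are placeholders and must match whatever the network was actually trained with):

    # Hypothetical conversion sketch -- file names and preprocessing values are
    # placeholders; they must match the original training pipeline.
    import coremltools as ct
    import tensorflow as tf

    keras_model = tf.keras.models.load_model("my_classifier.h5")

    # Declare the input as an image and tell Core ML how to scale the pixels.
    # Here we assume the network expects RGB values mapped from [0, 255] to [-1, 1].
    image_input = ct.ImageType(scale=2 / 255.0, bias=[-1.0, -1.0, -1.0])

    mlmodel = ct.convert(keras_model, inputs=[image_input])
    mlmodel.save("MyClassifier.mlmodel")

On the app side, driving the model through Vision (VNCoreMLRequest) also avoids a second class of input mistakes, since Vision resizes and re-orients the image for you.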

I would open a feedback ticket at https://developer.apple.com/bug-reporting/
Apple is really glad to get feedback from devs. Try to make yours as detailed as possible :)
EDIT: I would also suggest that you try another Core ML model! I had a few tries with Inception V3, which worked like a charm in my apps. https://developer.apple.com/machine-learning/

Related

SwiftUI ARKit measurements

Sorry, I am pretty inexperienced with ARKit. I am working on an app that will have more features later, but the first step would basically be recreating the measure app that is included with iOS. I have looked at the documentation that Apple gives, and most of it is for things like face tracking, object detection, or image tracking, so I wasn't sure exactly where to start. The existing code I have is written in SwiftUI, if that matters. Thank you!
Understand that it can be quite confusing in the beginning. I would recommend walking through the tutorial at raywenderlich.com. The tutorial from Codestars on YouTube is also very good if you prefer to listen and watch instead of read. Both go through a lot of the important parts of ARKit, so I really recommend them. After that you will probably have a good understanding, and you could watch Apple's WWDC 2019 talk, What's New in ARKit 3.
Hope I understood your question correctly and please reach out if you have any questions or other concerns.

How to train on my own images using ResNet50 in Mask R-CNN

I am researching Mask R-CNN. I want to know how to train it on my own images (knife, sofa, baby, ...) using a ResNet50 backbone in Mask R-CNN. I have tried to find this on GitHub, but I can't. Please, anybody who knows how to handle it, help me.
Try this implementation of Mask R-CNN on GitHub here.
You can follow the Mask_RCNN GitHub repo. It has both ResNet50 and ResNet101 backbones. It is a beautiful implementation, I would say. The base model is from FAIR (Facebook AI Research). There is a demo file which you can check before starting your work.
If it works well for you, have a look at my answer; it will help you train the model with your custom data. The answer is a bit long, but it lists all the steps, and a minimal configuration sketch follows the list below.
Some things I personally like about this implementation:
It is easy to set up and won't bother you much with dependencies; a Python virtual environment does wonders.
It falls back automatically from GPU to CPU and vice versa.
It has good support from its developers and gets commits frequently.
The code is very customisable, so if you want to make changes it's pretty easy: flip a few booleans, tweak a few numbers, and you are done!
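As a rough sketch of what the steps above boil down to with that repo (the paths, class count, and dataset objects are placeholders; a real run needs your own Dataset subclass that loads your knife/sofa/baby images and masks, as shown in the repo's samples):

    # Minimal training sketch for the Mask_RCNN repo; everything marked as a
    # placeholder must be replaced with your own data before this does anything useful.
    from mrcnn.config import Config
    from mrcnn import model as modellib, utils


    class MyConfig(Config):
        NAME = "my_objects"
        BACKBONE = "resnet50"     # the repo also supports "resnet101"
        NUM_CLASSES = 1 + 3       # background + knife, sofa, baby (adjust to your data)
        IMAGES_PER_GPU = 1
        STEPS_PER_EPOCH = 100


    config = MyConfig()
    model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs/")

    # Start from the released COCO weights, skipping the layers whose shapes
    # depend on NUM_CLASSES.
    model.load_weights("mask_rcnn_coco.h5", by_name=True,
                       exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                                "mrcnn_bbox", "mrcnn_mask"])

    # Placeholder datasets: in practice you subclass utils.Dataset and register
    # your images, classes, and masks before calling prepare().
    dataset_train = utils.Dataset()
    dataset_train.prepare()
    dataset_val = utils.Dataset()
    dataset_val.prepare()

    # Fine-tune only the network heads first; switch layers="all" to train everything.
    model.train(dataset_train, dataset_val,
                learning_rate=config.LEARNING_RATE,
                epochs=30, layers="heads")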

Get the Microsoft Research Sentence Completion Challenge

I am currently working on natural language processing for academic purposes, and I would like to get the Microsoft Research Sentence Completion Challenge dataset.
Unfortunately, it seems that it is no longer available on Microsoft's website: when I click on either of the two links to get the training or test data, I am redirected to the main page of Microsoft Research. I tried to contact Microsoft's technical support, but they didn't answer me, and I couldn't find the dataset on another website.
Do you know where I could find this dataset (I'm mainly interested in the test set)?
Thanks in advance for your help.
I did some research and found two sources (they were quite hard to find):
Kaggle - https://inclass.kaggle.com/c/mlsd-hw3/data
Github repo/google drive link - https://drive.google.com/drive/folders/0B5eGOMdyHn2mWDYtQzlQeGNKa2s
Hope they are correct :)

Face recognition. OpenCV+Python+ffnet

I'm Alexander Mashkovtsev, a student at the gymnasium "Akademy", Kyiv, Ukraine. I'm 15.
I'd like to write a face recognition program using OpenCV.
I am also writing a research paper about face recognition.
It's very interesting for me, so I'm looking for a team.
I'd like to demonstrate the work at the Kyiv High-Technology Center to get help with this.
Are there people who are ready to help me create this program?
I will be grateful. I'm also ready to reward the person who helps me.
Thanks!
Have a look at the OpenCV face recognition docs,
or here for a small Python demo (yeah, I've seen your other questions here, that's why I'm posting the latter).
But of course you want to write your own; if I understood that right, that's great!
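To give an idea of the shape of such a Python demo, here is a minimal sketch using OpenCV's contrib face module (it needs the opencv-contrib-python build; the image paths and labels are made up):

    # Minimal LBPH face recognition sketch; the face crops and labels are placeholders.
    import cv2
    import numpy as np

    # Training data: grayscale face crops plus an integer label per person.
    faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
             for p in ("alice1.png", "alice2.png", "bob1.png")]
    labels = np.array([0, 0, 1], dtype=np.int32)  # 0 = Alice, 1 = Bob

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(faces, labels)

    # Predict the identity of a new face crop; a lower confidence value means a closer match.
    test = cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE)
    label, confidence = recognizer.predict(test)
    print(f"predicted label {label} with confidence {confidence:.1f}")

LBPH is only one of the recognizers in that module; the Eigenfaces and Fisherfaces recognizers are created and trained the same way.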
It seems that the Face++ SDKs are easier than OpenCV.
You can refer to the Face++ website and look through their API docs overview.
Good luck!

XML-based image gallery in iOS development, help needed

I searched everywhere and tried to learn from the Apple documentation, but did not succeed. I have learned how to fetch XML and how to load images, but not how to combine the two.
Please suggest an example if one exists.
I came here asking the same question. After some struggle I found an interesting article by IBM:
http://www.ibm.com/developerworks/library/x-iosslideshow/
This should help you (and me) to build the gallery.
If not, just take it as a guide on how to achieve that.
If I learn how to do it properly, I'll let you know.
Cheers,
Wolf
