Image Recognition in iOS [closed]

I am creating an iPad app for golfers. I will get their score card image as below, and I want to calculate the sum of scores for each person by scanning this image.
Is there any way to do this? Please suggest an approach or the logic for it.

I have used OpenCV (http://opencv.org) in the past to do something similar, but with sudoku puzzles.
It is a LOT of work, and making it work with handwriting will add to the difficulty.
I found a really good resource for analysing sudoku grids; I'll try to find it again, but it was 4 years ago.
Good luck though.

There is a Tesseract port for iOS which is about the best OCR you're likely to get on the platform without either:
A) Porting another OCR library
or
B) Shipping the images off to an online OCR service
To make this more complex, you don't just want OCR: you want to OCR handwriting and map it into a grid. This is not something that can be done overnight; it is in fact rather complex. Not impossible, but complex.
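If you do go the Tesseract route, here is a minimal sketch using the TesseractOCRiOS wrapper's G8Tesseract class. The asset name "scorecard" is hypothetical, and a real scorecard image would need deskewing and per-cell segmentation before the recognition step:

    import UIKit
    import TesseractOCR  // TesseractOCRiOS pod

    // Minimal sketch: run Tesseract over a whole image and print the result.
    // A real scorecard would need per-cell cropping/thresholding first.
    if let tesseract = G8Tesseract(language: "eng") {
        tesseract.pageSegmentationMode = .auto
        tesseract.image = UIImage(named: "scorecard")  // hypothetical asset
        tesseract.recognize()
        print(tesseract.recognizedText ?? "no text recognized")
    }

Summing each player's column is then ordinary string parsing, which is the easy part; getting reliable digits out of handwriting is the hard part.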
Would it not just be easier to let the players enter their scores straight into the app and then AirPrint a score card?

Related

Should WebGL be used for simple websites? [closed]

I'm not sure whether it is wise to use WebGL for a simple website just to give it a better look. Will this work on all devices?
WebGL is widely supported today: https://www.caniuse.com/#feat=webgl
Whether you "should" use it or not is a broad question. Remember that you aim at improving the user experience. People are forgiving when they play video games, but they don't want to hear their computer fans spin, witness their battery discharging very fast or feel their device getting hot when all they wanted was to read a cooking recipe. Try to be user friendly.
You may for instance want to cap the framerate and/or reduce the resolution on high definition devices, pause the animation loop when the window looses focus (which is not the default behaviour of requestAnimationFrame) or when there is nothing changing on the screen (if the WebGL element is interactive for example). Also, try to write efficient algorithms: it's easy to start writing things on the fragment shader or the CPU when they should be done on the vertex shader. There are many ways to accomplish the same thing and they don't put the same stress on the computer.

Is it a good idea to train a Neural Network on continuously randomly generated training data? [closed]

Hello everyone. I'm building a license plate detection model in TensorFlow. I built a function that chooses a license plate at random from a collection of ~5000 plates, puts it at a random place on a random background, and saves the coordinates. At first I thought I would generate about 40K images this way and train the network with the generated data. But wouldn't it be a good idea to just continuously keep generating new data to feed to the network, and basically eliminate any chance of it getting overfitted?
This is an excellent way to train it to spot the discontinuities around a superimposed yellow/white/blue rectangle, but maybe not such a great way of teaching it to spot a real license plate. If you've got a good way of procedurally generating images, then great! But be warned:
it might spot the wrong pattern.
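For reference, a minimal sketch of the kind of compositing generator the question describes, written in Swift with UIKit rather than the asker's TensorFlow/Python pipeline; the plate and background images are assumed to be loaded elsewhere:

    import UIKit

    // Minimal sketch: paste a plate onto a background at a random origin and
    // return the composite plus the plate's bounding box (the training label).
    func makeTrainingSample(plate: UIImage, background: UIImage) -> (image: UIImage, box: CGRect) {
        let origin = CGPoint(
            x: CGFloat.random(in: 0...(background.size.width - plate.size.width)),
            y: CGFloat.random(in: 0...(background.size.height - plate.size.height)))
        let box = CGRect(origin: origin, size: plate.size)
        let renderer = UIGraphicsImageRenderer(size: background.size)
        let image = renderer.image { _ in
            background.draw(at: .zero)  // background first
            plate.draw(in: box)         // plate superimposed at a random spot
        }
        return (image, box)
    }

Note that whether you generate a fixed 40K batch or stream samples forever, the caveat above still applies: the compositing artifacts themselves become the pattern the network learns.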

Is there any way to plot and analyse data with swift? [closed]

I want to build an iOS application where you can easily view and interact with geophysics data (well logs, seismic sections, etc.), which usually comes as huge matrices in SEGY format or similar.
Is there any way I can do this with Swift? I also need to extract statistics and perform mathematical operations. Is there any scientific use of Swift at all?
Sorry if I'm being vague; it's a fairly new idea, and I would love to do it on iOS instead of using C/MATLAB/Python, etc.
There's nothing native to Swift, but you can always use third-party frameworks.
Of course, the scientific power of MATLAB won't be achievable on iOS, since the language is not intended for that, so you'll have to write some math functions on your own.
For charts, I used CorePlot, but now there's a better alternative written completely in Swift, called ios-charts.
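For the maths side, Apple's Accelerate framework covers a lot of vectorised statistics, so you may not have to write everything yourself. A minimal sketch, assuming the SEGY samples have already been parsed into a [Double] elsewhere:

    import Accelerate

    // Minimal sketch: mean and standard deviation of one trace via vDSP.
    // Parsing the SEGY file into `samples` is assumed to happen elsewhere.
    let samples: [Double] = [2.5, 3.1, 2.9, 4.0, 3.3]  // placeholder data

    var mean = 0.0
    var stdDev = 0.0
    vDSP_normalizeD(samples, 1,   // input vector, stride 1
                    nil, 1,       // no normalised output wanted
                    &mean, &stdDev,
                    vDSP_Length(samples.count))
    print("mean \(mean), standard deviation \(stdDev)")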

Augmented reality to measure - iPhone [closed]

This is my first time working on an augmented reality topic. I am about to develop an application which uses augmented reality to recognise and measure any number of objects in my room, something like in the attached image. I need to identify each object's edges and corners and do some heavy math for the measurements. I am pretty sure that it can't be achieved through the iOS SDK alone; I need to use some external library/SDK, specifically a scanner SDK which does real-time image recognition.
I came across Qualcomm's Vuforia, RealityCap, and metaio. My dilemma is: can a developer who has only worked on product- and business-oriented iOS applications do this image recognition work? An iOS developer does not usually have deep experience in image processing. Can anyone suggest some resources or ideas to get me started? It would help me a lot.
You can use the metaio SDK to scan different objects. You can create as many object models as you like in Unity3D and keep them in a database. The above SDK helps you with 3D scanning and many more features.
I think there is a possibility to use OpenCV.
The API can be found at http://opencv.org
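If you can target newer iOS versions, the built-in Vision framework is also worth a look for the edge/corner detection step, before any measurement maths. A minimal sketch, not what either answer above used, with the CGImage assumed to come from the camera:

    import Vision
    import CoreGraphics

    // Minimal sketch: find rectangle candidates and print their corner points.
    // Corner coordinates come back normalised to the image (0...1 range).
    func detectRectangles(in cgImage: CGImage) {
        let request = VNDetectRectanglesRequest { request, _ in
            guard let rects = request.results as? [VNRectangleObservation] else { return }
            for rect in rects {
                print(rect.topLeft, rect.topRight, rect.bottomRight, rect.bottomLeft)
            }
        }
        request.maximumObservations = 10  // report up to 10 candidate objects
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }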

Getting text from a picture in iOS [closed]

I was wondering if it is possible to get text (like an NSString) from a picture in iOS. So, if you took a picture of the words "Hello World", the code would convert the image into an NSString containing "Hello World". For my purposes it won't matter if it is not case sensitive, but I would like to know if it is possible and, if so, how.
Thanks.
I would look into https://github.com/nolanbrown/Tesseract-iPhone-Demo, and Stack Overflow has some posts about this already, like Getting text from image on ios (image processing).
It is possible.
You would use OCR (Optical Character Recognition).
Because OCR is a broad and complex topic, the question of "how to" is probably too vague for S.O. You should probably do some research, make some prototypes, and develop some more specific follow-up questions.
One of the main open source libraries used to do OCR on iOS is a Google-sponsored open source project called Tesseract.
Here is some info on compiling tesseract for use in iOS apps:
tesseract
The same guy has a nice sample project on GitHub demonstrating how a simple client might use the compiled library:
Pocket-OCR
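On iOS versions released after these answers, there is also a built-in alternative to Tesseract: the Vision framework's text recognizer (iOS 13 and later). A minimal sketch, assuming the photo is available as a CGImage:

    import Vision
    import CoreGraphics

    // Minimal sketch: Vision's built-in text recognizer, a newer
    // alternative to the Tesseract approach described above.
    func recognizeText(in cgImage: CGImage) {
        let request = VNRecognizeTextRequest { request, _ in
            guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
            let lines = observations.compactMap { $0.topCandidates(1).first?.string }
            print(lines.joined(separator: "\n"))  // e.g. "Hello World"
        }
        request.recognitionLevel = .accurate
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }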
