How to draw a smooth shape based on 4 points in iOS?

I'm using Sketch to draw a shape like the picture below.
Basically, the shape has 4 control points, and I want to connect these points into a shape smoothly.
I tried UIBezierPath, but it seems the API doesn't do what I need. Take the right point shown in the picture: I need the curve to actually pass through it, and as I drag each of the 4 points I should still get a smooth shape. How can I achieve that?

You want something called Catmull-Rom splines. That is a kind of spline where all the control points lie on the curve.
The problem you'll face with Catmull-Rom splines is that with some control points, you can introduce loops or kinks in your curve that you don't want.
I have a project called RandomBlobs on GitHub that demonstrates how to do this.
Here is a YouTube video showing the output of the app:
(Credit to Erica Sadun, author of the outstanding "iOS Developer's Cookbook" series, for the technique. And a disclaimer: I was one of the technical reviewers on a couple of her books, but I did so because I really like her writing and wanted to help.)
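For the curious, the underlying math is tiny. Here is a language-agnostic sketch in Python of a uniform, closed Catmull-Rom curve through four made-up points (not the RandomBlobs code):

```python
# Minimal sketch of a uniform Catmull-Rom spline through a closed set of
# points: every input point lies on the resulting curve.

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def closed_spline(points, samples_per_segment=16):
    """Sample a closed Catmull-Rom curve through all control points."""
    n = len(points)
    curve = []
    for i in range(n):
        p0, p1, p2, p3 = (points[(i + k - 1) % n] for k in range(4))
        for s in range(samples_per_segment):
            curve.append(catmull_rom(p0, p1, p2, p3, s / samples_per_segment))
    return curve

# Four made-up control points: top, right, bottom, left.
pts = [(0.0, 1.0), (1.0, 0.0), (0.0, -1.0), (-1.0, 0.0)]
curve = closed_spline(pts)
```

For UIBezierPath you don't even need to sample: each Catmull-Rom segment is exactly a cubic Bezier with control points p1 + (p2 - p0)/6 and p2 - (p3 - p1)/6, which you can feed straight into addCurve(to:controlPoint1:controlPoint2:).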

Related

OpenCV detect square with difficult background

I am working on an Android app that will recognize a Go board and create an SGF file of it.
I need to detect the whole board in order to warp it and to be able to find the correct lines and stones, like below.
(source: eightytwo.axc.nl)
Right now I use an OpenCV RGB Mat and do the following:
- separate the channels
- Canny each channel separately:
Imgproc.Canny(channel, temp_canny, 30, 100);
- combine (bitwise OR) all channels:
Core.bitwise_or(temp_canny, canny, canny);
- find the board contour
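The channel-combination step can be sketched in a few lines of pure Python; the toy masks below stand in for real Canny output:

```python
# Sketch of the per-channel pipeline above: run an edge detector on each
# channel (stubbed here with toy masks), then OR the binary masks so an
# edge found in *any* channel survives, mirroring Core.bitwise_or.

def or_masks(masks):
    """Bitwise-OR a list of equally sized binary masks (nested lists)."""
    h, w = len(masks[0]), len(masks[0][0])
    out = [[0] * w for _ in range(h)]
    for mask in masks:
        for y in range(h):
            for x in range(w):
                out[y][x] |= mask[y][x]
    return out

# Made-up 2x3 "Canny" outputs for the R, G and B channels:
r = [[1, 0, 0], [0, 0, 0]]
g = [[0, 1, 0], [0, 0, 0]]
b = [[0, 0, 0], [0, 0, 1]]
combined = or_masks([r, g, b])
```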
Still, I am not able to detect the board consistently, as some lines tend to disappear, as you can see in the picture below: black lines on the board and stones are clearly visible, but the board edge is missing in some places.
(source: eightytwo.axc.nl)
How can I improve this detection? Or should I implement multiple ways of detecting it and switch between them when one fails?
* Important to keep in mind *
- Go boards vary in color.
- Go boards can be empty or completely filled with stones, which implies I can't rely on detecting the outer black line on the board.
- Backgrounds are not always plain white.
Here is a small collection of pictures with Go boards I would like to detect:
* Update * 23-05-2016
I've kind of run out of inspiration for solving this with OpenCV, so new inspiration is much appreciated!
In the meantime I've started using machine learning; the first results are nice and I'll keep you posted, but I still have high hopes for an OpenCV implementation.
I'm working on the same problem!
My approach is to assume two things:
The camera is steady
The board is steady
That allows me to deduce the warping parameters from a single frame taken while the board is still empty (before playing), and then use those parameters to warp every subsequent frame, no matter how many stones are occluding the board edges.
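A sketch of what "deduce the warping parameters from a single frame" amounts to, assuming you have already found the four board corners in the empty-board frame. The corner coordinates below are made up, and in a real app you would call OpenCV's getPerspectiveTransform/warpPerspective instead of solving by hand:

```python
# Sketch: compute the 3x3 perspective transform mapping the four board
# corners (found once, on the empty board) to an upright square. This is
# the role cv2.getPerspectiveTransform plays; done here with a pure
# stdlib Gaussian elimination for illustration.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def perspective_transform(src, dst):
    """3x3 homography H with H @ (x, y, 1) ~ (u, v, 1) for all 4 pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, x, y):
    """Apply the homography to one point (homogeneous divide included)."""
    d = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)

# Made-up corners of the board as seen by the steady camera -> 400x400 square:
corners = [(102.0, 48.0), (530.0, 60.0), (600.0, 420.0), (40.0, 400.0)]
square = [(0.0, 0.0), (400.0, 0.0), (400.0, 400.0), (0.0, 400.0)]
H = perspective_transform(corners, square)
```

Because camera and board are steady, H is computed once and every later frame is warped with the same matrix.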

iOS:Which Augmented Reality SDK for virtual try room to be used?

I am working on an iOS augmented reality project where I need to integrate a virtual dressing concept.
I tried OpenCV; it worked as desired in the face detection scenario only, but when I tried the upper body portion it didn't work as desired.
I used UPPER_BODY_HAAR_CASCADE, but it didn't give the results I wanted.
It came out something like this:
But my desired output is something like this:
If someone has achieved this functionality in iOS, please reply.
Not exactly the answer you are looking for. You make your app dependent on the SDK you choose. Most of them are quite expensive to use and may suffer from changes to the usage policy. Additionally, you drag all the extensive functionality you don't need into your app, so at the end of the day your app is 60-100 MB in size.
If I were you (and I was in a similar situation), I would develop my own little SDK with just the functionality you need. If you know how to do it, it takes a couple of days to get the basics working. Add OpenCV and you are in good shape.
PS. @Tommy asked an interesting question: how would one approach implementing something like this video? youtube.com/watch?v=IBE11ROpxHE
Adding some info which is too long for a comment.
@Tommy Nice video. It seems to have all we need to proceed. First of all, for any AR application you need your camera's (mobile phone camera's) calibration info. In the simple case it contains two matrices: the camera matrix and the distortion matrix. The camera matrix is then used for creating the OpenGL projection matrix (how the 3D model is projected onto the flat 2D screen: field of view, clipping planes, etc.). The distortion matrix is used, for example, for unwarping parts of your input frame when detecting something. In the example with the watch, we need to detect the belt and the watch body in order to place the 3D model in that position. Given that the paper watch is not viewed in an ideal perspective, at a 90-degree angle to the eye, it needs to be transformed to that view.
In other words, your paper watch looks like this:
/---/
/ /
/---/
And for the analysis and for detecting the model name, you need it to look like this:
---
| |
| |
---
This is where the distortion matrix is used in order to get a precise transformation. Different cameras have their own distortions.
Most applications use so-called offline calibration. A chessboard is fed into OpenCV functions that detect its cells on a series of frames taken from different perspectives, and the matrices are built based on how the cells are shaped.
In your case, the belt of the watch may be designed in a way that contains everything needed for online calibration. In your video it has a special pattern; I'm pretty sure it's done exactly for this purpose. You may do the same and use a chessboard pattern for simplicity.
Then you could use, let's say, the first 25 frames for online calibration, and once you have all the matrices you go on to detect the paper watch, build the projection matrix, and replace the watch with your 3D model. If all is done right, your paper watch will have coordinate (0, 0, 0) in 3D space and you can easily place something else in that position.
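As a sketch of the camera-matrix-to-projection-matrix step mentioned above (sign conventions differ between references, and the fx/fy/cx/cy values below are made up):

```python
# Sketch: turning OpenCV pinhole intrinsics (fx, fy, cx, cy from the
# chessboard calibration) into an OpenGL-style projection matrix. This
# follows the common convention of a camera looking down -Z; other
# references flip some signs.

def projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
    """4x4 OpenGL projection matrix (row-major here) from intrinsics."""
    return [
        [2 * fx / width, 0.0, 1.0 - 2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]

# Made-up intrinsics, roughly what calibration might return for 640x480:
P = projection_from_intrinsics(fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                               width=640, height=480, near=0.1, far=100.0)
```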

How to detect PizzaMarker

Has somebody tried to find a pizza marker like this one with "only" OpenCV so far?
I was trying to detect this one but couldn't get good results so far. I do not know where this marker is in the picture (no ROI is possible), the marker will be somewhere in the room (different lighting effects), and it will not be facing orthogonally towards us. What I want: the corners, and later the orientation of the marker extracted from the corners, but first of all only the 5 corners (up, down, left, right, center).
What I have tried so far: thresholding, noise clearing, finding contours, but nothing really helped towards a good result. Chessboards or square markers are normally found because of their (parallel) lines; I guess that can't help me here...
What is an easy way to find those markers?
How would you start?
Use another color format like HSV?
A step-by-step idea or tutorial would be really helpful, as I couldn't find tutorials on the net. Maybe this marker isn't called a pizza marker; does somebody know its real name?
Thanks for the help.
First, thank you for all of your help.
It seems that several methods are useful, some more or less time-expensive.
For me the easiest was template matching, but not with the full marker.
I used only a small part of it...
This part can be found 5 times (4 negative matches and one positive) in this new marker:
Now I use only the 4 most negative points and the most positive one, and I get the 5 points I finally wanted. To make this more reliable, I check that they are close to each other and then run cornerSubPix().
If you need something that can operate in real time, I'd go down the edge-detection route and look for intersecting lines, like these guys did. It seems fast and robust to lighting changes.
Read up on the Hough line transform in OpenCV to get started.
Addendum:
Black to white is the strongest edge you can have. If you create a gradient image and keep only the strongest edges found in the scene (via a histogram or otherwise), you can limit detection to just the black/white edges. Look for intersections. This should give you a small number of candidate center points to apply Hough ellipse detection (or an alternative) to. You could rotate a template in as a further check if you wish.
BTW, OpenCV has edge detection, the Hough transform, and fitEllipse if you do go down this route.
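As a sketch of the "look for intersections" step: HoughLines returns each line as a (rho, theta) pair satisfying x*cos(theta) + y*sin(theta) = rho, so intersecting two lines is a 2x2 solve. The example lines below are made up:

```python
# Intersect two lines given in the (rho, theta) form that OpenCV's
# HoughLines returns. Each line satisfies x*cos(t) + y*sin(t) = rho.
import math

def intersect(line1, line2):
    """Intersection point of two (rho, theta) lines, or None if parallel."""
    (r1, t1), (r2, t2) = line1, line2
    det = math.cos(t1) * math.sin(t2) - math.sin(t1) * math.cos(t2)
    if abs(det) < 1e-9:
        return None  # (nearly) parallel lines never meet
    x = (r1 * math.sin(t2) - r2 * math.sin(t1)) / det
    y = (r2 * math.cos(t1) - r1 * math.cos(t2)) / det
    return x, y

# A horizontal line y = 50 (theta = 90 degrees) and a vertical line x = 30:
pt = intersect((50.0, math.pi / 2), (30.0, 0.0))
```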
Actually, this 'pizza' pattern is one of the building blocks of the Haar features used in the Viola-Jones object detection framework.
So what I would do is compute the summed area table, or integral image, using cv::integral(img), and then run an exhaustive search for this pattern at various scales (it is size-dependent).
In each window you use only 9 points (top-left, top-center, ..., bottom-right).
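A pure-Python sketch of the integral-image trick: cv::integral adds a zero row and column, mirrored here, and afterwards any rectangle sum costs only 4 lookups, which is what makes the exhaustive multi-scale search cheap. The 3x3 image is made up:

```python
# Summed-area table (integral image) and constant-time box sums, the
# building block behind fast Haar-like window responses.

def integral(img):
    """Summed-area table with an extra zero row/column, like cv::integral."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle with top-left (x, y) via 4 corner lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral(img)
```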
You can train and use cvHaarDetectObjects to detect the marker using VJ.
Probably not the fastest method but it should work.
You can find more info on object detection methods using OpenCV here: http://opencv.willowgarage.com/documentation/object_detection.html

Detecting location of music note dots in a sheet music image

I want to start a project that uses a very basic form of optical music recognition.
For those who understand sheet music: unlike other OMR projects, the only information that needs to be extracted is the order and pitch value of each note in a bar. Quarter notes, half notes, and whole notes need to be distinguished; shorter notes can acceptably be interpreted as quarter notes. Dots on notes can be ignored, and dynamics markings are not important.
For everyone: Strictly speaking I need to find the locations of each of the following...
... in a sample image like this...
I have no experience in image processing so a basic, conceptual explanation of what technique or set of techniques are used to achieve this would be greatly appreciated.
I would do the following:
Extract the line locations using the Hough transform (you get the angle as well). Crop each group of five lines and process it individually.
For each group of lines, you know the angle of the lines, so you can get the locations of the small vertical lines that separate the bars. Search again in Hough space, but with a specific angle (the original angle + 90 degrees). Crop each bar and process it individually.
For each bar, use template matching on the possible notes (quarter, half, etc.).
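The last step can be sketched as a sliding-window match; the tiny binary "bar" and "note head" below are made up, and a real implementation would use cv2.matchTemplate:

```python
# Sliding-window template matching inside one cropped bar, scoring each
# position by sum of squared differences (the idea behind
# cv2.matchTemplate with TM_SQDIFF); the lowest score wins.

def match_template(img, tmpl):
    """Return (best_x, best_y) minimising the SSD between patch and tmpl."""
    ih, iw = len(img), len(img[0])
    th, tw = len(tmpl), len(tmpl[0])
    best, best_xy = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum((img[y + j][x + i] - tmpl[j][i]) ** 2
                      for j in range(th) for i in range(tw))
            if best is None or ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy

# Made-up binary crop of a bar with one note head, and a 2x2 template:
bar = [[0, 0, 0, 0, 0],
       [0, 0, 1, 1, 0],
       [0, 0, 1, 1, 0],
       [0, 0, 0, 0, 0]]
note_head = [[1, 1],
             [1, 1]]
pos = match_template(bar, note_head)
```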
I did something similar to your project, and trust me, it is a complete mess.
However, for the pitch of each note: extract the head from the rest, calculate its barycentre, and compare its position to the positions of the lines calculated with the Hough transform, as already said (assuming the lines are already straight; if not, I think you can use the Fourier transform).
For the duration you need a classification algorithm.
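A sketch of the barycentre-to-pitch idea above, with made-up staff-line positions and head pixels (a real implementation would take both from the previous steps):

```python
# Map a note head's centroid to the nearest staff position (the five
# lines plus the spaces between them).

def barycentre(pixels):
    """Centroid of a list of (x, y) foreground pixels."""
    n = len(pixels)
    return (sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n)

def nearest_step(y, line_ys):
    """Index of the closest staff position, counting lines and spaces
    alternately from the top line (0 = top line, 1 = first space, ...)."""
    positions = []
    for i in range(len(line_ys) - 1):
        positions.append(line_ys[i])
        positions.append((line_ys[i] + line_ys[i + 1]) / 2)
    positions.append(line_ys[-1])
    return min(range(len(positions)), key=lambda i: abs(positions[i] - y))

lines = [10.0, 20.0, 30.0, 40.0, 50.0]  # five staff lines, top to bottom
head = [(4, 24), (5, 24), (4, 25), (5, 25), (4, 26), (5, 26)]
_, cy = barycentre(head)
step = nearest_step(cy, lines)  # centroid sits in the second space
```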

How to implement a Photoshop Curves editor in UIKit

I'm familiar with UIBezierPath and the corresponding CG routines, but none of them appear to draw the same type of path as what I see in Photoshop, etc.
How would one go about this? I'm just talking about the UI: letting the user drag points around.
A java example I found is here: http://www.cse.unsw.edu.au/~lambert/splines/natcubic.html
I would look into CGContextAddCurveToPoint and let the user drag the curve's control points around. If you need more control points to create a complex curve, just split the resulting curve into simple segments.
Take a look at this article. It explains how to calculate the control points based on the knots you have on your curve.
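One simple recipe for the control points (the Catmull-Rom-to-Bezier conversion, not necessarily the exact method in the linked article): derive each segment's two control points from the neighbouring knots, so the curve passes through every point the user drags. The knot coordinates below are made up:

```python
# Control points for CGContextAddCurveToPoint-style cubic segments,
# derived from the surrounding knots so each knot lies on the curve.

def bezier_controls(p0, p1, p2, p3):
    """Two cubic control points for the segment from knot p1 to knot p2."""
    c1 = tuple(b + (c - a) / 6 for a, b, c in zip(p0, p1, p2))
    c2 = tuple(c - (d - b) / 6 for b, c, d in zip(p1, p2, p3))
    return c1, c2

# Open curve: duplicate the end knots so every segment has four
# neighbours to work with.
knots = [(0.0, 0.0), (50.0, 80.0), (100.0, 20.0), (150.0, 60.0)]
padded = [knots[0]] + knots + [knots[-1]]
segments = [bezier_controls(*padded[i:i + 4]) for i in range(len(knots) - 1)]
```

Each entry of segments gives the two control-point arguments to pass along with the next knot; redo the computation whenever the user drags a point.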
