Xcode custom overlay capture - iOS

I am working on an OCR app and I want to give the user the option to manually select the area (while the camera is capturing) on which to perform the OCR. The issue I face is that I draw a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method. However, despite the rectangle being there, the entire captured area is used rather than just the region inside the rectangle.
In other words, I do not want the entire picture to be sent for processing, only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but it has no effect on the capture. I do not want the entire screen area to be processed, only the area under the rectangle.
I hope this makes sense; I have tried my best to explain it.
Thanks, and let me know.

Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the preview into the proper place. Then use a UIGraphics image context to take a "screenshot" of this area and send that UIImage's CGImage in for processing.
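A minimal sketch of the "screenshot" step, assuming the camera frames are already being rendered into a view hosted inside a UIScrollView; UIGraphicsImageRenderer stands in for the UIGraphics image context, and previewScrollView is a placeholder name:

    import UIKit

    // Render only the scroll view's currently visible region into a UIImage.
    // `previewScrollView` is whatever scroll view hosts the camera preview view.
    func snapshotVisibleRegion(of previewScrollView: UIScrollView) -> UIImage {
        let visibleRect = CGRect(origin: previewScrollView.contentOffset,
                                 size: previewScrollView.bounds.size)
        let renderer = UIGraphicsImageRenderer(size: visibleRect.size)
        return renderer.image { context in
            // Shift the origin so only the visible portion of the content is drawn.
            context.cgContext.translateBy(x: -visibleRect.origin.x,
                                          y: -visibleRect.origin.y)
            previewScrollView.layer.render(in: context.cgContext)
        }
    }

    // Usage: hand snapshotVisibleRegion(of: scrollView).cgImage to the OCR engine.

Note that layer.render(in:) only captures content drawn by UIKit, so the preview should be fed into the scroll view as images (e.g. from an AVCaptureVideoDataOutput) rather than shown through an AVCaptureVideoPreviewLayer.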

Related

Unity3D: Displaying a RenderTexture overlaid on top of another Camera

To keep it simple: I have a RenderTexture from one camera, and I need to overlay it onto another camera either through:
a) A RenderTexture of that camera
or
b) directly to the cameras rendering
What I'm trying to do can also be seen in this representation:
fig. 1 shows the main render, fig. 2 the desired overlay to be applied, fig. 3 the overlay composited onto the main render, and fig. 4 the post-processing of the newly combined image.
The first box is the main camera, and the second is what I want overlaid onto it as a RenderTexture in OnRenderObject(), i.e. when the two get rendered. Then in OnPostRender() they are combined, with the overlay on top. Then in OnRenderImage(), image effects can freely modify the combined image.
To list what I need help with:
I do not know how to either:
Access the cameras rendering directly
or
Set a RenderTexture as a cameras rendering in OnPostRender()
I also need help, through an explanation, with correctly overlaying a RenderTexture onto either of the above (this would use the depth rendered to the RenderTexture as alpha), just as shown in fig. 3 of the image.
This is the method I've thought up in order to overlay a forward rendering onto a deferred one for image effects. If you have any other solutions or ideas, it would be much appreciated if you could post them as a comment.
Just to clarify: I'm not asking for source code, just methods and/or links to Unity3D's documentation for the methods I'm asking about.
Thank you very much in advance. :)

How to separate the query and the train image from the Mat object returned by the drawMatches() method

I am trying to detect an object in a video. I am using SURF as the feature detector and descriptor extractor, and a brute-force matcher. I tested my work with faces: I captured a picture of myself, and when I run the camera and point it toward me, my face gets detected and a rectangle is drawn around it. I tried another test: I captured an image of my mouse and resized it, but when I run the camera it is not detected.
The problems I am facing are:
1. Does the size of the query/object image matter in such cases? I am asking because the image I captured of myself is bigger than the one of the mouse, and the face gets detected while the mouse does not.
2. Regardless of which image I use as the query/object image, how do I display a camera preview of only the train/scene image, without the query/object image? I am asking because what I get is something like the images posted below, while what I want is what is shown in the linked tutorial. I checked the code in that link; it is in C++, but I followed the same approach. The tutorial uses the 'drawMatches' method, whose Java counterpart is Features2d.drawMatches(), and both return a Mat with the query/object image on the left side and the train/scene image on the right side, as also shown in the image posted below.
What I want is to display the camera output without the query/object image: the area designated for the camera output should show only the train/scene image captured from the camera.
Please let me know how to solve these issues; I want to do something like what is shown in the tutorial I cited in the link.
1 - Size matters, but in your case I think the most crucial problem is "textureness". SURF detects interest points where the "texture gradient" is strong. In the case of your mouse, the gradient is mostly smooth, except around the logo (Fujitsu), the button, and the border of the image. Notice that the tutorial you point to uses a very textured object to demonstrate the effect.
2 - To the best of my knowledge, there is no fully automatic method to do what you want, but it can be done in a few steps. Basically, you must determine the bounding box of your object and then draw it. For the drawing, the easiest is to use cv::rectangle, but you can be more precise with four (or more) cv::line calls. To determine the bounding box, you can estimate the extreme points among the filtered matches.
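To make the "extreme points" step concrete, here is a small illustrative sketch (shown in Swift purely for illustration; the actual project would use OpenCV's Java or C++ API, and the helper name below is hypothetical). Collect the scene-side coordinates of the good matches, take the minima and maxima in x and y, and draw that rectangle on the scene frame instead of showing the side-by-side output of drawMatches:

    import CoreGraphics

    // Estimate the object's bounding box from the extreme points of the
    // filtered matches. In the real project these points would come from
    // the trainIdx side of each DMatch, looked up in the scene keypoints.
    func boundingBox(of matchedPoints: [CGPoint]) -> CGRect? {
        guard let first = matchedPoints.first else { return nil }
        var minX = first.x, maxX = first.x
        var minY = first.y, maxY = first.y
        for p in matchedPoints.dropFirst() {
            minX = min(minX, p.x); maxX = max(maxX, p.x)
            minY = min(minY, p.y); maxY = max(maxY, p.y)
        }
        return CGRect(x: minX, y: minY, width: maxX - minX, height: maxY - minY)
    }

    // Drawing this rectangle on the scene frame (e.g. with cv::rectangle)
    // replaces the two-image Mat produced by drawMatches.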
Good luck!

Fill image with different colors by detecting the different parts

I have an image of a landscape which I need to fill with different colors.
When I select a color from the palette and start scrubbing on any particular part, only that part should get the color, even if by mistake I drag my finger outside of that part of the image.
So basically I need to detect which part of the image I have tapped, so that only that part takes the color.
I am developing this app in Cocos2d-x, but any help with the logic would be a good starting point.
Here is an example of what I want.
Note: I know I could achieve this by using separate images and then detecting touches, but that increases the app size by a lot of MBs.
I guess the user will be able to draw only on the white parts of the image.
If the above is true, then in your touchesMoved method, check whether any black (non-white) pixel is present between the previous touch point and the current touch point.
If there is no such black pixel, draw; otherwise, don't draw.
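A minimal sketch of that check, assuming the outline artwork is available as a CGImage and the touch points are given in the image's pixel coordinates with the origin at the top-left (the function name and the darkness threshold are illustrative; a Cocos2d-x project would do the same thing against the texture's raw pixel data):

    import CoreGraphics

    // Returns true if any dark ("black") boundary pixel lies on the segment
    // between the previous and the current touch point.
    func crossesBoundary(in image: CGImage, from p0: CGPoint, to p1: CGPoint) -> Bool {
        let width = image.width, height = image.height
        var pixels = [UInt8](repeating: 0, count: width * height * 4)
        return pixels.withUnsafeMutableBytes { buffer -> Bool in
            // Decode the image into a predictable RGBA8 buffer.
            guard let base = buffer.baseAddress,
                  let ctx = CGContext(data: base, width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }
            ctx.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
            let bytes = base.assumingMemoryBound(to: UInt8.self)

            // Walk the segment point by point and look for a dark pixel.
            let steps = max(Int(max(abs(p1.x - p0.x), abs(p1.y - p0.y))), 1)
            for i in 0...steps {
                let t = CGFloat(i) / CGFloat(steps)
                let x = Int(p0.x + (p1.x - p0.x) * t)
                let y = Int(p0.y + (p1.y - p0.y) * t)
                guard x >= 0, x < width, y >= 0, y < height else { continue }
                let offset = (y * width + x) * 4
                if bytes[offset] < 64, bytes[offset + 1] < 64, bytes[offset + 2] < 64 {
                    return true   // the stroke would cross a boundary line
                }
            }
            return false
        }
    }

In touchesMoved, only extend the stroke when crossesBoundary(...) returns false. In practice you would decode the pixel buffer once and reuse it rather than rebuilding it on every touch event.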

How to dim/blur everything outside given rect in iOS?

I'm currently developing an iOS app that is using OCR.
Currently I'm using AVFoundation to preview the video from the camera (using Apple's sample AVCam).
For a good user experience I want to lay out a rectangle in the preview layer. The image inside this rectangle will be the image parsed by the OCR engine. My problem is that I would also like to "dim" everything outside this rectangle, and I'm currently out of ideas on how to solve this. Does anybody know how to do this?
Edit
This is what I would like to accomplish (image taken from the app Horizon):
http://i.imgur.com/MuuJNS9.png
You can use two black images covering the top and bottom areas that you want to "dim", and set the alpha of those images to a certain value, like 0.5.
Why not add a subview that covers the entire screen and set its background color to a semi-transparent gray - your gray overlay?
Then add the image parsed by the OCR engine as a subview of this gray overlay, in the center of it.
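A minimal sketch of the first suggestion (semi-transparent views covering the areas above and below the capture rect); the view names and the 0.5 alpha are illustrative:

    import UIKit

    // Dims everything above and below `captureRect` by covering those areas
    // with semi-transparent black views. `previewView` hosts the camera preview.
    func addDimming(around captureRect: CGRect, in previewView: UIView) {
        let bounds = previewView.bounds
        let topDim = UIView(frame: CGRect(x: 0, y: 0,
                                          width: bounds.width,
                                          height: captureRect.minY))
        let bottomDim = UIView(frame: CGRect(x: 0, y: captureRect.maxY,
                                             width: bounds.width,
                                             height: bounds.height - captureRect.maxY))
        for dimView in [topDim, bottomDim] {
            dimView.backgroundColor = UIColor.black.withAlphaComponent(0.5)
            dimView.isUserInteractionEnabled = false
            previewView.addSubview(dimView)
        }
    }

The same idea extends to left/right strips, or to a single full-screen overlay with the capture rect cut out via an even-odd CAShapeLayer mask, if the rectangle needs to be dimmed on all four sides.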

iOS: draw an interactive map

I need to draw an interactive map for an iOS application. For example, it could be a map of the US showing the states. It needs to show all the states in different colors (I'll get these from a delegate colorForStateNo:). It needs to allow the user to select a state by touching it, at which point its color changes and a "stick out" effect is shown, maybe even a symbol animated to appear over the selected state. Also, the color of some states will need to change depending on external events. This color change would be an animation like a circle starting in the middle of the state and growing toward the edges, changing the color from the current one to the one inside the circle.
Can this be done easily in Core Graphics, or is it only possible with OpenGL ES? What is the easiest way to do this? I have worked with Core Graphics and it doesn't seem to handle animation very well; I just redrew the entire screen when something needed to move... Also, how could I use an external image to draw the map? Setting up a lot of drawLineToPoint calls seems like a lot of work to draw only one state, let alone the whole map...
You could create the map using vector graphics and then have that converted to OpenGL calls.
Displaying SVG in OpenGL without intermediate raster
EDIT: The link applies to C++, but you may be able to find a similar solution.
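If the Core Graphics route mentioned in the question is acceptable, a common alternative (not part of the answer above, just a sketch with hypothetical names) is to trace each state as a UIBezierPath, render it with a CAShapeLayer, and hit-test touches against the paths; recoloring or animating a state then means changing that one layer rather than redrawing the whole map:

    import UIKit

    // Per-state vector paths with touch hit-testing.
    final class MapView: UIView {
        private var stateLayers: [String: CAShapeLayer] = [:]
        private var statePaths: [String: UIBezierPath] = [:]

        // `path` is a closed outline of the state in this view's coordinates,
        // traced from the source artwork.
        func addState(named name: String, path: UIBezierPath, color: UIColor) {
            let shape = CAShapeLayer()
            shape.path = path.cgPath
            shape.fillColor = color.cgColor
            layer.addSublayer(shape)
            stateLayers[name] = shape
            statePaths[name] = path
        }

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            // Find which state's path contains the touch and recolor that layer.
            for (name, path) in statePaths where path.contains(point) {
                stateLayers[name]?.fillColor = UIColor.yellow.cgColor
                break
            }
        }
    }

Because each state is its own CAShapeLayer, fill-color changes and "stick out" effects can be animated with Core Animation instead of redrawing the entire screen.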
