To keep it simple: I have a RenderTexture from one camera, and I need to overlay it onto another camera, either through:
a) A RenderTexture of that camera
or
b) the camera's rendering directly
What I'm trying to do can also be seen in this representation:
fig. 1 shows the main render, fig. 2 the desired overlay to be applied, fig. 3 the two combined as an overlay, and fig. 4 post-processing applied to the newly combined image.
The first box is the main camera, and the second is what I want overlaid onto it as a RenderTexture in OnRenderObject(), i.e. when these two get rendered. Then in OnPostRender() the two are combined, with the overlay on top. Finally, in OnRenderImage(), image effects can freely modify the combined image.
To sum up what I need help with:
I do not know how to either:
Access the camera's rendering directly
or
Set a RenderTexture as a camera's rendering in OnPostRender()
I also need help, through explanation, with correctly overlaying a RenderTexture onto either one of the above (this would use the depth rendered to the RenderTexture as alpha), just as shown in fig. 3 of the image.
This is the method I've thought up to overlay a forward rendering onto a deferred one for image effects. If you have any other solutions or ideas, it would be much appreciated if you could post them as a comment.
Just to clarify, I'm not asking for source code, just methods and/or links to Unity3D's documentation for the methods I'm asking about.
Thank you very much in advance. :)
Related
Currently, RealityKit doesn't have any method that provides the currently visible entities. In SceneKit we do have a method for that particular functionality: nodesInsideFrustum(of:).
Our internal solution is to create a big fake bounding box in front of the camera. We then check intersections between the "frustum" bounding box and each entity's bounding box. That, of course, is a bit cumbersome and inaccurate. I wonder if someone has come up with a better solution and is willing to share it.
You could combine two ARView methods:
ARView.project(position) to get the 2D point in screen space
ARView.bounds.contains(point) to know if it's visible on screen
But that's not enough; you also have to check whether the object is behind you:
Entity.position(relativeTo: cameraAnchor) (with cameraAnchor being an AnchorEntity(.camera)) to get the entity's position in the camera's local space
the sign of localPosition.z tells you whether the entity is in front of or behind the camera
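Putting those pieces together, a minimal sketch might look like the following; the isVisible helper and the pre-existing cameraAnchor (an AnchorEntity(.camera) already added to the scene) are assumptions for the example, not RealityKit API:

    import RealityKit
    import UIKit

    // Hypothetical helper combining the steps above. `arView` and `cameraAnchor`
    // (an AnchorEntity(.camera) that was added to arView.scene) are assumed to exist.
    func isVisible(_ entity: Entity, in arView: ARView, cameraAnchor: AnchorEntity) -> Bool {
        // 1. Entity position relative to the camera; in RealityKit's camera space,
        //    negative z means the entity is in front of the camera.
        let localPosition = entity.position(relativeTo: cameraAnchor)
        guard localPosition.z < 0 else { return false }

        // 2. Project the entity's world position into 2D screen space.
        guard let screenPoint = arView.project(entity.position(relativeTo: nil)) else {
            return false
        }

        // 3. The entity's origin is visible if the projected point lies inside the view.
        return arView.bounds.contains(screenPoint)
    }

Note this only tests the entity's origin; for large entities you would want to project a few points of its bounding box instead.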
So I have followed one of the few tutorials on how to access the camera with UIImagePickerController, and that works fine. (It even implements face detection, which was my next step!)
But now I would like to create something like Apple's "grid" view with my own custom grid. I know how to make an image view, but after that I have a few more questions:
Would I make a separate image view and somehow layer it over the UIImagePickerController's image view that starts the camera?
Could I just make one UIImageView that already has the lines and then accesses the camera? If so, how would I do that? (See the sketch at the end of this question.)
My final goal would be to detect if something (like a face that the face detector has found) has crossed over into the grid or possibly just map where it is on the screen. Is this possible? How would I get an object's location on the camera screen?
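For what it's worth, a custom grid can be layered over the live preview through UIImagePickerController's cameraOverlayView property instead of juggling two image views by hand. A rough sketch (the GridOverlayView class and the rule-of-thirds layout are just example choices, not anything from the tutorial):

    import UIKit

    // Example overlay view that draws a simple rule-of-thirds grid.
    final class GridOverlayView: UIView {
        override func draw(_ rect: CGRect) {
            guard let context = UIGraphicsGetCurrentContext() else { return }
            context.setStrokeColor(UIColor.white.withAlphaComponent(0.7).cgColor)
            context.setLineWidth(1)
            for fraction: CGFloat in [1.0 / 3.0, 2.0 / 3.0] {
                // Vertical line
                context.move(to: CGPoint(x: rect.width * fraction, y: 0))
                context.addLine(to: CGPoint(x: rect.width * fraction, y: rect.height))
                // Horizontal line
                context.move(to: CGPoint(x: 0, y: rect.height * fraction))
                context.addLine(to: CGPoint(x: rect.width, y: rect.height * fraction))
            }
            context.strokePath()
        }
    }

    func makeCameraPicker() -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        let grid = GridOverlayView(frame: UIScreen.main.bounds)
        grid.backgroundColor = .clear
        grid.isUserInteractionEnabled = false   // let touches reach the camera controls
        picker.cameraOverlayView = grid         // the grid sits on top of the live preview
        return picker
    }

Since the overlay is an ordinary UIView, anything the face detector reports can be compared against the grid once its rectangle is converted into the overlay's coordinate space.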
I am trying to detect an object in a video. I am using SURF as the feature detector and descriptor extractor, and BruteForce as the matcher. I tested my work with faces: I captured a picture of myself, and when I run the camera and point it toward me, my face gets detected and a rectangle is drawn around it. Then I tried another test: I captured an image of my mouse and resized it, but when I run the camera, it is not detected.
The problems I am facing are:
1 - Does the size of the query/object image matter in such cases? I ask because the image I captured of myself is bigger than the one of the mouse, and the face is detected while the mouse is not.
2 - Regardless of which image I use as the query/object image, how can I display a camera preview of only the train/scene image, without the query/object image? I ask because what I am getting is something like the images posted below, while what I want is something like what is shown here. I checked the code in that link; it is in C++, but I followed the same steps. The tutorial also uses the drawMatches method, which has a Java counterpart, Features2d.drawMatches(), and both return a Mat with the query/object image on the left side and the train/scene image on the right side, as also shown in the image I posted below.
What I want is to display the camera output without the query/object image; the area designated for the camera output should show only the train/scene image captured from the camera.
Please let me know how to solve these issues; I want to do something like what is shown in the tutorial I cited in the link.
1 - Size matters, but in your case I think the most crucial problem is "textureness". SURF detects interest points where the "texture gradient" is strong. In the case of your mouse, the gradient is mainly smooth, except around the logo (Fujitsu), the button, and the border of the image. In the tutorial you point to, you can notice that it uses a very textured object to demonstrate the effect.
2 - To the best of my knowledge, there is no fully automatic method to do what you want, but it can be done in a few steps. Basically, you must determine the surrounding box of your object and then draw it. For drawing, the easiest is cv::rectangle, but you can be more precise with four (or more) cv::line calls. To determine the surrounding box, you can estimate the extreme points among the filtered matches.
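To make the "extreme points" step concrete, here is a small sketch; it uses plain CGPoint values in Swift to stand in for the scene-image coordinates of the filtered matches, so the boundingBox helper is illustrative rather than OpenCV API:

    import CoreGraphics

    // Estimate the surrounding box of the detected object from the locations of the
    // good matches in the scene image (the "extreme points" idea described above).
    func boundingBox(of matchedPoints: [CGPoint]) -> CGRect? {
        guard let first = matchedPoints.first else { return nil }
        var minX = first.x, maxX = first.x
        var minY = first.y, maxY = first.y
        for point in matchedPoints.dropFirst() {
            minX = min(minX, point.x); maxX = max(maxX, point.x)
            minY = min(minY, point.y); maxY = max(maxY, point.y)
        }
        // This rectangle is what you would then draw onto the camera frame
        // (e.g. with Imgproc.rectangle in OpenCV's Java bindings).
        return CGRect(x: minX, y: minY, width: maxX - minX, height: maxY - minY)
    }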
Good luck!
I am working on an OCR recognition app, and I want to give the user the option to manually select the area (while the camera is showing) on which to perform the OCR. The issue I face is that I draw a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method; however, despite the rectangle being there, the entire captured area is still used rather than just the region inside the rectangle.
In other words, I do not want the entire picture to be sent for processing, but rather only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but it has no functionality yet. I do not want the entire screen area to be used, only the area under the rectangle.
I hope this makes sense, since I have tried my best to explain it.
Thanks and let me know
Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the camera into the proper place... now use a UIGraphics image context to take a "screen-shot" of this area and send that UIImage's CGImage in for processing.
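A rough sketch of the "screen-shot then crop" part, assuming previewView is whatever view shows the camera output and cropRect is the user's selection in that view's coordinate space (both names are hypothetical):

    import UIKit

    // Snapshot the visible preview and crop it to the selected rectangle.
    func croppedSnapshot(of previewView: UIView, to cropRect: CGRect) -> CGImage? {
        // Render the preview view into an image (the "screen-shot").
        let renderer = UIGraphicsImageRenderer(bounds: previewView.bounds)
        let snapshot = renderer.image { _ in
            _ = previewView.drawHierarchy(in: previewView.bounds, afterScreenUpdates: true)
        }

        // Crop to the selection, converting points to pixels via the image scale.
        guard let cgImage = snapshot.cgImage else { return nil }
        let scale = snapshot.scale
        let pixelRect = CGRect(x: cropRect.origin.x * scale,
                               y: cropRect.origin.y * scale,
                               width: cropRect.width * scale,
                               height: cropRect.height * scale)
        return cgImage.cropping(to: pixelRect)   // hand this CGImage to the OCR step
    }

The same cropping idea applies if you instead grab frames straight from the AVCaptureOutput.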
I was wondering how the characters in this app are animated on screen. Is it possible to have a video with a transparent background to use as the overlay of the camera capture? Is this just a set of UIImages animated together? These characters seem more animated than simple GIFs.
That is most likely an OpenGL animation you are seeing overlaid on the camera display. See one of the often-cited answers by Brad Larson on how to do that - it includes a linked example project (that dude rocks).
To achieve that effect, you use the input of the camera, put that on a planar object as a texture, and render your stuff (highly animated characters or even naked, dancing robot women) on top of it. Presto.