I would like to take a picture with the back camera on iOS.
I am using WebCamTexture to get the back camera feed.
I draw those pixels onto a UI texture and then capture a screenshot.
Everything works, but the result is pixelated because the WebCamTexture tops out at 1920x1080.
Is there any other way to get a picture from a device camera in Unity?
I have already tried a lot of plugins, but none of them seem to work any better.
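For reference, this is roughly the setup I have (the RawImage field and the requested resolution are placeholders, not my exact values):

using UnityEngine;
using UnityEngine.UI;

public class BackCameraCapture : MonoBehaviour
{
    public RawImage preview;          // UI element the camera feed is drawn on
    private WebCamTexture camTexture;

    void Start()
    {
        // Pick the back-facing camera.
        string backCam = null;
        foreach (var device in WebCamTexture.devices)
            if (!device.isFrontFacing) backCam = device.name;

        // The requested resolution is only a hint; the device clamps it
        // (in my case to 1920x1080).
        camTexture = new WebCamTexture(backCam, 4032, 3024);
        preview.texture = camTexture;
        camTexture.Play();
    }
}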
I'm using OpenCV 4.3 with C++ and VideoCapture to get the video stream from a Logitech C922 Pro webcam.
I'm changing the resolution with:
cap.set(CAP_PROP_FRAME_HEIGHT, height);
cap.set(CAP_PROP_FRAME_WIDTH, width);
This works fine until I request either 640x480 or 640x360. In those cases OpenCV stays at the aspect ratio it had before (640x480 on startup).
Is there an efficient way to change the resolution directly to the right one, instead of having to switch to another resolution first?
CAP_PROP_FRAME_HEIGHT and CAP_PROP_FRAME_WIDTH are used to switch your camera's resolution. If your camera doesn't support the resolution you set, the call has no effect. If you check the specs of the Logitech C922, you will see that it doesn't list the resolution you want.
There are other options you can apply (see the sketch below):
1. You can crop the desired size (e.g. 640x360) out of the full frame.
2. You can resize the frame, but this will distort the aspect ratio.
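A minimal sketch of both options, assuming the capture delivers 640x480 frames (the crop rectangle and target size are only examples):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame;

    while (cap.read(frame))  // frame arrives at the camera's native size, e.g. 640x480
    {
        // Option 1: crop a centered 640x360 region (keeps the pixel aspect, loses the edges).
        cv::Rect roi(0, (frame.rows - 360) / 2, 640, 360);
        cv::Mat cropped = frame(roi).clone();

        // Option 2: resize the whole frame to 640x360 (keeps all content, distorts the aspect).
        cv::Mat resized;
        cv::resize(frame, resized, cv::Size(640, 360));

        cv::imshow("cropped", cropped);
        cv::imshow("resized", resized);
        if (cv::waitKey(1) == 27)  // Esc to quit
            break;
    }
    return 0;
}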
I'm using OpenGL ES to display CVPixelBuffers on iOS. The OpenGL pipeline uses the fast texture upload APIs (CVOpenGLESTextureCache*). When running my app on an actual device the display is great, but on the simulator it's not the same (I understand that those APIs don't work on the simulator).
I noticed that, when using the simulator, the pixel format is kCVPixelFormatType_422YpCbCr8, and I'm trying to extract the Y and UV components and use glTexImage2D to upload them, but I'm getting some incorrect results. For now I'm concentrating on the Y component only, and the result looks as if the image is half the expected width and duplicated, if that makes sense.
I would like to know from someone who has successfully displayed YUV422 video frames on the iOS simulator whether I'm on the right track, and/or get some pointers on how to solve my problem.
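For reference, this is roughly the Y-plane extraction I'm attempting; it assumes kCVPixelFormatType_422YpCbCr8 is the '2vuy' ordering (Cb Y0 Cr Y1), and the function and variable names are mine:

#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#include <stdint.h>
#include <stdlib.h>

static void UploadLumaTexture(CVPixelBufferRef pixelBuffer)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width        = CVPixelBufferGetWidth(pixelBuffer);
    size_t height       = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow  = CVPixelBufferGetBytesPerRow(pixelBuffer);
    const uint8_t *base = CVPixelBufferGetBaseAddress(pixelBuffer);

    // '2vuy' interleaves Cb Y0 Cr Y1, so every second byte (the odd offsets) is a Y sample.
    uint8_t *yPlane = malloc(width * height);
    for (size_t row = 0; row < height; row++) {
        const uint8_t *src = base + row * bytesPerRow;
        for (size_t col = 0; col < width; col++) {
            yPlane[row * width + col] = src[col * 2 + 1];
        }
    }

    // Upload the tightly packed Y plane as a single-channel texture.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei)width, (GLsizei)height,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, yPlane);

    free(yPlane);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}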
Thanks!
Snapchat lets you take a full-screen camera photo on iOS. The preview is full screen, and the image returned is full screen. There appears to be no cropping/stretching/etc. What you see is what you get.
Now I've looked all over the place, and I can't figure out how this is actually being done, given that the iPhone camera always returns an image with a 4:3 aspect ratio. Yes, you can use the camera view transform to get a full-screen "preview", but the image returned is still 4:3 and needs to be cropped.
So my question is: how do you take a full-screen camera photo on iOS without cropping? If your answer is that it can't be done, then how is Snapchat doing it (or appearing to do it)?
Snapchat isn't displaying everything the camera is picking up. By cropping a bit from the top/bottom or the sides, they can create a 16:9 image from a 4:3 one. This is easy to verify (a sketch of the cropping approach follows the steps below):
1. Open the Snapchat and Camera apps so it's easy to switch between them.
2. Place your phone on its side, pointed at something with some marks to use as reference points.
3. Switch between the apps without moving the phone. There is content in the Camera app that you do not see in Snapchat.
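If you want to reproduce the effect yourself, the usual trick is to set the AVCaptureVideoPreviewLayer's videoGravity to AVLayerVideoGravityResizeAspectFill so the 4:3 feed covers the whole screen, and then crop the captured 4:3 image down to the screen's aspect ratio. A rough sketch of that crop (the helper below is my own illustration, not an Apple API):

#import <UIKit/UIKit.h>

// Trim a 4:3 camera image down to a wider aspect ratio (e.g. 16:9) by cutting
// away the excess on the longer side. It works on the underlying CGImage, so
// the math does not depend on the UIImage's orientation flag.
static UIImage *CropToAspect(UIImage *image, CGFloat longToShortRatio)
{
    CGImageRef cg = image.CGImage;
    CGFloat w = CGImageGetWidth(cg);
    CGFloat h = CGImageGetHeight(cg);
    CGRect crop;

    if (w >= h) {
        // Landscape pixels: keep the full width, trim the height.
        CGFloat newH = w / longToShortRatio;
        crop = CGRectMake(0, (h - newH) / 2.0, w, newH);
    } else {
        // Portrait pixels: keep the full height, trim the width.
        CGFloat newW = h / longToShortRatio;
        crop = CGRectMake((w - newW) / 2.0, 0, newW, h);
    }

    CGImageRef croppedCG = CGImageCreateWithImageInRect(cg, crop);
    UIImage *result = [UIImage imageWithCGImage:croppedCG
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(croppedCG);
    return result;
}

// Example: crop a captured photo to a 16:9 screen.
// UIImage *fullScreenPhoto = CropToAspect(capturedPhoto, 16.0 / 9.0);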
I'm currently using AVCaptureSessionPresetPhoto to take my pictures, and I'm adding filters to them. The problem is that the resolution is so big that I get memory warnings ringing all over the place. The picture is simply way too large to process; it crashes my app every single time. Is there any way I can specify the resolution to shoot at?
EDIT:
Photography apps like Instagram or the Facebook Camera app, for example, can do this without any problems. These applications can take pictures at high resolution, scale them down, and process them without any delay. I did a comparison check: the native iOS camera maintains a much higher resolution than pictures taken by other applications. That extreme level of quality isn't really required on a mobile platform, so it seems as if these images are being taken at a lower resolution to allow for faster processing and quick upload times. Thus there must be a way to shoot at a lower resolution. If anyone has a solution to this problem, it would be greatly appreciated!
You need to resize the image after capturing it with AVCaptureSession, and store the resized image.
You can find lots of similar questions on Stack Overflow; I'm putting some links below that should help you.
One more thing, as a suggestion: use SDWebImage to display images asynchronously, so the app keeps working smoothly. There are also other ways (for example, Grand Central Dispatch (GCD), NSOperationQueue, etc.) to do asynchronous tasks in iOS.
Resize image (a minimal sketch follows the links below):
How to resize an image in iOS?
UIImage resizing not working properly
How to ReSize Image with Good Quality in iPhone
How to resize the image programmatically in objective-c in iphone
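For completeness, a minimal resize sketch along the same lines as those answers (the helper name and the target size are just examples):

#import <UIKit/UIKit.h>

// Scale a captured UIImage down before filtering it; a smaller bitmap keeps
// memory use in check.
static UIImage *ResizedImage(UIImage *image, CGSize targetSize)
{
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}

// Usage: UIImage *small = ResizedImage(capturedPhoto, CGSizeMake(1024, 768));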
I have an app that I would like to have video capture for the front-facing camera only. That's no problem. But I would like the video capture to always be in landscape, even when the phone is being held in portrait.
I have a working implementation based on the AVCamDemo code that Apple published. And borrowing from the information in this tech note, I am able to specify the orientation. There's just one trick: while the video frame is oriented correctly, the contents still appear as though they were shot in portrait.
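Roughly how I'm setting the orientation (simplified from my AVCamDemo-based code; movieFileOutput is my AVCaptureMovieFileOutput):

AVCaptureConnection *connection =
    [movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
if ([connection isVideoOrientationSupported]) {
    connection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
}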
I'm wondering if I'm just getting boned by the physical constraints of the hardware: is the image sensor just oriented this way? The tech note referenced above says:
Important: Setting the orientation on a still image output and movie file output doesn't physically rotate the buffers. For the movie file output, it applies a track transform (matrix) to the video track so that the movie is rotated on playback, and for the still image output it inserts exif metadata that image viewers use to rotate the image properly when viewing later.
But my playback of that video suggests otherwise. Any insight or suggestions would be appreciated!
Thanks,
Aaron.
To answer your question: yes, the image sensor is just oriented that way. The video camera is an approximately 1-megapixel "1080p" camera with a fixed orientation. The 5 MP (or 8 MP on the 4S, etc.) still camera also has a fixed orientation. The lenses themselves don't rotate, nor do any of the other camera bits, so the feed itself has a fixed orientation.
"But wait!", you say, "pictures I take with the camera app (or API) get rotated correctly. Why is that?" That's because iOS looks at the orientation of the phone when a picture is taken and stores that information with the picture (as an Exif attachment). Video isn't flagged that way -- each frame would have to be individually flagged, and then there are questions about what to do when the user rotates the phone during video...
So, no, you can't ask a video stream or a still image what orientation the phone was in when the video was captured. You can, however, directly ask the phone what orientation it is in now:
UIDeviceOrientation currentOrientation = [UIDevice currentDevice].orientation;
If you do that at the start of video capture (or when you grab a still image from a video feed), you can then use that information to do your own rotation on playback.
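For example, a sketch of how you could record the orientation at capture time and rotate the player layer later. The angle mapping is my own illustration and depends on which camera and session configuration you use, so treat the values as placeholders:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>
#include <math.h>

// Map the device orientation sampled at the start of capture to a playback
// rotation angle. Adjust the angles for your camera/session setup.
static CGFloat RotationForOrientation(UIDeviceOrientation orientation)
{
    switch (orientation) {
        case UIDeviceOrientationLandscapeLeft:      return 0.0;     // sensor's native orientation
        case UIDeviceOrientationLandscapeRight:     return M_PI;
        case UIDeviceOrientationPortraitUpsideDown: return -M_PI_2;
        default:                                    return M_PI_2;  // portrait and face up/down
    }
}

// Sample the orientation once when recording starts, store it alongside the
// movie, then apply it to the layer that plays the movie back.
static void RotatePlaybackLayer(AVPlayerLayer *playerLayer,
                                UIDeviceOrientation captureOrientation)
{
    CGFloat angle = RotationForOrientation(captureOrientation);
    playerLayer.transform = CATransform3DMakeRotation(angle, 0.0, 0.0, 1.0);
}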