How to change MTKView current drawable size

As the Apple documentation states, the MTKView current drawable's default size is derived from the current view's size, in native pixels. Is there any way to make the MTKView current drawable half the device screen size?
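One hedged sketch of an approach: MTKView stops deriving the drawable size from the view once autoResizeDrawable is turned off, after which drawableSize can be assigned directly. The helper name below is mine; the MTKView lines are shown as comments because they only compile on iOS.

```swift
import Foundation

// Pure helper (name is mine): half the native pixel size for a view of
// `pointSize` points on a screen with the given native scale.
func halfDrawableSize(pointSize: CGSize, nativeScale: CGFloat) -> CGSize {
    CGSize(width: pointSize.width * nativeScale / 2,
           height: pointSize.height * nativeScale / 2)
}

// Usage on iOS (sketch, not compiled here):
// mtkView.autoResizeDrawable = false   // stop MTKView from tracking the view's size
// mtkView.drawableSize = halfDrawableSize(pointSize: mtkView.bounds.size,
//                                         nativeScale: UIScreen.main.nativeScale)
```

With autoResizeDrawable left at its default of true, MTKView would overwrite drawableSize again on the next layout pass, so disabling it first is the important step.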

Related

ios drawing lines to an imageView and combine the lines and the image to a new image, size of the image changed

I have an imageView, and say its size is the screen size. It displays an image which has a larger size, and the imageView's content mode is set to scaleAspectFill. Then I draw some lines on the imageView using UIBezierPath.
Later I would like to generate a new image which includes the lines I drew, by using drawViewHierarchyInRect. The problem is that the new image's size is the imageView's size, since the drawViewHierarchyInRect method works only like taking a snapshot. How can I combine the original image with the lines I drew while keeping the image's original size?
You want to use the method UIGraphicsBeginImageContextWithOptions to create an off-screen context of the desired size (in your case, the size of the image).
Then draw your image into the context, draw the lines on top, and extract your composite image from the context. Finally, dispose of the context.
There is plenty of sample code online showing how to use UIGraphicsBeginImageContextWithOptions. It's quite easy.
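A minimal Swift sketch of those steps (the function name is mine; it assumes the path has already been converted from view coordinates into the image's coordinate space, which with scaleAspectFill requires the aspect-ratio math from the other answers):

```swift
import UIKit

// Sketch: composite `path` (already in the image's coordinate space) on top
// of `image`, at the image's full size rather than the view's size.
func composite(image: UIImage, path: UIBezierPath, lineColor: UIColor) -> UIImage? {
    // Off-screen context at the image's size and scale.
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }   // dispose of the context when done

    image.draw(at: .zero)                   // the photo first
    lineColor.setStroke()
    path.stroke()                           // the lines on top
    return UIGraphicsGetImageFromCurrentImageContext()
}
```

Because the context is created at image.size instead of the view's bounds, the result keeps the original image's dimensions, which is exactly what the snapshot approach loses.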

Swift: is possible to get the default size of UIImageView used by UIImagePickerController?

In the iOS app I am developing, it is necessary for the user to take a photo. As you can see from the following image,
by default, when we take a photo with the iOS camera app, it is shown at a standard/default size (the one delimited by those 8 pins), and I would like my app to do the same. So, what size should I give the UIImageView in which the photo will be displayed? How can I get that default size in Swift?
Or maybe... what would be the best size to give the UIImageView to prevent the photo from being deformed too much?
Thank you very much for your attention.
UIImage has a property called size which specifies the width and height of the image, so you could size your UIImageView to match.
Alternatively, if you're using constraints or an autoresizing mask (flexible width, flexible height), simply don't set a size and the UIImageView will fill itself according to its contentMode.
You must understand, however, that what you see in the image you posted is not the "original size" of the image. Someone decided that the UIImageView should be placed at X distance from the top and bottom margins, thus forcing an implicit size on the UIImageView.

What does CALayer.contentsScale mean?

I'm reading this tutorial, iOS 7 Blur Effects with GPUImage. I have read the documentation, and I understand this property is the ratio of pixels to points (x px / y pt). But I don't get this line of code.
_blurView.layer.contentsScale = (MENUSIZE / 320.0f) * 2;
What's the logic behind this line? How should I determine the contentsScale in my code?
If I don't set the contentsScale, which defaults to 2.0, the screen looks like:
But after I set it to (MENUSIZE / 320.0f) * 2, the screen is:
This is strange because the contentsScale decreased but the image grew bigger. MENUSIZE is 150.0f.
contentsScale determines the size of the backing store bitmap, so that the bitmap will work on both nonretina and retina screens.
Let's say you make a layer (CALayer) into which you intend to draw. Let's say its size is 100x100. Then to make this layer look good on a double-resolution screen, you will want its contentsScale to be 2.0. This means that behind the scenes the bitmap is 200x200. But it is transformed so that you still treat it as 100x100 when you draw into it; you think in points, just as you normally would, and the backing store is scaled to match the doubled pixels of a retina device.
In most cases you don't have to worry about this, because if a layer is the main layer of a view, its contentsScale is set automatically for the current device. But if you create a layer yourself, in code, out of whole cloth, then setting its contentsScale based on the scale of the main UIScreen is up to you.
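A sketch of that relationship (the helper name is mine; the CALayer lines are comments because they only compile on iOS, where a hand-made layer's contentsScale defaults to 1.0):

```swift
import Foundation

// The backing-store pixel size implied by a layer's point size and contentsScale.
func backingPixels(pointSize: CGSize, contentsScale: CGFloat) -> CGSize {
    CGSize(width: pointSize.width * contentsScale,
           height: pointSize.height * contentsScale)
}

// Usage on iOS (sketch, not compiled here):
// let layer = CALayer()
// layer.frame = CGRect(x: 0, y: 0, width: 100, height: 100)  // size in points
// layer.contentsScale = UIScreen.main.scale  // 2.0 on retina -> 200x200 px backing store
// layer.setNeedsDisplay()
```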

Again edit of edited UIImage in iOS app

In my iOS app, I am placing several UIImages on one background UIImage and then saving the overall background image, with all subimages added to it, by taking a screenshot programmatically.
Now I want to change the positions of those sub-UIImages in the saved image. So I want to know how to detect the subview images' positions, given that I captured the whole thing as a screenshot.
Record their frames as converted to window coordinates. The pixels of the image should be the same as the frame origin (for non-retina) or double it (for retina). The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has some convenience methods to convert arbitrary view frames to another view's (or the window's) coordinates.
EDIT: to deal with a fitted content mode, you have to do the math yourself. You know the frame of the imageView, and you can ask the image for its size. Knowing the aspect ratio of each will let you determine in which dimension the image completely fits; then you can compute the other dimension (which will be smaller than the corresponding imageView dimension). Divide the difference between the view dimension and the image dimension by two, and that gives you the offset of the image inside the view. Now you can save the frame of the image as it's displayed in the view.
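That math can be sketched as a pure function (the name is mine): scale by the smaller width/height ratio so the image fits entirely, then center the leftover space.

```swift
import Foundation

// Sketch of the EDIT above: the rect (in view coordinates) that an image of
// `imageSize` occupies inside a view of `viewSize` under aspect-fit.
func aspectFitRect(imageSize: CGSize, viewSize: CGSize) -> CGRect {
    // The smaller ratio wins: that's the dimension in which the image fits exactly.
    let scale = min(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    let fitted = CGSize(width: imageSize.width * scale,
                        height: imageSize.height * scale)
    // Half the leftover space in each dimension is the image's offset in the view.
    return CGRect(x: (viewSize.width - fitted.width) / 2,
                  y: (viewSize.height - fitted.height) / 2,
                  width: fitted.width,
                  height: fitted.height)
}
```

For example, a 200x100 image in a 100x100 view fits exactly in width, is drawn 100x50, and sits 25 points down from the top of the view.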

How to resize OpenGL ES renderbuffer on iOS?

I am working on an application and a lot of the code is based on the GLPaint sample from Apple.
In the GLPaint sample the framebuffer and colorbuffer are destroyed and recreated in layoutSubviews.
I load an image from the image picker and resize it so its width/height is within the maximum texture size. I then set the GL view's frame to the same size.
When I resize my view and layoutSubviews is called for the second time, calling [context renderbufferStorage:fromDrawable:] returns NO, and therefore my FBO is incomplete. This is the exact same code that was initially used to set up the FBO and colorbuffer.
What's the proper way to resize the renderbuffer?
Code: https://gist.github.com/1340465
I'm pretty sure that there is no way to resize the render buffer. The only way is to recreate it when the target view is resized.
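Following that advice, a hedged Swift sketch of tearing down and recreating the buffers (the function name and parameters are mine, modeled on the GLPaint-style setup rather than the poster's exact code; this only compiles on iOS):

```swift
import OpenGLES
import QuartzCore

// Sketch: delete the old framebuffer and color renderbuffer, then recreate
// them; renderbufferStorage(_:from:) sizes the storage from the layer.
func recreateBuffers(context: EAGLContext, eaglLayer: CAEAGLLayer,
                     framebuffer: inout GLuint,
                     colorRenderbuffer: inout GLuint) -> Bool {
    glDeleteFramebuffers(1, &framebuffer)
    glDeleteRenderbuffers(1, &colorRenderbuffer)

    glGenFramebuffers(1, &framebuffer)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)

    glGenRenderbuffers(1, &colorRenderbuffer)
    glBindRenderbuffer(GLenum(GL_RENDERBUFFER), colorRenderbuffer)
    // Storage comes from the layer, so it picks up the layer's current size.
    guard context.renderbufferStorage(Int(GL_RENDERBUFFER), from: eaglLayer) else {
        return false
    }
    glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                              GLenum(GL_RENDERBUFFER), colorRenderbuffer)
    return glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER))
        == GLenum(GL_FRAMEBUFFER_COMPLETE)
}
```

The renderbuffer must be bound when renderbufferStorage(_:from:) is called, which is why the glBindRenderbuffer call precedes it.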
