OpenGL view drawing issue - iOS

For dimension reasons, I resize the OpenGL view to 2.0x its original scale, like this:
NSInteger Dimension = 2;
self.glView = [[WQPaintGLView alloc] initWithFrame:CGRectMake(0, 0, width*Dimension, height*Dimension)];
CGAffineTransform tScale = CGAffineTransformMakeScale((float)1/Dimension, (float)1/Dimension);
CGAffineTransform tTranslate = CGAffineTransformTranslate(tScale, -width, -height);
self.glView.transform = tTranslate;
[self.canvasContainerView addSubview:self.glView];
But I get a strange issue: I can only draw stuff in the bottom-left 1/4 of the area.
What did I do wrong?

UIView transforms and OpenGL are not very compatible. Resizing the view after OpenGL initialization can also be troublesome; in most cases a new render buffer must be created from the view.
In any case, since you scaled the view to get a larger surface, you should check the following calls:
glViewport defines which part of the buffer you are writing to. Usually it is set to (0, 0, viewWidth, viewHeight); in your case it must include the scale as well (see the sketch after this list).
glOrtho (or glFrustum) defines your coordinate system, if used. These should most likely stay the same regardless of the view scale.
Any other matrix or scissor state that may be derived from the view's frame.
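For example, with an OpenGL ES 2.0 setup the viewport is usually sized from the render buffer itself, which already reflects any scaling. A minimal sketch, assuming colorRenderbuffer is whatever name your view gives its color render buffer:
// Query the actual pixel size of the render buffer (OpenGLES/ES2/gl.h)
// and make the viewport cover it fully.
GLint bufferWidth = 0, bufferHeight = 0;
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &bufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &bufferHeight);
glViewport(0, 0, bufferWidth, bufferHeight);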
By all means, if possible, remove the transform from the view and try to find a better solution.

Related

Flip video horizontally on x-axis in Objective-C

I have to play two videos simultaneously on a view. Both videos are the same.
Now, my concern is that the video on the right actually has to be flipped horizontally along the x-axis and then saved to the photo library. I have googled a lot and found that an affine transform (CGAffineTransform) can help, but I am not able to use it in my code. Kindly help me flip the video on the right horizontally while keeping the scale and translation the same.
Any help or guidance in this direction would be appreciated. Thanks in advance!
Check the difference between the videos on the left and right: the video on the left is complete, but the video on the right shows only half the video.
To just flip (mirror) the video horizontally, use a negative x-value for scaling:
CGAffineTransform scale = CGAffineTransformMakeScale( -1.0, 1.0);
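Note that a bare negative scale mirrors about x = 0 and pushes the frame into negative coordinates. To mirror the video in place, concatenate a translation by the track's width; a sketch, assuming videoTrack is the AVAssetTrack being flipped:
// Mirror horizontally, then shift back by the track width so the
// mirrored frame lands in its original position.
CGAffineTransform flip = CGAffineTransformMakeScale(-1.0, 1.0);
CGAffineTransform t = CGAffineTransformTranslate(flip, -videoTrack.naturalSize.width, 0);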
Edit: Regarding your more general question on how to position tracks: For each video track involved:
build a bounding rect based on the original size of the track - this is the source rect
ask yourself where the track should end up, in terms of origin and size in the resulting video - this gives you the destination rect
You can then derive the corresponding affine transform with a function like:
CGAffineTransform NHB_CGAffineTransformMakeRectToRect( CGRect srcRect, CGRect dstRect)
{
    CGAffineTransform t = CGAffineTransformIdentity;
    t = CGAffineTransformTranslate( t, dstRect.origin.x - fmin( 0., dstRect.size.width), dstRect.origin.y - fmin( 0., dstRect.size.height));
    t = CGAffineTransformScale( t, dstRect.size.width / srcRect.size.width, dstRect.size.height / srcRect.size.height);
    return t;
}
To mirror, provide a negative size for the corresponding axis (the - fmin(,) part compensates for the resulting offset).
Given a video track and assuming, for example, the track should go mirrored to the right half of a 640x480 video, you can get the corresponding transform with:
CGSize srcSize = videoTrack.naturalSize;
CGRect srcRect = CGRectMake( 0, 0, srcSize.width, srcSize.height);
CGRect dstRect = CGRectMake( 320, 0, -320, 480);
CGAffineTransform t = NHB_CGAffineTransformMakeRectToRect(srcRect, dstRect);
Of course this may stretch the video track; to keep the aspect ratio, you'll have to take the source size into account when calculating the destination rect.
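For example, an aspect-preserving destination rect for the mirrored right half could be derived like this; a sketch under the same 640x480 assumption, reusing srcSize and srcRect from the snippet above:
// Fit the track into the right half (320x480) while preserving aspect ratio.
CGFloat fitScale = fmin(320.0 / srcSize.width, 480.0 / srcSize.height);
CGFloat dstWidth = srcSize.width * fitScale;
CGFloat dstHeight = srcSize.height * fitScale;
// Negative width mirrors, as before; origin.x stays the left edge of the slot.
CGRect dstRect = CGRectMake(320, 0, -dstWidth, dstHeight);
CGAffineTransform t = NHB_CGAffineTransformMakeRectToRect(srcRect, dstRect);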
Some remarks:
note that in NHB_CGAffineTransformMakeRectToRect I deliberately chose to start with the identity matrix and then add the required transforms one by one. This way much more complex transforms can be built, including rotation. As I said, try to get a grasp of affine transforms; they're really powerful
AVAssetTrack's naturalSize sometimes returns confusing results for some videos with complex SAR/PAR definitions. To make this bulletproof, you'll have to derive the size from the dimensions in the corresponding format descriptions, but that's a whole new topic...
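For completeness, the computed transform is typically applied to a track through an AVMutableVideoCompositionLayerInstruction; a minimal sketch, assuming the surrounding AVFoundation composition setup already exists:
// Apply the computed transform t to the track for the whole duration.
AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
[layerInstruction setTransform:t atTime:kCMTimeZero];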

Setting UIView.transform to arbitrary translate CGAffineTransform does nothing

I have a UIView called container that I want to move (offset) using an affine transform. This view contains a UIImageView and is a subview of a UICollectionViewCell.
So it should be simple:
container.transform = CGAffineTransformMakeTranslation(100, 200) //render container 100 points right and 200 points down
Instead it is very hard, because that code does not do anything. The view is rendered in exactly the same place as if I deleted that line. So I added a print to verify what affine transform was set:
container.transform = CGAffineTransformMakeTranslation(100, 200)
print(container.transform) //prints: CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 100.0, ty: 200.0)
That seems all right. So I tried rotating the container view instead with CGAffineTransformMakeRotation, and it rotates the view, just not around its center as it should according to the documentation. I tried different combinations of translate, rotate, and scale transforms, only to find that the affine transformation matrices are set correctly, but the tx and ty attributes seem to be ignored, and a, b, c, and d seem to use a different anchor point than the center of the view (I cannot say what that point is).
Any ideas on what can be causing this and how to fix it?
There must be something like auto layout messing things up for you. In the absence of outside influence, setting a view's affine transform to CGAffineTransformMakeTranslation(100, 200) will shift it right 100 points and down 200. I verified this by making a new Single View Project in Xcode and changing the viewDidLoad method in the ViewController.swift class to:
override func viewDidLoad()
{
    super.viewDidLoad()
    view.backgroundColor = UIColor.blueColor();
    let container = UIView(frame: CGRectMake(0,0,100,100));
    container.backgroundColor = UIColor.greenColor();
    container.transform = CGAffineTransformMakeTranslation(100, 200);
    view.addSubview(container);
}
As expected this makes the green container view appear 100 points to the right and 200 points down, even though its frame is (0,0,100,100).
So please check for auto layout and other such things that might influence the placement of this view, and if you can't find anything please post more code. Also, if your container view doesn't have a background color, please give it one so that you can see its position directly, instead of deducing its position by looking at the image view.
n.b. Setting a view's transform doesn't actually move the view itself, it just changes how/where it draws its content.

How to change the scale of a view

I am trying to make a sidebar menu like in the Eurosport app. When the menu slides in from the left, the source view controller slides to the left and becomes smaller.
var percentWidthOfContainer = containerView.frame.width * 0.2 // this is 20 percent of width
var widthOfMenu = containerView.frame.width - percentWidthOfContainer
bottomView.transform = self.offStage(widthOfMenu)
bottomView.frame.origin.y = 60
bottomView.frame.size = CGSizeMake(widthOfMenu, 400)
bottomView.updateConstraints()
menucontroller.view.frame.size = CGSizeMake(widthOfMenu, containerView.frame.height)
menucontroller.updateViewConstraints()
Here, the bottom view is the source view controller's view. So, the question is how to scale the bottom view. In my case, I can change the size, but everything inside the view stays the same size.
You can use CGAffineTransformScale to scale an instance of UIView.
For instance, suppose you have an instance of UIView:
UIView *view;
// let's say you have instantiated and customized your view
..
..
// keep the original transform of the view in a variable
CGAffineTransform viewsOriginalTransform = view.transform;
// to scale down the view, use CGAffineTransformScale
view.transform = CGAffineTransformScale(viewsOriginalTransform, 0.5, 0.5);
// scaling by 1.0 restores the view to its original size
view.transform = CGAffineTransformScale(viewsOriginalTransform, 1.0, 1.0);
As per Apple's docs:
The CGAffineTransform data structure represents a matrix used for affine transformations. A transformation specifies how points in one coordinate system map to points in another coordinate system. An affine transformation is a special type of mapping that preserves parallel lines in a path but does not necessarily preserve lengths or angles. Scaling, rotation, and translation are the most commonly used manipulations supported by affine transforms, but skewing is also possible.
So your solution to shrink the size of bottomView:
bottomView.transform = CGAffineTransformMakeScale(0.2, 0.2) // you can change it as per your requirement
If you want to restore it to its original size:
bottomView.transform = CGAffineTransformMakeScale(1.0, 1.0)
Just in case you want to expand the bottom view beyond its original size:
bottomView.transform = CGAffineTransformMakeScale(1.3, 1.3) // you can change it as per your requirement
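For the sidebar use case, you can combine the scale with a translation in a single transform instead of mutating the frame; a sketch, with the 80% width and the slide direction as placeholder assumptions:
// Scale the whole view (subviews included) and shift it in one transform.
CGFloat menuWidth = containerView.frame.size.width * 0.8;
CGAffineTransform slide = CGAffineTransformMakeTranslation(menuWidth, 0);
bottomView.transform = CGAffineTransformScale(slide, 0.8, 0.8);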

Direct3D9 fullscreen app - deformed rendering

I have been coding hard on a Direct3D9-based game. Everything went well until I hit a big problem. I created a class that wraps the process of loading a mesh from a .x file. I successfully loaded a cube with only one face visible. In theory, that face should look like a square, but it is actually rendered as a rectangle. I am quite sure that there is something wrong with the D3DPRESENT_PARAMETERS structure. Below are only the most important lines of my application's initialization.
First part to be created is the focus window:
HWND hWnd = CreateWindowEx(0UL, L"NewFrontiers3DWindowClass", Title.c_str(), WS_POPUP | WS_EX_TOPMOST, 0, 0, 1280, 1024, nullptr, (HMENU)false, hInstance, nullptr);
Then I fill out the D3DPRESENT_PARAMETERS structure.
D3DDISPLAYMODE D3DMM;
SecureZeroMemory(&D3DMM, sizeof(D3DDISPLAYMODE));
if(FAILED(hr = Direct3D9->GetAdapterDisplayMode(Adapter, &D3DMM)))
{
    // Error is processed here
}
PresP.BackBufferWidth = D3DMM.Width;
PresP.BackBufferHeight = D3DMM.Height;
PresP.BackBufferFormat = BackBufferFormat;
PresP.BackBufferCount = 1U;
PresP.MultiSampleType = D3DMULTISAMPLE_NONE;
PresP.MultiSampleQuality = 0UL;
PresP.SwapEffect = D3DSWAPEFFECT_DISCARD;
PresP.hDeviceWindow = hWnd;
PresP.Windowed = false;
PresP.EnableAutoDepthStencil = EnableAutoDepthStencil;
PresP.AutoDepthStencilFormat = AutoDepthStencilFormat;
PresP.Flags = D3DPRESENTFLAG_DISCARD_DEPTHSTENCIL;
PresP.FullScreen_RefreshRateInHz = D3DMM.RefreshRate;
PresP.PresentationInterval = PresentationInterval;
Then the Direct3D9 device is created, followed by the SetRenderState functions.
Next, the viewport is assigned.
D3DVIEWPORT9 D3D9Viewport;
SecureZeroMemory(&D3D9Viewport, sizeof(D3DVIEWPORT9));
D3D9Viewport.X = 0UL;
D3D9Viewport.Y = 0UL;
D3D9Viewport.Width = (DWORD)D3DMM.Width;
D3D9Viewport.Height = (DWORD)D3DMM.Height;
D3D9Viewport.MinZ = 0.0f;
D3D9Viewport.MaxZ = 1.0f;
if(FAILED(Direct3D9Device->SetViewport(&D3D9Viewport)))
{
    // Error is processed here
}
After this initialization, I globally declare some parameters that will be used later.
D3DXVECTOR3 EyePt(0.0f, 0.0f, -5.0f), Up(0.0f, 1.0f, 0.0f), LookAt(0.0f, 0.0f, 0.0f);
D3DXMATRIX View, Proj, World;
The update function looks like this:
Mesh.Render(Direct3D9Device);
D3DXMatrixLookAtLH(&View, &EyePt, &LookAt, &Up);
Direct3D9Device->SetTransform(D3DTS_VIEW, &View);
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, 1.0f, 1.0f, 1000.f);
Direct3D9Device->SetTransform(D3DTS_PROJECTION, &Proj);
D3DXMatrixTranslation(&World, 0.0f, 0.0f, 0.0f);
Direct3D9Device->SetTransform(D3DTS_WORLD, &World);
The device is not a null pointer.
I recently realized that there is no difference between declaring and setting up a viewport and not doing so.
If there is anybody who can point me to the right answer, please help me solve this annoying problem.
If you don't set any transformation matrices, then the identity transform is applied to your mesh, and the face of the cube will be stretched to the same shape as the viewport. If your viewport isn't square (e.g. it's the same size as the screen), then your cube's face won't be square either.
You can use a square viewport to work around this problem, but that will limit your rendering to just that square on the screen. If you want to render to the entire screen, you'll need to set a suitable projection matrix. You can calculate a normal perspective matrix using D3DXMatrixPerspectiveFovLH. If you want an orthographic projection, where everything is the same size regardless of its distance from the camera, use D3DXMatrixOrthoLH to calculate the projection matrix. Note that if you use your viewport's width and height with the latter function, it will shrink your cube: a unit-size cube will be rendered as a single pixel on the screen. You can either use a world or view transform to scale it up again, or use something like width/height and 1 as the width and height parameters to D3DXMatrixOrthoLH.
If you go with D3DXMatrixPerspectiveFovLH then you want something like this:
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, (double) D3DMM.Width / D3DMM.Height,
1.0f, 1000.f);
I think your problem is not in the D3DPP parameters but in your projection matrix. If you use D3DXMatrixPerspectiveFovLH, check that the aspect ratio is 1280 / 1024 = 1.25f.

Is drawRect "wasteful" when cropping? Is there an alternative?

Let's say you have an original image that is
200 high, 100 wide
Let's say you want to draw only a square of it. Let's say, just the bottom square.
Let's say you want to draw it on to a new small image that is
20 high, 20 wide
Of course, you simply do this:
CGRect imageRect = CGRectMake( -10,0, 20,20);
.. begin graphics context ..
[originalImage drawInRect:imageRect];
With drawInRect:, you supply a rectangle of the same overall shape (same proportions) as the original image, but expressed in the size of the new canvas. No problem.
BUT:
in the example, you are drawing THE WHOLE ORIGINAL IMAGE -- THE WHOLE 200 HEIGHT -- onto the new small square.
(Of course the "top half" misses the new canvas, and you only get the bottom half on the new canvas -- which is what you wanted.)
My impression is that iOS renders or calculates the "whole" original image, and only "puts on" the bottom half (in the example) onto the new canvas.
This seems very wasteful.
IS THERE A FASTER WAY TO DO THIS?
It seems like there should be a command, something like this:
drawThisPartOfTheOriginalImage: (0,100 to 100,200)
ontoThisPartOfTheNewCanvas: (0,20 to 20,20)
What's the situation? Is there a more efficient command than drawRect when you are only drawing a small part of the original image? Cheers
CGContextClipToRect approach... (doesn't work!)
I experimented with CGContextClipToRect as Peter suggested below.
CGContextClipToRect indeed sets the area you will draw to on your "result" canvas. I simply set it to the size of that result canvas (it would be 20x20 in the example above). To repeat, the aim here is to have iOS save time by avoiding pointlessly drawing the, err, not-drawn part of the original.
This example is for an original image of 2000x2000 drawn onto a 500x500 result (i.e., only drawing the top-left quarter of the original onto the result).
In fact, notice it is slightly slower when you include the CGContextClipToRect, again suggesting iOS "knows when to stop" anyway.
// no need to "overdraw"... quickener turned OFF
//CGContextRef c = UIGraphicsGetCurrentContext();
//CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.443669
// no need to "overdraw"... quickener turned ON
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.461845
As you can see it's a hair slower, actually, adding the CGContextClipToRect trick.
For the record, here is the exact routine used to crop an image:
-(UIImage *)simplishTopCrop:(UIImage *)fromImage
{
    // check for zero fromImage.size.width etc etc
    CGSize resultSize = CGSizeMake(640,640);
    CGFloat scale = MAX(
        resultSize.width/fromImage.size.width,
        resultSize.height/fromImage.size.height);
    CGFloat width = fromImage.size.width * scale;
    CGFloat height = fromImage.size.height * scale;
    CGRect imageRect = CGRectMake(0,0, width,height);
    UIGraphicsBeginImageContextWithOptions(resultSize, NO, 0);
    // INSERT 'CGContextClipToRect' TRICK ABOVE, RIGHT HERE
    [fromImage drawInRect:imageRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This is where clipping comes in. Clip to your dirty rect, then draw the whole image into your bounds. The clipping path will keep the rest of the image at least from appearing, and hopefully from being composited or sampled at all.
If your profiling in Instruments finds that that is not efficient enough, you might try cropping the image itself, using CGImageCreateWithImageInRect, and then drawing that image into your dirty rect. You may want to keep your cropped image around and only throw it away when the rect changes. One way or the other, cropping the image may be more efficient, but don't forget to profile both before and after to prove that.
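A minimal sketch of that cropping approach, assuming the 100-wide, 200-high original from the question; note that CGImageCreateWithImageInRect works in the pixel coordinates of the underlying CGImage, so multiply the rect by the image's scale on Retina sources:
// Crop the bottom 100x100 square out of the original, then draw the
// small result into the 20x20 context without any overdraw.
CGRect cropRect = CGRectMake(0, 100, 100, 100);
CGImageRef croppedRef = CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:originalImage.scale
                                      orientation:originalImage.imageOrientation];
CGImageRelease(croppedRef);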
