iOS - zoom only part of display using OpenGL ES

I'm working on a level builder app to build content for a game. Part of the iPad screen is a dedicated control panel, while the rest is a graphical representation of the level being built. I need to zoom the level area in and out without affecting the control panel. I'm using OpenGL ES for rendering. Can anyone give me some pointers here? Can I split the screen with different viewports and so just scale one?

The trick is to understand that OpenGL is a state machine and that there is no such thing as "global initialization". As long as you follow the badly written tutorials and set up your projection matrix in the window resize handler, you'll be stuck. What you actually do is something like this:
void render_perspective_scene(void);
void render_ortho_scene(void);
void render_HUD(void);

void display()
{
    float const aspect = (float)win_width / (float)win_height;

    glViewport(0, 0, win_width, win_height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-aspect*near/lens, aspect*near/lens, -near/lens, near/lens, near, far);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    render_perspective_scene();

    glEnable(GL_SCISSOR_TEST);
    // Just clear the depth buffer, so that everything that's
    // drawn next will overlay the previously rendered scene.
    glClear(GL_DEPTH_BUFFER_BIT);
    glViewport(ortho_x, ortho_y, ortho_width, ortho_height);
    glScissor(ortho_x, ortho_y, ortho_width, ortho_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-aspect*scale, aspect*scale, -scale, scale, 0, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    render_ortho_scene();

    // Same for the HUD, only that we render
    // that one in pixel coordinates.
    glViewport(hud_x, hud_y, hud_width, hud_height);
    glScissor(hud_x, hud_y, hud_width, hud_height);
    glClear(GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, win_width, 0, win_height, 0, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    render_HUD();
}
The important part is that you set the viewport/scissor rectangle and the projection within the drawing handler, immediately before rendering each sub-part, rather than once globally.
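Note that the snippet above uses the desktop fixed-function pipeline (glMatrixMode, glFrustum, glOrtho), which doesn't exist in OpenGL ES 2.0. Below is a minimal sketch of the same structure on ES 2.0, where you build the projection matrices yourself and upload them as uniforms. program, u_projection, buildOrtho, drawLevel and drawPanel are hypothetical placeholders for your own shader setup, matrix helper and draw code; this is not a drop-in implementation.
#include <OpenGLES/ES2/gl.h>

// Hypothetical externals from your own code:
extern GLuint program;        // shader program with a "projection" uniform
extern GLint  u_projection;   // location of that uniform
void buildOrtho(float out[16], float left, float right,
                float bottom, float top, float zNear, float zFar);
void drawLevel(void);         // issues the level's draw calls
void drawPanel(void);         // issues the control panel's draw calls

void display(int winWidth, int winHeight,
             int panelX, int panelY, int panelW, int panelH, float zoom)
{
    float proj[16];
    glUseProgram(program);

    // Level: whole window; zooming is done by shrinking or growing the
    // extents of the projection, so the geometry itself never changes.
    glDisable(GL_SCISSOR_TEST);
    glViewport(0, 0, winWidth, winHeight);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    float halfW = 0.5f * winWidth  / zoom;
    float halfH = 0.5f * winHeight / zoom;
    buildOrtho(proj, -halfW, halfW, -halfH, halfH, -1.0f, 1.0f);
    glUniformMatrix4fv(u_projection, 1, GL_FALSE, proj);
    drawLevel();

    // Control panel: its own viewport/scissor rectangle and a pixel-space
    // projection, completely unaffected by the zoom factor.
    glEnable(GL_SCISSOR_TEST);
    glViewport(panelX, panelY, panelW, panelH);
    glScissor(panelX, panelY, panelW, panelH);
    glClear(GL_DEPTH_BUFFER_BIT);
    buildOrtho(proj, 0.0f, (float)panelW, 0.0f, (float)panelH, -1.0f, 1.0f);
    glUniformMatrix4fv(u_projection, 1, GL_FALSE, proj);
    drawPanel();
    glDisable(GL_SCISSOR_TEST);
}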

If you're rendering with OpenGL ES (presumably 2.0), then you have full control over the scaling of what you render. You decide where the scale gets applied, and you decide how things are rendered.
I'd guess your code currently looks a bit like this.
Get view scale
Apply view scale to view matrix
Render level
Render control panel
When it should look like this.
Render control panel
Get view scale
Apply view scale to view matrix
Render level
The matrices (or whatever transformation stuff you have) you use for transforming the level should not be used for transforming the control panel.
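Expressed as code, a minimal sketch of that ordering might look like the following, assuming a column-major float[16] matrix uniform; drawControlPanel, drawLevel and the matrix parameters are hypothetical stand-ins for whatever you already have. The zoom is multiplied into a copy of the level's matrix only, so the control panel never sees it.
#include <OpenGLES/ES2/gl.h>

void drawControlPanel(void);  // hypothetical: your existing panel rendering
void drawLevel(void);         // hypothetical: your existing level rendering

void drawFrame(GLuint program, GLint u_mvp,
               const float panelMvp[16], const float levelMvp[16], float zoom)
{
    glUseProgram(program);

    // 1) Control panel first, with its own, unscaled matrix.
    glUniformMatrix4fv(u_mvp, 1, GL_FALSE, panelMvp);
    drawControlPanel();

    // 2) Then the level, with the zoom folded into a *copy* of its matrix
    //    (post-multiplied scale: scales level-space x and y).
    float zoomed[16];
    for (int i = 0; i < 16; ++i) zoomed[i] = levelMvp[i];
    for (int i = 0; i < 4; ++i) { zoomed[i] *= zoom; zoomed[4 + i] *= zoom; }
    glUniformMatrix4fv(u_mvp, 1, GL_FALSE, zoomed);
    drawLevel();
}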

Related

Integrate Oculus SDK distortion within a simple DirectX engine

I have been working for some time on a very simple DirectX 11 render engine. Today I managed to set up stereo rendering (rendering the scene twice into textures) for my Oculus Rift integration.
[Currently]
So what I am basically doing currently is:
I have a 1280 x 800 window
render the whole scene into RenderTargetViewLeft_ (1280 x 800)
render the content of RenderTargetViewLeft_ as an "EyeWindow" (like in the tutorial) to the left side of the screen (640 x 800)
render the whole scene into RenderTargetViewRight_ (1280 x 800)
render the content of RenderTargetViewRight_ as an "EyeWindow" (like in the tutorial) to the right side of the screen (640 x 800)
All of this works so far: I get the scene rendered twice into separate textures, ending up with a split screen.
[DirectX11 Render Loop]
bool GraphicsAPI::Render()
{
    bool result;

    // [Left eye] The first pass of our render is to a texture now.
    result = RenderToTexture(renderTextureLeft_);
    if (!result)
    {
        return false;
    }

    // Clear the buffers to begin the scene.
    BeginScene(0.0f, 0.0f, 0.0f, 1.0f);

    // Turn off the Z buffer to begin all 2D rendering.
    TurnZBufferOff();

    // Render the eye window orthogonal to the screen.
    RenderEyeWindow(eyeWindowLeft_, renderTextureLeft_);

    // Turn the Z buffer back on now that all 2D rendering has completed.
    TurnZBufferOn();

    // [Right eye] ------------------------------------
    result = RenderToTexture(renderTextureRight_);
    if (!result)
    {
        return false;
    }

    TurnZBufferOff();
    RenderEyeWindow(eyeWindowRight_, renderTextureRight_);
    TurnZBufferOn();

    // [End] Present the rendered scene to the screen.
    EndScene(); // calls Present

    return true;
}
[What I want to do now]
Now I am trying to achieve a barrel distortion with the Oculus SDK. For now I am not concerned about using a different virtual camera for the second image; I just want to get the barrel distortion working.
I have read the Developer Guide [1] and also tried to look into the TinyRoom demo, but I don't completely understand what's necessary to achieve the distortion with the SDK in my already working DirectX engine.
In the Developer Guide's render texture initialization, they show how to create a texture for the API. I guess that means I need to set up ALL my render target views with the size the API expects (the render targets are currently sized 1280 x 800) - and I guess I even have to change the depth stencil view and back buffer size as well.
The render-loop would look something like this then:
ovrHmd_BeginFrame(hmd, 0);
BeginScene(0.0f, 0.0f, 0.0f, 1.0f);
...
// Render Loop as the Code above
...
ovrHmd_EndFrame(hmd, headPose, EyeTextures);
// EndScene(); // calls Present, not needed on Oculus Rendering
I feel something's missing, so I'm sure I haven't got all of that right.
[Update]
So, I managed to render the scene with barrel distortion using the Oculus API. However, the polygons of the left and right image are too far separated, which could be caused by using my default 1280 x 800 texture size for the render targets. The camera stream also doesn't seem to be rendered orthogonal to the screen when moving the HMD. I'm going to do some further testing ;)
[1] - Oculus Developers Guide: https://developer.oculus.com/documentation/
The key point of 3D HMD support is generally to render the whole graphics scene twice: once with the left virtual camera and once with the right virtual camera. The "eye" distance between them varies, but it's approximately 65 mm.
To store the scene, one has to render the graphics scene to textures. I first rendered my scene into renderTextureLeft_ using my left virtual camera, and afterwards I rendered the exact same scene into renderTextureRight_ using my right virtual camera. This technique is called "render to texture": instead of rendering the image directly into the back buffer to display on the monitor, we save it into a separate texture for further post-processing.
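For readers new to it, here is a minimal sketch of the D3D11 side of "render to texture": create a texture that can be bound both as a render target and as a shader resource, then point the output-merger stage at it before issuing the scene's draw calls. The device, context and depth stencil view are assumed to exist already; this is not the exact code of the engine above.
#include <d3d11.h>

bool CreateSceneTarget(ID3D11Device* device, UINT width, UINT height,
                       ID3D11Texture2D** tex,
                       ID3D11RenderTargetView** rtv,
                       ID3D11ShaderResourceView** srv)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    // Both flags are needed: render into it now, sample it later.
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    if (FAILED(device->CreateTexture2D(&desc, nullptr, tex)))         return false;
    if (FAILED(device->CreateRenderTargetView(*tex, nullptr, rtv)))   return false;
    if (FAILED(device->CreateShaderResourceView(*tex, nullptr, srv))) return false;
    return true;
}

// Per frame, per eye: point the output merger at the texture and draw.
void RenderSceneToTexture(ID3D11DeviceContext* context,
                          ID3D11RenderTargetView* rtv,
                          ID3D11DepthStencilView* depthView)
{
    const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
    context->OMSetRenderTargets(1, &rtv, depthView);
    context->ClearRenderTargetView(rtv, clearColor);
    context->ClearDepthStencilView(depthView, D3D11_CLEAR_DEPTH, 1.0f, 0);
    // ... issue the scene draw calls for this eye here ...
}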
But how can we render to the Oculus Rift now? It's important to set up an hmd instance and configure it correctly first. This is explained really well in the official docs [1]: https://developer.oculus.com/documentation/pcsdk/latest/concepts/dg-render/
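As a rough orientation (not taken from the engine above), a sketch of that setup step with the same 0.4-era SDK generation that the code below uses; the ovrHmd_* names and the DefaultEyeFov field come from that SDK version and changed in later releases. The size query is also the answer to the render target sizing question raised earlier: use the recommended per-eye size instead of a hard-coded 1280 x 800.
#include "OVR_CAPI.h"   // 0.4-era Oculus C API

ovrHmd SetupHmd(ovrSizei* leftSize, ovrSizei* rightSize)
{
    ovr_Initialize();

    ovrHmd hmd = ovrHmd_Create(0);            // first connected device
    if (!hmd)
        hmd = ovrHmd_CreateDebug(ovrHmd_DK2); // no hardware: fake a DK2

    // Recommended render target size per eye at 1.0 pixel density.
    *leftSize  = ovrHmd_GetFovTextureSize(hmd, ovrEye_Left,
                                          hmd->DefaultEyeFov[0], 1.0f);
    *rightSize = ovrHmd_GetFovTextureSize(hmd, ovrEye_Right,
                                          hmd->DefaultEyeFov[1], 1.0f);

    // After creating the D3D11 eye textures, ovrHmd_ConfigureRendering()
    // fills the ovrEyeRenderDesc data that is used every frame.
    return hmd;
}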
After both render textures (left eye, right eye) have been rendered and the Oculus device has been configured accordingly, you need to supply the Oculus SDK with both rendered textures so it can present them on the HMD's display and apply the barrel distortion via the SDK (not the client-side distortion, which is no longer supported in newer SDK versions).
Here is my DirectX code, which supplies the Oculus SDK with both of my render textures and also performs the barrel distortion:
bool OculusHMD::RenderDistortion()
{
    ovrD3D11Texture eyeTexture[2]; // Gather data for eye textures
    Sizei size;
    size.w = RIFT_RESOLUTION_WIDTH;
    size.h = RIFT_RESOLUTION_HEIGHT;

    ovrRecti eyeRenderViewport[2];
    eyeRenderViewport[0].Pos = Vector2i(0, 0);
    eyeRenderViewport[0].Size = size;
    eyeRenderViewport[1].Pos = Vector2i(0, 0);
    eyeRenderViewport[1].Size = size;

    eyeTexture[0].D3D11.Header.API = ovrRenderAPI_D3D11;
    eyeTexture[0].D3D11.Header.TextureSize = size;
    eyeTexture[0].D3D11.Header.RenderViewport = eyeRenderViewport[0];
    eyeTexture[0].D3D11.pTexture = graphicsAPI_->renderTextureLeft_->renderTargetTexture_;
    eyeTexture[0].D3D11.pSRView = graphicsAPI_->renderTextureLeft_->GetShaderResourceView();

    eyeTexture[1].D3D11.Header.API = ovrRenderAPI_D3D11;
    eyeTexture[1].D3D11.Header.TextureSize = size;
    eyeTexture[1].D3D11.Header.RenderViewport = eyeRenderViewport[1];
    eyeTexture[1].D3D11.pTexture = graphicsAPI_->renderTextureRight_->renderTargetTexture_;
    eyeTexture[1].D3D11.pSRView = graphicsAPI_->renderTextureRight_->GetShaderResourceView();

    ovrHmd_EndFrame(hmd_, eyeRenderPose_, &eyeTexture[0].Texture);

    return true;
}
The presentation of the stereo image, including the barrel distortion as a kind of post-processing effect, is finally done via this line:
ovrHmd_EndFrame(hmd_, eyeRenderPose_, &eyeTexture[0].Texture);
Hopefully the code helps someone understand the pipeline better.

GLKit Doesn't draw GL_POINTS or GL_LINES

I am working hard on a new iOS game that is drawn only with procedurally generated lines. All is working well, except for a few strange hiccups with drawing some primitives.
I am at a point where I need to implement text, and the characters are set up to be a series of points in an array. When I go to draw the points (which are CGPoints) some of the drawing modes are working funny.
effect.transform.modelviewMatrix = matrix;
[effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, 0, 0, &points);
glDrawArrays(GL_POINTS, 0, ccc);
I am using this code to draw from the array. When the mode is set to GL_LINE_LOOP or GL_LINE_STRIP all works well, but if I set it to GL_POINTS, I get a gpus_ReturnGuiltyForHardwareRestart error. And if I try GL_LINES it just doesn't draw anything.
What could possibly be going on?
When you draw with GL_POINTS in ES2 or ES3, you need to specify gl_PointSize in the vertex shader or you'll get undefined behavior (ugly rendering on device at best, the crash you're seeing at worst). The vertex shader GLKBaseEffect uses doesn't do gl_PointSize, so you can't use it with GL_POINTS. You'll need to implement your own shaders. (For a starting point, try the ones in the "OpenGL Game" template you get when creating a new Xcode project, or using the Xcode Frame Debugger to look at the GLSL that GLKBaseEffect generates.)
GL_LINES should work fine as long as you're setting an appropriate width with glLineWidth() in client code.
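For reference, here is a minimal pair of ES 2.0 shaders (written as plain C string constants) that make GL_POINTS renderable. The attribute and uniform names are just placeholders you would bind yourself with glBindAttribLocation / glGetUniformLocation after the usual glCreateShader / glCompileShader / glLinkProgram sequence; this is a sketch of the idea, not the GLKBaseEffect shader.
// Hypothetical attribute/uniform names; wire them up in your own setup code.
static const char *kPointVertexShader =
    "attribute vec2 position;\n"
    "uniform mat4 modelViewProjectionMatrix;\n"
    "uniform float pointSize;\n"
    "void main() {\n"
    "    gl_Position = modelViewProjectionMatrix * vec4(position, 0.0, 1.0);\n"
    "    gl_PointSize = pointSize;   // required for GL_POINTS in ES 2.0\n"
    "}\n";

static const char *kPointFragmentShader =
    "precision mediump float;\n"
    "uniform vec4 color;\n"
    "void main() {\n"
    "    gl_FragColor = color;\n"
    "}\n";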

Good tutorial on using Quads for custom Text in OpenGL ES 2.0 on iOS

I'm new to OpenGL ES and am teaching myself how to program iOS games. I'm currently playing with a project that I would like to put a HUD over, with some custom text. I don't want to do this using a UILabel, and currently have no idea how to use quads to cut up a PNG full of characters and stitch them together into text for display. I would like the end result to be passing a simple string to a command/method and having the output displayed using the textures/bitmap for the quads - say glPrint("Hello World");. Would anyone be able to point me in the proper direction? There doesn't seem to be a single good tutorial on how to do this for OpenGL ES 2.0 (just desktop OpenGL). I also want to avoid using 3rd-party APIs; I really need/want to understand how to tackle this.
When I was getting started with OpenGL ES for my current 2D project I used Ray's tutorial, which helped me get a handle on rendering textured 2D quads. In conjunction with his 3D OpenGL ES tutorial, you might be able to piece together what you want to do. Note that you probably wouldn't render every single quad separately like in the tutorial, as that is very inefficient. Instead, you would gather all of the vertices of the characters into two big arrays/vertex buffers and batch render the characters. The basic flow for rendering each frame would probably look like this (a rough sketch follows at the end of this answer):
Pass a normal perspective projection matrix for 3D rendering, get the vertex information for your 3D scene to your shaders, and render the 3D scene. This part you've already done.
For the text, immediately afterwards: pass in an orthographic projection matrix, bind your font texture (generally generated earlier with the GLKTextureLoader class) to the active texture unit, generate two big arrays of texture and geometric vertices for the characters (or update VBOs if the text has changed), pass that in, and then batch render all of the letters at once using either glDrawArrays or glDrawElements (which requires indices).
Also, as I'm new to OpenGL myself, some of this may be wrong or inefficient. I've yet to use OpenGL ES to render anything 3D, so I'm not sure what other state changes (enabling, disabling, etc.) besides a different projection matrix might be needed between rendering your 3D scene and the 2D scene (text).
It seems that drawing text using only OpenGL is a relatively difficult and tedious task, so if you just want to render a HUD overlay displaying frame rates and other things you are much better off using UILabels and saving yourself the trouble, especially if your project is not very complex. This also prevents you from having to deal with wrapping, kerning, font sizes, colors, different languages and a load of other stuff that greatly complicates text rendering if you need anything more complex.
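To make the batching described above concrete, here is a rough sketch (not from either tutorial) that builds one interleaved vertex array for a whole string and draws it with a single call. It assumes a monospaced font atlas laid out as a 16x16 grid of ASCII glyphs, an orthographic projection already set on the shader, and the atlas texture already bound; all names are placeholders.
#include <OpenGLES/ES2/gl.h>
#include <vector>
#include <string>

// One interleaved vertex: position (pixels, for an ortho projection) + texcoord.
struct TextVertex { GLfloat x, y, u, v; };

// Hypothetical atlas layout: 16 columns x 16 rows of equally sized ASCII glyphs.
static void AppendString(std::vector<TextVertex>& out, const std::string& text,
                         float x, float y, float glyphW, float glyphH)
{
    const float cell = 1.0f / 16.0f;       // size of one glyph in texture space
    for (char c : text) {
        unsigned char uc = (unsigned char)c;
        float u0 = (uc % 16) * cell, v0 = (uc / 16) * cell;
        float u1 = u0 + cell,        v1 = v0 + cell;
        // Two triangles per character (six vertices, no index buffer).
        TextVertex quad[6] = {
            { x,          y,          u0, v1 },
            { x + glyphW, y,          u1, v1 },
            { x + glyphW, y + glyphH, u1, v0 },
            { x,          y,          u0, v1 },
            { x + glyphW, y + glyphH, u1, v0 },
            { x,          y + glyphH, u0, v0 },
        };
        out.insert(out.end(), quad, quad + 6);
        x += glyphW;                       // advance the pen position
    }
}

// One glDrawArrays call for the whole string.
static void DrawText(GLuint positionAttrib, GLuint texCoordAttrib,
                     const std::vector<TextVertex>& verts)
{
    if (verts.empty()) return;
    glEnableVertexAttribArray(positionAttrib);
    glEnableVertexAttribArray(texCoordAttrib);
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE,
                          sizeof(TextVertex), &verts[0].x);
    glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE,
                          sizeof(TextVertex), &verts[0].u);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());
}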
Rather than tracking the location of each letter, why not use Core Graphics to draw your entire string into a bitmap, then upload that as a texture? You'd just need to get the dimensions from your bitmap to know what size quad to draw for that text string.
Within my open source GPUImage framework, I have an input class called a GPUImageUIElement that does something similar. The relevant code from that input is as follows:
CGSize layerPixelSize = [self layerSizeInPixels];
GLubyte *imageData = (GLubyte *) calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)layerPixelSize.width, (int)layerPixelSize.height, 8, (int)layerPixelSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);
[layer renderInContext:imageContext];
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
free(imageData);
This code takes a CALayer (either directly or from the backing layer of a UIView) and renders its contents to a texture. I've already initialized the texture before this, so the code sets up a bitmap context, renders the layer into that context using -renderInContext:, and then uploads that bitmap to the texture for use in OpenGL ES.
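For completeness, the one-time texture setup that the snippet above assumes (done before the first upload) would look something like this; outputTexture is the same GLuint used above, and the clamp/filter settings matter because the layer's pixel size is usually not a power of two.
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
// Non-power-of-two textures in ES 2.0 require CLAMP_TO_EDGE and no mipmaps.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);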
The helper method -layerSizeInPixels just accounts for the current Retina scale factor as follows:
- (CGSize)layerSizeInPixels;
{
    CGSize pointSize = layer.bounds.size;
    return CGSizeMake(layer.contentsScale * pointSize.width, layer.contentsScale * pointSize.height);
}
If you used a UILabel for your view and had it autosize to fit its text, you could set the text on it, use the above to render and upload your texture, and then take the pixel size of the element to determine your quad size. However, it would probably be more efficient to just draw the text yourself using -drawAtPoint:withFont:fontForSize: or the like with an NSString.
Using Core Graphics to render your text makes it easy to manipulate the text as an NSString and use all of Core Graphics' typesetting capabilities instead of rolling your own.

Keep pixel aspect with different resolutions in an XNA game

I'm currently developing an old-school game with XNA 4.
My graphics assets are based on a 568x320 resolution (16:9 ratio). I want to be able to change the window resolution (1136x640, for example) and have my graphics scaled without stretching, so that they keep their pixel aspect.
How can I achieve this?
You could use a RenderTarget to achieve your goal. It sounds like you don't want to have to render differently for every possible screen size, so if your graphics aren't dependent on other graphical features like a mouse, I would use a RenderTarget, draw all the pixel data to that, and afterwards draw it to the actual screen, allowing the screen to stretch it.
This technique can be used in other ways too. I use it to draw objects in my game, so I can easily change the rotation and location without having to calculate every sprite for the object.
Example:
void PreDraw()
{
    // You need your graphics device to render to
    GraphicsDevice graphicsDevice = Settings.GlobalGraphicsDevice;

    // You need a spritebatch to begin/end a draw call
    SpriteBatch spriteBatch = Settings.GlobalSpriteBatch;

    // Tell the graphics device where to draw to
    graphicsDevice.SetRenderTarget(renderTarget);

    // Clear the buffer with transparent so the image is transparent
    graphicsDevice.Clear(Color.Transparent);

    spriteBatch.Begin();
    flameAnimation.Draw(spriteBatch);
    spriteBatch.Draw(gunTextureToDraw, new Vector2(100, 0), Color.White);

    if (!base.CurrentPowerUpLevel.Equals(PowerUpLevels.None)) {
        powerUpAnimation.Draw(spriteBatch);
    }

    // DRAWS THE IMAGE TO THE RENDERTARGET
    spriteBatch.Draw(shipSpriteSheet, new Rectangle(105, 0, (int)Size.X, (int)Size.Y), shipRectangleToDraw, Color.White);
    spriteBatch.End();

    // Let the graphics device know you are done and return to drawing according to its dimensions
    graphicsDevice.SetRenderTarget(null);

    // Utilize your render target
    finishedShip = renderTarget;
}
Remember, in your case, you would initialize your RenderTarget with dimensions of 568x320 and draw according to that and not worry about any other possible sizes. Once you give the RenderTarget to the spritebatch to draw to the screen, it will "stretch" the image for you!
EDIT:
Sorry, I skimmed through the question and missed that you don't want to "stretch" your result. You can still get that by choosing the destination rectangle yourself when you draw the final RenderTarget to the back buffer: scale the 568x320 target by an integer factor that fits the window (centering it and letterboxing any leftover space) so the pixels stay square, instead of letting the graphics device stretch it arbitrarily.
Oh gosh, I've got it! Just pass SamplerState.PointClamp to your spriteBatch.Begin method to keep that cool pixel visual effect:
spriteBatch.Begin(SpriteSortMode.Immediate,
                  BlendState.AlphaBlend,
                  SamplerState.PointClamp,
                  null,
                  null,
                  null,
                  cam.getTransformation(this.GraphicsDevice));

Scaling entire screen in XNA

Using XNA, I'm trying to make an adventure game engine that lets you make games that look like they fell out of the early 90s, like Day of the Tentacle and Sam & Max Hit the Road. Thus, I want the game to actually run at 320x240 (I know, it should probably be 320x200, but shh), but it should scale up depending on user settings.
It works kind of okay right now, but I'm running into some problems where I actually want it to look more pixelated than it currently does.
Here's what I'm doing right now:
In the game initialization:
public Game() {
    graphics = new GraphicsDeviceManager(this);
    graphics.PreferredBackBufferWidth = 640;
    graphics.PreferredBackBufferHeight = 480;
    graphics.PreferMultiSampling = false;
    Scale = graphics.PreferredBackBufferWidth / 320;
}
Scale is a public static variable that I can check anytime to see how much I should scale my game relative to 320x240.
In my drawing function:
spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.NonPremultiplied, SamplerState.PointClamp, DepthStencilState.Default, RasterizerState.CullNone, null, Matrix.CreateScale(Game.Scale));
This way, everything is drawn at 320x240 and blown up to fit the current resolution (640x480 by default). And of course I do math to convert the actual coordinates of the mouse into 320x240 coordinates, and so forth.
Now, this is great and all, but now I'm getting to the point where I want to start scaling my sprites, to have them walk into the distance and so forth.
Look at the images below. The upper-left image is a piece of a screenshot from when the game is running at 640x480. The image to the right of it is how it "should" look, at 320x240. The bottom row of images is just the top row blown up to 300% (in Photoshop, not in-engine) so you can see what I'm talking about.
In the 640x480 image, you can see different "line thicknesses;" the thicker lines are how it should really look (one pixel = 2x2, because this is running at 640x480), but the thinner lines (1x1 pixel) also appear, when they shouldn't, due to scaling (see the images on the right).
Basically I'm trying to emulate a 320x240 display but blown up to any resolution using XNA, and matrix transformations aren't doing the trick. Is there any way I could go about doing this?
Render everything in the native resolution to a RenderTarget instead of the back buffer:
SpriteBatch targetBatch = new SpriteBatch(GraphicsDevice);
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, 320, 240);
GraphicsDevice.SetRenderTarget(target);
//perform draw calls
Then render this target (your whole screen) to the back buffer:
//set rendering back to the back buffer
GraphicsDevice.SetRenderTarget(null);
//render target to back buffer
targetBatch.Begin();
targetBatch.Draw(target, new Rectangle(0, 0, GraphicsDevice.DisplayMode.Width, GraphicsDevice.DisplayMode.Height), Color.White);
targetBatch.End();
