OpenGL scaling/rotating (what comes first...) - iOS

Recently, I jumped into OpenGL. Most things have been working out quite okay, but I keep banging my head against the wall with this one.
I am trying to rotate/scale a 2D image. I am struggling with whether I should rotate first and then scale, or the other way around. Neither way quite works out the way I want.
I have made two short videos showing what is going wrong:
First rotate, then scale
https://dl.dropboxusercontent.com/u/992980/rotate_then_scale.MOV
First scale, then rotate
https://dl.dropboxusercontent.com/u/992980/scale_then_rotate.MOV
The left image is square, the right image is a rectangle. As you can see, with both methods, something is not quite right :)
The black area is the OpenGL viewport. When the viewport is square, everything is fine; when it is a rectangle, things start to go wrong :) For every image I draw, I calculate a different X and Y scale relative to the viewport, and I think I am doing something wrong there...
Note that I am quite new to OpenGL, and I am probably doing something stupid (I hope I am). Hopefully, I can get my question across clearly this way.
Thanks in advance for any help given!
Corjan
The code for drawing one image:
void instrument_renderer_image_draw_raw(struct InstrumentRenderImage* image, struct InstrumentRendererCache* cache, GLuint program) {
    // Load texture if not yet done
    if (image->loaded == INSTRUMENT_RENDER_TEXTURE_UNLOADED) {
        image->texture = instrument_renderer_texture_cache_get(image->imagePath);
        if (image->texture == 0) {
            image->loaded = INSTRUMENT_RENDER_TEXTURE_ERROR;
        }
        else {
            image->loaded = INSTRUMENT_RENDER_TEXTURE_LOADED;
        }
    }
    // Show image when texture has been correctly loaded into GPU memory
    if (image->loaded == INSTRUMENT_RENDER_TEXTURE_LOADED) {
        float instScaleX = (float)cache->instBounds.w / cache->instOrgBounds.w;
        float instScaleY = (float)cache->instBounds.h / cache->instOrgBounds.h;
        float scaleX = (float)image->w / (float)cache->instOrgBounds.w;
        float scaleY = (float)image->h / (float)cache->instOrgBounds.h;
        // Do internal calculations when dirty
        if (image->base.dirty) {
            mat4 matScale;
            mat4 matRotate;
            mat4 matModelView;
            mat4 matProjection;
            matrixRotateZ(image->angle, matRotate);
            matrixScale(scaleX, scaleY * -1, 0, matScale);
            matrixMultiply(matRotate, matScale, matModelView);
            // Determine X and Y within this instrument's viewport
            float offsetX = ((float)cache->instOrgBounds.w - (float)image->w) / 2 / (float)cache->instOrgBounds.w;
            float offsetY = ((float)cache->instOrgBounds.h - (float)image->h) / 2 / (float)cache->instOrgBounds.h;
            float translateX = (((float)image->x / (float)cache->instOrgBounds.w) - offsetX) * 2;
            float translateY = ((((float)cache->instOrgBounds.h - (float)image->y - (float)image->h) / (float)cache->instOrgBounds.h) - offsetY) * -2;
            matrixTranslate(translateX, translateY * -1, -2.4, matModelView);
            //matrixPerspective(45.0, 0.1, 100.0, (double)cache->instOrgBounds.w/(double)cache->instOrgBounds.h, matProjection);
            matrixOrthographic(-1, 1, -1, 1, matProjection);
            matrixMultiply(matProjection, matModelView, image->glMatrix);
            image->base.dirty = 0;
        }
        glUseProgram(program);
        glViewport(cache->instBounds.x * cache->masterScaleX,
                   cache->instBounds.y * cache->masterScaleY,
                   cache->instBounds.w * cache->masterScaleX,
                   cache->instBounds.w * cache->masterScaleX);
        glUniformMatrix4fv(matrixUniform, 1, GL_FALSE, image->glMatrix);
        // Load texture
        glBindTexture(GL_TEXTURE_2D, image->texture);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
}

What framework/library are you using for matrix multiplication?
Which operation needs to come first depends on your matrix representation (e.g. row- vs. column-major and post- vs. pre-multiplication). The library you use dictates that; fixed-function OpenGL (glMultMatrix (...) et al.) was column-major and post-multiplying. Most OpenGL-based frameworks follow that tradition, though there are some exceptions like OpenTK. Traditional matrix multiplications were done in the following order:
1. Translation
2. Scaling
3. Rotation
But because of the nature of post-multiplying column-major matrices (matrix multiplication is non-commutative), the operations effectively occurred from bottom to top. Even though you do the multiplication for translation before the one for rotation, rotation is actually applied to the pre-translated coordinates.
In effect, assuming your matrix library follows OpenGL convention, you are doing the sequence of matrix multiplications in reverse.
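To make that concrete, here is a minimal, self-contained C sketch (plain 2D math with illustrative mat2 helpers, not the question's matrix library) showing that the rightmost factor in a column-major, post-multiplied chain is the one applied to the vertex first:
#include <math.h>
#include <stdio.h>

/* Minimal 2x2 column-major matrix: m[0],m[1] = column 0; m[2],m[3] = column 1. */
typedef struct { float m[4]; } mat2;

static mat2 mat2_rotate(float radians) {
    mat2 r = { { cosf(radians), sinf(radians), -sinf(radians), cosf(radians) } };
    return r;
}

static mat2 mat2_scale(float sx, float sy) {
    mat2 s = { { sx, 0.0f, 0.0f, sy } };
    return s;
}

static mat2 mat2_mul(mat2 a, mat2 b) {   /* returns a * b */
    mat2 c;
    c.m[0] = a.m[0] * b.m[0] + a.m[2] * b.m[1];
    c.m[1] = a.m[1] * b.m[0] + a.m[3] * b.m[1];
    c.m[2] = a.m[0] * b.m[2] + a.m[2] * b.m[3];
    c.m[3] = a.m[1] * b.m[2] + a.m[3] * b.m[3];
    return c;
}

static void transform(mat2 m, float x, float y, float* ox, float* oy) {
    *ox = m.m[0] * x + m.m[2] * y;
    *oy = m.m[1] * x + m.m[3] * y;
}

int main(void) {
    mat2 R = mat2_rotate(3.14159265f / 4.0f);   /* 45 degrees */
    mat2 S = mat2_scale(2.0f, 1.0f);            /* non-uniform scale */
    float x, y;

    /* (S * R) * v: the rotation (rightmost factor) hits the vertex first,
       then the already-rotated shape is stretched along the fixed x axis,
       which is what visually skews a rotated, non-uniformly scaled image. */
    transform(mat2_mul(S, R), 1.0f, 0.0f, &x, &y);
    printf("scale(rotate(v)) = (%f, %f)\n", x, y);

    /* (R * S) * v: the scale hits the vertex first, in the image's own axes,
       and the stretched image is then rotated rigidly. */
    transform(mat2_mul(R, S), 1.0f, 0.0f, &x, &y);
    printf("rotate(scale(v)) = (%f, %f)\n", x, y);
    return 0;
}
For a 45-degree rotation and a (2, 1) scale this prints (1.414214, 0.707107) for the first case and (1.414214, 1.414214) for the second, which is exactly the non-commutativity described above.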

Related

Shadow Mapping - Space Transformations are going bad

I am currently studying shadow mapping, and my biggest issue right now is the transformations between spaces. This is my current working theory/steps.
Pass 1:
Get depth of pixel from camera, store in depth buffer
Get depth of pixel from light, store in another buffer
Pass 2:
Use texture coordinate to sample camera's depth buffer at current pixel
Convert that depth to a view-space position by multiplying the projection-space coordinate with the invProj matrix (also doing a perspective divide).
Take that view position and multiply by invV (camera's inverse view) to get a world space position
Multiply world space position by light's viewProjection matrix.
Perspective divide that projection-space coordinate, and manipulate into [0..1] to sample from light depth buffer.
Get current depth from light and closest (sampled) depth, if current depth > closest depth, it's in shadow.
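As a side note on step 5, the [-1..1] to [0..1] remap is pure arithmetic; here is a minimal C sketch of one common Direct3D convention (the function name is illustrative, the v flip comes from texture v growing downward while NDC y grows upward, and whether z needs a similar remap depends on how the light projection maps depth):
/* Light-space NDC -> shadow-map texture coordinates. */
static void ndc_to_shadow_uv(float ndcX, float ndcY, float* u, float* v) {
    *u = ndcX * 0.5f + 0.5f;            /* [-1..1] -> [0..1] */
    *v = 1.0f - (ndcY * 0.5f + 0.5f);   /* flip: NDC y is up, texture v is down */
}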
Shader Code
Pass1:
PS_INPUT vs(VS_INPUT input) {
    output.pos = mul(input.vPos, mvp);
    output.cameraDepth = output.pos.zw;
    ..
    float4 vPosInLight = mul(input.vPos, m);
    vPosInLight = mul(vPosInLight, light.viewProj);
    output.lightDepth = vPosInLight.zw;
}
PS_OUTPUT ps(PS_INPUT input) {
    float cameraDepth = input.cameraDepth.x / input.cameraDepth.y;
    // Bundle cameraDepth in alpha channel of a normal map.
    output.normal = float4(input.normal, cameraDepth);
    // 4 lights in total -- although only 1 is active right now. Going to use r/g/b/a for each light depth.
    output.lightDepths.r = input.lightDepth.x / input.lightDepth.y;
}
Pass 2 (Screen Quad):
float4 ps(PS_INPUT input) : SV_TARGET {
    float4 pixelPosView = depthToViewSpace(input.texCoord);
    ..
    float4 pixelPosWorld = mul(pixelPosView, invV);
    float4 pixelPosLight = mul(pixelPosWorld, light.viewProj);
    float shadow = shadowCalc(pixelPosLight);
    // For testing / visualisation
    return float4(shadow, shadow, shadow, 1);
}
float4 depthToViewSpace(float2 xy) {
    // Get pixel depth from camera by sampling current texcoord.
    // Extract the alpha channel as this holds the depth value.
    // Then, transform from [0..1] to [-1..1]
    float z = (_normal.Sample(_sampler, xy).a) * 2 - 1;
    float x = xy.x * 2 - 1;
    float y = (1 - xy.y) * 2 - 1;
    float4 vProjPos = float4(x, y, z, 1.0f);
    float4 vPositionVS = mul(vProjPos, invP);
    vPositionVS = float4(vPositionVS.xyz / vPositionVS.w, 1);
    return vPositionVS;
}
float shadowCalc(float4 pixelPosL) {
    // Transform pixelPosLight from [-1..1] to [0..1]
    float3 projCoords = (pixelPosL.xyz / pixelPosL.w) * 0.5 + 0.5;
    float closestDepth = _lightDepths.Sample(_sampler, projCoords.xy).r;
    float currentDepth = projCoords.z;
    return currentDepth > closestDepth; // Supposed to have bias, but for now I just want shadows working haha
}
CPP Matrices
// (Position, LookAtPos, UpDir)
auto lightView = XMMatrixLookAtLH(XMLoadFloat4(&pos4), XMVectorSet(0,0,0,1), XMVectorSet(0,1,0,0));
// (FOV, AspectRatio (1000/680), NEAR, FAR)
auto lightProj = XMMatrixPerspectiveFovLH(1.57f , 1.47f, 0.01f, 10.0f);
XMStoreFloat4x4(&_cLightBuffer.light.viewProj, XMMatrixTranspose(XMMatrixMultiply(lightView, lightProj)));
Current Outputs
White signifies that a shadow should be projected there. Black indicates no shadow.
CameraPos (0, 2.5, -2)
CameraLookAt (0, 0, 0)
CameraFOV (1.57)
CameraNear (0.01)
CameraFar (10.0)
LightPos (0, 2.5, -2)
LightLookAt (0, 0, 0)
LightFOV (1.57)
LightNear (0.01)
LightFar (10.0)
If I change the CameraPosition to be (0, 2.5, 2), basically just flipped on the Z axis, this is the result.
Obviously a shadow shouldn't change its projection depending on where the observer is, so I think I'm making a mistake with the invV. But I really don't know for sure. I've debugged the light's projView matrix, and the values seem correct - going from CPU to GPU. It's also entirely possible I've misunderstood some theory along the way because this is quite a tricky technique for me.
Aha! Found my problem. It was a silly mistake: I was calculating the depth of pixels from each light, but storing them in a texture that was based on the view of the camera. The following image should explain my mistake better than I can with words.
For future reference, the solution I settled on was to scrap my idea of storing light depths in texture channels. Instead, I basically make a new pass for each light and bind a unique depth-stencil texture to render the geometry to. When I want to do the light calculations, I bind each of the depth textures to a shader resource slot and go from there. Obviously this doesn't scale well with many lights, but for my student project, where I'm only required to have 2 shadow casters, it suffices.
_context->DrawIndexed(indexCount, 0, 0);   // Draw to regular render target
_sunlight->use(1, _context);               // Use sunlight shader (basically just runs a vertex shader & null pixel shader so depth can be written to the depth map)
_sunlight->bindDSVSetNullRenderTarget(_context);
_context->DrawIndexed(indexCount, 0, 0);   // Draw to sunlight depth target

bindDSVSetNullRenderTarget(ctx) {
    ID3D11RenderTargetView* nullrv = { nullptr };
    ctx->OMSetRenderTargets(1, &nullrv, _sunlightDepthStencilView);
}
//The purpose of setting a null render target before doing the draw call is
//that a draw call with only a depth target bound is much faster.
//(At least I believe so, from my reading online)

How to draw specific part of texture in fragment shader (SpriteKit)

I have a node of size 64x32 and a texture of size 192x192, and I am trying to draw the first part of this texture on the first node, the second part on the second node...
Fragment shader (attached to SKSpriteNode with texture size of 64x32)
void main() {
    float bX = 64.0 / 192.0 * (offset.x + 1);
    float aX = 64.0 / 192.0 * (offset.x);
    float bY = 32.0 / 192.0 * (offset.y + 1);
    float aY = 32.0 / 192.0 * (offset.y);
    float normalizedX = (bX - aX) * v_tex_coord.x + aX;
    float normalizedY = (bY - aY) * v_tex_coord.y + aY;
    gl_FragColor = texture2D(u_temp, vec2(normalizedX, normalizedY));
}
offset.x - [0, 2]
offset.y - [0, 5]
u_temp - texture size of 192x192
The aX/bX and aY/bY pairs are meant to convert a value from [0, 1] to, for example, [0, 0.33].
But the result seems to be wrong:
SKSpriteNode with attached texture
SKSpriteNode without texture (what I want to achieve with texture)
When a texture is in an atlas, it's not addressed by coordinates from (0,0) to (1,1) anymore. The atlas is really one large texture that has been assembled behind the scenes. When you use a particular named image from an atlas in a normal sprite, SpriteKit is looking up that image name in information about how the atlas was assembled and then telling the GPU something like "draw this sprite with bigAtlasTexture, coordinates (0.1632, 0.8814) through (0.1778, 0.9143)". If you're going to write a custom shader using the same texture, you need that information about where it lives inside the atlas, which you get from textureRect:
https://developer.apple.com/documentation/spritekit/sktexture/1519707-texturerect
So you have your texture which is not really one image but defined by a location textureRect() in a big packed-up image of lots of textures. I find it easiest to think in terms of (0,0) to (1,1), so when writing a shader I usually do textureRect => subtract and scale to get to (0,0)-(1,1) => compute desired modified coordinates => scale and add to get to textureRect again => texture2D lookup.
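For reference, the normalize/denormalize round trip in that recipe is just linear arithmetic; here is a minimal plain-C sketch of it (illustrative names, with the sub-rectangle values coming from textureRect()):
/* Map between the logical (0,0)-(1,1) coordinates of one image and the
   coordinates of its sub-rectangle inside the packed atlas texture. */
typedef struct { float x, y, w, h; } SubRect;   /* origin and size, all in 0..1 */

static void atlas_to_local_uv(SubRect r, float atlasU, float atlasV,
                              float* u, float* v) {
    *u = (atlasU - r.x) / r.w;   /* subtract and scale to get to (0,0)-(1,1) */
    *v = (atlasV - r.y) / r.h;
}

static void local_to_atlas_uv(SubRect r, float u, float v,
                              float* atlasU, float* atlasV) {
    *atlasU = r.x + u * r.w;     /* scale and add to get back to the atlas rect */
    *atlasV = r.y + v * r.h;
}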
Since your shader will need to know about textureRect but you can't call that from the shader code, you have two choices:
Make an attribute or uniform to hold that information, fill it in from the outside, and then have the shader reference it.
If the shader is only used for a specific texture or for a few textures, then you can generate shader code that's specialized for the required textureRect, i.e., it just has some constants in the code for the texture.
Here's a part of an example using approach #2:
func myShader(forTexture texture: SKTexture) -> SKShader {
    // Be careful not to assume that the texture has v_tex_coord ranging in (0, 0) to
    // (1, 1)! If the texture is part of a texture atlas, this is not true. I could
    // make another attribute or uniform to pass in the textureRect info, but since I
    // only use this with a particular texture, I just pass in the texture and compile
    // in the required v_tex_coord transformations for that texture.
    let rect = texture.textureRect()
    let shaderSource = """
    void main() {
        // Normalize coordinates to (0,0)-(1,1)
        v_tex_coord -= vec2(\(rect.origin.x), \(rect.origin.y));
        v_tex_coord *= vec2(\(1 / rect.size.width), \(1 / rect.size.height));
        // Update the coordinates in whatever way here...
        // v_tex_coord = desired_transform(v_tex_coord)
        // And then go back to the actual coordinates for the real texture
        v_tex_coord *= vec2(\(rect.size.width), \(rect.size.height));
        v_tex_coord += vec2(\(rect.origin.x), \(rect.origin.y));
        gl_FragColor = texture2D(u_texture, v_tex_coord);
    }
    """
    let shader = SKShader(source: shaderSource)
    return shader
}
That's a cut-down version of some specific examples from here:
https://github.com/bg2b/RockRats/blob/master/Asteroids/Hyperspace.swift

Normal mapping on a large sphere is not entirely correct

So I've been working on a DirectX 11/HLSL rendering engine with the goal of creating a realistic planet which you can view both from the surface and at a planetary level. The planet is a normalized cube that is procedurally generated using noise; as you move closer to the surface, a binary triangle tree splits until the desired detail level is reached. I got vertex normal calculations to work correctly, and I recently started trying to implement normal mapping for my terrain textures, and I have something that seems to work for the most part. However, when the sun is pointing almost perpendicular to the ground (90 degrees), the terrain is way more lit up than it should be.
However, from the opposite angle (270 degrees), I am getting something that seems closer to correct, but may as well be just as off.
The debug lines that are being rendered are the normal, tangent, and bitangents (which all appear to be correct and fit the topology of the terrain)
Here is my shader code:
Vertex shader:
PSIn mainvs(VSIn input)
{
    PSIn output;
    output.WorldPos = mul(float4(input.Position, 1.f), Instances[input.InstanceID].WorldMatrix); // pass pixel world position (as opposed to screen-space position) for lighting calculations
    output.Position = mul(output.WorldPos, CameraViewProjectionMatrix);
    output.TexCoord = input.TexCoord;
    output.CameraPos = CameraPosition;
    output.Normal = normalize(mul(input.Normal, (float3x3)Instances[input.InstanceID].WorldMatrix));
    float3 Tangent = normalize(mul(input.Tangent, (float3x3)Instances[input.InstanceID].WorldMatrix));
    float3 Bitangent = normalize(cross(output.Normal, Tangent));
    output.TBN = transpose(float3x3(Tangent, Bitangent, output.Normal));
    return output;
}
Pixel shader (Texcoord scalar is for smaller textures closer to planet surface):
float3 FetchNormalVector(float2 TexCoord)
{
    float3 Color = NormalTex.Sample(Samp, TexCoord * TexcoordScalar);
    Color *= 2.f;
    return normalize(float3(Color.x - 1.f, Color.y - 1.f, Color.z - 1.f));
}

float3 LightVector = -SunDirection;
float3 TexNormal = FetchNormalVector(input.TexCoord);
float3 WorldNormal = normalize(mul(input.TBN, TexNormal));
float nDotL = max(0.0, dot(WorldNormal, LightVector));
float4 SampleColor = float4(1.f, 1.f, 1.f, 1.f);
SampleColor *= nDotL;
return float4(SampleColor.xyz, 1.f);
Thanks in advance, and let me know if you have any insight as to what could be the issue here.
Edit 1: I tried it with a fixed blue value instead of sampling from the normal texture, which gives me the correct and same results as if I had not applied mapping (as expected). Still don't have a lead on what would be causing this issue.
Edit 2: I just noticed the strangest thing. At 0, 0, +Z, there are these hard seams that only appear with normal mapping enabled
It's a little hard to see, but it seems almost like there are multiple tangents associated to the same vertex (since I'm not using indexing yet) because the debug lines appear to split on the seams.
Here is my code that I'm using to generate the tangents (bitangents are calculated in the vertex shader using cross(Normal, Tangent))
v3& p0 = Chunk.Vertices[0].Position;
v3& p1 = Chunk.Vertices[1].Position;
v3& p2 = Chunk.Vertices[2].Position;
v2& uv0 = Chunk.Vertices[0].UV;
v2& uv1 = Chunk.Vertices[1].UV;
v2& uv2 = Chunk.Vertices[2].UV;
v3 deltaPos1 = p1 - p0;
v3 deltaPos2 = p2 - p0;
v2 deltaUV1 = uv1 - uv0;
v2 deltaUV2 = uv2 - uv0;
f32 r = 1.f / (deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x);
v3 Tangent = (deltaPos1 * deltaUV2.y - deltaPos2 * deltaUV1.y) * r;
Chunk.Vertices[0].Tangent = Normalize(Tangent - (Chunk.Vertices[0].Normal * DotProduct(Chunk.Vertices[0].Normal, Tangent)));
Chunk.Vertices[1].Tangent = Normalize(Tangent - (Chunk.Vertices[1].Normal * DotProduct(Chunk.Vertices[1].Normal, Tangent)));
Chunk.Vertices[2].Tangent = Normalize(Tangent - (Chunk.Vertices[2].Normal * DotProduct(Chunk.Vertices[2].Normal, Tangent)));
Also for reference, this is the main article I was looking at while implementing all of this: link
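One thing that can produce hard seams like the ones in Edit 2 is tangent-basis handedness: where UVs are mirrored (which easily happens across cube-face boundaries), cross(Normal, Tangent) points the opposite way on neighbouring triangles. A common mitigation is to compute a handedness sign next to the tangent and use it to flip the cross-product bitangent in the shader. A rough C-style sketch in the spirit of the snippet above (CrossProduct and the TangentSign field are assumed here, not part of the posted code):
// Analytic bitangent from the same UV deltas used for the tangent.
v3 Bitangent = (deltaPos2 * deltaUV1.x - deltaPos1 * deltaUV2.x) * r;
for (int i = 0; i < 3; ++i) {
    v3 N = Chunk.Vertices[i].Normal;
    v3 T = Chunk.Vertices[i].Tangent;   // already orthogonalized above
    // If cross(N, T) disagrees with the analytic bitangent, the UVs are mirrored.
    f32 sign = DotProduct(CrossProduct(N, T), Bitangent) < 0.f ? -1.f : 1.f;
    Chunk.Vertices[i].TangentSign = sign;   // hypothetical extra vertex field
    // In the vertex shader: Bitangent = sign * cross(Normal, Tangent)
}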
Edit 3:
Here is an image of the planet from a distance with normal mapping enabled:
And one from the same angle without:

Porting from OpenGL to MetalKit - Projection Matrix (?) Problems

Question
I'm working on porting from OpenGL (OGL) to MetalKit (MTK) on iOS. I'm failing to get identical display in the MetalKit version of the app. I modified the projection matrix to account for differences in Normalized Device Coordinates between the two frameworks, but don't know what else to change to get identical display. Any ideas what else needs to be changed to port from OpenGL to MetalKit?
Projection Matrix Changes so far...
I understand that the Normalized Device Coordinates (NDC) are different in OGL vs MTK:
OGL NDC: -1 < z < 1
MTK NDC: 0 < z < 1
I modified the projection matrix to address the NDC difference, as indicated here. Unfortunately, this modification to the projection matrix doesn't result in identical display to the old OGL code.
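For reference, the usual way to express that adjustment is to pre-multiply the OpenGL-style projection by a matrix that maps clip-space z from [-w, w] to [0, w]; here is a minimal C sketch (column-major storage, column-vector convention, illustrative mat4 type):
/* P_metal = A * P_gl, where A leaves x, y, w alone and remaps z' = 0.5*z + 0.5*w. */
typedef struct { float m[16]; } mat4;   /* column-major */

static mat4 mat4_multiply(mat4 a, mat4 b) {   /* returns a * b */
    mat4 c;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a.m[k * 4 + row] * b.m[col * 4 + k];
            c.m[col * 4 + row] = s;
        }
    return c;
}

static mat4 gl_projection_to_metal(mat4 glProj) {
    mat4 adjust = { {
        1, 0, 0,    0,   /* column 0 */
        0, 1, 0,    0,   /* column 1 */
        0, 0, 0.5f, 0,   /* column 2 */
        0, 0, 0.5f, 1    /* column 3 */
    } };
    return mat4_multiply(adjust, glProj);
}
Since GLKMatrix4 stores its elements in column-major order, the same adjustment should be directly applicable to the projMatrix shown in Edit 1 below.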
I'm struggling to even know what else to try.
Background
For reference, here's some misc background information:
The view matrix is very simple (identity matrix); i.e. camera is at (0, 0, 0) and looking toward (0, 0, -1)
In the legacy OpenGL code, I used GLKMatrix4MakeFrustum to produce the projection matrix, using the screen bounds for left, right, top, bottom, and near=1, far=1000
I stripped the scene down to bare bones while debugging and below are 2 images, the first from legacy OGL code and the second from MTK, both just showing the "ground" plane with a debug texture and a black background.
Any ideas about what else might need to change to get to identical display in MetalKit would be greatly appreciated.
Screenshots
OpenGL (legacy)
MetalKit
Edit 1
I tried to extract code relevant to calculation and use of the projection matrix:
float aspectRatio = 1.777; // iPhone 8 device
float top = 1;
float bottom = -1;
float left = -aspectRatio;
float right = aspectRatio;
float RmL = right - left;
float TmB = top - bottom;
float nearZ = 1;
float farZ = 1000;
GLKMatrix4 projMatrix = { 2 * nearZ / RmL, 0, 0, 0,
                          0, 2 * nearZ / TmB, 0, 0,
                          0, 0, -farZ / (farZ - nearZ), -1,
                          0, 0, -farZ * nearZ / (farZ - nearZ), 0 };
GLKMatrix4 viewMatrix = ...; // Identity matrix: camera at origin, looking at (0, 0, -1), yUp=(0, 1, 0);
GLKMatrix4 modelMatrix = ...; // Different for various models, but even when this is the identity matrix in old/new code the visual output is different
GLKMatrix4 mvpMatrix = GLKMatrix4Multiply(projMatrix, GLKMatrix4Multiply(viewMatrix, modelMatrix));
...
GLKMatrix4 x = mvpMatrix; // rename for brevity below
float mvpMatrixArray[16] = {x.m00, x.m01, x.m02, x.m03, x.m10, x.m11, x.m12, x.m13, x.m20, x.m21, x.m22, x.m23, x.m30, x.m31, x.m32, x.m33};
// making MVP matrix available to vertex shader
[renderCommandEncoder setVertexBytes:&mvpMatrixArray
length:16 * sizeof(float)
atIndex:1]; // vertex data is at "0"
[renderCommandEncoder setVertexBuffer:vertexBuffer
offset:0
atIndex:0];
...
[renderCommandEncoder drawPrimitives:MTLPrimitiveTypeTriangleStrip
vertexStart:0
vertexCount:4];
Sadly this issue ended up being due to a bug in the vertex shader that was pushing all geometry +1 on the Z axis, leading to the visual differences.
For any future OpenGL-to-Metal porters: the projection matrix changes above, accounting for the differences in normalized device coordinates, are enough.
Without seeing the code it's hard to say what the problem is. One of the most common issues could be a wrongly configured viewport:
// Set the region of the drawable to draw into.
[renderEncoder setViewport:(MTLViewport){0.0, 0.0, _viewportSize.x, _viewportSize.y, 0.0, 1.0 }];
The default values for the viewport are:
originX = 0.0
originY = 0.0
width = w
height = h
znear = 0.0
zfar = 1.0
*In Metal terms, znear corresponds to MinZ and zfar to MaxZ in the description quoted below.
MinZ and MaxZ indicate the depth-ranges into which the scene will be
rendered and are not used for clipping. Most applications will set
these members to 0.0 and 1.0 to enable the system to render to the
entire range of depth values in the depth buffer. In some cases, you
can achieve special effects by using other depth ranges. For instance,
to render a heads-up display in a game, you can set both values to 0.0
to force the system to render objects in a scene in the foreground, or
you might set them both to 1.0 to render an object that should always
be in the background.

Centering all of the points in iOS OpenGL ES app

I have an OpenGL view that displays a set of 3D points with some basic shaders:
// Fragment Shader
static const char* PointFS = STRINGIFY
(
    void main(void)
    {
        gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0);
    }
);
// Vertex Shader
static const char* PointVS = STRINGIFY
(
    uniform mediump mat4 uProjectionMatrix;
    attribute mediump vec4 position;
    void main(void)
    {
        gl_Position = uProjectionMatrix * position;
        gl_PointSize = 3.0;
    }
);
And the MVP matrix is calculated as:
- (void)setMatrices
{
    // ModelView Matrix
    GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
    modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, 2, 2, 2);

    // Projection Matrix
    const GLfloat aspectRatio = (GLfloat)(self.view.bounds.size.width) / (GLfloat)(self.view.bounds.size.height);
    const GLfloat fieldView = GLKMathDegreesToRadians(90.0f);
    const GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(fieldView, aspectRatio, 0.1f, 10.0f);

    glUniformMatrix4fv(self.pointShader.uProjectionMatrix, 1, 0, GLKMatrix4Multiply(projectionMatrix, modelViewMatrix).m);
}
This works fine, but I have a set of 500 points and I see only a few.
How do I scale/translate the MVP matrix to display all of them (they are a dynamic set)? Ideally the "centroid" should be at the origin, and all of the points visible. It should be able to adapt to rotations of the view (gestures are the next step I want to implement).
Seeing how you present this, you might need quite a lot... I guess the best approach might be using "look at": the point you are looking at is (0,0,0) as you stated, the camera position should probably be (0,0,Z), and up is (0,1,0). So the only issue here is the Z component of the camera position.
If you start Z at, for instance, -.1 and then iterate through all the points, tan(fieldView*.5f) * (p.z-Z) >= point.y must hold for a point to be visible. So you can compute Z1 = p.z - (point.y / tan(fieldView*.5f)), and if Z1 < Z then Z = Z1. This check is only for positive Y; you also need the same for negative Y and for +-X. Those equations are very similar, though when checking X you should also take the screen aspect ratio into account.
This procedure should give you the smallest field possible to see all the points (with the given limitations, such as looking towards (0,0,0)), but it is far from the simplest. You also need to consider whether the equation still works if p.z < -Z.
Another, somewhat easier approach is to generate the smallest cube around the centre which holds all the points: iterate through the points and find the coordinate with the largest absolute value (any of X, Y, or Z). When you have it, use a frustum instead of a perspective matrix, so that all the rect parameters (top, bottom, left, and right) are generated from this value as +-largest. Then you need to compute the translation, which for a 90-degree field of view is Z = (largest*.5). Z is the zNear for the frustum; then also translate the matrix by -(Z+largest). Again, one pair of the frustum coordinates must be multiplied by the screen aspect ratio.
In any case, watch out what your zFar is; having it at only 10.0f might be a bit too short in your case. Until you need the depth buffer, you should not worry about that value being too large.
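Here is a minimal C sketch of the bounding-cube idea (illustrative names only; the exact frustum and translation values depend on the conventions described above):
#include <math.h>

typedef struct { float x, y, z; } Point3;

/* Find the centroid and the largest per-axis extent of the point cloud, then
   derive a camera distance that keeps every point inside a given vertical
   field of view.  Plain helper, not GLKit code. */
static float fit_distance(const Point3* pts, int count, float fovRadians,
                          float aspect, Point3* outCentroid) {
    Point3 c = { 0, 0, 0 };
    for (int i = 0; i < count; ++i) {
        c.x += pts[i].x; c.y += pts[i].y; c.z += pts[i].z;
    }
    c.x /= count; c.y /= count; c.z /= count;
    *outCentroid = c;   /* translate the model by -centroid before viewing */

    float largest = 0.0f;
    for (int i = 0; i < count; ++i) {
        float dx = fabsf(pts[i].x - c.x);
        float dy = fabsf(pts[i].y - c.y);
        float dz = fabsf(pts[i].z - c.z);
        if (dx > largest) largest = dx;
        if (dy > largest) largest = dy;
        if (dz > largest) largest = dz;
    }

    /* Vertically the half-extent visible at distance d is d * tan(fov/2);
       horizontally it is d * tan(fov/2) * aspect.  Pick d so the cube fits
       both, then back off by the cube's own half-depth so its near face is
       not clipped. */
    float halfTan = tanf(fovRadians * 0.5f);
    float dV = largest / halfTan;
    float dH = largest / (halfTan * aspect);
    float d = (dV > dH ? dV : dH) + largest;
    return d;   /* place the camera at (cx, cy, cz + d), looking at the centroid */
}
With a distance like this you would also want zNear/zFar to bracket roughly [d - largest, d + largest], which ties back to the zFar caveat above.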

Resources