I'm working with OpenGL on iOS and Android. What I'm trying to do is draw a model (a sphere), set the camera/eye coordinates inside the sphere, apply a texture, and enable panning and zoom to achieve a 360-degree effect. I already built this for Android using OpenGL ES 1.0, but I was having a lot of problems on iOS, so there I built it with OpenGL ES 2.0. Everything is set up and working, but I'm having a problem with the panning. To rotate the model-view matrix I apply rotate transformations. They work, but changing the rotation around one axis messes up the other two: if I apply rotations around both the X and Y axes, the sphere rotates as if some transformation had been applied around the Z axis, and the texture ends up upside-down or displayed diagonally. I'm doing the exact same transformations on Android and I don't have any problem there. Does anybody have experience with this issue? Any suggestion, clue, code, or article? I think that when I apply the first transformation, the coordinate space changes, and the next transformation is no longer applied the way I expect.
Here's my iOS code:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT);

    GLKMatrixStackPush(_obStack);
    GLKMatrixStackRotate(_obStack,
                         GLKMathDegreesToRadians(_obAngleX),
                         0.0f, 1.0f, 0.0f);
    GLKMatrixStackRotate(_obStack,
                         GLKMathDegreesToRadians(_obAngleY),
                         1.0f, 0.0f, 0.0f);
    GLKMatrixStackScale(_obStack,
                        _obScaleFactor,
                        _obScaleFactor,
                        _obScaleFactor);

    self.obEffect.transform.modelviewMatrix = GLKMatrixStackGetMatrix4(_obStack);

    // Prepare effect
    [self.obEffect prepareToDraw];

    // Draw model
    glDrawArrays(GL_TRIANGLES, 0, panoramaVertices);

    GLKMatrixStackPop(_obStack);
    self.obEffect.transform.modelviewMatrix = GLKMatrixStackGetMatrix4(_obStack);
}
This is my Android code:
public void onDrawFrame(GL10 arGl) {
    arGl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    arGl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

    arGl.glPushMatrix();
    arGl.glRotatef(obAngleY, 1.0f, 0.0f, 0.0f);
    arGl.glRotatef(obAngleX, 0.0f, 1.0f, 0.0f);
    arGl.glScalef(obScaleFactor, obScaleFactor, obScaleFactor);

    if (obModel != null) {
        obModel.draw(arGl);
    }

    arGl.glPopMatrix();
}
I tried several implementations over and over again, every time with no success. In the end, I went back and implemented the OpenGL ES 1.0 version for iOS; under that scenario, rotating the matrix doesn't skew the axes, so that became my solution: ship the OpenGL ES 1.0 version.
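For what it's worth, the symptom described is inherent to composing per-axis Euler rotations: the second rotation happens around an axis the first rotation has already moved, so X-then-Y and Y-then-X give different results. A minimal platform-independent Python sketch (plain 4x4 matrices, no GL involved) shows the two orders diverging:

```python
import math

def rot_x(deg):
    """4x4 rotation about the X axis (row-major, acting on column vectors)."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[1, 0,  0, 0],
            [0, c, -s, 0],
            [0, s,  c, 0],
            [0, 0,  0, 1]]

def rot_y(deg):
    """4x4 rotation about the Y axis."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[ c, 0, s, 0],
            [ 0, 1, 0, 0],
            [-s, 0, c, 0],
            [ 0, 0, 0, 1]]

def matmul(a, b):
    """Plain 4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Applied to a column vector v, matmul(A, B) applies B first, then A.
xy = matmul(rot_x(30), rot_y(45))  # Y rotation first, then X
yx = matmul(rot_y(45), rot_x(30))  # X rotation first, then Y

print(xy == yx)  # False: the two orders produce different matrices
```

This is why the multiplication order of the two rotate calls has to match between the iOS and Android code paths; a common alternative is to accumulate each frame's small rotation into a stored orientation matrix instead of keeping raw X/Y angles.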
I am currently developing a new mechanism to visualize laser-beam hits on spaceships' shields. The development on the CPU side is done and the vertex shader is working fine, but I have an issue with the pixel shader: any transparency value below 0.5 is invisible.
The following pixel shader is incredibly simple: if the pixel is inside the hit radius, I show a semi-transparent blue pixel; otherwise nothing is shown.
float4 PS(PS_IN input) : SV_Target
{
    if (input.impactDistances.x > 0.0f && input.impactDistances.x < 2.0f)
    {
        return float4(0.0f, 0.0f, 1.0f, 0.5f);
    }

    return float4(1.0f, 1.0f, 1.0f, 0.0f);
}
This results in something like this (see blue area above the yellow arrow).
Now, if I change the line
return float4(0.0f, 0.0f, 1.0f, 0.5f);
to
return float4(0.0f, 0.0f, 1.0f, 0.4f);
then the impact areas are completely invisible and I can't think of anything that causes this behaviour. Do you have any idea?
If it helps, here are my settings for the blend state that I use for the entire game.
var blendStateDescription = new SharpDX.Direct3D11.BlendStateDescription();
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = SharpDX.Direct3D11.ColorWriteMaskFlags.All;
blendStateDescription.RenderTarget[0].SourceBlend = SharpDX.Direct3D11.BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].BlendOperation = SharpDX.Direct3D11.BlendOperation.Add;
blendStateDescription.RenderTarget[0].DestinationBlend = SharpDX.Direct3D11.BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = SharpDX.Direct3D11.BlendOption.Zero;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = SharpDX.Direct3D11.BlendOperation.Add;
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = SharpDX.Direct3D11.BlendOption.One;
blendStateDescription.AlphaToCoverageEnable = true;
_blendState = new SharpDX.Direct3D11.BlendState(_device, blendStateDescription);
_deviceContext.OutputMerger.SetBlendState(_blendState);
The line
blendStateDescription.AlphaToCoverageEnable = true;
is definitely what creates the issue in your case; setting it to false will apply correct alpha blending.
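To illustrate why that flag matters: with a plain SourceAlpha/InverseSourceAlpha blend, a 0.4-alpha pixel still contributes 40% of its color, but alpha-to-coverage instead converts alpha into a per-sample coverage mask, so with 4x MSAA alpha is effectively quantized to a handful of levels whose exact thresholds are implementation-defined. A rough Python sketch (the coverage model here is a simplification, not actual hardware behavior):

```python
def blend(src_rgb, src_a, dst_rgb):
    """Standard SrcAlpha / InvSrcAlpha blending: source over destination."""
    return tuple(s * src_a + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))

# A 0.4-alpha blue pixel over a black background is clearly visible:
print(blend((0.0, 0.0, 1.0), 0.4, (0.0, 0.0, 0.0)))  # (0.0, 0.0, 0.4)

def coverage_samples(alpha, num_samples=4):
    """Toy model of alpha-to-coverage: alpha becomes a whole sample count.
    Real hardware thresholds and dither patterns are implementation-defined."""
    return round(alpha * num_samples)

print(coverage_samples(0.5))  # 2 of 4 samples lit
print(coverage_samples(1.0))  # all 4 samples lit
```

The point is that alpha-to-coverage throws away the fractional alpha and keeps only a sample count, so low alphas can round down to zero coverage on some hardware, which matches the "invisible below 0.5" symptom.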
I am trying to implement a custom CIFilter to use with an SCNNode in my ARSCNView. Unfortunately it just creates a gray rectangle where the node should be on the screen. I have also tried built-in CIFilters to double check my code to no avail.
On some other SO post I read that CIFilter only works when OpenGL is selected as the renderingAPI for the SCNView, because Core Image doesn't play well with Metal, and as far as I can tell it is impossible to get ARSCNView to run with OpenGL. That post is from 2016, so I am wondering if anything has changed.
What I am trying to implement is outlining/highlighting an object on the screen to give the user feedback about object selection. I have achieved something usable by adding a shader modifier, but it gives limited control over shading, and I really don't want to take over all of the shading myself.
Below is my CIKernel for outlining, which works very well in Quartz Composer.
Any help and information is highly appreciated.
kernel vec4 outline(sampler src) {
    vec2 texturePos = destCoord();
    float alpha = 4.0f * sample(src, texturePos).a;
    float thickness = 5.0f;

    alpha -= sample(src, texturePos + vec2(thickness, 0.0f)).a;
    alpha -= sample(src, texturePos + vec2(-thickness, 0.0f)).a;
    alpha -= sample(src, texturePos + vec2(0.0f, thickness)).a;
    alpha -= sample(src, texturePos + vec2(0.0f, -thickness)).a;

    if (alpha > 0.9f) {
        vec4 resultCol = vec4(1.0f, 1.0f, 1.0f, alpha);
        return resultCol;
    } else {
        vec4 resultCol = sample(src, texturePos);
        return resultCol;
    }
}
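As an aside, the arithmetic in the kernel above is a simple alpha edge detector: four times the center alpha minus the four offset samples cancels to zero inside a uniformly opaque region and stays positive near an edge. A small Python sketch of the same logic on a toy alpha mask (the grid and the 3x3 opaque square are made up for illustration):

```python
def outline_alpha(a, x, y, thickness=1):
    """Discrete version of the kernel: 4 * center alpha minus 4 offset samples."""
    def s(px, py):
        # Out-of-bounds samples read as fully transparent.
        if 0 <= py < len(a) and 0 <= px < len(a[0]):
            return a[py][px]
        return 0.0
    return (4.0 * s(x, y) - s(x + thickness, y) - s(x - thickness, y)
            - s(x, y + thickness) - s(x, y - thickness))

# 5x5 alpha mask: an opaque 3x3 square on a transparent background.
mask = [[1.0 if 1 <= r <= 3 and 1 <= c <= 3 else 0.0 for c in range(5)]
        for r in range(5)]

print(outline_alpha(mask, 2, 2))  # 0.0 -> interior pixel, kernel keeps the source
print(outline_alpha(mask, 1, 1))  # 2.0 -> edge pixel, kernel draws the outline
```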
I also faced a similar problem. The cause was the following settings; CIFilter worked once I removed them.
I haven't analyzed the details, but I hope this helps!
sceneView.antialiasingMode = .multisampling4X
sceneView.contentScaleFactor = 1.3
I am debugging an issue in which a library I am using creates an OpenGL view, and triggers a memory warning.
One thing I noticed is that setting the view to a fraction of the window size causes it to work fine. When debugging the view via Xcode's interface debugger, I see that the bounds of the view go well past the bounds of the parent view. When printing the view in question I see this:
<RenderView: 0x140a61d10; frame = (5 0; 1019 728); transform = [1019, 0, 0, 728, 0, 0]; layer = <CALayer: 0x140ad0a40>>
I am unfamiliar with this, but from reading the CGAffineTransform docs it seems that the variables being set are "a" and "d" which correspond to the scale sx and sy.
So my question is: would this transform actually be displaying a view which is 1019*1019 x 728*728? Does this seem suspicious and likely a bug in the library, or is my understanding incorrect?
I am seeing this issue using Xcode 7, on multiple devices, currently testing on an iPad Pro 9.7 running 9.3.1.
Here's clarification on what goes where between a 2x3 CGAffineTransform and a GLKMatrix4.
The CGAffineTransform that represents the preferred or actual transform of your view or asset's dimensions and orientation (or aspect) translates to a GLKMatrix4 (or mat4, which is multiplied against the position in your vertex shader) like this:
CGAffineTransform preferredTransform = [videoTrack preferredTransform];

GLfloat preferredTransformMatrix[] = {
    preferredTransform.a, preferredTransform.b, preferredTransform.tx, 0.0,
    preferredTransform.c, preferredTransform.d, preferredTransform.ty, 0.0,
    0.0,                  0.0,                  1.0,                   0.0,
    0.0,                  0.0,                  0.0,                   1.0
};
This is just an example showing where each value of the 2x3 matrix goes in the 4x4 matrix.
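For reference, a CGAffineTransform maps points as x' = a*x + c*y + tx and y' = b*x + d*y + ty. A short Python sketch shows that the transform printed in the question, [1019, 0, 0, 728, 0, 0], is a pure scale by 1019 and 728, which is consistent with the view blowing far past its parent's bounds:

```python
def apply_affine(a, b, c, d, tx, ty, x, y):
    """CGAffineTransform point mapping: x' = a*x + c*y + tx, y' = b*x + d*y + ty."""
    return (a * x + c * y + tx, b * x + d * y + ty)

# The transform printed in the question, [1019, 0, 0, 728, 0, 0],
# is a pure scale: a = 1019 (sx), d = 728 (sy), no rotation or translation.
print(apply_affine(1019, 0, 0, 728, 0, 0, 1.0, 1.0))  # (1019.0, 728.0)
```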
I am trying to create a Unity augmented-reality application that works off the pose determined by ADF localization. I added the camera feed to the "Persistent State" demo, but my ADF-localized frame does not align with the camera feed. The ground always seems to be transformed to be above the horizon.
I had the same problem and I've found a solution. The problem is that in the PoseController script the horizon is not set correctly against the real-world horizon, but the AugmentedReality example does it correctly (in the ARScreen script). So in my AR application, things looked like they moved up as I moved far away from them.
To fix this, you can either start your application inside the AugmentedReality example as Guillaume said or make the next changes in the PoseController scripts:
-First of all, we have the matrix m_matrixdTuc, which is initialized as follows:
m_matrixdTuc = new Matrix4x4();
m_matrixdTuc.SetColumn(0, new Vector4(1.0f, 0.0f, 0.0f, 0.0f));
m_matrixdTuc.SetColumn(1, new Vector4(0.0f, 1.0f, 0.0f, 0.0f));
m_matrixdTuc.SetColumn(2, new Vector4(0.0f, 0.0f, -1.0f, 0.0f));
m_matrixdTuc.SetColumn(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
Well, we must rename it to m_matrixcTuc and change the -1.0f value:
m_matrixcTuc = new Matrix4x4();
m_matrixcTuc.SetColumn(0, new Vector4(1.0f, 0.0f, 0.0f, 0.0f));
m_matrixcTuc.SetColumn(1, new Vector4(0.0f, -1.0f, 0.0f, 0.0f));
m_matrixcTuc.SetColumn(2, new Vector4(0.0f, 0.0f, 1.0f, 0.0f));
m_matrixcTuc.SetColumn(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
This is because we're not going to use this matrix directly to get the camera's transform; we're going to do some operations on it first to get the true m_matrixdTuc.
-2nd, we must initialize these three variables:
private Matrix4x4 m_matrixdTuc;
// Device frame with respect to IMU frame.
private Matrix4x4 m_imuTd;
// Color camera frame with respect to IMU frame.
private Matrix4x4 m_imuTc;
(the true m_matrixdTuc and the other two are taken from the ARScreen script)
-3rd, we need the _SetCameraExtrinsics method from ARScreen:
/// <summary>
/// This function queries the camera extrinsics, for example the transformation
/// between the IMU and device frames. These extrinsics are used to transform the
/// pose from the color camera frame to the device frame. Because the extrinsics
/// are queried using GetPoseAtTime() with a desired frame pair, they can only be
/// queried after ConnectToService() is called.
///
/// The device frame with respect to the IMU frame is not directly queryable from
/// the API, so we use the IMU frame as an intermediate to get the device frame
/// with respect to the IMU frame.
/// </summary>
private void _SetCameraExtrinsics()
{
    double timestamp = 0.0;
    TangoCoordinateFramePair pair;
    TangoPoseData poseData = new TangoPoseData();

    // Get the transformation of the device frame with respect to the IMU frame.
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
    PoseProvider.GetPoseAtTime(poseData, timestamp, pair);

    Vector3 position = new Vector3((float)poseData.translation[0],
                                   (float)poseData.translation[1],
                                   (float)poseData.translation[2]);
    Quaternion quat = new Quaternion((float)poseData.orientation[0],
                                     (float)poseData.orientation[1],
                                     (float)poseData.orientation[2],
                                     (float)poseData.orientation[3]);
    m_imuTd = Matrix4x4.TRS(position, quat, new Vector3(1.0f, 1.0f, 1.0f));

    // Get the transformation of the color camera frame with respect to the IMU frame.
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_CAMERA_COLOR;
    PoseProvider.GetPoseAtTime(poseData, timestamp, pair);

    position = new Vector3((float)poseData.translation[0],
                           (float)poseData.translation[1],
                           (float)poseData.translation[2]);
    quat = new Quaternion((float)poseData.orientation[0],
                          (float)poseData.orientation[1],
                          (float)poseData.orientation[2],
                          (float)poseData.orientation[3]);
    m_imuTc = Matrix4x4.TRS(position, quat, new Vector3(1.0f, 1.0f, 1.0f));

    m_matrixdTuc = Matrix4x4.Inverse(m_imuTd) * m_imuTc * m_matrixcTuc;
}
Now, as you can see, we get the true value of the m_matrixdTuc variable, which will be used in the OnTangoPoseAvailable method.
I don't really understand the maths behind these methods, but I've found that it works perfectly in my application. Hope it works for you too :)
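The math in the last line of that method is ordinary coordinate-frame chaining: reading aTb as "frame b expressed in frame a", inverse(imuTd) * imuTc cancels the shared IMU frame and yields the camera frame expressed in the device frame. A Python sketch with made-up translation-only transforms (the offsets are hypothetical, just to show the cancellation):

```python
def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix (row-major)."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def matmul(a, b):
    """Plain 4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inverse_translation(m):
    """Inverse of a pure translation: negate the offset column."""
    return translation(-m[0][3], -m[1][3], -m[2][3])

# Hypothetical rig: device sits 1 unit along x from the IMU,
# color camera sits 3 units along x from the IMU.
imu_T_d = translation(1, 0, 0)  # device frame w.r.t. IMU frame
imu_T_c = translation(3, 0, 0)  # camera frame w.r.t. IMU frame

# Chain rule: d_T_c = inverse(imu_T_d) * imu_T_c -- the IMU frame cancels out.
d_T_c = matmul(inverse_translation(imu_T_d), imu_T_c)
print(d_T_c[0][3])  # 2: the camera sits 2 units from the device along x
```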
I already tried something similar. If I understand correctly, the objects you add to be augmented are rotated toward the top of the screen, and this is especially visible when they are far away, as they look lifted upward.
To make it work, I simply started my project inside the Experimental Augmented Reality app with the "red map marker": there, the camera feed is already aligned to the world properly.
I have been coding hard on a Direct3D 9 based game. Everything went excellently until I hit a big problem. I created a class that wraps the process of loading a mesh from a .x file. I successfully loaded a cube with only one face visible. In theory, that face should look like a square, but it is actually rendered as a rectangle. I am quite sure that there is something wrong with the D3DPRESENT_PARAMETERS structure. Below are only the most important lines of my application's initialization.
First part to be created is the focus window:
HWND hWnd = CreateWindowEx(0UL, L"NewFrontiers3DWindowClass", Title.c_str(), WS_POPUP | WS_EX_TOPMOST, 0, 0, 1280, 1024, nullptr, (HMENU)false, hInstance, nullptr);
Then I fill out the D3DPRESENT_PARAMETERS structure.
D3DDISPLAYMODE D3DMM;
SecureZeroMemory(&D3DMM, sizeof(D3DDISPLAYMODE));
if(FAILED(hr = Direct3D9->GetAdapterDisplayMode(Adapter, &D3DMM)))
{
    // Error is processed here
}
PresP.BackBufferWidth = D3DMM.Width;
PresP.BackBufferHeight = D3DMM.Height;
PresP.BackBufferFormat = BackBufferFormat;
PresP.BackBufferCount = 1U;
PresP.MultiSampleType = D3DMULTISAMPLE_NONE;
PresP.MultiSampleQuality = 0UL;
PresP.SwapEffect = D3DSWAPEFFECT_DISCARD;
PresP.hDeviceWindow = hWnd;
PresP.Windowed = false;
PresP.EnableAutoDepthStencil = EnableAutoDepthStencil;
PresP.AutoDepthStencilFormat = AutoDepthStencilFormat;
PresP.Flags = D3DPRESENTFLAG_DISCARD_DEPTHSTENCIL;
PresP.FullScreen_RefreshRateInHz = D3DMM.RefreshRate;
PresP.PresentationInterval = PresentationInterval;
Then the Direct3D9 device is created, followed by the SetRenderState functions.
Next, the viewport is assigned.
D3DVIEWPORT9 D3D9Viewport;
SecureZeroMemory(&D3D9Viewport, sizeof(D3DVIEWPORT9));
D3D9Viewport.X = 0UL;
D3D9Viewport.Y = 0UL;
D3D9Viewport.Width = (DWORD)D3DMM.Width;
D3D9Viewport.Height = (DWORD)D3DMM.Height;
D3D9Viewport.MinZ = 0.0f;
D3D9Viewport.MaxZ = 1.0f;
if(FAILED(Direct3D9Device->SetViewport(&D3D9Viewport)))
{
    // Error is processed here
}
After this initialization, I globally declare some parameters that will be used later.
D3DXVECTOR3 EyePt(0.0f, 0.0f, -5.0f), Up(0.0f, 1.0f, 0.0f), LookAt(0.0f, 0.0f, 0.0f);
D3DXMATRIX View, Proj, World;
The update function looks like this:
Mesh.Render(Direct3D9Device);
D3DXMatrixLookAtLH(&View, &EyePt, &LookAt, &Up);
Direct3D9Device->SetTransform(D3DTS_VIEW, &View);
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, 1.0f, 1.0f, 1000.f);
Direct3D9Device->SetTransform(D3DTS_PROJECTION, &Proj);
D3DXMatrixTranslation(&World, 0.0f, 0.0f, 0.0f);
Direct3D9Device->SetTransform(D3DTS_WORLD, &World);
The device is not a null pointer.
I recently realized that there is no difference between declaring and setting up a viewport and not doing so.
If there is anybody who can point me to the right answer, please help me solve this annoying problem.
If you don't set any transformation matrices, so the identity transformation is applied to your mesh, then the face of the cube will be stretched to the same shape as the viewport. If your viewport isn't square (e.g. it's the same size as the screen) then your cube's face also won't be square.
You can use a square viewport to work around this problem, but that will limit your rendering to just that square on the screen. If you want to render to the entire screen you'll need to set a suitable projection matrix. You can calculate a normal perspective projection matrix using D3DXMatrixPerspectiveFovLH. If you want an orthographic projection, where everything is the same size regardless of distance from the camera, then use D3DXMatrixOrthoLH to calculate the projection matrix. Note that if you use your viewport's width and height with the latter function it will shrink your cube: a unit-size cube will be rendered as a single pixel on the screen. You can either use a world or view transform to scale it up again, or use something like width/height and 1 as your width and height parameters to D3DXMatrixOrthoLH.
If you go with D3DXMatrixPerspectiveFovLH then you want something like this:
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, (float)D3DMM.Width / D3DMM.Height,
                           1.0f, 1000.f);
I think your problem is not in the D3DPP parameters but in your projection matrix. If you use D3DXMatrixPerspectiveFovLH, check that the aspect ratio is 1280.0f / 1024.0f = 1.25f.
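A quick sanity check on the numbers: 1280/1024 is 1.25, and with an aspect of 1.0 in the projection a square in view space covers equal fractions of NDC in x and y, but NDC is then mapped onto a non-square backbuffer, stretching it by width/height. A small Python sketch:

```python
width, height = 1280, 1024
aspect = width / height
print(aspect)  # 1.25

# With aspect = 1.0 in the projection, a square covers the same fraction of
# NDC in x and y, but NDC then maps onto a non-square viewport, so on screen
# the square is stretched horizontally by width/height.
square_ndc = 0.5                    # square spanning half of NDC on both axes
pixels_x = square_ndc * width / 2   # 320 px wide
pixels_y = square_ndc * height / 2  # 256 px tall
print(pixels_x / pixels_y)  # 1.25: rendered as a rectangle, not a square
```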