I know how to render an object using a rotation matrix R1, but what I want to know is how to save the resulting frame into a texture, then rotate the object using the mouse and save the second frame into another texture. How can I do this in WebGL?
Any help will be appreciated.
I'm learning the Vulkan API, and I came across a little "problem":
Currently my program is able to draw, using the Projection-View-Model matrix transformation, a cube at the axis origin.
I'm using 3 images/imageViews/framebuffers, so for each transformation matrix I have a vector of size 3 that holds it (one per image), and everything works perfectly (no errors from the validation layers, etc.). The problem is:
I now want to draw another object near my cube, so I thought I just had to update the model matrix twice every frame: the first time to position the cube, the second time for the other object. But this cannot work, because the cube isn't drawn immediately when recording the command buffer, only when submitting it, so in the end the command buffer would simply use the second update of the model matrix for both the cube and the other object.
How to handle this situation?
Thanks.
Make the uniform buffer bigger, put the second matrix after the first, and point the second draw at the correct offset in the uniform buffer.
You can use either separate descriptors or dynamic offsets.
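For example, with the dynamic-offset variant the recording could look roughly like this (a minimal sketch, assuming the descriptor is VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC, the cube is drawn non-indexed with 36 vertices, and alignedSize is the size of one matrix rounded up to minUniformBufferOffsetAlignment; the function and variable names are just placeholders):

#include <vulkan/vulkan.h>

// Record two draws that read different model matrices from the same uniform buffer.
void recordTwoDraws(VkCommandBuffer cmd, VkPipelineLayout layout,
                    VkDescriptorSet set, uint32_t alignedSize)
{
    // First draw: dynamic offset 0 selects the first model matrix (the cube).
    uint32_t offset = 0;
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout,
                            0, 1, &set, 1, &offset);
    vkCmdDraw(cmd, 36, 1, 0, 0);

    // Second draw: same descriptor set, but the offset now points at the second matrix.
    offset = alignedSize;
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout,
                            0, 1, &set, 1, &offset);
    vkCmdDraw(cmd, 36, 1, 0, 0);
}

With separate descriptors the idea is the same, except each draw binds its own descriptor set pointing at its region of the buffer.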
I am trying to play a video using OpenGL ES 2.0 on iOS. I have not been able to find sample code or a starting point for how to achieve this. Can anybody help me with this?
What you are looking for is getting a raw buffer for the video in real time. I believe you need to look into AVFoundation and somehow extract the CVPixelBufferRef. If I remember correctly you have a few ways: one is on demand at a specific time, another is for processing, where you get a fast iteration over the frames in a block, and the one you probably need is receiving the frames in real time. With this you can extract a raw RGB buffer, which needs to be pushed to the texture and then drawn to the render buffer.
I suggest you create a texture once (per video) and try making it as small as possible while ensuring that the video frame will fit. You might need POT (power-of-two) textures, so to get the texture dimension from the video width you need something like:
GLint textureWidth = 1;
while (textureWidth < videoWidth) textureWidth <<= 1; // doubles until it covers the video width
So the texture size is expected to be larger than the video. To push the data to the texture you then need to use glTexSubImage2D, which expects a pointer to your raw data and the rectangle where to store it, which here is (0, 0, sampleWidth, sampleHeight). The texture coordinates must then be computed so they are not in the range [0, 1] but rather, for x, [0, sampleWidth/textureWidth] (and likewise [0, sampleHeight/textureHeight] for y).
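The upload step could look roughly like this (a minimal sketch, assuming the CVPixelBufferRef arrives as kCVPixelFormatType_32BGRA, that GL_BGRA_EXT from GL_APPLE_texture_format_BGRA8888 is available, and that the buffer has no row padding; uploadSample and the variable names are just placeholders):

#include <CoreVideo/CoreVideo.h>
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

// Copy one video sample into the top-left corner of the (larger, POT) texture.
static void uploadSample(GLuint texture, CVPixelBufferRef sample)
{
    CVPixelBufferLockBaseAddress(sample, kCVPixelBufferLock_ReadOnly);

    GLsizei sampleWidth  = (GLsizei)CVPixelBufferGetWidth(sample);
    GLsizei sampleHeight = (GLsizei)CVPixelBufferGetHeight(sample);
    const void *pixels   = CVPixelBufferGetBaseAddress(sample);

    // Only the sampleWidth x sampleHeight region is overwritten; the rest of the
    // POT texture keeps whatever was there before.
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, sampleWidth, sampleHeight,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);

    CVPixelBufferUnlockBaseAddress(sample, kCVPixelBufferLock_ReadOnly);
}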
So then you just need to put it all together:
Have a system that keeps generating the raw video sample buffers
Generate a texture large enough to fit the video size
On each new sample, update the texture using glTexSubImage2D (watch out for threads)
After the data is loaded into the texture, draw it as a full-screen rectangle (watch out for threads)
You might need to watch out for video orientation and transformation. If possible, test your system with a few videos that have been recorded on the device in different orientations. I think there is now support for receiving the buffers already correctly oriented, but by default the samples at least used to be "wrong": a portrait-recorded video still had its samples in landscape, and a transformation matrix or orientation was supplied with the asset instead.
I am trying to make something similar to this: https://www.youtube.com/watch?v=D2Kb3ryfGNc
I succeeded in detecting the laser position, but now I can't figure out how to paint where the laser has been.
Do I need to draw lines showing where the laser has been in each frame and add them to the camera stream frame, to make sure the lines stay painted?
Here's the thing: when we stream continuous video using an OpenCV Mat object, it displays one frame after another, so the info of the nth frame is lost when the (n+1)th frame is received.
What you need are two Mat objects: one to stream the camera (say Mat_cam), and the other to draw the laser trajectory (Mat_traj). Mat_cam will be used to track the laser position frame by frame, using any standard colour-thresholding algorithm. The video even says that the laser should be bright, meaning that jimez86 might be using a white colour threshold followed by largest-blob localization.
As a new laser position is received in the nth frame, draw a corresponding circle on Mat_traj. When the next frame is received, Mat_cam is updated and holds a new laser position, but Mat_traj stays the same, since it is not cleared/refreshed on every 'for' loop iteration, so it accumulates the whole trajectory. Adding Mat_traj and Mat_cam with weighted addition gives you the desired result. Follow the algorithm below:
#include <opencv2/opencv.hpp>
using namespace cv;

Point getLaserCentre(const Mat &frame); // you'll be defining this function

int main()
{
    Mat Mat_traj(480, 640, CV_8UC3, Scalar(0, 0, 0)), Mat_cam, Mat_res; // trajectory image: same size/type as the camera frames (assuming 640x480 capture)
    VideoCapture cam(0);
    for (;;)
    {
        cam >> Mat_cam;
        Point laserCentre = getLaserCentre(Mat_cam);
        circle(Mat_traj, laserCentre, 2, Scalar(0, 0, 255), -1);  // mark the new laser position on the trajectory image
        addWeighted(Mat_cam, 0.7, Mat_traj, 0.3, 0.0, Mat_res);   // blend camera frame and accumulated trajectory
        imshow("out", Mat_res);
        waitKey(10);
    }
}
I'm new to DX programming and I have a problem with textures.
I'm making a 2D engine. I implemented simple sprite batching: I can write to my dynamic buffer, set UV coordinates, and draw some sprites on the screen.
Everything works fine when I'm using a single texture, but when I want to change textures and draw new sprites, nothing works anymore.
What I'm doing is loading the textures with D3DX11CreateShaderResourceViewFromFile and storing the pointers.
Then in the rendering loop, when I'm done with one texture, I use:
PSSetShaderResources(0, 1, &texture_pointer)
to switch to another texture, but this last call crashes; it works only with a single texture.
What am I supposed to do to swap from one texture to another?
Thank you!
I am new to OpenGL ES 2.0 and GLKit and would like to ask a question.
I tried to find a good example of a 2D camera but couldn't find any, so I hope you guys can help me :D
--
1)
Firstly, I have an object, and I store its position in a GLKVector2.
I would like to know how to draw it in world space.
2)
I have a "2D camera" class, stored as a CGRect with its world position and size.
Its size may change depending on the "zoom" I want.
Is there an easy way to draw the objects from world space into this 2D camera?
Is any optimization required too, such as not drawing objects outside the 2D camera
and clipping objects that are partly outside of it?
3)
Once the objects are drawn into this 2D camera, how do I apply effects like clipping/scaling/etc. so that the result fits the device screen, and how do I draw it on the screen?
--
I have seen many things about the model, view, and projection matrices, but I don't get them. I have only done XNA/Android bitmap drawing calls, i.e. drawing everything onto a Bitmap and then resizing that Bitmap onto the screen.