I have a video tag that plays a simple video. [works]
I have a 2D canvas playing the same video. [works]
OpenCV.js video processing (the canvas is the output, the video is the input) also works.
I have a three.js scene with a plane mesh:
texture = new THREE.CanvasTexture(this.$refs.testcanvas)
texture.needsUpdate = true;
materialLocal = new THREE.MeshBasicMaterial({ map: texture })
materialLocal.needsUpdate = true;
materialLocal.map.needsUpdate = true;
this.mainVideoMesh.material = materialLocal
this.mainVideoMesh.material.needsUpdate = true;
No help. I get just the first video frame as the texture, and then it stops updating.
At runtime I found:
this.scene.children[2].material.map.needsUpdate: undefined
Strange situation; any suggestions?
When using a video as a data source for a texture, the idea is to use the THREE.VideoTexture class. This type of texture will automatically manage its needsUpdate flag in order to ensure new frames are correctly displayed on your plane mesh.
Using THREE.VideoTexture requires that you use the video element as an argument, not the canvas.
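For instance (a sketch; the ref name and mesh field are assumed from the question's code, not verified):

```javascript
// Sketch: feed the <video> element itself to THREE.VideoTexture, which
// updates the texture automatically whenever a new video frame arrives.
// `this.$refs.video` and `this.mainVideoMesh` are assumed names.
const texture = new THREE.VideoTexture(this.$refs.video);
texture.minFilter = THREE.LinearFilter; // avoid mipmap issues with non-power-of-two video
this.mainVideoMesh.material = new THREE.MeshBasicMaterial({ map: texture });
```

If the OpenCV.js-processed canvas (rather than the raw video) must be displayed, a CanvasTexture can still work, but then `texture.needsUpdate = true` has to be set on every frame inside the render loop, since a CanvasTexture does not refresh itself.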
It seems there are lots of related topics out there, but I was not able to find the answer I'm searching for.
I'm using JavaCV and FFmpegFrameGrabber to grab an image from the middle of a video. If an mp4 file has a "Rotation" metadata field (like 90 or 270), the image I get is not oriented correctly. I wanted to get the orientation from FFmpegFrameGrabber, but could not find a way to do so.
Is there a way to tell FFmpegFrameGrabber to respect orientation, or is there a way to get this value somehow using JavaCV?
Just in case, here is the code that I have so far:
FFmpegFrameGrabber g = new FFmpegFrameGrabber(input);
g.start();
g.getVideoMetadata(); // <-- this thing is empty
try {
    g.setFrameNumber(g.getLengthInFrames() / 2);
    Java2DFrameConverter converter = new Java2DFrameConverter();
    Frame frame = g.grabImage();
    BufferedImage bufferedImage = converter.convert(frame);
    ImageIO.write(bufferedImage, "jpeg", output);
} finally {
    g.stop();
}
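One workaround, once the rotation value is known some other way (e.g. read with ffprobe, or from a JavaCV release that exposes stream-level metadata): rotate the extracted BufferedImage yourself with plain java.awt. A sketch — `RotationFix` is a made-up helper name, and the rotation value is assumed to come from elsewhere:

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

// Hypothetical helper (not part of JavaCV): applies a clockwise display
// rotation, in degrees, to a frame already converted to a BufferedImage.
class RotationFix {
    static BufferedImage rotate(BufferedImage src, int degrees) {
        int w = src.getWidth(), h = src.getHeight();
        boolean swap = (degrees % 180) != 0; // 90/270 swap width and height
        int type = src.getType() == BufferedImage.TYPE_CUSTOM
                ? BufferedImage.TYPE_INT_ARGB : src.getType();
        BufferedImage dst = new BufferedImage(swap ? h : w, swap ? w : h, type);
        AffineTransform at = new AffineTransform();
        // rotate about the destination center, mapping the source into it
        at.translate(dst.getWidth() / 2.0, dst.getHeight() / 2.0);
        at.rotate(Math.toRadians(degrees));
        at.translate(-w / 2.0, -h / 2.0);
        new AffineTransformOp(at, AffineTransformOp.TYPE_BILINEAR).filter(src, dst);
        return dst;
    }
}
```

With the code above, `BufferedImage upright = RotationFix.rotate(bufferedImage, 90);` before the `ImageIO.write` call would produce an upright image for a 90-degree-rotated file.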
I am attempting to place a number of overlays (textures) on top of an existing texture. For the most part, this works fine.
However, for the life of me, I can't figure out why the output of this is sporadically "flickering" in my drawRect method of my MTKView. Everything seems fine; I do further processing on theTexture (in a kernel shader) after I loop with my placing my overlays. For some reason, I feel like this encoding is ending early and not enough work is getting done on it.
To clarify, everything starts out fine but about 5 seconds in, the flickering starts and gets progressively worse. For debugging purposes (right now, anyways) that loop runs only once -- there is only one overlay element. The input texture (theTexture) is bona-fide every time before I start (created with a descriptor where storageMode is MTLStorageModeManaged and usage is MTLTextureUsageUnknown).
I've also tried stuffing the encoder instantiation/ending inside the loop; no difference.
Can someone help me see what I'm doing wrong?
id<MTLTexture> theTexture; // valid input texture as "background"
MTLRenderPassDescriptor *myRenderPassDesc = [MTLRenderPassDescriptor renderPassDescriptor];
myRenderPassDesc.colorAttachments[0].texture = theTexture;
myRenderPassDesc.colorAttachments[0].storeAction = MTLStoreActionStore;
myRenderPassDesc.colorAttachments[0].loadAction = MTLLoadActionLoad;
id<MTLRenderCommandEncoder> myEncoder = [commandBuffer renderCommandEncoderWithDescriptor:myRenderPassDesc];
MTLViewport viewPort = {0.0, 0.0, 1920.0, 1080.0, -1.0, 1.0};
vector_uint2 imgSize = vector2((u_int32_t)1920,(u_int32_t)1080);
[myEncoder setViewport:viewPort];
[myEncoder setRenderPipelineState:metalVertexPipelineState];
for (OverlayWrapper *ow in overlays) {
    id<MTLTexture> overlayTexture = ow.overlayTexture;
    VertexRenderSet *v = [ow getOverlayVertexInfoPtr];
    NSUInteger vSize = v->metalVertexCount*sizeof(AAPLVertex);
    id<MTLBuffer> mBuff = [self.device newBufferWithBytes:v->metalVertices
                                                   length:vSize
                                                  options:MTLResourceStorageModeShared];
    [myEncoder setVertexBuffer:mBuff offset:0 atIndex:0];
    [myEncoder setVertexBytes:&imgSize length:sizeof(imgSize) atIndex:1];
    [myEncoder setFragmentTexture:overlayTexture atIndex:0];
    [myEncoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:v->metalVertexCount];
}
[myEncoder endEncoding];
// do more work (kernel shader) with "theTexture"...
UPDATE #1:
I've attached an image of a "good" frame, with the vertex area (lower right) shown. My encoder is responsible for placing the green stand-in "image" on top of the video frame theTexture at 30fps, which it does. Just to clarify, theTexture is created for each frame (from a CoreVideo pixel buffer). After this encoder, I only read from theTexture in a kernel shader to adjust brightness -- all that is working just fine.
My problems must exist elsewhere, as the video frames stop flowing (though the audio keeps going) and I end up alternating between 2 or 3 previous frames once this encoder is inserted (hence, the flicker). I believe now that my video pixel buffer vendor is being inadvertently supplanted by this "overlay" vendor.
If I comment out this entire vertex renderer, my video frames flow through just fine; it's NOT a problem with my video frame vendor.
UPDATE #2:
Here is the declaration of my rendering pipeline:
MTLRenderPipelineDescriptor *p = [[MTLRenderPipelineDescriptor alloc] init];
if (!p)
    return nil;
p.label = @"Vertex Mapping Pipeline";
p.vertexFunction = [metalLibrary newFunctionWithName:@"vertexShader"];
p.fragmentFunction = [metalLibrary newFunctionWithName:@"samplingShader"];
p.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
NSError *error;
metalVertexPipelineState = [self.device newRenderPipelineStateWithDescriptor:p
                                                                       error:&error];
if (error || !metalVertexPipelineState)
    return nil;
Here is the texture descriptor used for creation of theTexture:
metalTextureDescriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                                            width:width
                                                                           height:height
                                                                        mipmapped:NO];
metalTextureDescriptor.storageMode = MTLStorageModePrivate;
metalTextureDescriptor.usage = MTLTextureUsageUnknown;
I haven't included the AAPLVertex and the vertex/fragment functions because of this: if I just comment out the OverlayWrapper loop in my rendering code (i.e., don't even set vertex buffers or draw primitives), the video frames still flicker. The video is still playing, but only 2-3 frames or so are playing in a continuous loop, from the time that this encoder ran.
I've also added this code after the [... endEncoding] and changed the texture storage mode to MTLStorageModeManaged -- still, no dice:
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[blitEncoder synchronizeResource:crossfadeOutput];
[blitEncoder endEncoding];
To clarify a few things: the subsequent compute shader uses theTexture for input only. These are video frames; thus, theTexture is re-created each time. Before it goes through this render stage, it has a bona-fide "background".
UPDATE #3:
I got this working, if by unconventional means.
I used this vertex shader to render my overlay onto a transparent background of a newly-created blank texture, specifically with my loadAction being MTLLoadActionClear with a clearColor of (0,0,0,0).
I then mixed this resulting texture with my theTexture with a kernel shader. I should not have to do this, but it works!
I had the same problem and wanted to explore a simpler solution before attempting @zzyzy's. This solution is also somewhat unsatisfying, but at least it seems to work.
The key (but inadequate in and of itself) is to reduce the buffering on the Metal layer:
metalLayer_.maximumDrawableCount = 2
Second, once the buffering was reduced, I found I had to go through a render/present/commit cycle to draw a trivial, invisible item with .clear set on the render pass descriptor — pretty straightforward:
renderPassDescriptor.colorAttachments[0].loadAction = .clear
(That there were a few invisible triangles drawn is probably irrelevant; it is probably the MTLLoadActionClear attribute that differentiates the pass. I used the same clear color as @zzyzy above, and I think this echoes the above solution.)
Third, I found I had to run the code through that render/present/commit cycle a second time — i.e., twice in a row. Of the three, this seems the most arbitrary and I don't pretend to understand it, but the three together worked for me.
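Put together, one such render/present/commit clear cycle might look like this (a sketch in Swift; the queue and layer names are assumed, and this encodes the workaround described above, not a documented fix):

```swift
import Metal
import QuartzCore

// Sketch: one render/present/commit cycle whose only job is to clear the
// drawable via the .clear load action -- run twice in a row, per the
// workaround above. No geometry is drawn.
func clearCycle(queue: MTLCommandQueue, layer: CAMetalLayer) {
    guard let drawable = layer.nextDrawable(),
          let commandBuffer = queue.makeCommandBuffer() else { return }
    let rpd = MTLRenderPassDescriptor()
    rpd.colorAttachments[0].texture = drawable.texture
    rpd.colorAttachments[0].loadAction = .clear
    rpd.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
    rpd.colorAttachments[0].storeAction = .store
    guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: rpd) else { return }
    encoder.endEncoding() // the .clear load action already did the work
    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```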
I'm currently working on a little project where I render a cube map with WebGL and then apply sounds with the Web Audio API.
Since the project is very large, I'll just explain what I'm looking for. When I load an audio file, the sound gets visualized (it looks like a cube). The audio listener position is ALWAYS at (0,0,0). What I have done so far is create a camera (with the gl math library) with lookAt and perspective, and when I rotate the camera away from the audio-emitting cube, the audio played should sound different.
How am I doing this?
Every frame I set the orientation of the PannerNode (via panner.setOrientation) to the up vector of the camera. Here is the per-frame update method (for the sound):
update(camera) {
    this._upVec = vec3.copy(this._upVec, camera.upVector);
    //vec3.transformMat4(this._upVec, this._upVec, camera.vpMatrix);
    //vec3.normalize(this._upVec, this._upVec);
    this._sound.panner.setOrientation(this._upVec[0], this._upVec[1], this._upVec[2]);
}
And here is the updateViewProjMatrix method from my Camera class, where I update the orientation of the listener:
updateViewProjMatrix() {
    let gl = Core.mGetGL();
    this._frontVector[0] = Math.cos(this._yaw) * Math.cos(this._pitch);
    this._frontVector[1] = Math.sin(this._pitch);
    this._frontVector[2] = Math.sin(this._yaw) * Math.cos(this._pitch);
    vec3.normalize(this._lookAtVector, this._frontVector);
    vec3.add(this._lookAtVector, this._lookAtVector, this._positionVector);
    mat4.lookAt(this._viewMatrix, this._positionVector, this._lookAtVector, this._upVector);
    mat4.perspective(this._projMatrix, this._fov * Math.PI / 180, gl.canvas.clientWidth / gl.canvas.clientHeight, this._nearPlane, this._farPlane);
    mat4.multiply(this._vpMatrix, this._projMatrix, this._viewMatrix);
    Core.getAudioContext().listener.setOrientation(this._lookAtVector[0], this._lookAtVector[1], this._lookAtVector[2], 0, 1, 0);
}
Is this the right way? I can hear that the sound is different when I rotate the camera, but I am not sure. And do I have to multiply the resulting up vector by the current view-projection matrix?
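One thing worth double-checking: AudioListener.setOrientation() expects a normalized forward *direction*, while `this._lookAtVector` above is front + position, i.e. a point in world space. A sketch with plain arrays (no gl math library) of deriving the direction from the same yaw/pitch:

```javascript
// Sketch: derive the listener's forward direction from yaw/pitch.
// setOrientation() wants a direction vector, so pass the normalized
// front vector itself, not front + position (which is a point).
function listenerForward(yaw, pitch) {
  const front = [
    Math.cos(yaw) * Math.cos(pitch),
    Math.sin(pitch),
    Math.sin(yaw) * Math.cos(pitch),
  ];
  const len = Math.hypot(front[0], front[1], front[2]);
  return front.map((c) => c / len);
}
// usage (names from the question's code):
// const f = listenerForward(this._yaw, this._pitch);
// Core.getAudioContext().listener.setOrientation(f[0], f[1], f[2], 0, 1, 0);
```

No view-projection multiply is needed here: listener and panner orientations live in the same world space as the positions fed to setPosition().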
I'm using OpenCV to split a video into frames. For that I need the fps and duration. Both of these values return 1 when queried via cvGetCaptureProperty.
I've made a hack where I use AVURLAsset to get the fps and duration, but when I combine that with openCV I get only a partial video. It seems like it's missing frames.
This is my code right now:
while (cvGrabFrame(capture)) {
    frameCounter++;
    if (frameCounter % (int)(videoFPS / MyDesiredFramesPerSecond) == 0) {
        IplImage *frame = cvCloneImage(cvRetrieveFrame(capture));
        // Do Stuff
    }
    if (frameCounter > duration*fps)
        break; // this is here because the loop never stops on its own
}
How can I get all the frames of a video using openCV on iOS? (opencv 2.3.2)
According to the documentation, you should check the value returned by cvRetrieveFrame(): if a null pointer is returned, you're at the end of the video sequence. Break the loop when that happens, instead of relying on the accuracy of fps * frame count.
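A sketch of the adjusted loop (same legacy C API and variable names as the question):

```cpp
// Sketch: stop when cvRetrieveFrame() returns NULL instead of
// estimating the end from fps * duration.
while (cvGrabFrame(capture)) {
    IplImage *retrieved = cvRetrieveFrame(capture);
    if (!retrieved)
        break; // end of the video sequence
    frameCounter++;
    if (frameCounter % (int)(videoFPS / MyDesiredFramesPerSecond) == 0) {
        IplImage *frame = cvCloneImage(retrieved);
        // Do Stuff
        cvReleaseImage(&frame); // release the clone once done with it
    }
}
```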
So I have a XNA application set up. The camera is in first person mode, and the user can move around using the keyboard and reposition the camera target with the mouse. I have been able to load 3D models fine, and they appear on screen no problem. Whenever I try to draw any primitive (textured or not), it does not show up anywhere on the screen, no matter how I position the camera.
In Initialize(), I have:
quad = new Quad(Vector3.Zero, Vector3.UnitZ, Vector3.Up, 2, 2);
quadVertexDecl = new VertexDeclaration(this.GraphicsDevice, VertexPositionNormalTexture.VertexElements);
In LoadContent(), I have:
quadTexture = Content.Load<Texture2D>(@"Textures\brickWall");
quadEffect = new BasicEffect(this.GraphicsDevice, null);
quadEffect.AmbientLightColor = new Vector3(0.8f, 0.8f, 0.8f);
quadEffect.LightingEnabled = true;
quadEffect.World = Matrix.Identity;
quadEffect.View = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
quadEffect.Projection = this.Projection;
quadEffect.TextureEnabled = true;
quadEffect.Texture = quadTexture;
And in Draw() I have:
this.GraphicsDevice.VertexDeclaration = quadVertexDecl;
quadEffect.Begin();
foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
{
    pass.Begin();
    GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
        PrimitiveType.TriangleList,
        quad.Vertices, 0, 4,
        quad.Indexes, 0, 2);
    pass.End();
}
quadEffect.End();
I think I'm doing something wrong in the quadEffect properties, but I'm not quite sure what.
I can't run this code on the computer here at work, as I don't have Game Studio installed. But for reference, check out the 3D audio sample on the XNA Creators Club website. They have a "QuadDrawer" in that project which demonstrates how to draw a textured quad anywhere in the world. It's a pretty nice solution for what it seems you want to do :-)
http://creators.xna.com/en-US/sample/3daudio