I am wondering: is there any way to draw Int32/Uint32 into a render target/texture? Also, is it possible to upload an Int32/Uint32 buffer with texImage2D without converting it to Float32? Does WebGL 1.0 have any such extension? Is WebGL 2.0 going to have such a feature?
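For reference, WebGL 2.0 does expose integer formats directly, so a Uint32Array can be uploaded with texImage2D and the resulting texture can be attached to a framebuffer; I am not aware of an equivalent extension for WebGL 1.0. A minimal sketch of the WebGL 2.0 path, assuming a WebGL2RenderingContext named gl and predefined width/height:
// Sketch: upload a Uint32Array as an unsigned-integer texture (WebGL 2.0 only).
var data = new Uint32Array(width * height * 4); // RGBA, one 32-bit uint per channel

var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// Integer textures cannot be filtered; NEAREST is required.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA32UI, width, height, 0,
              gl.RGBA_INTEGER, gl.UNSIGNED_INT, data);

// RGBA32UI is color-renderable in WebGL 2, so the texture can also be used as a render target
// (the fragment shader then needs an unsigned-integer output in GLSL ES 3.00).
var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);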
I want to add NVIDIA Reflex to my Direct2D application. I have an ID2D1Device, but NvAPI_D3D_SetSleepMode requires a Direct3D device.
I know that Direct2D is built on top of Direct3D, so I think I should be able to acquire the Direct3D device from the Direct2D device, but I can't find a way to do it.
How do I get the Direct3D device from a Direct2D device? If I have misunderstood the concepts, please point me to the right ones. Thanks.
When you created the ID2D1Device, you had to start with a Direct3D device. Use that one.
// Obtain the underlying DXGI device of the Direct3D 11.1 device.
DX::ThrowIfFailed(
    m_d3dDevice.As(&dxgiDevice)
);

// Obtain the Direct2D device for 2-D rendering.
DX::ThrowIfFailed(
    m_d2dFactory->CreateDevice(dxgiDevice.Get(), &m_d2dDevice)
);

// And get its corresponding device context object.
DX::ThrowIfFailed(
    m_d2dDevice->CreateDeviceContext(
        D2D1_DEVICE_CONTEXT_OPTIONS_NONE,
        &m_d2dContext
    )
);
I found the gist below, which is simple and efficient: it uses func capturer(_ capturer: RTCVideoCapturer, didCapture frame: RTCVideoFrame) of RTCVideoCapturerDelegate. You get an RTCVideoFrame and then convert it to a CVPixelBuffer to modify.
https://gist.github.com/lyokato/d041f16b94c84753b5e877211874c6fc
However, I found that Chromium says nativeHandle, which was used to get the pixel buffer, is no longer available (link below). I tried frame.buffer.pixelBuffer..., but looking at framework > Headers > RTCVideoFrameBuffer.h, I found that CVPixelBuffer is gone from there as well!
https://codereview.webrtc.org/2990253002
Is there a good way to convert an RTCVideoFrame to a CVPixelBuffer?
Or do we have a better way to modify the captured video from RTCCameraVideoCapturer?
The link below suggests modifying the SDK directly, but hopefully we can achieve this in Xcode.
How to modify (add filters to) the camera stream that WebRTC is sending to other peers/server
Can you specify what your expectation is? You can get the pixel buffer from an RTCVideoFrame easily, but I feel there can be a better solution: if you want to filter the video buffer before it is sent over WebRTC, you should work with RTCVideoSource.
You can get the buffer as shown below:
RTCCVPixelBuffer *buffer = (RTCCVPixelBuffer *)frame.buffer;
CVPixelBufferRef imageBuffer = buffer.pixelBuffer;
(with the latest SDK, and only for the local camera video buffer)
But in the sample I can see that the filter will not work for remote streams.
I have attached a screenshot; this is how you can check the preview as well.
I'm trying to swap the fragment shader used in a program. The fragment shaders all have the same variables, just different calculations; I am trying to provide alternative shaders for lower-end hardware.
I end up getting single-color output (instead of a texture). Does anyone have an idea what I could be doing wrong? I know the shaders are being used, because the color changes accordingly.
//if I don't do this:
//WebGL: INVALID_OPERATION: attachShader: shader attachment already has shader
gl.detachShader(program, attachedFS);
//select a random shader, all using the same parameters
attachedFS = fragmentShaders[~~(Math.random() * fragmentShaders.length)];
//attach the new shader
gl.attachShader(program, attachedFS);
//if I don't do this nothing happens
gl.linkProgram(program);
//if I don't add this line:
//globject.js:313 WebGL: INVALID_OPERATION: uniform2f:
//location not for current program
updateLocations();
I am assuming you have called gl.compileShader(fragmentShader); on the new shader.
Have you tried the code in a different browser to see if you get the same behavior? (It could be specific to a particular implementation of the standard.)
Have you tried deleting the fragment shader (gl.deleteShader(attachedFS);) right after detaching it? The previous shader may still be referenced in memory.
If that does not let you move forward, you may have to detach both shaders (vertex and fragment) and reattach them, or even recreate the program from scratch.
I found the issue, after trying about everything else without result. It also explains why I was seeing the shader change but only getting a flat color: I was not updating some of the attributes.
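For anyone hitting the same thing, here is a minimal sketch of what has to be refreshed after every re-link; the names (u_resolution, a_position, positionBuffer, canvas) are placeholders, not the ones from the original program:
gl.linkProgram(program);
gl.useProgram(program);

// Uniform locations are invalidated by the re-link and must be re-queried.
var resolutionLoc = gl.getUniformLocation(program, "u_resolution");
gl.uniform2f(resolutionLoc, canvas.width, canvas.height);

// Attribute locations can change as well: re-query, re-point and re-enable them,
// otherwise the vertex data is not fed to the new program and you get a flat color.
var positionLoc = gl.getAttribLocation(program, "a_position");
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(positionLoc);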
I just recently started getting these messages, and was wondering if anyone has seen them or knows what may be causing them. I'm using Three.js with Chrome version 21.0.1180.57 on MacOS. I don't get these messages with Safari or Firefox.
PERFORMANCE WARNING: Attribute 0 is disabled. This has signficant performance penalty
WebGL: too many errors, no more errors will be reported to the console for this context.
Same message on Firefox is : "Error: WebGL: Drawing without vertex attrib 0 array enabled forces the browser to do expensive emulation work when running on desktop OpenGL platforms, for example on Mac. It is preferable to always draw with vertex attrib 0 array enabled, by using bindAttribLocation to bind some always-used attribute to location 0."
This is not only a performance drawback, but will also result in bad output.
PROBLEM: This message occurs if JS tries to run a WebGL shader that expects color information in gl_Color on a mesh that does not provide a color array.
SOLUTION: Use a WebGL shader with a constant color that does not access gl_Color, or provide a color array in the mesh to be shaded.
If you are using lightgl.js from Evan Wallace, try adding the option colors: true in the new GL.Mesh statement and provide a proper mesh.colors array of the same size as your vertices array. Or try this shader:
blackShader = new GL.Shader(
'void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }',
'void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); }'
);
Sorry, I have never used Three.js, but the problem should be similar: provide colors to your mesh before shading.
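As a side note, the bindAttribLocation advice from the Firefox message above is easy to apply if you are driving WebGL directly rather than through a framework. A minimal sketch, where "position" and positionBuffer are placeholder names:
// Bind an always-used attribute to location 0 before linking, so attribute 0
// always has an enabled vertex array when drawing.
gl.bindAttribLocation(program, 0, "position");
gl.linkProgram(program);
gl.useProgram(program);

var positionLoc = gl.getAttribLocation(program, "position"); // now guaranteed to be 0
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(positionLoc);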
Looks like a Chrome bug:
http://code.google.com/p/chromium-os/issues/detail?id=32528
The situation is this:
I've written a simple MovieClip replacement that converts an existing imported MovieClip to a sequence of BitmapData objects. This removes the need for Flash to render the MovieClip's vector data on each frame.
But BitmapData has a huge memory footprint. I've tried converting the BitmapData to a ByteArray and using the compress() method. This results in a significantly smaller memory footprint, but it has proven impractical: for each redraw, I tried uncompress()'ing the ByteArray, using setPixels() to blit the data to the screen, and then re-compress()'ing the frame. This works, but is terribly slow.
So I was wondering if anybody else has an approach I could try. In Flash, is it possible to compress bitmap data in memory and quickly blit it to the screen?
I wonder how native animated GIFs work in Flash. Does it uncompress them to BitmapData behind the scenes, or is frame decompression done on the fly?
Perhaps there is an Alchemy project that attempts to blit compressed images?
Thanks for any advice you can offer :)
thienhaflash's response is good but has aged a year, and since then Flash Player and the AIR runtime have expanded their capabilities. Today I stumbled on this little tidbit from Adobe's AS3 guide: as of Player 11.3 there are native image compression techniques available. Here's a snippet:
// Compress a BitmapData object as a JPEG file.
var bitmapData:BitmapData = new BitmapData(640,480,false,0x00FF00);
var byteArray:ByteArray = new ByteArray();
bitmapData.encode(new Rectangle(0,0,640,480), new flash.display.JPEGEncoderOptions(), byteArray);
Not sure about the practicality for blitting but it's nice that it can be done natively.
To conserve memory, you need to think twice before converting a MovieClip to a bitmap sequence. Is it really needed? Can you break things down? Several elements (like the background) are static (or just move around), so why not cache a bitmap for each element instead of one big bitmap sequence?
I usually use an AnimatedBitmap (the name for the bitmap-sequence alternative to a MovieClip) only for small animated icons and other computation-heavy effects (like fire/smoke...). Just break things down as much as you can!
As far as I know, there is no way to compress the memory used by a BitmapData held in memory, and nothing related to Alchemy could help reduce memory use in this case.
Animated GIFs won't work in Flash natively; you will need a library for that. Search for the AnimatedGIF AS3 library from bytearray.com; the library just reads the GIF file as a raw ByteArray and converts it to an animated bitmap, much like what you've done.
This is an old question, but there is recent info on this: Jackson Dunstan has experimented with BitmapData, and it turns out that BitmapData obtained from compressed sources will "deflate" after some time unused.
Here are the articles: http://jacksondunstan.com/articles/2112, and the two referred to at the beginning of it.
So you could absolutely do something like:
var byteArray:ByteArray = new ByteArray();
myBitmapData.encode(new Rectangle(0, 0, 640, 480), new flash.display.JPEGEncoderOptions(), byteArray);

var loader:Loader = new Loader();
// The COMPLETE event for loadBytes() fires on contentLoaderInfo, not on the Loader itself.
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, function(_e:Event):void {
    if (loader.content is Bitmap) {
        myBitmapData.dispose();
        myBitmapData = Bitmap(loader.content).bitmapData;
    }
});
loader.loadBytes(byteArray);
I'm not sure it would work exactly as written, and you definitely want to handle your memory better, but now myBitmapData will be uncompressed when you try to read from it, and then re-compressed after it has gone unused for about ten seconds.