Hi, I have JPEG-compressed data stored in a Uint8Array. I read about textures in WebGL. Every link I saw initializes the texture after loading an image created from the JPEG data (image.src = "some data"; image.onload = load texture). But this is an asynchronous process. The process works fine, but can I use the function compressedTexImage2D(target, level, internalFormat, width, height, border, data), where internalFormat is related to JPEG and data is in compressed JPEG form (the width or height is not a power of 2), so that the whole process is synchronous? Or is there any other method in WebGL that takes JPEG-compressed data directly, without loading an image?
So here is the bad news: as of September 2012, WebGL does not actually support compressedTexImage2D. If you try calling the function it will always generate an INVALID_ENUM error. If you are curious, here is the section of the specification that explains it.
Now the somewhat good news is that you can create a texture from a Uint8Array of JPEG data. I'm not sure how to do this synchronously, but maybe this code will help anyway.
Basically, we have to convert the original Uint8Array data into a base64 string, so we can create a new image with the base64 string as the image source.
So here is the code:
function createTexture(gl, data) {
    // Convert the raw JPEG bytes into a binary string, then base64-encode it.
    // Note: this must be a Uint8Array view; a Uint16Array would pair up the
    // bytes and produce characters btoa can't encode.
    var stringData = String.fromCharCode.apply(null, new Uint8Array(data));
    var encodedData = window.btoa(stringData);
    var dataURI = "data:image/jpeg;base64," + encodedData;
    var texture = gl.createTexture();
    texture.image = new Image();
    texture.image.onload = function () {
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, texture.image);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        // Clamp so non-power-of-2 sizes work in WebGL1.
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
        gl.bindTexture(gl.TEXTURE_2D, null);
    };
    texture.image.src = dataURI;
    return texture;
}
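One caveat with the conversion above: String.fromCharCode.apply passes every byte as a separate function argument, and for a large JPEG this can exceed the engine's argument-count limit and throw a RangeError. A chunked conversion avoids that; this is a sketch, and uint8ToBase64 is a name I've made up:

```javascript
// Convert a Uint8Array to a base64 string in chunks, so we never pass
// too many arguments to String.fromCharCode at once.
function uint8ToBase64(bytes) {
    var CHUNK = 0x8000; // 32K arguments per call stays well under engine limits
    var binary = "";
    for (var i = 0; i < bytes.length; i += CHUNK) {
        binary += String.fromCharCode.apply(null, bytes.subarray(i, i + CHUNK));
    }
    return btoa(binary);
}
```

Inside createTexture you could then build the data URI as "data:image/jpeg;base64," + uint8ToBase64(data).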
I have a demo of the function here. To keep the file small, I'm only using a 24x24-pixel JPEG. In case you are wondering, the function also works for JPEGs with non-power-of-2 widths/heights.
If you want to see the full source code of the demo look here.
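For what it's worth, browsers have since gained createImageBitmap, which decodes a Blob of JPEG bytes directly and avoids the base64/data-URI detour entirely. It is still asynchronous (it returns a Promise), so it doesn't make the process synchronous, but it is a cleaner path. A sketch, assuming createImageBitmap support; createTextureFromJpegBytes is a made-up name:

```javascript
// Decode JPEG bytes via createImageBitmap and upload them as a texture.
// Assumes a browser with createImageBitmap (not available in 2012-era
// browsers the answer above targets).
async function createTextureFromJpegBytes(gl, data) {
    const blob = new Blob([data], { type: "image/jpeg" });
    const bitmap = await createImageBitmap(blob); // async JPEG decode
    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    return texture;
}
```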
When I try to render the image or text using the Canvas 2D context, I get a cross-origin error. Even when I set crossOrigin = 'anonymous' it behaves weirdly. Can anyone help me out? Here is my code.
Uncaught DOMException: Failed to execute 'texImage2D' on 'WebGLRenderingContext': The image element contains cross-origin data, and may not be loaded.
at Image.image.onload (file:///C:/Users/***/WebGl-Integration/texture.html:312:8)
Reference: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Using_textures_in_WebGL
function loadTexture(gl, url) {
    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // Because images have to be downloaded over the internet
    // they might take a moment until they are ready.
    // Until then put a single pixel in the texture so we can
    // use it immediately. When the image has finished downloading
    // we'll update the texture with the contents of the image.
    const level = 0;
    const internalFormat = gl.RGBA;
    const width = 1;
    const height = 1;
    const border = 0;
    const srcFormat = gl.RGBA;
    const srcType = gl.UNSIGNED_BYTE;
    const pixel = new Uint8Array([0, 0, 255, 255]); // opaque blue
    gl.texImage2D(gl.TEXTURE_2D, level, internalFormat,
                  width, height, border, srcFormat, srcType,
                  pixel);
    const image = new Image();
    image.src = "img_the_scream.jpg";
    //image.crossOrigin = "Anonymous";
    image.onload = function() {
        gl.bindTexture(gl.TEXTURE_2D, texture);
        // **getting issue here**
        gl.texImage2D(gl.TEXTURE_2D, level, internalFormat,
                      srcFormat, srcType, image);
        // WebGL1 has different requirements for power of 2 images
        // vs non power of 2 images so check if the image is a
        // power of 2 in both dimensions.
        if (isPowerOf2(image.width) && isPowerOf2(image.height)) {
            // Yes, it's a power of 2. Generate mips.
            gl.generateMipmap(gl.TEXTURE_2D);
        } else {
            // No, it's not a power of 2. Turn off mips and set
            // wrapping to clamp to edge
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        }
    };
    image.src = url;
    return texture;
}
function isPowerOf2(value) {
    return (value & (value - 1)) == 0;
}
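One small note on isPowerOf2: the bitwise test alone also returns true for 0. Image dimensions are always at least 1, so it is harmless here, but if you reuse the helper elsewhere a stricter variant may be safer. A sketch; isStrictPowerOf2 is a made-up name:

```javascript
function isStrictPowerOf2(value) {
    // The extra value > 0 check rejects 0, which (0 & -1) === 0 would accept.
    return value > 0 && (value & (value - 1)) === 0;
}
```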
You need to run a simple web server. You cannot load images into WebGL directly from your hard drive.
Here's one with a GUI.
Here's another
Here's one that runs from the command line
See this and this as well
Suppose we have the following color:
const Scalar TRANSPARENT2 = Scalar(255, 0, 255,0);
which is magenta but fully transparent: alpha = 0 (fully opaque would be 255).
Now I made the following test based on:
http://blogs.msdn.com/b/lucian/archive/2015/12/04/opencv-first-version-up-on-nuget.aspx
WriteableBitmap^ Grabcut::TestTransparent()
{
    Mat res(400, 400, CV_8UC4);
    res.setTo(TRANSPARENT2);
    WriteableBitmap^ wbmp = ref new WriteableBitmap(res.cols, res.rows);
    IBuffer^ buffer = wbmp->PixelBuffer;
    unsigned char* dstPixels;
    ComPtr<IBufferByteAccess> pBufferByteAccess;
    ComPtr<IInspectable> pBuffer((IInspectable*)buffer);
    pBuffer.As(&pBufferByteAccess);
    pBufferByteAccess->Buffer(&dstPixels);
    memcpy(dstPixels, res.data, res.step.buf[1] * res.cols * res.rows);
    return wbmp;
}
The issue I have is that the image created is not fully transparent, it has a bit of alpha:
I understand there is a flaw in the memcpy'd data, but I am not really sure how to solve it. Any idea how to get it to alpha 0?
more details
To check, I tried saving the image so I could read it back and test whether it works. I saw that imwrite contains a snippet about transparency, like in the image, but imwrite is not implemented yet. The transparency method is not working either.
Any light with this snippet?
Thanks.
Finally, I did the conversion in the C# code. First, avoid calling CreateAlphaMat.
Then what I did was use a BitmapEncoder to convert the data:
WriteableBitmap wb = new WriteableBitmap(bitmap.PixelWidth, bitmap.PixelHeight);
using (IRandomAccessStream stream = new InMemoryRandomAccessStream())
{
    BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
    Stream pixelStream = bitmap.PixelBuffer.AsStream();
    byte[] pixels = new byte[pixelStream.Length];
    await pixelStream.ReadAsync(pixels, 0, pixels.Length);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied,
                         (uint)bitmap.PixelWidth, (uint)bitmap.PixelHeight, 96.0, 96.0, pixels);
    await encoder.FlushAsync();
    wb.SetSource(stream);
}
this.MainImage.Source = wb;
where bitmap is the WriteableBitmap from the OpenCV result. And now the image is fully transparent.
NOTE: Do not use MemoryStream and then .AsRandomAccessStream because it won't FlushAsync
I want to increase performance in my WebGL project by setting up VBO double buffering. Although there are plenty of articles on this topic, I failed to find one with a coding example.
I tried the following:
// Initial setup...
var buff1 = gl.createBuffer();
var buff2 = gl.createBuffer();
var buffActive = buff1;
// For each frame, change the vertices data and then ...
gl.bindBuffer(gl.ARRAY_BUFFER, buffActive);
gl.bufferData(gl.ARRAY_BUFFER, vertArray, gl.DYNAMIC_DRAW);
gl.drawArrays(gl.TRIANGLES, start, count);
buffActive = (buffActive === buff2) ? buff1 : buff2;
The calls to gl.bindBuffer and gl.bufferData work fine. However, gl.drawArrays always renders using only the data from buff1. I assumed gl.drawArrays would render using whichever VBO is currently bound to gl.ARRAY_BUFFER, but apparently that's not the case.
Does anyone see what I'm missing?
The issue here is that you have filled both buff1 and buff2 from the same data source, vertArray.
A better thing to do is create two vertex arrays (say, vertArray1 and vertArray2) and
to upload each vertex array into a separate buffer object during the initial setup (and not each frame, because
bufferData() destroys and reinitializes a buffer object's data store, which may be an
expensive operation if doing so requires uploading the data to the GPU).
Here's an example:
// Initial setup
var buff1 = gl.createBuffer();
var buff2 = gl.createBuffer();

// Associate the buffer data once during initial setup.
// Note that we use two vertex arrays, vertArray1 and vertArray2,
// rather than one.
gl.bindBuffer(gl.ARRAY_BUFFER, buff1);
gl.bufferData(gl.ARRAY_BUFFER, vertArray1, gl.DYNAMIC_DRAW);
gl.bindBuffer(gl.ARRAY_BUFFER, buff2);
gl.bufferData(gl.ARRAY_BUFFER, vertArray2, gl.DYNAMIC_DRAW);
var buffActive = buff1;
var vertArrayActive = vertArray1;

// For each frame:
gl.bindBuffer(gl.ARRAY_BUFFER, buffActive);
// No need to re-upload the buffer data.
gl.drawArrays(gl.TRIANGLES, start, count);
buffActive = (buffActive === buff2) ? buff1 : buff2;
vertArrayActive = (vertArrayActive === vertArray2) ? vertArray1 : vertArray2;
If you implement VBO double buffering then you need to ensure calls to gl.vertexAttribPointer and gl.enableVertexAttribArray for vertex attribute data (stored as interleaved data in vertArray) are made every time the VBOs switch, rather than just during setup. Otherwise, your vertex attribute pointers will always be referencing data in the first VBO, and that VBO will now only be updated every other frame as a consequence of double buffering. The calls to gl.vertexAttribPointer and gl.enableVertexAttribArray must now be made after switching VBOs and before gl.drawArrays.
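As a concrete sketch of that last point (drawFromBuffer and positionLoc are my own names; adapt them to however your attributes are set up), the per-frame draw path becomes:

```javascript
// Bind the active VBO, re-point the position attribute at it, then draw.
// Called every frame with whichever of the two buffers is active.
function drawFromBuffer(gl, buffer, positionLoc, start, count) {
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    // These two calls must come AFTER bindBuffer, so the attribute
    // references this buffer rather than whichever was bound last frame.
    gl.enableVertexAttribArray(positionLoc);
    gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, start, count);
}
```

Each frame you would update the inactive buffer's data, call drawFromBuffer(gl, buffActive, positionLoc, start, count), and then swap buffActive.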
On iOS 8 beta 4, iPad 2, I did some checks to see if the device supports GPU particle simulation:
gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) >= 1
!!gl.getExtension('OES_texture_float')
They both say yes, but things don't really work...
I want to know how to detect this kind of bug, so that I can fall back to other things to show.
webgl preview and src:
https://googledrive.com/host/0B2CX8zXCqhScelpNMkpSX1pmRHM
screenshots:
https://drive.google.com/folderview?id=0B2CX8zXCqhScR0d2SExtZm9EWDA
I use this to detect iOS 8 beta 4 and earlier...
Is there any better way to detect this and fall back?
if (navigator.userAgent.match(/(iPod|iPhone|iPad)/)) {
    var usrA = navigator.userAgent;
    var info = usrA.match(/(opera|chrome|safari|firefox|msie|trident(?=\/))\/?\s*(\d+)/i) || [];
    // parseInt, not parseFloat: parseFloat does not take a radix argument
    if (parseInt(info[2], 10) <= 9537) {
        check.gpuSim = false;
    }
}
thx for reading this >v<~
Checking the userAgent for anything whatsoever in HTML/JavaScript is an anti-pattern.
The correct way in this case is to check whether you can render to a floating point texture (something that's usually needed for particle simulations). To test whether you can render to floating point textures, you need to create a framebuffer, attach a floating point texture, and then check whether the framebuffer is complete.
var gl = someCanvasElement.getContext("experimental-webgl");
var ext = gl.getExtension("OES_texture_float");
if (!ext) {
    alert("no OES_texture_float");
    return;
}
Now you can create and render with floating point textures. The next thing to do is to see if you can render to floating point textures.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// width and height are whatever size you want the render target to be
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
var status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if (status != gl.FRAMEBUFFER_COMPLETE) {
    alert("can not render to floating point textures");
    return;
}
Also, if you use a depth or stencil attachment when rendering to that floating point texture you need to attach that as well before checking if it's complete.
I created a little application in WebGL. I have two objects which move, a cube and a sphere.
I modify the objects with shaders; each object has its own shader. I want to update the objects on the display at a given time, and for this I use the drawElements function.
For each object I have a buffer which contains the indices of the faces in the vertex buffer:
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, this.indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STREAM_DRAW);
indexCount = indices.length;
indices is an array which contains the index values of each face (3 values per face; we work with triangles).
So, after this, to draw the triangles of the object I do:
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);
But I get nothing on the screen, and I see this warning:
WebGL: INVALID_OPERATION: drawElements: attribs not setup correctly
What could my error be?
Thanks
I think you missed the code which links the JavaScript vertex buffer to the shader's attributes before drawElements().
e.g.:
gl.bindBuffer(gl.ARRAY_BUFFER, meshVertexPositionBuffer);
// Enable the attribute array and point it at the bound buffer.
gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute,
                       meshVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);

gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, meshIndexBuffer);
gl.drawElements(gl.TRIANGLES, meshIndexBuffer.numberOfItems, gl.UNSIGNED_SHORT, 0);