How to render 2D text or a 2D image in WebGL?

When I try to render an image or text using a Canvas 2D context, I get a cross-origin error. Even if I set crossOrigin = 'anonymous' it behaves weirdly. Can anyone help me out? Here is my code.
Uncaught DOMException: Failed to execute 'texImage2D' on 'WebGLRenderingContext': The image element contains cross-origin data, and may not be loaded.
at Image.image.onload (file:///C:/Users/***/WebGl-Integration/texture.html:312:8)
Reference: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Using_textures_in_WebGL
function loadTexture(gl, url) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);

  // Because images have to be downloaded over the internet
  // they might take a moment until they are ready.
  // Until then put a single pixel in the texture so we can
  // use it immediately. When the image has finished downloading
  // we'll update the texture with the contents of the image.
  const level = 0;
  const internalFormat = gl.RGBA;
  const width = 1;
  const height = 1;
  const border = 0;
  const srcFormat = gl.RGBA;
  const srcType = gl.UNSIGNED_BYTE;
  const pixel = new Uint8Array([0, 0, 255, 255]); // opaque blue
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat,
                width, height, border, srcFormat, srcType,
                pixel);

  const image = new Image();
  // crossOrigin must be set *before* src, and only applies to http(s) URLs,
  // not to pages opened from file://
  //image.crossOrigin = "anonymous";
  image.onload = function() {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // **getting issue here** -- this texImage2D call is what throws the DOMException
    gl.texImage2D(gl.TEXTURE_2D, level, internalFormat,
                  srcFormat, srcType, image);
    // WebGL1 has different requirements for power of 2 images
    // vs non power of 2 images so check if the image is a
    // power of 2 in both dimensions.
    if (isPowerOf2(image.width) && isPowerOf2(image.height)) {
      // Yes, it's a power of 2. Generate mips.
      gl.generateMipmap(gl.TEXTURE_2D);
    } else {
      // No, it's not a power of 2. Turn off mips and set
      // wrapping to clamp to edge.
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    }
  };
  // was: image.src = "img_the_scream.jpg" assigned here *and* again below;
  // the duplicate assignment started a second load and has been removed
  image.src = url;
  return texture;
}

function isPowerOf2(value) {
  return (value & (value - 1)) == 0;
}

You need to run a simple web server; you cannot load images into WebGL directly from your hard drive, because a page opened from a file:// URL is treated as cross-origin, which is also why setting crossOrigin = 'anonymous' doesn't help there. There are plenty of simple servers, some with a GUI and some that run from the command line; if you have Python installed, running python -m http.server in your project folder is one well-known option.
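If you have Node.js installed instead, a minimal static server is only a few lines. This is a sketch for local testing only (no path sanitization, and the MIME-type list is a stand-in), not production code:
// serve.js -- run with: node serve.js, then open http://localhost:8080/texture.html
const http = require('http');
const fs = require('fs');
const path = require('path');

const mimeTypes = {
  '.html': 'text/html',
  '.js': 'text/javascript',
  '.jpg': 'image/jpeg',
  '.png': 'image/png',
};

http.createServer((req, res) => {
  const filePath = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
  fs.readFile(filePath, (err, data) => {
    if (err) {
      res.writeHead(404);
      res.end('not found');
      return;
    }
    res.writeHead(200, { 'Content-Type': mimeTypes[path.extname(filePath)] || 'application/octet-stream' });
    res.end(data);
  });
}).listen(8080, () => console.log('serving on http://localhost:8080'));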

Why do sprites render over objects?

I want to render a cube on top of a picture like in this tutorial. The problem is that it renders only the picture and the cube doesn't render. Can you help me? Thank you.
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_background.Get(), m_fullscreenRect);
//
// Clear the back buffer
//
g_pImmediateContext->ClearRenderTargetView( g_pRenderTargetView, Colors::MidnightBlue );
g_pImmediateContext->ClearDepthStencilView(g_pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
g_pImmediateContext->OMSetRenderTargets(1, &g_pRenderTargetView, g_pDepthStencilView);
//
// Update variables
//
ConstantBuffer cb;
cb.mWorld = XMMatrixTranspose( g_World );
cb.mView = XMMatrixTranspose( g_View );
cb.mProjection = XMMatrixTranspose( g_Projection );
g_pImmediateContext->UpdateSubresource( g_pConstantBuffer, 0, nullptr, &cb, 0, 0 );
//
// Renders a triangle
//
g_pImmediateContext->VSSetShader( g_pVertexShader, nullptr, 0 );
g_pImmediateContext->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );
g_pImmediateContext->PSSetShader( g_pPixelShader, nullptr, 0 );
g_pImmediateContext->DrawIndexed( 36, 0, 0 ); // 36 vertices needed for 12 triangles in a triangle list
//
// Present our back buffer to our front buffer
//
m_spriteBatch->End();
g_pSwapChain->Present( 0, 0 );
SpriteBatch batches up draws for performance, so it's likely being drawn after the cube draw. If you want to make sure the sprite background draws first, then you need to call End before you submit your cube. You also need to call Begin after you set up the render target:
// Clear
g_pImmediateContext->ClearDepthStencilView(g_pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
g_pImmediateContext->OMSetRenderTargets(1, &g_pRenderTargetView, g_pDepthStencilView);
// Draw background image
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_background.Get(), m_fullscreenRect);
m_spriteBatch->End();
// Draw objects
context->OMSetBlendState(…);
context->OMSetDepthStencilState(…);
context->IASetInputLayout(…);
context->IASetVertexBuffers(…);
context->IASetIndexBuffer(…);
context->IASetPrimitiveTopology(…);
You can omit the ClearRenderTargetView if the m_background texture covers the whole screen.
For more on how SpriteBatch draw order and batching works, see the wiki.
Based on this answer by @ChuckWalbourn I fixed the problem.
g_pImmediateContext->ClearRenderTargetView( g_pRenderTargetView, Colors::MidnightBlue );
g_pImmediateContext->ClearDepthStencilView(g_pDepthStencilView, D3D11_CLEAR_DEPTH |
D3D11_CLEAR_STENCIL, 1.0f, 0);
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_background.Get(), m_fullscreenRect);
m_spriteBatch->End();
states = std::make_unique<CommonStates>(g_pd3dDevice);
g_pImmediateContext->OMSetBlendState(states->Opaque(), Colors::Black, 0xFFFFFFFF);
g_pImmediateContext->OMSetDepthStencilState(states->DepthDefault(), 0);
// Set the input layout
g_pImmediateContext->IASetInputLayout(g_pVertexLayout);
UINT stride = sizeof(SimpleVertex);
UINT offset = 0;
g_pImmediateContext->IASetVertexBuffers(0, 1, &g_pVertexBuffer, &stride, &offset);
// Set index buffer
g_pImmediateContext->IASetIndexBuffer(g_pIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
// Set primitive topology
g_pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
// Draw objects

WebGL: async operations?

I'd like to know if there are any async calls for WebGL that one could take advantage of?
I have looked into Spec v1 and Spec v2; they don't mention anything. In v2, there is a WebGL query mechanism, which I don't think is what I'm looking for.
A search on the web didn't come up with anything definitive. There is this example, and it's not clear how the sync and async versions differ: http://toji.github.io/shader-perf/
Ultimately I'd like to be able to do some or all of these asynchronously:
readPixels
texSubImage2D and texImage2D
Shader compilation
program linking
draw???
There is a glFinish operation, and the documentation for it says it "does not return until the effects of all previously called GL commands are complete." To me this means that there are asynchronous operations which can be waited on by calling finish()?
And some posts on the web suggest that calling getError() also forces some synchronicity and is not a desirable thing to do after every call.
It depends on your definition of async.
In Chrome (Firefox might also do this now? not sure), all GPU code runs in a separate process from JavaScript. That means your commands are running asynchronously. Even OpenGL itself is designed to be asynchronous: the functions (WebGL/OpenGL) insert commands into a command buffer, and those are executed by some other thread/process. You tell OpenGL "hey, I have new commands for you to execute!" by calling gl.flush, and it executes those commands asynchronously. If you don't call gl.flush it will be called for you periodically when too many commands have been issued. It will also be called when the current JavaScript event exits, assuming you called any rendering command to the canvas (gl.drawXXX, gl.clear).
In this sense everything about WebGL is async. If you don't query something (gl.getXXX, gl.readXXX), then stuff is being handled (drawn) out of sync with your JavaScript. WebGL is, after all, giving you access to a GPU that runs separately from your CPU.
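For example (a minimal sketch): a draw call returns immediately, while a read-back blocks your JavaScript until the GPU catches up:
gl.drawArrays(gl.TRIANGLES, 0, 3);  // returns immediately; the command is only queued
const pixels = new Uint8Array(4);
gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixels);  // blocks until prior commands affecting the result have executed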
Knowing this, one way to take advantage of it in Chrome is to compile shaders asynchronously by submitting them all up front:
// runnable version of the pseudocode (vs and fs are GLSL source strings)
const program = gl.createProgram();
for (const [type, src] of [[gl.VERTEX_SHADER, vs], [gl.FRAGMENT_SHADER, fs]]) {
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  gl.attachShader(program, s);
}
gl.linkProgram(program);
gl.flush();  // kick off the work in the GPU process; don't query any status yet
The GPU process will now be compiling your shaders. If you wait, say, 250 ms before you first ask whether it succeeded and query locations, and compiling and linking took less than 250 ms, then it all happened asynchronously.
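A sketch of that deferred check (setupProgram is a hypothetical helper standing in for the submission steps above):
const program = setupProgram(gl);  // hypothetical: does the createShader/compile/link steps above
gl.flush();                        // let the GPU process start working
setTimeout(() => {
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(gl.getProgramInfoLog(program));
  }
  // only now query attribute and uniform locations
}, 250);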
In WebGL2 there is at least one more clearly async operation: occlusion queries, with which WebGL2 can tell you whether any pixels were drawn for a group of draw calls. If none were drawn, your draws were occluded. To get the answer you periodically poll to see if it's ready. Typically you check the next frame; in fact the WebGL spec requires the answer to not be available until the next frame.
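A sketch of that polling pattern:
const query = gl.createQuery();
gl.beginQuery(gl.ANY_SAMPLES_PASSED_CONSERVATIVE, query);
// ... issue the group of draw calls being tested ...
gl.endQuery(gl.ANY_SAMPLES_PASSED_CONSERVATIVE);

function checkResult() {
  if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE)) {
    const anySamplesPassed = gl.getQueryParameter(query, gl.QUERY_RESULT);
    console.log(anySamplesPassed ? 'visible' : 'occluded');
  } else {
    requestAnimationFrame(checkResult);  // not ready yet -- poll again next frame
  }
}
requestAnimationFrame(checkResult);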
Otherwise, at the moment (August 2018), there are no explicitly async APIs.
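(A later footnote: the KHR_parallel_shader_compile extension, where available, adds a pollable COMPLETION_STATUS_KHR so you can check compile/link progress without blocking. A sketch:)
const ext = gl.getExtension('KHR_parallel_shader_compile');
function pollProgram(program, onDone) {
  // with the extension, COMPLETION_STATUS_KHR is a non-blocking readiness check
  if (!ext || gl.getProgramParameter(program, ext.COMPLETION_STATUS_KHR)) {
    onDone(gl.getProgramParameter(program, gl.LINK_STATUS));
  } else {
    requestAnimationFrame(() => pollProgram(program, onDone));
  }
}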
Update
HankMoody brought up in comments that texImage2D is sync. Again, it depends on your definition of async. It takes time to add commands and their data. A command like gl.enable(gl.DEPTH_TEST) only has to add 2-8 bytes. A command like gl.texImage2D(..., width = 1024, height = 1024, RGBA, UNSIGNED_BYTE) has to add 4 MB! Once that 4 MB is uploaded the rest is async, but the uploading takes time. That's the same for both commands; it's just that adding 2-8 bytes takes a lot less time than adding 4 MB.
To be more clear: after that 4 MB is uploaded, many other things happen asynchronously. The driver is called with the 4 MB. The driver copies that 4 MB. The driver schedules that 4 MB to be used sometime later, as it can't upload the data immediately if the texture is already in use. Either that, or it does upload it immediately, just to a new area, and then swaps what the texture is pointing to just before a draw call that actually uses the new data. Other drivers just copy the data, store it, and wait until the texture is used in a draw call to actually update the texture. This is because texImage2D has crazy semantics where you can upload different-size mips in any order, so the driver can't know what's actually needed in GPU memory until draw time, since it has no idea in what order you're going to call texImage2D. Everything mentioned in this paragraph happens asynchronously.
But that does bring up some more info.
gl.texImage2D and related commands have to do a TON of work. For one, they have to honor UNPACK_FLIP_Y_WEBGL and UNPACK_PREMULTIPLY_ALPHA_WEBGL, so they may need to make a copy of multiple megs of data to flip it or premultiply it. Second, if you pass them a video, canvas, or image, they may have to do heavy conversions or even reparse the image from source, especially in light of UNPACK_COLORSPACE_CONVERSION_WEBGL. Whether this happens in some async-like way or not is up to the browser. Since you don't have direct access to the image/video/canvas, it would be possible for the browser to do all of this async, but one way or another all that work has to happen.
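These are the unpack settings in question; each non-default value can force the browser to copy or convert the data before the upload (a minimal sketch):
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);                    // forces a flipped copy
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);         // forces a premultiplied copy
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);  // may force re-decoding the source image
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);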
To make much of that work async, the ImageBitmap API was added. Like most Web APIs it's under-specified, but the idea is you first do a fetch (which is async). You then request to create an ImageBitmap, giving it options for color conversion, flipping, and pre-multiplied alpha; this also happens async. You then pass the result to gl.texImage2D, the hope being that the browser was able to do all the heavy parts before this last step.
Example:
// note: mode: 'cors' is because we are loading
// from a different domain
async function main() {
  const response = await fetch('https://i.imgur.com/TSiyiJv.jpg', {mode: 'cors'});
  if (!response.ok) {
    return console.error('response not ok?');
  }
  const blob = await response.blob();
  const bitmap = await createImageBitmap(blob, {
    premultiplyAlpha: 'none',
    colorSpaceConversion: 'none',
  });

  const gl = document.querySelector("canvas").getContext("webgl");
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  {
    const level = 0;
    const internalFormat = gl.RGBA;
    const format = gl.RGBA;
    const type = gl.UNSIGNED_BYTE;
    gl.texImage2D(gl.TEXTURE_2D, level, internalFormat,
                  format, type, bitmap);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  }

  const vs = `
  uniform mat4 u_worldViewProjection;
  attribute vec4 position;
  attribute vec2 texcoord;
  varying vec2 v_texCoord;
  void main() {
    v_texCoord = texcoord;
    gl_Position = u_worldViewProjection * position;
  }
  `;
  const fs = `
  precision mediump float;
  varying vec2 v_texCoord;
  uniform sampler2D u_tex;
  void main() {
    gl_FragColor = texture2D(u_tex, v_texCoord);
  }
  `;

  const m4 = twgl.m4;
  const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
  const bufferInfo = twgl.primitives.createCubeBufferInfo(gl, 2);
  const uniforms = {
    u_tex: tex,
  };

  function render(time) {
    time *= 0.001;
    twgl.resizeCanvasToDisplaySize(gl.canvas);
    gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    gl.enable(gl.DEPTH_TEST);

    const fov = 30 * Math.PI / 180;
    const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
    const zNear = 0.5;
    const zFar = 10;
    const projection = m4.perspective(fov, aspect, zNear, zFar);
    const eye = [1, 4, -6];
    const target = [0, 0, 0];
    const up = [0, 1, 0];

    const camera = m4.lookAt(eye, target, up);
    const view = m4.inverse(camera);
    const viewProjection = m4.multiply(projection, view);
    const world = m4.rotationY(time);
    uniforms.u_worldViewProjection = m4.multiply(viewProjection, world);

    gl.useProgram(programInfo.program);
    twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
    twgl.setUniforms(programInfo, uniforms);
    gl.drawElements(gl.TRIANGLES, bufferInfo.numElements, gl.UNSIGNED_SHORT, 0);

    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);
}
main();
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Unfortunately this only works in Chrome as of August 2018. The Firefox bug is here. I don't know about other browsers.

How to detect a bug on iOS 8 beta 4 WebGL vertex shader sampler2D

On iOS 8 beta 4 (iPad 2), I did some checks to see if the device supports GPU particle simulation:
gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) >= 1
!!gl.getExtension( 'OES_texture_float' )
They both say yes, but things don't really work...
I want to know how to detect this kind of bug, so that I can fall back to something else to show.
WebGL preview and src:
https://googledrive.com/host/0B2CX8zXCqhScelpNMkpSX1pmRHM
Screenshots:
https://drive.google.com/folderview?id=0B2CX8zXCqhScR0d2SExtZm9EWDA
I use this to detect iOS 8 beta 4 and before...
Is there any better way to detect and fall back?
if (navigator.userAgent.match(/(iPod|iPhone|iPad)/)) {
  var usrA = navigator.userAgent;
  var info = usrA.match(/(opera|chrome|safari|firefox|msie|trident(?=\/))\/?\s*(\d+)/i) || [];
  // note: parseInt, not parseFloat -- parseFloat ignores the radix argument
  if (parseInt(info[2], 10) <= 9537) {
    check.gpuSim = false;
  }
}
thx for reading this >v<~
Checking the userAgent for anything whatsoever in HTML/JavaScript is an anti-pattern.
The correct way for this case is to check whether you can render to a floating point texture (something that's usually needed for particle simulations). To test that, create a framebuffer, attach a floating point texture, then check if it's complete.
var gl = someCanvasElement.getContext("experimental-webgl");
var ext = gl.getExtension("OES_texture_float");
if (!ext) {
  alert("no OES_texture_float");
  return;
}
Now you can create and render with floating point textures. The next thing to do is see if you can render to them.
var width = 16;   // any small size will do for this test
var height = 16;
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT, null);
// note: texParameteri takes the target as its first argument
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);  // (bindFrameBuffer was a typo)
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
var status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if (status != gl.FRAMEBUFFER_COMPLETE) {
  alert("can not render to floating point textures");
  return;
}
Also, if you use a depth or stencil attachment when rendering to that floating point texture, you need to attach it as well before checking if it's complete.
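For example (a sketch reusing width and height from the test above), attach a depth renderbuffer and then re-check:
var depthBuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depthBuffer);
// re-check completeness with the depth attachment in place
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) != gl.FRAMEBUFFER_COMPLETE) {
  alert("can not render to floating point textures with a depth buffer");
  return;
}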

DirectX: How to make a 2D image constantly pan / scroll in place

I am trying to find an efficient way to pan a 2D image in place using DirectX 9.
I've attached a picture below that hopefully explains what I want to do. Basically, I want to scroll the tu and tv coordinates of all the quad's vertices across the texture to produce a "scrolling in place" effect for a 2D texture.
The first image below represents my loaded texture.
The second image is the texture with the tu, tv coordinates of the four vertices in each corner, showing the standard rendered image.
The third image illustrates what I want to happen; I want to move the vertices in such a way that the box that is rendered straddles the end of the image and wraps back around, so that the texture is rendered as shown, with the two halves of the cloud separated.
The fourth image shows my temporary (wasteful) solution; I simply doubled the image and pan across until I reach the far right edge, at which point I reset the vertices' tu and tv so that the box being rendered is back on the far right.
Is there a legitimate way to do this without breaking everything into two separate quads?
I've added details of my set up and my render code below, if that helps clarify a path to a solution with my current design.
I have a function that sets up DirectX for 2D render as follows. I've added wrap properties to texture stage 0 as recommended:
VOID SetupDirectXFor2DRender()
{
    pd3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_POINT );
    pd3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_POINT );
    pd3dDevice->SetSamplerState( 0, D3DSAMP_MIPFILTER, D3DTEXF_POINT );

    // Set for wrapping textures to enable panning sprite render
    pd3dDevice->SetSamplerState( 0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP );
    pd3dDevice->SetSamplerState( 0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP );

    pd3dDevice->SetRenderState( D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL );
    pd3dDevice->SetRenderState( D3DRS_ALPHAREF, 0 );
    pd3dDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, true );
    pd3dDevice->SetRenderState( D3DRS_ALPHATESTENABLE, false );
    pd3dDevice->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
    pd3dDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );

    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_MODULATE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAOP, D3DTOP_MODULATE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );

    return;
}
On each frame, I render things as follows:
VOID RenderAllEntities()
{
    HRESULT hResult;

    // Void pointer for DirectX buffer locking
    VOID* pVoid;

    hResult = pd3dDevice->Clear( 0,
                                 NULL,
                                 D3DCLEAR_TARGET,
                                 0x0,
                                 1.0f,
                                 0 );
    hResult = pd3dDevice->BeginScene();

    // Do rendering on the back buffer here
    hResult = pd3dDevice->SetFVF( CUSTOMFVF );
    hResult = pd3dDevice->SetStreamSource( 0, pVertexBuffer, 0, sizeof(CUSTOM_VERTEX) );

    for ( std::vector<RenderContext>::iterator renderContextIndex = queuedContexts.begin(); renderContextIndex != queuedContexts.end(); ++renderContextIndex )
    {
        // Render each sprite
        for ( UINT uiIndex = 0; uiIndex < (*renderContextIndex).uiNumSprites; ++uiIndex )
        {
            // Lock the vertex buffer into memory
            hResult = pVertexBuffer->Lock( 0, 0, &pVoid, 0 );
            // Copy our vertex buffer to memory
            ::memcpy( pVoid, &renderContextIndex->vertexLists[uiIndex], sizeof(vertexList) );
            // Unlock buffer
            hResult = pVertexBuffer->Unlock();

            hResult = pd3dDevice->SetTexture( 0, (*renderContextIndex).textures[uiIndex]->GetTexture() );
            hResult = pd3dDevice->DrawPrimitive( D3DPT_TRIANGLELIST, 0, 6 );
        }
    }

    // Complete and present the rendered scene
    hResult = pd3dDevice->EndScene();
    hResult = pd3dDevice->Present( NULL, NULL, NULL, NULL );

    return;
}
To test SetTransform, I tried adding the following (sloppy but temporary) code block inside the render code before the call to DrawPrimitive:
{
    static FLOAT di = 0.0f;
    static FLOAT dy = 0.0f;
    di += 0.03f;
    dy += 0.03f;

    // Build and set translation matrix
    D3DXMATRIX ret;
    D3DXMatrixIdentity(&ret);
    ret(3, 0) = di;
    ret(3, 1) = dy;
    //ret(3, 2) = dz;
    hResult = pd3dDevice->SetTransform( D3DTS_TEXTURE0, &ret );
}
This does not make any of my rendered sprites pan about.
I've been working through DirectX tutorials and reading the MS documentation to catch up on things, but there are definitely holes in my knowledge, so I hope I'm not doing anything completely brain-dead.
Any help super appreciated.
Thanks!
This should be quite easy to do with one quad.
Assuming that you're using DX9 with the fixed-function pipeline, you can translate your texture with IDirect3DDevice9::SetTransform (doc) using the proper D3DTRANSFORMSTATETYPE (doc) and a 2D translation matrix. You must ensure that your sampler states D3DSAMP_ADDRESSU and D3DSAMP_ADDRESSV (doc) are set to D3DTADDRESS_WRAP (doc). This tiles the texture virtually, so that negative UV values, or values greater than 1, map onto an infinite repetition of your texture. Note that in the fixed-function pipeline the texture matrix is ignored unless you also set D3DTSS_TEXTURETRANSFORMFLAGS to D3DTTFF_COUNT2, and for 2D texture coordinates the translation belongs in the third row of the matrix (_31/_32), not the fourth row used in the test code above; that is likely why the SetTransform experiment had no visible effect.
If you're using shaders or another version of DirectX, you can translate the texture coordinates yourself in the shader or manipulate the UV values of your vertices.
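For the shader route, here is what that looks like in WebGL/GLSL terms (a sketch only; the HLSL equivalent is analogous, and u_uvOffset is a made-up uniform you advance a little each frame):
const fs = `
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_tex;
uniform vec2 u_uvOffset;  // advanced each frame to scroll
void main() {
  // with TEXTURE_WRAP_S/T set to REPEAT, coordinates outside 0..1 wrap around,
  // so adding an offset pans the image in place using a single quad
  gl_FragColor = texture2D(u_tex, v_texCoord + u_uvOffset);
}
`;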

Loading raw JPEG data into a WebGL texture synchronously

Hi, I have JPEG-compressed data stored in a Uint8Array. I have read about textures in WebGL. Every link I saw initializes the texture after loading an image (created from the JPEG data: image.src = "some data", then image.onload loads the texture), but this is an asynchronous process. That process works fine, but can I use the function compressedTexImage2D(target, level, internalFormat, width, height, border, data) with an internal format related to JPEG and data in compressed JPEG form (width or height not a power of 2), so that the whole process is synchronous? Or is there any other method in WebGL that takes JPEG-compressed data directly, without loading an image?
So here is the bad news: currently, as of September 2012, WebGL does not actually support compressedTexImage2D. If you try calling the function it will always return an INVALID_ENUM error. If you are curious, here is the section of the specification that explains it. (Compressed-texture support has since arrived via extensions, but those cover GPU formats such as S3TC and ETC, not JPEG.)
Now the somewhat good news: you can create a texture from a Uint8Array of JPEG data. I'm not sure how to do this synchronously, but maybe this code will help anyway.
Basically we have to convert the original Uint8Array data into a base64 string, so we can create a new image with the base64 string as the image source.
So here is the code:
function createTexture(gl, data) {
  // note: the view must be a Uint8Array (one byte per char code);
  // a Uint16Array view would pair up bytes and corrupt the JPEG
  var stringData = String.fromCharCode.apply(null, new Uint8Array(data));
  var encodedData = window.btoa(stringData);
  var dataURI = "data:image/jpeg;base64," + encodedData;

  var texture = gl.createTexture();
  texture.image = new Image();
  texture.image.onload = function () {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, texture.image);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.bindTexture(gl.TEXTURE_2D, null);
  };
  texture.image.src = dataURI;
  return texture;
}
I have a demo of the function here. To keep the file small I'm only using a 24x24 pixel JPEG. Just in case you are wondering, the function also works for JPEGs with non-power-of-2 heights/widths.
If you want to see the full source code of the demo look here.
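A modern footnote: in browsers that support it, you can skip the base64 round-trip entirely by wrapping the bytes in a Blob and decoding with createImageBitmap. Still asynchronous, but with no string conversion or <img> element (a sketch):
async function createTextureFromJpegBytes(gl, data) {
  const blob = new Blob([data], { type: 'image/jpeg' });  // data is the Uint8Array of JPEG bytes
  const bitmap = await createImageBitmap(blob);           // asynchronous decode
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return texture;
}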
