I'm trying to find a way to draw, via OpenGL calls in C++, to an HTML canvas element, in such a way that anything behind the canvas (a background image, HTML text, ...) is visible wherever the GL framebuffer has no opaque color drawn, or maybe even with the ability to use blending.
I've tried setting the alpha component to zero in the glClearColor() call. I use emscripten to compile my C++ code, and when I use it to generate a module loader, I've noticed that the generated output contains code that makes the canvas element's background color black when no background color has been set explicitly. I've tried disabling this behaviour in the hope of gaining transparency, to no avail.
I know that Unity's WebGL export supports transparent canvases, as demonstrated here. But I'm not sure whether that Unity build actually uses WebAssembly, since I do know that the JavaScript side of a canvas element can draw on a transparent background.
Is this already possible? Will it ever be?
Of course it's 100% possible because WebGL can do it. You can always fork the emscripten libraries, change a few lines, and use your fork. Unfortunately I can't give you a specific answer unless you specify how you are initializing OpenGL in emscripten. SDL? EGL? GLEW? GLUT? Each will have a different answer.
The first thing I would do is go look at the source for these libraries.
For SDL we see this
$SDL: {
  defaults: {
    width: 320,
    height: 200,
    // If true, SDL_LockSurface will copy the contents of each surface back to the Emscripten HEAP so that C code can access it. If false,
    // the surface contents are captured only back to JS code.
    copyOnLock: true,
    // If true, SDL_LockSurface will discard the contents of each surface when SDL_LockSurface() is called. This greatly improves performance
    // of SDL_LockSurface(). If discardOnLock is true, copyOnLock is ignored.
    discardOnLock: false,
    // If true, emulate compatibility with desktop SDL by ignoring alpha on the screen frontbuffer canvas. Setting this to false will improve
    // performance considerably and enables alpha-blending on the frontbuffer, so be sure to properly write 0xFF alpha for opaque pixels
    // if you set this to false!
    opaqueFrontBuffer: true
  },
and this
var webGLContextAttributes = {
  antialias: ((SDL.glAttributes[13 /*SDL_GL_MULTISAMPLEBUFFERS*/] != 0) && (SDL.glAttributes[14 /*SDL_GL_MULTISAMPLESAMPLES*/] > 1)),
  depth: (SDL.glAttributes[6 /*SDL_GL_DEPTH_SIZE*/] > 0),
  stencil: (SDL.glAttributes[7 /*SDL_GL_STENCIL_SIZE*/] > 0),
  alpha: (SDL.glAttributes[3 /*SDL_GL_ALPHA_SIZE*/] > 0)
};
For EGL there's this
var LibraryEGL = {
  $EGL__deps: ['$Browser'],
  $EGL: {
    // This variable tracks the success status of the most recently invoked EGL function call.
    errorCode: 0x3000 /* EGL_SUCCESS */,
    defaultDisplayInitialized: false,
    currentContext: 0 /* EGL_NO_CONTEXT */,
    currentReadSurface: 0 /* EGL_NO_SURFACE */,
    currentDrawSurface: 0 /* EGL_NO_SURFACE */,
    alpha: false,
For GLUT there is this
glutCreateWindow: function(name) {
  var contextAttributes = {
    antialias: ((GLUT.initDisplayMode & 0x0080 /*GLUT_MULTISAMPLE*/) != 0),
    depth: ((GLUT.initDisplayMode & 0x0010 /*GLUT_DEPTH*/) != 0),
    stencil: ((GLUT.initDisplayMode & 0x0020 /*GLUT_STENCIL*/) != 0),
    alpha: ((GLUT.initDisplayMode & 0x0008 /*GLUT_ALPHA*/) != 0)
  };
In each case the library derives the alpha WebGL context attribute from its own settings (SDL_GL_ALPHA_SIZE for SDL, EGL's alpha setting, the GLUT_ALPHA display-mode flag), which seems like it should lead to an answer.
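For reference, all that flag controls is the alpha entry of the WebGL context attributes. Outside emscripten, in plain JavaScript, it looks like this (a minimal sketch; the selector is a placeholder):

var canvas = document.querySelector("#my-canvas");      // placeholder selector
// Ask for a backbuffer with an alpha channel (alpha: true is actually the WebGL default)
var gl = canvas.getContext("webgl", { alpha: true });

// Clearing with zero alpha lets whatever is behind the canvas show through
gl.clearColor(0, 0, 0, 0);
gl.clear(gl.COLOR_BUFFER_BIT);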
Otherwise, if you can't be bothered to read through the source, you can force it. Add this code to the top of your HTML file, before any other scripts:
<script>
(function() {
  if (typeof HTMLCanvasElement !== "undefined") {
    wrapGetContext(HTMLCanvasElement);
  }
  if (typeof OffscreenCanvas !== "undefined") {
    wrapGetContext(OffscreenCanvas);
  }
  function wrapGetContext(ContextClass) {
    const isWebGL = /webgl/i;
    ContextClass.prototype.getContext = function(origFn) {
      return function(type, attributes) {
        if (isWebGL.test(type)) {
          attributes = Object.assign({}, attributes || {}, {alpha: true});
        }
        return origFn.call(this, type, attributes);
      };
    }(ContextClass.prototype.getContext);
  }
}());
</script>
I tested that with this sample by changing this line
SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
to this
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
and it worked for me.
I need to allow the user to choose a color on iOS.
I use the following code to fire up the color picker:
var picker = new UIColorPickerViewController();
picker.SupportsAlpha = true;
picker.Delegate = this;
picker.SelectedColor = color.ToUIColor();
PresentViewController(picker, true, null);
When the color picker displays, the color is always slightly off. For example:
input RGBA: (220, 235, 92, 255)
the initial color in the color picker might be:
selected color: (225, 234, 131, 255)
(these are real values from tests). Not a long way off... but enough to notice if you are looking for it.
I was wondering if the color picker grid was forcing the color to the nearest color entry - but if that were true, you would expect certain colors to stay fixed (i.e. if the input color exactly matches one of the grid colors, it should stay unchanged). That does not happen.
P.S. I store colors in a cross-platform fashion using simple RGBA values.
ToUIColor converts to a local UIColor using
new UIColor((nfloat)rgb.r, (nfloat)rgb.g, (nfloat)rgb.b, (nfloat)rgb.a);
From the hints in comments by @DonMag, I've got some way towards an answer, and also a set of resources that can help if you are struggling with this.
The key challenge is that macOS and iOS use DisplayP3 as the color space, but most people use default {UI,NS,CG}Color objects, which use the sRGB color space (actually, technically they are Extended sRGB, so they can cover the wider gamut of DisplayP3). If you want to know the difference between these three, there are resources below.
When you use the UIColorPickerViewController, it allows the user to choose colors in DisplayP3 color space (I show an image of the picker below, and you can see the "Display P3 Hex Colour" at the bottom).
If you give it a color in sRGB, I think it gets converted to DisplayP3. When you read the color, you need to convert back to sRGB, which is the step I missed.
However, I found that using CGColor.CreateByMatchingToColorSpace to convert from DisplayP3 to sRGB never quite worked. In the code below I convert to and from DisplayP3 and should have got back my original color, but I never did. I tried removing gamma by converting to a linear space on the way, but that didn't help.
cg = new CGColor(...values...); // defaults to sRGB

// sRGB to DisplayP3
tmp = CGColor.CreateByMatchingToColorSpace(
    CGColorSpace.CreateWithName("kCGColorSpaceDisplayP3"),
    CGColorRenderingIntent.Default, cg, null);

// DisplayP3 to sRGB
cg2 = CGColor.CreateByMatchingToColorSpace(
    CGColorSpace.CreateWithName("kCGColorSpaceExtendedSRGB"),
    CGColorRenderingIntent.Default, tmp, null);
Then I found an excellent resource: http://endavid.com/index.php?entry=79 that included a set of matrices that can perform the conversions. And that seems to work.
So now I have extended CGColor as follows:
public static CGColor FromExtendedsRGBToDisplayP3(this CGColor c)
{
    if (c.ColorSpace.Name != "kCGColorSpaceExtendedSRGB")
        throw new Exception("Bad color space");

    var mat = LinearAlgebra.Matrix<float>.Build.Dense(3, 3, new float[] { 0.8225f, 0.1774f, 0f, 0.0332f, 0.9669f, 0, 0.0171f, 0.0724f, 0.9108f });
    var vect = LinearAlgebra.Vector<float>.Build.Dense(new float[] { (float)c.Components[0], (float)c.Components[1], (float)c.Components[2] });
    vect = vect * mat;

    var cg = new CGColor(CGColorSpace.CreateWithName("kCGColorSpaceDisplayP3"), new nfloat[] { vect[0], vect[1], vect[2], c.Components[3] });
    return cg;
}

public static CGColor FromP3ToExtendedsRGB(this CGColor c)
{
    if (c.ColorSpace.Name != "kCGColorSpaceDisplayP3")
        throw new Exception("Bad color space");

    var mat = LinearAlgebra.Matrix<float>.Build.Dense(3, 3, new float[] { 1.2249f, -0.2247f, 0f, -0.0420f, 1.0419f, 0f, -0.0197f, -0.0786f, 1.0979f });
    var vect = LinearAlgebra.Vector<float>.Build.Dense(new float[] { (float)c.Components[0], (float)c.Components[1], (float)c.Components[2] });
    vect = vect * mat;

    var cg = new CGColor(CGColorSpace.CreateWithName("kCGColorSpaceExtendedSRGB"), new nfloat[] { vect[0], vect[1], vect[2], c.Components[3] });
    return cg;
}
Note: there are lots of assumptions baked into the matrices w.r.t. white point and gamma. But it works for me. Let me know if there are better approaches out there, or if you can tell me why my use of CGColor.CreateByMatchingToColorSpace didn't quite work.
Reading Resources:
Reading this: https://stackoverflow.com/a/49040628/6257435
then this: https://bjango.com/articles/colourmanagementgamut/
are essential starting points.
(Image of the iOS color picker)
OK so the question is self-explanatory.
Currently, I am doing the following:
Using the html2canvas npm package to convert my HTML to a canvas.
Converting the canvas to an image:
canvas.toDataURL("image/png");
Using jsPDF to create a PDF from this image after some manipulations.
All of this is CPU intensive and leads to an unresponsive page.
I'm trying to offload the process to a web worker, but I'm unable to postMessage either an HTML node or a canvas element to my worker thread.
Hence I'm trying to use OffscreenCanvas, but I'm stuck on how to go about this.
The first step cannot be done in a Web Worker. You need access to the DOM to be able to draw it, and Workers don't have access to the DOM.
The second step could be done on an OffscreenCanvas, html2canvas does accept a { canvas } parameter that you can also set to an OffscreenCanvas.
But once you get a context from an OffscreenCanvas you can't transfer it anymore, so you wouldn't be able to pass that OffscreenCanvas to the Worker, and you wouldn't gain anything from it anyway since everything would still be done on the UI thread.
So the best we can do is to let html2canvas initialize an HTMLCanvasElement, draw on it and then convert it to an image in a Blob. Blobs can traverse realms without any cost of copying and the toBlob() method can have its compression part be done asynchronously.
The third step can be done in a Worker since this PR.
I don't know React, so you will have to rewrite it, but here is a bare JS implementation:
script.js
const dpr = window.devicePixelRatio;
const worker = new Worker( "worker.js" );
worker.onmessage = makeDownloadLink;
worker.onerror = console.error;
html2canvas( target ).then( canvas => {
  canvas.toBlob( (blob) => worker.postMessage( {
    source: blob,
    width: canvas.width / dpr,  // retina?
    height: canvas.height / dpr // retina?
  } ) );
} );
worker.js
importScripts( "https://unpkg.com/jspdf@latest/dist/jspdf.umd.min.js" );

onmessage = ({ data }) => {
  const { source, width, height } = data;
  const reader = new FileReaderSync();
  const data_url = reader.readAsDataURL( source );
  const doc = new jspdf.jsPDF( { unit: "px", format: [ width, height ], orientation: width > height ? "l" : "p" });
  doc.addImage( data_url, "PNG", 0, 0, width, height, { compression: "NONE" } );
  postMessage( doc.output( "blob" ) );
};
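For completeness, script.js above assigns worker.onmessage to a makeDownloadLink function that isn't shown. One way it could look (a minimal sketch; the file name is a placeholder):

function makeDownloadLink( evt ) {
  // evt.data is the PDF Blob the worker posted back (doc.output("blob"))
  const url = URL.createObjectURL( evt.data );
  const a = document.createElement( "a" );
  a.href = url;
  a.download = "output.pdf";       // placeholder file name
  a.textContent = "Download PDF";
  document.body.appendChild( a );
}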
And since Stack Snippets' over-protected iframes break html2canvas, here is an outsourced live example.
Problem constraints:
I am not using three.js or similar, but pure WebGL
WebGL 2 is not an option either
I have a couple of models loaded stored as Vertices and Normals arrays (coming from an STL reader).
So far there is no problem when both models are the same size. Whenever I load 2 different models, an error message is shown in the browser:
WebGL: INVALID_OPERATION: drawArrays: attempt to access out of bounds arrays
so I suspect I am not manipulating multiple buffers correctly.
The models are loaded using the following typescript method:
public AddModel(model: Model)
{
    this.models.push(model);
    model.VertexBuffer = this.gl.createBuffer();
    model.NormalsBuffer = this.gl.createBuffer();

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Vertices, this.gl.STATIC_DRAW);
    model.CoordLocation = this.gl.getAttribLocation(this.shaderProgram, "coordinates");
    this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
    this.gl.enableVertexAttribArray(model.CoordLocation);

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Normals, this.gl.STATIC_DRAW);
    model.NormalLocation = this.gl.getAttribLocation(this.shaderProgram, "vertexNormal");
    this.gl.vertexAttribPointer(model.NormalLocation, 3, this.gl.FLOAT, false, 0, 0);
    this.gl.enableVertexAttribArray(model.NormalLocation);
}
After loading, the Render method is called to draw all loaded models:
public Render(viewMatrix: Matrix4, perspective: Matrix4)
{
    this.gl.uniformMatrix4fv(this.viewRef, false, viewMatrix);
    this.gl.uniformMatrix4fv(this.perspectiveRef, false, perspective);
    this.gl.uniformMatrix4fv(this.normalTransformRef, false, viewMatrix.NormalMatrix());

    // Clear the canvas
    this.gl.clearColor(0, 0, 0, 0);
    this.gl.viewport(0, 0, this.canvas.width, this.canvas.height);
    this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);

    // Draw the triangles
    if (this.models.length > 0)
    {
        for (var i = 0; i < this.models.length; i++)
        {
            var model = this.models[i];
            this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
            this.gl.enableVertexAttribArray(model.NormalLocation);
            this.gl.enableVertexAttribArray(model.CoordLocation);
            this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
            this.gl.uniformMatrix4fv(this.modelRef, false, model.TransformMatrix);
            this.gl.uniform3fv(this.materialdiffuseRef, model.Color.AsVec3());
            this.gl.drawArrays(this.gl.TRIANGLES, 0, model.TrianglesCount);
        }
    }
}
One model works just fine. Two cloned models also work OK. Different models fail with the error mentioned.
What am I missing?
The normal way to use WebGL
At init time

    for each shader program
        create and compile vertex shader
        create and compile fragment shader
        create program, attach shaders, link program

    for each model
        for each type of vertex data (positions, normals, colors, texcoords)
            create a buffer
            copy data to buffer
        create textures

Then at render time

    for each model
        use shader program appropriate for model
        bind buffers, enable and setup attributes
        bind textures and set uniforms
        call drawArrays or drawElements
But looking at your code, it's binding buffers and enabling and setting up attributes at init time instead of at render time.
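Applied to the code in the question, that means binding each model's own buffers and calling vertexAttribPointer inside the draw loop (the pointer setup in AddModel can then be dropped). A sketch of the per-model part of Render, reusing the question's field names (untested):

for (var i = 0; i < this.models.length; i++)
{
    var model = this.models[i];

    // position attribute from this model's vertex buffer
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.enableVertexAttribArray(model.CoordLocation);
    this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);

    // normal attribute from this model's normals buffer
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.enableVertexAttribArray(model.NormalLocation);
    this.gl.vertexAttribPointer(model.NormalLocation, 3, this.gl.FLOAT, false, 0, 0);

    // per-model uniforms, then draw
    this.gl.uniformMatrix4fv(this.modelRef, false, model.TransformMatrix);
    this.gl.uniform3fv(this.materialdiffuseRef, model.Color.AsVec3());
    this.gl.drawArrays(this.gl.TRIANGLES, 0, model.TrianglesCount);
}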
Maybe see this article and this one
Is there an easy way to visualize a custom hit area shape?
As described here
https://konvajs.github.io/docs/events/Custom_Hit_Region.html
the hitFunc attribute can be set to a function that uses the supplied context to draw a custom hit area / region. Something like this:
var star = new Konva.Star({
  ...
  hitFunc: function (context) {
    context.beginPath()
    context.arc(0, 0, this.getOuterRadius() + 10, 0, Math.PI * 2, true)
    context.closePath()
    context.fillStrokeShape(this)
  }
})
For debugging purposes, I would like an easy way to toggle visual rendering of the shape (circle in this case), eg by filling it yellow.
Thanks :)
Currently, there is no public API for that. But you can still add the hit canvas into the page somewhere and see how it looks:
const hitCanvas = layer.hitCanvas._canvas;
document.body.appendChild(hitCanvas);
// disable absolute position:
hitCanvas.style.position = '';
http://jsbin.com/mofocagupi/1/edit?js,output
Or you can add hit canvas on top of the stage and apply an opacity to make scene canvases visible:
const hitCanvas = layer.hitCanvas._canvas;
stage.getContainer().appendChild(hitCanvas);
hitCanvas.style.opacity = 0.5;
hitCanvas.style.pointerEvents = 'none';
http://jsbin.com/gelayolila/1/edit?js,output
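If you want a quick way to toggle this while debugging, you can wrap the second approach in a small helper. This is only a sketch based on the snippet above; hitCanvas._canvas is not public API, so treat it as debug-only and expect it to change between Konva versions:

function toggleHitCanvasDebug(stage, layer) {
  var hitCanvas = layer.hitCanvas._canvas;
  var container = stage.getContainer();
  if (hitCanvas.parentElement === container) {
    // overlay is currently shown: remove it again
    container.removeChild(hitCanvas);
  } else {
    // overlay the hit canvas on top of the stage, half transparent,
    // and let pointer events pass through to the real canvases
    hitCanvas.style.opacity = 0.5;
    hitCanvas.style.pointerEvents = 'none';
    container.appendChild(hitCanvas);
  }
}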
I've been trying to use node-gm + ImageMagick to circular-crop an image.
Anyway, here's my attempt at creating a mask using a black circle.
var original = 'app-server/photo.jpg';
var output = 'app-server/photo.png';
var maskPath = 'app-server/photo-mask.png';
gm(original)
  .crop(233, 233, 29, 26)
  .resize(80, 80)
  .setFormat('png')
  .write(output, function (err) {
    console.log(err || 'cropped to target size');
    gm(output)
      .out('-size', '80x80')
      .background('black')
      .drawCircle(20, 20, 0, 0)
      .toBuffer('PNG', function (err, buffer) {
        console.log(err || 'created circular black mask');
        // docs say "a buffer can be passed instead of
        // a filepath" but this is apparently false
        // and say something unclear about using black/white colors for masking.
        // I'm clearly lost
        gm(output)
          .mask(maskPath)
          .write(output, function (err) {
            console.log(err || 'applied circular black mask to image');
          });
      });
  });
I'm sure this can be done via some fancy string command concatenation, but despite my lack of image processing prowess, I still want to keep the code clean. I'm really looking for a solution using node-gm functions, preferably with fewer operations than my attempt (also preferably something that works, unlike mine).
I also tried to chain out the function calls for this command with no success:
https://stackoverflow.com/a/999563/1267778
Note I need to crop at a specific location (w,h,x,y) so these solutions also don't work for me:
node-pngjs
node-circle-image
Got it! After many hours of fiddling, I got exactly what I needed.
gm(originalFilePath)
  .crop(233, 233, 29, 26)
  .resize(size, size)
  .write(outputFilePath, function(err) {
    gm(size, size, 'none')
      .fill(outputFilePath)
      .drawCircle(size / 2, size / 2, size / 2, 0)
      .write(output, function(err) {
        console.log(err || 'done');
      });
  });
I'm using JCrop to allow the user to crop the image on the front-end and pass the coordinates (w,h,x,y) into crop().
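For completeness, wiring the JCrop selection into those calls could look something like this. This is only a sketch: the function and path names are placeholders, and it assumes the selection object JCrop reports (with x, y, w and h) is sent to the server as-is:

var gm = require('gm');

// sel is the JCrop selection (x, y, w, h); size is the target diameter in pixels
function circularCrop(originalPath, croppedPath, finalPath, sel, size, done) {
  gm(originalPath)
    .crop(sel.w, sel.h, sel.x, sel.y)
    .resize(size, size)
    .write(croppedPath, function (err) {
      if (err) return done(err);
      gm(size, size, 'none')       // transparent square canvas
        .fill(croppedPath)         // fill the circle with the cropped image
        .drawCircle(size / 2, size / 2, size / 2, 0)
        .write(finalPath, done);
    });
}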