I've been trying to use node-gm + ImageMagick to crop an image to a circle.
Anyway, here's my attempt at creating a mask using a black circle.
var original = 'app-server/photo.jpg';
var output = 'app-server/photo.png';
var maskPath = 'app-server/photo-mask.png';

gm(original)
  .crop(233, 233, 29, 26)
  .resize(80, 80)
  .setFormat('png')
  .write(output, function (err) {
    console.log(err || 'cropped to target size');
    gm(output)
      .out('-size', '80x80')
      .background('black')
      .drawCircle(20, 20, 0, 0)
      .toBuffer('PNG', function (err, buffer) {
        console.log(err || 'created circular black mask');
        // docs say "a buffer can be passed instead of
        // a filepath" but this is apparently false,
        // and say something unclear about using black/white colors for masking.
        // I'm clearly lost.
        gm(output)
          .mask(maskPath)
          .write(output, function (err) {
            console.log(err || 'applied circular black mask to image');
          });
      });
  });
I'm sure this can be done with some fancy concatenation of command-line arguments, but despite my lack of image-processing prowess, I still want to keep the code clean. I'm really looking for a solution using node-gm functions, preferably with fewer operations than my attempt (and preferably one that works, unlike mine).
I also tried to chain the function calls for this command, with no success:
https://stackoverflow.com/a/999563/1267778
Note that I need to crop at a specific location (w, h, x, y), so these solutions also don't work for me:
node-pngjs
node-circle-image
Got it! After many hours of fiddling, I got exactly what I needed.
gm(originalFilePath)
  .crop(233, 233, 29, 26)
  .resize(size, size)
  .write(outputFilePath, function (err) {
    if (err) return console.log(err);
    gm(size, size, 'none')
      .fill(outputFilePath)
      .drawCircle(size / 2, size / 2, size / 2, 0)
      .write(outputFilePath, function (err) {
        console.log(err || 'done');
      });
  });
The trick: gm(size, size, 'none') starts from a blank transparent canvas, fill() points subsequent draw primitives at the cropped image, and drawCircle() then stamps a circle filled with that image, leaving everything outside it transparent.
I'm using Jcrop to let the user crop the image on the front end and pass the coordinates (w, h, x, y) into crop().
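For anyone wiring this up end to end, here's a minimal sketch of how the Jcrop coordinates might reach crop() on the server. The Express route, field names, and file paths are my own assumptions, not part of the original setup:

// Hypothetical Express endpoint: the client posts Jcrop's selection
// (w, h, x, y) and we feed it straight into the gm pipeline above.
var express = require('express');
var gm = require('gm');

var app = express();
app.use(express.json());

app.post('/crop', function (req, res) {
  var w = req.body.w, h = req.body.h, x = req.body.x, y = req.body.y;
  var size = 80; // target diameter
  gm('app-server/photo.jpg')
    .crop(w, h, x, y)
    .resize(size, size)
    .write('app-server/photo.png', function (err) {
      if (err) return res.status(500).send(err.message);
      res.send('cropped');
    });
});

app.listen(3000);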
I'm learning p5.js. I've tried the following code to draw a circle each time I move the mouse, with a fill color taken from the corresponding pixel of an image.
let img;

function setup() {
  createCanvas(400, 400);
  loadImage('https://upload.wikimedia.org/wikipedia/commons/e/ef/Hayao_Miyazaki.jpg', img => {
    image(img, 0, 0);
  });
  noStroke();
}

function draw() {
  let c = get(mouseX, mouseY);
  fill(c);
  circle(mouseX, mouseY, 30);
}
But it seems to take the color from the canvas, not from the image. Because of that, if you don't move the mouse fast enough the color doesn't change at all, and even when it does, the range of colors is much more limited than I intended.
I can get the colors right if I put the loadImage() call inside draw(), but then only one circle at a time is visible.
Maybe I should store every pixel of the image in an array and read the values from there, without using get()? Is that possible?
I think I'm missing something simple, please help.
Use img.get(mouseX, mouseY) to get values from the image, not the whole canvas.
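Putting that together, a minimal sketch: load the image in preload() so the global img is actually assigned before draw() runs, then sample it with img.get():

let img;

function preload() {
  // preload() blocks setup()/draw() until the image is loaded,
  // so img is guaranteed to be assigned here.
  img = loadImage('https://upload.wikimedia.org/wikipedia/commons/e/ef/Hayao_Miyazaki.jpg');
}

function setup() {
  createCanvas(400, 400);
  noStroke();
}

function draw() {
  // Sample the image, not the canvas, so previously drawn circles
  // don't feed back into the color.
  let c = img.get(mouseX, mouseY);
  fill(c);
  circle(mouseX, mouseY, 30);
}

If per-frame get() turns out to be slow, the array idea from the question works too: call img.loadPixels() once and index into img.pixels directly.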
I too thought that img.get(mouseX, mouseY) would work, and #mevfy-y also suggested it, so it might work!
I use sharp for image processing. In the past I used ImageMagick, which has a nice OilPaint function for smoothing edges, e.g. in Perl:
$error = $image->[0]->OilPaint( radius => 0 )
Now I want to migrate from Perl + ImageMagick to Node + sharp.
How can I smooth edges in sharp?
There is no filter in sharp like OilPaint in ImageMagick. But it's no problem to use sharp and gm (which drives GraphicsMagick/ImageMagick) in combination, e.g.:
const sharp = require("sharp");
const gm = require("gm");

// Crop the region of interest with sharp first...
await sharp("cat.png")
  .extract({
    left: ankerX,
    top: ankerY,
    width: sizeX,
    height: sizeY,
  })
  .toFile("cat_section.png");

// ...then let gm apply the oil-paint effect to the cropped file.
// Note gm's write() is callback-based, so awaiting it does nothing.
gm("cat_section.png")
  .paint(1)
  .write("cat_section_oil.png", function (err) {
    if (err) console.log(err);
    else console.log("Oil Done!");
  });
If you want, you can easily write your own filter. You can find some useful code and instructions here: Apply a Oil Paint/Sketch effect to a photo using Javascript
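To give a flavor of the DIY route, below is a rough sketch of a classic oil-paint filter on sharp's raw pixel buffer: each output pixel takes the average color of the most common intensity bucket in its neighborhood. The oilPaint helper and its radius/levels parameters are my own naming, not a sharp API:

const sharp = require("sharp");

async function oilPaint(input, output, radius, levels) {
  const { data, info } = await sharp(input)
    .ensureAlpha()
    .raw()
    .toBuffer({ resolveWithObject: true });
  const { width, height, channels } = info;
  const out = Buffer.from(data); // copy; alpha stays as-is

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const count = new Array(levels).fill(0);
      const rSum = new Array(levels).fill(0);
      const gSum = new Array(levels).fill(0);
      const bSum = new Array(levels).fill(0);
      // Bucket every neighborhood pixel by quantized intensity.
      for (let dy = -radius; dy <= radius; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          const nx = Math.min(width - 1, Math.max(0, x + dx));
          const ny = Math.min(height - 1, Math.max(0, y + dy));
          const i = (ny * width + nx) * channels;
          const r = data[i], g = data[i + 1], b = data[i + 2];
          const bucket = Math.floor((r + g + b) / 3 * levels / 256);
          count[bucket]++;
          rSum[bucket] += r;
          gSum[bucket] += g;
          bSum[bucket] += b;
        }
      }
      // The winning bucket's mean color becomes the output pixel.
      let best = 0;
      for (let k = 1; k < levels; k++) {
        if (count[k] > count[best]) best = k;
      }
      const o = (y * width + x) * channels;
      out[o] = Math.round(rSum[best] / count[best]);
      out[o + 1] = Math.round(gSum[best] / count[best]);
      out[o + 2] = Math.round(bSum[best] / count[best]);
    }
  }

  await sharp(out, { raw: { width, height, channels } }).toFile(output);
}

// Usage, e.g.: await oilPaint("cat_section.png", "cat_section_oil.png", 2, 16);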
I'm trying to find a way to draw, via OpenGL calls in C++, to an HTML canvas element, in such a way that anything behind the canvas (a background image, HTML text, ...) shows through wherever the GL context's framebuffer has no opaque color drawn, ideally even with the ability to use blending.
I've tried setting the alpha component to zero in the glClearColor() call. I use Emscripten to compile my C++ code, and when I use it to generate a module loader, I've noticed that the generated output contains code that makes the canvas element's background color black when no background color has been set explicitly. I've tried disabling this behaviour in the hope of gaining transparency, to no avail.
I know that Unity's WebGL export supports transparent canvases, as demonstrated here. But I'm not sure whether that Unity build actually uses WebAssembly, though I do know the JavaScript side of a canvas element can draw on a transparent background.
Is this already possible? Will it ever be?
Of course it's 100% possible, because WebGL can do it. You can always fork the Emscripten libraries, change a few lines, and use your fork. Unfortunately I can't give you a precise answer unless you specify how you are initializing OpenGL in Emscripten. SDL? EGL? GLEW? GLUT? Each has a different answer.
The first thing I would do is go look at the source for these libraries.
For SDL we see this
$SDL: {
  defaults: {
    width: 320,
    height: 200,
    // If true, SDL_LockSurface will copy the contents of each surface back to the Emscripten HEAP so that C code can access it.
    // If false, the surface contents are captured only back to JS code.
    copyOnLock: true,
    // If true, SDL_LockSurface will discard the contents of each surface when SDL_LockSurface() is called. This greatly improves
    // performance of SDL_LockSurface(). If discardOnLock is true, copyOnLock is ignored.
    discardOnLock: false,
    // If true, emulate compatibility with desktop SDL by ignoring alpha on the screen frontbuffer canvas. Setting this to false
    // will improve performance considerably and enables alpha-blending on the frontbuffer, so be sure to properly write 0xFF
    // alpha for opaque pixels if you set this to false!
    opaqueFrontBuffer: true
  },
and this
var webGLContextAttributes = {
  antialias: ((SDL.glAttributes[13 /*SDL_GL_MULTISAMPLEBUFFERS*/] != 0) && (SDL.glAttributes[14 /*SDL_GL_MULTISAMPLESAMPLES*/] > 1)),
  depth: (SDL.glAttributes[6 /*SDL_GL_DEPTH_SIZE*/] > 0),
  stencil: (SDL.glAttributes[7 /*SDL_GL_STENCIL_SIZE*/] > 0),
  alpha: (SDL.glAttributes[3 /*SDL_GL_ALPHA_SIZE*/] > 0)
};
For EGL there's this
var LibraryEGL = {
  $EGL__deps: ['$Browser'],
  $EGL: {
    // This variable tracks the success status of the most recently invoked EGL function call.
    errorCode: 0x3000 /* EGL_SUCCESS */,
    defaultDisplayInitialized: false,
    currentContext: 0 /* EGL_NO_CONTEXT */,
    currentReadSurface: 0 /* EGL_NO_SURFACE */,
    currentDrawSurface: 0 /* EGL_NO_SURFACE */,
    alpha: false,
For GLUT there is this
glutCreateWindow: function(name) {
  var contextAttributes = {
    antialias: ((GLUT.initDisplayMode & 0x0080 /*GLUT_MULTISAMPLE*/) != 0),
    depth: ((GLUT.initDisplayMode & 0x0010 /*GLUT_DEPTH*/) != 0),
    stencil: ((GLUT.initDisplayMode & 0x0020 /*GLUT_STENCIL*/) != 0),
    alpha: ((GLUT.initDisplayMode & 0x0008 /*GLUT_ALPHA*/) != 0)
  };
all of which seem like they might lead to an answer: in each case the alpha context attribute is derived from a flag you can set from your C/C++ code (SDL_GL_ALPHA_SIZE via SDL_GL_SetAttribute, presumably EGL_ALPHA_SIZE in your eglChooseConfig attributes, or GLUT_ALPHA in glutInitDisplayMode).
Otherwise, if you can't be bothered to read through the source, you can force it. Add this code to the top of your HTML file, before any other scripts:
<script>
(function() {
  if (typeof HTMLCanvasElement !== "undefined") {
    wrapGetContext(HTMLCanvasElement);
  }
  if (typeof OffscreenCanvas !== "undefined") {
    wrapGetContext(OffscreenCanvas);
  }
  function wrapGetContext(ContextClass) {
    const isWebGL = /webgl/i;
    // Wrap getContext so any WebGL context is created with alpha: true,
    // regardless of the attributes the library passes in.
    ContextClass.prototype.getContext = function(origFn) {
      return function(type, attributes) {
        if (isWebGL.test(type)) {
          attributes = Object.assign({}, attributes || {}, {alpha: true});
        }
        return origFn.call(this, type, attributes);
      };
    }(ContextClass.prototype.getContext);
  }
}());
</script>
I tested that with this sample, changed this line
SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
to this
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
and it worked for me.
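If you want to sanity-check that the wrapper (or whichever attribute flag you set) actually took effect, you can ask the context itself from the dev-tools console; getContextAttributes() is standard WebGL:

// Calling getContext() again with the same type returns the canvas's
// existing context, so this works after the page has initialized GL.
var gl = document.querySelector('canvas').getContext('webgl');
console.log(gl.getContextAttributes().alpha); // should log true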
I'm having an issue using Pixi.js, where the lineTo method seems to draw lines that weren't specified. The bad lines aren't of uniform width (they seem to taper off towards the ends) and are much longer than they should be. Example jsfiddle showing the problem here:
http://jsfiddle.net/b1e48upd/1/
var stage, renderer, graphics;

function init() {
  stage = new PIXI.Stage(0x001531, true);
  renderer = new PIXI.WebGLRenderer(800, 600);
  // renderer = new PIXI.CanvasRenderer(400, 300);
  document.body.appendChild(renderer.view);
  requestAnimFrame(animate);

  graphics = new PIXI.Graphics();
  stage.addChild(graphics);
  graphics.beginFill(0xFF0000);
  graphics.lineStyle(3, 0xFF0000);
  graphics.moveTo(200, 200);
  graphics.lineTo(192, 192);
  graphics.lineTo(198, 183);
  graphics.lineTo(189, 197);
}

function animate() {
  requestAnimFrame(animate);
  renderer.render(stage);
}

init();
Using the canvas renderer gives correct results.
Searching for this problem, I've gathered that the WebGL renderer may have an issue with non-integer values (shown in the question here: Pixi.js lines are rendered outside dedicated area), and I've also seen that issuing consecutive lineTo calls to the same coordinates can cause issues, but my example does neither of those.
I want to be able to draw images to the viewport in my 3ds Max plugin.
The GraphicsWindow class has functions for drawing 3D objects in the viewport, but those draw calls are constrained by the current viewport and graphics-driver limits.
This is undesirable, as the image I want to draw should always be drawn no matter what graphics mode 3ds Max is in or what hardware is used; furthermore, I am only drawing 2D images, so there is no need to draw in a 3D context.
I have managed to get the HWND of the viewport, and the Max SDK has the function
DrawIconButton();
I have tried using this function, but it does not behave properly: the image flickers randomly during user interaction and disappears when there is no interactivity.
I have implemented it in the RedrawViewsCallback function; however, DrawIconButton() is undocumented and I am not sure this is the correct way to use it.
Here is the code I am using to draw the image:
void Sketch_RedrawViewsCallback::proc(Interface* ip)
{
    Interface10* ip10 = GetCOREInterface10();
    ViewExp* viewExp = ip10->GetActiveViewport();
    ViewExp10* currentViewport;
    if (viewExp != NULL) {
        currentViewport = reinterpret_cast<ViewExp10*>(viewExp->Execute(ViewExp::kEXECUTE_GET_VIEWEXP_10));
    } else {
        return;
    }

    GraphicsWindow* gw = currentViewport->getGW();
    HWND viewportWindow = gw->getHWnd();
    HDC hdc = GetDC(viewportWindow);

    HBITMAP bitmapImage = LoadBitmap(hInstance, MAKEINTRESOURCE(IDB_BITMAP1));
    Rect rbox(IPoint2(0, 0), IPoint2(48, 48));
    DrawIconButton(hdc, bitmapImage, rbox, rbox, true);
    // Free the bitmap each redraw, otherwise LoadBitmap leaks a GDI handle.
    DeleteObject(bitmapImage);

    ReleaseDC(viewportWindow, hdc);
    ip->ReleaseViewport(currentViewport);
}
I could not find a way to draw directly to the viewport window; however, I solved the problem by using a transparent modeless dialog box.
Maybe a complete redraw will solve the issue: ForceCompleteRedraw.