Photoshop has a strong anti-alias text effect. ImageMagick has an anti-alias option, but it does not offer different anti-alias types (e.g. Sharp, Crisp, Strong, Smooth) the way Photoshop does.
Is there any way to get a similarly strong anti-aliased text effect with ImageMagick?
It's not a perfect solution (I'm just learning this myself), but it will get you close: you can render the text larger, adding a stroke at whatever size you choose, then shrink it back down afterwards. Example code:
$template_file = "blank.png"; // a transparent PNG
$template_blob = file_get_contents($template_file);
$width  = 100;
$height = 50;
$mult   = 6; // supersampling factor
// values used by the text drawing below (adjust to taste)
$font_size  = 20;
$x_indent   = 10;
$y_indent   = 40;
$some_angle = 0;
$text       = "Sample";
$template = new Imagick();
$template->readImageBlob($template_blob);
$template->setImageDepth(8);
$template->setCompressionQuality(100);
$template->setCompression(Imagick::COMPRESSION_NO);
$template->setImageFormat("png");
$points = array(
    $mult, // scale by which you enlarge it
    0      // rotate
);
$template->distortImage(Imagick::DISTORTION_SCALEROTATETRANSLATE, $points, true);
$color = '#000000';
$draw = new ImagickDraw();
$pixel = new ImagickPixel('none');
$draw->setFont('Arial.ttf');
$draw->setFontSize($font_size * $mult);
$draw->setFillColor($color);
$draw->setStrokeColor($color);
$draw->setStrokeWidth(1);
$draw->setStrokeAntialias(true);
$draw->setTextAntialias(true);
$draw->setTextKerning($mult); // adjust the kerning if you like
// note: the coordinates here are in the enlarged (x $mult) space
$template->annotateImage($draw, $x_indent, $y_indent, $some_angle, $text);
$points = array(
    1 / $mult, // set it back to the original scale
    0          // rotate
);
$template->distortImage(Imagick::DISTORTION_SCALEROTATETRANSLATE, $points, true);
// Do something with $template here, e.g.:
$template->writeImage("test.png");
$template->clear();
$template->destroy();
$draw->clear();
$draw->destroy();
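Why this works: rendering at 6x and shrinking back down makes the resampler average each 6x6 block of hard-edged pixels into a fractional coverage value, which is exactly what anti-aliasing is. Here is a minimal sketch of that box-averaging in plain Python (illustration only; ImageMagick's resize filters are more sophisticated than a plain box average):

```python
# Supersampling sketch: a hard-edged (1-bit) diagonal drawn at 6x
# resolution, then box-downsampled by 6, yields smooth in-between
# edge values -- the essence of the enlarge-then-shrink trick.
MULT = 6
SIZE = 8  # output is SIZE x SIZE; the high-res version is 6x larger

hi_res = SIZE * MULT
# "Ink" strictly below the diagonal y = x, no anti-aliasing at all:
big = [[255 if y > x else 0 for x in range(hi_res)] for y in range(hi_res)]

def box_downsample(img, mult):
    """Average each mult x mult block into one output pixel."""
    size = len(img) // mult
    out = []
    for oy in range(size):
        row = []
        for ox in range(size):
            block = [img[oy * mult + dy][ox * mult + dx]
                     for dy in range(mult) for dx in range(mult)]
            row.append(sum(block) // (mult * mult))
        out.append(row)
    return out

small = box_downsample(big, MULT)
# The edge pixels are now partial-coverage greys, not just 0 or 255:
diagonal = [small[i][i] for i in range(SIZE)]
print(diagonal)
```

The larger the factor, the more intermediate levels the averaged edge can take, which is why a bigger multiplier reads as "stronger" anti-aliasing.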
I would like to create a brush for drawing on a PGraphics element with Processing, and I would like past brush strokes to remain visible. However, since the PGraphics element is redrawn every frame, previous brush strokes disappear immediately.
My idea was to create the PGraphics pg in setup(), make a copy of it, alter the original pg, and update the copy every frame in draw(). This produces a NullPointerException, most likely because pg is defined locally in setup().
This is what I have got so far:
PGraphics pg;
PFont font;

void setup() {
  font = createFont("Pano Bold Kopie.otf", 600);
  size(800, 800, P2D);
  pg = createGraphics(800, 800, P2D);
  pg.beginDraw();
  pg.background(0);
  pg.fill(255);
  pg.textFont(font);
  pg.textSize(400);
  pg.pushMatrix();
  pg.translate(width/2, height/2-140);
  pg.textAlign(CENTER, CENTER);
  pg.text("a", 0, 0);
  pg.popMatrix();
  pg.endDraw();
}

void draw() {
  copy(pg, 0, 0, width, height, 0, 0, width, height);
  loop();
  int c;
  loadPixels();
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      pg.pixels[mouseX + mouseY * width] = 0;
    }
  }
  updatePixels();
}
My last idea, which I have not attempted to implement yet, is to append pixels touched by the mouse to a list and draw from this list each frame. But this seems quite complicated, as it might result in very long arrays that need to be processed on top of the original image. So I hope there is another way around this!
EDIT: My goal is to create a smudge brush, hence a brush which kind of copies areas from one part of the image to other parts.
There's no need to manually copy pixels like that. The PGraphics class extends PImage, which means you can simply render it with image(pg, 0, 0);, for example.
The other thing you could do is an old trick for fading the background: instead of clearing pixels completely, you render a sketch-sized, slightly opaque rectangle with no stroke.
Here's a quick proof of concept based on your code:
PFont font;
PGraphics pg;

void setup() {
  //font = createFont("Pano Bold Kopie.otf", 600);
  font = createFont("Verdana", 600);
  size(800, 800, P2D);
  // clear main background once
  background(0);
  // prep fading background
  noStroke();
  // black fill with 10/255 transparency
  fill(0, 10);
  pg = createGraphics(800, 800, P2D);
  pg.beginDraw();
  // leave the PGraphics instance transparent
  //pg.background(0);
  pg.fill(255);
  pg.textFont(font);
  pg.textSize(400);
  pg.pushMatrix();
  pg.translate(width/2, height/2-140);
  pg.textAlign(CENTER, CENTER);
  pg.text("a", 0, 0);
  pg.popMatrix();
  pg.endDraw();
}

void draw() {
  // test with mouse pressed
  if (mousePressed) {
    // slowly fade/clear the background by drawing a slightly opaque rectangle
    rect(0, 0, width, height);
  }
  // don't clear the background, render the PGraphics layer directly
  image(pg, mouseX - pg.width / 2, mouseY - pg.height / 2);
}
If you hold the mouse pressed you can see the fade effect. (Changing the transparency from 10 to a higher value will make the fade quicker.)
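To see why a higher alpha fades quicker: each slightly opaque rectangle leaves (1 - alpha/255) of the previous frame's brightness behind, so the residue decays exponentially with the frame count. A quick sanity check in plain Python (no Processing needed; this linear compositing model is an idealisation of what the renderer does):

```python
# Each frame, a fill(0, alpha) rectangle leaves (1 - alpha/255) of the
# previous frame's brightness behind, so after n frames the residue of
# the original pixels is (1 - alpha/255) ** n.
def residue(alpha, frames):
    return (1 - alpha / 255.0) ** frames

# One second of fading at 60 fps for a few alpha values:
for alpha in (5, 10, 40):
    print(alpha, residue(alpha, 60))
```

At alpha = 10 roughly 9% of the old strokes survive after a second, while at alpha = 40 they are essentially gone.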
Update: To create a smudge brush you can still sample pixels and then manipulate the read colours to some degree. There are many ways to implement a smudge effect depending on what you want to achieve visually.
Here's a very rough proof of concept:
PFont font;
PGraphics pg;
int pressX;
int pressY;

void setup() {
  //font = createFont("Pano Bold Kopie.otf", 600);
  font = createFont("Verdana", 600);
  size(800, 800, P2D);
  // clear main background once
  background(0);
  // prep fading background
  noStroke();
  // black fill with 10/255 transparency
  fill(0, 10);
  pg = createGraphics(800, 800, JAVA2D);
  pg.beginDraw();
  // leave the PGraphics instance transparent
  //pg.background(0);
  pg.fill(255);
  pg.noStroke();
  pg.textFont(font);
  pg.textSize(400);
  pg.pushMatrix();
  pg.translate(width/2, height/2-140);
  pg.textAlign(CENTER, CENTER);
  pg.text("a", 0, 0);
  pg.popMatrix();
  pg.endDraw();
}

void draw() {
  image(pg, 0, 0);
}

void mousePressed() {
  pressX = mouseX;
  pressY = mouseY;
}

void mouseDragged() {
  // sample the colour where the mouse was pressed
  color sample = pg.get(pressX, pressY);
  // calculate the distance from where the "smudge" started to where it is now
  float distance = dist(pressX, pressY, mouseX, mouseY);
  // map this distance to transparency so the further the drag the less smudge
  // (short distance = high alpha, large distance = small alpha)
  float alpha = map(distance, 0, 30, 255, 0);
  // map distance to "brush size"
  float size = map(distance, 0, 30, 30, 0);
  // extract r,g,b values
  float r = red(sample);
  float g = green(sample);
  float b = blue(sample);
  // set new r,g,b,a values
  pg.beginDraw();
  pg.fill(r, g, b, alpha);
  pg.ellipse(mouseX, mouseY, size, size);
  pg.endDraw();
}
As the comments mention, the idea is to sample a colour on press, then use that sampled colour and fade it as you drag away from the source area. The code above simply reads a single pixel; you may want to experiment with sampling more pixels (e.g. a rectangle or ellipse).
Additionally, the code above isn't optimised. A few things could be sped up a bit, like reading pixels, extracting colours, calculating distance, etc. For example:
void mouseDragged() {
  // sample the colour where the mouse was pressed via direct pixel access
  // (note: call pg.loadPixels() beforehand so pixels[] is populated)
  color sample = pg.pixels[pressX + (pressY * pg.width)];
  // calculate the distance from where the "smudge" started to where it is now
  // (can use manual distance squared if this is too slow)
  float distance = dist(pressX, pressY, mouseX, mouseY);
  // map this distance to transparency so the further the drag the less smudge
  // (short distance = high alpha, large distance = small alpha)
  float alpha = map(distance, 0, 30, 255, 0);
  // map distance to "brush size"
  float size = map(distance, 0, 30, 30, 0);
  // extract r,g,b values
  int r = (sample >> 16) & 0xFF; // like red(), but faster
  int g = (sample >> 8) & 0xFF;
  int b = sample & 0xFF;
  // set new r,g,b,a values
  pg.beginDraw();
  pg.fill(r, g, b, alpha);
  pg.ellipse(mouseX, mouseY, size, size);
  pg.endDraw();
}
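As an aside, the bit-shift extraction relies on Processing packing colours as 32-bit ARGB integers (0xAARRGGBB). If that layout is new to you, here is the same packing and unpacking sketched in plain Python:

```python
# Processing stores a color as a 32-bit int laid out as 0xAARRGGBB;
# red()/green()/blue() are equivalent to these shifts and masks.
def unpack_argb(c):
    a = (c >> 24) & 0xFF
    r = (c >> 16) & 0xFF
    g = (c >> 8) & 0xFF
    b = c & 0xFF
    return a, r, g, b

def pack_argb(a, r, g, b):
    return (a << 24) | (r << 16) | (g << 8) | b

sample = pack_argb(255, 0x12, 0x34, 0x56)
print(unpack_argb(sample))  # -> (255, 18, 52, 86)
```

The shifts are fast because they avoid the function-call and float conversion overhead of red()/green()/blue() in a per-pixel loop.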
The idea is to start simple with clear, readable code and only at the end, if needed, look into optimisations.
I'm using Direct2D with SharpDX. I'm filling a rectangle with a LinearGradientBrush:
var brush = new LinearGradientBrush(
    renderTarget,
    new LinearGradientBrushProperties
    {
        StartPoint = bounds.ToRawVector2TopLeft(),
        EndPoint = bounds.ToRawVector2BottomLeft()
    },
    new GradientStopCollection(
        renderTarget,
        new[]
        {
            new GradientStop
            {
                Color = new RawColor4(0.2f, 0.2f, 0.2f, 1f),
                Position = 0
            },
            new GradientStop
            {
                Color = new RawColor4(0.1f, 0.1f, 0.1f, 1f),
                Position = 1
            }
        }));
...
renderTarget.FillRectangle(bounds.ToRawRectangleF(), brush);
Unfortunately, the gradient quality is terrible (500x250 so StackOverflow doesn't shrink it):
The problem is much more visible at larger render target sizes.
Same machine, here's Photoshop CC 2019 with the same stop colors (500x250 so StackOverflow doesn't shrink it):
I zoomed way in on each screenshot, and Direct2D's dithering seems poor compared to Photoshop's. I realize there simply aren't many colors available in a gradient this fine, so I'm not asking for more colors but rather for better dithering. Are there any settings I can use to improve the rendering quality of fine gradients like this in Direct2D?
I also read about GradientStopCollection1, which has a wrapper in SharpDX but doesn't appear to be usable anywhere. Would this solve my issue?
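The premise here can be checked numerically: a ramp from 0.2 to 0.1 spans only about 26 distinct 8-bit levels, so without dithering the image collapses into flat bands, while per-pixel noise added before quantization makes each row average out to the true value. This sketch in plain Python only illustrates the banding-versus-dithering trade-off; it does not claim to reproduce either renderer's actual algorithm:

```python
import random

random.seed(1)

ROWS, COLS = 250, 500  # the 500x250 image from the question, by rows

def row_value(y):
    # linear ramp from 0.2 down to 0.1, as in the gradient stops
    return 0.2 + (0.1 - 0.2) * y / (ROWS - 1)

def quantize_plain(v):
    return round(v * 255)

def quantize_dithered(v):
    # add up to one quantum of noise before truncating: each pixel is
    # off by less than one level, but a row's *average* tracks v closely
    return int(v * 255 + random.random())

# Without dithering the whole gradient collapses into ~26 flat bands:
bands = len({quantize_plain(row_value(y)) for y in range(ROWS)})

# With dithering, a single row averages out to nearly the true value:
y = 100
v = row_value(y)
plain_err = abs(quantize_plain(v) - v * 255)  # same error on every pixel
dith_avg = sum(quantize_dithered(v) for _ in range(COLS)) / COLS
dith_err = abs(dith_avg - v * 255)
print(bands, plain_err, dith_err)
```

This is why Photoshop's output looks smoother at the same bit depth: it spends the quantization error as noise instead of as hard band edges.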
So basically, I need performance. Currently at my job we use GDI+ to draw bitmaps. GDI+'s Graphics class has a method DrawImage(Bitmap, Point[]); that array contains 3 points, and the image is rendered with a skew effect.
Here is an image of what a skew effect is:
Skew effect
At work, we need to render between 5000 and 6000 different images each frame, which takes ~80 ms.
I thought of using SharpDX, since it provides GPU acceleration, and Direct2D in particular, since all I need is two-dimensional. However, the only way I found to reproduce the skew effect is to use SharpDX.Effects.Skew and calculate a matrix to draw the initial bitmap with a skew (I will provide the code below). The rendered image is exactly the same as GDI+'s, which is what I want. The only problem is that it takes 600-700 ms to render the 5000-6000 images.
Here is my SharpDX code.
To initialize the device:
private void InitializeSharpDX()
{
    swapchaindesc = new SwapChainDescription()
    {
        BufferCount = 2,
        ModeDescription = new ModeDescription(this.Width, this.Height, new Rational(60, 1), Format.B8G8R8A8_UNorm),
        IsWindowed = true,
        OutputHandle = this.Handle,
        SampleDescription = new SampleDescription(1, 0),
        SwapEffect = SwapEffect.Discard,
        Usage = Usage.RenderTargetOutput,
        Flags = SwapChainFlags.None
    };

    SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.BgraSupport | DeviceCreationFlags.Debug, swapchaindesc, out device, out swapchain);
    SharpDX.DXGI.Device dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device>();
    surface = swapchain.GetBackBuffer<Surface>(0);

    factory = new SharpDX.Direct2D1.Factory1(FactoryType.SingleThreaded, DebugLevel.Information);
    d2device = new SharpDX.Direct2D1.Device(factory, dxgiDevice);
    d2deviceContext = new SharpDX.Direct2D1.DeviceContext(d2device, SharpDX.Direct2D1.DeviceContextOptions.EnableMultithreadedOptimizations);
    bmpproperties = new BitmapProperties(new SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied), 96, 96);

    d2deviceContext.AntialiasMode = AntialiasMode.Aliased;
    bmp = new SharpDX.Direct2D1.Bitmap(d2deviceContext, surface, bmpproperties);
    d2deviceContext.Target = bmp;
}
And here is the code I use to recalculate every image position each frame (each time I zoom in or out with the mouse, I ask for a redraw). You can see in the code two loops over 5945 images where I ask it to draw the image. With no effect it takes 60 ms; with the effect it takes up to 700 ms, as I mentioned before:
private void DrawSkew()
{
    d2deviceContext.BeginDraw();
    d2deviceContext.Clear(SharpDX.Color.Blue);

    // draw skew effect on 5945 images using SharpDX (370 ms)
    for (int i = 0; i < 5945; i++)
    {
        AffineTransform2D effect = new AffineTransform2D(d2deviceContext);
        PointF[] points = new PointF[3];
        points[0] = new PointF(50, 50);
        points[1] = new PointF(400, 40);
        points[2] = new PointF(40, 400);
        effect.SetInput(0, actualBmp, true);

        float xAngle = (float)Math.Atan((points[1].Y - points[0].Y) / (points[1].X - points[0].X));
        float yAngle = (float)Math.Atan((points[2].X - points[0].X) / (points[2].Y - points[0].Y));

        Matrix3x2 Matrix = Matrix3x2.Identity;
        Matrix3x2.Skew(xAngle, yAngle, out Matrix);
        Matrix.M11 = Matrix.M11 * (((points[1].X - points[0].X) + (points[2].X - points[0].X)) / actualBmp.Size.Width);
        Matrix.M22 = Matrix.M22 * (((points[1].Y - points[0].Y) + (points[2].Y - points[0].Y)) / actualBmp.Size.Height);
        effect.TransformMatrix = Matrix;

        d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
        effect.Dispose();
    }

    // draw with no effect, only the actual bitmap, 5945 times using SharpDX (60 ms)
    for (int i = 0; i < 5945; i++)
    {
        d2deviceContext.DrawBitmap(actualBmp, 1.0f, BitmapInterpolationMode.NearestNeighbor);
    }

    d2deviceContext.EndDraw();
    swapchain.Present(1, PresentFlags.None);
}
After a lot of benchmarking, I realized the line that makes it really slow is:
d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
My guess is that my code or my setup does not use SharpDX's GPU acceleration as it should, and that is why it is so slow. I would expect at least better performance from SharpDX than from GDI+ for this kind of work.
I'm using ImageMagick to resize uploaded files, but while they're processing I want to show a rotating GIF wheel to the user where the thumbnail would normally be. I serve about 7 sizes of thumbs and would like the wheel to remain at its 32x32 size in the middle; that's the simple bit.
What I need to know is: can I do the above while still retaining the animation?
Example:
This Image:
Starting at this size
With Animation
Check out this fiddle, it might contain what you want: http://jsfiddle.net/TGdFB/1/
It uses jQuery, but should be easily adaptable...
Ended up doing this manually in Photoshop after not being able to find an automated way of doing it with ImageMagick. I found the 'coalesce' flag but not much else.
There is a solution in PHP which I use to watermark animated GIF images. It creates a black background, puts the image on it, and then adds the watermark:
$watermarkpath = 'path to the watermark image (jpg|gif|png)';
$imagepath = 'path to the image';
$watermark = new Imagick($watermarkpath);
$GIF = new Imagick();
$GIF->setFormat("gif");
$animation = new Imagick($imagepath);
foreach ($animation as $frame) {
    $iWidth = $frame->getImageWidth();
    $iHeight = $frame->getImageHeight();
    $wWidth = $watermark->getImageWidth();
    $wHeight = $watermark->getImageHeight();
    if ($iHeight < $wHeight || $iWidth < $wWidth) {
        // resize the watermark
        $watermark->scaleImage($iWidth, $iHeight);
        // get the new size
        $wWidth = $watermark->getImageWidth();
        $wHeight = $watermark->getImageHeight();
    }
    $bgframe = new Imagick();
    $bgframe->newImage($iWidth, $iHeight + 80, new ImagickPixel('black'));
    $bgframe->setImageDelay($frame->getImageDelay());
    $x = $iWidth - $wWidth - 5;
    $y = ($iHeight + 80) - $wHeight - 5;
    $bgframe->compositeImage($frame, Imagick::COMPOSITE_DEFAULT, 0, 0);
    $bgframe = $bgframe->flattenImages(); // flattenImages() returns a new Imagick
    $bgframe->compositeImage($watermark, Imagick::COMPOSITE_OVER, $x, $y);
    $bgframe = $bgframe->flattenImages();
    $GIF->addImage($bgframe);
}
$GIF->writeImages($imagepath, true);
I used three.js r49 to create two cube geometries, with a directional light casting shadows on them, and got the result shown in the following picture.
I noticed that the shadow in the green circle should not appear, since the directional light is behind both of the cubes. I guessed this was a material issue; I've tried changing various material parameters as well as the material type itself, but the result stays the same. I also tested the same code with r50 and r51 and got the same result.
Could anybody please give me a hint on how to get rid of that shadow?
Both cubes are created using CubeGeometry and MeshLambertMaterial, as in the following code.
The code:
// ambient
var light = new THREE.AmbientLight( 0xcccccc );
scene.add( light );
// the large cube
var p_geometry = new THREE.CubeGeometry(10, 10, 10);
var p_material = new THREE.MeshLambertMaterial({ambient: 0x808080, color: 0xcccccc});
var p_mesh = new THREE.Mesh( p_geometry, p_material );
p_mesh.position.set(0, -5, 0);
p_mesh.castShadow = true;
p_mesh.receiveShadow = true;
scene.add(p_mesh);
// the small cube
var geometry = new THREE.CubeGeometry( 2, 2, 2 );
var material = new THREE.MeshLambertMaterial({ambient: 0x808080, color: Math.random() * 0xffffff});
var mesh = new THREE.Mesh( geometry, material );
mesh.position.set(0, 6, 3);
mesh.castShadow = true;
mesh.receiveShadow = true;
// add small cube as the child of large cube
p_mesh.add(mesh);
p_mesh.quaternion.setFromAxisAngle(new THREE.Vector3(0, 1, 0), 0.25 * Math.PI );
// the light source
var light = new THREE.DirectionalLight( 0xffffff );
light.castShadow = true;
light.position.set(0, 10, -8); // set it light source to top-behind the cubes
light.target = p_mesh; // target the light at the large cube
light.shadowCameraNear = 5;
light.shadowCameraFar = 25;
light.shadowCameraRight = 10;
light.shadowCameraLeft = -10;
light.shadowCameraTop = 10;
light.shadowCameraBottom = -10;
light.shadowCameraVisible = true;
scene.add( light );
Yes, this is a known, and long-standing, WebGLRenderer issue.
The problem is that the dot product of the face normal and the light direction is not taken into consideration in the shadow calculation. As a consequence, "shadows show through from the back".
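The missing test is easy to express: a face whose normal points away from the light (dot(normal, lightDirection) <= 0) receives no direct light, so a shadow-map hit on it should change nothing. A small numeric sketch in Python (the vectors are illustrative, roughly matching the question's light at (0, 10, -8)):

```python
# A face whose normal points away from the light direction is unlit,
# so the shadow-map result should be ignored for it.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def receives_cast_shadow(normal, to_light, shadow_map_says_occluded):
    lit = dot(normal, to_light) > 0
    # back-facing (unlit) surfaces cannot be darkened further by a shadow
    return lit and shadow_map_says_occluded

# Light roughly toward (0, 10, -8), behind the cubes:
to_light = (0.0, 0.78, -0.62)
front_face = (0.0, 0.0, 1.0)   # faces the camera, away from the light
back_face = (0.0, 0.0, -1.0)   # faces the light
print(receives_cast_shadow(front_face, to_light, True))  # False: unlit anyway
print(receives_cast_shadow(back_face, to_light, True))   # True: lit and occluded
```

Because the renderer skipped the lit check, the camera-facing (unlit) faces were darkened by the shadow map, which is exactly the artefact in the green circle.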
As a work-around, you could have a different material for each face, with only certain materials receiving shadows.
three.js r.71