I'm learning ImageMagick. I want to create (with ImageMagick) a series of images that, after running the command
convert -delay 0 -loop 0 frame*.gif final.gif
gives a result like the attached animated GIF.
I want to program the series of commands myself, but I need a hint as to which effects and drawing instructions will give the most similar result, so I'm looking for something like:
draw a circle
blur it
save the frame
increase the radius of the circle
repeat
but the above is probably not enough.
Is this question too vague, or can somebody give me a hint?
If you are in a shell environment (OSX or Linux) you could do something like this, which creates a series of blurred circles with ever-increasing radii and saves each image in an appropriately named file:
for RAD in {10,20,30,40,50,60,70,80,90}; do
    convert -size 400x300 xc:black -fill cyan -stroke cyan \
        -draw "translate 200,150 circle 0,0 $RAD,0" -blur 0x8 blur_circle$RAD.gif
done
To get fancier, you could add repeated blur operations, or more blur for the larger radii.
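For instance, a minimal sketch (untested, same canvas and file naming as above) where the blur sigma grows with the circle radius:
for RAD in {10,20,30,40,50,60,70,80,90}; do
    BLUR=$((RAD / 10 + 4))   # blur sigma grows with the radius: 5 for RAD=10 up to 13 for RAD=90
    convert -size 400x300 xc:black -fill cyan -stroke cyan \
        -draw "translate 200,150 circle 0,0 $RAD,0" \
        -blur 0x$BLUR blur_circle$RAD.gif
done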
Then, as you suggest, convert those to an animated GIF:
convert -delay 0 -loop 0 blur_circle*.gif final.gif
EDIT
Working version, closer to the original "vision"
Here is a version that uses Python to generate two circles whose transparency varies on different time scales. With the current settings, the transparency over time looks like this graph, but you can supply any lists of integer values to get different effects:
Here is the image produced by the program:
And here is the program itself:
#!/usr/bin/env python3
""" Using imagemagick, generate a series of .gifs with fuzzy circles in them...
When the program is done, create an animated gif with:
convert -delay 0 -loop 0 blur_circle*.gif final.gif
(P.S. I know it is not 'convention' to capitalize variable names, but it makes
it easier to distinguish my definitions from built-ins)"""
import subprocess
import sys
CanvasW = 400
CanvasH = 400
CenterX = int(CanvasW/2)
CenterY = int(CanvasH/2)
InnerDia = 75
OuterDia = 155
BaseNameString = "blur_circles"
# play with overall blur level - 2 to 8
# this is in addition to the distance fuzzing
BlurLevel = '8'
# The following three lists must be the same length
# Transparency levels of the inner circle, ramping up and down
InnerAlphaList = list(range(0, 101, 10)) + list(range(90, 9, -20)) + [5, 0, 0, 0]
print("Inner:", len(InnerAlphaList), InnerAlphaList)
# Transparency of the outer circle
OuterAlphaList = list(range(0, 51, 8)) + list(range(40, 9, -12)) + list(range(8, 0, -2)) + [0, 0, 0, 0, 0, 0]
print("Outer:", len(OuterAlphaList), OuterAlphaList)
# Add 100 so the file names sort properly
NameNumberList = range(101, 101 + len(InnerAlphaList))
# Changing the Euclidean distance parameters affects how fuzzy the objects are
BaseCommand = '''convert -size {w}x{h} xc:black \
-fill "rgba( 0, 255,255 , {outal:0.1f} )" -stroke none -draw "translate {cenx},{ceny} circle 0,0 {outdia},0" -morphology Distance Euclidean:4,1000\
-fill "rgba( 0, 255,255 , {inal:0.1f} )" -stroke none -draw "translate {cenx},{ceny} circle 0,0 {india},0" -morphology Distance Euclidean:4,500 \
-blur 0x{blur} {basename}_{namenum}.gif'''
sys.stderr.write("Starting imagegen .")
for InAlpha, OutAlpha, NameNumber in zip(InnerAlphaList, OuterAlphaList, NameNumberList):
    FormattedCommand = BaseCommand.format(
        w = CanvasW, h = CanvasH,
        cenx = CenterX, ceny = CenterY,
        india = InnerDia, outdia = OuterDia,
        blur = BlurLevel,
        inal = InAlpha/100.0, outal = OutAlpha/100.0,
        basename = BaseNameString,
        namenum = NameNumber
    )
    sys.stderr.write(".")
    # sys.stderr.write("{}\n".format(FormattedCommand))
    ProcString = subprocess.check_output(FormattedCommand, stderr=subprocess.STDOUT, shell=True, text=True)
    if ProcString:
        sys.stderr.write(ProcString)
sys.stderr.write(" Done.\n")
""" BASE COMMANDS:
# inner circle:
convert -size 400x300 xc:black -fill "rgba( 0, 255,255 , 1 )" -stroke none -draw "translate 200,150 circle 0,0 75,0" -blur 0x8 -morphology Distance Euclidean:4,1000 blur_circle100.gif
#outer circle
convert -size 400x300 xc:black -fill "rgba( 0, 255,255 , .5 )" -stroke none -draw "translate 200,150 circle 0,0 150,0" -morphology Distance Euclidean:4,500 -blur 0x6 blur_circle100.gif
#both circles
convert -size 400x300 xc:black -fill "rgba( 0, 255,255 , .5 )" -stroke none -draw "translate 200,150 circle 0,0 150,0" -morphology Distance Euclidean:4,500 \
-fill "rgba( 0, 255,255 , 1 )" -stroke none -draw "translate 200,150 circle 0,0 75,0" -morphology Distance Euclidean:4,1000\
-blur 0x8 blur_circle100.gif
"""
Hey all, I have these 3 ImageMagick scripts (command-line arguments) that I am trying to combine into Imagemagick.NET code.
First (merging 2 images together):
convert ^
( testingl.jpg -resize 610x440^^ -gravity West -extent 1080x440 ) ^
( testingr.jpg -resize 610x440^^ -gravity East -extent 1080x440 ) ^
blend_mask.png -blur 0x7 ^
-composite bothImagesMerged.jpg
Second (Create 2 round objects with photo inside):
convert lisa.jpg -resize 100x100! ^
null: ( -size 100x100 xc:black -fill white -draw "circle 50,50 50,88" ) ^
-alpha off -compose copy_opacity -layers composite ^
null: ( -size 100x100 xc:"graya(100%,0)" -fill black -draw "circle 50,50 50,90" -blur 0x5 ) ^
-compose dstover -layers composite ^
-background none -gravity center +smush -25+0 ^
roundImageLisa.png
convert homer.jpg -resize 100x100! ^
null: ( -size 100x100 xc:black -fill white -draw "circle 50,50 50,88" ) ^
-alpha off -compose copy_opacity -layers composite ^
null: ( -size 100x100 xc:"graya(100%,0)" -fill black -draw "circle 50,50 50,90" -blur 0x5 ) ^
-compose dstover -layers composite ^
-background none -gravity center +smush -25+0 ^
roundImageHomer.png
Third (Write text on top of photo):
convert -size 1080x440 xc:none -gravity center ^
-font arial -pointsize 40 ^
-stroke black -strokewidth 2 -annotate +-330+-150 "Lisa Simpson" ^
-stroke black -strokewidth 2 -annotate +330+-150 "Homer Simpson" ^
-background none -shadow 520x3+0+0 +repage ^
-stroke none -fill white -annotate +-330+-150 "Lisa Simpson" ^
-stroke none -fill white -annotate +330+-150 "Homer Simpson" ^
bothImagesMerged.jpg +swap -gravity center -geometry +0-3 ^
-composite textOverImg.jpg
If I were able to combine all 3 of those, the output would look something like this:
I've tried to put all of them into a one-liner but cannot seem to find the correct way (mainly the order) to do so.
I do have some code that produces the round images in C#:
Bitmap bitmap = new Bitmap("lisa.jpg");
MagickImageCollection images = new MagickImageCollection();
IMagickImage roundImg = null;
IMagickImage mask = new MagickImage("xc:black", 100, 100);
mask.Settings.FillColor = MagickColors.White;
mask.Draw(new DrawableCircle(50, 50, 50, 90));
mask.HasAlpha = false;
roundImg = new MagickImage(bitmap);
roundImg.Resize(100, 100);
roundImg.Composite(mask, CompositeOperator.CopyAlpha);
roundImg.Draw(new DrawableStrokeColor(MagickColors.Black),
              new DrawableStrokeWidth(1),
              new DrawableFillColor(MagickColors.None),
              new DrawableCircle(50, 50, 50, 90));
IMagickImage shadow = new MagickImage("xc:none", 100, 100);
shadow.Settings.FillColor = MagickColors.Black;
shadow.Draw(new DrawableCircle(50, 50, 50, 90));
shadow.Blur(0, 5);
roundImg.Composite(shadow, CompositeOperator.DstOver);
images.Add(roundImg);
images.First().BackgroundColor = MagickColors.None;
IMagickImage result = new MagickImage();
result = images.SmushHorizontal(-35);
result.Write("lisa_round.png");
mask.Dispose();
shadow.Dispose();
result.Dispose();
images.Dispose();
Assistance would be great! @fmw42
Given an input image, I was thinking about how the image could be re-colored to a single new color while keeping the luminance of the image similar to what it was before.
So I wrote some naive code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <bits/stdc++.h>
using namespace cv;
using namespace std;
int main() {
    Mat img = imread("test2.png", 1);

    Mat hsv;
    cvtColor(img, hsv, CV_BGR2HSV);
    vector<Mat> channels;
    split(hsv, channels);
    Mat luminance;
    channels[2].copyTo(luminance);

    Mat res;
    img.copyTo(res);
    channels.clear();
    split(res, channels);
    for (int i = 0; i < res.rows; i++) {
        for (int j = 0; j < res.cols; j++) {
            channels[0].at<uchar>(i, j) = 0;
            channels[1].at<uchar>(i, j) = 0;
            channels[2].at<uchar>(i, j) = 255;
        }
    }
    merge(channels, res);

    cvtColor(res, hsv, CV_BGR2HSV);
    channels.clear();
    split(hsv, channels);
    luminance.copyTo(channels[2]);
    merge(channels, res);
    cvtColor(res, res, CV_HSV2BGR);

    imwrite("result.png", res);
    return 0;
}
What I actually did was extract the luminance map of the original image, create an image filled with the color I want, and then replace the luminance map of that output image with the luminance map of the input image.
But the resulting image seems to be darker in shade. Is there a better way to do this?
Input image:
Resulting image:
I think you are looking for "tinting". I don't have any references for how to do it with OpenCV, but there is a description in Anthony Thyssen's excellent ImageMagick notes here - search for the word "somehow". Maybe you can adapt it to OpenCV, if the effect is what you seek.
At the command-line, with ImageMagick, I did this:
convert drop.png -fill red -tint 50% result.jpg
Here is another way in ImageMagick.
convert \( input.png -colorspace gray \) \( -clone 0 -fill red -colorize 100 \) \( -clone 0 \) -compose colorize -composite result1.png
convert \( input.png -colorspace lab -channel red -separate \) \( -clone 0 -fill red -colorize 100 \) \( -clone 0 \) -compose colorize -composite result2.png
convert \( input.png -colorspace hsi -channel blue -separate \) \( -clone 0 -fill red -colorize 100 \) \( -clone 0 \) -compose colorize -composite result3.png
Choose whichever colorspace represents the intensity/luminance you want to use. See my script, color2gray, at http://www.fmwconcepts.com/imagemagick/color2gray/index.php to see how the intensity/luminance of different colorspaces shows up as gray.
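If you want to compare the different grayscale versions side by side before choosing, a quick sketch (untested; it just reuses the channel extractions from the commands above, plus montage, with hypothetical output file names):
convert input.png -colorspace gray gray_default.png
convert input.png -colorspace lab -channel red -separate gray_lab_L.png
convert input.png -colorspace hsi -channel blue -separate gray_hsi_I.png
montage gray_default.png gray_lab_L.png gray_hsi_I.png -tile 3x1 -geometry +5+5 gray_compare.png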
I am trying to overlay a 100x100 image on a blank 1080x1320 image by repetition.
$ThumbImg = 'thumb_100x100.png';
$new_image = "new.png";
exec("convert -size 1080x1320! xc:transparent all_images/" . $new_image);
$new_image_path = 'all_images/' . $new_image;
// main image: width = 1080, height = 1320
// thumb image: width = 100, height = 100
// rows = 14 (1320/100, rounded up)
// cols = 11 (1080/100, rounded up)
for ($row = 0; $row < 14; $row++)
{
    for ($col = 0; $col < 11; $col++)
    {
        exec('composite -geometry +' . ($col * 100) . '+' . ($row * 100) . ' ' . $ThumbImg . ' ' . $new_image_path . ' ' . $new_image_path);
    }
}
When I use the above code on version 6.9 it works fine and the 100x100 image gets repeated uniformly over the blank 1080x1320 image, but it doesn't work on version 7.0.3 (the latest IM version).
What change would be needed in the exec command to make this work on the newer version?
UPDATE -
The solution suggested by mark-setchell works for some patterns, but for the following one it doesn't create the expected image; instead, only a blank image is created.
UPDATE -
We need to use -set colorspace RGB in the command to get the right result, so the command would be
exec('convert new.png -fill thumb.png -set colorspace RGB -draw "color 0,0 reset" result.png');
I think it would be simpler to just use ImageMagick's -fill operator to tile your 100x100 image over a background.
So, if we create a 100x100 tile to repeat like this:
convert -size 100x100 gradient:cyan-magenta tile.png
Then you can tile that all over a 1080x1320 background like this:
convert xc:black"[1080x1320]" -fill tile.png -draw "color 0,0 reset" result.png
If you want to generate the tile pattern "on-the-fly" in one command, you can do it like this using an MPR (Magick Pixel Register) in memory to hold the fill:
convert -size 100x100 gradient:cyan-magenta -write MPR:tile +delete \
xc:black"[1080x1320]" -fill MPR:tile -draw "color 0,0 reset" result.png
If you wish to continue to use the original composite command, you need to re-order the parameters as follows with IM v7:
composite new.png -geometry +400+900 tile.png result.png
I've got this code that checks whether an image file contains blue pixels with ImageMagick, counts them, and then saves the result.
It works well, but many ImageMagick processes seem to hang forever on the server and make it very slow.
Is there a way to improve this code and avoid this problem?
module.exports = function (File) {
    File.observe('after save', function countPixels(ctx, next) {
        if (ctx.instance && !ctx.instance.blue_pixels) {
            var exec = require('child_process').exec;
            // Convert file to retrieve only blue pixels:
            exec('convert ' + ctx.instance.path + ' -fx "u.b>(u.g+0.2)&&u.b>(u.r+0.2)&&saturation>0.6" -format "%[fx:mean*w*h]" info:',
                function (error, stdout, stderr) {
                    if (error !== null) {
                        return next(error);
                    } else {
                        ctx.instance.blue_pixels = stdout;
                        File.upsert(ctx.instance);
                    }
                });
        }
        next();
    });
};
The -fx operator that you are using is notoriously slow - especially for large images. I had a try at recasting the same formula using faster methods, which may help you. So, I made a sample image:
convert xc:red xc:lime -append \( xc:blue xc:cyan -append \) +append -resize 256x256! input.png
And then rewrote your expression like this:
convert input.png \
    \( -clone 0 -separate -delete 0 -evaluate-sequence subtract -threshold 20% -write BG.png \) \
    \( -clone 0 -separate -delete 1 -evaluate-sequence subtract -threshold 20% -write BR.png \) \
    \( -clone 0 -colorspace hsl -separate -delete 0,2 -threshold 60% -write S.png \) \
    -delete 0 \
    -evaluate-sequence min result.png
Note that the -write XYZ.png operations are just debug outputs and can be removed.
Basically, I am building a mask in which all pixels that meet your criteria are white and all those that don't are black; at the end, I run -evaluate-sequence min to take the per-pixel minimum of the three masks, so that all three of your conditions must effectively be met:
that blue exceeds green by 20%
that blue exceeds red by 20%
that the saturation exceeds 60%
The -separate -delete N splits your image into RGB channels and then deletes one of the resulting channels, so if I -delete 1 (that is the Green channel) I am left with Red and Blue. Here are the intermediate, debug images. The first one is the condition Blue exceeds Red by 20%:
Then that Blue exceeds Green by 20%:
And finally that the Saturation exceeds 60%:
And then the result:
You'll need to put your -format "%[fx:mean*w*h]" info: back on the end in place of the output image name to get the count of saturated blue pixels.
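So the whole counting command might look something like this (a sketch, untested, with the debug -write operations dropped):
convert input.png \
    \( -clone 0 -separate -delete 0 -evaluate-sequence subtract -threshold 20% \) \
    \( -clone 0 -separate -delete 1 -evaluate-sequence subtract -threshold 20% \) \
    \( -clone 0 -colorspace hsl -separate -delete 0,2 -threshold 60% \) \
    -delete 0 \
    -evaluate-sequence min -format "%[fx:mean*w*h]" info: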
If I run your command:
convert input.png -fx "u.b>(u.g+0.2)&&u.b>(u.r+0.2)&&saturation>0.6" result.png
My brain is not quite right today, so please run some checks - I may have something back-to-front somewhere!
As a benchmark, on a 10,000x10,000 pixel PNG, my code runs in 30 seconds, whereas the -fx equivalent takes nearly 7 minutes.
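If you want to reproduce the benchmark, something along these lines should work (a sketch, untested; it just scales up the sample image from above and times the original -fx version):
convert xc:red xc:lime -append \( xc:blue xc:cyan -append \) +append -resize 10000x10000! big.png
time convert big.png -fx "u.b>(u.g+0.2)&&u.b>(u.r+0.2)&&saturation>0.6" -format "%[fx:mean*w*h]" info:
Time the faster pipeline above the same way to compare.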
I don't know about the ImageMagick part, but for the Node part I see that you call next() without regard to whether the ImageMagick operation has finished.
module.exports = function (File) {
    File.observe('after save', function countPixels(ctx, next) {
        if (ctx.instance && !ctx.instance.blue_pixels) {
            var exec = require('child_process').exec;
            // Convert file to retrieve only blue pixels:
            exec('convert ' + ctx.instance.path + ' -fx "u.b>(u.g+0.2)&&u.b>(u.r+0.2)&&saturation>0.6" -format "%[fx:mean*w*h]" info:',
                function (error, stdout, stderr) {
                    if (error !== null) {
                        return next(error);
                    } else {
                        ctx.instance.blue_pixels = stdout;
                        File.upsert(ctx.instance);
                        next();
                    }
                });
        } else {
            next();
        }
        // next(); // ran the next hook ASAP (before imagemagick returned the result)
    });
};
We are trying to apply an overlay on a series of images before merging them into one. Right now it seems ImageMagick is converting the image to the applied color instead of applying an overlay. The docs are not very clear about what we should be doing differently. I'd appreciate any insight on this. Code follows:
def self.concatenate_images(source, image)
  height = FastImage.size(image.url)[0]
  width = FastImage.size(image.url)[1]
  source = source.first
  source = source.resize_to_fill(height, width).quantize(256, Magick::GRAYColorspace).contrast(true)
  User.color_variant.each_slice(3).with_index do |slice, variant_index|
    slice.each_with_index do |color, color_index|
      colored = Magick::Image.new(height, width) { self.background_color = color.keys[0] }
      colored.composite!(source.negate, 0, 0, Magick::CopyOpacityCompositeOp)
      colored.write("#{User.get_img_path}#{color.values[0]}.png")
      if variant_index == 2 && color_index == 0
        system "convert #{User.get_img_path}#{slice[0].values[0]}.png #{image.url} +append #{User.get_img_path}#{slice[0].values[0]}.png"
      end
      if color_index != 0 && variant_index != 3
        system "convert #{User.get_img_path}#{slice[0].values[0]}.png #{User.get_img_path}#{slice[color_index].values[0]}.png +append #{User.get_img_path}#{slice[0].values[0]}.png"
      end
    end
  end
end
I don't speak Ruby, but I suspect you have the wrong blending mode. At the command line, you can see the available blending modes with:
identify -list compose
Output
Atop
Blend
Blur
Bumpmap
ChangeMask
Clear
ColorBurn
ColorDodge
Colorize
CopyBlack
CopyBlue
CopyCyan
CopyGreen
Copy
CopyMagenta
CopyOpacity
CopyRed
CopyYellow
Darken
DarkenIntensity
DivideDst
DivideSrc
Dst
Difference
Displace
Dissolve
Distort
DstAtop
DstIn
DstOut
DstOver
Exclusion
HardLight
HardMix
Hue
In
Lighten
LightenIntensity
LinearBurn
LinearDodge
LinearLight
Luminize
Mathematics
MinusDst
MinusSrc
Modulate
ModulusAdd
ModulusSubtract
Multiply
None
Out
Overlay
Over
PegtopLight
PinLight
Plus
Replace
Saturate
Screen
SoftLight
Src
SrcAtop
SrcIn
SrcOut
SrcOver
VividLight
Xor
I expect you can see something similar if you look in the file where your Magick::CopyOpacityCompositeOp is defined. So, if I take Mr Bean and a magenta rectangle the same size:
I can run a command like this:
convert MrBean.jpg overlay.png -compose blend -composite output.jpg
and I'll get this:
Now, that may, or may not be what you want, so I can run through all the available blending modes like this:
for blend in $(identify -list compose|grep -v Blur ); do
convert -label "$blend" MrBean2.jpg overlay.png -compose $blend -composite miff:-
done | montage - -tile 5x result.png
which gives this montage, showing the various results:
I am not into RoR, but I believe you are replacing your image with the solid color, instead of overlaying it, because the Copy_Opacity composite method replaces the alpha channel (Copy_Opacity method).
Instead of:
colored = Magick::Image.new(height, width) { self.background_color = color.keys[0]}
colored.composite!(source.negate, 0, 0, Magick::CopyOpacityCompositeOp)
Try this:
colored = Magick::Image.new(height, width) { self.background_color = color.keys[0]}
your_overlayed_image.composite!(colored, 0, 0, Magick::ColorizeCompositeOp)
See Alpha Compositing (RMagick) - The colorize composite operation