Please help me write a command for ImageMagick. I need to generate a border while having only the top piece of this border. Here is the algorithm:
Take the piece height, and generate a canvas with width = height*3 and height = height*3.
Place the top piece in the top center position.
Rotate the piece by 90 degrees three times and place the copies in the center-right, bottom and center-left positions.
Make the corners from each slice. I think each slice must be lengthened, and then a triangle mask calculated to cut off the unnecessary edge, but I'm afraid the pieces will then cover each other. So I don't know what to do. Any ideas?
I find your question quite hard to understand, but I gather you basically want to make an entire picture frame from a picture of one side of it - i.e. a picture of the frame moulding.
I realise it is a bit late, but fancied a little challenge, so here is a bash script that takes a picture of the top piece of moulding as a parameter and makes a frame from it...
#!/bin/bash
################################################################################
# pictureframe
# Mark Setchell
#
# Make a picture frame from a single horizontal moulding picture - with mitred
# corners and a transparent middle for a photo!
################################################################################
# Set DEBUG=1 for debugging info
DEBUG=1
# Image of the moulding is the 1st parameter, use "moulding.jpg" if none given
moulding=${1:-moulding.jpg}
[ $DEBUG -gt 0 ] && echo "DEBUG: Using file: $moulding"
# Get width and height of image
read w h < <(convert "$moulding" -format "%w %h" info: )
[ $DEBUG -gt 0 ] && echo "DEBUG: Dimensions: ${w}x${h}"
# Determine corner points of mitring mask - it looks like this
#
#   ----------------------------
#    \                        /
#     ----------------------
#
polyline=$(convert "$moulding" -format "0,0 %w,0 %[fx:w-h],%h %h,%h" info:)
[ $DEBUG -gt 0 ] && echo "DEBUG: Mitring mask: $polyline"
# Mitre corners
dbg=""
[ $DEBUG -gt 0 ] && dbg="-write mask.png" && echo "DEBUG: Mask is in file mask.png"
convert "$moulding" \( +clone -evaluate set 0 -fill white -draw "polyline $polyline" -alpha off $dbg \) -compose copyopacity -composite top.png
# Build whole frame by rotating top through the angles and compositing onto extended blank canvas
convert top.png -background none -extent x${w} \
\( top.png -rotate 90 \) -gravity east -composite \
\( top.png -rotate 180 \) -gravity south -composite \
\( top.png -rotate 270 \) -gravity west -composite -flatten result.png
[ $DEBUG -gt 0 ] && echo "DEBUG: Output file is: result.png"
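If you save that as pictureframe and make it executable, a typical run looks like this (the moulding picture defaults to moulding.jpg if you don't pass one, and the output is written to result.png):
chmod +x pictureframe
./pictureframe moulding.jpg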
So, if I start with this moulding,
it will generate this:
And if I start with this
it will generate this:
Note that this script may not make the same aesthetic decisions as a professional picture framer :-)
But then again, it will never cut a badly fitting mitre :-)
I have a set of images, and I can use the ImageMagick montage command on them to produce a montage image file with transparent background (let's call this fgimg). Now I have another existing image (let's call this bgimg) that I'd like to use (after some special processing with the convert command) as the background for fgimg, which can be achieved within the same convert command. At this point it seems trivial to avoid writing the temporary fgimg to disk, simply by piping the standard output of montage to the standard input of convert.
My problem is that the special processing I'm applying to bgimg will require some knowledge of the image properties of fgimg (e.g., resizing bgimg to have the same size as fgimg), which I don't know in advance. How can this information be retrieved and used in the convert command?
Note: I'm using ImageMagick version 6.9.7-4 on Linux.
I'll include some commands below to further illustrate the problem in detail.
The following command produces the montage image fgimg from a set of input images. The output is in the 'special' miff format (which seems best for temporary output to be worked on later), and has transparent background so that the actual background can be applied later. Most of the other options here are not important, but the point is that the output size (dimensions) cannot be determined in advance.
montage input_*.jpg -tile 5x -border 2 -geometry '200x200>+20+20' \
-gravity center -set label '%f\n%G' -background none -fill white \
-title 'Sample Title' miff:fgimg
Next, I have another input image bgimg.jpg. I want to perform some processing on it before using it as background to fgimg. The processing can be quite complex in general, but in this example, I want to:
resize bgimg.jpg to fit inside the dimensions of fgimg without any cropping;
apply a fade-to-black effect around the edges;
make it the same size as fgimg, with a black background;
combine this with fgimg to produce the final output.
Notice that I need the size of fgimg in two places. I can first extract this into a shell variable:
size=$(identify -format '%G' miff:fgimg)
Then I can do all the steps above in one convert command (note that $size is used twice):
convert "bgimg.jpg[$size]" -gravity center \
\( +clone -fill white -colorize 100% -bordercolor black \
-shave 20 -border 20 -blur 0x20 \) -compose multiply -composite \
-background black -compose copy -extent $size \
miff:fgimg -compose over -composite final_out.jpg
Now here is the problem: I want to avoid writing the temporary file fgimg to disk.
I could replace miff:fgimg with miff:- in both the montage and convert commands and then just pipe one to the other: montage ... | convert .... But how do I deal with the $size?
I tried to use file descriptors (miff:fd:3) but this does not seem to work, which is confirmed by the comments to this question.
Is there a way to do this (in ImageMagick v6) without creating a temporary file?
This example command uses ImageMagick v6 on a bash shell. Instead of "montage" it starts by using "convert" to create a "logo:", one of IM's built-in sample images, then pipes it out as a MIFF and into the "convert" command that follows. You can pipe the output of "montage" just as easily. And it uses another IM built-in image "rose:" as your "bgimg.jpg"...
convert logo: miff:- | convert - rose: \
+distort SRT "%[fx:t?min(u.w/v.w,u.h/v.h):1] 0" \
-shave 1 +repage -gravity center -bordercolor black \
\( -clone 1 -fill white -colorize 100 -shave 6 -border 6 \
-blur 0x6 -clone 1 -compose multiply -composite \) -delete 1 \
\( -clone 0 -alpha off -fill black -colorize 100 \
-clone 1 -compose over -composite \) -delete 1 \
+swap -composite final_out.jpg
That reads the piped image "-" and the background image "rose:".
Then it uses "+distort" with an FX expression to scale "rose:" to the maximum dimensions that still fit within the original piped input image. That operation adds a pixel all around so we use "-shave 1" to get rid of that.
Next inside parentheses it clones that re-scaled background image, makes an edge blur mask, and composites them to make the fade-to-black edge on the background image. Right after the parentheses it deletes the non-edged background image.
In the next parentheses it clones the input image, makes it black, clones the modified background image, and composites it centered over the black one. Again the non-extended background image is discarded after the parentheses with "-delete 1".
Finally the modified background and the input image are put in the proper order with "+swap" and composited for the final output. Run this command without the last "-composite" to see the two images that result from the prior parts of the command.
Both the main input image and background image can be any size, any dimensions, and any aspect ratio. This works for me on v6.8.9 on a bash shell, and it should work on any newer version of ImageMagick. It should work on Windows by removing all the backslashes that escape parentheses and changing the continued-line backslashes "\" to carets "^".
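For instance, applying just those two changes to the command above gives something like this for a Windows command prompt (an untested sketch; inside a .bat file the % signs would additionally need doubling to %%):
convert logo: miff:- | convert - rose: ^
  +distort SRT "%[fx:t?min(u.w/v.w,u.h/v.h):1] 0" ^
  -shave 1 +repage -gravity center -bordercolor black ^
  ( -clone 1 -fill white -colorize 100 -shave 6 -border 6 ^
  -blur 0x6 -clone 1 -compose multiply -composite ) -delete 1 ^
  ( -clone 0 -alpha off -fill black -colorize 100 ^
  -clone 1 -compose over -composite ) -delete 1 ^
  +swap -composite final_out.jpg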
EDITED TO ADD:
You can use that FX expression to find the scaling amount, save it as a variable, then isolate the background image inside parentheses and use that variable to do the scaling and shaving there. That way it only affects the background image. There may be a rounding error with that, but the main image, which determines the exact final output dimensions, will be unaffected. Note the difference in the first few lines of this command...
convert logo: miff:- | convert - rose: \
-set option:v1 "%[fx:min(u.w/v.w,u.h/v.h)]" \
\( -clone 1 +distort SRT "%[v1] 0" -shave 1 \) -delete 1 \
+repage -gravity center -bordercolor black \
\( -clone 1 -fill white -colorize 100 -shave 6 -border 6 \
-blur 0x6 -clone 1 -compose multiply -composite \) -delete 1 \
\( -clone 0 -alpha off -fill black -colorize 100 \
-clone 1 -compose over -composite \) -delete 1 \
+swap -composite final_out.jpg
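To map that back onto the original question, you would pipe your actual montage output in place of logo: and read your bgimg.jpg in place of rose: - roughly like this (an untested sketch that just splices the montage command from the question onto the front):
montage input_*.jpg -tile 5x -border 2 -geometry '200x200>+20+20' \
    -gravity center -set label '%f\n%G' -background none -fill white \
    -title 'Sample Title' miff:- |
convert - bgimg.jpg \
    -set option:v1 "%[fx:min(u.w/v.w,u.h/v.h)]" \
    \( -clone 1 +distort SRT "%[v1] 0" -shave 1 \) -delete 1 \
    +repage -gravity center -bordercolor black \
    \( -clone 1 -fill white -colorize 100 -shave 6 -border 6 \
    -blur 0x6 -clone 1 -compose multiply -composite \) -delete 1 \
    \( -clone 0 -alpha off -fill black -colorize 100 \
    -clone 1 -compose over -composite \) -delete 1 \
    +swap -composite final_out.jpg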
How do I adjust the color ratio of a gradient?
I currently use the following to create my gradient.
convert -size 200x600 gradient:none-black output.png
Although at least one acceptable solution has been provided, here are a couple of other ideas...
Example 1: This command creates a red-blue gradient of the finished dimensions, crops it into a top and bottom half, resizes them to 40 and 60 percent of the input height, and appends them back to make a single image. What started as the color at the exact vertical center is now at 40% down from the top with clean gradients going up and down from there.
convert -size 200x600 gradient:red-blue -crop 1x2@ \
\( -clone 0 -resize 100x40% \) \( -clone 1 -resize 100x60% \) \
-delete 0,1 -append result.png
That splits the gradient image into a top and bottom half, then inside parentheses it resizes each to the required proportion. After that it deletes the 50/50 crops from before the parentheses, appends the two resized remaining images, and writes the output.
Example 2: This next example starts by creating the red-blue gradient in the final dimensions, then sets variables to hold the top color, the exact middle color, and the bottom color.
Then inside the first parentheses it clones and crops the image to 60% its original height. It uses "-sparse-color" to fill that with a gradient from "color1" to "color2".
Inside the second parentheses it clones and crops the image to 40% its original height, and using "-sparse-color" again it fills it with a gradient from "color2" to "color3".
After creating those two gradients, delete the original, append the other two together, and write the output.
convert -size 200x600 gradient:red-blue \
-set option:color1 "%[pixel:p{0,0}]" \
-set option:color2 "%[pixel:p{0,h/2}]" \
-set option:color3 "%[pixel:p{0,h}]" \
\( -clone 0 -extent 100x60% \
-sparse-color barycentric "0,0 %[color1] 0,%[h] %[color2]" \) \
\( -clone 0 -extent 100x40% \
-sparse-color barycentric "0,0 %[color2] 0,%[h] %[color3]" \) \
-delete 0 -append result.png
Maybe you want this, where you get to the half-red/half-blue colour just 20% of the way down the height of the image. It is done by creating two gradients of different lengths and putting them back-to-back:
midcolour="rgb(127,0,127)"
convert -size 100x20 gradient:red-"$midcolour" \
-size 100x80 gradient:"$midcolour"-blue \
-append result.png
Another way is to put 3 single pixels together in a column and then resize that up to what you want. I know you want the middle to be 40% red and 60% blue, but, for ease of viewing, I'll make it lime green:
convert -size 1x1 xc:red xc:lime xc:blue -append -resize 100x100\! result.png
You would change lime to something like "rgb(100,0,155)".
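In other words, the same one-liner with the suggested mid colour swapped in would be:
convert -size 1x1 xc:red xc:"rgb(100,0,155)" xc:blue -append -resize 100x100\! result.png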
I am not quite sure I understand, but if you want to start with 90% transparent (10% opaque black) and end with black, you can do:
convert -size 200x600 gradient:"graya(0,0.1)-black" output.png
graya means gray with alpha. So graya(0,0.1) is gray(0) or black with 0.1 fraction opacity, so 90% transparent.
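If you also want to shift where the mid-tone falls, as in the other answers, the same split-and-append idea works with graya() colors too. A rough, untested sketch that puts the halfway opacity (0.55, midway between 0.1 and 1.0) just 20% of the way down a 200x600 image:
convert -size 200x120 gradient:"graya(0,0.1)-graya(0,0.55)" \
        -size 200x480 gradient:"graya(0,0.55)-black" \
        -append output.png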
Perhaps this is what you want:
Normal 50-50:
convert -size 200x600 gradient:red-blue red_blue1.png
60-40:
rr=`convert xc: -format "%[fx:0.6*255]" info:`
bb=`convert xc: -format "%[fx:0.4*255]" info:`
convert -size 200x600 gradient:"rgb($rr,0,$bb)-rgb(0,0,255)" red_blue2.png
Or perhaps this is what you want:
bb=`convert xc: -format "%[fx:0.1*255]" info:`
convert -size 200x600 gradient:"rgb(255,0,$bb)-rgb(0,0,255)" red_blue3.png
I have an ImageMagick bash shell script called multigradient, which may do what you want. It allows you to create gradients of many colors, each with stops to set where they start. For example:
multigradient -w 200 -h 600 -s "blue 0 red 80" -d to-top result.png
The first color must start at 0, but there are many options for the direction. Here I go from bottom to top, with pure blue at the bottom and pure red starting at stop 80 (80% of the way up from the bottom) and continuing to the top.
I use this code to auto-generate slides from a .txt file where I write captions this way:
CAPTION 1
CAPTION 2
...
CAPTION N
This is the script I use
#!/bin/bash
i=0
# loop through files
while IFS= read -r p; do
# if it's not an empty line
if [ -n "$p" ]; then
# echo line
echo "$p";
convert -background none -font Trebuchet-MS -fill white -pointsize 60 -gravity center -draw "text 0,300 'pango:$p'" slide_template.png slides/slide-$i.png
i=$((i+1))
fi;
# pass input
done < "$1"
slide_template.png is simply an empty (transparent) 1920x1080 png.
I pass my .txt file this way:
$ sh my_script.sh my_file.txt
And it generates my slides in slides/.
Now I'd like to use some formatting markup in my slides, like
MY <b>CAPTION</b> 1
MY <i>CAPTION</i> 2
...
MY CAPTION N
But I can't understand how to use pango in my previous code. I need to reposition my caption line centered, 300 pixels from the bottom.
If I use:
convert -background none -font Trebuchet-MS -fill white -pointsize 60 -gravity center -draw "text 0,300 '$p'" slide_template.png slides/slide-$i.png
I get:
If I use this line:
convert -background none -font Trebuchet-MS -fill white -pointsize 60 -gravity center pango:"$p" slide_template.png slides/slide-$i.png
I get TWO files (why?), where the first one is correctly parsed but cropped to the text size:
And the second one is the background. The filenames come out as slide-0-0.png and slide-0-1.png.
Solved: I need to pipe one convert command into another.
The first renders the formatted caption; the second overlays the piped image onto the background.
#!/bin/bash
i=0
# loop through files
while IFS= read -r p; do
# if it's not an empty line
if [ -n "$p" ]; then
convert -background none -font Trebuchet-MS -fill white -pointsize 60 -gravity center -size 1920x300 pango:"$p" png:- | convert slide_template.png png:- -geometry +0+800 -composite slides/slide-$i.png
i=$((i+1))
fi;
# pass input
done < "$1"
I want a combined command that can perform the following tasks in a single execution. I searched the internet, but could hardly find any tutorial that explains how to write such a stacked command. I can find a single command for each operation, such as -composite, -blur, etc., and I know I can pipe commands like this:
convert ... miff:- | convert - ... miff:- | ... | convert - ... png:-
However, I want a single combined command that uses \( ... \) and mpr:{label}, since this will improve performance by running all operations in one process (a shell pipeline can degrade performance, and the processing sequence needs to stay in order).
The process sequence is as follows:
put flower.png on top of the frame.png -> mpr:framedFlower
put mpr:framedFlower on top of the background.png -> mpr:out2
blur the heart.png, fade the smiley.png to transparent towards the right, and put both images on top of mpr:out2 -> mpr:out3
annotate the mpr:out3 with "Hello world" (placement=bottom) -> png:-
I am not including the commands I have tried because they are too messy and would be an insult to anyone who reads them. I tried for many hours but can't get it done. Please advise.
I haven't spent ages futzing with the exact coordinates as I am only using sample pictures, but this one-liner contains every technique you need to do what you are asking.
There is basically one line of code per element in the final image:
convert frame.png -resize 500x400\! \( flower.png -resize 400x300\! \) -gravity center -composite \
background.png +swap -gravity northwest -geometry +100+150 -composite \
\( heart.png -resize 200x200 -blur 0x8 \) -geometry +1200+250 -composite \
-gravity south -pointsize 72 -fill red -annotate +0+60 'Hello world' \
\( emoji.png -resize 250x250 -channel a -fx "u.a*(1-(i/w))" \) -gravity northwest -geometry +1200+500 -composite result.png
The first line reads in the picture frame and the flower and resizes them each independently because of the parentheses and then composites the flower into the frame.
The next line loads the background and then uses +swap to put it behind the framed picture from the previous line. It then sets -gravity to northwest as the origin for the ensuing -geometry before compositing the framed picture onto the background.
The next line loads the heart and resizes and blurs just the heart before compositing that onto the main picture at your specified position.
Next up is the annotation - the only interesting thing is that I set -gravity to south, which means that the offsets to -annotate are relative to the bottom centre of the background.
Finally, I load the emoji-thing and resize just it in parentheses before compositing over the main image. The only interesting thing is that I use -fx to alter the alpha channel (-channel a) and multiply the existing alpha (u.a) by one minus the fractional distance across the image, namely (1-(i/w)).
Hope that is fairly clear!
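Incidentally, if you particularly want the intermediate stages held in mpr: registers as described in the question, the same steps can be chained inside one convert with -write mpr:label and +delete. Here is a rough, untested sketch reusing the sample sizes and offsets from the command above, with emoji.png standing in for the smiley:
convert frame.png -resize 500x400\! \( flower.png -resize 400x300\! \) \
    -gravity center -composite -write mpr:framedFlower +delete \
    background.png mpr:framedFlower -gravity northwest -geometry +100+150 -composite \
    -write mpr:out2 +delete \
    mpr:out2 \( heart.png -resize 200x200 -blur 0x8 \) -geometry +1200+250 -composite \
    \( emoji.png -resize 250x250 -channel a -fx "u.a*(1-(i/w))" \) -geometry +1200+500 -composite \
    -write mpr:out3 +delete \
    mpr:out3 -gravity south -pointsize 72 -fill red -annotate +0+60 'Hello world' png:-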
I have written the following script which uses the ImageMagick* convert utility to append axis labels to an existing image.
LEFT_="l -30,0 +2,+2 -6,-2 +6,-2 -2,+2 z"
RIGHT_="l 30,0 -2,+2 +6,-2 -6,-2 +2,+2 z"
convert -size 240x160 pattern:SMALLFISHSCALES \
-pointsize 16 -fill black -background white \
-gravity SouthEast -splice 0x20 \
-draw "translate 40,0 text 0,0 'Time' stroke red path 'm 5,2 $RIGHT_'" \
-gravity NorthWest -splice 20x0 \
-draw "rotate +90 translate 40,-10 text 0,0 'Value' path 'm -5,2 $LEFT_'" \
example.png
Which produces the following image:
This is almost exactly what I am after, except that the red arrow is out of place. I expected the red arrow to appear next to the Time label, since its start point is specified as a relative position in the same draw command. Unfortunately, it looks like the -gravity option is affecting the text primitive, but not the path primitive.
Is there a way to reference the SouthEast corner, or the Time text label when specifying the start position of the red arrow? I can't use absolute coordinates, because the size of the image varies.
*ImageMagick 6.7.8-9 on CentOS 7
Updated Answer
Maybe you can make Unicode text arrows like this, then they will be affected by gravity...
perl -e 'binmode(STDOUT,":utf8"); print "Time ... \x{2192}\x{2191}";'|
convert -font TimesNewRoman -pointsize 36 label:@- arrows.png
Depending on your OS, the following may do as a replacement for the Perl above...
printf "%b" "\u2192" | convert ...
Original Answer
I am not at all familiar with paths, but I can suggest a way to achieve what you want that doesn't use gravity at all, and maybe that will help.
Rather than use -splice, you can clone your original image and crop it to the size you planned to splice on, and then -append the strips that label the axes. It is easier to show you the command than explain it!
convert -size 240x160 pattern:SMALLFISHSCALES \
\( +clone -crop x20+0+0 -fill blue -colorize 100% \) \
-append \
\( +clone -crop 20x+0+0 -fill red -colorize 100% \) \
+swap +append result.png
I have filled the x-axis blue, but remove that and add whatever labelling and arrows you need, and I filled the y-axis red, but likewise remove that and add labelling and arrows - rotating as necessary.
Two tricky things to note...
-append will append the second image below the first
+append will append the second image to the right of the first, so I +swap beforehand to put it on the left side.