Using ImageMagick I need to splice half of two images into a new image

I am trying to figure out the best way to do this, since I need to create a script that will complete this task for about 3000 image files. So, I have two sets of images and I want to create two more sets. The objective is to take the left vertical half of image A and combine it with the right half of image B, creating set AB. I will also need to do the opposite and create set BA, which would be the fourth set. I need to do this in a way that continues the naming convention for the files, which have names like name.001.jpg. Any help will be much appreciated.

Here is an example of how to do that in ImageMagick Unix syntax for two input images, logo.jpg (image 1) and logo_b5.jpg (image 2):
convert logo.jpg logo_b5.jpg -crop 50x100% \( -clone 0,3 +append +write logo_A.jpg \) \( -clone 2,1 +append +write logo_B.jpg \) null:
This writes two outputs: logo_A.jpg (left half of image 1 joined to the right half of image 2) and logo_B.jpg (left half of image 2 joined to the right half of image 1).
Please explain where your two sets of data are stored and how they are named. For multiple pairs of images, you will need to write a loop assuming you have each set with the same name in two different folders. But you need to say what OS you are using since Windows bat scripting is different from Unix shell scripting.
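For example, here is a minimal Unix shell sketch, assuming the two sets live in folders A and B with matching filenames like name.001.jpg; the folder names A, B, AB and BA are placeholders, not anything you specified:

#!/bin/bash
# Minimal sketch: folder names A, B, AB, BA are assumptions.
# Each pass splices one matching pair and keeps the original filename.
mkdir -p AB BA
for a in A/*.jpg; do
    name=$(basename "$a")
    convert "$a" "B/$name" -crop 50x100% \
        \( -clone 0,3 +append +write "AB/$name" \) \
        \( -clone 2,1 +append +write "BA/$name" \) null:
done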

Related

Apply imagemagick transformation on only part of an image, whilst keeping the rest "stock"?

I have many documents per day that are photographed and I need to organise them by QR code. The problem is that zbarimg is struggling to pick up many of the QR codes in the photos, so I have been trialling processing them with ImageMagick first, using morphology open, thresholding, etc., which has yielded much better results.
The only issue is that these operations apply to the whole image, which makes the rest of the file unusable for me, as I deal with the rest of the image based on colours and information which all get destroyed in the processing. Could anybody give me an example of how I could apply my ImageMagick filters to only a part of an image (coordinate-based is fine) and leave the rest of the image untouched, so I can continue my process? I will be applying this to all images in a folder, so in most instances it's a batch file running this for me.
I have tried using crops; however, this obviously leaves me with only the cropped portion of the image, which doesn't actually help when trying to process the rest of the file.
I'm running my scripts on Windows 11, if that means anything in terms of the solution.
Many thanks!
Tom
EDIT:
Thank you all for the advice given!
I solved my problem using the following:
convert a.jpg ( -clone 0 -fill white -colorize 100 -fill black -draw "polygon 500,300 500,1500 1300,1500 1300,300" -alpha off -write mpr:mask +delete ) -mask mpr:mask +repage -threshold 50% -morphology open square:4 +mask c.jpg
I did post this as an answer, but (and I have no idea why; I'm brand new to Stack Exchange) my answer was deleted. I used the clone to make the mask with the coordinates needed, then added the threshold and morphology that would make my QR codes more legible!
Thanks again everyone, really helped me out on my journey to figure it out :D
You can use -region to specify a region to process. Starting from an input image (swirl.jpg here), you can colorise one region blue, then move the region so a blur covers part of the blue area and part of the original:
magick swirl.jpg -region 100x100+50+50 -fill blue -colorize 100% -region 100x100+100+100 -blur x20 result.png
The solution using -region may be the most direct. In ImageMagick versions where -region is not supported the same result can usually be achieved by cropping and modifying a clone inside parentheses.
magick swirl.jpg ( +clone -crop 100x100+50+50 -fill blue -colorize 50 ) -flatten result.png
The cloned, cropped, and modified piece maintains its original geometry, so the -flatten operation puts it back where it was on the input image after the parentheses.
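Since you mentioned running this over every image in a folder, here is a minimal Unix shell sketch wrapping the masked command from your edit above in a loop; the in and out folder names are assumptions, and on Windows 11 you would express the same loop in a .bat file with for %%f in (in\*.jpg) do ... and unescaped parentheses:

# Minimal sketch: "in" and "out" folder names are assumptions.
mkdir -p out
for f in in/*.jpg; do
    convert "$f" \( -clone 0 -fill white -colorize 100 -fill black \
        -draw "polygon 500,300 500,1500 1300,1500 1300,300" -alpha off \
        -write mpr:mask +delete \) -mask mpr:mask +repage \
        -threshold 50% -morphology open square:4 +mask "out/$(basename "$f")"
done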

Missing layer when combining images into PSD using ImageMagick

I'm trying to combine 3 images using ImageMagick 7 (latest version); however, the first layer is always missing.
convert "image_03.png" "image_02.png" "image_01.png" -background none -alpha set "product.psd"
However, I only get two layers. Why?
Attached are the images below ...
A PSD file expects its first layer to be a flattened composite of all the other layers; Photoshop assumes the first layer is that flattened image. When writing a PSD file from ImageMagick, you must create the flattened layer yourself and supply it as the first image in the command sequence, ahead of the individual layers. So I create it last from clones and then insert it at the first position (0).
Try the following.
Unix syntax:
convert "image_03.png" "image_02.png" "image_01.png" \( -clone 0-2 -flatten \) -insert 0 -background none -alpha set "product.psd"
If on Windows, remove the backslashes from in front of the parentheses.
Reorder the images as desired for the layers in the PSD file.
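For reference, applying that change gives the same command in Windows syntax:

convert "image_03.png" "image_02.png" "image_01.png" ( -clone 0-2 -flatten ) -insert 0 -background none -alpha set "product.psd"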

Compose multiple regions of an image into a target

I'm trying to use ImageMagick to compose different pieces of a rendered PDF into a target. E.g., I want to have ImageMagick render the PDF at 300dpi, then create a 300x400 pixel output image, then take from the PDF the area 10x20+30+40 and place it in the target (300x400 pixel image) at 12,34. Then take another (and a third and fourth) chunk at different coordinates with different sizes and place them at different places.
I cannot seem to figure out how to do this in one go, and doing it in multiple runs always re-renders the PDF and takes awfully long. Is this even possible?
Here's an idea of how you can approach this. It uses the MPR or "Memory Program Register" that Fred suggested in the comments. It is basically a named chunk of memory that I write into at the start and which I recall later when I need it.
Here is a rather wonderful start image from the Prokudin-Gorskii collection:
The code resizes the image and saves a copy in the MPR. Then, takes a copy of the MPR, crops out a head, resizes it and composites the resized result onto the resized original at a different location and then repeats the process for another head.
magick Prokudin.png -resize 300x400\! -write MPR:orig \
\( MPR:orig -crop 50x50+180+84 -resize 140x140 \) -geometry +10+240 -compose src-over -composite \
\( MPR:orig -crop 40x40+154+184 \) -geometry +40+100 -compose src-over -composite \
result.png
If you have trouble understanding it, try running it with the second or third line omitted so it just does one head ;-)
Hopefully it covers all the aspects of your question and you can adapt it to your PDF.
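As a hedged sketch of how that adaptation might look for the PDF case (the file name, page index, sizes and offsets below are placeholders, not tested values): render the page once at 300dpi, park it in an MPR, then build the target canvas and composite each cropped chunk in turn.

# Render page 0 of the PDF once at 300dpi and stash it in an MPR,
# then compose two placeholder chunks onto a 300x400 white canvas.
magick -density 300 input.pdf[0] -write MPR:page +delete \
    -size 300x400 xc:white \
    \( MPR:page -crop 10x20+30+40 +repage \) -geometry +12+34 -composite \
    \( MPR:page -crop 25x25+100+200 +repage \) -geometry +150+300 -composite \
    result.png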

Why does -composite ignore the first image when called incorrectly?

I've been trying to understand ImageMagick's image stack. Here's my understanding:
Whenever you reference an image on the command line, you add an image to the stack. Certain operators can pop multiple images off the stack and push a result image, such as +append or -composite. For example:
convert a.png b.png -composite output.png
This composites b on top of a, as expected.
But when I run this (I know it doesn't make sense, but I'm trying to understand the behavior):
convert a.png -composite b.png output.png
I get a picture consisting of just b.png. Why is that? Where did the first image go? Wouldn't you expect this to error since composite doesn't have two images to work with?
In addition, if I run this,
convert -composite a.png b.png output.png
I get the same result as if I ran a.png b.png -composite. Why is this? Wouldn't you expect this to also error?
This confuses me because I expect malformed inputs to throw errors rather than producing unexpected output. How do I avoid issues like these when working with ImageMagick?
I think it is related to the desire of the ImageMagick team to simplify and rationalise the parameter order but still accommodate folks who use the original, old parameter order.
Your first example is, IMHO, the new and preferred way of doing things, load first image, then second image, then do something with the two.
convert a.png b.png -composite output.png
Your second and third examples work because ImageMagick tries to accommodate an illogical, or maybe old-fashioned, way of using it. If an operator such as -composite appears when there are not yet enough images, ImageMagick effectively remembers it and applies it once enough images have been loaded.
I wrote another answer here that is very related and may help clarify a bit more.
There is a good explanation here.

How to get the result of an ImageMagick convert command as bitmap data

I am working on a project that will make a jigsaw puzzle from an image and present it to the user as separate pieces in a browser. I have done all the prototyping in Python. At the moment I can produce separate images for each puzzle piece.
As a last step I want to make a nice bevel on the pieces to make them look realistic. I found an ImageMagick convert command that does that just fine:
convert piece.png -alpha extract -blur 0x2 -shade 120x30 piece.png -compose Overlay -composite piece.png -alpha on -compose Dst_In -composite result.png
I execute the command by using os.system, but this is taking way too long to complete.
Can you give me advice on the fastest way to execute the ImageMagick processing? I think that would involve calling the ImageMagick libraries directly, sending them the input bitmap data and receiving the result as bitmap data too. Then I can stream the result to the user. The solution does not have to be Python.
Update
I have just been looking at your command again. I assumed it was sensible, as you implied you got it from Anthony Thyssen's excellent ImageMagick Usage pages; however, I see you are reading the image piece.png three times, which it must be possible to avoid by using -clone or -write MPR:save. Let me experiment some more. I haven't got your jigsaw piece to test with, so I am in the dark here, but you should be able to change your command to something like this:
convert piece.png -write mpr:piece \
\( +clone -alpha extract -blur 0x2 -shade 120x30 \) \
-compose Overlay -composite \
mpr:piece -alpha on -compose Dst_In -composite result.png
MPR is a Memory Program Register, or basically a named lump of RAM that ImageMagick can read and write to. There are details and examples here.
Original Answer
Three things spring to mind; which one, or which combination of them, will help depends on the specification of your CPU, memory and disks, as well as the sizes of your pieces, none of which I know or can test.
Firstly, if you used the libraries, you would avoid the overhead of creating a new process to run convert, so that should help; but if your pieces are large and the bottleneck is actually the processing, using the libraries will make little difference.
Secondly, if your images are large, the time to read them in off disk and write them back to disk may be what is killing your performance. To test this, I would create a small RAMdisk and store the images on there and see if that helps. It is a quick and relatively easy test.
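On Linux, for example, a quick tmpfs RAMdisk test might look like this (a minimal sketch; the mount point and size are arbitrary, and it needs root):

# Create a 512MB RAM-backed filesystem and copy the pieces onto it.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
cp piece*.png /mnt/ramdisk/
# ...then re-run the convert command against /mnt/ramdisk and compare with "time"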
Thirdly, I assume you are generating many pieces and currently process them one after another, sequentially. If so, I would definitely recommend going multi-threaded. Either do this in your code with your language's threading environment, or try out GNU Parallel, which has always been brilliant for me. So, if you were going to do
convert piece1.png -alpha extract ... -composite result1.png
convert piece2.png -alpha extract ... -composite result2.png
convert piece3.png -alpha extract ... -composite result3.png
...
convert piece1000.png -alpha extract ... -composite result1000.png
either simply send all those commands to GNU Parallel on its stdin and it will execute them all in parallel on as many cores as your CPU has, like this
(
echo convert piece1.png ... -composite result1.png
echo convert piece2.png ... -composite result2.png
echo convert piece3.png ... -composite result3.png
) | parallel
or build the command like this
parallel convert {} -alpha ..... result-{} ::: piece*.png
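Putting the two ideas together, a hedged one-liner that feeds the MPR version of your command through GNU Parallel might look like this, where {} is each input file and {/} is its basename; the result-{/} naming is just an assumption:

parallel 'convert {} -write mpr:piece \( +clone -alpha extract -blur 0x2 -shade 120x30 \) -compose Overlay -composite mpr:piece -alpha on -compose Dst_In -composite result-{/}' ::: piece*.png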
