How to make ImageMagick do this kind of halftone (see image and description)?

I've been looking at the halftone capabilities of ImageMagick. I've tried many combinations of -ordered-dither without getting near the result I want. I've tried Photoshop, but that is not an option because of the Linux platform, licences and so on.
In Photoshop I get the result I want by converting the image to Grayscale and then going through Image > Mode > Bitmap, with Method set to Halftone Screen and these parameters: Frequency: 22 lpi, Angle: 55 degrees, Shape: Round.
What would the equivalent convert command be for such an outcome?
The original image
Photoshop halftone made via Image > Mode > Bitmap, then choosing Halftone Screen with these parameters: Frequency: 22 lpi, Angle: 55 degrees, Shape: Round
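For reference, a rough sketch of how close ImageMagick's built-in threshold maps can get (list them with convert -list threshold): h8x8a is a 45-degree halftone screen and c7x7b uses round dots, though neither lets you dial in 22 lpi at 55 degrees exactly:

convert original.png -colorspace Gray -ordered-dither c7x7b result.png

The screen frequency is tied to the map size and the image resolution, so resizing the image up before dithering gives a coarser screen.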

Related

Eliminate hairlines from a vector graphics by converting to oversampled bitmap and then downscaling - How with ImageMagick?

I used Apple Numbers (a Spreadsheet app with styling options) to create a UX flowchart of various user interfaces of an app.
Apple Numbers has a PDF export option.
The problem is that even though some border lines in the table have been set to "none", the export nevertheless shows small visible hairlines; see this cutout:
I want to eliminate the hairlines by image processing before creating a flyover video over the graphics.
My basic idea is:
Convert vector to bitmap with very high resolution (oversampling, e.g. to 600 or 1200 DPI)
Then downsample to the target resolution (e.g. 150 DPI) with an algorithm that eliminates the hairlines (letting them disappear into the dominance of neighboring pixels) while overall remaining as crisp and sharp as possible.
So step 1 I have already figured out, with these two possibilities:
a. Apple Preview has a PDF to PNG export option where you can specify the DPI.
b. ImageMagick convert -density 600 source.pdf export.png
But for step 2 there are so many possibilities:
-resample <DPI>, or -filter <FilterName> -resize 25%, or -scale 12.5% (when going from 1200 to 150)
Please tell me which of these methods (-resample, -resize, -scale) and which of the interpolation algorithms or filters I should use to achieve my goal: eliminating the hairlines by dissolving them into their neighboring pixels, while the rest (normal 1px lines, rendered text and symbols, etc.) remains as crisp as possible.
ImageMagick PDF to PNG conversion with different DPI settings:
convert -density XXX flowchart.pdf flowchart-ImageMagick-XXX.png
flowchart-ImageMagick-150.png ; flowchart-ImageMagick-300.png ; flowchart-ImageMagick-600.png
Apple Preview PDF to PNG export with different DPI settings:
flowchart-ApplePreview-150.png ; flowchart-ApplePreview-300.png ; flowchart-ApplePreview-600.png
Different downscaling approaches
a) convert -median 3x3 -resize 50% flowchart-ApplePreview-300.png flowchart-150-from-ApplePreview-300-median-3x3.png (thanks to the hint from @ChristophRackwitz)
b) convert -filter Box -resize 25% flowchart-ImageMagick-600.png flowchart-150-from-ImageMagick-600-resize-box.png
Comparison
flowchart-ApplePreview-150.png
flowchart-150-from-ApplePreview-300-median-3x3.png
✅ Hairlines gone
❌ But the font is not as crisp anymore; the median filter destroyed that.
flowchart-150-from-ImageMagick-600-resize-box.png
🆗 Overall still quite crisp
🆗 Hairlines very, very faint; only faint even when zoomed in
Both variants are somehow good enough for my Ken Burns / dolly cam ride over them. Still, I wish there were an algorithm that keeps crispness but still eliminates 1px lines in very-high-DPI bitmaps. But I guess that jack of all trades exists only in my fantasy.
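One idea that might come closer (an untested sketch; the output name is a placeholder): hairlines are thin dark features, so a grayscale morphological close at the oversampled resolution should erase 1px-wide lines while leaving the much thicker 600 DPI text strokes largely intact, before the crisp box downscale:

convert flowchart-ImageMagick-600.png -morphology Close Square:1 -filter Box -resize 25% flowchart-150-closed.png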
Processing Durations
MacBook Pro 15'' (Mid 2014, 2.5 GHz Quad-Core Intel Core i7)
ImageMagick PDF to PNG
PDF source ca. 84x60 cm (33x23'')
300dpi -> 27s
600dpi -> 1m58s
1200dpi -> 37m34s
ImageMagick Downscaling
time convert -filter Box -resize 25% 1#600.png 1#150-from-600.png
# PNG # 39700 × 28066: 135.57s user 396.99s system 109% cpu 8:08.08 total
time convert -median 3x3 -resize 50% 2#300.png 2#150-from-300-median3x3.png
# PNG # 19850 × 14033: 311.48s user 9.42s system 536% cpu 59.76 total
time convert -median 3x3 -resize 50% 3#300.png 3#150-from-300-median3x3.png
# PNG # 19850 × 14033: 237.13s user 8.33s system 544% cpu 45.05 total

Image Processing: Determining a trapezoid from a list of points

The problem is fairly simple: I have the following image.
My list of points is the white pixels; I have them stored in a texture. What would be the best and most efficient method to determine the trapezoid they define? (It is a convex shape with 4 corners, not necessarily with 90-degree angles.)
The texture is fairly small (800x600), so going for CUDA/CL is definitely not worth it (I'd rather iterate over the pixels if possible).
You should be able to do what you want, i.e. detect lines from incomplete information, using the Hough Transform.
There is a cool demo of it in the examples accompanying CImg, which itself is a rather nice, simple, header-only C++ image processing library. I have made a video of it here, showing how the accumulator space on the right is updated as I move the mouse first along a horizontal bar of the cage and then down a vertical bar. You can see the votes being cast in the accumulator, and how the corresponding point gradually builds up to a peak of bright white:
You can also experiment with ImageMagick on the command-line without needing to write or compile any code, see example here. ImageMagick is installed on most Linux distros and is available for macOS and Windows.
So, using your image:
magick trapezoid.png -background black -fill red -hough-lines 9x9+10 result.png
Or, if you want the underlying information that identifies the 4 lines:
magick trapezoid.png -threshold 50% -hough-lines 9x9+10 mvg:
# Hough line transform: 9x9+10
viewbox 0 0 784 561
# x1,y1 x2,y2 # count angle distance
line 208.393,0 78.8759,561 # 14 13 312
line 0,101.078 784,267.722 # 28 102 460
line 0,355.907 784,551.38 # 14 104 722
line 680.493,0 550.976,561 # 12 13 772
If you look at the numbers immediately following the hash (#), i.e. 14, 28, 14, 12, they are the votes, which correspond to the number of points/dots in your original image along that line. That is why I set the threshold to 10 in the 9x9+10 part, rather than the 40 used in the ImageMagick example I linked to: you have relatively few points on each line, so you need a lower threshold.
Note that the Hough Transform is also available in other packages, such as OpenCV.
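For example, a minimal Python sketch of the OpenCV route (assuming the white points are stored in trapezoid.png; the low threshold of 10 mirrors the 9x9+10 above):

import cv2
import numpy as np

# load the dotted image as a single-channel image; white pixels cast the votes
img = cv2.imread("trapezoid.png", cv2.IMREAD_GRAYSCALE)

# rho resolution of 1 pixel, theta resolution of 1 degree, 10 votes minimum
lines = cv2.HoughLines(img, 1, np.pi / 180, 10)

# each line comes back as (rho, theta) in Hesse normal form
for rho, theta in lines[:4, 0]:
    print(rho, theta)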

Stitching images using GraphicsMagick with Blending

I have to stitch a number of tiles using GraphicsMagick to create one single image. I am currently using gm convert with -mosaic and some overlap to stitch the tiles, but the stitched image has borders where the overlap occurs.
Following is the command I am using:
gm convert -background transparent
-page "+0+0" "E:/Images/Scan 001_TileScan_001_s00_ch00.tif"
-page "+0+948" "E:/Images/Scan 001_TileScan_001_s01_ch00.tif"
-page "+0+1896" "E:/Images/Scan 001_TileScan_001_s02_ch00.tif"
-page "+0+2844" "E:/Images/Scan 001_TileScan_001_s03_ch00.tif"
-mosaic "E:/Output/temp/0.png"
The final image looks like this:
How can I stitch and blend without a border?
I've been part of several projects to make seamless image mosaics. There are a couple of other factors you might like to consider:
Flatfielding. Take a shot of a piece of white card with your lens and lighting setup, then use that to flatten out the image lightness. I don't know if GM has a thing to do this; @fmw42 would know. A flatfield image is specific to a lighting setup, lens aperture setting, focus setting and zoom setting, so you need to lock focus/aperture/zoom after taking one. You'll need to do this correction in linear light (see the sketch after this list).
Lens distortion. Some lenses, especially wide-angle ones, will introduce significant geometric distortion. Take a shot of a piece of graph paper and check that the lines are all parallel. It's possible to use a graph-paper shot to automatically generate a lens model you can use to remove geometric errors, but simply choosing a lens with low distortion is easier.
Scatter. Are you moving the object or the camera? Is the lighting moving too? You can have problems with scatter if you shift the object: bright parts of the object will scatter light into dark areas when they move under a light. You need to model and remove this or you'll see seams in darker areas.
Rotation. You can get small amounts of rotation, depending on how your translation stage works and how carefully you've set the camera up. You can also get the focus changing across the field. You might find you need to correct for this too.
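To illustrate the flatfielding idea from the first point above, a hypothetical Python sketch (the filenames and the gamma of 2.2 are assumptions, not anything GM provides):

import cv2
import numpy as np

# tile.tif is one captured tile, flat.tif the white-card shot (placeholders)
tile = cv2.imread("tile.tif", cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0
flat = cv2.imread("flat.tif", cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0

# decode to linear light, divide out the lighting/lens falloff,
# then rescale so the overall brightness is preserved
tile_lin = tile ** 2.2
flat_lin = flat ** 2.2
corrected = tile_lin / np.maximum(flat_lin, 1e-6) * flat_lin.mean()

# re-encode to gamma space and save
out = (np.clip(corrected, 0.0, 1.0) ** (1 / 2.2) * 255).astype(np.uint8)
cv2.imwrite("tile_flat.tif", out)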
libvips has a package of functions for making seamless image mosaics, including all of the above features. I made an example for you with these source images (near-IR images of painting underdrawing):
Entering:
$ vips mosaic cd1.1.jpg cd1.2.jpg join.jpg horizontal 531 0 100 0
Makes a horizontal join to the file join.jpg. The numbers give a guessed overlap of 100 pixels -- the mosaic program will do a search and find the exact position for you. It then does a feathered join using a raised cosine to make:
Although the images have been flatfielded, you can see a join. This is because the camera sensitivity has changed as the object has moved. The libvips globalbalance operation will automatically take the mosaic apart, calculate a set of weightings for each frame that minimise average join error, and reassemble it.
For this pair I get:
nip2, the libvips GUI, offers all of this through a graphical interface. There's a chapter in the manual (press F1 to view) about assembling large image mosaics:
https://github.com/jcupitt/nip2/releases
Global balance won't work from the CLI, unfortunately, but it will work from any of the libvips language bindings (C#, Python, Ruby, JavaScript, C, C++, Go, Rust, PHP etc. etc.). For example, in pyvips you can write:
import pyvips

left = pyvips.Image.new_from_file("cd1.1.jpg")
right = pyvips.Image.new_from_file("cd1.2.jpg")

# horizontal join with guessed tie points; mosaic() searches around the guess
join = left.mosaic(right, "horizontal", 531, 0, 100, 0)

# reweight each frame to minimise the average join error, then save
balance = join.globalbalance()
balance.write_to_file("x.jpg")
Here is an example using ImageMagick. Since the colors differ, a ramped blend will only mitigate the sharp edge: the closer the colors are, and the more gradual the blend (i.e. the larger the area it covers), the less the seam will show.
1) Create red and blue images
convert -size 500x500 xc:red top.png
convert -size 500x500 xc:blue btm.png
2) Create a mask that is solid white for most of the image, with a gradient where you want the overlap. Here I use a 100-pixel gradient for a 100-pixel overlap:
convert -size 500x100 gradient: -size 500x400 xc:black -append -negate mask_btm.png
convert mask_btm.png -flip mask_top.png
3) Put masks into the alpha channels of each image
convert top.png mask_top.png -alpha off -compose copy_opacity -composite top2.png
convert btm.png mask_btm.png -alpha off -compose copy_opacity -composite btm2.png
4) Mosaic the two images one above the other with an overlap of 100
convert -page +0+0 top2.png -page +0+400 btm2.png -background none -mosaic result.png
See also my tidbit about shaping the gradient at http://www.fmwconcepts.com/imagemagick/tidbits/image.php#composite1. But I would use a linear gradient for this kind of work (as shown here), because overlapping linear gradients sum to a constant white, so the result will be fully opaque where they overlap.
One other thing to consider is matching the colors of the images to some common color map. This can be done by a number of methods, for example histogram matching or mean/std (brightness/contrast) matching. See my scripts histmatch, matchimage and redist at http://www.fmwconcepts.com/imagemagick/index.php, and ImageMagick -remap at https://www.imagemagick.org/Usage/quantize/#remap
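For instance, a one-line sketch using -remap, mapping the second tile's colors onto the first tile's set of colors (filenames are placeholders):

convert tile2.png -remap tile1.png tile2_matched.png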

How to blend 80x60 thermal and 640x480 RGB image?

How do I blend two images - a thermal one (80x60) and an RGB one (640x480) - efficiently?
If I simply scale the thermal image up to 640x480 it doesn't scale up evenly, and it doesn't have enough quality to do any processing on it. Any ideas would be really helpful.
RGB image - http://postimg.org/image/66f9hnaj1/
Thermal image - http://postimg.org/image/6g1oxbm5n/
If you scale the resolution of the thermal image up by a factor of 8 and use Bilinear Interpolation you should get a smoother, less-blocky result.
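In ImageMagick terms, that could look like this (Triangle is ImageMagick's bilinear filter; the output name is a placeholder):

convert thermal.png -filter Triangle -resize 800% thermal_big.png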
When combining satellite images of different resolutions (I talk about satellite imagery because that is my speciality), you would normally use the highest-resolution imagery as the Lightness, or L, channel, to give apparent resolution and detail in the shapes, because the human eye is good at detecting contrast. You would then use the lower-resolution imagery to fill in the Hue and Saturation, or a and b, channels, to give the colour graduations you are hoping to see.
So, in concrete terms, I would consider converting the RGB to Lab or HSL colourspace and retaining the L channel. Then take the thermal image, up-res it by 8 using bilinear interpolation, and use the result as the a or b (or H or S) channel, maybe filling in the remaining channel with the one from the RGB that has the most variance. Then convert the result back to RGB for a false-colour image. It is hard to tell without seeing the images or knowing what you are hoping to find in them, but in general terms that would be my approach. HTH.
Note: Given that a of Lab colourspace controls the red/green relationship, I would probably try putting the thermal data in that channel so it tends to show more red the "hotter" the thermal channel is.
Updated Answer
Ok, now I can see your images, and you have a couple more problems... firstly, the images are not aligned, or registered, with each other, which is not going to help - try using a tripod ;-) Secondly, your RGB image is very poorly exposed, so it is not really going to contribute much detail - especially in the shadows - to the combined image.
So, firstly, I used ImageMagick at the commandline to up-size the thermal image like this:
convert thermal.png -resize 640x480 thermal.png
Then I used Photoshop to do a crude alignment/registration. If you want to try this, the easiest way is to put the two images into separate layers of the same document and set the blending mode of the upper layer to Difference. Then use the Move Tool (shortcut V) to move the upper image around until the screen goes black, which means the details are on top of each other and subtract to zero, i.e. black. Then crop so the images are aligned, turn off one layer and save, then turn that layer back on, turn the other off, and save again.
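The same difference check can also be done without Photoshop, e.g. with ImageMagick (filenames are placeholders; the output is black wherever the two images coincide):

convert imageA.png imageB.png -compose difference -composite diff.png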
Now, I used ImageMagick again to separate the two images into Lab layers:
convert bigthermalaligned.png -colorspace Lab -separate thermal.png
convert rgbaligned.png -colorspace Lab -separate rgb.png
which gives me
thermal-0.png => L channel
thermal-1.png => a channel
thermal-2.png => b channel
rgb-0.png => L channel
rgb-1.png => a channel
rgb-2.png => b channel
Now I can take the L channel of the RGB image and the a and b channels of the thermal image and put them together:
convert rgb-0.png thermal-1.png thermal-2.png -normalize -set colorspace Lab -combine result.png
And you get this monstrosity! Obviously you can play around with the channels and colourspaces, and with a tripod and proper exposures, but you should be able to see that some details of the RGB image - especially the curtains on the left, the lights, the camera on the cellphone and the label on the water bottle - have come through into the final image.
Assuming that the images were not captured using a single camera, you need to note that the two cameras may have different parameters. Also, with two cameras, they are probably not located at the same world position, i.e. there is an offset between them.
In order to resolve this, you need to get the intrinsic calibration matrix of each of the cameras, and find the offset between them.
Then, you can find a transformation between a pixel in one camera and the other. Unfortunately, if you don't have any depth information about the scene, the most you can do with the calibration matrix is get a ray direction from the camera position to the world.
The easy approach would be to ignore the offset (assuming the scene is not too close to the camera), and just transform the pixel.
p2=K2*(K1^-1 * p1)
Using this you can construct a new image that is a composite of both.
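As a Python sketch of that easy approach (the intrinsic matrices below are placeholder values, not real calibrations):

import numpy as np

# p2 = K2 * (K1^-1 * p1), ignoring the offset between the two cameras
K1 = np.array([[520.0,   0.0, 320.0],
               [  0.0, 520.0, 240.0],
               [  0.0,   0.0,   1.0]])  # RGB camera intrinsics (placeholder)
K2 = np.array([[ 90.0,   0.0,  40.0],
               [  0.0,  90.0,  30.0],
               [  0.0,   0.0,   1.0]])  # thermal camera intrinsics (placeholder)

p1 = np.array([100.0, 200.0, 1.0])      # homogeneous pixel in the RGB image
p2 = K2 @ np.linalg.inv(K1) @ p1
p2 /= p2[2]                             # normalise the homogeneous coordinate
print(p2[:2])                           # corresponding pixel in the thermal image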
The more difficult approach would be to reconstruct the 3D structure of the scene by finding features that you can match between both images, and then triangulate the point with both rays.

Change the gray levels of pixels of an image

In particular, I have 2 vectors filled with integers between 0 and 255, and a grayscale image.
I want to change the gray level of every pixel in the image whose value matches vec1[i] to vec2[i].
Do you know any function or fast procedure that can perform this in OpenCV?
I couldn't find a built-in function in OpenCV that returns all pixels with a specified gray level.
Best
Ali
That is known as a lookup-table transform, and it exists in OpenCV (link to documentation). You will have to adapt your input format a bit, though.
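As a minimal Python sketch (the values in vec1 and vec2 are placeholders):

import cv2
import numpy as np

gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
vec1 = np.array([10, 20, 30])                  # gray levels to replace (placeholder)
vec2 = np.array([50, 60, 70], dtype=np.uint8)  # replacement levels (placeholder)

lut = np.arange(256, dtype=np.uint8)  # identity mapping for untouched levels
lut[vec1] = vec2                      # redirect the chosen gray levels
result = cv2.LUT(gray, lut)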
