I want to display a DEM file (.raw) using Python, but there may be something wrong with the result.
Below is my code:
import numpy as np
import cv2

rows = 4096
cols = 4096

# Read the raw file as 8-bit unsigned values
img1 = open('DEM.raw', 'rb')
f1 = np.fromfile(img1, dtype=np.uint8, count=rows * cols)
image1 = f1.reshape((rows, cols))  # notice (row, column) format
img1.close()

# Shrink to a quarter of the size for display
image1 = cv2.resize(image1, (image1.shape[1] // 4, image1.shape[0] // 4))
cv2.imshow('', image1)
cv2.waitKey(0)
cv2.destroyAllWindows()
And I got this result:
display result
The original DEM file is placed here: DEM.raw
There's nothing wrong with your code; that's simply what's in your file. You can convert it to a JPEG or PNG with ImageMagick at the command line like this:
magick -size 4096x4096 -depth 8 GRAY:DEM.raw result.jpg
And you'll get pretty much the same:
The problem is elsewhere.
Taking the hint from Fred (@fmw42) and playing around, oops I mean "experimenting carefully and scientifically", I can get a more likely-looking result if I treat your image as 4096x2048 pixels with 16 bpp and MSB-first endianness:
magick -size 4096x2048 -depth 16 -endian MSB gray:DEM.raw -normalize result.jpg
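If you want to check that interpretation from Python, a minimal sketch with numpy/OpenCV looks like this (the 4096x2048 shape and big-endian uint16 type are the guesses from above, not known properties of the file):

import numpy as np
import cv2

rows, cols = 2048, 4096                   # guessed shape: 4096 wide x 2048 high
# '>u2' = big-endian (MSB-first) unsigned 16-bit, matching -depth 16 -endian MSB
dem = np.fromfile('DEM.raw', dtype='>u2', count=rows * cols).reshape(rows, cols)

# Stretch to 8-bit for display, roughly what -normalize does
disp = cv2.normalize(dem, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
disp = cv2.resize(disp, (disp.shape[1] // 4, disp.shape[0] // 4))
cv2.imshow('DEM', disp)
cv2.waitKey(0)
cv2.destroyAllWindows()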
I am trying to convert some image (raw) that has no header, just the pixel values to a png, so I can view it. I found some information here (using imageMagik) and it works if the image is one channel. I used the command
convert -depth 8 -size 5312x2988+0 gray:image.raw pic.png
I searched some more and found that using rgb is the way to handle images with more channels, so I changed the syntax to
convert -depth 8 -size 5312x2988+0 rgb:image.raw pic.png
... but the output looks more like a 3x3 grid containing small images, like
R R R
G G G
B B B
The size is not wrong, but it may be the way the pixels are stored (interleaved vs. planar).
Can anyone help me convert the 3-channel image the correct way?
You could try specifying -interlace plane before the input file. Or maybe -interlace line, like this:
convert -interlace plane -depth 8 -size 5312x2988+0 rgb:image.raw pic.png
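If you would rather fix it in Python, here is a rough numpy/Pillow sketch under the same assumption, i.e. that the file really is 8-bit planar RGB with the stated 5312x2988 size:

import numpy as np
from PIL import Image

w, h = 5312, 2988
raw = np.fromfile('image.raw', dtype=np.uint8, count=3 * w * h)

# Planar layout: all the R values first, then all G, then all B
r, g, b = raw.reshape(3, h, w)

# Interleave the three planes into an HxWx3 array and save as PNG
Image.fromarray(np.dstack([r, g, b])).save('pic.png')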
Like many ImageMagick parameters, you can enumerate the options at the command line with identify -list OPTION, so, in this case:
identify -list interlace
Output
Line
None
Plane
Partition
GIF
JPEG
PNG
I have a very big image; let's call it orig-image.tiff.
I want to cut it into smaller pieces, apply some processing to each piece, and stitch the newly created little images back together.
I cut it into pieces with this command:
convert orig-image.tiff -crop 400x400 crop/parts-%04d.tiff
Then I'll generate many images by applying a treatment to each part-XXXX.tiff image and end up with images from part-0000.png to part-2771.png.
Now I want to stitch back the images into a big one. Can imagemagick do that?
If you were using PNG format, the tiles would "remember" their original position, as @Bonzo suggests, and you could take them apart and reassemble like this:
# Make 256x256 black-red gradient and chop into 1024 tiles of 8x8 as PNGs
convert -size 256x256 gradient:red-black -crop 8x8 tile-%04d.png
and reassemble:
convert tile*png -layers merge BigBoy.png
That is because the tiles "remember" their original position on the canvas - e.g. +248+248 below:
identify tile-1023.png
tile-1023.png PNG 8x8 256x256+248+248 16-bit sRGB 319B 0.000u 0:00.000
With TIFs, you could do:
# Make 256x256 black-red gradient and chop into 1024 tiles of 8x8 as TIFs
convert -size 256x256 gradient:red-black -crop 8x8 tile-%04d.tif
and reassemble with the following but sadly you need to know the layout of the original image:
montage -geometry +0+0 -tile 32x32 tile*tif BigBoy.tif
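If you end up doing the reassembly in Python instead, a minimal Pillow sketch looks like this; it assumes square tiles of a known size, a known 32x32 grid, and that -crop wrote the tiles in row-major order:

from PIL import Image

tile, grid = 8, 32                        # assumed tile size and grid dimensions
big = Image.new('RGB', (tile * grid, tile * grid))

for i in range(grid * grid):
    t = Image.open(f'tile-{i:04d}.png')
    row, col = divmod(i, grid)            # row-major order, matching -crop output
    big.paste(t, (col * tile, row * tile))

big.save('BigBoy.png')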
Regarding Glenn's comment below, here is the output of pngcheck showing the "remembered" offsets:
pngcheck tile-1023*png
Output
OK: tile-1023.png (8x8, 48-bit RGB, non-interlaced, 16.9%).
iMac:~/tmp: pngcheck -v tile-1023*png
File: tile-1023.png (319 bytes)
chunk IHDR at offset 0x0000c, length 13
8 x 8 image, 48-bit RGB, non-interlaced
chunk gAMA at offset 0x00025, length 4: 0.45455
chunk cHRM at offset 0x00035, length 32
White x = 0.3127 y = 0.329, Red x = 0.64 y = 0.33
Green x = 0.3 y = 0.6, Blue x = 0.15 y = 0.06
chunk bKGD at offset 0x00061, length 6
red = 0xffff, green = 0xffff, blue = 0xffff
chunk oFFs at offset 0x00073, length 9: 248x248 pixels offset
chunk tIME at offset 0x00088, length 7: 13 Dec 2016 15:31:10 UTC
chunk vpAg at offset 0x0009b, length 9
unknown private, ancillary, safe-to-copy chunk
chunk IDAT at offset 0x000b0, length 25
zlib: deflated, 512-byte window, maximum compression
chunk tEXt at offset 0x000d5, length 37, keyword: date:create
chunk tEXt at offset 0x00106, length 37, keyword: date:modify
chunk IEND at offset 0x00137, length 0
No errors detected in tile-1023.png (11 chunks, 16.9% compression).
I would like to overlay 2 (or more) RGB images in Digital Micrograph by scripting.
Unlike real images without color, which can be merged by summing the intensities, RGB images have to be merged in some other way, but I have no clue how.
Thanks for helping!
You can sum RGB images just like regular images, but your problem is that you need to define what you mean by "overlay".
RGB Images are triplets holding a value for each of the 3 channels RED, GREEN, BLUE and these values are clipped between [0 and 255].
"Summing" RGB images will give you again a triplet, but any value bigger then 255 is truncated to 255, so you will shift more and more towards "white" in the image.
You could define an "overlay" as the mean-values instead, but the effect of "overlaying" then becomes more and more towards "average gray".
Or you could define an "overlay" as the "max-values" or "min-values" of the involved channels.
Or, or, or....
When you think of "overlaying" RGB images, it is helpful to think of other graphics programs like Photoshop which allow you to combine "layers". Usually these programs offer multiple options ("overlay", "screen", "lighten", "darken", you name it...) which all define a different mathematical relationship between the three color values of the first and the three color values of the second layer.
The commands you need for this maths are RGB( ), RED( ), GREEN( ) and BLUE( ), plus simple arithmetic. See the example:
// Build two test RGB images from three real-valued channels each
image img1r := RealImage("Red 1",4,256,256)
image img1g := RealImage("Green 1",4,256,256)
image img1b := RealImage("Blue 1",4,256,256)
img1r = icol/iwidth * 256
img1b = iradius/iwidth * 256
img1g = irow/iwidth * 256
RGBImage img1 = RGB(img1r,img1g,img1b)
img1.Setname( "Image 1 (RGB)")

image img2r := RealImage("Red 2",4,256,256)
image img2g := RealImage("Green 2",4,256,256)
image img2b := RealImage("Blue 2",4,256,256)
img2r = (icol%10)<5 ? 256 : 100
img2g = (irow%10)<5 ? 256 : 100
img2b = (iradius%10)<5 ? 256 : 100
RGBImage img2 = RGB(img2r,img2g,img2b)
img2.Setname( "Image 2 (RGB)")

// Four possible definitions of "overlay": sum, average, channel-wise max and min
image sumImg = img1 + img2
sumImg.SetName( "SUM" )
image avImg = (img1 + img2)/2
avImg.SetName( "AVERAGE" )
image maxImg = RGB( max(red(img1),red(img2)), max(green(img1),green(img2)), max(blue(img1),blue(img2)))
maxImg.SetName( "Channel MAX" )
image minImg = RGB( min(red(img1),red(img2)), min(green(img1),green(img2)), min(blue(img1),blue(img2)))
minImg.SetName( "Channel MIN" )
// Arrange display
EGUPerformActionWithAllShownImages( "delete" )
minImg.ShowImage()
maxImg.ShowImage()
avImg.ShowImage()
sumImg.ShowImage()
img2.ShowImage()
img1.ShowImage()
TagGroup layout = SLMCreateGridLayout( 2 , 3 )
EGUArrangeAllShownImagesInLayout( layout )
It should also be noted that some "overlay" combinations are not based on the Red/Green/Blue (RGB) color model, but on the alternative Hue/Saturation/Brightness (HSB) color model.
DigitalMicrograph scripting natively only supports RGB, but you can do the maths yourself.
You might also find it useful to look at the examples script "Display as HSB.s" on the Gatan script example site.
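For reference, the same channel-wise combinations are easy to reproduce outside DigitalMicrograph as well; here is a rough Python/numpy equivalent of the SUM/AVERAGE/MAX/MIN examples above (a sketch only, assuming the inputs are 8-bit HxWx3 arrays):

import numpy as np

def overlay(a, b, mode='max'):
    # Combine two uint8 HxWx3 RGB arrays channel by channel
    a16, b16 = a.astype(np.uint16), b.astype(np.uint16)
    if mode == 'sum':                     # clips towards white, like SUM
        out = np.clip(a16 + b16, 0, 255)
    elif mode == 'average':               # drifts towards grey, like AVERAGE
        out = (a16 + b16) // 2
    elif mode == 'max':                   # channel-wise maximum
        out = np.maximum(a, b)
    else:                                 # 'min': channel-wise minimum
        out = np.minimum(a, b)
    return out.astype(np.uint8)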
You can script image merging very simply with ImageMagick which is installed on most Linux distros and is available for OSX and Windows.
As you have not provided any sample images, I have made a couple - image1.png and image2.png like this:
Now, there are lots of Blend Modes available - some of the more common ones are Lighten, Darken, Overlay, Blend. So, let's try a few at the command line in Terminal:
convert image1.png image2.png -compose darken -composite result.png
convert image1.png image2.png -compose lighten -composite result.png
convert image1.png image2.png -compose overlay -composite result.png
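If you prefer to stay in Python, Pillow's ImageChops module has rough counterparts for some of these modes; this is only a sketch, and ImageMagick's compose operators and Pillow's channel operations are not guaranteed to match pixel for pixel:

from PIL import Image, ImageChops

img1 = Image.open('image1.png').convert('RGB')
img2 = Image.open('image2.png').convert('RGB')

ImageChops.darker(img1, img2).save('result_darken.png')    # ~ -compose darken
ImageChops.lighter(img1, img2).save('result_lighten.png')  # ~ -compose lighten
ImageChops.overlay(img1, img2).save('result_overlay.png')  # ~ -compose overlay (recent Pillow)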
The options are endless - you can get a list of the blend modes available in ImageMagick like this:
identify -list compose
Output
Atop
Blend
Blur
Bumpmap
ChangeMask
Clear
ColorBurn
ColorDodge
Colorize
CopyBlack
CopyBlue
CopyCyan
CopyGreen
Copy
CopyMagenta
CopyOpacity
CopyRed
CopyYellow
Darken
DarkenIntensity
DivideDst
DivideSrc
Dst
Difference
Displace
Dissolve
Distort
DstAtop
DstIn
DstOut
DstOver
Exclusion
HardLight
HardMix
Hue
In
Lighten
LightenIntensity
LinearBurn
LinearDodge
LinearLight
Luminize
Mathematics
MinusDst
MinusSrc
Modulate
ModulusAdd
ModulusSubtract
Multiply
None
Out
Overlay
Over
PegtopLight
PinLight
Plus
Replace
Saturate
Screen
SoftLight
Src
SrcAtop
SrcIn
SrcOut
SrcOver
VividLight
Xor
I am trying to convert a BMP from 24 bits/pixel to 16 bits/pixel mode in ImageMagick.
convert /tmp/a/new/37.bmp -depth 5 -define bmp:format=bmp2 /tmp/a/new/37_v2_16bit.bmp
convert /tmp/a/new/37.bmp -depth 5 -define bmp:format=bmp3 /tmp/a/new/37_v3_16bit.bmp
The result still has 8 bits per R, per G and per B, according to the output of:
identify -verbose
What am I doing wrong? How do I get 16-bit color in a BMP?
Thank you!
P. S.
-depth value
depth of the image. This is the number of bits in a pixel. The only acceptable values are 8 or 16.
http://linux.math.tifr.res.in/manuals/html/convert.html
=(
Official Documentation says (no restrictions mentioned):
-depth value
depth of the image.
This is the number of bits in a color sample within a pixel. Use this option to specify the depth of raw images whose depth is unknown, such as GRAY, RGB, or CMYK, or to change the depth of any image after it has been read.
convert /tmp/a/new/37.bmp -colors 256 /tmp/a/new/37_256.bmp
makes the file smaller, but visually it is the same! wth?! )))))
convert /tmp/a/new/37.bmp -colors 65536 /tmp/a/new/37_64k.bmp
same size, same visual picture.
convert /tmp/a/new/37.bmp -dither None -colors 256 /tmp/a/new/37_256_nd.bmp
a bit smaller again, but it does not look 256-colored! A bug? A 256-color 800x600 BMP should be ~800x600x1 bytes (without headers), i.e. ~480,000 bytes, but this one is ~650,000 bytes)))) funny program))
The documentation you quoted from linux.math... is pretty old (2001) and is incorrect about -depth. The "-depth 16" option does not mean 16-bit pixels (like R5G6B5 or R5G5B5A1); -depth 16 means 48-bit/pixel R16, G16, B16 or 64-bit/pixel R16, G16, B16, A16 pixels. The "Official documentation" that you quoted (2015) is correct.
ImageMagick doesn't support that kind of 16 bits/pixel format, so you'll need to store the images in an 8 bits/channel format and live with the larger file size.
It also appears that for images with 256 or fewer colors, it will write a colormapped image with 1-, 4-, or 8-bit indices. You don't have to make any special request; it does that automatically. Use "-compress none" for uncompressed BMPs. The current ImageMagick (version 6.9.2-8) gives me the expected 480 kbyte file if I start with an 800x600 image with more than 256 colors and use
convert im.bmp -colors 256 -compress none out.bmp
ImageMagick does support a 16-bit "bitfields" BMP format when reading, but I don't see any indication that it can write one, and I haven't tried either reading or writing such images.
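If you only need the smaller colormapped BMP mentioned above and want to produce it from Python, Pillow's quantize is one option (a sketch; the filenames are just placeholders):

from PIL import Image

img = Image.open('37.bmp')
# Quantize to at most 256 colors; the result is a palettized ('P' mode) image,
# which BMP then stores with 8-bit indices into a color table
img.quantize(colors=256).save('37_256.bmp')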
It's not ImageMagick, but ffmpeg (more usually associated with video) can create a 16-bit BMP, if the 565 format is what you are referring to:
ffmpeg -i ffmpeg-logo.png -sws_flags neighbor -sws_dither none -pix_fmt rgb565 -y ffmpeg-logo-16bit-nodither.bmp
That intentionally disables dithering but if you want that just omit the sws parts, e.g.
ffmpeg -i ffmpeg-logo.png -pix_fmt rgb565 -y ffmpeg-logo-16bit-dithered.bmp
If your images are inherently from an rgb565 source then it should not dither them but I'd always be cautious and inspect a few closely before doing any batch conversions.
Based on the discussion in the comments, it sounds like PNG would be a good format for preserving old screenshots verbatim, since it uses lossless compression, but maybe that's not applicable because of the vintage software involved?
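To see exactly what the 565 packing does to the pixel values, here is a small numpy sketch that packs 8-bit RGB into 16-bit RGB565 and expands it back for preview (no dithering, just truncation of the low bits; the filenames are placeholders):

import numpy as np
from PIL import Image

rgb = np.asarray(Image.open('ffmpeg-logo.png').convert('RGB'), dtype=np.uint16)
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Keep the top 5/6/5 bits of each channel and pack them into one 16-bit value
packed = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

# Expand back to 8 bits per channel to preview the (slightly banded) result
r8 = ((packed >> 11) & 0x1F) << 3
g8 = ((packed >> 5) & 0x3F) << 2
b8 = (packed & 0x1F) << 3
Image.fromarray(np.dstack([r8, g8, b8]).astype(np.uint8)).save('preview-565.png')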
I have a command:
composite -colorspace gray -quality 99 -compose plus t-1.jpg x-1.jpg 2.jpg
I would like to produce the same effect in python. I tried this:
from PIL import Image

imga = Image.open('t-1.jpg')
imgb = Image.open('x-1.jpg')
ab = Image.blend(imga, imgb, 0.5)  # 50/50 blend of the two images
ab.save("test.jpg")
The test.jpg and 2.jpg do not look anything alike, which would mean that -compose plus is not equivalent to blend(imga, imgb, 0.5). The docs for the compose plus operator are here. What is the comparable operation for ImageMagick's -compose plus?
Found it right after I asked the question.
from PIL import Image, ImageChops

imga = Image.open('t-1.jpg')
imgb = Image.open('x-1.jpg')
# ImageChops.add(image1, image2, scale, offset) computes (image1 + image2) / scale + offset,
# clipping at 255 - with scale=1 and offset=0 this matches -compose plus
ab = ImageChops.add(imga, imgb, 1, 0)
ab.save("test.jpg")