When downloading images from the METEOR M-2 satellite, the image is compressed near the edges. This is corrected by a Windows utility called SmoothMeteor. The problem with this app is that it's Windows-only and doesn't seem to have a batch mode.
Is it possible to use ImageMagick to stretch an image only along the X axis, so that there is no stretching in the center, but the closer a pixel is to the border, the more stretching is applied?
An example is provided here:
Notice how the center of the map is largely unaffected but the clouds near the left edge look about 4 times wider than the original.
I would guess this is something like a pincushion transformation but only on the X axis, but I'm not sure if I'm even on the right track.
What is your platform? If on Unix-like (Linux, Mac OSX, Windows 10 Unix or Windows w/Cygwin), then I have a bash shell script calling ImageMagick, called "xpand", that does what you ask. See my web site at http://www.fmwconcepts.com/imagemagick/index.php
Input:
xpand -d 350 -m horizontal img.png result.png
-d can be either a dimension or an aspect ratio (w:h). I note that my approach (a 2nd order stretch) seems to stretch the result a bit more than your SmoothMeteor tool.
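Since the lack of a batch mode was part of the problem, here is a minimal sketch of how you might process a whole directory of captures with the same script and options as above. It assumes xpand is on your PATH and that the images are PNGs in the current directory:
#!/bin/bash
# Run the xpand script over every PNG in the current directory,
# writing the expanded versions into ./expanded with the same names.
mkdir -p expanded
for f in *.png; do
    xpand -d 350 -m horizontal "$f" "expanded/$f"
done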
I am trying to create a pyramid of tiles from a non-square image (width: 32768px, height: 18433px).
I am using libvips as follows:
vips dzsave my_image.tif out_folder --layout google --suffix .png
For the same purpose I have also used gdal2tiles:
python gdal2tiles.py -p raster -z 0-7 -w none my_image.tif
Because my image is not square, some padding is necessary when the 256x256 tiles are created. However, the padding is different between vips and gdal2tiles: the former adds padding at the bottom of the tile, whereas the latter adds it at the top (and it is transparent). The image below shows the 256x256 tile at the root of the pyramid (i.e. zoom level = 0). I have manually added the yellow background and the black outline.
With vips, is it possible to have padding similar to gdal2tiles, so that the bottom-left corner of the tile coincides with that of the image? I am plotting points on my image, hence it helps to have the origin at the bottom-left.
How can I also have a transparent background with vips? (That might be better as a separate post, though...)
You can run dzsave as the output of any vips operation by using .dz as the file extension and putting the arguments in square brackets after the filename. For example, this command:
vips dzsave my_image.tif out_folder --layout google --suffix .png
Can also be written as:
vips copy my_image.tif out_folder.dz[layout=google,suffix=.png]
So you can solve your problem by expanding your input image to a square before running dzsave.
For example:
$ vips gravity Chicago.jpg dir.dz[layout=google,suffix=.png,skip_blanks=0] south-west 32768 32768 --extend white
32768 is the smallest power of two at or above the image width. The skip_blanks option makes dzsave skip tiles that are equal to the blank background tile.
That command makes this dir/0/0/0.png:
(I added the black lines to show the edges)
To get a transparent background, you need to add an alpha. This would require another command, and is beyond what the vips CLI is really designed for.
I would switch to something like Python. With pyvips, for example, you can write:
#!/usr/bin/env python3
import sys
import pyvips

im = pyvips.Image.new_from_file(sys.argv[1], access='sequential')
im = im.addalpha()

# expand to the nearest power of two larger square ... by default, gravity will
# extend with 0 (transparent) pixels
size = 1 << int.bit_length(max(im.width, im.height))
im = im.gravity('south-west', size, size)

im.dzsave(sys.argv[2],
          layout='google', suffix='.png', background=0, skip_blanks=0)
Run like this:
$ ./mkpyr.py ~/pics/Chicago.jpg x
To make this x/0/0/0.png:
(added the green background to show the transparency)
In this similar thread, solutions have been proposed for converting the background color of an image to transparent.
But sometimes the background is a simple pattern, like in this case:
Note the square background pattern.
When processing images, the background often needs to be removed or changed, but first you need to detect it (e.g. in order to change its color or make it transparent). For example, from the image above I would like to obtain:
How can I detect/change a specified pattern inside an image?
GUI solutions accepted.
Open-source solutions are preferred (free is at least required).
The simplest solution will be preferred (I would like to avoid installing a program that is hundreds of MB).
Note: I was thinking about posting this question on the Photography Stack Exchange site, but I would say the mindset of a programmer (I may need to edit dozens of such images) is closer to what I want. I am not an artist.
This is not a fully developed answer, but it does go some way towards thinking about a method - maybe someone else would like to develop it...
If, as you say, your pattern is specified, you know what it is - good, aren't I? So, you could look for the "Minimum Repeating Unit" of your pattern in your image. In your case it is a 16x16 grid like this:
Now you can search for that pattern in your image. I am using ImageMagick at the command-line, but you can use other tools if you prefer. ImageMagick is installed on most Linux distros and is available for OSX and Windows for free. So, I search for that grid in your globe image and ImageMagick gives me an output image showing white dots at the top-left corner of every location where the two images match:
compare -metric ae -dissimilarity-threshold 1.0 -compose src -subimage-search globe.gif grid.png res.png
That gets me this in file res-1.png
Now the white dots are where the 16x16 "Minimum Repeating Unit" is found in the image, but at the top-left corner so I shift them right and down by 8 pixels to the centre of the matching area, then I create a new output image where each pixel is the maximum pixel of the 16x16 grid in which it existed before:
convert res-1.png -roll +8+8 -statistic maximum 16x16 z.png
I can now invert that mask and then use it to set the opacity of the original image, thereby blanking areas that matched the "Minimum Repeating Unit":
convert globe.gif \( z.png -negate \) -compose copy_opacity -composite q.png
No, it's not perfect, but it is an idea for an approach that could be refined...
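To take the idea a little further, since the question mentions possibly needing to edit dozens of such images, the three commands above could be chained into a small script. This is only a sketch under the same assumptions as above (a 16x16 repeating unit saved as grid.png, GIF inputs); the file names are made up for illustration:
#!/bin/bash
# Remove the repeating background pattern from every GIF in the current directory.
grid=grid.png
for f in *.gif; do
    base="${f%.*}"
    # 1. Find every location of the repeating unit; compare writes two files,
    #    ${base}-match-0.png and ${base}-match-1.png (the match map with white dots)
    compare -metric ae -dissimilarity-threshold 1.0 -compose src \
            -subimage-search "$f" "$grid" "${base}-match.png" 2>/dev/null
    # 2. Shift the dots to the centre of each match and grow them into 16x16 blocks
    convert "${base}-match-1.png" -roll +8+8 -statistic maximum 16x16 "${base}-mask.png"
    # 3. Invert the mask and use it as opacity to blank the matched areas
    convert "$f" \( "${base}-mask.png" -negate \) \
            -compose copy_opacity -composite "${base}-clean.png"
done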
I've created a simple 7-second clip which uses a standard FCPX plugin: the "Bold Fin" title.
While I am editing this clip, everything fits the screen:
Everything is also OK when I start to export this clip to the master file:
But when the actual file is ready, it seems to be cropped:
Could somebody please help me find the reason why my output is cropped, and how to fix this issue?
Judging by the image you provided, you have non-square pixels in your footage or in the project settings (or the footage's aspect ratio isn't 16:9). I took a screenshot of the image inside FCP's canvas and found that you have rectangular pixels stretched along the X axis instead of square ones.
Seemingly FCPX, trying to compensate for the pixel aspect ratio for the Full HD export (PAR = 1.0, AR = 16:9), stretched the pixels along the Y axis, which led to the cropping.
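If you want to double-check the pixel aspect ratio outside of FCPX, one option (an assumption on my part, not something FCPX provides) is to inspect the clip with ffprobe from FFmpeg; a sample aspect ratio of 1:1 means square pixels. Replace input_clip.mov with your footage:
ffprobe -v error -select_streams v:0 \
        -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio \
        -of default=noprint_wrappers=1 input_clip.mov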
We need huge numbers of PNGs resized so that their dimensions are divisible by 12; each PNG varies in size, and the image needs to stay at 1:1 scale in the top left.
At the moment we're having to bring each file into Photoshop manually and enlarge the canvas in x and y to be divisible by 12 while keeping the image in the top-left corner. With the number of PNGs we need done now and in the future, we need an automated process.
I would do this with ImageMagick, which is free and installed on most Linux distros and also available for OSX and Windows from here.
This little bash script will resize all the PNG files in the current directory and save them under their original names in a subdirectory called output. It is pretty easy to read: it basically loops through all the PNG files in the directory. It then uses ImageMagick's built-in calculator to work out the size of your output file as the nearest multiple of 12. Then it loads the image, extends the background with transparent pixels (-background none) to that size (using -extent), and leaves the original image in the top-left corner (-gravity NorthWest).
#!/bin/bash
# Make output directory - ignore errors
mkdir output 2> /dev/null
# Make sure we don't barf if there are no files
shopt -s nullglob
# Make sure we process *.png, *.PNG, *.pNg etc
shopt -s nocaseglob
# Loop through all pngs in current directory
for f in *.png; do
# Calculate new extent as nearest multiple of 12
# In general, to round x to nearest n, you do ((x+n-1)/n)*n
extent=$(convert "$f" -format "%[fx:12*round(((w+11)/12)-0.5)]x%[fx:12*round(((h+11)/12)-0.5)]" info: )
# Now extend canvas transparently to new size and leave original image in top-left
convert "$f" -background none -gravity northwest -extent $extent output/"$f"
done
P.S. If installing ImageMagick on OSX, please ask for advice before trying.
P.P.S. If you have 10,000+ images to resize, and you do it often, and you are on OSX or Linux (probably not Windows), I would recommend GNU Parallel. If that is likely, please ask.
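For what it's worth, here is a rough sketch of how the same per-file work could be farmed out with GNU Parallel (assuming it is installed); it simply wraps the body of the loop above in a bash function:
#!/bin/bash
mkdir output 2> /dev/null
shopt -s nullglob nocaseglob
# Same calculation and convert call as in the loop above, wrapped in a function
doone() {
    extent=$(convert "$1" -format "%[fx:12*round(((w+11)/12)-0.5)]x%[fx:12*round(((h+11)/12)-0.5)]" info: )
    convert "$1" -background none -gravity northwest -extent "$extent" output/"$1"
}
export -f doone
# Run one job per CPU core across all PNGs in the current directory
parallel doone ::: *.png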
Anyway, here is a possible solution to your problem. This script will run in MATLAB or Octave (Octave is an open-source alternative to MATLAB, so you might want to use that).
Copy the following function into a file and call it resizeIm.m. Then start Octave and call this function for every image you have.
function resizeIm(fileName)
  % Read image
  origIm = imread(fileName);
  % Get size and calculate new size (rounded up to the nearest multiple of 12)
  origSize = size(origIm);
  newSize = 12 * ceil(origSize(1:2) ./ 12);
  % Create a new, padded image of the same class, keeping any colour channels
  newIm = zeros([newSize, size(origIm, 3)], class(origIm));
  newIm(1:origSize(1), 1:origSize(2), :) = origIm;
  % Write image to new file
  [folder, name, ext] = fileparts(fileName);
  newFileName = fullfile(folder, [name, '_resized', ext]);
  imwrite(newIm, newFileName);
end
The function can be called by
resizeIm('C:\path\to\file\myimage.png')
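If you have a whole folder of PNGs, a small shell loop can call the function once per file. This is just a sketch and assumes Octave is installed and resizeIm.m is in the current directory:
#!/bin/bash
# Call the Octave function once for every PNG in the current directory
for f in *.png; do
    octave --eval "resizeIm('$f')"
done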
I would like to resize (downscale) some images the way that Facebook does it. I was thinking of ImageMagick, but hey, I'm open to suggestions :)
I believe Facebook is doing this:
Say you have a max width x height of 250x200; Facebook optimizes the use of this and tries to use as much of the 250x200 as possible. If, for instance, you scale down an image and get 220x200, then they cut from the top and the bottom of the image until they use as much as possible of the 250x200 frame. Actually, I think they take more from the bottom than from the top (around 1:2.5), which I believe is because most pictures have the head at the top and Facebook realizes this.
Is there any name for this kind of resizing algorithm? And is there any way to have ImageMagick do this?
Thanks in advance!
Edit
It actually appears that Facebook might not be doing this "smart" resizing technique after all. They just resize with a min width/min height. Then, when they show the image in their album, they cut from the top/bottom or left/right to use as much as possible of the frame (that is how I perceive it, at least).
-Tobias
You can use ImageMagick to get the dimensions of an image, scale then crop it. As to whether you are accurately describing the algorithm Facebook uses, I don't know.
I think the following link addresses the problem you're trying to tackle:
http://www.imagemagick.org/Usage/resize/#space_fill
The example they give at the very end is...
convert logo: \
-resize 160x -resize 'x160<' -resize 50% \
-gravity center -crop 80x80+0+0 +repage space_fill_2.jpg
That command resizes the image to 160 pixels wide, enlarges it to 160 pixels tall only if it is shorter than that, shrinks the result by half, and crops it to 80x80 from the center.
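If you do want to approximate the behaviour described in the question (a 250x200 frame, trimming roughly 1 part from the top for every 2.5 parts from the bottom), a rough sketch along the same lines could look like this; the 1:2.5 split is the asker's estimate, not a documented Facebook algorithm, and the file names are placeholders:
#!/bin/bash
in=input.jpg
# Scale so the image completely fills a 250x200 frame (the ^ flag may overflow one dimension)
convert "$in" -resize "250x200^" tmp.png
# Trim about 1/3.5 (= 2/7) of the vertical excess from the top, the rest from the bottom
h=$(identify -format "%h" tmp.png)
off=$(( (h - 200) * 2 / 7 ))
convert tmp.png -gravity north -crop 250x200+0+${off} +repage result.jpg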
The following may be of interest to you:
http://www.google.com/search?q=image+entroy+cropping
I've read several documents about using image entropy to choose what part of the image to crop.
Another related link -
Django, sorl-thumbnail crop picture head
edits: added related links, specified an example command for doing a similar task with link to source of example.