Apply PanoTools alignment to individual images - image-processing

I have a batch of images (timelapse photos) which I aligned using align_image_stack from PanoTools, because they have some small shifts. The required positional shift relative to the first image is specified as roll/pitch/yaw values in the output PTO file. How can I apply exactly these positional shifts to the images? I mean, if an image gets shifted to the right, for example, it gets a (black) border on the left and is cropped off at the right.
Afterwards I could crop out the area inside the borders to get an unshifted timelapse.

I just realized the -a option of align_image_stack does this!
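For the follow-up crop, here is a rough Pillow/NumPy sketch (not from the thread; the aligned_*.tif pattern and the cropped_ prefix are placeholder names) that keeps only the area covered by every aligned frame:
import glob
import numpy as np
from PIL import Image

# Collect the aligned frames (filename pattern is hypothetical).
files = sorted(glob.glob("aligned_*.tif"))

# A pixel belongs to the common area if it is non-black in every frame.
common = None
for f in files:
    arr = np.asarray(Image.open(f).convert("RGB"))
    nonblack = arr.any(axis=2)          # True where the pixel has content
    common = nonblack if common is None else (common & nonblack)

# Bounding box of the common area (assumes the jointly covered area is
# roughly rectangular, which it is for pure shifts).
rows = np.where(common.any(axis=1))[0]
cols = np.where(common.any(axis=0))[0]
top, bottom = rows[0], rows[-1] + 1
left, right = cols[0], cols[-1] + 1

# Crop every frame to that box.
for f in files:
    Image.open(f).crop((left, top, right, bottom)).save("cropped_" + f)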

Related

Zeroing channels of cut out area

I imagine there is probably an easy solution for this in GIMP, but for the life of me I can't figure it out.
I'm using the color picker tool with the "Use info window" option selected to verify pixel values.
Basically, I have some pixels in an image that I need to zero out. By that I mean I want the RGBA values to all be set to 0.
I attempted to:
use the rectangular select tool to select the group of pixels
use bucket fill to set the pixels to black with opacity set to 0.0
Then, to verify it was done correctly, I use the color picker to test the value of the rectangle I just filled.
Unfortunately, it seems to just contain the previous value. What am I missing here?
The "opacity" of the bucket-fill is the opacity of the paint, not the opacity of the resulting pixels. In other words the less opaque it is, the less visible the result. What you want to do is bucket-fill selection with black, then [delete].

How to remove non-periodic lines from binary image

Example Image
I want to remove the lines (shown in red) as they are out of order. The lines shown in black repeat at (approximately) the same period. The period is not known beforehand. Is there any way of deleting the non-periodic lines (shown in red) automatically?
NOTE: The image is binary (black & white); the lines are shown in red only for illustration.
Of course there is a way; there is almost always some way to do something.
Unfortunately you have not described a particular problem. The question as asked is too broad to be answered here.
To help you get started (I highly recommend you start with pen, paper and your brain):
Detect the lines -> google or think; there are many standard ways to detect lines in an image. If there is no noise in your binary image, it's trivial.
Find the equidistant sets -> think.
Delete the rest -> think (you know what is good, so everything else has to go away).
I assume your lines are (almost) vertical.
The following should work (a rough sketch follows the steps):
Turn the image into a column-sum histogram.
Try a Fourier transform on the signal (potentially padding the image appropriately).
Pick the maximum/peak of the Fourier spectrum as your base period.
If you need the lines rather than just the positions of the lines, generate a mask with lines at the appropriate intervals (as determined by the analysis above) and apply it to the image.
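A minimal NumPy sketch of these steps (not part of the original answer; the file name is a placeholder and the tolerance is an arbitrary choice):
import numpy as np

# binary: 2-D array, non-zero where a (near-vertical) line pixel is set.
binary = np.load("binary.npy")       # placeholder input

# 1. Column-sum histogram: one value per image column.
col_sums = binary.sum(axis=0).astype(float)
col_sums -= col_sums.mean()          # remove the DC component

# 2./3. Fourier transform and pick the dominant period.
spectrum = np.abs(np.fft.rfft(col_sums))
freqs = np.fft.rfftfreq(col_sums.size)
peak = np.argmax(spectrum[1:]) + 1   # skip the zero-frequency bin
period = 1.0 / freqs[peak]           # base period in pixels

# 4. Build a mask of columns near multiples of the period, phase-aligned
#    to the strongest column, and keep only those columns.
phase = np.argmax(col_sums[:int(round(period))])
cols = np.arange(binary.shape[1])
tolerance = 2                        # pixels of slack around each expected line
mask_cols = np.abs((cols - phase + period / 2) % period - period / 2) <= tolerance
cleaned = binary * mask_cols[np.newaxis, :]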

OpenCV: Generating points from image after thinning

I've run into an issue concerning generating floating-point coordinates from an image.
The original problem is as follows:
the input image is handwritten text. From this I want to generate a set of points (just x,y coordinates) that make up the individual characters.
At first I used findContours in order to generate the points. Since findContours finds the edges of the characters, the image first needs to be run through a thinning algorithm, because I'm not interested in the shape of the characters, only the lines, or as in this case, the points.
Input:
Thinning:
So, I run my input through the thinning algorithm and all is fine, the output looks good. Running findContours on this, however, does not work out so well: it skips a lot of stuff and I end up with something unusable.
The second idea was to generate bounding boxes (with findContours), use these bounding boxes to grab the characters from the thinning output, and collect all non-white pixel indices as "points", offset by the bounding box position. This generates even worse output and seems like a bad method.
Horrible code for this:
// edges: the thinned image; bb: a bounding Rect from findContours
Mat temp = new Mat(edges, bb);                 // ROI view of the bounding box

// Copy the ROI pixels into a flat byte buffer
byte[] roi_buff = new byte[(int) (temp.total() * temp.channels())];
temp.get(0, 0, roi_buff);

int COLS = temp.cols();
List<Point> preArrayList = new ArrayList<Point>();

// Every non-zero pixel becomes a point, offset by the box's top-left corner
for (int i = 0; i < roi_buff.length; i++) {
    if (roi_buff[i] != 0) {
        Point tempP = bb.tl();                 // tl() returns a new Point each call
        tempP.x += i % COLS;
        tempP.y += i / COLS;
        preArrayList.add(tempP);
    }
}
Is there any alternatives or am I overlooking something?
UPDATE:
I overlooked the fact that I need the points (pixels) to be ordered. In the method above I simply use a scanline approach to grab all the pixels. If you look at the 'o', for example, it would first grab the point on the left-hand side, then the one on the right-hand side. I need them to be ordered by their neighbouring pixels, since I want to draw paths with the points later on (outside of OpenCV).
Is this possible?
You should look into implementing your own connected components labelling. The concept is very simple: you scan the first line and assign unique labels to each horizontally connected strip of pixels. You basically check for every pixel if it is connected to its left neighbour and assign it either that neighbour's label or a new label. In the second row you do the same, but you also check against the pixels above it. Sometimes you need a label merge: two strips that were not connected in the previous row are joined in the current row. The way to deal with this is either to keep a list of label equivalences or use pointers to labels (so you can easily do a complete label change for an object).
This is basically what findContours does, but if you implement it yourself you have the freedom to go for 8-connectedness and even bridge a single-pixel or two-pixel gap. That way you get "almost-connected components labelling". It looks like you need this for the "w" in your example picture.
Once you have the image labelled this way, you can push all the pixels of a single label into a vector and order them something like this: find the top-left pixel, push it to a new vector and erase it from the original vector. Now find the pixel in the original vector closest to it, push it to the new vector and erase it from the original. Continue until all pixels have been transferred.
It will not be very fast this way, but it should be a start.
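A rough Python sketch of that greedy ordering step (not from the original answer; it uses OpenCV's built-in connectedComponents for the labelling, so it will not bridge gaps the way a hand-rolled version could, and the file name is a placeholder):
import cv2
import numpy as np

# thinned: binary image (uint8, 0 or 255) after the thinning step
thinned = cv2.imread("thinned.png", cv2.IMREAD_GRAYSCALE)

# Label 8-connected components.
num_labels, labels = cv2.connectedComponents(thinned, connectivity=8)

paths = []
for label in range(1, num_labels):              # label 0 is the background
    ys, xs = np.nonzero(labels == label)
    remaining = list(zip(xs.tolist(), ys.tolist()))

    # Greedy ordering: start at the top-left pixel, then repeatedly hop
    # to the nearest remaining pixel.
    remaining.sort(key=lambda p: (p[1], p[0]))  # top-left first
    path = [remaining.pop(0)]
    while remaining:
        cx, cy = path[-1]
        nearest = min(range(len(remaining)),
                      key=lambda i: (remaining[i][0] - cx) ** 2 +
                                    (remaining[i][1] - cy) ** 2)
        path.append(remaining.pop(nearest))
    paths.append(path)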

Is there an easy way to cut a slice from an image using Gimp?

Wondering if there is an easy way to remove a rectangular slice across the entire width of an image using Gimp, and have the resulting hole closed up automatically. I hope that makes sense. If I select a slice across an image and do "cut", it leaves a blank "hole" there. I want the new top and bottom of the image to join and fill that hole, reducing the image height by the amount sliced out.
Any easy way to do this?
Here is a method that is quick and often does what you want:
Cut out the middle, leaving a transparent "hole".
Click anywhere to remove the selection (so the hole is not selected).
Click Image > Zealous Crop.
This is going to remove the middle part. However, if you also have transparency in other parts of the image (like around the edges) it's going to remove that transparency too.
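If this becomes a repetitive task, the same three steps can be scripted from GIMP's Python-Fu console; here is a rough sketch, assuming the Zealous Crop plug-in is registered in the PDB as plug_in_zealouscrop (as in GIMP 2.8/2.10) and with placeholder slice coordinates:
# Run inside GIMP's Python-Fu console (Filters > Python-Fu > Console),
# where the gimp and pdb objects are already provided.
image = gimp.image_list()[0]            # the most recently opened image
drawable = image.active_drawable        # needs an alpha channel for the clear step

# 1. Select a full-width slice (y position 100 and height 50 are placeholders).
pdb.gimp_image_select_rectangle(image, CHANNEL_OP_REPLACE,
                                0, 100, image.width, 50)
# 2. Cut it to transparency and drop the selection.
pdb.gimp_edit_clear(drawable)
pdb.gimp_selection_none(image)
# 3. Zealous Crop removes the fully transparent rows, closing the gap.
pdb.plug_in_zealouscrop(image, drawable)
gimp.displays_flush()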
I believe you're asking to do something like cut out the middle of a page, leaving the header and footer and have the blank space removed with the cut action, effectively joining the header and footer together.
To my knowledge, I don't believe so. Even if you cut or delete, that space is still part of the image, just without content.
But, you would be able to highlight the top or bottom (or left or right) of the remaining space and drag it to align with the other side. It's not ideal for repetitive tasks, but should get you through if you only have to do it a few times.
Install Python and the Python Imaging Library. Back in GIMP, select the full-width areas you don't want and cut them to transparency, then export the image as test.png. Then use this Python code (works only if complete lines are transparent; will not work properly if there are 100%-transparent pixels anywhere other than on a full-width row)—
# Python 2 with Pillow; assumes an RGBA image where only whole rows are transparent
from PIL import Image
i = Image.open("test.png")
b = i.tobytes()                       # raw RGBA bytes, 4 bytes per pixel
# keep only the 4-byte pixels whose alpha byte is non-zero
b2 = ''.join(b[n:n+4] for n in xrange(0, len(b), 4) if ord(b[n+3]))
newHeight = len(b2) / i.width / 4     # number of remaining rows
i2 = Image.frombytes('RGBA', (i.width, newHeight), b2)
i2.save("test.png")
Then re-load test.png and verify that the areas you cut have gone.
In GIMP 2.8.1 you can easily create a new image from a selection. So if you make a rectangular selection, do a copy (Ctrl+C) and then paste it as a new image:
Edit -> Paste as -> New Image (or Ctrl+Shift+V).

Repositioning images on FormResize proportionally

I have a Delphi form with TImages on it. Actually, it's a "fake" desktop with "icons" (the TImages).
When the user resizes the form (scales it or maximizes it, for example) the icons on the form should align proportionally.
Right now, I'm doing something like this with the images:
ImageX.Left:=Round(ImageX.Left * (Width / OldWidth));
ImageX.Top:=Round(ImageX.Top * (Height / OldHeight));
Now this is OK, until I start to make the maximized form smaller.
In that case the rightmost images are cut in part by the form's border (they're off the form's client area).
If I reposition these images to fit the client area, then the positions of the icons get distorted upon scaling back to the maximized size.
Any ideas for a better algorithm/fix?
Thanks!
First of all, you can't have a correctly scaled desktop if you only move the images and don't scale them as well. You can do slightly better by moving the midpoints of your images rather than their top-left corners. It still won't be perfect, but it will work better. Of course, now the images will be cropped on all four sides, not just the bottom and right, but at least it will be symmetrical :-)
Second, you will get cumulative rounding errors, since you constantly overwrite the "original" values (ImageX's Top and Left coordinates). You'd be better off storing the original values in some sort of collection or array, and setting the new position based on the original value rather than the previous value.
Something like this:
ImageX.Left:=Round(ImageX_OriginalLeft * (Width / Original_Width));
ImageX.Top:=Round(ImageX_OriginalTop * (Height / Original_Height));
