GIMP Python-Fu: How to crop a layer to a selection

What is the GIMP API call to crop a layer to a selection, equivalent to Layer -> Crop to Selection in the GUI?
I looked in the Procedure Browser but the calls I found (gimp-crop and gimp-image-crop) perform the crop on the image, not a layer.
(What I really want to do is cut-and-paste multiple layers at once; I'm making a plug-in to help.)

Use pdb.gimp_layer_resize() with the data returned by pdb.gimp_selection_bounds(image):
non_empty, x1, y1, x2, y2 = pdb.gimp_selection_bounds(image)  # selection in image coords
x0, y0 = pdb.gimp_drawable_offsets(layer)                     # layer origin in image coords
pdb.gimp_layer_resize(layer, x2 - x1, y2 - y1, x0 - x1, y0 - y1)
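Since the end goal is to crop several layers at once, the per-layer arithmetic can be factored into a plain helper. This is a sketch of mine, not part of the original answer; the function name is an assumption, and the commented pdb calls need a running GIMP:

```python
def layer_crop_args(layer_offset, selection_bounds):
    """Map a layer offset and selection bounds to gimp_layer_resize arguments.

    layer_offset     -- (x0, y0) from pdb.gimp_drawable_offsets(layer)
    selection_bounds -- (x1, y1, x2, y2) from pdb.gimp_selection_bounds(image)
    Returns (new_width, new_height, offset_x, offset_y).
    """
    x0, y0 = layer_offset
    x1, y1, x2, y2 = selection_bounds
    return (x2 - x1, y2 - y1, x0 - x1, y0 - y1)

# Inside the plug-in (requires GIMP's pdb):
# non_empty, x1, y1, x2, y2 = pdb.gimp_selection_bounds(image)
# if non_empty:
#     for layer in image.layers:
#         x0, y0 = pdb.gimp_drawable_offsets(layer)
#         pdb.gimp_layer_resize(layer, *layer_crop_args((x0, y0), (x1, y1, x2, y2)))
```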

Related

Placing a shape inside another shape using opencv

I have two images and I need to place the second image inside the first. The second image can be resized, rotated or skewed so that it covers as large an area of the first image as possible. As an example, in the figure shown below, the green circle needs to be placed inside the blue shape:
Here the green circle is transformed such that it covers a larger area. Another example is shown below:
Note that there may be multiple valid results; any similar result is acceptable, as shown in the above example.
How do I solve this problem?
Thanks in advance!
I tested the idea I mentioned earlier in the comments and the output is fairly good. It could be improved further, but that takes time. The final code is long and depends on one of my old personal projects, so I will not share it here; instead, I will explain step by step how I wrote the algorithm. Note that I have tested the algorithm many times, but it is not yet 100% accurate.
For N iterations, do this:
1. Copy the shape.
2. Transform it randomly.
3. Put the shape on the background.
4. If the shape exceeds the background, the candidate is not
   acceptable: go back to step 1. Otherwise, continue to step 5.
5. Calculate the width, height and number of shape pixels.
6. Keep a list of the best candidates and compare these three
   parameters (W, H, Pixels) with the members of the list. If a
   better candidate is found, save it.
I set the value of N to 5,000. The larger the number, the slower the algorithm runs, but the better the result.
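The loop above can be sketched in a simplified, NumPy-only form. This is my illustration, not the author's code: it uses only random scaling and translation instead of warpPerspective, works on boolean masks, and the function and variable names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def best_placement(back_mask, shape_mask, n_iter=5000):
    """Random-search sketch of the steps above on boolean masks.
    Keeps the transformed copy of the shape that covers the most
    pixels while staying entirely inside the background."""
    bH, bW = back_mask.shape
    best = None  # (pixel_count, top, left, transformed_shape)
    for _ in range(n_iter):
        # 1-2. copy the shape and transform it (here: random scaling)
        scale = rng.uniform(0.2, 1.0)
        h = max(1, int(shape_mask.shape[0] * scale))
        w = max(1, int(shape_mask.shape[1] * scale))
        if h > bH or w > bW:
            continue
        ys = (np.arange(h) / scale).astype(int)
        xs = (np.arange(w) / scale).astype(int)
        cand = shape_mask[ys][:, xs]
        # 3. put it on the background at a random position
        top = int(rng.integers(0, bH - h + 1))
        left = int(rng.integers(0, bW - w + 1))
        # 4. reject candidates with pixels outside the background
        if np.any(cand & ~back_mask[top:top + h, left:left + w]):
            continue
        # 5-6. score by covered pixels and keep the best candidate
        pix = int(cand.sum())
        if best is None or pix > best[0]:
            best = (pix, top, left, cand)
    return best
```

A real implementation would score on width and height as well, and use warpPerspective for the transform, as in the code below.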
You can use anything for Transform. Mirror, Rotate, Shear, Scale, Resize, etc. But I used warpPerspective for this one.
import sys
import random

import cv2
import numpy as np

im1 = cv2.imread(sys.path[0]+'/Back.png')   # background
im2 = cv2.imread(sys.path[0]+'/Shape.png')  # shape to place
bH, bW = im1.shape[:2]
sH, sW = im2.shape[:2]
# TopLeft, TopRight, BottomRight, BottomLeft of the shape
_inp = np.float32([[0, 0], [sW, 0], [sW, sH], [0, sH]])
# Random pivot inside the shape; the corners are jittered around it
cx = random.randint(5, sW-5)
cy = random.randint(5, sH-5)
o = 0
# Randomly transformed output corners
_out = np.float32([
    [random.randint(-o, cx-1), random.randint(1-o, cy-1)],
    [random.randint(cx+1, sW+o), random.randint(1-o, cy-1)],
    [random.randint(cx+1, sW+o), random.randint(cy+1, sH+o)],
    [random.randint(-o, cx-1), random.randint(cy+1, sH+o)]
])
# Warp the shape onto a background-sized canvas; note that
# dsize is (width, height) and the source image is im2
M = cv2.getPerspectiveTransform(_inp, _out)
t = cv2.warpPerspective(im2, M, (bW, bH))
You can use countNonZero to find the number of pixels and findContours and boundingRect to find the shape size.
def getSize(msk):
    # The largest contour gives the candidate's bounding box
    cnts, _ = cv2.findContours(msk, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    cnts.sort(key=lambda p: max(cv2.boundingRect(p)[2], cv2.boundingRect(p)[3]),
              reverse=True)
    w, h = 0, 0
    if len(cnts) > 0:
        _, _, w, h = cv2.boundingRect(cnts[0])
    pix = cv2.countNonZero(msk)
    return pix, w, h
To find the overlap of the background and the shape you can do something like this:
make masks of the background and the shape and use bitwise operations. Change this section according to the software you wrote; this is just an example :)
mskMix = cv2.bitwise_and(mskBack, mskShape)
mskMix = cv2.bitwise_xor(mskMix, mskShape)
isCandidate = not np.any(mskMix == 255)
For example, this is not a candidate answer, because if you look closely at the image on the right, you will notice that the shape has exceeded the background.
I just tested the circle with 4 different backgrounds; And the results:
After 4879 Iterations:
After 1587 Iterations:
After 4621 Iterations:
After 4574 Iterations:
A few additional points: if you use a method like medianBlur to suppress the noise in the background mask and the shape mask, you may find a better solution.
I suggest you read about Evolutionary Computation, Metaheuristic and Soft Computing algorithms for better understanding of this algorithm :)

Find and reject outliers of circles in individual video frames - OpenCV

Circles with outlier
Link to the picture I am working with is above. I am working with OpenCV and C#. I have a video running at 30 FPS and I am trying to find the circles using the Hough algorithm (varying levels of light in the video mean I have to adapt the parameters on the fly). The picture shows the problem I am having: depending on how the camera moves, the algorithm may mark a circle where there is none.
What I would like to do is find and disregard these false circles as outliers using the point list the program generates of each circle's center point location (list is below). Basically, I would rather the circle not be displayed if it has a low likelihood of being correct. I was thinking of writing a function that uses the population standard deviation of each circle's location (circles are relatively uniform distance apart in video) and disregarding circles that are more than two or three standard deviations off in that frame.
My biggest limitation is that this calculation needs to be run for each frame so I need to make it as efficient as possible (several things going on in the background which are also consuming CPU cycles). I found a function on Stack Overflow to calculate Standard Deviation which looks like it may work, but it's only for a single value. I need to figure out how to apply it to coordinates or just think of another way to solve the problem. Here is the code and the point list:
public static double StandardDeviation(this IEnumerable<double> values)
{
    double avg = values.Average();
    return Math.Sqrt(values.Average(v => Math.Pow(v - avg, 2)));
}
List<PointF> pointList = new List<PointF>() { new PointF(122.5F, 157.5F),
new PointF(77.5F, 232.5F),
new PointF(167.5F, 237.5F),
new PointF(42.5F, 152.5F),
new PointF(172.5F, 82.5F),
new PointF(212.5F, 162.5F),
new PointF(257.5F, 242.5F),
new PointF(122.5F, 307.5F),
new PointF(87.5F, 82.5F),
new PointF(32.5F, 302.5F),
new PointF(207.5F, 317.5F),
new PointF(347.5F, 247.5F),
new PointF(442.5F, 97.5F),
new PointF(402.5F, 17.5F),
new PointF(137.5F, 7.5F),
new PointF(312.5F, 167.5F),
new PointF(297.5F, 322.5F),
new PointF(397.5F, 172.5F),
new PointF(437.5F, 247.5F),
new PointF(387.5F, 322.5F),
new PointF(352.5F, 87.5F),
new PointF(312.5F, 7.5F),
new PointF(272.5F, 82.5F),
new PointF(222.5F, 7.5F),
new PointF(77.5F, 372.5F),
new PointF(477.5F, 322.5F) };
You want the root mean square distance around the centers of the circles. This answer from the OpenCV QA forum has a simple solution:
Use both x and y coordinates; loop over your points and accumulate:
MEAN_X += pow(x - x0, 2);
MEAN_Y += pow(y - y0, 2);
TOTAL_RMS = sqrt((MEAN_X + MEAN_Y) / n);
x, y: coordinates from image2
x0, y0: coordinates from image1
n: number of points
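In Python for brevity (the arithmetic ports directly to the C# StandardDeviation pattern above), the formula and a matching outlier filter could look like this; the function names and the k threshold are my choices, not part of the original answer:

```python
import math

def total_rms(pts1, pts2):
    """TOTAL_RMS from the formula above: the RMS displacement between
    paired centers (x0, y0) in one frame and (x, y) in the next."""
    n = len(pts1)
    sum_x = sum((x - x0) ** 2 for (x0, _), (x, _) in zip(pts1, pts2))
    sum_y = sum((y - y0) ** 2 for (_, y0), (_, y) in zip(pts1, pts2))
    return math.sqrt((sum_x + sum_y) / n)

def outliers(pts1, pts2, k=2.0):
    """Indices of circles that moved more than k times the RMS
    displacement between frames -- likely false detections."""
    rms = total_rms(pts1, pts2)
    return [i for i, ((x0, y0), (x, y)) in enumerate(zip(pts1, pts2))
            if math.hypot(x - x0, y - y0) > k * rms]
```

Both functions are O(n) per frame, so they should fit the per-frame CPU budget.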

getRectSubPix and borderInterpolate in OpenCV

The documentation for OpenCVs getRectSubPix() function:
C++: void getRectSubPix(InputArray image, Size patchSize,
Point2f center, OutputArray patch, int patchType=-1 )
contains the statement:
While the center of the rectangle must be inside the image,
parts of the rectangle may be outside. In this case,
the replication border mode (see borderInterpolate() )
is used to extrapolate the pixel values outside of the image.
But I can't see a way to set the borderInterpolate mode in getRectSubPix. Many other OpenCV functions (boxFilter, copyMakeBorder, ...) allow you to pass in the borderInterpolate enum, but not getRectSubPix.
Is this just a documentation error?
The statement "replication border mode (see borderInterpolate() ) is used to extrapolate the pixel values" clearly states that it uses the predefined mode BORDER_REPLICATE to estimate the pixels outside the image boundary. You cannot use other border modes like BORDER_REFLECT, BORDER_WRAP, BORDER_CONSTANT, etc.
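To see what replication means in practice, here is a NumPy sketch of mine (function name is an assumption) that mimics the integer-pixel part of getRectSubPix by clamping out-of-range indices to the nearest edge pixel, which is exactly what BORDER_REPLICATE does; the real function additionally interpolates bilinearly for sub-pixel centers:

```python
import numpy as np

def rect_subpix_replicate(img, patch_w, patch_h, center):
    """Extract a patch around center, replicating edge pixels
    for any coordinates that fall outside the image."""
    cx, cy = center                      # (x, y), like Point2f
    x0 = int(round(cx)) - patch_w // 2
    y0 = int(round(cy)) - patch_h // 2
    # Clamping the index ranges reproduces BORDER_REPLICATE
    ys = np.clip(np.arange(y0, y0 + patch_h), 0, img.shape[0] - 1)
    xs = np.clip(np.arange(x0, x0 + patch_w), 0, img.shape[1] - 1)
    return img[np.ix_(ys, xs)]
```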

TeeChart Custom Legend tool

I want to show a table for a Box Plot containing values such as mean, median, S.D., range, etc.
The Data Table tool shows only X and X2 data and doesn't allow the data to be customized. I am trying to use the Custom Legend tool, with which we can create a table by specifying grid rows and columns. Can anyone let me know how we can enter data into the table?
Thanks,
Akshay
If I'm not wrong, you are using VC++. The CustomLegend tool is a quite new tool and I'm afraid there are some features missing in it for VC++.
I've added it to the wish list to be implemented in future releases (TA05015410/B395).
In the meanwhile, note TeeChart ActiveX supports custom drawing so you can manually draw your table if the other tools in the component don't allow you to draw what you exactly want to.
Custom drawing techniques basically consist of a set of methods and properties (set the Canvas Pen, Brush and Font, and draw lines, shapes or texts) to draw directly onto the canvas. These methods are commonly called in the OnAfterDraw event so the custom drawing is redone after each repaint.
You can find examples written in VC++ under the \Examples\Visual C++\Version 6\ folder in your TeeChart ActiveX installation. Concretely, you can see a simple example of how to use custom drawing techniques in the Dragging Points project. In the DraggingDlg.cpp file you can see how some custom drawing techniques are used in the OnAfterDraw method:
void CDraggingDlg::OnAfterDrawTChart()
{
    // Draw a white circle around the clicked pyramid...
    if (-1 != m_ClickedBar)
    {
        CCanvas aCanvas = m_ctrlChart.GetCanvas();
        CPen1 aPen = aCanvas.GetPen();
        aPen.SetColor(RGB(255, 255, 255));
        aPen.SetWidth(1);
        aPen.SetStyle(psDot);
        aCanvas.GetBrush().SetStyle(bsClear);
        int x = m_ctrlChart.Series(0).CalcXPos(m_ClickedBar);
        int y = m_ctrlChart.Series(0).CalcYPos(m_ClickedBar);
        aCanvas.Ellipse(x, y, x + 40, y + 40);
    }
}

How to add pixels in EmguCV

Hi I'm using EmguCV and I enjoy programming with it.
However I'm wondering whether there is an elegant way to add two pixels individually.
To add images, you can use CvInvoke.Add(), but for individual pixel operations you seem to have to write it in an ugly way. Say you have p, p1 and p2 as EmguCV::Bgr; you have to write:
p = new Bgr(p1.b + p2.b, p1.g + p2.g, p1.r + p2.r);
I really hate this and tried to write an operator for this. But this is apparently impossible since operator overloading must be in the host class.
Is there any way to do this elegantly?
================Edit================
What I want to do is to calculate the summation of the pixels in an image. So the basic operation in this is to add pixels, or Bgr class.
Let's suppose you have two images img1 and img2:
If you want to add them you can do img3 = img1 + img2.
If you simply want the summation of each color channel on a single image img1 you can do:
Bgr sums = img1.GetSum();
double TotalVal = sums.Blue + sums.Green + sums.Red;
Hope this helps,
Luca