I want to apply a homogeneous deformation to a single block by specifying the displacement field u = {ux,uy,uz}, where a bold letter indicates a vector and the terms in the curly brackets are its Cartesian components.
Specifically, the time-dependent displacement field is given by:
ux = A sin(wt) X + q [1-B sin(wt)] sin(2wt) Y
uy = -B sin(wt) Y
uz = C sin(wt) Z
where {X,Y,Z} are the components of the position vector of a material point, t stands for time, and the remaining constants are known numbers.
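For concreteness, here is the field written out as a small Python sketch; the values of A, B, C, q and w below are placeholders, not my actual constants, and this is only to show what I want to impose, not ABAQUS input:

import numpy as np

# Placeholder constants; the real ones are known numbers.
A, B, C, q, w = 0.01, 0.02, 0.005, 0.5, 2.0 * np.pi

def displacement(X, Y, Z, t):
    # Evaluate the displacement components at a material point (X, Y, Z) and time t.
    ux = A * np.sin(w * t) * X + q * (1.0 - B * np.sin(w * t)) * np.sin(2.0 * w * t) * Y
    uy = -B * np.sin(w * t) * Y
    uz = C * np.sin(w * t) * Z
    return ux, uy, uz

print(displacement(1.0, 1.0, 1.0, 0.1))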
How can I do that using an input file or directly through ABAQUS?
Thanks!
There are 9 parameters in the fundamental matrix to relate the pixel co-ordinates of left and right images but only 7 degrees of freedom (DOF).
The reasoning for this on several pages that I've searched says:
Homogeneous equations mean we lose a degree of freedom
The determinant of F = 0, therefore we lose another degree of freedom.
I don't understand why those 2 reasons mean we lose 2 DOF - can someone explain it?
We initially have 9 DOF because the fundamental matrix is composed of 9 parameters, which implies that we need 9 corresponding points to compute the fundamental matrix (F). But because of the following two reasons, we only need 7 corresponding points.
Reason 1
We lose 1 DOF because we are using homogeneous coordinates. This is basically a way to represent nD points in vector form by adding an extra dimension, e.g. a 2D point (0,2) can be represented as [0,2,1], in general [x,y,1]. There are useful properties when using homogeneous coordinates with 2D/3D transformations, but I'm going to assume you know that.
Now given the expression p and p' representing pixel coordinates:
p'=[u',v',1] and p=[u,v,1]
the fundamental matrix:
F = [f1,f2,f3]
[f4,f5,f6]
[f7,f8,f9]
and fundamental matrix equation:
p'^T F p = 0
when we multiply this expression out in algebraic form, we get the following:
uu'f1 + vu'f2 + u'f3 + uv'f4 + vv'f5 + v'f6 + uf7 + vf8 + f9 = 0.
Written as a homogeneous linear system Af = 0 (basically the factorization of the above formula), we get two components, A and f.
A:
[uu',vu',u', uv',vv',v',u,v,1]
f (f is essentially the fundamental matrix in vector form):
[f1,f2,f3,f4,f5,f6,f7,f8,f9]
Now, because the system Af = 0 is homogeneous, f is only defined up to an overall scale, so one of the 9 parameters can be fixed (say, set to 1), leaving 8 unknowns; therefore we only need 8 equations now.
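A minimal NumPy sketch of this setup; the correspondences below are made-up numbers, just to show the row layout of A:

import numpy as np

# Each correspondence (u, v) <-> (u', v') contributes one row
# [uu', vu', u', uv', vv', v', u, v, 1] to the homogeneous system A f = 0.
pts1 = np.array([[0.5, 1.0], [2.0, 0.3], [1.1, 2.2]])   # (u, v) in image 1
pts2 = np.array([[0.6, 1.1], [2.1, 0.2], [1.0, 2.4]])   # (u', v') in image 2

rows = []
for (u, v), (u2, v2) in zip(pts1, pts2):
    rows.append([u * u2, v * u2, u2, u * v2, v * v2, v2, u, v, 1.0])
A = np.array(rows)

print(A.shape)   # one row per correspondence, 9 columns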
Reason 2
det F = 0.
A determinant is a value that can be obtained from a square matrix.
I'm not entirely sure about the mathematical details of this property but I can still infer the basic idea, and, hopefully, you can as well.
Basically given some matrix A
A = [a,b,c]
[d,e,f]
[g,h,i]
The determinant can be computed using this formula:
det A = aei+bfg+cdh-ceg-bdi-afh
If we write out the determinant of the fundamental matrix, the algebra looks like this:
F = [f1,f2,f3]
[f4,f5,f6]
[f7,f8,f9]
det F = (f1*f5*f9)+(f2*f6*f7)+(f3*f4*f8)-(f3*f5*f7)-(f2*f4*f9)-(f1*f6*f8)
Now we know the determinant of the fundamental matrix is zero:
det F = (f1*f5*f9)+(f2*f6*f7)+(f3*f4*f8)-(f3*f5*f7)-(f2*f4*f9)-(f1*f6*f8) = 0
So, if we work out only 7 of the 9 parameters of the fundamental matrix, we can work out the last parameter using the above determinant equation.
Therefore the fundamental matrix has 7 DOF.
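As a small sketch of that last step (the values of f1..f8 are made up, and it assumes f1*f5 != f2*f4 so the equation can be solved for f9):

import numpy as np

f1, f2, f3, f4, f5, f6, f7, f8 = 1.0, 0.2, -0.5, 0.3, 2.0, 0.1, -1.0, 0.7

# det F = f1*f5*f9 + f2*f6*f7 + f3*f4*f8 - f3*f5*f7 - f2*f4*f9 - f1*f6*f8 = 0
# is linear in f9, so collect the f9 terms and solve for it:
f9 = -(f2 * f6 * f7 + f3 * f4 * f8 - f3 * f5 * f7 - f1 * f6 * f8) / (f1 * f5 - f2 * f4)

F = np.array([[f1, f2, f3], [f4, f5, f6], [f7, f8, f9]])
print(np.linalg.det(F))   # ~0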
The reasons why F has only 7 degrees of freedom are
F is a 3x3 homogeneous matrix. Homogeneous means there is a scale ambiguity in the matrix, so the scale doesn't matter (as shown in #Curator Corpus 's example). This drops one degree of freedom.
F is a matrix with rank 2. It is not a full rank matrix, so it is singular and its determinant is zero (Proof here). The reason why F is a matrix with rank 2 is that it is mapping a 2D plane (image1) to all the lines (in image 2) that pass through the epipole (of image 2).
Hope it helps.
As for the highest-voted answer by nbro, I think it can be interpreted this way: by reason two, the matrix F has rank 2, so its determinant being zero acts as a constraint on the parameters. So we only need 7 points to determine the rest of the variables (f1-f8) together with that constraint; with 8 equations and 8 variables, there is only one solution. So there are 7 DOF.
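To see the rank-2 property in practice, here is a small NumPy sketch (the matrix below is an arbitrary example, not a real fundamental matrix) of the usual way a rank-2 F is enforced, by zeroing the smallest singular value:

import numpy as np

F_est = np.array([[1.0, 0.2, -0.5],
                  [0.3, 2.0,  0.1],
                  [-1.0, 0.7, 0.4]])

U, S, Vt = np.linalg.svd(F_est)
S[2] = 0.0                          # drop the smallest singular value
F_rank2 = U @ np.diag(S) @ Vt

print(np.linalg.matrix_rank(F_rank2))   # 2
print(np.linalg.det(F_rank2))           # ~0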
Why is it said that "convolution of an image in the spatial domain is equal to multiplication in the frequency domain"?
Could anyone please explain it briefly?
Stack Overflow, unfortunately, doesn't support MathJax, hence it is hard to show the math here.
One way to explain it is that convolution is a linear shift-invariant operator.
As you know, linear time / spatially invariant systems basically do one thing: delay and scaling.
The eigenfunctions of delay and scaling are the harmonic functions (complex exponentials).
This means that, given a signal described by harmonic signals (practically, its Fourier transform), a linear time / spatially invariant operator only scales each component by a complex number (scaling and shifting the phase), which is exactly what you do in the Fourier domain.
It is similar to diagonalization in linear algebra.
For instance, let's think of the filter we apply on the image as an operator, A.
So the output of the system is y = A x.
Suppose A is diagonalizable as A = P^T D P, where D is a diagonal matrix and P P^T = I, namely P is a unitary matrix.
So y = A x = P^T D P x; hence, by defining z = P x and t = P y, we get t = D z, namely we only need to multiply element by element instead of doing the whole matrix multiplication.
If you think of P as the Fourier transform operator, then instead of doing a matrix multiplication you can have an element-wise multiplication in the other domain, the Fourier domain.
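A quick NumPy/SciPy check of the statement (random image and kernel; both are zero-padded to the full output size so the DFT product reproduces the linear convolution):

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
kernel = rng.standard_normal((5, 5))

# Spatial-domain (linear) convolution
spatial = convolve2d(img, kernel, mode='full')

# Frequency-domain: multiply the zero-padded 2-D DFTs, then invert
out_shape = (img.shape[0] + kernel.shape[0] - 1, img.shape[1] + kernel.shape[1] - 1)
freq = np.real(np.fft.ifft2(np.fft.fft2(img, out_shape) * np.fft.fft2(kernel, out_shape)))

print(np.allclose(spatial, freq))   # True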
I have three points: (1,1), (2,3), (3, 3.123). I assume the hypothesis is h(x) = θ1*x + θ2, and I want to do linear regression on the three points. I have two methods to calculate θ:
Method-1: Least Square
import numpy as np
# get an approximate solution using least square
X = np.array([[1,1],[2,1],[3,1]])
y = np.array([1,3,3.123])
theta = np.linalg.lstsq(X, y, rcond=None)[0]
print(theta)
Method-2: Matrix multiplication
We have the following derivation process:
# rank(X)=2, rank(X|y)=3, so there is no exact solution.
print(np.linalg.matrix_rank(X))
print(np.linalg.matrix_rank(np.c_[X, y]))
theta = np.linalg.inv(X.T.dot(X)).dot(X.T.dot(y))
print(theta)
Both method-1 and method-2 give the result [ 1.0615 0.25133333], so it seems that method-2 is equivalent to least squares. But I don't know why; can anyone reveal the underlying principle of their equivalence?
Both approaches are equivalent, because the least squares method is
θ = argmin (Xθ-Y)'(Xθ-Y) = argmin ||Xθ-Y||^2 = argmin ||Xθ-Y||,
which means you try to minimize the length of the vector (Xθ-Y), i.e. the distance between Xθ and Y. X is a constant matrix, so Xθ is a vector in the column space of X. The distance between these two vectors is shortest when Xθ equals the projection of the vector Y onto the column space of X (this is easy to see from a picture). That results in Ŷ = Xθ = X(X'X)^(-1)X'Y, where X(X'X)^(-1)X' is the projection matrix onto the column space of X. After some rearranging you can see that this is equivalent to (X'X)θ = X'Y. You can find an exact proof in any linear algebra book.
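A small NumPy sketch of that projection view, using the same toy data as in the question:

import numpy as np

X = np.array([[1., 1.], [2., 1.], [3., 1.]])
Y = np.array([1., 3., 3.123])

# Normal equations (X'X) theta = X'Y
theta = np.linalg.solve(X.T @ X, X.T @ Y)

# Projection matrix onto the column space of X
P = X @ np.linalg.inv(X.T @ X) @ X.T
Y_hat = P @ Y

print(theta)                            # [1.0615, 0.25133...], same as lstsq
print(np.allclose(Y_hat, X @ theta))    # True: X theta is the projection of Y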
Following is the conversion from spherical to Cartesian coordinates:
X = r cosθ sinΦ
Y = r sinθ sinΦ
Z = r cosΦ
We are using the reverse computation to obtain the spherical coordinates from the Cartesian coordinates, which is defined as
r = √(X^2+Y^2+Z^2)
θ = atan(Y./X)
Φ = atan(√(X^2+Y^2)./Z)
The problem arises when Y and X are zero: θ can then take any arbitrary value, so during the MATLAB computation this results in NaN (not a number), which makes θ discontinuous. Is there any interpolation technique to remove this discontinuity, and how should θ be interpreted in this case?
θ is a matrix evaluated at various points, and it gives the following result: it has jumps and black patches that represent the discontinuity, whereas I need to generate the following image with smooth variation. Please see the obtained θ and the correct θ variation by clicking on the links and suggest some changes.
Discontinuous_Theta_variation
Correct Theta variation
The formulas written here for the Cartesian-to-spherical conversion are correct, but you first need to understand their physical significance.
r is the distance of the point from the origin. θ is the angle from the positive x-axis to the projection of the given point onto the XY plane. And Φ is the angle from the positive z-axis to the line which joins the origin and the given point.
http://www.learningaboutelectronics.com/Articles/Cartesian-rectangular-to-spherical-coordinate-converter-calculator.php#answer
So say a point has X and Y coordinates equal to 0; that means it lies on the z-axis, and hence its projection onto the XY plane is the origin itself. So we cannot exactly determine its angle from the x-axis. But please note that, since the point lies on the z-axis, Φ = 0 or π (depending on whether Z is positive or negative).
So while coding this problem, you may adopt the approach of first checking Φ: if it is 0 or π, then set θ = 0 (by default).
I hope this serves the purpose.
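A sketch of that idea in NumPy (MATLAB's atan2 behaves the same way): using arctan2 instead of atan avoids the 0/0 = NaN case entirely and returns θ = 0 on the z-axis by convention, which matches the default suggested above.

import numpy as np

def cart2sph(X, Y, Z):
    r = np.sqrt(X**2 + Y**2 + Z**2)
    theta = np.arctan2(Y, X)                     # 0 when X = Y = 0, never NaN
    phi = np.arctan2(np.sqrt(X**2 + Y**2), Z)    # 0 or pi on the z-axis
    return r, theta, phi

# A point on the positive z-axis: no NaN, theta defaults to 0, phi = 0
print(cart2sph(np.array([0.0]), np.array([0.0]), np.array([2.0])))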
Consider an image matrix in which I have multiple line segments, and I have information like the start point, end point, length, centroid and slope of all those line segments. In this scenario, how do I find the line segments that are nearest to a particular line segment? Also, once I have the nearest line segments, is it possible to detect rectangles if they exist? An example image is in this link: sample.
The geometry of segment/segment distance is not so simple.
Imagine two line segments in general position, and dilate one of them. I mean, draw two parallel segments at a distance d and two half circles of radius d centered on the endpoints. This is the locus of constant distance d from the segment. You need to find the smallest d such that the other segment is hit.
You can decompose the plane in three areas: the band between the two perpendiculars through the endpoints, and the two outer half planes. You can clip the other segment in these three areas and process the clipped segments separately.
Inside the band, either the segments intersect and the distance is zero, or they don't and the distance is the shortest distance between the line of support of the first segment and the endpoints of the other.
Inside the half planes, the distance is the shortest of the distances between the considered endpoint and both endpoints of the other segment, and the distance between the endpoint and the other segment, provided the endpoint projects inside the other segment.
ALTERNATIVE:
Maybe it is easier to use the parametric equations of the two segments and minimize the (squared) distance, like:
Min(p, q) ((Xa (1-p) + Xb p) - (Xc (1-q) + Xd q))^2 + ((Ya (1-p) + Yb p) - (Yc (1-q) + Yd q))^2 under constraints 0 <= p, q <=1.
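A small sketch of this alternative using SciPy's bounded minimizer; the endpoint coordinates below are made-up example values:

import numpy as np
from scipy.optimize import minimize

A, B = np.array([0., 0.]), np.array([4., 0.])   # first segment
C, D = np.array([1., 3.]), np.array([5., 2.])   # second segment

def sq_dist(pq):
    # Squared distance between a point on AB (parameter p) and a point on CD (parameter q)
    p, q = pq
    return np.sum((A + p * (B - A) - (C + q * (D - C)))**2)

res = minimize(sq_dist, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print(np.sqrt(res.fun))   # shortest distance between the two segments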
First, you have to encode all the points in homogeneous coordinates [x, y, 1]^T, since this creates a symmetric relation between lines and points. Namely, in homogeneous coordinates the intersection of two lines l1 and l2 is the point p = l1 x l2, where x means the cross product. By the same token, the line that passes through two points p1, p2 is l = p1 x p2. The line that a segment lies on can thus be expressed as l = p1 x p2 = [a, b, c]^T. The line equation is then l^T.p = 0, or in Cartesian coordinates a*x + b*y + c = 0.
As for your task, there are two cases:
1. The segments cross, and then their intersection can simply be calculated as l1 x l2;
2. The segments don't cross, and then the closest approach involves one of their 4 endpoints: compute the distance from each endpoint to the other segment and choose the smallest. The distance between a point x0 and the line through x1 and x2 is given by |(x2-x1) x (x1-x0)| / |x2-x1|.
Let the segments be AB and CD, and running parameters along them p and q, such that 0 <= p, q <= 1.
Using vectors, the squared distance between any two points on the segments is given by:
D² = (AC - p AB + q CD)²
Let us minimize this expression by zeroing the derivatives wrt p and q:
AB.(AC - p AB + q CD) = 0
CD.(AC - p AB + q CD) = 0
When AB and CD are not parallel, this implies AC - p AB + q CD = 0, which gives you the intersection point by solving a 2x2 system, and the distance is zero.
But it can turn out that p (or q) falls out of the allowed range, say p < 0 (or p > 1). In this case, we recast the problem with p = 0 (respectively p = 1). This amounts to finding the distance (A, CD) (respectively (B, CD)).
D² = (AC + q CD)²
yielding
CD.(AC + q CD) = 0
even easier.
And if it turns out that q is also out of range, let q < 0, we end up with the distance (A, C):
D² = AC²
Similarly for the other out-of-range cases.
In case of parallel segments, the 2x2 system is indeterminate (both equations are equivalent). You need to solve:
CD.AC - p CD.AB + q CD² = 0
It suffices to try all four combinations with p/q = 0/1 and see if the left-hand side takes different signs. This proves that there exists a solution and the distance is the same as the distance (A, CD). Otherwise, the answer is one of the endpoint-to-endpoint distances.
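A sketch of this procedure in NumPy; the out-of-range and parallel cases are handled by checking each endpoint against the other segment, which is equivalent to the boundary cases described above. The example endpoints at the bottom are made-up values.

import numpy as np

def point_segment_distance(P, S, T):
    # Distance from point P to segment ST (assumes S != T).
    ST = T - S
    t = np.clip(np.dot(P - S, ST) / np.dot(ST, ST), 0.0, 1.0)
    return np.linalg.norm(P - (S + t * ST))

def segment_distance(A, B, C, D):
    AB, CD, AC = B - A, D - C, C - A
    # 2x2 system from zeroing the derivatives of D^2 with respect to p and q
    M = np.array([[np.dot(AB, AB), -np.dot(AB, CD)],
                  [np.dot(AB, CD), -np.dot(CD, CD)]])
    rhs = np.array([np.dot(AB, AC), np.dot(CD, AC)])
    if abs(np.linalg.det(M)) > 1e-12:            # segments not parallel
        p, q = np.linalg.solve(M, rhs)
        if 0.0 <= p <= 1.0 and 0.0 <= q <= 1.0:
            return np.linalg.norm(AC - p * AB + q * CD)   # zero if they intersect
    # Parallel segments or out-of-range parameters: the minimum involves an endpoint
    return min(point_segment_distance(A, C, D), point_segment_distance(B, C, D),
               point_segment_distance(C, A, B), point_segment_distance(D, A, B))

print(segment_distance(np.array([0., 0.]), np.array([4., 0.]),
                       np.array([1., 3.]), np.array([5., 2.])))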