How to make ImageTransformation produce an anamorphic version of an image

I'm experimenting with the ImageTransformation function to try to make anamorphic versions of images, but with limited progress so far. I'm aiming for the kind of result you get from an image reflected in a cylindrical mirror, where the image curves around the central mirror for about 270 degrees. The Wikipedia article has a couple of neat examples (and I borrowed Holbein's skull from them too).
i = Import["../Desktop/Holbein_Skull.jpg"];
i = ImageResize[i, 120]
f[x_, y_] := {(2 (y - 0.3) Cos [1.5 x]), (2 (y - 0.3) Sin [1.5 x])};
ImageTransformation[i, f[#[[1]], #[[2]]] &, Padding -> White]
But I can't persuade Mathematica to show me the entire image, or to bend it correctly. The anamorphic image should wrap right round the mirror placed "inside" the centre of the image, but it won't. I found suitable values for the constants by putting the code inside a Manipulate (and turning the resolution down :). I'm using the formula:
x1 = a(y + b) cos(kx)
y1 = a(y + b) sin(kx)
Any help producing a better result would be greatly appreciated!

In ImageTransformation[img, f], the function f is such that a point {x, y} in the resulting image corresponds to f[{x, y}] in img. Since the resulting image is basically the polar transformation of img, f should be the inverse polar transformation, so you could do something like
anamorphic[img_, angle_: 270 Degree] :=
Module[{dim = ImageDimensions[img], rInner = 1, rOuter},
rOuter = rInner (1 + angle dim[[2]]/dim[[1]]);
ImageTransformation[img,
Function[{pt}, {ArcTan[-#2, #1] & @@ pt, Norm[pt]}],
DataRange -> {{-angle/2, angle/2}, {rInner, rOuter}},
PlotRange -> {{-rOuter, rOuter}, {-rOuter, rOuter}},
Padding -> White
]
]
The resulting image looks something like this:
anamorphic[ExampleData[{"TestImage", "Lena"}]]
Note that you can get a similar result with ParametricPlot and TextureCoordinateFunction, e.g.
anamorphic2[img_Image, angle_: 270 Degree] :=
Module[{rInner = 1,rOuter},
rOuter = rInner (1 + angle #2/#1 & @@ ImageDimensions[img]);
ParametricPlot[{r Sin[t], -r Cos[t]}, {t, -angle/2, angle/2},
{r, rInner, rOuter},
TextureCoordinateFunction -> ({#3, #4} &),
PlotStyle -> {Opacity[1], Texture[img]},
Mesh -> None, Axes -> False,
BoundaryStyle -> None,
Frame -> False
]
]
anamorphic2[ExampleData[{"TestImage", "Lena"}]]
Edit
In answer to Mr.Wizard's question, if you don't have access to ImageTransformation or Texture you could transform the image data by hand by doing something like
anamorph3[img_, angle_: 270 Degree, imgWidth_: 512] :=
Module[{data, f, matrix, dim, rOuter, rInner = 1.},
dim = ImageDimensions[img];
rOuter = rInner (1 + angle #2/#1 & @@ dim);
data = Table[
ListInterpolation[#[[All, All, i]],
{{rOuter, rInner}, {-angle/2, angle/2}}], {i, 3}] &@ImageData[img];
f[i_, j_] := If[Abs[j] <= angle/2 && rInner <= i <= rOuter,
Through[data[i, j]], {1., 1., 1.}];
Image@Table[f[Sqrt[i^2 + j^2], ArcTan[i, -j]],
{i, -rOuter, rOuter, 2 rOuter/(imgWidth - 1)},
{j, -rOuter, rOuter, 2 rOuter/(imgWidth - 1)}]]
Note that this assumes that img has three channels. If the image has fewer or more channels, you need to adapt the code.
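If you don't have Mathematica at all, the same inverse-polar sampling can be sketched in Python with NumPy and Pillow. This is an illustration added here, not part of the original answer; the orientation convention and the white padding are assumptions, so the output may be mirrored or rotated relative to the Mathematica version.
import numpy as np
from PIL import Image

def anamorph_sketch(img, angle=1.5 * np.pi, out_width=512, r_inner=1.0):
    # Map a rectangular image onto an annulus by sampling the source at (radius, angle).
    src = np.asarray(img.convert("RGB"), dtype=float) / 255.0
    h, w = src.shape[:2]
    r_outer = r_inner * (1 + angle * h / w)
    # Coordinates of every output pixel on a square canvas centred on the mirror.
    ys, xs = np.mgrid[-r_outer:r_outer:out_width * 1j, -r_outer:r_outer:out_width * 1j]
    r = np.hypot(xs, ys)
    t = np.arctan2(xs, -ys)                    # angle measured from the downward vertical
    inside = (np.abs(t) <= angle / 2) & (r >= r_inner) & (r <= r_outer)
    # The outer radius maps to the top row of the source, the angle maps to the column.
    rows = np.clip(((r_outer - r) / (r_outer - r_inner) * (h - 1)).astype(int), 0, h - 1)
    cols = np.clip(((t + angle / 2) / angle * (w - 1)).astype(int), 0, w - 1)
    out = np.ones((out_width, out_width, 3))   # white padding outside the annulus
    out[inside] = src[rows[inside], cols[inside]]
    return Image.fromarray((out * 255).astype(np.uint8))
Usage would be something like anamorph_sketch(Image.open("Holbein_Skull.jpg")), with the file name standing in for whatever image you want to wrap.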

Related

How to slice an image by table border

I have many png files like this:
I want to slice the image into 48 (=6x8) small image files for the 48 cells separated by the table borders. That is, I would like to have files img11.png, ..., img68.png, where img11.png contains the (1,1) "1.4x4x8" cell, img12.png the (1,2) "M/T" cell, img13.png the "550,000" cell, ..., img68.png the bottom right "641,500" cell.
I want to do it because I thought it would improve the performance of tesseract, which is not satisfactory because many of my image files have much poorer quality than shown above. Also, margins and sizes are diverse, and some images contain non-English characters and images.
Would there be software packages to detect the table borders and slice the image into m x n images? I am new to this area. I have read How to find table like structure in image but it's way beyond my ability. I am willing to learn, though.
Thanks for your help.
I'm using R; Bilal's suggestion (thanks) led me to the following.
Step 1: Convert the image to grayscale.
library(magick)
x <- image_read('https://i.stack.imgur.com/plBvs.png')
y <- image_convert(x, colorspace='Gray')
a <- as.integer(y[[1]])[,,1]
Step 2: Convert "dark" to 1 and "light" to 0.
w <- ifelse(a>190, 0, 1) # adjust 190
Step 3: Detect the horizontal and vertical lines.
ypos <- which(rowMeans(w) > .95) # adjust .95
xpos <- which(colMeans(w) > .95) # adjust .95
Step 4: Crop the original image (x).
xpos <- c(0,xpos, ncol(a))
ypos <- c(0,ypos, nrow(a))
outdir <- "cropped"
dir.create(outdir)
m <- 0
for (i in 1:(length(ypos)-1)) {
dy <- ypos[i+1]-ypos[i]
n <- 0
if (dy < 16) next # skip if too short
m <- m+1
for (j in 1:(length(xpos)-1)) {
dx <- xpos[j+1]-xpos[j]
if (dx < 16) next # skip if too narrow
n <- n+1
geom <- sprintf("%dx%d+%d+%d", dx, dy, xpos[j], ypos[i])
# cat(sprintf('%2d %2d: %s\n', m, n, geom))
cropped <- image_crop(x, geom)
outfile <- file.path(outdir, sprintf('%02d_%02d.png', m, n))
image_write(cropped, outfile, format="png")
}
}
The cropped (1,1) image looks like this:

Caffe - Concat layer input and output

I read about Concat layer on Caffe website. However, I don't know if my understanding of it is right.
Let's say that as an input I have two layers that can be described as W1 x H1 x D1 and W2 x H2 x D2, where W is the width, H is height and D is depth.
Thus, as I understand it, with Axis set to 0 the output will be (W1 + W2) x (H1 + H2) x D, where D = D1 = D2.
With Axis set to 1 the output will be W x H x (D1 + D2), where H = H1 = H2 and W = W1 = W2.
Is my understanding correct? If no I would be grateful for an explanation.
I'm afraid you are a bit off...
Have a look at the Caffe documentation for the Concat layer.
Usually, data in Caffe is stored in 4D "blobs": B x C x H x W (that is, batch size by channel/feature/depth by height by width).
Now if you have two blobs B1 x C1 x H1 x W1 and B2 x C2 x H2 x W2, you can concatenate them along axis: 1 (the channel dimension) to form an output blob with C = C1 + C2. This is only possible if B1 == B2, H1 == H2 and W1 == W2, and it results in a blob of shape B1 x (C1 + C2) x H1 x W1. Concatenating along axis: 0 instead stacks the blobs along the batch dimension, giving (B1 + B2) x C x H x W, and requires the channel, height and width dimensions to match.
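As a quick illustration of the shape rule (using NumPy arrays rather than real Caffe blobs, so this is just a sketch of the bookkeeping, not Caffe API code):
import numpy as np

# Two "blobs" in Caffe's B x C x H x W layout; batch, height and width agree.
b1 = np.zeros((4, 3, 32, 32))   # B1 = 4, C1 = 3
b2 = np.zeros((4, 5, 32, 32))   # B2 = 4, C2 = 5

out = np.concatenate((b1, b2), axis=1)   # concatenate along the channel axis
print(out.shape)                         # (4, 8, 32, 32), i.e. C = C1 + C2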

How can I fix this issue with my Mandelbrot fractal generator?

I've been working on a project that renders a Mandelbrot fractal. For those of you who don't know, it is generated by iterating the following function, where c is the point on the complex plane:
function f(c, z) return z^2 + c end
Iterating through that function produces the following fractal (ignore the color):
When you change the function to this (z raised to the third power),
function f(c, z) return z^3 + c end
the fractal should render like so (again, the color doesn't matter):
(source: uoguelph.ca)
However, when I raised z to the power of 3, I got an image extremely similar to the one produced when z is raised to the power of 2. How can I make the fractal render correctly? This is the code where the iterations are done (the variables real and imaginary simply scale the screen from -2 to 2):
--loop through each pixel, col = column, row = row
local real = (col - zoomCol) * 4 / width
local imaginary = (row - zoomRow) * 4 / width
local z, c, iter = 0, 0, 0
while math.sqrt(z^2 + c^2) <= 2 and iter < maxIter do
local zNew = z^2 - c^2 + real
c = 2*z*c + imaginary
z = zNew
iter = iter + 1
end
So I recently decided to remake a Mandelbrot fractal generator, and it was MUCH more successful than my attempt last time, as my programming skills have increased with practice.
I decided to generalize the Mandelbrot function using recursion, for anyone who wants it. So, for example, you can do f(z, c) = z^2 + c or f(z, c) = z^3 + c.
Here it is for anyone that may need it:
-- Raise the complex number r + i*sqrt(-1) to the integer power pow and add
-- the constant cr + ci*sqrt(-1). br, bi carry the original base through the
-- recursion so each step multiplies by z rather than squaring the accumulator.
function raise(r, i, cr, ci, pow, br, bi)
    br, bi = br or r, bi or i
    if pow == 1 then
        return r + cr, i + ci
    end
    return raise(r*br - i*bi, r*bi + i*br, cr, ci, pow - 1, br, bi)
end
and it's used like this:
r, i = raise(r, i, CONSTANT_REAL_PART, CONSTANT_IMAG_PART, POWER)
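If you want to sanity-check the hand-rolled real/imaginary arithmetic, the same generalized ("multibrot") iteration is easy to express in a language with a built-in complex type. A minimal Python sketch, added here for comparison and not part of the original Lua project:
# Escape-time iteration for z -> z**n + c, starting from z = 0.
def escape_count(c, n=3, max_iter=100):
    z = 0j
    for k in range(max_iter):
        z = z**n + c          # raise z to the chosen power and add the constant
        if abs(z) > 2:        # same escape radius as the Lua code above
            return k
    return max_iter

print(escape_count(2 + 0j, n=3))   # prints 1: a point this far outside the set escapes almost immediately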

Create a path in a grid in prolog

I have to create a path between two given points in a grid in Prolog. The code I have so far is:
createPath(GridSize, BeginPosition, EndPosition, VisitedPoints, Path):-
nextStep(BeginPosition, NextStep, GridSize),
(
NextStep \== EndPosition ->
nonmember(NextStep, VisitedPoints),
add(NextStep, VisitedPoints, NewVisitedPoints),
add(NextStep, Path, NewPath),
createPath(GridSize, NextStep, EndPosition, NewVisitedPoints, NewPath)
;
???
).
A little bit of explanation of my code:
GridSize is just an integer. If it is 2, the grid is a 2x2 grid. So all the grids are square.
The BeginPosition and EndPosition are shown like this: pos(X,Y).
The predicate nextStep looks for a valid neighbor of a given position. The values of X and Y have to be between 1 and the grid size. I've declared four clauses of nextStep: X + 1, X - 1, Y + 1 and Y - 1.
This is the code:
nextStep(pos(X,Y),pos(X1,Y),GridSize):-
X1 is X + 1,
X1 =< GridSize.
nextStep(pos(X,Y),pos(X1,Y),_):-
X1 is X - 1,
X1 >= 1.
nextStep(pos(X,Y),pos(X,Y1),GridSize):-
Y1 is Y + 1,
Y1 =< GridSize.
nextStep(pos(X,Y),pos(X,Y1),_):-
Y1 is Y - 1,
Y1 >= 1.
nonmember returns true if a given element doesn't occur in a given list.
add adds an element to a given list, and returns the list with that element in it.
Another thing to know about VisitedPoints: Initially the BeginPosition and EndPosition are stored in that list. For example, if I want to find a path in a 2x2 grid, and I have to avoid point pos(2,1), then I will call the function like this:
createPath(2, pos(1,1), pos(2,2), [pos(1,1),pos(2,2),pos(2,1)], X).
The result should be:
X = [pos(1,2)]
Because that is the point needed to connect pos(1,1) and pos(2,2).
My question is: how can I stop the recursion when NextStep == EndPosition? In other words, what do I have to type at the location of the '???'? Or am I handling this problem the wrong way?
I'm pretty new to Prolog, and making the step from object-oriented languages to this is pretty hard.
I hope somebody can answer my question.
Kind regards,
Walle
I think you just placed the 'assignment' to Path in the wrong place:
createPath(GridSize, BeginPosition, EndPosition, VisitedPoints, Path):-
nextStep(BeginPosition, NextStep, GridSize),
(
NextStep \== EndPosition ->
nonmember(NextStep, VisitedPoints),
add(NextStep, VisitedPoints, NewVisitedPoints),
% add(NextStep, Path, NewPath),
% createPath(GridSize, NextStep, EndPosition, NewVisitedPoints, NewPath)
createPath(GridSize, NextStep, EndPosition, NewVisitedPoints, Path)
;
% ???
% bind on success the output variable, maybe add EndPosition
Path = VisitedPoints
).
Maybe this is not entirely worth an answer, but a comment would be a bit 'blurry'

how to generate such an image in Mathematica

I am thinking of processing an image in Mathematica to generate something like this, given its powerful image processing capabilities. Could anyone give some idea as to how to do this?
Thanks a lot.
Here's one version, using textures. Of course it doesn't act as a real lens; it just repeats portions of the image in an overlapping fashion.
t = CurrentImage[];
(* square off the image to avoid distortion *)
t = ImageCrop[t, {240,240}];
n = 20;
Graphics[{Texture[t],
Table[
Polygon[
Table[h*{Sqrt[3]/2, 0} + (g - h)*{Sqrt[3]/4, 3/4} + {Sin[a], Cos[a]},
{a, 0., 2*Pi - Pi/3, Pi/3}
],
VertexTextureCoordinates -> Transpose[{
Rescale[
(1/4)*Sqrt[3]*(g - h) + (Sqrt[3]*h)/2.,
{-n/2, n/2},
{0, 1}
] + {0, Sqrt[3]/2, Sqrt[3]/2, 0, -(Sqrt[3]/2), -(Sqrt[3]/2)}/(n/2),
Rescale[
(3.*(g - h))/4,
{-n/2, n/2},
{0, 1}
] + {1, 1/2, -(1/2), -1, -(1/2), 1/2}/(n/2)
}]
],
{h, -n, n, 2},
{g, -n, n, 2}
]
},
PlotRange -> n/2 - 1
]
Here's the above code applied to the standard test image (Lena):
"I think this could be well approximated with a calculated offset for the image in each cell" - Mr.Wizard
Exactly! As you can see from the reconstructed image, there is no lens effect and the tiles are just displacements.
What you need is a hexagonal tessellation and a simple algorithm to calculate the displacement of each hexagon from some chosen central point (width/2, height/2).
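To make the displacement idea concrete, here is a small Python sketch (the tile radius, the strength factor and the helper name are all made up for illustration): hexagon centres are laid out on a staggered grid, and each tile's copy of the image is shifted towards the chosen central point (width/2, height/2).
import math

def hex_tile_offsets(width, height, tile_radius=24, strength=0.35):
    # Centres of a hexagonal tiling plus, for each centre, the offset that
    # displaces its copy of the image towards the middle of the picture.
    cx, cy = width / 2, height / 2
    dx = tile_radius * math.sqrt(3)        # horizontal spacing of hexagon centres
    dy = tile_radius * 1.5                 # vertical spacing of hexagon centres
    tiles = []
    row, y = 0, 0.0
    while y < height:
        x = dx / 2 if row % 2 else 0.0     # stagger every other row
        while x < width:
            tiles.append(((x, y), (strength * (cx - x), strength * (cy - y))))
            x += dx
        y += dy
        row += 1
    return tiles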
