So when I try to map this RGB value onto my data:
image=love.image.newImageData(WIDTH,HEIGHT,"rgba16f")
image:mapPixel(pixelFunction)
image2=love.graphics.newImage(image)
function pixelFunction(x, y, r, g, b, a)
    return 0, 50, 0, 255
end
I get this:
As you can see, this is something like (0,255,0,255), not the RGB value I wanted. In fact, it seems only able to render the maximum red, green, or blue value, making the function pointless.
As one could guess from the fact that only extreme colors are generated, the value 50 is beyond the dynamic range. Using an RGBA representation in fractions of one, (0, 50/255, 0, 1), does result in dark green.
(0,50,0,255) used to work in LÖVE 0.10. According to the wiki it should work in LÖVE 11 with the "rgba16f" format that you seem to set, but it doesn't. Proceed to their bug reports section.
Also, please note that a minimal reproducible example for the question should have been written along these lines:
WIDTH = 300; HEIGHT = 300;
imageData = love.image.newImageData(WIDTH, HEIGHT, 'rgba16f')
function pixelFunction(x, y, r, g, b, a)
    return 0, 50/255, 0, 255
end
imageData:mapPixel(pixelFunction)
image = love.graphics.newImage(imageData)
function love.draw()
    love.graphics.draw(image, 0, 0)
end
And yes, you've botched the order of definition and usage.
I have the following Maxima code:
A[t] := if t = 0 then A0 else (a+b)*A[t-1] + B[t-1] + c;
B[t] := if t = 0 then B0 else (a-b)*B[t-1] + c;
a:0.1;
b:0.1;
c:1;
A0:100;
B0:0;
wxplot2d(A[t], [t, 0, 100]);
The only remotely weird thing I can think of is that recursion equation A depends on recursion equation B. I would think everything else is extremely basic.
But when I run it, I always get the following error repeated multiple times and no plot.
Maxima encountered a Lisp error:
Control stack exhausted (no more space for function call frames).
This is probably due to heavily nested or infinitely recursive function
calls, or a tail call that SBCL cannot or has not optimized away.
Even when I plot from time steps 0 to 1 with wxplot2d(A[t], [t, 0, 1]), which by my count would be only two recursions and one external function reference, I still get the same error. Is there no way to have Maxima plot these equations?
I find that the following seems to work. (The trouble with wxplot2d(A[t], [t, 0, 100]) is that the plot routine evaluates A[t] at values of t for which the t = 0 base case is never reached, so the recursion runs away; building an explicit list over integer t avoids that.)
myvalues: makelist ([t, A[t]], t, 0, 100);
wxplot2d ([discrete, myvalues]);
Just to be clear, A[t] := ..., with square brackets, defines what is called an array function in Maxima, which is a memoizing function (i.e. it remembers previously calculated values). An ordinary, non-memoizing function is defined as A(t) := ..., with parentheses.
Given that A and B are defined only for nonnegative integers, it makes sense that they should be memoizing functions, so there's no need to change it.
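In case the memoization idea is unfamiliar, here is a rough C++ analogue of the two array functions (a sketch only, using the constants from the question; it is not how Maxima itself implements array functions):

#include <cstdio>
#include <unordered_map>

// Constants from the question.
const double a = 0.1, b = 0.1, c = 1.0, A0 = 100.0, B0 = 0.0;

// Analogue of Maxima's memoizing array function B[t]:
// each value is computed at most once, then served from the cache.
double B(int t) {
    static std::unordered_map<int, double> cache;
    if (t == 0) return B0;
    if (auto it = cache.find(t); it != cache.end()) return it->second;
    return cache[t] = (a - b) * B(t - 1) + c;
}

// Analogue of A[t]; note it depends on B, just as in the question.
double A(int t) {
    static std::unordered_map<int, double> cache;
    if (t == 0) return A0;
    if (auto it = cache.find(t); it != cache.end()) return it->second;
    return cache[t] = (a + b) * A(t - 1) + B(t - 1) + c;
}

int main() {
    // Print the same (t, A[t]) pairs that makelist collects above.
    for (int t = 0; t <= 100; ++t)
        std::printf("%d %f\n", t, A(t));
}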
I wonder if there is any way to make functions defined within the main function be local, in a similar way to local variables. For example, in this function that calculates the gradient of a scalar function,
grad(var,f) := block([aux],
aux : [gradient, DfDx[i]],
gradient : [],
DfDx[i] := diff(f(x_1,x_2,x_3),var[i],1),
for i in [1,2,3] do (
gradient : append(gradient, [DfDx[i]])
),
return(gradient)
)$
The variable gradient that has been defined inside the main function grad(var,f) has no effect outside the main function, as it is inside the aux list. However, I have observed that the function DfDx, despite being inside the aux list, does have an effect outside the main function.
Is there any way to make the sub-functions defined inside the main function local only, in a similar way to what can be done with local variables? (I know that one can kill them once they have been used, but perhaps there is a more elegant way.)
To address the problem you need to solve here, another way to compute the gradient is to say
grad(var, e) := makelist(diff(e, var1), var1, var);
and then you can say for example
grad([x, y, z], sin(x)*y/z);
to get
[cos(x)*y/z, sin(x)/z, -sin(x)*y/z^2]
(There isn't a built-in gradient function; this is an oversight.)
About local functions, bear in mind that all function definitions are global. However, you can approximate a local function definition via local, which saves and restores all properties of a symbol. Since the function definition is a property, local has the effect of temporarily wiping out an existing function definition and later restoring it. In between, you can create a temporary function definition. E.g.
foo(x) := 2*x;
bar(y) := block(local(foo), foo(x) := x - 1, foo(y));
bar(100); /* output is 99 */
foo(100); /* output is 200 */
However, I don't think you need to use local -- just makelist plus diff is enough to compute the gradient.
There is more to say about Maxima's scope rules, named and unnamed functions, etc. I'll try to come back to this question tomorrow.
To compute the gradient, my advice is to call makelist and diff as shown in my first answer. Let me take this opportunity to address some related topics.
I'll paste the definition of grad shown in the problem statement and use that to make some comments.
grad(var,f) := block([aux],
aux : [gradient, DfDx[i]],
gradient : [],
DfDx[i] := diff(f(x_1,x_2,x_3),var[i],1),
for i in [1,2,3] do (
gradient : append(gradient, [DfDx[i]])
),
return(gradient)
)$
(1) Maxima works mostly with expressions as opposed to functions. That's not causing a problem here, I just want to make it clear. E.g. in general one has to say diff(f(x), x) when f is a function, instead of diff(f, x), likewise integrate(f(x), ...) instead of integrate(f, ...).
(2) When gradient and DfDx are to be the local variables, you have to name them in the list of variables for block. E.g. block([gradient, DfDx], ...) -- Maxima won't understand block([aux], aux : ...).
(3) Note that a function defined with square brackets instead of parentheses, e.g. f[x] := ... instead of f(x) := ..., is a so-called array function in Maxima. An array function is a memoizing function, i.e. if f[x] is called two or more times, the return value is only computed once, and then returned every time thereafter. Sometimes that's a useful optimization when the domain of the function comprises a finite set.
(4) Bear in mind that x_1, x_2, x_3, are distinct symbols, not related to each other, and not related to x[1], x[2], x[3], even if they are displayed the same. My advice is to work with subscripted symbols x[i] when i is a variable.
(5) About building up return values, try to arrange to compute the whole thing at one go, instead of growing the result incrementally. In this case, makelist is preferable to for plus append.
(6) The return function in Maxima acts differently than in other programming languages; it's a little hard to explain. A function returns the value of the last expression which was evaluated, so if gradient is that last expression, you can just write grad(var, f) := block(..., gradient).
Hope this helps, I know it's obscure and complex. The Maxima programming language was not designed before being implemented, and some of the decisions are clearly questionable from the vantage point of more than 50 years (!) later. That's okay, they were figuring it out as they went along. There was not a body of established results which could provide a point of reference; the original authors were contributing to what's considered common knowledge today.
cv::recoverPose has a parameter "triangulatedPoints", as seen in the documentation, though the math behind it is not documented, even in the sources (relevant commit on GitHub).
When I use it, I get this matrix in the following form:
[0.06596200907402348, 0.1074107606919504, 0.08120752154556411,
0.07162400555712592, 0.1112415181779849, 0.06479560707001968,
0.06812069103377787, 0.07274771866295617, 0.1036230973846902,
0.07643884790206311, 0.09753859499789987, 0.1050111597547035,
0.08431322508162108, 0.08653721971228882, 0.06607013741719928,
0.1088621999959361, 0.1079215237863785, 0.07874160849424018,
0.07888037486261903, 0.07311940086190356;
-0.3474319603010109, -0.3492386196164926, -0.3592673043398864,
-0.3301695131649525, -0.3398606744869519, -0.3240186574427479,
-0.3302508442361889, -0.3534091474425142, -0.3134288005980755,
-0.3456284001726975, -0.3372514921152191, -0.3229005408417835,
-0.3156005118578394, -0.3545418178651592, -0.3427899760859008,
-0.3552801904337188, -0.3368860879000375, -0.3268499974874541,
-0.3221050630233929, -0.3395139819250934;
-0.9334091581425227, -0.9288726274060354, -0.9277125424980246,
-0.9392374374147775, -0.9318967835907961, -0.941870018271934,
-0.9394698966781299, -0.9306592884695234, -0.9419749503870455,
-0.9332801148509925, -0.9343740431697417, -0.9386198310107222,
-0.9431781968459053, -0.9290466865633286, -0.9351167772249444,
-0.9264105322194914, -0.933362882155191, -0.9398254944757025,
-0.9414486961893244, -0.935785675955617;
-0.0607238817598344, -0.0607532477465341, -0.06067768097603395,
-0.06075467523485482, -0.06073245675798231, -0.06078081616640227,
-0.06074754785132623, -0.0606879948481664, -0.06089198212719162,
-0.06071522666667255, -0.06076842109618678, -0.06083346023742937,
-0.06084805655000008, -0.0606931888685702, -0.06071558440082779,
-0.06073329803512636, -0.06078189449161094, -0.06080195858434526,
-0.06083228813425822, -0.06073695721101467]
i.e. a 4x20 matrix (in this case there were 20 points). I want to convert this data to std::vector in order to use it in solvePnP. How do I do it, and what is the math here? Thanks!
OpenCV offers a triangulatePoints function, which has the same output:
points4D 4xN array of reconstructed points in homogeneous coordinates.
This indicates that each column is a 3D point in a homogeneous coordinate system. However, your points do not look quite as I would expect. For instance, your first point is:
[0.06596200907402348, -0.3474319603010109, -0.9334091581425227, -0.0607238817598344]
But I would expect the last component to be 1.0 already, so you should double-check whether something is wrong here. You can always remove the "scaling" of the point by dividing each component by the last one:
[x, y, z, w] = w * [x/w, y/w, z/w, 1]
And then use the first three parts for your PnP solution.
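For example, here is a minimal sketch of that conversion (assuming the matrix is CV_64F, which is what cv::recoverPose produces; verify with points4D.type() if unsure):

#include <opencv2/core.hpp>
#include <vector>

// Convert a 4xN matrix of homogeneous points into Euclidean 3D points
// by dividing each column by its last component.
std::vector<cv::Point3f> toEuclidean(const cv::Mat& points4D)
{
    std::vector<cv::Point3f> points3D;
    points3D.reserve(points4D.cols);
    for (int i = 0; i < points4D.cols; ++i) {
        const double w = points4D.at<double>(3, i);
        points3D.emplace_back(
            static_cast<float>(points4D.at<double>(0, i) / w),
            static_cast<float>(points4D.at<double>(1, i) / w),
            static_cast<float>(points4D.at<double>(2, i) / w));
    }
    return points3D;
}

OpenCV also ships cv::convertPointsFromHomogeneous, which performs the same division; it expects one point per row, so pass the transposed 4xN matrix.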
I hope this helps
I've been using the Accelerate framework to do some audio signal processing, using the vDSP_conv function to perform some cross-correlations. Usually the values returned look like this (the left column is the array index, and the right column is the value of the array at that index after being returned from vDSP_conv):
125001 1.576556
125002 1.523622
125003 1.439102
125004 1.593097
125005 1.171977
125006 0.020228
125007 -0.988876
125008 -1.526720
125009 -1.056652
125010 -0.181521
125011 -0.029592
125012 0.077848
125013 0.319371
125014 0.080034
125015 -0.629983
But sometimes the results look like this, for no discernible reason:
125001 65531903404620711577128764702720.000000
125002 271523249688835947415863891591168.000000
125003 253191001846134141440285462233088.000000
125004 197376212065818453160643396632576.000000
125005 247836891833411757917279954665472.000000
125006 203601464352748581549908776976384.000000
125007 193256115501319341596977567629312.000000
125008 55431884287617507551879029063680.000000
125009 -242471930502532513482802284462080.000000
125010 -259877560883016098488551924039680.000000
125011 -201496656800953613737511541014528.000000
125012 -240627419186810410707269384667136.000000
125013 -241660441463967832878539113234432.000000
125014 -169626548145197368918504628027392.000000
125015 -157041504634723839288379166425088.000000
I ran the program again after getting these results, and they went back to the original (correct) results. Has anyone else experienced this or have any ideas as to why it's happening?
Probably this is an effect of vDSP_conv reading past the end of one of the vectors you are correlating (picking up uninitialized memory would explain the huge garbage values). As the specs say, "The length of this vector must be at least N + P - 1." So e.g. if you are doing an autocorrelation of a vector A of length n, you should first create a zero-filled vector of length 2*n (say A_extended), copy A into its start, and do
vDSP_conv(A_extended, 1, A, 1, &result, 1, n, n)
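Here is a minimal sketch of that zero-padding wrapped in a helper (the names A, n, A_extended, and result follow the description above; Accelerate is available on Apple platforms only):

#include <Accelerate/Accelerate.h>
#include <algorithm>
#include <vector>

// Autocorrelate a signal A of length n. The input buffer handed to
// vDSP_conv must hold at least N + P - 1 = 2*n - 1 samples, so we
// zero-pad A to length 2*n before correlating.
std::vector<float> autocorrelate(const std::vector<float>& A)
{
    const vDSP_Length n = A.size();
    std::vector<float> A_extended(2 * n, 0.0f);
    std::copy(A.begin(), A.end(), A_extended.begin());
    std::vector<float> result(n, 0.0f);
    // A positive filter stride (the fourth argument) selects correlation;
    // a negative one would select convolution.
    vDSP_conv(A_extended.data(), 1, A.data(), 1, result.data(), 1, n, n);
    return result;
}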
Please take a look at the screenshot below and see if you can tell me why this won't work. The examples on the reference page for TextRecognize look pretty impressive; I don't think recognizing single letters like this should be a problem. I've tried resizing the letters as well as having the image sharpened.
For convenience in case you want to try this yourself I have included the image that I use at the bottom of this post. You can also find plenty more like this by searching for "Wordfeud" in Google Image Search.
Very cool question!
TextRecognize uses heuristics to recognize whole words from the English language. This is the gotcha that makes recognizing single letters very hard.
Consider the following line of thought:
s = Import["http://i.stack.imgur.com/JHYuh.png"];
p = ImagePartition[s, 32]
Now pick letters to form the English word 'EXIT':
x = {p[[1, 13]], p[[6, 6]], p[[3, 13]], p[[1, 12]]}
Now clean up these images a bit, like so:
d = ImageAssemble[ Map[ImageTake[#, {3, 27}, {2, 20}] &, x ]];
Then this returns the string "EXIT":
TextRecognize[d]
This is an approach completely different from using TextRecognize, so I am posting it as a separate answer. It uses the same image-recognition technique as the "How do I find Waldo with Mathematica?" question.
First get the puzzle:
wordfeud = Import["http://i.stack.imgur.com/JHYuh.png"]
And then get the pieces of the puzzle:
Grid[pieces = ImagePartition[wordfeud, 32]]
Let's be interested in the letter E:
LetterE = pieces[[4, 3]]
Get the correlation image:
correlation = ImageCorrelate[wordfeud, Binarize[LetterE], NormalizedSquaredEuclideanDistance]
And highlight the matches:
positions = Dilation[ColorNegate[Binarize[correlation, .1]], DiskMatrix[20]];
found = ImageMultiply[wordfeud, ImageAdd[ColorConvert[positions, "GrayLevel"], .5]]
As before, this requires a bit of tuning on binarizing the correlation image, but other than that this should help to identify bits and pieces of this puzzle.
I thought the quality of your image might be interfering. Binarizing your image did not help: recognition was zilch. I also tried a very sharp black-and-white image of a crossword puzzle solution (see below). Again, nothing was recognized, whether in regular or binarized format.
So I removed the black background leaving only the letters and their thin black frames. Again, recognition was about 0%.
When I removed the frames from around some of the letters AND binarized the image the only parts that were recognizable were those regions in which there was nothing but letters. (see below)
Notice in the output below, ANTS, TIRES, and TEXAS are correctly identified (as well as VECTORS), but just about nothing else.
Notice also that, even though the strings were widely spaced, mma interpreted them as words, rather than separate letters. Note "TEXAS" instead of "T E X A S".
TextRecognize[Binarize#img]
(* output *)
ANTS FFWWW FEEWF
E R o If IU I?
E A FI5F WWWFF 5
5552? L E F F
T s E NTT BT|
H0RWW#0WVlWF;EE F
5 W E ; OCS
FOFT W W R AL%AE
A TT I T ? _
i iE#W'NF WG%S W
A A EW F I i
SWWTW W ALTFCWD N
H A V 5 A F F
PLATT EWWLIGHT
W N E T
HE TIRES C
TEXAS VECTORS
I didn't have the patience to completely clean up the image. It would have been much faster to retype the text by hand.
Conclusion: Don't use text recognition in mma unless you have absolutely clear text against an even-colored, bright, preferably white, background.
The results also varied depending on the file format used. Avoid .pdf altogether.
Edit
acl captured and tried to recognize the last 5 lines (above Edit). His results (in a comment below): mostly gibberish.
I decided to do the same. But since Prashant warned that text size makes a difference, I zoomed in first so that the text appears (to my eyes) to be about 20 pica. Below is a picture of the text I scanned and TextRecognized.
Here's the result of an unbinarized TextRecognize (at that large size):
Gliii. Q lk-ii`t`*¥ if EY £\[CloseCurlyDoubleQuote]1\[Euro]'EE \
Di'¥C~E\"P ITF SKI' T»f}!E'!',IL:?E\[CloseCurlyDoubleQuote] I 2 VEEE5\
\[CloseCurlyQuote] LEP \"- \"VE
1. ur e=\\..r.1.»».»\\\\ rw r 1»»\\|a'*r | r .fm -»'-an \
\[OpenCurlyQuote] -.-rr -_.»~|-.'i~-.w~,.-- nv n.w~»-\
\[OpenCurlyDoubleQuote]~"
Now, here's the result for the TextRecognize of the binarized image. The original image was a .png from Jing.
I didn't have the patience to completely clean up the image. It would \
have been much faster to retype the
text by hand.
Conclusion: Don't use text recognition in mma unless you have \
absolutely clear text against an even-
colored, bright, preferrably white, background.
The results also varied depending on the file format used. Avoid .pdf \
altogether.