I'm experimenting with creating a small library/DSL for image synthesis in Clojure. Basically the idea is to allow users of the library to compose sets of mathematical functions to procedurally create interesting images.
The functions need to operate on double values, and take the form of converting a location vector into a colour value, e.g. (x, y, z) -> (r, g, b, a)
However I'm facing a few interesting design decisions:
Inputs could have 1,2,3 or maybe even 4 dimensions (x,y,z plus time)
It would be good to provide vector maths operations (dot products, addition, multiplication etc.)
It would be valuable to compose functions with operations such as rotate, scale etc.
For performance reasons, it is important to use primitive double maths throughout (i.e. avoid creating boxed doubles in particular). So a function which needs to return red, green and blue components perhaps needs to become three separate functions which return the primitive red, green and blue values respectively.
Any ideas on how this kind of DSL can reasonably be achieved in Clojure (1.4 beta)?
A look at the awesome ImageMagick tools http://www.imagemagick.org can give you an idea of what kind of operations would be expected from such a library.
Maybe you'll see that you won't need to drop down to vector math if you replicate the default IM toolset.
OK, so I eventually figured out a nice way of doing this.
The trick was to represent functions as a vector of code (in the "code is data" sense), e.g.:
[(Math/sin (* 10 x))
(Math/cos (* 12 y))
(Math/cos (+ (* 5 x) (* 8 y)))]
This can then be "compiled" to create 3 objects that implement a Java interface with the following method:
public double calc(double x, double y, double z, double t) {
.....
}
And these function objects can be called with primitive values to get the red, green and blue colour values for each pixel.
Finally, it's possible to compose the functions using a simple DSL, e.g. to scale up a texture you can do:
(vscale 10 some-function-vector)
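Clisk itself compiles Clojure forms down to the primitive-double interface above, but the composition idea is easy to sketch in another language. The following Python sketch is only an illustrative analogue, not the library's API (vscale and checker here are made-up stand-ins): a transform such as scaling simply wraps the inner pixel function and rescales the coordinates before sampling it.

import math

def vscale(k, f):
    # scaling a texture up by a factor k means sampling the inner
    # function at coordinates divided by k
    def scaled(x, y, z=0.0, t=0.0):
        return f(x / k, y / k, z / k, t)
    return scaled

def checker(x, y, z=0.0, t=0.0):
    # simple two-tone pattern standing in for one colour channel
    return float((int(math.floor(x)) + int(math.floor(y))) % 2)

big_checker = vscale(10, checker)   # same pattern, 10x larger features
print(big_checker(5.0, 5.0))        # samples checker at (0.5, 0.5)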
I've published all the code on GitHub for anyone interested:
https://github.com/mikera/clisk
In order to understand ECIES completely and to use my favorite library, I implemented some parts of ECIES myself. Doing this and comparing the results led to one point which is not really clear to me: what exactly is the input of the KDF?
The result of ECDH is a point, but what do you use for the KDF? Is it just the X value, or is it X plus Y (perhaps with a prepended 0x04)? You can find both concepts in the wild, and for the sake of interoperability it would be really interesting to know which way is the correct one (if there is a correct way at all - I know that ECIES is more a concept and has several degrees of freedom).
Explanation (please correct me if I'm wrong at a specific point). When I talk about byte lengths, this refers to ECIES with 256-bit EC keys.
So, first, the big picture: here's the ECIES process, and I'm talking about the step 2 -> 3, i.e. from the key agreement output to the KDF input.
The recipient's public key is a point V, the sender's ephemeral private key is a scalar u, and the key agreement function KA is ECDH, which is basically the multiplication V * u. As a result, you get a shared key which is also a point - let's call it "shared key".
Then you take the sender's public key, concatenate it with the shared key, and use this as the input for the key derivation function KDF.
But: if you want to use this point as input for the key derivation function KDF, you have two ways of doing it:
1) You can use just the shared key's X coordinate. Then you have a byte string of 32 bytes.
2) You can use the shared key's X and Y coordinates and prepend 0x04, as you do with uncompressed public keys. Then you have a byte string of 1 + 32 + 32 = 65 bytes.
3) Just to be complete: you could also encode the shared point in compressed form (prefix plus X).
The length of the byte string does not really matter, because after the KDF (which usually involves hashing) you always get a fixed-length value, e.g. 32 bytes (if you use SHA-256).
But of course the result of the KDF is quite different depending on which method you choose. So the question is: what's the correct way?
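To make the two candidate KDF inputs concrete, here is a small Python sketch; SHA-256 stands in for the real KDF and the toy coordinates are obviously not a real curve point:

import hashlib

def kdf_input_compact(x):
    # variant 1: just the X coordinate, 32 bytes big-endian
    return x.to_bytes(32, "big")

def kdf_input_uncompressed(x, y):
    # variant 2: 0x04 || X || Y, 65 bytes, like an uncompressed public key
    return b"\x04" + x.to_bytes(32, "big") + y.to_bytes(32, "big")

x, y = 123456789, 987654321        # toy coordinates
k1 = hashlib.sha256(kdf_input_compact(x)).hexdigest()
k2 = hashlib.sha256(kdf_input_uncompressed(x, y)).hexdigest()
print(k1 == k2)                    # False: the two conventions derive different keys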
Looking at what libraries do: eciespy uses method 2: https://github.com/ecies/py/blob/master/ecies/utils.py#L143
Python's cryptography package returns just X from its ECDH: https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ec/#cryptography.hazmat.primitives.asymmetric.ec.ECDH . It has no ECIES support.
If I understand Crypto++'s documentation correctly, it also returns just X: https://cryptopp.com/wiki/Elliptic_Curve_Diffie-Hellman
Same with Java BouncyCastle, if I read this correctly - the result is an integer: https://github.com/bcgit/bc-java/blob/master/core/src/main/java/org/bouncycastle/crypto/agreement/DHBasicAgreement.java#L79
But you can also find online calculators that use both X and Y: http://www-cs-students.stanford.edu/~tjw/jsbn/ecdh.html
So, I tried to get more information from the documentation:
There's the ISO proposal for ECIES. It doesn't describe this in detail (or I was not able to find it), but I would interpret it as using the full point, X and Y: https://www.shoup.net/papers/iso-2_1.pdf
There is this paper, which is widely linked on the internet, which refers to using just X (page 27): http://www.secg.org/sec1-v2.pdf
So, the result is: I'm confused. Can anybody point me in the right direction, or is this just a degree of freedom you have (and a reason for lots of fun when it comes to compatibility)?
To answer my question myself: yes, this is a degree of freedom. The X-coordinate-only way is called compact representation, and it's defined in RFC 6090. So both are valid.
They are also equally secure, because you can calculate Y out of X as described in appendix C of RFC 6090.
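As a quick illustration that nothing is lost, here is a Python sketch that recovers a Y coordinate from X (up to sign). It uses secp256k1 for concreteness, because its field prime p satisfies p % 4 == 3, which allows the simple square-root formula; the starting x is arbitrary:

p = 2**256 - 2**32 - 977           # secp256k1 field prime
b = 7                              # curve equation: y^2 = x^3 + 7

def recover_y(x):
    rhs = (pow(x, 3, p) + b) % p
    y = pow(rhs, (p + 1) // 4, p)  # modular square root, valid because p % 4 == 3
    return y if (y * y) % p == rhs else None   # None: this x is not on the curve

x = 5                              # arbitrary; not every x lies on the curve
while recover_y(x) is None:
    x += 1
print(x, hex(recover_y(x)))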
The default way is to use compact representation. The two ways are not compatible with each other, so if you stumble across compatibility issues between libraries, this might be an interesting point to check.
I am writing an ImageJ/Fiji plugin in Jython using the PyDev plugin in Eclipse. The plugin will be the ImageJ version of an existing denoising software called CANDLE, written as a MATLAB program. Changing the value of every pixel (voxel) of an image in MATLAB is trivial:
InputImage = 2 * sqrt(InputImage + (3/8));
Median3DFilteredImage = 2 * sqrt(Median3DFiltered + (3/8));
Here "InputImage" and "Median3DFilteredImage" are 3D Matrices, with the last dimension being time (slices). To reproduced the following operation on an ImageJ image, I had to employ two for loops, one to iterate through the image slices (3rd dimension) and the other loop to iterate over all the pixels in a particular slice:
medFiltStack = medianFilteredImage.getStack()
newMedFiltStack = ImageStack(medianFilteredImage.width, medianFilteredImage.height)
InputStack = InputImage.getStack()
newInputStack = ImageStack(InputImage.width, InputImage.height)
for i in xrange(1, medianFilteredImage.getNSlices() + 1):
    ip = medFiltStack.getProcessor(i).convertToFloat()
    ip2 = InputStack.getProcessor(i).convertToFloat()
    pixels = ip.getPixels()
    pixels2 = ip2.getPixels()
    for j in xrange(len(pixels)):
        pixels[j] = 2 * javaMath.sqrt(pixels[j] + (3.0 / 8.0))
        pixels2[j] = 2 * javaMath.sqrt(pixels2[j] + (3.0 / 8.0))
    newMedFiltStack.addSlice(ip)
    newInputStack.addSlice(ip2)
medianFilteredImage = ImagePlus("MedianFiltered-Image", newMedFiltStack)
InputImage = ImagePlus("Input-Image", newInputStack)
My question is as follows: is there a way to perform mathematical operations on an image stack, i.e. on every pixel (voxel) in the stack, without having to write code that explicitly visits every pixel in every slice, i.e. without for loops? It just seems a very primitive way of going about it, and I am wondering if there isn't a better way to do this.

I also had to work with copies and then give the new images the same names as before, as opposed to working with the original images and editing them directly. So, is there a way to edit the pixel values of the original images rather than copies of the images? Any help would be appreciated, as there are plenty more math operations that I have to perform. It would be super useful to find a way to do mathematical operations on images in an optimal way, both in terms of the amount of code and, if possible, in terms of speed.
In pure ImageJ 1.x, the answer is: no, there's no other way than to visit every slice and get its ImageProcessor. That's how ImageJ1 deals with its limited number of dimensions (z, time, channel): you always have a (hyper-)stack of 2D planes.

There is, however, a more powerful way of dealing with n-dimensional images called ImgLib, which is included in Fiji together with ImageJ2.
To avoid re-inventing the wheel, you should have a look at Jean-Yves Tinevez's great plugin Image Expression Parser. Use it headlessly with Fiji, or just have a look at its source code (it uses a previous version, ImgLib1, but the idea is the same: you avoid hard-coding the dimensions by using Java generics); see e.g. the sqrt function:
public final <R extends RealType<R>> float evaluate(final R alpha) {
    return (float) Math.sqrt(alpha.getRealDouble());
}
I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct, general summary of what is going on here is that both pieces of code are setting up the proper combined matrix to be sent to the Vertex Shader where that matrix will be a parameter to the m44 AGAL operation. The general description is that the combined matrix will take us from Object Local Space through Camera View Space to Screen or Clipping Space.
My problem can be summarized as arising from my ignorance of proper matrix operations. I believe my failed attempt to merge the two environments arises precisely because the semantics of prepending one matrix to another is not, and is never intended to be, equivalent to appending that matrix to the other. My request, then, can be summarized in this way. Because I have no control over the calling sequence that the framework will issue, e.g., I must live with an append operation, I can only try to fix things on the side where I prepare the matrix which is to be appended. That code is not black-boxed, but it is too complex for me to know how to change it so that it would meet the interface requirements posed by the framework.
Is there some sequence of inversions, transformations or other maneuvers which would let me modify a viewProjection matrix that was designed to be prepended, so that it will turn out right when it is, instead, appended to the object's world-space coordinates?
I am providing an answer more out of desperation than sure understanding, and still hope I will receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Without being able to understand how to enter text involving superscripts, I am not sure if I can reduce my approach to a helpful mathematical formulation, so I will invent a syntax using functional notation. The equivalency noted by Dunn and Parberry would be something like:
transpose (AB) = transpose (B) x transpose (A)
That comes close to solving my problem, which problem, to restate, is really just a problem arising out of the fact that I cannot control the behavior of the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow from local object coordinates to those required by the GPU Vertex Shader.
I have not yet completed the test of my solution, which requires the final step to be taken in the AGAL shader, but I have been able to confirm in AS3 that the last 'un-transform' does yield exactly the same combined raw data as the example from the author of the camera with the desired lens properties, whose implementation involves prepending rather than appending.
BA = transpose (transpose (A) x transpose (B))
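For anyone who wants to sanity-check that identity numerically, here is a quick sketch in Python/NumPy (not AS3) with random 4x4 matrices:

import numpy as np

A = np.random.rand(4, 4)
B = np.random.rand(4, 4)

# B x A equals transpose( transpose(A) x transpose(B) )
lhs = B.dot(A)
rhs = (A.T.dot(B.T)).T
print(np.allclose(lhs, rhs))   # True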
I have also not yet tested to see if these extra calculations are so processing intensive as to reduce my application frame rate beyond what is acceptable, but am pleased at least to be able to confirm that the computations yield the same result.
I've been searching for quite a long time now to find something that works for importing Blender 3D .obj files into Xcode, to use them in an iPhone application. I can't find a description of how to implement something like that anywhere! I don't want to use any engine; I just want to know the steps I have to follow and the basic things to do. There is really nothing on the web - you can find articles from 2005-2008, but they are not up to date and nothing works. So, does anybody know how to do that?
Take a look at the source of libgdx, in particular its obj loader. The author built the lib to be able to load obj files, and it is supposed to work across multiple platforms (including iOS). From reading the source you will see that creating an object loader for just the vertices is really simple, but it becomes more complicated when you start caring about the normals and texture coords. Here is a simple algorithm for scraping an obj file (I will leave parsing the associated .mtl material file for the reader's own research):
1. Read in the lines of the file, throwing out all lines that start with # (comments).
2. Vertices look like: v 1.000 1.000 1.000, so if the line starts with 'v ', split the line on the spaces and store (converted to float) the 3 values as a vertex.
3. Normals look like: vn 1.000 1.000 1.000, so if the line starts with 'vn ', do the same as in step 2, but store the result as a normal.
4. Texture coords look like vt 1.000 1.000, with a possible 3rd [w] value; split and store the line in the same way.
5. Now it gets tricky: there are the face descriptions, which look like f 1/1/1 2/2/2 3/3/3. These give the vertex/texcoord/normal indices (in that order) for each corner of a shape (normally a triangle). The hardest part is that the obj format uses three separate indices instead of the single index that OpenGL or Direct3D uses, so you will have to reshuffle the order of your vertices/coords/normals so that you can use indexed drawing.

E.g. basically you have to get f 1/300/30 40/22/400 20/30/10 to become more like f 1/1/1 2/2/2 40/40/40 through reshuffling, so that each corner is addressed by one shared index (a minimal parsing sketch follows below).
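Here is a minimal Python sketch of steps 1-5 above (just the parsing - no .mtl handling, error checking, or index reshuffling; the function name parse_obj is made up for illustration):

def parse_obj(path):
    vertices, normals, texcoords, faces = [], [], [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):   # skip blanks and comments
                continue
            parts = line.split()
            if parts[0] == "v":                    # vertex position
                vertices.append(tuple(float(v) for v in parts[1:4]))
            elif parts[0] == "vn":                 # normal
                normals.append(tuple(float(v) for v in parts[1:4]))
            elif parts[0] == "vt":                 # texture coordinate (u, v[, w])
                texcoords.append(tuple(float(v) for v in parts[1:3]))
            elif parts[0] == "f":                  # face: v/vt/vn triples, 1-based
                face = []
                for corner in parts[1:]:
                    idx = corner.split("/")        # "1//4" leaves the vt slot empty
                    face.append(tuple(int(i) if i else None for i in idx))
                faces.append(face)
    return vertices, normals, texcoords, faces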
This site gives you an idea of this same algorithm and shows an example of how to go about it at a high level (about midway down the page), and the source code it references for you to check out can be found here.
Anyways, let me know if you need anymore assistance. :)
Edit:
By the way, if you see something like this: f 1//4 2//5 3//7, don't be alarmed; this is a valid file and just means (in this instance) that there are no texture coords.
I used the GLEssentials sample app as a starting point. It is really bare bones, but it's the kind of thing you want to start with so you can really understand the format, for when you decide to add to it later.
https://developer.apple.com/library/mac/#samplecode/GLEssentials/Introduction/Intro.html
If you compare two sets of data (such as two files), the differences between these sets can be displayed in two columns, or two panes, such as WinMerge does.
But are there any visual paradigms to display the differences between multiple data sets?
Update
The starting point of my question was the assumption that displaying the differences between 2 files is relatively easy (as mentioned, WinMerge does this), whereas comparing 3 or more text files turns out to be more complicated, as there will be more and more differences between, say, different versions of a document created over time.
How would you highlight parts of the file that are the same in 2 versions, but different from other versions?
The data sets I have in mind are objects (A, B, C, ...) which may or may not exist and have properties (a, b, c, ...) which may be set or not set.
Example:
Set 1: A(a, b, c), B(b, c), C(c)
Set 2: A(a, b, c), B(b), C(c)
Set 3: A(a, b), B(b)
If you compare 2 sets, e.g. 1 and 2, the difference would be in B(c). Comparing sets 2 and 3 results in the difference A(c) and C().
If you compare all 3 sets, you end up with 3 pairwise comparisons (n * (n - 1) / 2 in general).
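For concreteness, here is a toy Python sketch (the data literals just mirror the example above) of the pairwise comparisons I have in mind, with each set modelled as a mapping from object name to its set of properties:

from itertools import combinations

sets = {
    1: {"A": {"a", "b", "c"}, "B": {"b", "c"}, "C": {"c"}},
    2: {"A": {"a", "b", "c"}, "B": {"b"}, "C": {"c"}},
    3: {"A": {"a", "b"}, "B": {"b"}},
}

for i, j in combinations(sorted(sets), 2):      # n * (n - 1) / 2 pairwise comparisons
    diff = {}
    for obj in sorted(sets[i].keys() | sets[j].keys()):
        pi = sets[i].get(obj, set())
        pj = sets[j].get(obj, set())
        if pi != pj:
            diff[obj] = sorted(pi ^ pj)         # properties present in only one of the two
    print("set %d vs set %d:" % (i, j), diff)

Comparing sets 2 and 3, for example, this reports the difference in A (property c) and in C (which is missing entirely from set 3).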
I have a different view than some of those who provided answers - i.e., that you need to further specify the problem. The abstraction level is about right; further specification would make the problem easier, but the solution less useful.
A couple of years ago, I saw a graphic on ProgrammableWeb - it compared the results from a search on Yahoo with the results from the same search on Google. There's a lot of information to convey: some results are in both sets, some in just one, and the common results will have different positions in the respective engines' result lists, which somehow has to be shown.
I liked the graphic and reimplemented it in Matplotlib (a Python scientific plotting library). Below is an example using some random points, as well as the Python code I used to generate it:
from matplotlib import pyplot as PLT
import numpy as NP

# each row: the item's position on the upper line (set 1) and the lower line (set 2)
xvals = NP.array([(2, 3), (5, 7), (8, 6), (1.5, 1.8), (3.0, 3.8), (5.3, 5.2),
                  (3.7, 4.1), (2.9, 3.7), (8.4, 6.1), (7.1, 6.4)])
yvals = NP.tile(NP.array([5, 3]), [10, 1])   # upper line at y=5, lower line at y=3

fig = PLT.figure()
ax1 = fig.add_subplot(111)

# the two horizontal baselines representing the two data sets
x = [xvals.min() - 1, xvals.max() + 1]
ax1.plot(x, [5, 5], "-", lw=3, color='b')
ax1.plot(x, [3, 3], "-", lw=3, color='b')

# one connecting segment (with endpoint markers) per item
for a, b in zip(xvals, yvals):
    ax1.plot(a, b, '-o', ms=8, mfc='orange', color='g')

PLT.axis("off")
PLT.show()
This model has some interesting features: (i) it actually deals with 'similarity' on a per-item basis (the vertically-oriented line connecting the dots) rather than aggregate similarity; (ii) the degree of similarity between two data points is proportional to the angle of the line connecting them--90 degrees if they are equal, with a decreasing angle as the difference increases; this is very intuitive; (iii) cases in which a point in one data set is not present in the second data set are easy to show--a point will appear on one of the two lines but without a line connecting it to a point on the other line.
This model works well for comparing search results because each search result has a 'score' (its index, or order, in the results list). For other types of data, you might have to assign a score to each data point - a similarity metric, I suppose (in a sense, that's actually what the search result order is: a distance from the top of the list).
Since there has been so much work put into displaying a diff of two files, you might start by expressing your 'multiple data sets' in an appropriate text format, then using whatever tool you like to show a diff between those text formats.
But you should tell us more about your data sets!
I experimented a bit, and implemented two displays:
Matrix
Timeline
I agree with Peter, you should specify what type your data is and what you wish to bring out in the comparison.
Depending on the nature of the data/comparison you can consider different visualisations. Is your data ordered or unordered? How many things are you comparing, i.e. fine grain or gross comparison?
Examples:
Visualizing a comparison of unordered data could be as simple as plotting the two histograms of your sets (i.e. their distributions).
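A minimal Matplotlib sketch of that idea, assuming two unordered numeric samples (the normal distributions are just placeholder data):

import numpy as np
from matplotlib import pyplot as plt

set1 = np.random.normal(0.0, 1.0, 1000)
set2 = np.random.normal(0.5, 1.2, 1000)

# overlaid, semi-transparent histograms make the differing regions visible
plt.hist(set1, bins=30, alpha=0.5, label="set 1")
plt.hist(set2, bins=30, alpha=0.5, label="set 2")
plt.legend()
plt.show()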
On the other hand, comparing a huge ordered dataset like DNA can be done innovatively.
Also, check out Visual Complexity; it's a great resource for interesting visualizations.