What is the trailing X - DirectX

I noticed that there are a lot of technologies that use X in their names, like DirectX, PhysX and X server ... is there something in common? Or is there any reason to choose X?

According to Wikipedia, the X in DirectX 'stands in' for the various Direct APIs - Direct3D, DirectSound, DirectPlay etc. Seems like a reasonable explanation.
PhysX probably plays on the whole DirectX 'thing' - but I expect it's named as such 'cause it sounds a bit like physics.
X Server serves X. :p

The meaning of the X varies by usage; in PhysX it seems to be the kewl [sic] way to spell Physics, whereas X Server (part of the X Window System) takes its name from being the natural evolution of a system named W (probably short for Window, or just the letter after V, the name of the system on which it ran).
DirectX has already been explained in another answer; so there's that.
But the main reason, most of the time, is that Poor Literacy Is Kewl [sic].

Related

ECIES: correct ECDH input for the KDF? Security effect?

In order to understand ECIES completely and use my favorite library, I implemented some parts of ECIES myself. Doing this and comparing the results led to one point which is not really clear to me: what exactly is the input of the KDF?
The result of ECDH is a point, but what do you use for the KDF? Is it just the X value, or is it X + Y (perhaps with a prepended 0x04)? You can find both concepts in the wild, and for the sake of interoperability it would be really interesting to know which way is the correct one (if there is a correct way at all - I know that ECIES is more of a concept and has several degrees of freedom).
Explanation (correct me if I'm wrong at a specific point, please). When I talk about byte lengths, this refers to ECIES with 256-bit EC keys.
So, first, the big picture: here's the ECIES process, and I'm talking about the step 2 -> 3:
The recipient's public key is a point V, the sender's ephemeral private key is a scalar u, and the key agreement function KA is ECDH, which is basically the scalar multiplication u * V. As a result you get a shared secret which is also a point - let's call it the "shared key".
Then you take the sender's public key, concatenate it with the shared key, and use this as the input to the key derivation function KDF.
But: if you want to use this point as input for the key derivation function KDF, you have two ways of doing it:
you can use just the shared key's X coordinate. Then you have a byte string of 32 bytes.
you can use the shared key's X and Y and prepend 0x04, as you do with uncompressed public keys. Then you have a byte string of 1 + 32 + 32 = 65 bytes.
3) just to be complete: you could also encode the point in compressed form (a 0x02/0x03 prefix plus X).
The length of the byte string does not really matter, because after the KDF (which usually involves hashing) you always get a fixed-length value, e.g. 32 bytes (if you use SHA-256).
But of course the result of the KDF differs depending on which method you choose. So the question is: what's the correct way?
eciespy uses method 2 https://github.com/ecies/py/blob/master/ecies/utils.py#L143
Python's cryptography package gives just X back from its ECDH: https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ec/#cryptography.hazmat.primitives.asymmetric.ec.ECDH . They have no ECIES support.
if I understand Crypto++'s documentation correctly, it also gives just X back: https://cryptopp.com/wiki/Elliptic_Curve_Diffie-Hellman
same with Java BouncyCastle, if I read this correctly - the result is an integer: https://github.com/bcgit/bc-java/blob/master/core/src/main/java/org/bouncycastle/crypto/agreement/DHBasicAgreement.java#L79
but you can also find online calculators with both, X and Y: http://www-cs-students.stanford.edu/~tjw/jsbn/ecdh.html
So I tried to get more information from the documentation:
there's the ISO proposal for ECIES. It doesn't describe this in detail (or I was not able to find it), but I would interpret it as using the full point, X and Y: https://www.shoup.net/papers/iso-2_1.pdf
there is this paper, which is widely linked on the internet, and which describes using just X on page 27: http://www.secg.org/sec1-v2.pdf
So the result is: I'm confused. Can anybody point me in the right direction, or is this just a degree of freedom you have (and a reason for lots of fun when it comes to compatibility)?
To answer my question myself: yes, this is a degree of freedom. The X-coordinate-only way is called compact representation, and it's defined in RFC 6090. So both are valid.
They are also equally secure, because you can calculate Y from X as described in appendix C of RFC 6090.
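For illustration, here is a minimal Python sketch of that Y-from-X recovery; the curve (secp256k1, which eciespy uses) and the function name are my assumptions, not something RFC 6090 prescribes:

# Sketch: recover the two candidate Y values for a given X on secp256k1,
# where y^2 = x^3 + 7 over GF(p). Because p % 4 == 3, a modular square root
# can be computed with pow(., (p + 1) // 4, p). Assumes x really is the
# X coordinate of a point on the curve.
p = 2**256 - 2**32 - 977   # secp256k1 field prime

def y_candidates(x):
    rhs = (pow(x, 3, p) + 7) % p      # right-hand side of the curve equation
    y = pow(rhs, (p + 1) // 4, p)     # one square root
    return y, (p - y) % p             # the other root is its negation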
The most common choice is the compact representation. The two ways are not compatible with each other, so if you stumble across compatibility issues between libraries, this might be an interesting point to check.
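To make the "X only" variant concrete, here is a minimal sketch using Python's cryptography package with HKDF as the KDF. The function name and the info label are just illustrative assumptions; the X + Y variant would need the full shared point, which this particular API does not expose:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_ecies_key(own_private_key, peer_public_key):
    # exchange() returns only the X coordinate of the shared point
    # (32 bytes for a 256-bit curve), i.e. the compact representation.
    shared_x = own_private_key.exchange(ec.ECDH(), peer_public_key)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"ecies-demo").derive(shared_x)

# usage with a made-up ephemeral key pair:
recipient = ec.generate_private_key(ec.SECP256R1())
ephemeral = ec.generate_private_key(ec.SECP256R1())
key = derive_ecies_key(ephemeral, recipient.public_key())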

Will random() ever change?

I have been looking into a development issue that requires the use of pseudorandom number generation to allow the same set of random numbers to be generated for a given seed.
I have been looking at using long random(void) and void srandom(unsigned seed) for this (man page), and currently these generate the same set of random numbers in a Mac app, an iOS app and a 64-bit iOS app, which is what I was hoping for. The iOS tests were only in the simulator, so I don't know whether that will affect the result.
My main concern is that this algorithm could change at some point, making the applications we're developing effectively useless with old data. What are the chances of these algorithms changing or being different on a future device?
I'd say it's extremely likely they will change as the sequence is not guaranteed by any standard.
Why not use your own random number sequence? Even a simple linear congruential generator satisfies most statistical properties of randomness. Here is the formula for such a generator:
next_number = (a * current_number + b) % c
with
a = 1103515245
b = 12345
c = 4294967296
These values of a, b, c give you good statistical properties and are quite well known for building quick and dirty generators.
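As a minimal sketch of that generator (the class name is made up, and this is an illustration rather than a drop-in replacement for random()/srandom()):

class LCG:
    """Linear congruential generator: next = (a * current + b) % c."""
    def __init__(self, seed):
        self.state = seed % 4294967296

    def next(self):
        self.state = (1103515245 * self.state + 12345) % 4294967296
        return self.state

rng = LCG(12345)
print([rng.next() for _ in range(3)])   # same seed -> same sequence, forever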
I don't have the slightest idea about the answer to the question you ask.
If a related question is "How can I be absolutely sure to have the same pseudo-random sequences generated in 10 years' time?", the answer to that question is: don't rely on an external library, write the code explicitly.
Bathsheba proposed this generator. You can google for "pseudo random generator algorithm". Here is a list of algorithms on Wikipedia.
In fact, srandom did change as of Mac OS X 10.7, according to this blog post. However, this was due to the way srandom was implemented: it tried to access an uninitialized local variable, which is undefined behavior in C. According to the post, the new compiler used since Mac OS X 10.7 optimized out the uninitialized memory access, changing its behavior in subtle ways.

Does Z3 have support for optimization problems?

I saw in a previous post from last August that Z3 did not support optimizations.
However it also stated that the developers are planning to add such support.
I could not find anything in the source to suggest this has happened.
Can anyone tell me if my assumption that there is no support is correct or was it added but I somehow missed it?
Thanks,
Omer
If your optimization problem has an integer-valued objective function, one approach that works reasonably well is to run a binary search for the optimal value. Suppose you're solving the set of constraints C(x,y,z), maximizing the objective function f(x,y,z).
Find an arbitrary solution (x0, y0, z0) to C(x,y,z).
Compute f0 = f(x0, y0, z0). This will be your first lower bound.
As long as you don't know any upper bound on the objective value, try to solve the constraints C(x,y,z) ∧ f(x,y,z) > 2 * L, where L is your best lower bound (initially f0, then whatever better value you have found). If this is satisfiable, the model gives you a new lower bound; if it is unsatisfiable, 2 * L is an upper bound.
Once you have both an upper and a lower bound, apply binary search: solve C(x,y,z) ∧ 2 * f(x,y,z) > (U + L), i.e. ask whether f can exceed the midpoint (U + L) / 2. If the formula is satisfiable, you can compute a new lower bound from the model. If it is unsatisfiable, (U + L) / 2 is a new upper bound.
Step 3 will not terminate if your problem does not admit a maximum, so you may want to bound the search if you are not sure it does.
You should of course use push and pop to solve the succession of problems incrementally. You'll additionally need the ability to extract models for intermediate steps and to evaluate f on them.
We have used this approach in our work on Kaplan with reasonable success.
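A minimal sketch of this loop using Z3's Python API (z3py); the constraints and objective below are made-up placeholders, and the sketch assumes the objective is non-negative on the feasible set and that a maximum exists:

from z3 import Int, Solver, sat

x, y = Int('x'), Int('y')
f = x + y                                    # objective f to maximize (placeholder)
s = Solver()
s.add(x >= 0, y >= 0, 2 * x + 3 * y <= 40)   # constraints C (placeholder)

assert s.check() == sat
lo = s.model().eval(f, model_completion=True).as_long()   # first lower bound

hi = None
while hi is None:                  # step 3: grow an upper bound by doubling
    cand = max(2 * lo, lo + 1)
    s.push()
    s.add(f > cand)
    if s.check() == sat:
        lo = s.model().eval(f, model_completion=True).as_long()
    else:
        hi = cand                  # nothing exceeds cand, so cand bounds f from above
    s.pop()

while lo < hi:                     # step 4: binary search between the bounds
    mid = (lo + hi + 1) // 2
    s.push()
    s.add(f >= mid)
    if s.check() == sat:
        lo = s.model().eval(f, model_completion=True).as_long()
    else:
        hi = mid - 1
    s.pop()

print("maximum of f:", lo)         # for the placeholder problem: 20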
Z3 currently does not support optimization. This is on the TODO list, but it has not been implemented yet. The following slide decks describe the approach that will be used in Z3:
Exact nonlinear optimization on demand
Computation in Real Closed Infinitesimal and Transcendental Extensions of the Rationals
The library for computing with infinitesimals has already been implemented; it is available in the unstable (work-in-progress) branch and online at rise4fun.

How do I choose a pixel format when creating a new Texture2D?

I'm using the SharpDX Toolkit, and I'm trying to create a Texture2D programmatically, so I can manually specify all the pixel values. And I'm not sure what pixel format to create it with.
SharpDX doesn't even document the toolkit's PixelFormat type (they have documentation for another PixelFormat class but it's for WIC, not the toolkit). I did find the DirectX enum it wraps, DXGI_FORMAT, but its documentation doesn't give any useful guidance on how I would choose a format.
I'm used to plain old 32-bit bitmap formats with 8 bits per color channel plus 8-bit alpha, which is plenty good enough for me. So I'm guessing the simplest choices will be R8G8B8A8 or B8G8R8A8. Does it matter which I choose? Will they both be fully supported on all hardware?
And even once I've chosen one of those, I then need to further specify whether it's SInt, SNorm, Typeless, UInt, UNorm, or UNormSRgb. I don't need the sRGB colorspace. I don't understand what Typeless is supposed to be for. UInt seems like the simplest -- just a plain old unsigned byte -- but it turns out it doesn't work; I don't get an error, but my texture won't draw anything to the screen. UNorm works, but there's nothing in the documentation that explains why UInt doesn't. So now I'm paranoid that UNorm might not work on some other video card.
Here's the code I've got, if anyone wants to see it. Download the SharpDX full package, open the SharpDXToolkitSamples project, go to the SpriteBatchAndFont.WinRTXaml project, open the SpriteBatchAndFontGame class, and add code where indicated:
// Add new field to the class:
private Texture2D _newTexture;
// Add at the end of the LoadContent method:
_newTexture = Texture2D.New(GraphicsDevice, 8, 8, PixelFormat.R8G8B8A8.UNorm);
var colorData = new Color[_newTexture.Width*_newTexture.Height];
_newTexture.GetData(colorData);
for (var i = 0; i < colorData.Length; ++i)
colorData[i] = (i%3 == 0) ? Color.Red : Color.Transparent;
_newTexture.SetData(colorData);
// Add inside the Draw method, just before the call to spriteBatch.End():
spriteBatch.Draw(_newTexture, new Vector2(0, 0), Color.White);
This draws a small rectangle with diagonal lines in the top left of the screen. It works on the laptop I'm testing it on, but I have no idea how to know whether that means it's going to work everywhere, nor do I have any idea whether it's going to be the most performant.
What pixel format should I use to make sure my app will work on all hardware, and to get the best performance?
The formats in the SharpDX Toolkit map to the underlying DirectX/DXGI formats, so you can, as usual with Microsoft products, get your info from the MSDN:
DXGI_FORMAT enumeration (Windows)
32-bit textures are a common choice for most texture scenarios and have good performance on older hardware. UNorm means, as already answered in the comments, "in the range of 0.0 .. 1.0" and is, again, a common way to access color data in textures.
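As a small illustration of that mapping (plain Python, not SharpDX): an 8-bit UNORM channel stores a byte that the GPU interprets as a fraction of 255, whereas a UINT channel hands the shader the raw integer.

def unorm8_to_float(byte_value):
    # UNORM: 0 -> 0.0, 255 -> 1.0
    return byte_value / 255.0

print(unorm8_to_float(0), unorm8_to_float(128), unorm8_to_float(255))
# 0.0 0.5019607843137255 1.0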
If you look at the Hardware Support for Direct3D 10Level9 Formats (Windows) page, you will see that DXGI_FORMAT_R8G8B8A8_UNORM as well as DXGI_FORMAT_B8G8R8A8_UNORM are supported on DirectX 9 hardware. You will not run into compatibility problems with either of them.
Performance depends on how your device is initialized (RGBA/BGRA?), on the hardware (i.e. which DX feature level is supported) and on the OS you are running your software on. You will have to run your own tests to find out (though in the case of these common and similar formats, the difference should be a single-digit percentage at most).

Does functional programming take up more memory?

Warning! possibly a very dumb question
Does functional programming eat up more memory than procedural programming?
I mean ... if your objects (data structures, whatever) are all immutable, don't you end up having more objects in memory at a given time?
Doesn't this eat up more memory?
It depends on what you're doing. With functional programming you don't have to create defensive copies, so for certain problems it can end up using less memory.
Many functional programming languages also have good support for laziness, which can further reduce memory usage as you don't create objects until you actually use them. This is arguably something that's only correlated with functional programming rather than a direct cause, however.
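As a small illustration (plain Python generators standing in for a lazy functional language), values are only materialized when they are demanded:

def squares():
    # an "infinite" lazy sequence: nothing is computed until asked for
    n = 0
    while True:
        yield n * n
        n += 1

gen = squares()
first_five = [next(gen) for _ in range(5)]   # only these five values ever exist
print(first_five)                            # [0, 1, 4, 9, 16]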
Persistent values, which functional languages encourage but which can also be implemented in an imperative language, make sharing a no-brainer.
The generally accepted idea is that with a garbage collector there is some amount of wasted space at any given time (blocks that are already unreachable but not yet collected). On the other hand, without a garbage collector you very often end up copying values that are immutable and could be shared, just because it's too much of a mess to decide who is responsible for freeing the memory after use.
These ideas are expanded on a bit in this experience report which does not claim to be an objective study but only anecdotal evidence.
Apart from avoiding defensive copies by the programmer, a very smart implementation of pure functional programming languages like Haskell or Standard ML (which lack physical pointer equality) can actively recover sharing of structurally equal values in memory, e.g. as part of the memory management and garbage collection.
Thus you can have automatic hash consing provided by your programming language runtime-system.
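A toy sketch of the idea (the table and helper below are made up for illustration; real runtime systems typically recover sharing during garbage collection rather than at construction time):

_interned = {}

def hcons(head, tail):
    # return a shared node for structurally equal (head, tail) pairs
    key = (head, id(tail))
    node = _interned.get(key)
    if node is None:
        node = (head, tail)
        _interned[key] = node
    return node

a = hcons(1, hcons(2, None))
b = hcons(1, hcons(2, None))
assert a is b   # structurally equal lists end up physically shared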
Compare this with objects in Java: object identity is an integral part of the language definition. Even just exchanging one immutable String for another poses semantic problems.
There is indeed at least a tendency to regard memory as an abundant resource (which, in fact, it really is in most cases), but this applies to modern programming as a whole.
With multiple cores, parallel garbage collectors and available RAM in the gigabytes, one concentrates on different aspects of a program than in earlier times, when every byte one could save counted. Remember when Bill Gates reportedly said "640K should be enough for every program"?
I know that I'm rather late to this question.
Functional languages do not in general use more memory than imperative or OO languages; it depends more on the code you write. Yes, F#, SML, Haskell and the like have immutable values (not variables), but for all of them it goes without saying that if you update e.g. a singly linked list, only what is necessary is recomputed.
Say you have a list of 5 elements, you remove the first 3 and add two new ones in front of what remains. The new list simply takes the pointer to the fourth element and points at that data, i.e. the tail is reused, as seen below.
old list: [x0, x1, x2] --\
                          +--> [x3, x4]   (shared tail)
new list:     [y0, y1] --/
In an imperative language with mutable lists we could not do this, because the values x3 and x4 could very well change over time, and so could the list [x3,x4]. And if the 3 removed elements are not used afterwards, the memory they use can be reclaimed right away, in contrast to unused space in an array.
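Here is a minimal sketch of that tail sharing in Python (the Node class and cons helper are made up for illustration):

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    value: int
    rest: "Optional[Node]" = None

def cons(value, rest=None):
    return Node(value, rest)

tail = cons(3, cons(4))                      # [x3, x4]
old = cons(0, cons(1, cons(2, tail)))        # [x0, x1, x2, x3, x4]
new = cons(10, cons(11, tail))               # [y0, y1, x3, x4]

assert new.rest.rest is old.rest.rest.rest   # the [x3, x4] tail is shared, not copied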
That all data are immutable (except for IO) is a strength. It turns data-flow analysis from a non-trivial computation into a trivial one. Combined with an often very strong type system, this gives the compiler a lot of information about the code, which it can use for optimizations it normally could not do because of undecidability. Most often the compiler turns values that are recomputed recursively and discarded after each iteration into a mutable computation. These two things give you a strong guarantee that if your program compiles, it will work (with some assumptions).
If you look at the language Rust (not functional), just by learning about its "borrow" system you will understand more about how and when things can be shared safely. It is a language that is painful to write code in unless you like to see your computer scream at you that you are an idiot. Rust is, for the most part, the combination of more than 40 years of research on programming languages and type theory. I mention Rust because, despite the pain of writing in it, it promises that if your program compiles there will be no dangling pointers or data races, even in multi-threaded programs. This is because it builds on much of the research that has been done on functional programming languages.
For a more complex example of functional programming using less memory: I have written a lexer/parser interpreter (the same as a generator, but without the need to generate a code file). When computing the states of the DFA (deterministic finite automaton) it uses immutable sets; because it builds new sets from already-computed sets, my code allocates less memory, simply because it references already-known data instead of copying it into a new set.
To wrap it up: yes, functional programming can use more memory than imperative programming. Most likely that is because you are using the wrong abstraction to mirror the problem, i.e. if you try to do it the imperative way in a functional language, it will hurt you.
Try this book; it does not have much on memory management, but it is a good book to start with if you want to learn about compiler theory. And yes, it is legal to download; I have asked Torben, he is my old professor.
http://hjemmesider.diku.dk/~torbenm/Basics/
I'll throw my hat in the ring here. The short answer to the question is no, and this is because being immutable is not the same as being kept in memory. For example, let's take this toy program:
x = 2
x = x * 3
x = x * 2
print(x)
This uses mutation to compute new values. Compare it to the same program which does not use mutation:
x = 2
y = x * 3
z = y * 2
print(z)
At first glance, it appears this requires 3x the memory of the first program! However, just because a value is immutable doesn't mean it needs to be stored in memory. In the case of the second program, after y is computed, x is no longer necessary, because it isn't used for the rest of the program, and can be garbage collected, or removed from memory. Similarly, after z is computed, y can be garbage collected. So, in principle, with a perfect garbage collector, after we execute the third line of code, I only need to have stored z in memory.
Another oft-worried-about source of memory consumption in functional languages is deep recursion, for example computing a large factorial.
calc_fact(x):
    if x > 1:
        return x * calc_fact(x - 1)
    else:
        return x
If I run calc_fact(100000), I could implement this in a way which requires storing 100000 pending calls in memory, or I could use tail-call elimination (basically keeping only the most recently computed value instead of every pending call). For less straightforward recursion you can resort to trampolining. So for functional languages which support this, recursion does not need to be a source of massive memory consumption either. However, not all nominally functional languages do (for example, JavaScript does not).
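For illustration, here is the accumulator-passing, tail-recursive shape of that function (the names are made up; a language needs tail-call elimination for the recursive form to run in constant stack space, which Python itself does not provide, so the equivalent loop is shown as well):

def calc_fact_tail(x, acc=1):
    if x > 1:
        return calc_fact_tail(x - 1, acc * x)   # tail call: nothing happens after it
    return acc

def calc_fact_loop(x):
    acc = 1
    while x > 1:
        acc *= x
        x -= 1
    return acc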
