In typical Apple fashion, there's no documentation (and what little there is borders on trolling). For example, what is simd_precise_normalize(_:)? You'd be forgiven for thinking it was a slower, more precise normalization than simd_fast_normalize(_:). But then why does simd_normalize(_:) exist?
Why is there simd_cross(simd_float3, simd_float3) and cross(SIMD3<Float>, SIMD3<Float>) when typealias simd_float3 = SIMD3<Float>?
And what about the Swift operator overloads on simd_float3?
I've filed a bug with Apple about it, but does anyone know?
But then why does simd_normalize(_:) exist?
This comment explains it: simd_normalize is equivalent to simd_precise_normalize unless you are compiling with -ffast-math, in which case it is equivalent to simd_fast_normalize. I've only used Objective-C, never Swift, but it's possible there's an equivalent option somewhere in the compiler switches or Xcode project settings.
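For what it's worth, the same three functions exist in the C API, so here is a minimal C sketch (assuming <simd/simd.h> on an Apple platform) of the behaviour described above:

#include <simd/simd.h>
#include <stdio.h>

int main(void) {
    simd_float3 v = simd_make_float3(3.0f, 4.0f, 0.0f);

    simd_float3 precise = simd_precise_normalize(v); // always full precision
    simd_float3 fast    = simd_fast_normalize(v);    // faster, less accurate reciprocal square root
    simd_float3 dflt    = simd_normalize(v);         // precise by default, fast when built with -ffast-math

    printf("%f %f %f\n", dflt.x, dflt.y, dflt.z);    // ~0.600000 0.800000 0.000000
    (void)precise; (void)fast;
    return 0;
}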
Why is there simd_cross(simd_float3, simd_float3) and cross(SIMD3<Float>, SIMD3<Float>)
I think they are equivalent. Note that the comments in the header discuss both the C-style API, like simd_cross(x, y), and the C++ API, simd::cross(x, y). It could be that in Swift both are available for some of these functions.
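If that's right, then from C the only spelling is simd_cross, and a call like the following sketch should behave the same as Swift's cross(_:_:)/normalize(_:) free functions (an illustration, not verified against the Swift overlay):

#include <simd/simd.h>

// Cross product of two edge vectors, then normalized: a typical surface-normal computation.
simd_float3 surface_normal(simd_float3 a, simd_float3 b) {
    return simd_normalize(simd_cross(a, b));
}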
I am evaluating Keil Microvision IDE on STM32H753.
I am doing a compiler comparison between ARMCC5 and AC6 at the different optimisation levels. AC6 is based on Clang.
My code does not use memcpy, and I have unchecked "Use MicroLIB" in the project settings. However, a basic byte-per-byte copy loop in my code is replaced by a call to memcpy with AC6 (only at the "high" optimisation levels). This doesn't happen with ARMCC5.
I tried the compilation options described here, -ffreestanding and -disable-simplify-libcalls, at both the compiler and linker levels, but nothing changed (for the second option, I get an error message saying the option is not supported).
In the armclang reference guide I found the options -nostdlib and -nostdlibinc, which (supposedly) prevent the compiler from using any function from a standard library.
However, I still need the math.h functions.
Do you know how to prevent Clang from using standard C library functions that are not explicitly called in the code?
EDIT: here is a quick and dirty reproducible example:
https://godbolt.org/z/AX8_WV
Please do not discuss the quality of this example; I know it is dumb, and I know about memset, etc. It is just to illustrate the issue.
GCC (and Clang) know a lot about memcpy, memset and similar functions; they are even called "builtin functions". If you do not want those functions to be used by default, just use the command-line option -fno-builtin.
https://godbolt.org/z/a42m4j
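For reference, here is a minimal sketch (not the questioner's exact godbolt code) of the kind of loop that triggers the substitution; at -O2/-O3, Clang-based compilers recognise it as the memcpy idiom and emit a library call unless -fno-builtin (or the narrower -fno-builtin-memcpy) is passed:

#include <stddef.h>

void copy_bytes(unsigned char *dst, const unsigned char *src, size_t n)
{
    // A plain byte-per-byte copy; recognised as a memcpy idiom at high optimisation levels.
    for (size_t i = 0; i < n; ++i) {
        dst[i] = src[i];
    }
}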
Since Xamarin.iOS doesn't support code generation at runtime, why do Compile() and DynamicInvoke() work as expected?
For example, the following code works fine:
var lambda = Expression.Lambda(
Expression.Add(
Expression.Constant(1),
Expression.Constant(2)
)
);
var f = lambda.Compile();
var result = f.DynamicInvoke();
// result==3 at this point
Is Xamarin evaluating the expression tree at runtime instead of emitting IL code?
On platforms that support code generation, the Reflection.Emit-based LambdaCompiler is used.
If that's not available, the expression is evaluated using the interpreter. For example, there are classes that interpret Constant and Add.
The details of the Xamarin limitations are here.
You don't seem to be using anything in the Reflection.Emit namespace, which is the big no-no. Your code must still be AOT'd. Otherwise, I would imagine it would not work.
But there HAVE been examples of [native] developers thwarting the iOS static analysis tool and circumventing the dynamic code restriction. I tried to locate the article, but couldn't find it.
Anyway, I don't think your scenario exemplifies that. Your code example will still be AOT-compiled.
But you raise a really good question: at what time does the expression get evaluated?
EDIT:
Another SO answer on the same topic: What does Expression.Compile do on Monotouch?
There's also some good info on Expression.Compile() and "full AOT" here:
http://www.mono-project.com/docs/advanced/aot/
EDIT:
After reading some more, I think I know what's going on here. It's not that Expression.Compile() won't work; it's that when your iOS app bundle is subjected to the static analysis tool on submission to the App Store, it will not pass the analysis, because it dynamically generates code. So, sure, you can use Expression.Compile(), but don't expect it to be accepted into the App Store. And, as mentioned by @svick, if you use the "full AOT" compile option, your Expression.Compile() will probably fail at runtime, or perhaps even fail to compile.
I'm compiling my first project with 64-bit support enabled. I'm running into a bunch of compiler warnings about implicit conversions to float. This is happening because I'm using fabsf() and assigning the result to a CGFloat (which is a double, not a float, on the new 64-bit architecture).
According to the answer on this question:
CGFloat-based math functions?
I just need to #include <tgmath.h> to solve this problem, and probably change fabsf to fabs. I have at least one file where this doesn't seem to be helping. I still get the warning: implicit conversion loses floating-point precision: 'double' to 'CGFloat' (aka 'float'). Here's the line that generates that warning:
CGFloat deltaX = fabs(item.center.x-point.x);
Has anyone else run across this? How did you solve it? I'd rather not disable this warning or litter my code with a ton of typecasts.
I guess you are using CGPoint, so the conversion doesn't occur inside fabs() (double -> float) but on the assignment CGFloat = double. That's probably because the compiler used fabs from math.h, which operates on doubles.
The problem with math.h is that it is imported internally by the OS X headers (Carbon, if I remember correctly), so I guess some iOS header might do the same. After a quick look, it seems that the basic set of frameworks doesn't import math.h, so you should probably look for it being imported manually. If it is imported internally by some system libraries, you probably won't be able to use those libraries and tgmath.h in a single implementation file.
If you want to check whether there are math.h dependencies, you can use a dirty trick to prevent its inclusion: add this line to the file (or, better, to the top of your prefix file):
#define __MATH_H__
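To illustrate what the two headers do differently, here is a small self-contained C sketch (CGFloat is typedef'd locally just for the example; the real definition lives in CoreGraphics):

#include <tgmath.h>    // type-generic: fabs(float) resolves to fabsf(), fabs(double) to fabs()

typedef float CGFloat; // stand-in for the 32-bit definition; on 64-bit it is double

CGFloat delta(CGFloat a, CGFloat b)
{
    // With <math.h> alone, fabs() takes and returns double, so this assignment
    // narrows double -> float and triggers the warning. With <tgmath.h> the call
    // resolves to fabsf() for float arguments and no precision is lost.
    return fabs(a - b);
}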
I was able to get the tgmath.h functions to work by including the header at the top of my PCH file.
At some point (read: an Xcode update) I had to start disabling Modules to get this to work. The details are in the question Dima links to below.
I find Erlang's arity-qualified module imports (/n, where n is the number of arguments) rather bizarre.
In Java and various other languages you can do something like:
import static com.stuff.Blah.myFunction;
Which will import all of the overloaded Blah.myFunction(..) methods regardless of parameters.
Besides being explicit, I guess, why did the language designers decide this was a good idea (I'm not trying to criticize the language... just curious)?
Does it have to do with code swapping?
Or does it have to do with hiding guard methods for recursion? If so, why not require arity on export but make it unnecessary on import?
Why would I want to be that explicit? That is, import the two-argument version of myFunction but not the three-argument version?
You should be aware of what importing functions in Erlang really does. It is a purely textual transformation. If I write -import(foo, [bar/1,baz/2])., then when I write a call like bar(5) or baz(a, 3), the compiler transforms it to foo:bar(5) or foo:baz(a, 3). That is all it does, nothing else. It doesn't check anything:
It doesn't check if the module foo contains the functions bar/1 or baz/2.
It doesn't even check if the module foo exists.
Really all it does is hide that you are calling a function in another module. That is why the recommendation from experienced Erlangers is "don't use it". It was a mistake. Unfortunately it is much easier to add stupid things than to get rid of them so we were never able to remove it.
"Does it have to do with code swapping?"
Yes, sort of. The unit of all code handling in Erlang is the module: you compile modules, load modules, and purge and delete modules. This means that there are no inter-module dependencies at all in the system, and the compiler makes no assumptions about other modules when it is compiling a module. No assumption is made that the environment in which a module is compiled will be the same one in which it is run. That is why, at runtime, the system checks whether the function you are trying to call in another module exists, or even whether the module itself exists. And that is why import is a purely textual transformation.
Erlang was originally developed in Prolog.
In Prolog, the arity carries more meaning than what you would think of as 'the arguments of a function' in a procedural programming language, so that model does not apply directly here.
The clauses married(X,Y). and married(X,Y,Z). imply different kinds of 'married' relationships, which are referred to as married/2 and married/3.
In procedural programming, add(a,b) and add(a,b,c) are meant to perform addition over a different number of arguments. That's not necessarily the case in Prolog, where the relationship 'a and b, added' and the relationship 'a, b and c, added' may mean something else entirely. Needless to say, Prolog allows you to declare add to behave the way you would expect a function to, but it allows for more. More available meaning means more need to control it.
And as in any module system, selecting what you want to expose to external clients makes sense: hence the declaration of arity.
Does it have to do with code swapping?
Kind of. The modules in Erlang are compiled separately (which is part of what allows code swapping), unlike Java classes, so the compiler doesn't know how many versions of the imported function with different arities exist. It could assume that all calls of a function with the given name come from the same module, of course, but the designers likely decided it wasn't particularly useful.
In fact, you rarely want to use imports at all, at least in my experience, just as you rarely use static imports in Java. Just write module:function, like Class.staticMethod.
Or does it have to do with hiding guard methods for recursion?
No, since not importing functions doesn't hide them in any way.
As mentioned in the title, I would like to find an implementation of HMAC-SHA-512 written in ActionScript. I was able to find a library that provides HMAC-SHA-256 along with other functions; however, I am looking for HMAC-SHA-512 specifically.
Thank you
Edit:
Or, since ActionScript and JavaScript have the same origin, can someone port this JavaScript version to ActionScript?
http://pajhome.org.uk/crypt/md5/sha512.html
Edit 2:
I have already ported the code from JavaScript to ActionScript. The code can be found in one of the answers to this question:
Porting SHA-512 Javascript implementation to Actionscript
Check out this library:
http://code.google.com/p/as3crypto/
Though it only does:
SHA-256, SHA-224, SHA-1, MD5, and MD2
So I guess that doesn't answer your question.
But it's the best crypto library for ActionScript I've seen.
The implementation you link to doesn't seem to be using any features that aren't supported by ActionScript 3. Just surround the whole thing with public class SHA512 { }, and prefix the first five functions with public.
Edit: You will also need to convert the function int64 to its own class (or possibly use Number, though I'm not sure whether you would lose precision for 64-bit integers).
I just found all of SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512) implemented at http://code.google.com/p/flame/. It also provides an HMAC implementation. I haven't tried it yet, but it looks like what you're looking for.