Does Intel IPP 8.0 support in-place operations?

IPP <= 7.1 has special in-place functions.
In IPP 8 they are deprecated: see the deprecation-summary.
It is not clear if the new out-of-place functions also support in-place operation.
My guess is that for some functions it is OK to pass the same pointer for src and dst, but for others it is not; this is not documented.
Here is the documentation

I've had a similar question and have posted it under Intel's developer zone. The following link will take you to my post and to the answer I got from Intel:
http://software.intel.com/en-us/forums/topic/498093
Here is a short quote from the above link:
in IPP 8.1 (will be available on the web on ww06 2014) deprecation message removed from all ipps in-place functionality (based on customers' feedback). The same is planned be done for ippi domain in the nearest future.
Hope you find this helpful (I did).

Here's a quote from Intel's comment on the deprecated functions:
In-place functionality will be removed: In-place functions accept only
one pointer for input and output. The out-of-place versions often
offer the same functionality but with the additional flexibility of
specifying a different output buffer.
In my experience, all out-of-place functions with deprecated in-place variants support in-place operation: pSrc and pDst may point to the same memory.
Therefore my answer is: Yes, IPP 8.0 still supports in-place operations, but it's not documented well.
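To make that concrete, here is a minimal sketch of mine (not from the original answer), assuming IPP 8.x with the umbrella header ipp.h; ippsMulC_32f stands in for whatever function you actually use, as the out-of-place counterpart of the deprecated ippsMulC_32f_I:
/* Sketch: in-place use of an out-of-place IPP function. */
#include <stdio.h>
#include <ipp.h>

int main(void)
{
    int len = 1024;
    Ipp32f *buf = ippsMalloc_32f(len);
    ippsSet_32f(1.0f, buf, len);

    /* Out-of-place signature, but pSrc and pDst are the same buffer. */
    IppStatus st = ippsMulC_32f(buf, 2.0f, buf, len);

    printf("status: %s, buf[0] = %f\n", ippGetStatusString(st), buf[0]);
    ippsFree(buf);
    return 0;
}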

Here is a list of functions deprecated in 7.1, with recommended substitutes. You should notice that the non-in-place functions are typically recommended as substitutes for their in-place counterparts.
Here is a forum discussion in which an Intel engineer affirms that the non-in-place functions can be used in place of their in-place counterparts by setting src == dst.
There is a caveat, though. If you're using IPP 7.0, the compiler will issue deprecation warnings. However, for at least some of those functions, using the src == dst method produces corrupt output; this doesn't appear to be fully implemented until 7.1. I've experienced this issue personally with filter functions, and there's a question in the discussion about it, though Intel never responded to it.
It's frustrating that Intel hasn't been more forthcoming about clearly documenting this change. The resulting bugs are very difficult to diagnose and could easily be overlooked entirely. The only way to catch them is to compare the output of both functions, and few people would bother to do that.
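If you want to guard against exactly that class of bug, a check along these lines (again a sketch of mine; ippsMulC_32f is just a stand-in) is cheap insurance:
/* Sketch: compare the true out-of-place result with the src == dst result. */
#include <string.h>
#include <stdio.h>
#include <ipp.h>

int main(void)
{
    int len = 1024;
    Ipp32f *src  = ippsMalloc_32f(len);
    Ipp32f *dst  = ippsMalloc_32f(len);
    Ipp32f *work = ippsMalloc_32f(len);
    ippsSet_32f(3.0f, src, len);

    ippsMulC_32f(src, 2.0f, dst, len);   /* reference: separate buffers */
    ippsCopy_32f(src, work, len);
    ippsMulC_32f(work, 2.0f, work, len); /* candidate: same buffer */

    printf("%s\n", memcmp(dst, work, len * sizeof(Ipp32f)) == 0 ? "match" : "CORRUPT");
    ippsFree(src); ippsFree(dst); ippsFree(work);
    return 0;
}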

Related

What is the difference between tf.estimator.Estimator and tf.contrib.learn.Estimator in TensorFlow

Some months ago, I used the tf.contrib.learn.DNNRegressor API from TensorFlow, which I found very convenient to use. I haven't kept up with the development of TensorFlow over the last few months. Now I have a project where I want to use a Regressor again, but with more control over the actual model than DNNRegressor provides. As far as I can see, this is supported by the Estimator API via the model_fn parameter.
But there are two Estimators in the TensorFlow API:
tf.contrib.learn.Estimator
tf.estimator.Estimator
Both provide a similar API, but are nevertheless slightly different in their usage. Why are there two different implementations and are there reasons to prefer one?
Unfortunately, I can't find any differences in the TensorFlow documentation, nor a guide on when to use which of the two. Actually, working through the TensorFlow tutorials produced a lot of warnings, as some of the interfaces have apparently changed (the input_fn parameter instead of the x,y parameters, et cetera).
I wondered the same and cannot give a definitive answer, but I have a few educated guesses that might help you:
It seems that tf.estimator.Estimator together with a model function that returns tf.estimator.EstimatorSpec is the most current one that is used in the newer examples and the one to be used in new code.
My guess now is that tf.contrib.learn.Estimator is an early prototype that got replaced by tf.estimator.Estimator. According to the docs, everything in tf.contrib is unstable API that may change at any time, and it looks like the tf.estimator module is the stable API that “evolved” from the tf.contrib.learn module. I assume that the authors just forgot to mark tf.contrib.learn.Estimator as deprecated and that it hasn't been removed yet so existing code won't break.
Now there is this explicit statement in the docs:
Note: TensorFlow also includes a deprecated Estimator class at tf.contrib.learn.Estimator, which you should not use.
https://www.tensorflow.org/programmers_guide/estimators
For some reason it's not marked as deprecated in the code.
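To illustrate the pattern Christoph describes (a sketch of mine for TF 1.x; my_model_fn and the layer sizes are made up), a custom model function plugs into tf.estimator.Estimator like this:
# Sketch: a minimal custom model_fn returning tf.estimator.EstimatorSpec.
import tensorflow as tf

def my_model_fn(features, labels, mode):
    # A single dense layer stands in for a real regression model.
    predictions = tf.layers.dense(features["x"], units=1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)
    loss = tf.losses.mean_squared_error(tf.reshape(labels, [-1, 1]), predictions)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=my_model_fn, model_dir="/tmp/my_model")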
To add to Christoph's answer.
The distinction between these packages was specifically mentioned at the TensorFlow Dev Summit 2017 by Martin Wicke:
The distinction between core and contrib is really: in core, things don't change. Things are backward compatible until release 2.0, and nobody's thinking about that right now.
If you have something in core, it's stable, you should use it. If you have something in contrib, the API may change and, depending on your needs, you may or may not want to use it.
So you can think of the tf.contrib package as "experimental" or "early preview". For classes that are already in both tf.estimator and tf.contrib, you should definitely use the tf.estimator version, because the tf.contrib class gets deprecated automatically (even if it's not stated explicitly in the documentation) and can be dropped in the next release.
As of TensorFlow 1.4 the list of "graduated" classes includes: Estimator, DNNClassifier, DNNRegressor, LinearClassifier, LinearRegressor, DNNLinearCombinedClassifier, and DNNLinearCombinedRegressor. These should be ported to tf.estimator.
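For these graduated classes the port is often little more than a namespace change; a sketch (the feature column here is made up):
# Sketch: porting a graduated class from tf.contrib.learn to tf.estimator.
import tensorflow as tf

cols = [tf.feature_column.numeric_column("x")]

# old (deprecated):
# reg = tf.contrib.learn.DNNRegressor(hidden_units=[10, 10], feature_columns=cols)

# new (TF >= 1.4):
reg = tf.estimator.DNNRegressor(hidden_units=[10, 10], feature_columns=cols)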
I was about to ask the same question.
I guess tf.estimator.Estimator is the high-level interface and the recommended usage, while tf.contrib.learn.Estimator is supposedly not high-level (but it is, in fact).
As Christoph mentioned, tf.contrib is unstable, so tf.contrib.learn.Estimator is vulnerable to changes. It changed from the 0.x versions to 1.1 and changed again in December 2016. The problem is that their usage differs: you can wrap tf.contrib.learn.Estimator with tf.contrib.learn.SKCompat, but you can't do the same with tf.estimator.Estimator, and the model_fn requirements/parameters differ, as the error messages show.
The conclusion is that these two Estimators are different things!
Anyway, I think the TF documentation handles this topic very badly; since tf.estimator is on their tutorial page, they seem to be very serious about it...

View code generated by IBM's Enterprise COBOL compiler

I have recently started doing some work with COBOL, having only ever worked in z/OS Assembler on a mainframe before.
I know that COBOL will be translated into Mainframe machine-code, but I am wondering if it is possible to see the generated code?
I want to use this to better understand the inner workings of COBOL.
For example, if I was to compile a COBOL program, I would like to see the assembly that results from the compile. Is something like this possible?
Relenting, only because of this: "I want to use this to better understand the inner workings of COBOL".
The simple answer is that there is, for Enterprise COBOL on z/OS, a compiler option, LIST. LIST will provide what is known as the "pseudo assembler" output in your compile listing (and some other useful stuff for understanding the executable program). Another compiler option, OFFSET, shows the displacement from the start of the program of the code generated for each COBOL verb. LIST (which inherently has the offset already) and OFFSET are mutually exclusive. So you need to specify LIST and NOOFFSET.
Compiler options can be specified on the PARM of the EXEC PGM= for the compiler. Since the PARM is limited to 100 characters, compiler options can also be specified in a data set with a DDName of SYSOPTF (whose use you enable, in turn, via another compiler option).
A third way to specify compiler options is to include them in the program source, using the PROCESS or (more common, since it is shorter) CBL statement.
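For illustration only (the step and program names below are made up, and your site's compile procedure will differ), the PARM route looks like this:
//* Sketch: options on the PARM of the Enterprise COBOL compiler step.
//COBCOMP  EXEC PGM=IGYCRCTL,PARM='LIST,NOOFFSET'
and the source-statement route puts a CBL statement ahead of the program:
       CBL LIST,NOOFFSET
       IDENTIFICATION DIVISION.
       PROGRAM-ID. SAMPLE1.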
It is likely that you have a "panel" to compile your programs. This may have a field allowing options to be specified.
However, be aware of a couple of things: it is possible, when installing the compiler, to "nail in" compiler options (which means they can't be changed by the application programmer); it is possible, when installing the compiler, to prevent the use of PROCESS/CBL statements.
The reason for the above is standardisation. There are compiler options which affect code generation, and using different code generation options within the same system can cause unwanted effects. Even across systems, different code generation options may not be desirable if programmers are prone to expect the "normal" options.
It is unlikely that listing-only options will be "nailed", but if you are prevented from specifying options, then you may need to make a special request. This is not common, but you may be unlucky. Not my fault if it doesn't work for you.
These compiler options, and how you can specify them, are documented in the Enterprise COBOL Programming Guide for your specific release. There you will also find the documentation of the pseudo-assembler (be aware that it appears in the document as "pseudo-assembler", "pseudoassembler" and "pseudo assembler", for no good reason).
When you see the pseudo-assembler, you will see that it is not in the same format as an Assembler statement (I've never discovered why, but as far as I know it has been that way for more than 40 years). The line with the pseudo-assembler will also contain the machine-code in the format you are already familiar with from the output of the Assembler.
Don't expect to see a compiled COBOL program looking like an Assembler program that you would write. Enterprise COBOL adheres to a language Standard (1985) with IBM Extensions. The answer to "why does it do it like that" will be "because", except for optimisations (see later).
What you see will depend heavily on the version of your compiler, because in the summer of 2013, IBM introduced V5, with entirely new code generation and optimisation. Up to V4.2, the code generator dated back to "ESA", which meant that over 600 machine instructions introduced since ESA were not available to Enterprise COBOL programs, nor were the extended registers. The same COBOL program compiled with V4.2 and with V6.1 (the latest version at the time of writing) will be markedly different, and not only because of the different instructions, but also because the structure of an executable COBOL program was redesigned.
Then there's optimisation. With V4.2, there was one level of possible optimisation, and the optimised code was generally "recognisable". With V5+, there are three levels of optimisation (you get level zero without asking for it) and the optimisations are much more extreme, including, well, extreme stuff. If you have V5+, and want to know a bit more about what is going on, use OPT(0) to get a grip on what is happening, and then note the effects of OPT(1) and OPT(2) (and realise, from the increased compile times, how much work is put into the optimisation).
There's not really a substantial amount of official documentation of the internals. Search-engining will reveal some stuff. IBM's COBOL Cafe forum (on Compiler Cafe) is a good place if you want more knowledge of V5+ internals, as a couple of the developers attend there. For up to V4.2, here may be as good a place as any to ask further specific questions.

Custom NodeInfo implementation with Saxon 9.7 HE

I have custom NodeInfo, DocumentInfo and ExternalObjectModel implementations written against Saxon 8.7.
I also need to support a few custom functions.
My understanding is that Saxon 9.7 HE has better support, so I am trying to migrate from the 8.7-based implementation to 9.7 HE.
Is there a way to switch off the XSLT functionality? I don't need it for now.
Is s9api the recommended API to get the following features:
- working with custom data models (I don't have XML documents)
- supporting custom functions
- providing a custom implementation of the current() function
The current implementation has this pattern:
XPathEvaluator eval = new XPathEvaluator(docw);   // docw: the custom document wrapper
eval.setNamespaceContext(new NamespaceContext() {
    // stripped off
});
List<DataNode> res = eval.evaluate(xpath);        // DataNode: the custom node class
Now, the XPathEvaluator no longer accepts the NodeInfo implementor, and evaluate returns a string.
What are the relevant new APIs/classes in 9.7?
Also, there is no saxon-xpath any more; I think that functionality is now part of Saxon-HE.
A lot has changed between 8.7 and 9.7 - you are talking about two releases separated by about 10 years, with 10 major releases and perhaps 100 maintenance releases intervening. While the changes to the NodeInfo interface between any two major releases will be very minor, they will accumulate over time to a significant difference.
Saxon 9.7 changed the DocumentInfo interface, replacing it (in effect) with a new TreeInfo object to hold information about a tree whether or not the root node is a document node.
A question like "what are the new api/classes in 9.7" is much too broad. We publish detailed change information at each major release, and the online interactive documentation has a change history which allows you to list changes by category between any two selected releases. With two releases as far apart as 8.7 and 9.7 it is a very long list, and there's no point even starting to summarise it here.
saxon-xpath was once a separate JAR file, I think the reason was probably so that you could keep it off your classpath to avoid JAXP applications picking it up by accident. The functionality is now in the main JAR file - except that Saxon no longer advertises itself as a JAXP XPath provider, to avoid this problem.
I would generally recommend use of the s9api interface to anyone writing Saxon applications especially if they need to transcend XSLT, XPath, XSD, and XQuery. The JAXP equivalents are much messier: they don't integrate well across tools, they are often not type-safe, they don't provide access to functionality in the latest W3C standards, etc. But if you're doing deep integration, e.g. defining your own object models and replacing system functions, then you're going to have to dig below s9api into internal lower-level APIs.
A great deal is possible here, but it's not 100% stable from one release to the next, and it's not always very well documented. We're happy to answer technical questions but we expect you to have a high level of technical competence if you tackle such integration.
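As a starting point for the 9.7 APIs, here is a minimal s9api XPath evaluation (a sketch of mine; the file name and namespace URI are placeholders, and with a custom model you would set the context item from your own tree instead of building one from a file):
// Sketch: evaluating an XPath expression through s9api in Saxon 9.7 HE.
import javax.xml.transform.stream.StreamSource;
import net.sf.saxon.s9api.*;

public class S9apiXPathDemo {
    public static void main(String[] args) throws SaxonApiException {
        Processor proc = new Processor(false);      // false = HE, no licensed features
        DocumentBuilder builder = proc.newDocumentBuilder();
        XdmNode doc = builder.build(new StreamSource("input.xml"));

        XPathCompiler xpath = proc.newXPathCompiler();
        xpath.declareNamespace("ns", "http://example.com/ns");

        XPathSelector selector = xpath.compile("//ns:item").load();
        selector.setContextItem(doc);
        for (XdmItem item : selector) {
            System.out.println(item.getStringValue());
        }
    }
}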

iOS Deprecation Ramifications

I just came to realize that my project is currently using methods that are "discouraged" in iOS 4.0+. I fear that deprecation is soon inevitable. However, in practical terms, I'm not sure what this would mean for my project. Does it mean that users who attempt to run the app on a future iOS version will experience runtime errors or does it simply mean that I'll have compile-time errors when maintaining it on future iOS SDK's? Or, am I missing the boat (and the river) entirely?
Thanks
It's unlikely that they'll remove the old method entirely, Apple just suggests (see "Animations") that you use the newer format.
As stated, if you want your projects to support iOS versions <4.0, continue using the old format. If not, it's recommended to update, but if you choose not to, don't be too worried about it. (at least not until the old format becomes officially deprecated)
EDIT: Just re-read the question, and to answer that specifically: no, neither you nor any other users would experience errors in the future (provided you're using the functions correctly), even if the old format does become deprecated.

Does F# provide you automatic parallelism?

By this I mean: when you design your app to be side-effect free, and so on, will F# code automatically be distributed across all cores?
No, I'm afraid not. Given that F# isn't a pure functional language (in the strictest sense), it would be rather difficult to do so, I believe. The primary way to make good use of parallelism in F# is to use Async Workflows (mainly via the Async module, I believe). The TPL (Task Parallel Library), which is being introduced with .NET 4.0, is going to fulfil a similar role in F# (though notably it can be used in all .NET languages equally well), though I can't say I'm sure exactly how it's going to integrate with the existing async framework. Perhaps Microsoft will simply advise the use of the TPL for everything, or maybe they will leave both as an option and one will eventually become the de facto standard...
Anyway, here are a few articles on asynchronous programming/workflows in F# to get you started.
http://blogs.msdn.com/dsyme/archive/2007/10/11/introducing-f-asynchronous-workflows.aspx
http://strangelights.com/blog/archive/2007/09/29/1597.aspx
http://www.infoq.com/articles/pickering-fsharp-async
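For a flavour of that opt-in style (a sketch of mine; the URLs are placeholders), an async workflow parallelises only where you explicitly ask it to:
// Sketch: explicit (not automatic) parallelism with F# async workflows.
open Microsoft.FSharp.Control.WebExtensions
open System.Net

let fetchAsync (url : string) = async {
    use client = new WebClient()
    return! client.AsyncDownloadString(System.Uri url)
}

let pages =
    [ "http://example.com/a"; "http://example.com/b" ]
    |> List.map fetchAsync
    |> Async.Parallel            // the parallelism happens here, because you asked
    |> Async.RunSynchronously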
F# does not make it automatic, it just makes it easy.
Yet another chance to link to Luca's PDC talk. Eight minutes starting at 52:20 are an awesome demo of F# async workflows. It rocks!
No, I'm pretty sure that it won't automatically parallelise for you. It would have to know that your code was side-effect free, which could be hard to prove, for one thing.
Of course, F# can make it easier to parallelise your code, particularly if you don't have any side effects... but that's a different matter.
Like the others mentioned, F# will not automatically scale across cores and will still require a framework such as the port of ParallelFX that Josh mentioned.
F# is commonly associated with potential for parallel processing because it defaults to objects being immutable, removing the need for locking for many scenarios.
On purity annotations: Code Contracts have a Pure attribute. I remember hearing that some parts of the BCL already use it. Potentially, this attribute could be used by parallelisation frameworks as well, but I'm not aware of such work at this point. Also, I'm not even sure how well Code Contracts are usable from within F#, so there are a lot of unknowns here.
Still, it will be interesting to see how all this stuff comes together.
No it will not. You must still explicitly marshal calls to other threads via one of the many mechanisms supported by F#.
My understanding is that it won't, but Parallel Extensions is being modified to make it consumable by F#. That won't make it automatically multi-threaded, but it should make multi-threading very easy to achieve.
Well, you have your answer, but I just wanted to add that I think this is the most significant limitation of F# stemming from the fact that it is a hybrid imperative/functional language.
I would like to see some extension to F# that declares a function to be pure. That is, it has no side-effects that are not denoted by the function's type. The idea would be that a function is pure only if it references other "known-pure" functions. Of course, this would only be useful if it were then possible to require that a delegate passed as a function parameter references a pure function.
