What is the difference between tf.estimator.Estimator and tf.contrib.learn.Estimator in TensorFlow

Some months ago, I used the tf.contrib.learn.DNNRegressor API from TensorFlow, which I found very convenient to use. I didn't keep up with the development of TensorFlow over the last few months. Now I have a project where I want to use a Regressor again, but with more control over the actual model than DNNRegressor provides. As far as I can see, this is supported by the Estimator API via the model_fn parameter.
But there are two Estimators in the TensorFlow API:
tf.contrib.learn.Estimator
tf.estimator.Estimator
Both provide a similar API, but are nevertheless slightly different in their usage. Why are there two different implementations and are there reasons to prefer one?
Unfortunately, I can't find any differences explained in the TensorFlow documentation, nor a guide on when to use which of the two. Actually, working through the TensorFlow tutorials produced a lot of warnings, as some of the interfaces have apparently changed (the input_fn parameter instead of the x, y parameters, et cetera).

I wondered the same and cannot give a definitive answer, but I have a few educated guesses that might help you:
It seems that tf.estimator.Estimator, together with a model function that returns a tf.estimator.EstimatorSpec, is the most current approach; it is the one used in the newer examples and the one to use in new code.
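For illustration, here is a minimal sketch of that pattern, assuming TensorFlow 1.x-era APIs (the toy linear model, the "x" feature key, and the /tmp/model directory are made up for this example):
import tensorflow as tf

def model_fn(features, labels, mode):
    # Toy linear model; your own graph-building code goes here.
    predictions = tf.layers.dense(features["x"], units=1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)
    loss = tf.losses.mean_squared_error(labels, predictions)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir="/tmp/model")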
My guess is that tf.contrib.learn.Estimator is an early prototype that got replaced by tf.estimator.Estimator. According to the docs, everything in tf.contrib is unstable API that may change at any time, and it looks like the tf.estimator module is the stable API that “evolved” from the tf.contrib.learn module. I assume the authors simply forgot to mark tf.contrib.learn.Estimator as deprecated, and that it hasn't been removed yet so that existing code won't break.

Now there is this explicit statement in the docs:
Note: TensorFlow also includes a deprecated Estimator class at tf.contrib.learn.Estimator, which you should not use.
https://www.tensorflow.org/programmers_guide/estimators
For some reason, though, it's not marked as deprecated in the code.

To add to Christoph's answer.
The distinction between these packages was mentioned explicitly at TensorFlow Dev Summit 2017 by Martin Wicke:
The distinction between core and contrib is really: in core, things don't change. Things are backward compatible until release 2.0, and nobody's thinking about that right now. If you have something in core, it's stable, you should use it. If you have something in contrib, the API may change and, depending on your needs, you may or may not want to use it.
So you can think of the tf.contrib package as "experimental" or "early preview". For classes that already exist in both tf.estimator and tf.contrib, you should definitely use the tf.estimator version, because the tf.contrib class gets deprecated automatically (even if that's not stated explicitly in the documentation) and can be dropped in the next release.
As of TensorFlow 1.4, the list of "graduated" classes includes: Estimator, DNNClassifier, DNNRegressor, LinearClassifier, LinearRegressor, DNNLinearCombinedClassifier, and DNNLinearCombinedRegressor. These should be ported to tf.estimator.
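As a rough sketch of what such a port looks like (TF 1.4-era APIs; feature_cols stands for a list of feature columns defined elsewhere):
# Before: the contrib version, which is deprecated
from tensorflow.contrib.learn import DNNRegressor
regressor = DNNRegressor(hidden_units=[10, 10], feature_columns=feature_cols)

# After: the graduated core version
import tensorflow as tf
regressor = tf.estimator.DNNRegressor(hidden_units=[10, 10],
                                      feature_columns=feature_cols)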

I was about to ask the same question.
I guess tf.estimator.Estimator is the high-level interface and the recommended one to use, while tf.contrib.learn.Estimator is supposedly not a high-level interface (but it is one, in fact).
As Christoph mentioned, tf.contrib is unstable, so tf.contrib.learn.Estimator is vulnerable to changes. It changed from the 0.x versions to version 1.1, and it changed again in December 2016. The problem is that their usage seems different. You can use tf.contrib.learn.SKCompat to wrap a tf.contrib.learn.Estimator, while you can't do the same with tf.estimator.Estimator. And the model_fn requirements/parameters are different, as the error messages show.
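To make that usage difference concrete, here is a hedged sketch from memory (TF 1.x; contrib_estimator, core_estimator, x_train, and y_train are placeholders, so treat the details as approximate):
import tensorflow as tf

# contrib: SKCompat wraps an estimator to expose an sklearn-style fit(x, y)
wrapped = tf.contrib.learn.SKCompat(contrib_estimator)
wrapped.fit(x_train, y_train, steps=1000)

# core: tf.estimator.Estimator only accepts an input_fn
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": x_train}, y=y_train, shuffle=True)
core_estimator.train(input_fn=input_fn, steps=1000)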
The conclusion is that these two Estimators are different things!
Anyway, I think the TensorFlow docs did a very bad job on this topic, since tf.estimator is on their tutorial page, which means they are very serious about it...

Related

Is there a way to include another file in SMTLib?

Similar to #include in C, for importing functions and axioms that are defined in another file. I wasn't able to find such functionality described in the SMTLib documentation or in the online examples. Any hints?
SMTLib has no means of #include'ing or importing other files. This might look like a shortcoming, but it is quite rare for people to hand-write SMTLib files: it is almost always machine-generated from a higher-level language, and it is assumed that whoever generates the SMTLib can simply spit out one big file that includes everything you need.
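If you need include-like behavior in the meantime, the usual workaround is exactly that: concatenate the pieces into one file before handing it to the solver. A minimal sketch in Python (the file names are made up):
# Emulate '#include' by concatenating SMTLib files in order.
parts = ["prelude.smt2", "axioms.smt2", "query.smt2"]  # hypothetical file names
with open("combined.smt2", "w") as out:
    for path in parts:
        with open(path) as f:
            out.write(f.read())
            out.write("\n")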
Having said that, I think this would indeed be a useful feature to have. The SMTLib standard is always evolving, and such features are usually discussed on their mailing list:
https://groups.google.com/forum/#!forum/smt-lib
Feel free to join the discussion and make a request!

Flutter standard packages

Seeing the several Dart packages published on the Dart package website, I am curious to know which packages Flutter endorses.
That question alone would be vague, so I would like to focus on a specific package, dio. I have contacted a few Flutter developers and have been told that the package is not yet an industry standard; I was also introduced to some packages that were published just hours ago, for example jaguar_retrofit. I also see the Dart http package used frequently in the Flutter documentation.
This leads me to wonder which packages will be the most promising in the future.
Can someone solve the package mystery for me? Any Flutter insights are welcome.
This is a valid question, but not one that you'll probably find a definitive answer to on Stack Overflow (and it may be closed as off-topic, although I won't cast that vote). You might have better luck at https://softwarerecs.stackexchange.com/, although there may not be too many Dart/Flutter-specific people there; I don't know for sure.
But realistically, no one knows what might happen to the packages in the future other than the people maintaining them. That would probably be a good first step: make contact with the developers, as they will be able to give you a better indication of how committed they are to maintaining their code.
Other than that, what I'd look for is who the publisher of the package is (see below, under "Author").
If it is the 'Dart Team' or 'Flutter Team', there's a fairly good chance it will be maintained. If it isn't, but the uploader has an '@google.com' email address, there's a chance it's just one of their 20% projects, but there's still a better chance of it being maintained than with a random dev.
And finally, if the package's licence allows for it (which pretty much everything on pub should), you may be able to help the developer with it in the future, in which case everyone wins =).
It helps to look at the official documentation:
Fetch data from the internet
JSON and serialization
More in the cookbook.
Some time ago, I took a look at the dio source code and I'm not really convinced that it is a good option. Basically it is just a thin wrapper around the standard http library.
The retrofit clone seems to rely on a custom JSON serializer code generator, instead of using one of the standard solutions.

Custom NodeInfo implementation with Saxon 9.7 HE

I have custom NodeInfo, DocumentInfo, and ExternalObjectModel implementations written against Saxon 8.7.
I also need to support a few custom functions.
My understanding is that Saxon 9.7 HE has better support, so I am trying to migrate from the 8.7-based implementation to 9.7 HE.
Is there a way to switch off the XSLT functionality? I don't need it for now.
Is s9api the recommended API to get the following features?
- working with custom data models (I don't have XML documents)
- supporting custom functions
- providing a custom implementation of the current() function
The current implementation has this pattern:
// Saxon 8.x usage; docw is our custom document wrapper and DataNode our custom node type
XPathEvaluator eval = new XPathEvaluator(docw);
eval.setNamespaceContext(new NamespaceContext() {
    // stripped off
});
List<DataNode> res = eval.evaluate(xpath);
Now the XPathEvaluator does not accept the NodeInfo implementor, and evaluate returns a string.
What are the relevant new APIs/classes in 9.7?
Also, there is no saxon-xpath JAR any more; I think that functionality is now part of Saxon-HE.
A lot has changed between 8.7 and 9.7 - you are talking about two releases separated by about 10 years, with 10 major releases and perhaps 100 maintenance releases intervening. While the changes to the NodeInfo interface between any two major releases will be very minor, they will accumulate over time to a significant difference.
Saxon 9.7 changed the DocumentInfo interface, replacing it (in effect) with a new TreeInfo object to hold information about a tree whether or not the root node is a document node.
A question like "what are the new api/classes in 9.7" is much too broad. We publish detailed change information at each major release, and the online interactive documentation has a change history which allows you to list changes by category between any two selected releases. With two releases as far apart as 8.7 and 9.7 it is a very long list, and there's no point even starting to summarise it here.
saxon-xpath was once a separate JAR file, I think the reason was probably so that you could keep it off your classpath to avoid JAXP applications picking it up by accident. The functionality is now in the main JAR file - except that Saxon no longer advertises itself as a JAXP XPath provider, to avoid this problem.
I would generally recommend use of the s9api interface to anyone writing Saxon applications especially if they need to transcend XSLT, XPath, XSD, and XQuery. The JAXP equivalents are much messier: they don't integrate well across tools, they are often not type-safe, they don't provide access to functionality in the latest W3C standards, etc. But if you're doing deep integration, e.g. defining your own object models and replacing system functions, then you're going to have to dig below s9api into internal lower-level APIs.
A great deal is possible here, but it's not 100% stable from one release to the next, and it's not always very well documented. We're happy to answer technical questions but we expect you to have a high level of technical competence if you tackle such integration.

Differences between classes in the backtype.storm, org.apache.storm, and com.twitter.heron packages

I want to write some custom schedulers for Apache Heron, and I'm diving a little deep into the source code. I noticed that in the Heron source code there are a couple of packages with similar classes. For example, most classes in backtype.storm and org.apache.storm are similar (exactly similar, in that the code inside is identical). There are also some similar classes between these two packages and com.twitter.heron (for example com.twitter.heron.api.tuple.Fields), but some of them have different code inside (such as the Fields class). I know that when writing topologies we can import whichever package we want and choose either one of these, but I'm curious about the differences between them and why they put all of these packages together rather than merging them. And if the storm classes are the only choice for writing topologies, what are the classes in the com.twitter.heron package good for?
I know that Heron is designed to be fully backward compatible with Storm, and this might be due to that backward-compatibility requirement, but I have to admit that this has confused me a lot, because I need to write my own code inside these classes and I don't know which one to choose, i.e. which one is constantly being developed and maintained by the developers and should be my candidate to modify.
Thanks in advance.
Based on the descriptions from the developer team here:
Use of heron api classes is not recommended - since we might change them frequently. They are meant for internal usage only.
backtype.storm is for applications written against pre-1.0.0 Storm. For post-1.0.0 applications, you should use org.apache.storm.

Concise description of the Lua VM?

I've skimmed Programming in Lua and looked at the Lua Reference Manual.
However, they both tell me that this function does this, but not how.
When reading SICP, I got this feeling of: "ah, here's the computational model underlying Scheme"; I'm trying to get the same sense concerning Lua, i.e. a concise description of its VM, a "how" rather than a "what".
Does anyone know of a good document (besides the C source) describing this?
You might want to read the No-Frills Intro to Lua 5(.1) VM Instructions (pick a link, click on the Docs tab, choose English -> Go).
I don't remember exactly where I've seen it, but I remember reading that Lua's authors specifically discourage end-users from getting into too much detail on the VM; I think they want it to be as much of an implementation detail as possible.
Besides the already mentioned A No-Frills Introduction to Lua 5.1 VM Instructions, you may be interested in this excellent post by Mike Pall on how to read the Lua source.
Also see the related Lua-Users Wiki page.
See http://www.lua.org/source/5.1/lopcodes.h.html. The list starts at OP_MOVE.
The computational model underlying Lua is pretty much the same as the computational model underlying Scheme, except that the central data structure is not the cons cell; it's the mutable hash table. (At least until you get into metaprogramming with metatables.) Otherwise all the familiar stuff is there: nested first-class functions with mutable local variables (let-bound variables in Scheme), and so on.
It's not clear to me that you'd get much from a study of the VM. I did some hacking on the VM a while back and it's a lot like any other register-oriented VM, although maybe a bit cleaner. Only a handful of instructions are Lua-specific.
If you're curious about the metatables, the semantics is described clearly, if somewhat verbosely, in Section 2.8 of the reference manual for Lua 5.1. If you look at the VM code in src/lvm.c you'll see almost exactly that logic implemented in C (e.g., the internal Arith function). The VM instructions are specialized for the common cases, but it's all terribly straightforward; nothing clever is involved.
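As an illustration of that dispatch logic, here is a rough transcription into Python (get_metamethod and the metatable attribute are inventions of this sketch, not Lua internals; real Lua also coerces numeric strings in the fast path):
import operator

def arith(op, a, b, event):
    # Fast path: both operands are numbers, so do the raw arithmetic.
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return op(a, b)
    # Slow path: look for a metamethod (e.g. "__add"), trying the left
    # operand's metatable first, then the right one's.
    handler = get_metamethod(a, event) or get_metamethod(b, event)
    if handler is None:
        raise TypeError("attempt to perform arithmetic on a non-number value")
    return handler(a, b)

def get_metamethod(value, event):
    # Emulates a Lua metatable lookup with a plain attribute, for this sketch only.
    mt = getattr(value, "metatable", None)
    return mt.get(event) if mt else None

print(arith(operator.add, 2, 3, "__add"))  # prints 5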
For years I've been wanting a more formal specification of Lua's computational model, but my tastes run more toward formal semantics...
I've found The Implementation of Lua 5.1 very useful for understanding what Lua is actually doing.
It explains the hashing techniques, garbage collection and some other bits and pieces.
Another great paper is The Implementation of Lua 5.0, which describes the design of and motivations behind various key systems in the VM. I found that reading it was a great way to parse and understand what I was seeing in the C code.
I am surprised you refer to the C source for the VM, as this is protected by lua.org and Tecgraf/PUC-Rio in Brazil, especially as the language is used for real business and commercial applications in a number of countries. The paper The Implementation of Lua contains details about the VM in the most detail it is permitted to include, but the structure of the VM is proprietary. It is worth noting that versions 5.0 and 5.1 were commissioned by IBM in Europe for use on customer mainframes, and their register-based version has a VM which accepts the IBM-defined format of intermediate instructions.
