When writing proofs, I noticed that Agda's auto proof search frequently wouldn't find solutions that seem obvious to me. Unfortunately, coming up with a small example that illustrates the problem seems to be hard, so I'll try to describe the most common patterns instead.
I forgot to add -m to the hole to make Agda look at the module scope. Can I make that flag the default? What downsides would that have?
Often the current hole can be filled by a parameter of the function I am about to implement. Even when adding -m, Agda will not consider function parameters or symbols introduced in let or where clauses though. Is there something wrong with simply trying all of them?
When viewing a goal, symbols introduced in let or where clauses are not even displayed. Why?
What other habits can make using auto more effective?
Agda's auto proof search is hardwired into the compiler. That makes it fast, but limits the amount of customization you can do. One alternative approach would be to implement a similar proof search procedure using Agda's reflection mechanism. With the recently beefed-up version of reflection using the TC monad, you no longer need to implement your own unification procedure.
Carlos Tome has been working on reimplementing these ideas (check out his code at https://github.com/carlostome/AutoInAgda ). He's been working on several versions that try to use information from the context, print debugging info, etc. Hope this helps!
The custom_federated_algorithms_2 tutorial presents a local_train function built with tff.federated_computation.
There is a comment saying "while we could have implemented this logic entirely in TensorFlow, relying on tf.data.Dataset.reduce...". Regarding this comment:
I didn't manage to actually convert the code to use tf.data.Dataset.reduce; it seems non-trivial, and the debug messages really don't help.
I wonder what the motivation for using federated_computation in cases like this is. I looked all over the guides and really didn't find an explanation of what is going on here and when we should use it.
Thanks!
Addressing these two in order:
It may not be trivial to adapt the code given directly to use tf.data.Dataset.reduce; that comment is intended to call out that the logic expressed here is also expressible using the dataset-reduce primitive, since effectively it only represents a local reduction; there are no communications across placements happening here.
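To make that concrete, here is a minimal sketch of what such a dataset-reduce version might look like. This is not the tutorial's code; the toy linear model and all of the names here are made up for illustration:

    import tensorflow as tf

    # Sketch: a purely local training loop phrased as one dataset reduction.
    # No cross-placement communication is involved anywhere in this function.
    @tf.function
    def local_train(initial_weights, learning_rate, dataset):
        def reduce_fn(weights, batch):
            x, y = batch
            with tf.GradientTape() as tape:
                tape.watch(weights)  # weights is a plain tensor, not a Variable
                loss = tf.reduce_mean(tf.square(x * weights - y))  # toy model
            grad = tape.gradient(loss, weights)
            return weights - learning_rate * grad
        # The whole "loop" is a single reduction over the batches.
        return dataset.reduce(initial_weights, reduce_fn)

    # Toy usage with a dataset of (x, y) pairs:
    ds = tf.data.Dataset.from_tensor_slices(
        (tf.constant([1.0, 2.0, 3.0]), tf.constant([2.0, 4.0, 6.0]))).batch(1)
    print(local_train(tf.constant(1.0), tf.constant(0.1), ds))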
There are at least two distinct purposes of this demonstration. One is to show that TFF as a language does not necessarily rely on the in-graph looping constructs of TensorFlow; another is to demonstrate the ability to "capture" values using the federated computation decorator. This could be used to natively capture something like learning rate decay in TFF, by evaluating a function of the round number and closing over it in the manner above, though there are other ways to implement similar functionality, as demonstrated here for example.
I personally find this pattern a little confusing; reading into the question behind the question a little, I agree that it is confusing to use a federated_computation decorator where there is no communication happening. When writing TFF, I generally express all my local computation in TensorFlow proper (usually in a functional manner), and let TFF handle the communication only. The purpose of the second tutorial is to show that TFF proper is actually much more flexible than indicated by restricting oneself to using the pattern just described.
I'm doing a project which involves parsing the histories of common lisp repos. I need to parse them into list-of-lists or something like that. Ideally, I'd like to preserve as much of the original source file syntax as possible, in some way. For example, in the case of the text #+sbcl <something>, which I think means "If our current lisp is sbcl, read <something>, otherwise skip it", I'd like to get something like (#+ 'sbcl <something>).
I originally wrote a LALR parser in Python, which sort of worked, but it's not ideal for many reasons. I'm having a lot of difficulty getting correct output, and I have tons of special cases to add.
I figured that what I should really do is use Lisp itself, since it already has a Lisp parser built in. If I could just read a file into sexps, I could dump it into something (cl-json would do) for further processing down the line.
Unfortunately, when I attempt to read https://github.com/fukamachi/woo/blob/master/src/woo.lisp, I get the error
There is no package with the name WOO.EV.TCP
which is of course coming from line 80 of that file, since that package is defined in src/ev/tcp.lisp, and we haven't read it.
Basically, is it possible to just read the file into sexps without caring whether the packages are defined or if they contain the relevant symbols? If so, how? I've tried looking at the hyperspec reader documentation, but I don't see anything that sounds relevant.
I'm out of practice with actually writing common lisp, but it seems potentially possible to hack around this by handling the undefined package condition by creating a blank package with that name, and handling the no-symbol-of-that-name-in-package condition by just interning a given symbol. I think. I don't know how to actually do this, I don't know if it would work, I don't know how many special cases would be involved. Offhand, the first condition is called no-such-package, but the second one (at least in sbcl) is called simple-error, so I don't even know how to determine whether this particular simple-error is the no-such-symbol-in-that-package error, let alone how to extract the relevant names from the condition, fix it, and restart. I'd really like to hear from a common lisp expert that this is the right thing to do here before I go down the road of trying to do it this way, because it will involve a lot of learning.
It also occurs to me that I could fix this by just sed-ing the file before reading it. E.g. turning woo.ev.tcp:start-listening-socket into, say, woo.ev.tcp===start-listening-socket. I don't particularly like this solution, and it's not clear that I wouldn't run into tons more ugly special cases, but it might work if there's no better answer.
I am almost sure there is no easy portable way to do this for a number of reasons.
(Just limiting things to the non-existent-package problem for now.)
First of all, there is no portable access to the bit of the reader which decides that tokens are going to be symbols and then looks for package markers &c: that just happens according to the rules in section 2.3 of the standard. So you can't easily intervene in this.
Secondly, there's not enough portably-specified information in any kind of condition the reader might signal to be able to handle them.
There are several possible ways out of this bit of the problem.
If you felt sufficiently heroic you might be able to teach the reader that all of the token-starting characters are in fact things you control and then write a token-reader that somehow deals with the whole package thing by returning some object which isn't a symbol. But to do that you need to deal with numbers, and if you think that's simple, well, it's not.
If you felt less heroic you could write a more primitive token-reader which just doesn't even try to deal with anything except grabbing all the characters needed, and returns some kind of object which wraps a string. This would avoid the whole number problem at the cost of losing a lot of information.
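As a toy illustration of the shape of that second approach (in Python, since the question mentions dumping to JSON from Python anyway; it ignores strings, comments, and all reader macros, so it is nowhere near a real reader):

    # Read s-expressions into nested lists, keeping every token as an
    # uninterpreted string, so packages never come into it at all.
    def read_sexps(text):
        tokens = text.replace('(', ' ( ').replace(')', ' ) ').split()

        def parse(i):
            items = []
            while i < len(tokens):
                if tokens[i] == '(':
                    sub, i = parse(i + 1)
                    items.append(sub)
                elif tokens[i] == ')':
                    return items, i + 1
                else:
                    items.append(tokens[i])  # every token stays a plain string
                    i += 1
            return items, i

        return parse(0)[0]

    print(read_sexps("(defun f (x) (woo.ev.tcp:start-listening-socket x))"))
    # -> [['defun', 'f', ['x'], ['woo.ev.tcp:start-listening-socket', 'x']]]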
If you don't care about portability, find an implementation, understand how its reader does it, and muck around with it. There are more open source or source-available implementations than I can easily count (perhaps I am not very good at counting) so this is a pretty good approach. It's certainly what I'd do.
But this is only the start of the problems. The CL reader is hairy and, in its standard configuration (the configuration which is used for things like compile-file unless people have arranged otherwise) can run completely arbitrary code at read time, including code which modifies the reader itself, some of which may do so in an implementation-dependent way. And people use this: there's a reason Lisp is called the 'programmable programming language' and it's that people program it.
I've decided to solve this using sed (actually Python's re.sub, but who's counting?) because it'll work for my actual use case, and was easy.
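For the curious, the substitution is roughly this shape (a simplified sketch; the symbol character class is only an approximation of Lisp symbol constituents, and it will happily rewrite package markers inside strings and comments too):

    import re

    # Replace package-qualified symbols like woo.ev.tcp:start-listening-socket
    # (or with ::) by woo.ev.tcp===start-listening-socket, so the reader never
    # sees a package marker.
    SYM = r'[A-Za-z0-9.+*/<>=!?%&_-]+'

    def flatten_package_markers(source):
        return re.sub(r'({0})::?({0})'.format(SYM), r'\1===\2', source)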
For future readers: the various people saying this is impossible in general are probably right. The other questions posted by @Svante look like good, easy ways to solve part of the problem. Other parts of the problem might be solved more elegantly by replacing the reader macros for #., #+, #-, etc. with ones which just make a list, which sounds less heroic than the suggestions from @tfb, but I don't have time for that.
I am using z3py. I have a predicate over two integers that needs to be evaluated using a custom algorithm. I have been trying to get it implemented, without much success. Apparently, what I need is a procedural attachment, which is now deprecated. Could anybody tell me how I might implement this in z3py? I understand that it involves the use of tactics, but I'm afraid I haven't managed to figure out how to use them. I wouldn't mind using the deprecated way either, as long as it works.
There is no procedural attachment tactic. All tactics are implemented inside of Z3; you can compose tactics from outside.
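For example, composing built-in tactics in z3py looks like this (a small sketch; the constraints are just placeholders, and this is plain tactic composition, not a procedural attachment):

    from z3 import Ints, Tactic, Then, sat

    # Build a pipeline from outside: simplify, eliminate equations, then hand
    # the result to the core SMT tactic.
    t = Then(Tactic('simplify'), Tactic('solve-eqs'), Tactic('smt'))

    x, y = Ints('x y')
    s = t.solver()  # a solver that applies the composed tactic
    s.add(x > 2, y < 10, x + 2 * y == 7)
    if s.check() == sat:
        print(s.model())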
Previous versions of Z3 exposed a way to register a "user theory". This was deprecated since (1) the source of Z3 is now available, so users can compile it with their custom theories directly, and (2) the user-theory abstraction lacked proper support for model generation. You can of course try previous versions of Z3 that have the user-theory extension, but it is not supported.
As we know, Microsoft introduced the dynamic type a long time ago, and I have applied it in some cases in an ASP.NET MVC application. But to me it is not good for all cases. In particular, it seems to violate some basic principles like the Acyclic Dependencies Principle. For example, I have a package A that uses package B, and then in B I use dynamic to reference back into A. It works fine. So the question is: how do I use the dynamic type correctly?
Speaking from experience: don't do it. Seriously, sooner or later you will regret it.
Each time I decided to use dynamic, I found it was a mistake. Using dynamic makes refactoring a nightmare, and you lose the biggest advantage, which is type safety: errors will show up at runtime instead of during compilation.
It's usually ten times better to refine your design and use OOP principles, or to try to find some common interfaces.
It should be used only to simplify working with dynamic languages such as JavaScript. Otherwise it is bad for your program's performance and your sanity :)
So the best practice with dynamic is: try to avoid using it.
Dynamic is not a type, it's syntactic sugar. The type will be object, but the compiler will put in a lot of code to detect the actual type of the variable at runtime.
It's meant to be used when you don't know the actual type; for example, it is used by the dynamic languages running on top of .NET.
It can be abused as a lazy shortcut (but for that, use var), and you'll pay a performance penalty in that case. Long story short: it should be used when you can't solve a problem easily with strong typing.
I think the dynamic keyword is good, but we have to use it very carefully, as Mike mentions above. I have used it in some small examples. When we do use it, whoever calls into the code has to know what kind of object is behind the dynamic. Hope this helps.
I want to build a parser for a C-like language. The interesting aspect is that I want to build it in such a way that someone who has access to the source can easily modify it to extend the language (a new expression type, for instance), with the extensions being runtime-configurable (they can be turned on and off).
My current intent is to build a recursive descent parser as an object. Each production will be a method of the object. The method of extension will be to derive classes from this base, replacing methods (and production definitions) as needed. I'm still trying to figure out how to mix and match extensions. One idea is to play games with the v-tbl: objects would be constructed with a v-tbl that is a copy of the base, but with methods replaced from derived classes.
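To make the design concrete, here is a rough sketch in Python (not my real code; the grammar and names are invented), with productions as methods and extension by subclassing, where super() plays the role of the v-tbl fallback:

    class BaseParser:
        def __init__(self, tokens):
            self.tokens = tokens
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, tok):
            assert self.peek() == tok
            self.pos += 1

        # production: expr ::= NUMBER
        def parse_expr(self):
            tok = self.peek()
            self.pos += 1
            return ('num', tok)

    # One extension: replace a production, falling back to the original.
    class ParenExprExtension(BaseParser):
        # production: expr ::= '(' expr ')' | NUMBER
        def parse_expr(self):
            if self.peek() == '(':
                self.eat('(')
                inner = self.parse_expr()
                self.eat(')')
                return ('paren', inner)
            return super().parse_expr()  # call the replaced production

    print(ParenExprExtension(['(', '42', ')']).parse_expr())
    # -> ('paren', ('num', '42'))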
Aside from the bit-twiddling nature of the solution, the only issues I have with it are:
a reasonable way to do the v-tbl mixup
what to do when 2 extensions alter the same production (as most replacements will end up calling the original, having one replacement call the other would work, but the mechanics of setting this up are the issue)
how to allow the extension of extensions (this might end up looking like a standard MI system, but I've never understood how those work)
Another solution (a slightly more mundane version of the same approach) would be to use static member variables to store function pointers and call them for the same effect.
Edit: I have already built a system that lets me build productions from BNF definitions. I can alter it to support whatever I decide on.
These are some of the challenges the Perl 6 design effort has faced. You may find it worthwhile looking into some of the solutions they came up with. Or you may find that to be gross overkill.
I made a configurable parser; I uploaded it some time ago at
http://code.google.com/p/compparser/
The project there is not up-to-date, but it works fine.
If I recall my university courses correctly, recursive descent parsers have some limitations that might bite you, especially since you're allowing extensions: somebody else's language extension could cause issues.
A proper compiler toolkit - such as the open source ANTLR - might make things easier, and might also provide some different approaches for you.
Another option is to express the parsing rules in XML or something similar, instead of in code; it's less efficient, but far more dynamically configurable. Each language or variant can just use its own (XML) file, and even include/reference other files as 'base' files...
Frankly, I am not even sure I understood everything you wrote... :-)
But when I see parser and flexibility, I think about LPeg - Parsing Expression Grammars For Lua. It might not fit your needs but it is well worth a look... ;-)