Where are the condition flags stored in the Y86-64 ISA? In memory?

I mean the condition flags SF, ZF and OF in the fictitious Y86-64 architecture, as described in Computer Systems: A Programmer's Perspective by Bryant and O'Hallaron. Where does the processor keep them?

They are probably not reinventing the wheel: Y86-64 is a simplified x86-64, and like x86's FLAGS register, its condition codes live in a small dedicated register inside the CPU, not in memory.
https://en.wikipedia.org/wiki/FLAGS_register
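To make this concrete, here is a schematic Y86-64 fragment (the label is illustrative; the instructions come from the book's ISA). An arithmetic instruction sets the condition codes as a side effect, and a conditional jump reads them back; no memory address is involved in either step.

subq %rsi, %rdi   # sets ZF, SF and OF in the CPU's condition-code register
jle Done          # reads those flags from the CPU; no memory access happens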

define-fun vs define-funs-rec in smtlib

It seems that define-funs-rec is a strict superset of what define-fun can do, according to the SMT-LIB standard. If so, is there a reason for not always using define-funs-rec, except perhaps for syntactic simplicity?
Strictly speaking, no. But define-fun-rec and define-funs-rec are rather new (as opposed to good old define-fun), so if you want greater portability and have no need for a recursive definition, then you should stick to define-fun.
It is also conceivable that define-fun-rec engages heavier machinery in a solver than is really needed when the definition is not actually recursive, such as a well-foundedness checker. That might cost some performance, though I doubt it would be much of a concern.
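To illustrate the difference, here is a minimal SMT-LIB sketch (the function names are made up for the example): a non-recursive definition can be written either way, but only the -rec forms admit recursion.

(define-fun double ((x Int)) Int (* 2 x))   ; plain definition, maximally portable
(define-funs-rec ((double2 ((x Int)) Int))  ; same function via the recursive form;
                 ((* 2 x)))                 ; recursion is allowed here but not required
(define-fun-rec fact ((n Int)) Int          ; genuinely recursive: define-fun cannot express this
  (ite (<= n 1) 1 (* n (fact (- n 1)))))
(assert (and (= (double 3) (double2 3)) (= (fact 4) 24)))
(check-sat)                                 ; a solver that unfolds the definitions reports sat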

Do Pascal compilers need a SecureZeroMemory function?

Consider the code:
procedure DoSmthSecret;
var
  Seed: array[0..31] of Byte;
begin
  // get random seed
  ..
  // use the seed to do something secret
  ..
  // erase the seed
  FillChar(Seed, SizeOf(Seed), 0);
end;
The problem with the code is: FillChar is a compiler intrinsic, and potentially a compiler can "optimize it out". The problem is well known for C/C++ compilers; see SecureZeroMemory. Can modern Pascal compilers (Delphi, FPC) do such an optimization, and if they can, do they provide a SecureZeroMemory equivalent?
FPC can't do such optimizations at the moment, and AFAIK even for C++ they belong in the "uncertain" class, since after this optimization the state of the program no longer matches what the programmer told it to be.
Solving such a problem is a matter of defining which constructs can be optimized out and which cannot. It doesn't need API/OS assistance per se; any externally linked object file with such a function would do, since global optimization wouldn't touch it.
Note that the article doesn't name the C++ compiler specifically, so I expect it is more a general utility function for when a user of a compiler runs into problems without hitting the docs too hard, or when it must easily work on multiple (Windows-only!) compilers without overly complicating the build system.
Choosing a non-inlinable API function might be suboptimal in other cases, especially with small, constant sizes to zero, since it won't be inlined; so I would be careful with this function and make sure there is a hard need.
It matters mainly when an external entity can change a program's memory (DMA, memory mapping, etc.), or when erasing passwords and other sensitive information from the memory image, even if, according to the compiler, the program will never read it again.
Even if Free Pascal would optimize out writing to memory that is never read again (which I doubt it does at the moment, regardless of how long you guys discuss it), it does support the absolute type modifier, which it guarantees (documented) never to optimize (somewhat similar to volatile in C/C++).
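Building on that last point, here is a minimal sketch of how one could roll their own equivalent. FillCharSecure is a hypothetical name (neither Delphi nor FPC ships such a routine); the non-optimization guarantee leans on FPC's documented behaviour for absolute, and on other compilers a similar effect comes from placing the routine in a separately compiled unit or object file, as suggested above.

{ Hypothetical helper: zeroes Len bytes of Buf through an `absolute` alias,
  which FPC documents as never being optimized away. }
procedure FillCharSecure(var Buf; Len: Integer);
var
  B: array[0..MaxInt - 1] of Byte absolute Buf; // alias only; no storage is allocated
  I: Integer;
begin
  for I := 0 to Len - 1 do
    B[I] := 0; // each write goes through the absolute alias
end;

// Usage, replacing the FillChar call in DoSmthSecret:
// FillCharSecure(Seed, SizeOf(Seed));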

Z3: Is a custom theory extension appropriate for my application?

I have precise and validated descriptions of the behaviors of many X86 instructions in terms amenable to encoding in QF_ABV and solving directly with the standard solver (using no special solving strategies). I wrote an SMT-LIB script whose interface matches my ultimate goal perfectly:
X86State, a record sort describing x86 machine state (registers and flags as bitvectors, and memory as an array).
X86Instr, a record sort describing x86 instructions (enumerated mnemonics, operands as an ML-like discriminated union describing registers, memory expressions, etc.)
A function x86-translate taking an X86State and an X86Instr, and returning a new X86State. It decodes the X86Instr and produces a new X86State in terms of the symbolic effects of the given X86Instr on the input X86State.
It's great for prototyping: the user can write x86 easily and directly. After simplifying a formula built using the library, all functions and extraneous data types are eliminated, leaving a QF_ABV expression. I hoped that users could simply (set-logic QF_ABV) and #include my script (alas, neither the SMT-LIB standard nor Z3 supports #include).
Unfortunately, by defining functions and types, the script requires theories such as uninterpreted functions, thus requiring a logic other than QF_ABV (or even QF_AUFBV, due to the types). My experience with SMT solvers suggests that the lowest acceptable logic should be specified for the best solving time. Also, it is unclear whether I can reuse my SMT-LIB script in a programmatic context (e.g. OCaml, Python, C) as I desire. Finally, the script is a bit verbose given the lack of higher-order functions, and my lack of access to par leads to code duplication.
Thus, despite having accomplished my technical goals, I think that SMT-LIB might be the wrong approach. Is there a more natural avenue for interacting with Z3 to implement my x86 instruction description / QF_ABV translation scheme? Is the SMT-LIB script re-usable at all in these avenues? For example, you can build "custom OCaml top-levels", i.e. interpreters with scripts "burned into them". Something like that could be nice. Or do I have to re-implement the functionality in another language, in a program that interacts with Z3 via a theory extension (C DLL)? What's the best option here?
Well, I don't think that people write .smt2 files by hand. These are usually generated automatically by some program.
I find the Z3 Python interface quite nice, so I guess you could give it a try. But you can always write a simple .smt2 dumper from any language.
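As a minimal sketch of that route (the register names rax and rbx are illustrative, not the poster's X86State sort): build bitvector constraints through the Python API, then dump them as SMT-LIB 2 text for reuse elsewhere.

from z3 import BitVec, Solver, sat

rax = BitVec('rax', 64)
rbx = BitVec('rbx', 64)

s = Solver()
s.add(rax + rbx == 42)       # plain QF_BV constraints
s.add((rax & 0xFF) == 7)

print(s.sexpr())             # emit the assertions as an .smt2 fragment
if s.check() == sat:
    print(s.model())         # e.g. rax = 7, rbx = 35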
BTW, do you plan releasing the specification you wrote for X86? I would be really interested!

Library for optimizing strings (Boyer-Moore algorithm)

I use strings heavily in a project, so what I am looking for is a fast library for handling them. I think the Boyer-Moore algorithm is the best.
Is there a free solution for that?
You can consider the following resources implementing Boyer–Moore algorithm:
Boyer-Moore Horspool in Delphi 2010
Boyer-Moore-Horspool text searching
Search Components - Version 2.1
Boyer-Moore, de la recherche efficace (French: "Boyer-Moore, on efficient searching")
Last Edit:
The StringSimilarity package of theunknownones project is a good source for fuzzy and phonetic string comparison algorithms:
DamerauLevenshtein
Koelner Phonetik
SoundEx
Metaphone
DoubleMetaphone
NGram
Dice
JaroWinkler
NeedlemanWunch
SmithWatermanGotoh
MongeElkan
CAUTION: this answers the comment rather than the question itself.
There is (or rather was, because it has currently been abandoned) a Delphi unit named FastStrings which implements the Boyer-Moore string search algorithm with heavy use of inline assembler. Is it the one you are looking for?
As a side note: the project homepage is now defunct, as is the author's e-mail, so I find reuse (and modification and, naturally, any further development) of this code rather problematic, given how restrictive the licensing terms are.
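For readers who would rather have a self-contained routine than a library, here is a minimal Boyer-Moore-Horspool sketch in Delphi-style Pascal. The function name is made up for the example, and this is the simplified Horspool variant (bad-character rule only), not full Boyer-Moore with the good-suffix rule. It returns the 1-based position of Pattern in Text, or 0 if absent.

function HorspoolSearch(const Pattern, Text: AnsiString): Integer;
var
  Skip: array[Byte] of Integer;
  I, J, M, N: Integer;
begin
  Result := 0;
  M := Length(Pattern);
  N := Length(Text);
  if (M = 0) or (M > N) then
    Exit;
  // Bad-character table: how far the window may shift on a mismatch.
  for I := Low(Skip) to High(Skip) do
    Skip[I] := M;
  for I := 1 to M - 1 do
    Skip[Ord(Pattern[I])] := M - I;
  I := M; // position of the window's last character in Text
  while I <= N do
  begin
    J := 0;
    while (J < M) and (Text[I - J] = Pattern[M - J]) do
      Inc(J); // compare the window right-to-left
    if J = M then
    begin
      Result := I - M + 1; // full match; report its 1-based start
      Exit;
    end;
    Inc(I, Skip[Ord(Text[I])]); // shift by the last window character's skip
  end;
end;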

I've heard that LaTeX is Turing complete. Are there any programs written in LaTeX?

It's possible to do interesting things with what would ordinarily be thought of as typesetting languages. For example, you can construct the Mandelbrot set using PostScript.
It is suggested in this MathOverflow question that LaTeX may be Turing-complete. This implies the ability to write arbitrary programs (although it may not be easy!). Does anyone know of any concrete example of such a program in LaTeX, which does something highly unusual with the language?
In issue 13 of The Monad Reader, Stephen Hicks writes about implementing the solution to an ICFP contest (involving Mars rover navigation) in TeX, with copious use of macros. Amusingly, the solution's output when typeset is a PostScript map of the rover's path.
Alternatively, Andrew Greene wrote a BASIC interpreter in TeX (more details). This may count as slightly perverse.
TeX can even encode the S and K combinators directly as macros, which by itself gives Turing completeness:
\def\K#1#2{#1}
\def\S#1#2#3{#1#3{#2#3}}
The pgfmath library still amazes me. But on a more Turing-related note: it is possible to write an actual Turing machine in TeX, as per http://en.literateprograms.org/Turing_machine_simulator_(LaTeX). It's just a nifty way of using expansions in TeX.
PostScript is Turing complete as well; if you read the manual, you'll be amazed by its general programming capabilities (at least, I was).
I'm not sure if this qualifies as programming per se, but I've recently started doing something a bit like Object-Oriented programming in LaTeX. (You don't need to know any maths to follow this.) In recent papers, I've been writing about categories, which have objects and morphisms. Since there have been quite a few of those, I wanted a consistent style, so that, say, 𝒞 was a category with typical object C and typical morphism c. Then I'd also have 𝒟 with D and d. So I define a "class", say "category" (you need to be a mathematician to understand the joke there), declare that C is an instance of this class, and then have access to \ccat, \cobj, \cmor and so forth. The reason for not doing \cat{c}, \obj{c}, \mor{c} and so forth is that sometimes these categories have special names, so after declaring the instance I can modify its name very easily (simply redefine \ccat - well, actually \mathccat, since \ccat is a wrapper which selects \mathccat in math mode and \textccat in text mode). Of course, it's a little more complicated than the above suggests, and the OO stuff really comes in useful when I want to define a new category as a variant of an old one; it can even deal with the case where the old one doesn't exist yet.
Although it may not qualify as actual programming, I am using it in papers and do find it useful - the other answers (so far) have more of the feel of showing off the capabilities of LaTeX than of a sensible solution to a practical problem.
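To make the idea concrete, here is a minimal sketch of such a "class with instances" mechanism. \DeclareCategory and its interface are invented for illustration; the actual macros described above, with their math/text-mode wrappers, are more elaborate.

\newcommand*{\DeclareCategory}[3]{% #1 = prefix, #2 = category symbol, #3 = object letter
  \expandafter\newcommand\csname #1cat\endcsname{\ensuremath{\mathcal{#2}}}%
  \expandafter\newcommand\csname #1obj\endcsname{\ensuremath{#3}}%
  \expandafter\newcommand\csname #1mor\endcsname{\ensuremath{#1}}%
}
\DeclareCategory{c}{C}{C}  % defines \ccat, \cobj, \cmor
\DeclareCategory{d}{D}{D}  % defines \dcat, \dobj, \dmor
% Renaming an instance later is a one-line change everywhere it is used:
% \renewcommand{\ccat}{\ensuremath{\mathsf{Set}}}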
I know of someone who wrote the answer to an ACM contest problem in LaTeX.
