Turing machine states design - automata

I want to design a Turing machine that accepts strings containing at most three 0s. I have designed one that goes to the accept state each time it has seen one, two, or three 0s and rejects any further 0s. I wanted to know: is it okay for a TM to go to the accepting state from three different states?

Yes, this is perfectly fine. Even in a deterministic machine, several transitions going into the same state are allowed. Determinism is only violated when several transitions leaving the same state read the same symbol, and even that is not a problem for nondeterministic TMs.
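For illustration, here is a minimal sketch in Python (not the asker's exact machine, and state names are my own) of a deterministic machine accepting inputs with at most three 0s. Note that several states feed transitions into the same reject state, which is perfectly legal:

```python
# Deterministic machine for "at most three 0s". Since the language is
# regular, a finite control suffices; the same idea carries over to a TM.
REJECT = "reject"

# delta[(state, symbol)] -> next state; q0..q3 count the 0s seen so far.
# Note several (state, symbol) pairs converge on the same target state.
delta = {
    ("q0", "0"): "q1", ("q0", "1"): "q0",
    ("q1", "0"): "q2", ("q1", "1"): "q1",
    ("q2", "0"): "q3", ("q2", "1"): "q2",
    ("q3", "0"): REJECT, ("q3", "1"): "q3",
}

def run(tape: str) -> bool:
    state = "q0"
    for sym in tape:
        state = delta.get((state, sym), REJECT)
        if state == REJECT:
            return False
    return True  # q0..q3 are all accepting: at most three 0s were seen
```

Here `run("00110")` accepts (three 0s) while `run("0000")` rejects on the fourth 0.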

Related

Proof of Turing Completeness for a stack-based language

I'm writing a joke language that is based on stack operations. I've tried to find the minimum number of instructions necessary to make it Turing complete, but have no idea if a language based on one stack can even be Turing complete. Will these instructions be enough?
IF (top of stack is non-zero)
WHILE (top of stack is non-zero)
PUSH [n-bit integer (where n is a natural number)]
POP
SWAP (top two values)
DUPLICATE (top value)
PLUS (adds top two values, pops them, and pushes result)
I've looked at several questions and answers (like this one and this one) and believe that the above instructions are sufficient. Am I correct? Or do I need something else like function calls, or variables, or another stack?
If those instructions are sufficient, are any of them superfluous?
EDIT: By adding the ROTATE command (changes the top three values of the stack from A B C to B C A) and eliminating the DUPLICATE, PLUS, and SWAP commands it is possible to implement a 3 character version of the Rule 110 cellular automaton. Is this sufficient to prove Turing completeness?
If there is an example of a Turing complete one-stack language without variables or functions that would be great.
If you want to prove that your language is Turing complete, then you should look at this Q&A on the Math StackExchange site.
How to Prove a Programming Language is Turing Complete?
One approach is to see if you can write a program using your language that can simulate an arbitrary Turing Machine. If you can, that is a proof of Turing completeness.
If you want to know if any of those instructions are superfluous, see if you can simplify your TM emulator to not use one of the instructions.
But if you want to know if a smaller Turing complete language is possible, look at SKI Combinator Calculus. Arguably, there are three instructions: the S, K and I combinators. And I is apparently redundant.
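To make that point concrete, here is a toy SKI reducer sketched in Python. The term encoding (strings for combinators, nested pairs for application) and the function names are my own illustration, not a standard API:

```python
# Terms are "S", "K", "I", or (f, x) application pairs.

def reduce_step(t):
    """Perform one leftmost reduction; return (term, changed)."""
    if isinstance(t, tuple):
        f, x = t
        if f == "I":                      # I x -> x
            return x, True
        if isinstance(f, tuple) and f[0] == "K":     # K a b -> a
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) \
                and f[0][0] == "S":        # S a b c -> (a c) (b c)
            a, b, c = f[0][1], f[1], x
            return ((a, c), (b, c)), True
        f2, ch = reduce_step(f)            # otherwise reduce inside
        if ch:
            return (f2, x), True
        x2, ch = reduce_step(x)
        return ((f, x2) if ch else t), ch
    return t, False

def normalize(t, limit=100):
    for _ in range(limit):
        t, changed = reduce_step(t)
        if not changed:
            return t
    return t

# I is redundant: S K K behaves like I, so ((S K) K) K reduces to K.
assert normalize(((("S", "K"), "K"), "K")) == "K"
```

The final assertion demonstrates the redundancy of I mentioned above: S K K applied to any term returns that term.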
A language based only on a single stack can't be Turing complete (unless you "cheat" by allowing things like temporary variables or access to values "deeper" in the stack than the top item). Such a language is, as I understand it, equivalent to a pushdown automaton, which can recognize some languages (e.g., the context-free ones) but certainly not as much as a full Turing machine.
With that said, Turing machines are actually a much lower bar than you'd intuitively expect - as originally formulated, they were little more than a linked list, the ability to read and modify the linked list, and branching. You don't even need to add all that much to a purely stack-oriented language to make it equivalent to a Turing machine - a second stack will technically do it (although I certainly wouldn't want to program against it), as would a linked list or queue.
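As a concrete illustration of the two-stack trick, here is a sketch in Python (class and method names are illustrative) of a Turing machine tape built from two stacks: the left stack holds the cells to the left of the head, and the top of the right stack is the cell under the head:

```python
class TwoStackTape:
    """A TM tape simulated with two stacks."""

    def __init__(self, contents, blank="_"):
        self.left = []                               # cells left of the head
        self.right = list(reversed(contents)) or [blank]  # head cell on top
        self.blank = blank

    def read(self):
        return self.right[-1]

    def write(self, sym):
        self.right[-1] = sym

    def move_right(self):
        self.left.append(self.right.pop())
        if not self.right:                  # extend tape with blanks as needed
            self.right.append(self.blank)

    def move_left(self):
        if not self.left:
            self.left.append(self.blank)
        self.right.append(self.left.pop())
```

With a tape initialized to "ab", writing "X", moving right, then moving left reads back "b" and then "X", just as a real tape head would.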
Correct me if I'm wrong, but I'd think that establishing that you can read from and write to memory, can do branching, and have at least one of those data structures (two stacks, one queue, one linked list, or the equivalent) would be adequate to establish Turing completeness.
Take a look, too, at nested stack automata.
You may also want to look at the Chomsky hierarchy (it seems like you may be floating somewhere in the vicinity of a Type 1 or a Type 2 language).
As others have pointed out, if you can simulate any Turing machine, then your language is Turing-complete.
Yet Turing machines, despite their conceptual simplicity and their amenability to mathematical treatment, are not the easiest machines to simulate.
As a shortcut, you can simulate some simple language that has already been proved Turing-complete.
My intuition tells me that a functional language, particularly LISP, might be a good choice. This SO Q&A has pointers to what a minimum Turing-complete LISP looks like.

Why is the CAP theorem interesting?

Is there anything mathematically interesting about the CAP theorem? Looking at the proof, there seem to be four different cases for two different statements across two formalisms. The CAP theorem holds in the three trivial cases and not in the fourth one, and all of them use extremely roundabout proof techniques to say something extraordinarily simple.
3.1 Thm 1. If two machines have no communication whatsoever, they cannot contain consistent data.
3.1 Corollary 1.1 If two machines are not allowed to wait to receive messages from each other and the communication line between them is arbitrarily slow, you get an inconsistent result if you write to one and then immediately query the other.
4.2 Thm 2. If two machines that are allowed to wait-with-timeout have no connection whatsoever, they still cannot contain consistent data.
... but if the communication line between them has guarantees about worst-case transmission time, then you can just wait for the timeout each time you perform a write and CAP theorem doesn't apply.
Am I missing something here? The proof techniques used in the paper seem to be more like the kind of thing you find in the generals-on-a-hill problem (which IS nontrivial) where the generals can set a time to coordinate their attack and agree they're going to do it, but they can't agree that they agree. But I just can't see how that applies here.

How to use Functional Programming on App development [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I've been reading about and playing with Functional Programming (FP) and I really like its concepts, but I'm not sure how to apply them to most of my applications.
I'll be more specific: let's talk about iOS apps. I do see how to use some of the concepts, like immutable data structures and higher-order functions, but not how to work with only/mostly pure functions, avoiding side effects, which seems to be the main part of FP.
I feel like most of an app is about coordinating input calls, displaying data, saving data, making network requests, navigating from one screen to another, and animations.
All those would be impure functions on FP:
Coordinating input: a button tap, a notification, a server socket push, for all those I have to decide what to call, where to observe them, etc.
Displaying data: reads from a local database or from a server (side effect).
Saving data: same as above (but writing).
Making network requests: obvious, so I'll just give an example here - retrieving list images from Instagram.
Navigating: that's basically presenting View Controllers, which is a side effect.
Animations: changing something on the screen, side effect.
There are very few places in which I have to process data, and those almost always consist of retrieving some Structs from the database and concatenating the multiple pieces of information into another Struct that will be used by the View Controller (that's like 5 lines... assuming you need 5 properties to be displayed in the view). Sure, you might need to do some processing like converting money: Int = 20 to moneyString: String = "US$\(money).00", but that's it.
I feel like I'm failing to implement FP in my app development cycle. Can anyone clarify how I can achieve that? Maybe with examples.
Thank you.
EDIT: right now, following the Clean Architecture idea, I have something like this as my architecture:
Inputs can come from the View, such as a button click; they go to the ViewController, which decides which Interactor to call. That Interactor will access the necessary Gateways to get some data and transform it into presentable data that will be passed to the Presenter (in the form of a delegate). Finally, the Presenter will update the View to display the new data.
Additionally, the input can come from External sources, like the server telling you some data was updated and you have to refresh the View. That goes to the Interactor (in the form of observer) which will follow the rest of the chain as in the previous example.
The only FP part is transforming the Gateways' data into presentable data. All the rest has side effects. I feel like I'm doing something wrong and maybe some of that code should be organized differently so that more code can be moved into pure functions.
Putting aside FP for a moment, the challenge with multi-user or time dependent models is that the end-user in front of the program is not the only source of control events.
To keep the dependencies clean, we should view external triggers as a form of user input (unsolicited as it may be) and process them through the same paths.
If the end user had somehow been telepathically informed that new data was available, he could have pressed a button to have the program get it. Then no backward control flow would ever be needed (such as push notifications).
In that perfect world, the user's action to get the data would have first been captured at the View level and then carried down the layers.
This tells us that the notifications should be handled by the view controller or, better yet, by a view component designed to receive them. Such notifications, however, would not contain any data, except perhaps some indication of which parts of the model have been invalidated.
Back to FP, this is of course a gigantic side effect if you consider that all function calls thereafter will return potentially different results. BUT...
In mathematics, if you define a function that gives you distance traveled at a given speed but don't supply the time parameter, you're not victim of a side effect, you merely forgot to provide an input value.
So, if we consider that all software layers are pure functions but that time is an implicit parameter given with the initial input, you can validate that your program conforms to FP by checking that repeated calls, in any order, to functions of the system when time is frozen should always return the same results.
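In that spirit, the distance/speed example above can be sketched by passing time in explicitly rather than reading a hidden clock (Python for brevity; the function name is illustrative):

```python
def distance_traveled(speed: float, elapsed_seconds: float) -> float:
    """Pure: the result depends only on the arguments, never on a wall clock.

    Contrast with a version that secretly calls time.time() inside,
    which would return different results on repeated calls.
    """
    return speed * elapsed_seconds
```

With time "frozen" into an argument, repeated calls in any order with the same inputs always return the same result, which is exactly the conformance check described above.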
If databases could keep a 100% complete set of snapshots of their states at any given time it would be possible to validate pure FP conformance of applications by freezing time or making it a parameter.
Back in the real world now, such a design is often impractical for performance reasons. It would pretty much preclude any form of caching.
Still, I would suggest you try to categorize notifications as unsolicited user input in your design. I believe it may help solve some of the conundrums.
There should always be more than one tool in your toolbox. FP is a great approach and a discipline that most people should have for pieces of the program where it applies (e.g. presenting the model on the views through the view controllers in MVVC).
Attempting to use FP on everything in an application is probably not a worthwhile endeavour. As soon as you're going to persist data or manage the passage of time, you'll have to deal with states. Even the "restful" services (which are conceptually good candidates for an FP approach) will not be pure FP and have some state dependency. This is not because they are bad FP implementations but because it is their purpose to manage externally persisted states. You could spin it to view stored data as "the input" but from either side of the service the other side will still be a side effect (except for read-only operations).
If you embrace the fact that MVVC is responsible for managing state transitions, and allow for a non-FP relationship between its components, it becomes easier to apply FP paradigms at a smaller scale within each of them.
For example, your view controllers should not have any variables that duplicate or maintain transformed versions of any data in the model. The use of delegates between MVVC components does break the FP rules in some cases, but within the scope of functionality of a view controller, those are inputs (not states). By planning (and drawing) an interaction diagram before you start coding, you'll be able to better isolate concerns and avoid dead ends that would break FP within your components.
Within the model itself, computed properties can go a long way in ensuring that you abide by FP.
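For instance, the kind of pure model-to-view transformation the question describes might look like the following sketch (Python for brevity; the struct names and fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserRecord:          # what the database/gateway returns
    name: str
    money: int             # whole dollars

@dataclass(frozen=True)
class UserViewModel:       # what the view controller displays
    title: str
    money_string: str

def present(record: UserRecord) -> UserViewModel:
    """Pure presentation: same input record, same view model, no side effects."""
    return UserViewModel(
        title=record.name.upper(),
        money_string=f"US${record.money}.00",
    )
```

Because `present` touches no external state, it can be tested without a simulator, a database, or any mocking: `present(UserRecord("ada", 20))` always yields the same view model.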
In any case, if you never use the var statement (this is going to be a challenge in some places), you're likely to end up with FP conforming code.
Functional programming is good for tackling specific programming issues. Where FP excels is in concurrent/parallel programming; if you have read any of the articles written by Herb Sutter on concurrent programming, you begin to see the overlap between good concurrent/parallel programming and functional design.
For an application, as Alain said, you do have to work with state. You can apply FP design patterns around how state is modified, but regardless, you do have to modify state at one point or another, which, as you discovered, is not aligned with pure FP.
FP is a tool in your tool chest of programming patterns, but it is not the only tool.

Best db engine for building a web app with ranking algorithms

I've got an idea for a new web app which will involve the following:
1.) lots of raw inputs (text values) that will be stored in a db - some of which contribute as signals to a ranking algorithm
2.) data crunching & analysis - a series of scripts will be written which together form an algorithm that will take said raw inputs from 1.) and then store a series of ranking values for these inputs.
Events 1.) and 2.) are independent of each other. Event 2 will probably happen once or twice a day. Event 1 will happen on an ongoing basis.
I initially dabbled with the idea of writing the whole thing in node.js sitting on top of mongodb, as I was curious to try out something new, and while I think node.js would be perfect for event 1.), I don't think it will work well for event 2.) outlined above.
I'd also rather keep everything in one domain rather than mixing node.js with something else for step 2.
Does anyone have any recommendations for what stacks work well for computational type web apps?
Should I stick with PHP or Rails/Mysql (which I already have good experience with)?
Is MongoDB/nosql constrained when it comes to computational analysis?
Thanks for your advice,
Ed
There is no reason why node.js wouldn't work.
You would just write two node applications:
one that takes input, stores it in the database, and renders output,
and another that crunches numbers in its own process and is run once or twice per day.
Of course, if you're doing real number crunching and you need performance, you wouldn't do number 2 in node/ruby/php. You would do it in Fortran (or maybe C).

Erlang OTP I/O - A few questions

I have read that one of Erlang's biggest adopters is the telecom industry. I'm assuming that they use it to send binary data between their nodes and to provide for easy redundancy, efficiency, and parallelism.
Does Erlang actually send just the binary to a central node?
Is it directly responsible for parsing the binary data into actual voice? Or is it fed to another language/program via ports?
Is it responsible for the speed in a telephone call, speed as in the delay between me saying something and you hearing it?
Is it possible that Erlang is used solely for the ease of parallel behavior, with C++ or similar used for processing speed in sequential functions?
I can only guess at how things are implemented in actual telecom switches, but I can recommend an approach to take:
First, you implement everything in Erlang, including much of the low-level stuff. This probably won't scale that much since signal processing is very costly. As a prototype however, it works and you can make calls and whatnot.
Second, you decide on what to do with the performance bottlenecks. You can push them to C(++) and get a factor of roughly 10 or you can push them to an FPGA and get a factor of roughly 100. Finally you can do CMOS work and get a factor of 1000. The price of the latter approach is also much steeper, so you decide what you need and go buy that.
Erlang remains in control of the control backplane, in the sense of what happens when you push buttons, call setup, and so on. But once a call has been allocated, we hand over the channel to the lower layer. ATM switching is easier here because once the connection is set up you don't need to change it (ATM is connection-oriented; IP is packet-oriented).
Erlang's distribution features are there primarily to provide redundancy in the control backplane. That is, we synchronize tables of call setups and so on between multiple nodes to facilitate node takeover in case of hardware failure.
The trick is to use ports and NIFs post prototype to speed up the slower parts of the program.