Get n-th element from stack in Forth

Is there a way to access an element of the stack by its index in Forth, without popping all the elements above it?
For example, if I have the numbers 1 to 1000 pushed to the stack, how can I get the 500th element?

500 PICK
...will copy the element 500 levels down the stack to the top of the stack in Forth-79.
More relevant: PICK is in the core extension wordset of ISO93 Forth, the base of the current standard. The definition of PICK in this standard is 0-based, i.e. '0 PICK' is equivalent to 'DUP'. See section 6.2.2030.
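For example, with the standard zero-based PICK (an illustrative session; . prints and consumes the top of the stack):
1 2 3  0 PICK .  \ prints 3 -- 0 PICK behaves like DUP
1 2 3  1 PICK .  \ prints 2 -- 1 PICK behaves like OVER
So with 1 to 1000 pushed (1000 on top), 500 PICK copies the value 500 levels down -- the number 500 -- to the top of the stack.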

And if the Forth you're using doesn't have PICK, you could define it as:
: PICK ( x_n ... x_1 x_0 n -- x_n ... x_1 x_0 x_n ) ?DUP IF SWAP >R 1- RECURSE R> SWAP EXIT THEN DUP ;
(Of course, an iterative version would also be possible.)
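For instance, one possible iterative sketch (untested, and not reentrant, since it keeps the count in a VARIABLE while the items travel over the return stack):
VARIABLE PICK-CNT
: PICK ( x_n ... x_0 n -- x_n ... x_0 x_n )
   DUP PICK-CNT !                           \ remember how many items to restore
   BEGIN DUP WHILE SWAP >R 1- REPEAT DROP   \ move n items to the return stack
   DUP                                      \ copy the now-exposed x_n
   BEGIN PICK-CNT @ WHILE R> SWAP -1 PICK-CNT +! REPEAT ;  \ restore items, keeping the copy on top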

Related

How do expressions work in a one-pass compiler?

I have seen a lot of posts regarding one-pass and multi-pass compilers, and I think the basic parts of a one-pass compiler are as follows.
Lexical Analysis
Parse and Syntax Check
Symbol Table
Code Generation
But I have a question that I cannot figure out: how does expression handling work in a one-pass compiler (without an AST)?
Let's take 1 + 2 / 3 as an example. Suppose you need to generate assembly code for this. After creating the parser, how do you generate assembly from it directly? How is the assembly code generated without any errors?
Can you explain this with examples? Please tell me if there is a problem in this question. I am new to Stack Overflow :)
The simplest way for a single-shot compiler to process something like a=b*c[i]+d*e; would be to handle each token as it is encountered:
a Generate code to push the address of a onto the stack
= Record the fact that the top item on the stack will be used as the target of an assignment.
b Generate code to push the address of b onto the stack
* Generate code to pop the address (of b), fetch the object there, and push it.
Also record the fact that the top item on the stack will be used as the left operand of a multiply.
c Generate code to push the address of c onto the stack
[ Record the fact that the top item on the stack will be used as the left operand of the subscripting operator, and prepare to evaluate the following expression
i Generate code to push the address of i onto the stack
] Generate code to pop the address (of i), fetch the object there, and push it.
Also generate code to pop the two items on the stack (the value of i and the address of c), add them, and push that result
+ Generate code to pop the address (of c[i]), fetch the object there, and push it
Also generate code to pop the top two values (c[i] and b), multiply them, and push the result
Also record the fact that the top item on the stack will be the left operand of +
d Generate code to push the address of d onto the stack
* Generate code to pop the address, fetch the object there, and push it.
Also record the fact that the top item on the stack will be used as the left operand of a multiply.
e Generate code to push the address of e onto the stack
(end of statement): Generate code to pop the address (of e), fetch the object there, and push it
Also generate code to pop the top two values (e and d), multiply them, and push the result
Also generate code to pop the top two values (d*e and b*c[i]), add them, and push the result
Also generate code to pop a value (b*c[i]+d*e) and the address of a, and store the value to that address
Note that on many platforms, performance may be enormously improved if code generation for operations which end with a push is deferred until the next operation is generated, and if pops that immediately follow pushes are consolidated with them. Further, it may be useful to have the compiler record the fact that the left-hand side of the assignment operator is a, rather than generating code to push that address, and then have the assignment operator perform a store directly to a rather than computing the address first and then using it for the store. On the other hand, the approach as described will be able to generate machine code for everything up to the exact byte of machine code being processed, without having to keep track of much beyond pending operations that will need to be unwound.
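To make this concrete, here is a minimal sketch (my own illustration, not code from the answer) of the same idea applied to the 1 + 2 / 3 example from the question: a recursive-descent parser in C that emits stack-machine instructions the moment each construct is recognized, without building an AST. Variables and assignment would be handled analogously, by emitting address pushes and fetches as described above.

#include <ctype.h>
#include <stdio.h>

static const char *src;                 /* cursor into the source text */

static void expr(void);                 /* forward declaration */

static void skip(void) { while (*src == ' ') src++; }

static void factor(void) {              /* number | '(' expr ')' */
    skip();
    if (isdigit((unsigned char)*src)) {
        int n = 0;
        while (isdigit((unsigned char)*src)) n = 10 * n + (*src++ - '0');
        printf("PUSH %d\n", n);         /* emit immediately; nothing is stored */
    } else if (*src == '(') {
        src++; expr(); skip();
        if (*src == ')') src++;
    }
}

static void term(void) {                /* factor (('*'|'/') factor)* */
    factor();
    for (;;) {
        skip();
        if (*src == '*') { src++; factor(); printf("MUL\n"); }
        else if (*src == '/') { src++; factor(); printf("DIV\n"); }
        else return;                    /* operator emitted after both operands */
    }
}

static void expr(void) {                /* term (('+'|'-') term)* */
    term();
    for (;;) {
        skip();
        if (*src == '+') { src++; term(); printf("ADD\n"); }
        else if (*src == '-') { src++; term(); printf("SUB\n"); }
        else return;
    }
}

int main(void) {
    src = "1 + 2 / 3";
    expr();    /* prints: PUSH 1, PUSH 2, PUSH 3, DIV, ADD */
    return 0;
}

Note how the division is emitted before the addition purely as a consequence of the grammar: by the time term returns to expr, the code for 2 / 3 has already been written out.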

How to implement a low-memory stack in TI-BASIC

So I am creating a maze-generating algorithm using recursive backtracking. I keep track of the points that I visit in a stack implemented with a matrix. This matrix has two columns, one for the x-coordinate and one for the y-coordinate. The problem is, my program works for small mazes, but for bigger mazes my calculator runs out of memory. I was wondering if there is a less memory-intensive way to implement a stack. I'm thinking about using strings as a possible way to do it. I use a TI-84 CSE, by the way.
Your stack should probably be implemented using a list. I'll be using L1 for demonstration purposes. A stack is a last-in, first-out data structure.
List elements are accessible by using
L1(X)
where X is the index of the item you want. This means the first item in goes to L1(1) (the beginning of the list; the 1st item), and so on, and the first item out comes from the last item in the list. To find how many items are in a list (and therefore which item is the last), use
dim(L1)
This gives the number of items in the list. Instead of storing it to a variable, we can use it directly to access the last item in the list:
L1(dim(L1))->M
//this addresses the item of L1 at dim(L1), meaning the last item
Now M will have the value of the last item. This is the first-out part. Then, to destroy the last item (since you popped it off), do this:
dim(L1)-1->dim(L1)
So putting it all together, your "pop" code will be:
If dim(L1)>0
Then
// It checks if L1 has an element to pop off in the first place
L1(dim(L1))->M
dim(L1)-1->dim(L1)
End
Now, M will have the value of the last item, and the last item is destroyed. Now, onto the push code. To push, you must put your number into a new slot one higher than the old last number. Essentially, you must make a new last item to put it in. Luckily, this is very easy in TI-Basic. Your "push" code would be:
If dim(L1)<99
// It checks if L1 has less than the maximum number of elements,
// which is 99.
M->L1(dim(L1)+1)
And if you're gonna be storing X/Y coordinates with your stack, I'd recommend a format such as this (it assumes both coordinates are integers and Y is between 0 and 99):
X + .01Y -> M
//X=3, Y = 15
// This would make M be 3.15
And to separate these back into two separate coordinates:
int(M)->X
// The integer value of M is 3, which is what X was earlier
100*fPart(M)->Y
// The fraction part of M was .15. Multiply that by 100 to get 15,
// which is what Y was earlier
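Putting it together (a sketch of how the backtracker might use this; L1 is the stack and X/Y the current cell), a push of the current cell would be:
X+.01Y->M
If dim(L1)<99
M->L1(dim(L1)+1)
and popping back to the previous cell when you hit a dead end would be:
If dim(L1)>0
Then
L1(dim(L1))->M
dim(L1)-1->dim(L1)
End
int(M)->X
100*fPart(M)->Y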

Swift 3 and Index of a custom linked list collection type

In Swift 3 Collection indices have to conform to Comparable instead of Equatable.
The full story can be read here: swift-evolution/0065.
Here's a relevant quote:
Usually an index can be represented with one or two Ints that
efficiently encode the path to the element from the root of a data
structure. Since one is free to choose the encoding of the “path”, we
think it is possible to choose it in such a way that indices are
cheaply comparable. That has been the case for all of the indices
required to implement the standard library, and a few others we
investigated while researching this change.
In my implementation of a custom linked list collection a node (pointing to a successor) is the opaque index type. However, given two instances, it is not possible to tell if one precedes another without risking traversal of a significant part of the chain.
I'm curious, how would you implement Comparable for a linked list index with O(1) complexity?
The only idea that I currently have is to somehow count steps while advancing the index, storing it within the index type as a property and then comparing those values.
A serious downside of this solution is that indices must be invalidated when the collection is mutated. And while that seems reasonable for arrays, I do not want to break the huge benefit linked lists have: they do not invalidate indices of unchanged nodes.
EDIT:
It can be done at the cost of two additional integers stored as collection properties, assuming that the singly linked list implements front insert, front remove and back append. Any meddling around in the middle would break the O(1) complexity requirement anyway.
Here's my take on it.
a) I introduced one private integer type property to my custom Index type: depth.
b) I introduced two private integer type properties to the collection: startDepth and endDepth, which both default to zero for an empty list.
Each front insert decrements the startDepth.
Each front remove increments the startDepth.
Each back append increments the endDepth.
Thus all indices startIndex..<endIndex have a reflecting integer range startDepth..<endDepth.
c) Whenever the collection vends an index through startIndex or endIndex, the index inherits the corresponding depth value from the collection. When the collection is asked to advance an index via index(after:), I simply initialize a new Index instance with an incremented depth value (depth + 1).
Conforming to Comparable boils down to comparing left-hand side depth value to the right-hand side one.
Note that because I expand the integer range from both sides as well, all the depth values for the middle indices remain unchanged (thus are not invalidated).
Conclusion:
I gained O(1) index comparisons at the cost of a minor increase in memory footprint and a few integer increments and decrements. I expect index lifetimes to be short and the number of collections relatively small.
If anyone has a better solution I'd gladly take a look at it!
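For illustration, here is a condensed sketch of the scheme above (the LinkedList and Node names are mine, and the mutating front-insert/remove/append methods that adjust startDepth and endDepth are omitted):

final class Node<Element> {
    var value: Element
    var next: Node<Element>?
    init(_ value: Element, next: Node<Element>? = nil) {
        self.value = value
        self.next = next
    }
}

struct LinkedList<Element>: Collection {
    struct Index: Comparable {
        fileprivate let node: Node<Element>?   // nil marks endIndex
        fileprivate let depth: Int             // position within startDepth..<endDepth

        static func == (lhs: Index, rhs: Index) -> Bool { lhs.depth == rhs.depth }
        static func <  (lhs: Index, rhs: Index) -> Bool { lhs.depth <  rhs.depth }
    }

    private var head: Node<Element>?
    private var startDepth = 0   // decremented by front insert, incremented by front remove
    private var endDepth = 0     // incremented by back append

    var startIndex: Index { Index(node: head, depth: startDepth) }
    var endIndex: Index { Index(node: nil, depth: endDepth) }

    func index(after i: Index) -> Index {
        Index(node: i.node?.next, depth: i.depth + 1)   // O(1), comparison stays O(1)
    }

    subscript(position: Index) -> Element { position.node!.value }
}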
I may have another solution. If you use floats instead of integers, you can get a kind of O(1) insertion-in-the-middle performance by setting the sortIndex of the inserted node to a value between the predecessor's and the successor's sortIndex. This requires storing (and updating) the sortIndex on your nodes (I imagine this should not be too hard, since it only changes on insertion or removal and can always be propagated 'up').
In your index(after:) method you need to query the successor node, but since you use your node as the index, that is straightforward.
One caveat is the finite precision of floating-point numbers: if, on insertion, the distance between the two sort indices is too small, you need to reindex at least part of the list. Since you said you only expect small scale, I would just go through the whole list and use each node's position for that.
This approach has all the benefits of your own, with the added benefit of good performance on insertion in the middle.
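A small sketch of the midpoint assignment (the function name and the caller-driven reindex step are hypothetical):

// Returns a sort key strictly between two neighbours, or nil when
// floating-point precision has run out and the list must be reindexed.
func key(between predecessor: Double, and successor: Double) -> Double? {
    let mid = (predecessor + successor) / 2
    guard mid != predecessor && mid != successor else { return nil }
    return mid
}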

Is there a more efficient way to find the middle of a singly-linked list? (Any language)

Let's set the context/limitations:
A linked-list consists of Node objects.
Nodes only have a reference to their next node.
A reference to the list is only a reference to the head Node object.
No preprocessing or indexing has been done on the linked-list other than construction (there are no other references to internal nodes or statistics collected, i.e. length).
The last node in the list has a null reference for its next node.
Below is some code for my proposed solution.
Node cursor = head;   // advances two nodes per iteration
Node middle = head;   // advances one node per iteration
while (cursor != null) {
    cursor = cursor.next;
    if (cursor != null) {
        cursor = cursor.next;
        middle = middle.next;
    }
}
return middle;
Without changing the linked-list architecture (not switching to a doubly-linked list or storing a length variable), is there a more efficient way to find the middle element of singly-linked list?
Note: When this method finds the middle of an even number of nodes, it always finds the left middle. This is ideal as it gives you access to both, but if a more efficient method will always find the right middle, that's fine, too.
No, there is no more efficient way, given the information you have available to you.
Think about it in terms of transitions from one node to the next. You have to perform N transitions to work out the list length. Then you have to perform N/2 transitions to find the middle.
Whether you do this as a full scan followed by a half scan based on the discovered length, or whether you run the cursor (at twice speed) and middle (at normal speed) pointers in parallel is not relevant here, the total number of transitions remains the same.
The only way to make this faster would be to introduce extra information to the data structure which you've discounted but, for the sake of completeness, I'll include it here. Examples would be:
making it a doubly-linked list with head and tail pointers, so you could find it in N transitions by "squeezing" in from both ends to the middle. That doubles the storage requirement for pointers, however, so it may not be suitable.
having a skip list with each node pointing to both its "child" and its "grandchild". This would speed up the cursor transitions, resulting in only about N in total (that's N/2 for each of cursor and middle). Like the previous point, there's an extra pointer per node required for this.
maintaining the length of the list separately so you could find the middle in N/2 transitions.
same as the previous point but caching the middle node for added speed under certain circumstances.
That last point bears some extra examination. Like many optimisations, you can trade space for time and the caching shows one way to do it.
First, maintain the length of the list and a pointer to the middle node. The length is initially zero and the middle pointer is initially set to null.
If you're ever asked for the middle node when the length is zero, just return null. That makes sense because the list is empty.
Otherwise, if you're asked for the middle node and the pointer is null, it must be because you haven't cached the value yet.
In that case, calculate it using the length (N/2 transitions) and then store that pointer for later, before returning it.
As an aside, there's a special case here when adding to the end of the list, something that's common enough to warrant special code.
When adding to the end when the length is going from an even number to an odd number, just set middle to middle->next rather than setting it back to null.
This will save a recalculation and works because you (a) have the next pointers and (b) you can work out how the middle "index" (one-based and selecting the left of a pair as per your original question) changes given the length:
Length  Middle (one-based)
------  ------------------
   0    none
   1    1
   2    1
   3    2
   4    2
   5    3
   :    :
This caching means, provided the list doesn't change (or only changes at the end), the next time you need the middle element, it will be near instantaneous.
If you ever delete a node from the list (or insert somewhere other than the end), set the middle pointer back to null. It will then be recalculated (and re-cached) the next time it's needed.
So, for a minimal extra storage requirement, you can gain quite a bit of speed, especially in situations where the middle element is needed more often than the list is changed.
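As a rough sketch (my own illustration of the scheme above, with hypothetical names, not the answer's code), the bookkeeping could look like this:

class CachedMiddleList<T> {
    static class Node<U> { U value; Node<U> next; Node(U v) { value = v; } }

    private Node<T> head, tail, middle;
    private int length = 0;

    void append(T value) {                    // add at the end, maintaining the cache
        Node<T> node = new Node<>(value);
        if (head == null) {
            head = tail = middle = node;
        } else {
            tail.next = node;
            tail = node;
            // Even -> odd length moves the (left) middle one step right.
            if (middle != null && length % 2 == 0) middle = middle.next;
        }
        length++;
    }

    Node<T> middle() {
        if (length == 0) return null;         // empty list: no middle
        if (middle == null) {                 // cache miss: N/2 transitions, then cache
            Node<T> m = head;
            for (int i = 1; i < (length + 1) / 2; i++) m = m.next;
            middle = m;
        }
        return middle;
    }

    void invalidateMiddle() { middle = null; }  // call after deletes or mid-list inserts
}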

Implement SWAP in Forth

I saw that in an interview with Chuck Moore, he says:
The words that manipulate that stack are DUP, DROP and OVER period.
There's no, well SWAP is very convenient and you want it, but it isn't
a machine instruction.
So I tried to implement SWAP in terms of only DUP, DROP and OVER, but couldn't figure out how to do it, at least not without growing the stack.
How is that done, really?
You are right, it seems hard or impossible with just dup, drop, and over.
I would guess the i21 probably also has some kind of return stack manipulation, so this would work:
: swap over 2>r drop 2r> ;
Edit: On the GA144, which also doesn't have a native swap, it's implemented as:
over push over or or pop
Push and pop refer to the return stack; or is actually xor. See http://www.colorforth.com/inst.htm
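To see why that sequence swaps the top two items, track the data stack ( a b ) and the return stack (R:), remembering that or is really xor:
        ( a b )
over    ( a b a )
push    ( a b )      ( R: a )
over    ( a b a )
or      ( a b^a )
or      ( b )        \ a xor (b xor a) = b
pop     ( b a )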
In Standard Forth it is
: swap ( a b -- b a ) >r >r 2r> ;
or
: swap ( a b -- b a ) 0 rot nip ;
or
: swap ( a b -- b a ) 0 rot + ;
or
: swap ( a b -- b a ) 0 rot or ;
This remark of Charles Moore can easily be misunderstood, because it is in the context of his Forth processors. A SWAP is not a machine instruction for a hardware Forth processor. In general in Forth some definitions are in terms of other definitions, but this ends with certain so-called primitives. In a Forth processor those are implemented in hardware, but in all Forth implementations on e.g. host systems or single-board computers they are implemented by a sequence of machine instructions, e.g. for Intel:
CODE SWAP pop, ax pop, bx push, ax push, bx END-CODE
He also uses the term "convenient", because SWAP is often avoidable. It is a situation where you need to handle two data items, but they are not in the order you want them. SWAP means a mental burden because you must imagine the stack content changed. One can often keep the stack straight by using an auxiliary stack to temporarily hold an item you don't need right now. Or if you need an item twice OVER is preferable. Or a word can be defined differently, with its parameters in a different order.
Going out of your way to implement SWAP in terms of 4 FORTH words instead of 4 machine instructions is clearly counterproductive, because those FORTH words each must have been implemented by a couple of machine instructions themselves.
