Is there a way to look into the stack via Reflection.Emit opcodes? For example, let's say I want to push X, and then push Y, and then I need to get at the value of X...how do I go about that?
True, I could store Y into a local variable and then load it again later, but that's a bit of a roundabout way of going about it.
There is no direct way to do this. See Reflection.Emit - access topmost-but-one item from stack for another related question.
I have seen a lot of posts regarding one-pass and multi-pass compilers, and I think the basic parts of a one-pass compiler are as follows.
Lexical Analysis
Parse and Syntax Check
Symbol Table
Code Generation
But I have a question that I cannot understand: how do expressions work in a one-pass compiler (without an AST)?
Let's take 1 + 2 / 3 as an example. Suppose you need to generate assembly code for this. After creating the parser, how is assembly generated from it directly? How is the assembly code generated without any errors?
Can you explain this with examples? Please tell me if there is a problem with this question. I am new to Stack Overflow :)
The simplest way for a single-pass compiler to process something like a=b*c[i]+d*e; would be to handle each token as it is encountered:
a Generate code to push the address of a onto the stack
= Record the fact that the top item on the stack will be used as the target of an assignment.
b Generate code to push the address of b onto the stack
* Generate code to pop the address (of b), fetch the object there, and push it.
Also record the fact that the top item on the stack will be used as the left operand of a multiply.
c Generate code to push the address of c onto the stack
[ Record the fact that the top item on the stack (the address of c) will be the left operand of the subscript, and start evaluating the index expression
i Generate code to push the address of i onto the stack
] Generate code to pop the address (of i), fetch the object there, and push it.
Also pop the top two items on the stack (the value of i and the address of c), add them, and push the result
+ Generate code to pop the address (of c[i]), fetch the object there, and push it
Also generate code to pop the top two values (c[i] and b), multiply them, and push the result
Also record the fact that the top item on the stack will be the left operand of +
d Generate code to push the address of d onto the stack
* Generate code to pop the address, fetch the object there, and push it.
Also record the fact that the top item on the stack will be used as the left operand of a multiply.
e Generate code to push the address of e onto the stack
(end of statement): Generate code to pop the address (of e), fetch the object there, and push it
Also generate code to pop the top two values (e and d), multiply them, and push the result
Also generate code to pop the top two values (d*e and b*c[i]), add them, and push the result
Also generate code to pop a value (b*c[i]+d*e) and the address of a, and store the value to that address
Note that on many platforms, performance may be enormously improved if code generation for operations which end with a push is deferred until the next operation is generated, and if pops that immediately follow pushes are consolidated with them. Further, it may be useful to have the compiler record the fact that the left-hand side of the assignment is a, rather than generating code to push that address, and then have the assignment operator perform a store directly to a rather than computing the address first and then using it for the store. On the other hand, the approach as described will be able to generate machine code for everything up to the exact byte of machine code being processed, without having to keep track of much beyond the pending operations that will need to be unwound.
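To make the deferred-operation idea concrete, here is a minimal, hypothetical sketch in Java for the 1 + 2 / 3 example from the question: operands are emitted as soon as they are seen, while operators wait on a small pending-operations stack until precedence forces them out. The PUSH/APPLY instruction names are placeholders, not any real target's assembly.

import java.util.ArrayDeque;
import java.util.Deque;

// Single-pass expression-to-stack-code generator (no AST).
public class OnePassCodegen {
    static int prec(char op) { return (op == '*' || op == '/') ? 2 : 1; }

    public static void main(String[] args) {
        String expr = "1+2/3";                              // example from the question
        Deque<Character> pending = new ArrayDeque<>();      // pending operations
        for (char c : expr.toCharArray()) {
            if (Character.isDigit(c)) {
                System.out.println("PUSH " + c);            // operand: emit code immediately
            } else {
                // operator: first emit any pending operator of higher or equal precedence
                while (!pending.isEmpty() && prec(pending.peek()) >= prec(c)) {
                    System.out.println("APPLY " + pending.pop());
                }
                pending.push(c);                            // record the pending operation
            }
        }
        while (!pending.isEmpty()) {                        // end of statement: unwind
            System.out.println("APPLY " + pending.pop());
        }
    }
}

For 1+2/3 this prints PUSH 1, PUSH 2, PUSH 3, APPLY /, APPLY +, i.e. the division is applied before the addition even though no tree was ever built.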
Apologies: in trying to be concise and clear, my previous description of my question turned into a special case of the general case I'm trying to solve.
New Description
I'm trying to compare the last emitted value of an aggregation function (let's say Sum()) with each element that I aggregate over in the current window.
Worth noting that the ideal (I think) solution would include:
The T2(from t-1) element used at time = t is the one that was created during the previous window.
I've been playing with several ideas/experiments, but I'm struggling to find a way to accomplish this that is elegant and "empathetic" to Beam's compute model (which I'm still trying to fully grok after many an article/blog/doc and book :)
Side inputs seem unwieldy because it looks like I have to shift the emitted 5M#T-1 aggregation's timestamp into the 5M#T window in order to align it with the current 5M window.
In attempting this with side inputs (as I understand them), I ended up with some nasty code that was quite "circularly referential", but not in an elegant recursive way :)
Any help in the right direction would be much appreciated.
Edit:
Modified diagram and improved description to more clearly show:
the intent of using emitted T2(from t-1) to calculate T2 at t
that the desired T2(from t-1) used to calculate T2 is the one with the correct key
Instead of modifying the timestamp of records that are materialized so that they appear in the current window, you should supply a window mapping fn which just maps the current window onto the past one.
You'll want to create a custom WindowFn which implements the window mapping behavior that you want, paying special attention to overriding the getDefaultWindowMappingFn function.
Your pipeline would be like:
PCollection<T> mySource = /* data */;
PCollectionView<SumT> view = mySource
.apply(Window.into(myCustomWindowFnWithNewWindowMappingFn))
.apply(Combine.globally(myCombiner).asSingletonView());
mySource.apply(ParDo.of(/* DoFn that consumes side input */).withSideInputs(view));
Pay special attention to the default value the combiner will produce since this will be the default value when the view has had no data emitted to it.
Also, the easiest way to write your own custom window function is to copy an existing one.
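As a rough, untested sketch of the window mapping piece (assuming the Beam Java SDK, 5-minute fixed windows, and a hypothetical class name; constructor and method details may vary between Beam versions), your custom WindowFn's getDefaultWindowMappingFn could return something like:

import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.transforms.windowing.IntervalWindow;
import org.apache.beam.sdk.transforms.windowing.WindowMappingFn;
import org.joda.time.Duration;

// Maps each main-input window to the fixed window immediately before it,
// so the side input visible at window T is the aggregate emitted for T-1.
class PreviousFixedWindowMapping extends WindowMappingFn<IntervalWindow> {
  private static final Duration SIZE = Duration.standardMinutes(5); // assumed window size

  @Override
  public IntervalWindow getSideInputWindow(BoundedWindow mainWindow) {
    IntervalWindow w = (IntervalWindow) mainWindow;
    return new IntervalWindow(w.start().minus(SIZE), w.end().minus(SIZE));
  }
}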
Yeah, so one of my friends said we can use indexes to traverse a stack, but I think he's wrong. Basically, I have a homework assignment in which I had to write an algorithm using an array. I had to use two for loops to do it, so I was wondering how to do something like this with a stack:
for(int i = 0; i < n; i++)
{
for(int j = 0; j < n; j++)
{
x = A[i] + A[j];
}
}
There is no way right? And I have to use pop() and push() only to do whatever I need to do, right? Because I used an array and stack concurrently, but one of my friends told me I can't do that. I know we can implement a stack using an array, but the stack ADT doesn't have indexes (although they just said stack and not stack ADT).
said we can use indexes to traverse a stack, but I think he's wrong.
You're right, he's wrong.
There is no way [to do two nested loops] right?
You can access the element at an index if you have enough space for a temporary stack: pop down to the index while moving the popped values onto a temp stack, read the value, and then push the values back:
#include <stack>

// Peek at the element `index` positions below the top of s (index 0 == top).
// The stack is restored before returning.
int GetAt(std::stack<int>& s, int index) {
    std::stack<int> temp;
    while ((int)temp.size() != index) {
        temp.push(s.top());   // move the items above the target onto a temp stack
        s.pop();
    }
    int res = s.top();        // the value we wanted
    while (!temp.empty()) {
        s.push(temp.top());   // put the moved items back
        temp.pop();
    }
    return res;
}
Yes, that's very, very slow.
Yes, a pure stack abstraction would not have indices. But pure abstractions rarely exist outside of Comp Sci classrooms and Haskell User's Groups, and most stack implementations can accommodate something like this, because indeed, they are usually implemented using an array. At the end of the day, you don't get a prize for how "pure" something is, but rather for getting the job done. I can certainly imagine a situation in which you build up a stack, and then at some point you need to process all the elements as in your loop. Welcome to the real world!
"Stack" is an abstract concept, not something that exists in reality. In the real world, they are generally implemented as data in consecutive memory addresses, and so indexing them is certainly possible, depending on the language/library/API you're using. Without a specific language or library, there's really no way to answer a question like yours.
If what you really mean is, is there a way to do the calculation you mention with two data collections that can only be accessed by push/pop, then probably not without an intermediate array. But why would you want to? A stack is, by its very nature, intended to be used for algorithms that only need to access its data in a LIFO way. Why else would you use one?
Actually you're all wrong(tm)! With two stacks ("pure" stacks) you can implement an array, inefficiently. Suppose stack S1 has 10 items pushed on it, and you want item 5; just pop 4 items off of S1, pushing each on to S2 in turn. Then pop off one more and that is your item. Just keep track of the index of the item that is currently in use (on neither stack), and you can always retrieve any of the other items when you need them.
But of course, this idea is absurdly inefficient.
Look at Reverse Polish Notation for how to use stacks for computing.
http://en.wikipedia.org/wiki/Reverse_Polish_notation
Basically push the values, then pop them off and apply a function.
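As a small illustrative sketch (hypothetical class name), an RPN expression can be evaluated with nothing but push and pop:

import java.util.ArrayDeque;
import java.util.Deque;

// Tiny RPN evaluator: "3 4 + 2 *" evaluates to 14.
public class Rpn {
    public static int eval(String expr) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String tok : expr.split("\\s+")) {
            switch (tok) {
                case "+": { int b = stack.pop(), a = stack.pop(); stack.push(a + b); break; }
                case "-": { int b = stack.pop(), a = stack.pop(); stack.push(a - b); break; }
                case "*": { int b = stack.pop(), a = stack.pop(); stack.push(a * b); break; }
                case "/": { int b = stack.pop(), a = stack.pop(); stack.push(a / b); break; }
                default:  stack.push(Integer.parseInt(tok));  // operand: just push it
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        System.out.println(eval("3 4 + 2 *"));  // prints 14
    }
}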
Stacks do have indices, depending on how you implement them. A stack can be implemented with an array or a linked list; with an array, the indexing runs from left to right starting at zero. And we can index in either direction:
from the top element down to the bottom-most element
from the bottom-most element up to the top element
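For example (a hypothetical, minimal array-backed stack, ignoring overflow and underflow), both directions of indexing are easy to expose:

// Array-backed stack showing both indexing directions.
public class ArrayStack {
    private final int[] data = new int[100];
    private int top = 0;                                       // number of elements

    public void push(int x) { data[top++] = x; }
    public int pop()        { return data[--top]; }

    public int fromBottom(int i) { return data[i]; }           // i = 0 is the bottom-most element
    public int fromTop(int i)    { return data[top - 1 - i]; } // i = 0 is the top-most element
}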
I'm trying to implement a Backend for LLVM.
Right now I'm running into problems with stack frame lowering.
I need to implement the following stack layout:
When a function gets called, I need to put a "Return Symbol" (as the target system can only jump to absolute addresses) and an "Offset" onto the stack, followed by all the function arguments.
The stack alignment is 1 byte and the stack has to grow upward.
Stack Layout before Call:
RetSymb <- SP
Offset
Arguments
Local Data
Stack Layout on entry to the function (the caller's frame followed by the new frame):
RetSymb
Offset
Arguments
Local Data
RetSymb <- SP
Offset = SP - Old SP
Arguments
Local Data
On return the SP gets automatically decremented by the value stored in "Offset".
Variable argument handling is not important right now.
I currently have no idea at which places I have to look and what I need to do at those places.
I have found the emitPrologue and emitEpilogue functions in XXXFrameLowering.cpp, but I don't really know what they are supposed to do (I guess they insert code at the start and end of a function).
I also found a couple of functions in the XXXISelLowering.cpp file. Is there a List that explains what the different functions are supposed to do?
For example:
LowerFormalArguments (I guess this inserts loads from the stack for the arguments)
LowerCallResult
LowerCall
LowerReturn
Thanks in advance for any information that points me in the right direction.
To the best of my knowledge, there's no single place that explains this. You'll have to pick one of the existing backends and follow its code to see where the magic is done. emitPrologue and emitEpilogue are good candidates to start from, because these specifically deal with the code that sets up and tears down the frame in a function. A function is lowered to (rough approximation, there are more details...):
func_label:
prologue
.. function code
epilogue
So to handle the custom stack layout you will definitely have to write custom code for the prologue and epilogue. The same goes for calls to functions, if the caller is responsible for some of the stack layout.
I suggest you begin by reading about the stack frame layout of some of the existing backends and then study the relevant code in LLVM. I described some of the x86 (32-bit) frame information here, for example.
Produce a PDA to recognise the following language: the language of strings containing more a's than b's.
I have been struggling with this question for several days now, I seem to have hit a complete mental block. Would any one be able to provide some guidance or direction to how I can solve this problem?
Your problem of "more a's than b's" can be solved by a PDA.
All you have to do is:
When input is a and the stack is either empty or has an a on the top, push a on the stack; pop b, if b is the top.
When input is b and the stack is either empty or has a b on the top, push b on the stack; pop a, if a is on the top.
Finally, when the string is finished, go to the final state with null input if a is on top of the stack. Otherwise you don't have more a's than b's.
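If it helps to see the idea run, here is a small sketch (Java, hypothetical class name) that simulates the transitions just described: push when the stack is empty or has the same symbol on top, pop when the opposite symbol is on top, and accept iff an a is left on top at the end:

import java.util.ArrayDeque;
import java.util.Deque;

// Simulates the PDA above: accepts exactly the strings with more a's than b's.
public class MoreAsThanBs {
    public static boolean accepts(String input) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : input.toCharArray()) {
            if (c != 'a' && c != 'b') return false;           // reject other symbols
            char other = (c == 'a') ? 'b' : 'a';
            if (!stack.isEmpty() && stack.peek() == other) {
                stack.pop();                                   // cancel one opposite symbol
            } else {
                stack.push(c);                                 // empty or same symbol on top: push
            }
        }
        return !stack.isEmpty() && stack.peek() == 'a';        // a surplus of a's remains
    }

    public static void main(String[] args) {
        System.out.println(accepts("aab"));   // true  (2 a's, 1 b)
        System.out.println(accepts("abab"));  // false (equal counts)
        System.out.println(accepts("baa"));   // true
    }
}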
I have come up with a more general solution to the problem concerning the number of a's and b's; see the picture below,
where a > b means more a's than b's, and likewise for a < b and a = b.
Z means the bottom of stack, and A/B are stack symbols.
I'm excited about it because this PDA separates the 3 different states. In your problem, you can just set the a > b state to be the final state and let a = b be the start state.
And if you want to go a step further, you can use this PDA to easily generate PDAs for a >= b, a - b >= 1, 2 <= a - b <= 3, etc., which is fascinating.
I'm assuming you mean strings of the form a^nb^m, where n>m.
The idea is relatively easy: for each a you push it on the stack (in a loop); for b you switch to a separate loop that pops an a from the stack. If the stack is ever empty, you give up with a FAIL. If in the first loop you get anything other than a or b, or in the second loop you get anything other than b, you give up with FAIL.
At the end you try to pop another a, and if the stack is empty you give up with a FAIL (i.e., you had at least as many b's as a's, possibly more). If not, SUCCESS.
Edit: As a side note, I'm not convinced this is the right site for this question, might be better on programmers. Not sure though so not voting to close.
PDA to accept more a's than b's
Basic information about the transitions is given in the figure below.
Here is the complete transition diagram.
In it, ε denotes the empty string.
A $ sign is pushed onto the stack at the start of the string and popped at the end to determine that the string has been completely read. q0 is the start state and q5 is the final state.
It is a non-deterministic pushdown automaton (NPDA), so the transitions for rejected strings are not shown.