I was curious whether there are any ternary operators used in programming languages besides the ?: operator, and I could only find two on Wikipedia.
Is ?: the only ternary operator we commonly use? Are there any others?
Element update
Another useful class of ternary operator, especially in functional languages, is the "element update" operation. For example, OCaml expressions have three kinds of update syntax:
a.b<-c means the record a where field b has value c
a.(b)<-c means the array a where index b has value c
a.[b]<-c means the string a where index b has value c
Note that these are not "update" in the sense of "assignment" or "modification"; the original object is unchanged, and a new object is yielded that has the stated properties. Consequently, these operations cannot be regarded as a simple composition of two binary operators.
Similarly, the Isabelle theorem prover has:
a(|b := c|) meaning the record a where field b has value c
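To make the non-mutating "element update" idea concrete, here is a minimal Python sketch (purely illustrative; it is not OCaml or Isabelle syntax, and the names and values are invented): the three operands are the container, the field, and the new value, and the result is a fresh object while the original is left alone.

def updated(record, field, value):
    # Return a copy of `record` with `field` set to `value`; `record` itself is unchanged.
    new_record = dict(record)
    new_record[field] = value
    return new_record

a = {"b": 1, "other": "x"}
c = updated(a, "b", 2)
print(a)  # {'b': 1, 'other': 'x'}  -- original untouched
print(c)  # {'b': 2, 'other': 'x'}  -- new object with the stated property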
Array slice
Yet another sort of ternary operator is array slice, for example in Python we have:
a[b:c] meaning an array whose first element is a[b] and last element is a[c-1]
In fact, Python has a quaternary form of slice:
a[b:c:d] meaning an array whose elements are a[b + n*d] where n ranges from 0 to the largest value such that b + n*d < c
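For instance, a quick illustration of the two slice forms above:

a = list(range(10))   # [0, 1, ..., 9]
print(a[2:7])         # [2, 3, 4, 5, 6] -- first element a[2], last element a[7-1]
print(a[2:7:2])       # [2, 4, 6]       -- a[2 + n*2] for n = 0, 1, 2, while 2 + n*2 < 7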
Bash/ksh variable substitution
Although quite obscure, bash has several forms of variable expansion (apparently borrowed from ksh) that are ternary:
${var:pos:len} is a maximum of len characters from $var, starting at pos
${var/Pattern/Replacement} is $var except the first substring within it that matches Pattern is replaced with Replacement
${var//Pattern/Replacement} is the same except all matches are replaced
${var/#Pattern/Replacement} is like the first case except Pattern has to match a prefix of $var
${var/%Pattern/Replacement} is like the previous except for matching a suffix
These are borderline in my opinion, being close to ordinary functions that happen to accept three arguments, written in the sometimes baroque style of shell syntax. But, I include them as they are entirely made of non-letter symbols.
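For what it's worth, the semantics can be sketched in Python (a rough illustration only: bash patterns are globs, whereas this sketch uses regular expressions, and the string and patterns are invented for the example):

import re

var = "banana bread"
print(var[2:2 + 4])                      # ${var:2:4}        -> 'nana'
print(re.sub("an", "AN", var, count=1))  # ${var/an/AN}      -> 'bANana bread'
print(re.sub("an", "AN", var))           # ${var//an/AN}     -> 'bANANa bread'
print(re.sub("^ban", "BAN", var))        # ${var/#ban/BAN}   -> 'BANana bread'
print(re.sub("read$", "READ", var))      # ${var/%read/READ} -> 'banana bREAD'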
Congruence modulo
In mathematics, an important ternary relation is congruence modulo:
a ≡ b (mod c) is true iff a and b both belong to the same equivalence class modulo c
I'm not aware of any programming language that has this, but programming languages often borrow mathematical notation, so it's possible it exists in an obscure language. (Of course, most programming languages have mod as a binary operator, allowing the above to be expressed as (a mod c) == (b mod c).) Furthermore, unlike the bash variable substitution syntax, if this were introduced in some language, it would not be specific to that language since it is established notation elsewhere, making it more similar to ?: in ubiquity.
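In a language that only has binary mod, the relation reduces to something like the following (Python, purely for illustration):

def congruent_mod(a, b, c):
    # True iff a ≡ b (mod c), i.e. a and b fall in the same residue class modulo c.
    return a % c == b % c

print(congruent_mod(17, 5, 12))   # True:  both are 5 modulo 12
print(congruent_mod(17, 6, 12))   # False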
Excluded
There are some categories of operator I've chosen to exclude from the category of "ternary" operator:
Operations like function application (a(b,c)) that could apply to any number of operands.
Specific named functions (e.g., f(a,b,c)) that accept three arguments, as there are too many of them for any to be interesting in this context.
Operations like SUM (Σ) or let that function as a binding introduction of a new variable, since IMO a ternary operator ought to act on three already-existing things.
One-letter operators in languages like sed that happen to accept three arguments, as these really are like the named function case, and the language just has a very terse naming convention.
Well, it’s not a ternary operator per se, but I do think the three-way comparison operator is highly underrated.
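Python has no built-in three-way comparison, but the idea is easy to sketch as a function (just an illustration of what a <=>-style comparison returns, collapsed to an integer):

def three_way(a, b):
    # -1 if a < b, 0 if a == b, 1 if a > b (booleans subtract to ints in Python).
    return (a > b) - (a < b)

print(three_way(3, 7))   # -1
print(three_way(7, 7))   #  0
print(three_way(9, 7))   #  1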
The ternary operator is appropriate when a computation has to take place and produce a value, even when I cannot act on the condition inside an if/else or switch statement. In that case, 0 (or whatever the default is) simply flows through as the DEFAULT VALUE when I attempt the computation.
The if/else and switch statements require me to enumerate every case that can occur, and they only help when a branching choice is what's actually needed rather than a value. In some cases a condition test at that point can't help at all, simply because I am either too early or too late to do anything about the condition, even though some other function cannot test for it either.
The ternary operator forces the computation to yield a value that passes through to the code that follows it. Other kinds of condition tests, for instance those built from the && and || operators, can't guarantee that a value passes through.
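A small Python sketch of what I mean (the names and the default are invented for the illustration): the conditional expression yields either the computed value or the default, and either way the result flows straight into the surrounding expression.

DEFAULT_VALUE = 0

def scaled(x):
    # Either the computation or the default passes through to the "+ 1" below.
    return (x * 10 if x is not None else DEFAULT_VALUE) + 1

print(scaled(4))     # 41
print(scaled(None))  # 1 -- the default flowed through the same expression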
I'm reading Crafting Interpreters. It's very readable. Now I'm on chapter 17, Compiling Expressions, which introduces an algorithm: Vaughan Pratt's "top-down operator precedence parsing". The implementation is very brief and I don't understand why it works.
So I read Vaughan Pratt's "top-down operator precedence parsing" paper. It's old and not easy to read. I read related blogs about it and spent days on the original paper.
Related blogs:
https://abarker.github.io/typped/pratt_parsing_intro.html
https://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-expression-parsing-made-easy/
https://matklad.github.io/2020/04/13/simple-but-powerful-pratt-parsing.html
I am now more confident that I can write an implementation, but I still can't see the trick behind the magic. Here are some of my questions, if I can describe them clearly:
What grammars can Pratt's parser handle? Floyd operator grammars? I even took a look at Floyd's paper, but it's very abstract. Or can Pratt's parser handle any language as long as it meets the restrictions on page 44:
These restrictions on the language, while slightly irksome,...
On page 45, Theorem 2, Proof:
First assign even integers (to make room for the following interpolations) to the data type classes. Then to each argument position assign an integer lying strictly (where possible) between the integers corresponding to the classes of the argument and result types.
On page 44:
The idea is to assign data types to classes and then to totally order the classes.
An example might be, in ascending order, Outcomes (e.g., the pseudo-result of "print"), Booleans, Graphs (e.g. trees, lists, plexes), Strings, Algebraics (e.g. integers, complex nos, polynomials)...
I can't figure out what the term "data type" means in this paper. If it means primitive data types in a programming language, like boolean, int, char in Java, then the following example may be a counterexample:
1 + 2 * 3
For +, its argument type is number; say we assign the integer 2 to the number data type class. +'s result data type is also a number, so + must get the integer 2. But the same is true for *. In this way + and * would end up with the same binding power.
I guess "data type" in this paper means the AST node type. So +'s result type is term and *'s result type is factor, which will get a bigger integer than +'s. But I can't be sure.
By data type, Pratt meant, roughly, "primitive data type in the programming language being parsed". (Although he included some things which are often not thought about as types, unless you program in C or C++, and I don't think he contemplated user-defined types.)
But Pratt is not really talking about a parsing algorithm on page 44. What he's talking about is language design. He wants languages to be designed in such a way that a simple semantic rule involving argument and result types can be used to determine operator precedence. He wants this rule, apparently, in order to save the programmer the task of memorizing arbitrary operator precedence tables (or to have to deal with different languages ordering operators in different ways.)
These are indeed laudable goals, but I'm afraid that the train had already left the station when Pratt was writing that paper. Half a century later, it's evident that we're never going to achieve that level of interlanguage consistency. Fortunately, we can always use parentheses, and we can even write compilers which nag at you about not using redundant parentheses until you give up and write them.
Anyway, that paragraph probably contravened SO's no-opinions policy, so let's get back to Pratt's paper. The rule he proposes is that all of a language's primitive datatypes be listed in a fixed total ordering, with the hope that all language designers will choose the same ordering. (I use the word "dominates" to describe this ordering: type A dominates another type B if A comes later in Pratt's list than B. A type does not dominate itself.)
At one end of the ordering is the null type, which is the result type of an operator which doesn't have a return value. Pratt calls this type "Outcome", since an operator which doesn't return anything must have had some side-effect --its "outcome"-- in order to not be pointless. At the other end of the ordering is what C++ calls a reference type: something which can be used as an argument to an assignment operator. And he then proposes a semantic rule: no operator can produce a result whose type dominates the type of one or more of its arguments, unless the operator's syntax unambiguously identifies the arguments.
That last exception is clearly necessary, since there will always be operators which produce types subordinate to the types of their arguments. Pratt's example is the length operator, which in his view must require parentheses because the Integer type dominates the String and Collection types, so length x, which returns an Integer given a String, cannot be legal. (You could write length(x) or |x| (provided | is not used for other purposes), because those syntaxes are unambiguous.)
It's worth noting that this rule must also apply to implicit coercions, which is equivalent to saying that the rule applies to all overloads of a single operator symbol. (C++ was still far in the future when Pratt was writing, but implicit coercions were common.)
Given that total ordering on types and the restriction (which Pratt calls "slightly irksome") on operator syntax, he then proposes a single simple syntactic rule: operator associativity is resolved by eliminating all possibilities which would violate type ordering. If that's not sufficient to resolve associativity, it can only be the case that there is only one type between the result and argument types of the two operators vying for precedence. In that case, associativity is to the left.
Pratt goes on to prove that this rule is sufficient to remove all ambiguity, and furthermore that it is possible to derive Floyd's operator precedence relation from type ordering (and the knowledge about return and argument types of every operator). So syntactically, Pratt's languages are similar to Floyd's operator precedence grammars.
But remember that Pratt is talking about language design, not parsing. (Up to that point of the paper.) Floyd, on the other hand, was only concerned with parsing, and his parsing model would certainly allow a prefix length operator. (See, for example, C's sizeof operator.) So the two models are not semantically equivalent.
This reduces the amount of memorization needed by someone learning the language: they only have to memorize the order of types. They don't need to try to memorize the precedence between, say, a concatenation operator and a division operator, because they can just work it out from the fact that Integer dominates String. [Note 1]
Unfortunately, Pratt has to deal with the fact that "left associative unless you have a type mismatch" really does not cover all the common expectations for expression parsing. Although not every language complies, most of us would be surprised to find that a*4 + b*6 was parsed as ((a * 4) + b) * 6, and would demand an explanation. So Pratt proposes that it is possible to make an exception by creating "pseudotypes". We can pretend that the argument and return types of multiplication and division are different from (and dominate) the argument and return types of addition and subtraction. Then we can allow the Product type to be implicitly coerced to the Sum type (conceptually, because the coercion does nothing), thereby forcing the desired parse.
Of course, he has now gone full circle: the programmer needs to memorise not only the type ordering rules but also the pseudotype ordering rules, which are nothing but precedence rules in disguise.
The rest of the paper describes the actual algorithm, which is quite clever. Although it is conceptually identical to Floyd's operator precedence parsing, Pratt's algorithm is a top-down algorithm; it uses the native call stack instead of requiring a separate stack algorithm, and it allows the parser to interpolate code with production parsing without waiting for the production to terminate.
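To give a feel for that, here is a minimal Pratt-style parser sketch in Python (my own illustration, not code from the paper or the book): each binary operator carries a binding power, and the recursion on the native call stack replaces the explicit operator/operand stacks of a bottom-up parser.

BINDING_POWER = {"+": 10, "-": 10, "*": 20, "/": 20}

def parse_expression(tokens, min_bp=0):
    # tokens is a list of single-character tokens, consumed left to right.
    left = tokens.pop(0)                              # a number or a name; no prefix operators here
    while tokens and BINDING_POWER.get(tokens[0], -1) > min_bp:
        op = tokens.pop(0)
        right = parse_expression(tokens, BINDING_POWER[op])
        left = (op, left, right)
    return left

print(parse_expression(list("1+2*3")))   # ('+', '1', ('*', '2', '3'))
print(parse_expression(list("1*2+3")))   # ('+', ('*', '1', '2'), '3')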
I think I've already deviated sufficiently from SO's guidelines in the tone of this answer, so I'll leave it at that, with no other comment about the relative virtues of top-down and bottom-up control flows.
Notes
Integer dominates String means that there is no implicit coercion from a String to an Integer, the same reason that the length operator needs to be parenthesised. There could be an implicit coercion from Integer to String. So the expression a divide b concatenate c must be parsed as (a divide b) concatenate c. a divide (b concatenate c) is disallowed and so the parser can ignore the possibility.
Context
I've recently come up with an issue that I couldn't solve by myself in a parser I'm writing.
This parser is a component in a compiler I'm building and the question is in regards to the expression parsing necessary in programming language parsing.
My parser uses recursive descent to parse expressions.
The problem
I parse expressions using normal regular language parsing rules, I've eliminated left recursion in all my rules but there is one syntactic "ambiguity" which my parser simply can't handle and it involves generics.
comparison → addition ( ( ">" | ">=" | "<" | "<=" ) addition )* ;
is the rule I use for parsing comparison nodes in the expression
On the other hand I decided to parse generic expressions this way:
generic → primary ( "<" arguments ">" ) ;
where
arguments → expression ( "," expression )* ;
Now because generic expressions have higher precedence as they are language constructs and not mathematical expressions, it causes a scenario where the generic parser will attempt to parse expressions when it shouldn't.
For example in a<2 it will parse "a" as a primary element of the identifier type, immediately afterwards find the syntax for a generic type, parse that and fail as it can't find the closing tag.
What is the solution to such a scenario? Especially in languages like C++, where generics can also contain expressions; if I'm not mistaken, arr<1<2> might be legal syntax.
Is this a special edge case, or does it require a modification to the syntax definition that I'm not aware of?
Thank you
For example, in a<2 it will parse "a" as a primary element of the identifier type, immediately afterwards find the syntax for a generic type, parse that, and fail as it can't find the closing tag
This particular case could be solved with backtracking or unbounded lookahead. As you said, the parser will eventually fail when interpreting this as a generic, so when that happens, you can go back and parse it as a relational operator instead. The lookahead variant would be to look ahead when seeing a < to check whether the < is followed by comma-separated type names and a > and only go into the generic rule if that is the case.
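As a rough sketch of the lookahead idea in Python (the token list and the stop tokens here are hypothetical, not taken from your grammar): scan ahead from the < for a matching > without consuming anything, and only commit to the generic rule if one is found.

def looks_like_generic(tokens, i):
    # tokens[i] is the '<' that follows a primary; return True if we should
    # try the generic rule, False if '<' should be parsed as a comparison.
    depth = 0
    for tok in tokens[i:]:
        if tok == "<":
            depth += 1
        elif tok == ">":
            depth -= 1
            if depth == 0:
                return True          # matching '>' found: commit to the generic rule
        elif tok in (";", ")", "{"):
            return False             # something a generic argument list can't contain
    return False                     # ran out of input: fall back to the comparison rule

print(looks_like_generic(["<", "int", ",", "string", ">", "("], 0))  # True
print(looks_like_generic(["<", "2", ";"], 0))                        # False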
However that approach no longer works if both interpretations are syntactically valid (meaning the syntax actually is ambiguous). One example of that would be x<y>z, which could either be a declaration of a variable z of type x<y> or two comparisons. This example is somewhat unproblematic since the latter meaning is almost never the intended one, so it's okay to always interpret it as the former (this happens in C# for example).
Now if we allow expressions, it becomes more complicated. For x<y>z it's easy enough to say that this should never be interpreted as two comparisons, as it makes no sense to compare the result of a comparison with something else (in many languages using relational operators on Booleans is a type error anyway). But for something like a<b<c>() there are two interpretations that might both be valid: either a is a generic function called with the generic argument b<c, or b is a generic function with the generic argument c (and a is compared to the result of calling that function). At this point it is no longer possible to resolve that ambiguity with syntactic rules alone:
In order to support this, you'll need to either check whether the given primary refers to a generic function and make different parsing decisions based on that or have your parser generate multiple trees in case of ambiguities and then select the correct one in a later phase. The former option means that your parser needs to keep track of which generic functions are currently defined (and in scope) and then only go into the generic rule if the given primary is the name of one of those functions. Note that this becomes a lot more complicated if you allow functions to be defined after they are used.
So in summary supporting expressions as generic arguments requires you to keep track of which functions are in scope while parsing and use that information to make your parsing decisions (meaning your parser is context sensitive) or generate multiple possible ASTs. Without expressions you can keep it context free and unambiguous, but will require backtracking or arbitrary lookahead (meaning it's LL(*)).
Since neither of those are ideal, some languages change the syntax for calling generic functions with explicit type parameters to make it LL(1). For example:
Java puts the generic argument list of a method before the method name, i.e. obj.<T>foo() instead of obj.foo<T>().
Rust requires :: before the generic argument list: foo::<T>() instead of foo<T>().
Scala uses square brackets for generics and for nothing else (array subscripts use parentheses): foo[T]() instead of foo<T>().
Conceptually I see a value as a single element. I can understand that at the lowest level of hardware the value returned is zero or one. I just see a "value" as returning a single unit. I see a procedure as a multiple unit. For example, a procedure (+ x x) to me seems like it should return "(", ")", "+" , "x". In this example, the value of lambda is the procedure.
What am I missing here?
Scheme is primarily a functional programming language. Functional languages deal with expressions[1] (as opposed to statements); expressions are at the core of functional languages pretty much like classes are at the core of object-oriented languages.
In Scheme, functions are expressed as lambda expressions. Since Scheme primarily deals with expressions, and since lambda expressions themselves are expressions, Scheme deals with functions just like any other expression. Therefore, functions are first-class citizens of the language.
I don't think one should feel overly concerned about how exactly all of this translates under the hood in terms of bits and bytes. What plays as a strength in some languages (C/C++) can quickly turn against you here: imperative thinking in Scheme will only get you frustrated, and bounce you right back out to mainstream paradigms and languages.
What functional languages are really about is abstraction, metaprogramming (many Schemes feature powerful syntactic macros), and more abstraction. There is a well-known quote from Peter Deutsch: "Lisp ... made me aware that software could be close to executable mathematics." I think it sums it up very well.
[1] In Scheme (and other dialects of LISP), s-expressions are used to denote expressions. They give the language that distinctive parenthesized syntax.
When the program is compiled, the code for a procedure is generated. When you run the program, the code for a procedure is stored at a given address. A procedure value will in most implementations consist of a record/struct containing the name of the procedure, the address where the procedure is stored (when you call the procedure, the CPU jumps to this address) and, in the case of a procedure created by a lambda expression, a table of the values of its free variables.
In Scheme everything is passed by value. That said, values that cannot be stored in a machine word are represented as pointers to data structures.
An internal procedure (a primitive) may be treated specially, such that it is just a value, while an evaluated lambda expression is a multi-value object. That object has an address, which is the "value" of the procedure. Evaluating a lambda form produces such a structure. Example:
;; a lambda form is evaluated into a closure and then
;; that object is the value of variable x
(define x (lambda y y)) ; ==> undefined
;; x is evaluated and turns out to be a closure. Scheme
;; evaluates the rest of the arguments before applying.
(x 1 2 3) ; ==> (1 2 3)
;; same, only that all the arguments also evaluate to the same closure.
(x x x x) ; ==> (#<closure> #<closure> #<closure>)
In the PowerShell Language Specification, the grammar has a term called "Primary Expression".
primary-expression:
value
member-access
element-access
invocation-expression
post-increment-expression
post-decrement-expression
Semantically, what is a primary expression intended to describe?
As I understand it there are two parts to this:
Formal grammars break up things like expressions so operator precedence is implicit.
E.g., if the grammar had
expression:
value
expression * expression
expression + expression
…
There would need to be a separate mechanism to define * as having higher precedence than +. This becomes significant when using tools to directly transform the grammar into a tokeniser/parser.[1] (A sketch of this layering appears below, after the footnote.)
There are so many different specific rules (consider the number of operators in PowerShell) that using fewer rules would be harder to understand because each rule would be so long.
[1] I suspect this is not the case with PowerShell because it is highly context sensitive (expression vs. command mode, and then consider calling non-built-in executables). But such grammars across languages tend to have a lot in common, so style can also be carried over (don't re-invent the wheel).
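A sketch of that layering in Python (using a toy +/* grammar, not PowerShell; this is my own illustration): term binds tighter than expression simply because it sits one level lower in the rule hierarchy, so no separate precedence table is needed.

def parse_expression(tokens):          # expression: term ('+' term)*
    value = parse_term(tokens)
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        value = ("+", value, parse_term(tokens))
    return value

def parse_term(tokens):                # term: primary ('*' primary)*
    value = parse_primary(tokens)
    while tokens and tokens[0] == "*":
        tokens.pop(0)
        value = ("*", value, parse_primary(tokens))
    return value

def parse_primary(tokens):             # primary: a single value, no operators to order
    return tokens.pop(0)

print(parse_expression(list("1+2*3")))   # ('+', '1', ('*', '2', '3'))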
My understanding of it is that an expression is an arrangement of commands/arguments and/or operators/operands that, taken together, will execute and effect an action or produce a result (even if it's $null) that can be assigned to a variable or sent down the pipeline. The term "primary expression" is used to differentiate the whole expression from any sub-expressions $() that may be contained within it.
My very limited experience says that primary refers to expressions (or sub-expressions) that can't be parsed in more than one way, regardless of precedence or associativity. They have enough syntactic anchors that they are really non-negotiable--ID '(' ID ')' for example, can only match that. The parens make it guaranteed, and there are no operators to allow for any decision on precedence.
In matching an expression tree, it's common to have a set of these sub-expressions under a "primary_expr" rule. Any sub-expressions can be matched any way you like because they're all absolutely determined by syntax. Everything after that will have to be ordered carefully in the grammar to guarantee correct precedence and associativity.
There's some support for this interpretation here.
I think $( lots of statements ) would be like a complex expression,
and in PowerShell most things can be an expression, like
$a = if ($y) { 1 } else { 2 }
so a primary expression is likely the lowest common denominator of classic expressions:
a single thing that returns a value,
whether an explicit value, calling something, getting a variable, a property from a variable ($x.y), or the result of an increment operation ($z++).
But even a simple math "expression" like 4 + 2 + (3 / 4) wouldn't match this; that would be more of a complex expression.
So I was thinking of the WHY behind it, and at first it was mentioned that it could be to help determine command/expression mode, but with investigation that wasn't it. Then I thought maybe it was what could be passed in command mode as an argument explicitly, but that's not it, because if you pass $x++ as a parameter to a cmdlet you get a string.
I think it's probably just the lowest common denominator of expressions, a building block, so the next question is: what other expressions does the grammar contain, and where is this one used?
This page says "Prefix operators are usually right-associative, and postfix operators left-associative" (emphasis mine).
Are there real examples of left-associative prefix operators, or right-associative postfix operators? If not, what would a hypothetical one look like, and how would it be parsed?
It's not particularly easy to make the concepts of "left-associative" and "right-associative" precise, since they don't directly correspond to any clear grammatical feature. Still, I'll try.
Despite the lack of math layout, I tried to insert an explanation of precedence relations here, and it's the best I can do, so I won't repeat it. The basic idea is that given an operator grammar (i.e., a grammar in which no production has two non-terminals without an intervening terminal), it is possible to define precedence relations ⋖, ≐, and ⋗ between grammar symbols, and then this relation can be extended to terminals.
Put simply, if a and b are two terminals, a ⋖ b holds if there is some production in which a is followed by a non-terminal which has a derivation (possibly not immediate) in which the first terminal is b. a ⋗ b holds if there is some production in which b follows a non-terminal which has a derivation in which the last terminal is a. And a ≐ b holds if there is some production in which a and b are either consecutive or are separated by a single non-terminal. The use of symbols which look like arithmetic comparisons is unfortunate, because none of the usual arithmetic laws apply. It is not necessary (in fact, it is rare) for a ≐ a to be true; a ≐ b does not imply b ≐ a and it may be the case that both (or neither) of a ⋖ b and a ⋗ b are true.
An operator grammar is an operator precedence grammar iff given any two terminals a and b, at most one of a ⋖ b, a ≐ b and a ⋗ b hold.
If a grammar is an operator-precedence grammar, it may be possible to find an assignment of integers to terminals which make the precedence relationships more or less correspond to integer comparisons. Precise correspondence is rarely possible, because of the rarity of a ≐ a. However, it is often possible to find two functions, f(t) and g(t) such that a ⋖ b is true if f(a) < g(b) and a ⋗ b is true if f(a) > g(b). (We don't worry about only if, because it may be the case that no relation holds between a and b, and often a ≐ b is handled with a different mechanism: indeed, it means something radically different.)
%left and %right (the yacc/bison/lemon/... declarations) construct functions f and g. The way they do it is pretty simple. If OP (an operator) is "left-associative", that means that expr1 OP expr2 OP expr3 must be parsed as <expr1 OP expr2> OP expr3, in which case OP ⋗ OP (which you can see from the derivation). Similarly, if ROP were "right-associative", then expr1 ROP expr2 ROP expr3 must be parsed as expr1 ROP <expr2 ROP expr3>, in which case ROP ⋖ ROP.
Since f and g are separate functions, this is fine: a left-associative operator will have f(OP) > g(OP) while a right-associative operator will have f(ROP) < g(ROP). This can easily be implemented by using two consecutive integers for each precedence level and assigning them to f and g in turn if the operator is right-associative, and to g and f in turn if it's left-associative. (This procedure will guarantee that f(T) is never equal to g(T). In the usual expression grammar, the only ≐ relationships are between open and close bracket-type-symbols, and these are not usually ambiguous, so in a yacc-derivative grammar it's not necessary to assign them precedence values at all. In a Floyd parser, they would be marked as ≐.)
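A small Python sketch of that construction (the operator set here is just an example I've chosen): each precedence level takes two consecutive integers, and which of f or g gets the larger one encodes the associativity.

f, g = {}, {}
levels = [("left", ["+", "-"]),     # lowest precedence
          ("left", ["*", "/"]),
          ("right", ["**"])]        # highest precedence

n = 0
for assoc, ops in levels:
    lo, hi = n, n + 1
    for op in ops:
        if assoc == "left":
            f[op], g[op] = hi, lo   # f(op) > g(op): op ⋗ op, groups to the left
        else:
            f[op], g[op] = lo, hi   # f(op) < g(op): op ⋖ op, groups to the right
    n += 2

print(f["+"] > g["+"])      # True: '+' is left-associative
print(f["**"] < g["**"])    # True: '**' is right-associative
print(f["+"] < g["*"])      # True: '+' ⋖ '*', so '*' binds tighter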
Now, what about prefix and postfix operators? Prefix operators are always found in a production of the form [1]:
non-terminal-1: PREFIX non-terminal-2;
There is no non-terminal preceding PREFIX so it is not possible for anything to be ⋗ PREFIX (because the definition of a ⋗ b requires that there be a non-terminal preceding b). So if PREFIX is associative at all, it must be right-associative. Similarly, postfix operators correspond to:
non-terminal-3: non-terminal-4 POSTFIX;
and thus POSTFIX, if it is associative at all, must be left-associative.
Operators may be either semantically or syntactically non-associative (in the sense that applying the operator to the result of an application of the same operator is undefined or ill-formed). For example, in C++, ++ ++ a is semantically incorrect (unless operator++() has been redefined for a in some way), but it is accepted by the grammar (in case operator++() has been redefined). On the other hand, new new T is not syntactically correct. So new is syntactically non-associative.
[1] In Floyd grammars, all non-terminals are coalesced into a single non-terminal type, usually expression. However, the definition of precedence-relations doesn't require this, so I've used different place-holders for the different non-terminal types.
There could be in principle. Consider for example the prefix unary plus and minus operators: suppose + is the identity operation and - negates a numeric value.
They are "usually" right-associative, meaning that +-1 is equivalent to +(-1), the result is minus one.
Suppose they were left-associative, then the expression +-1 would be equivalent to (+-)1.
The language would therefore have to give a meaning to the sub-expression +-. Languages "usually" don't need this to have a meaning and don't give it one, but you can probably imagine a functional language in which the result of applying the identity operator to the negation operator is an operator/function that has exactly the same effect as the negation operator. Then the result of the full expression would again be -1 for this example.
Indeed, if the result of juxtaposing functions/operators is defined to be a function/operator with the same effect as applying both in right-to-left order, then it makes no difference to the result of the expression which way you associate them. Those are just two different ways of defining that (f g)(x) == f(g(x)). If your language defines +- to mean something other than -, though, then the direction of associativity would matter (and I suspect the language would be very difficult to read for someone used to the "usual" languages...)
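A quick Python sketch of that point, treating the two prefix operators as one-argument functions (the names are invented for the illustration):

negate = lambda x: -x           # the prefix '-' operator
identity = lambda x: x          # the prefix '+' operator

def juxtapose(f, g):
    # Define (f g) as composition in right-to-left order.
    return lambda x: f(g(x))

print(identity(negate(1)))             # -1: right-associative reading, +(-1)
print(juxtapose(identity, negate)(1))  # -1: left-associative reading, (+-)1 -- same result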
On the other hand, if the language doesn't allow juxtaposing operators/functions then prefix operators must be right-associative to allow the expression +-1. Disallowing juxtaposition is another way of saying that (+-) has no meaning.
I'm not aware of such a thing in a real language (e.g., one that's been used by at least a dozen people). I suspect the "usually" was merely because proving a negative is next to impossible, so it's easier to avoid arguments over trivia by not making an absolute statement.
As to how you'd theoretically do such a thing, there seem to be two possibilities. Given two prefix operators, call them @ and #, that you were going to treat as left associative, you could parse @#a as equivalent to #(@(a)), applying the left-most operator first. At least to me, this seems like a truly dreadful idea--theoretically possible, but a language nobody should wish on even their worst enemy.
The other possibility is that @#a would be parsed as (@#)a. In this case, we'd basically compose @ and # into a single operator, which would then be applied to a.
In most typical languages, this probably wouldn't be terribly interesting (would have essentially the same meaning as if they were right associative). On the other hand, I can imagine a language oriented to multi-threaded programming that decreed that application of a single operator is always atomic--and when you compose two operators into a single one with the left-associative parse, the resulting fused operator is still a single, atomic operation, whereas just applying them successively wouldn't (necessarily) be.
Honestly, even that's kind of a stretch, but I can at least imagine it as a possibility.
I hate to shoot down a question that I myself asked, but having looked at the two other answers, would it be wrong to suggest that I've inadvertently asked a subjective question, and that in fact the interpretation of left-associative prefixes and right-associative postfixes is simply undefined?
Remembering that even notation as pervasive as expressions is built upon a handful of conventions, if there's an edge case that the conventions never took into account, then maybe, until some standards committee decides on a definition, it's better to simply pretend it doesn't exist.
I do not remember any left-associative prefix operators or right-associative postfix ones, but I can imagine that both could easily exist. They are not common because of the natural way people read operators: the one closest to the operand applies first.
An easy example from the C#/C++ languages:
~-3 is equal to 2, but
-~3 is equal to 4
This is because those prefix operators are right-associative: for ~-3 this means that the - operator is applied first and then the ~ operator is applied to the result. The value of the whole expression is therefore 2.
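You can check the same two expressions in Python, which shares these operators and their right-associative reading:

print(~-3)   # 2: '-' applies first (giving -3), then '~' gives -(-3) - 1 = 2
print(-~3)   # 4: '~' applies first (~3 == -4), then '-' gives 4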
Hypothetically, if those operators were left-associative, then for ~-3 the left-most operator ~ would be applied first, and after that - would be applied to the result. The value of the whole expression would then be 4.
[EDIT] Answering Steve Jessop:
Steve said that the meaning of "left-associativity" is that +-1 is equivalent to (+-)1.
I do not agree with this, and think it is totally wrong. To better understand left-associativity, consider the following example:
Suppose I have a hypothetical programming language with two left-associative prefix operators:
@ - multiplies its operand by 3
# - adds 7 to its operand
Then the following construction, @#5, in my language will be equal to (5*3)+7 == 22.
If my language were right-associative (as most usual languages are), then I would have (5+7)*3 == 36.
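A quick Python sketch of the two readings of @#5 (the operators are modelled as plain functions here, purely for illustration):

times3 = lambda x: x * 3   # the '@' operator
plus7 = lambda x: x + 7    # the '#' operator

print(plus7(times3(5)))    # 22: left-associative reading, left-most operator applied first
print(times3(plus7(5)))    # 36: right-associative reading, operator nearest the operand first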
Please let me know if you have any questions.
Hypothetical example. A language has prefix operator # and postfix operator # with the same precedence. An expression #x# would be equal to (#x)# if both operators are left-associative and to #(x#) if both operators are right-associative.