How does operator precedence in Bison work?

This is probably a simple question that has already been asked, but I'm having difficulty understanding Bison, and in particular operator precedence. I have this grammar, in which + is declared left-associative:
%left '+'
%%
S:
  S E '\n' { printf("%d\n", $2); }
| /* empty */
;
E:
  num       { $$ = $1; }
| E '+' E   { $$ = $1 + $3; }
| E '*' E   { $$ = $1 * $3; }
;
%%
The input is 2+3+4*5 and the output is 25; I expected 45.
Could someone show me step by step what Bison does? I mean how elements are pushed onto the stack, and how and when they are reduced. Or even the parse tree, if that is possible.

The easiest way to see what is going on in a grammar is to enable bison's trace facility, explained in the bison manual section on debugging parsers. It's useful to have the state machine handy while you're reading traces, since the trace provides the path through the state machine. To see the state machine, use bison's -v option, which will create a file with extension .output.
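The tracing code can also be requested from inside the grammar file instead of on the command line: a %define parse.trace declaration among the other %-declarations is equivalent to passing -t (the example below just uses -t).
%define parse.trace   /* same effect as bison -t / --debug */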
$ cat minimal.y
%{
#include <stdio.h>
#include <ctype.h>
int yylex(void);
void yyerror(const char* msg) {
  fprintf(stderr, "%s\n", msg);
}
%}
%token num
%left '+'
%%
S: S E '\n' { printf("%d\n", $2); }
 |
E: num      { $$ = $1; }
 | E '+' E  { $$ = $1 + $3; }
 | E '*' E  { $$ = $1 * $3; }
%%
int yylex(void) {
  int c;
  do c = getchar(); while (c == ' ');
  if (isdigit(c)) {
    yylval = c - '0';
    return num;
  }
  return c == EOF ? 0 : c;
}
int main(int argc, char* argv[]) {
#if YYDEBUG
  yydebug = 1;
#endif
  return yyparse();
}
Compile and run:
$ bison -t -v -o minimal.c minimal.y
minimal.y: warning: 3 shift/reduce conflicts [-Wconflicts-sr]
$ gcc -Wall -o minimal minimal.c
$ ./minimal <<<'2+3+4*5'
Starting parse
Entering state 0
Reducing stack by rule 2 (line 14):
-> $$ = nterm S ()
Stack now 0
I snipped the trace (although you can see it at the bottom of the answer). Look through the trace for the line which says that it is reading the token *:
Entering state 8
Reading a token: Next token is token '*' ()
Shifting token '*' ()
Entering state 7
Here's the definition of State 8 from minimal.output, complete with shift-reduce conflict (indicated by the square brackets around the action which will not be taken) and the default resolution:
State 8
    4 E: E . '+' E
    4  | E '+' E .
    5  | E . '*' E
    '*'       shift, and go to state 7
    '*'       [reduce using rule 4 (E)]
    $default  reduce using rule 4 (E)
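This is one of the three reported conflicts; all of them arise because '*' has no declared precedence, so bison falls back on its default of preferring the shift over the reduction. If conventional precedence is wanted, it can be declared explicitly; a sketch of the declarations (replacing the question's single %left line):
%left '+'
%left '*'   /* a later %left line binds more tightly, so '*' outranks '+' */
With both operators declared, bison resolves these conflicts by precedence and stops reporting them, and 2+3+4*5 still evaluates to 25. Putting both operators on one %left line would instead give them equal precedence with left-to-right grouping, which is where the expected 45 would come from.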
Here's the complete trace (although I strongly encourage you to do the experiment on your own machine):
Starting parse
Entering state 0
Reducing stack by rule 2 (line 14):
-> $$ = nterm S ()
Stack now 0
Entering state 1
Reading a token: Next token is token num ()
Shifting token num ()
Entering state 3
Reducing stack by rule 3 (line 16):
$1 = token num ()
-> $$ = nterm E ()
Stack now 0 1
Entering state 4
Reading a token: Next token is token '+' ()
Shifting token '+' ()
Entering state 5
Reading a token: Next token is token num ()
Shifting token num ()
Entering state 3
Reducing stack by rule 3 (line 16):
$1 = token num ()
-> $$ = nterm E ()
Stack now 0 1 4 5
Entering state 8
Reading a token: Next token is token '+' ()
Reducing stack by rule 4 (line 17):
$1 = nterm E ()
$2 = token '+' ()
$3 = nterm E ()
-> $$ = nterm E ()
Stack now 0 1
Entering state 4
Next token is token '+' ()
Shifting token '+' ()
Entering state 5
Reading a token: Next token is token num ()
Shifting token num ()
Entering state 3
Reducing stack by rule 3 (line 16):
$1 = token num ()
-> $$ = nterm E ()
Stack now 0 1 4 5
Entering state 8
Reading a token: Next token is token '*' ()
Shifting token '*' ()
Entering state 7
Reading a token: Next token is token num ()
Shifting token num ()
Entering state 3
Reducing stack by rule 3 (line 16):
$1 = token num ()
-> $$ = nterm E ()
Stack now 0 1 4 5 8 7
Entering state 9
Reading a token: Next token is token '\n' ()
Reducing stack by rule 5 (line 18):
$1 = nterm E ()
$2 = token '*' ()
$3 = nterm E ()
-> $$ = nterm E ()
Stack now 0 1 4 5
Entering state 8
Next token is token '\n' ()
Reducing stack by rule 4 (line 17):
$1 = nterm E ()
$2 = token '+' ()
$3 = nterm E ()
-> $$ = nterm E ()
Stack now 0 1
Entering state 4
Next token is token '\n' ()
Shifting token '\n' ()
Entering state 6
Reducing stack by rule 1 (line 13):
$1 = nterm S ()
$2 = nterm E ()
$3 = token '\n' ()
25
-> $$ = nterm S ()
Stack now 0
Entering state 1
Reading a token: Now at end of input.
Shifting token $end ()
Entering state 2
Stack now 0 1 2
Cleanup: popping token $end ()
Cleanup: popping nterm S ()
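Reading the reductions off that trace, the parse tree it builds (bison doesn't print this itself; it is reconstructed here) is:
S
+-- S  (empty)
+-- E                      value 25
|   +-- E                  value 5
|   |   +-- E -- num 2
|   |   +-- '+'
|   |   +-- E -- num 3
|   +-- '+'
|   +-- E                  value 20
|       +-- E -- num 4
|       +-- '*'
|       +-- E -- num 5
+-- '\n'
so the expression is grouped as (2+3)+(4*5) = 25.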

Related

OCaml: parsing negative floating-point numbers in a calculator

What I would like to do
I would like to correctly parse negative floating-point numbers.
How should I fix my code?
What is not working
When I try to interpret - 5 as -5.000000, I get this error:
Fatal error: exception Stdlib.Parsing.Parse_error
1c1
< error: parse error at char=0, near token '-'
---
> - 5 = -5.000000
My source code
calc_ast.ml
(* abstract syntax tree *)
type expr =
    Num of float
  | Plus of expr * expr
  | Times of expr * expr
  | Div of expr * expr
  | Minus of expr * expr
;;
calc_lex.ml
{
open Calc_parse
;;
}
rule lex = parse
| [' ' '\t' '\n' ] { lex lexbuf }
| '-'? ['0'-'9']+ as s { NUM(float_of_string s) }
| '-'? ['0'-'9']+ ('.' ['0'-'9']*)? as s { NUM(float_of_string s) }
| '+' { PLUS }
| '-' { MINUS }
| '*' { TIMES }
| '/' { DIV }
| '(' { LPAREN }
| ')' { RPAREN }
| eof { EOF }
calc_parse.mly
%{
%}
%token <float> NUM
%token PLUS TIMES EOF MINUS DIV LPAREN RPAREN
%start program
%type <Calc_ast.expr> program
%%
program :
| compound_expr EOF { $1 }
compound_expr :
| expr { $1 }
| LPAREN expr RPAREN { $2 }
expr :
| mul { $1 }
| expr PLUS mul { Calc_ast.Plus($1, $3) }
| expr MINUS mul { Calc_ast.Minus($1, $3) }
mul :
| NUM { Calc_ast.Num $1 }
| mul TIMES NUM { Calc_ast.Times($1, Calc_ast.Num $3) }
| mul DIV NUM { Calc_ast.Div($1, Calc_ast.Num $3) }
%%
calc.ml
open Calc_parse

(* token -> string *)
let string_of_token t =
  match t with
    NUM(s) -> Printf.sprintf "NUM(%f)" s
  | PLUS -> "PLUS"
  | TIMES -> "TIMES"
  | MINUS -> "MINUS"
  | DIV -> "DIV"
  | LPAREN -> "LPAREN"
  | RPAREN -> "RPAREN"
  | EOF -> "EOF"
;;

(* print token t and return it *)
let print_token t =
  Printf.printf "%s\n" (string_of_token t);
  t
;;

(* apply lexer to string s *)
let lex_string s =
  let rec loop b =
    match print_token (Calc_lex.lex b) with
      EOF -> ()
    | _ -> loop b
  in
  loop (Lexing.from_string s)
;;

(* apply parser to string s;
   show some info when a parse error happens *)
let parse_string s =
  let b = Lexing.from_string s in
  try
    program Calc_lex.lex b (* main work *)
  with Parsing.Parse_error as exn ->
    (* handle parse error *)
    let c0 = Lexing.lexeme_start b in
    let c1 = Lexing.lexeme_end b in
    Printf.fprintf stdout
      "error: parse error at char=%d, near token '%s'\n"
      c0 (String.sub s c0 (c1 - c0));
    raise exn
;;

(* evaluate expression (AST tree) *)
let rec eval_expr e =
  match e with
    Calc_ast.Num(c) -> c
  | Calc_ast.Plus(e0, e1) -> (eval_expr e0) +. (eval_expr e1)
  | Calc_ast.Minus(e0, e1) -> (eval_expr e0) -. (eval_expr e1)
  | Calc_ast.Times(e0, e1) -> (eval_expr e0) *. (eval_expr e1)
  | Calc_ast.Div(e0, e1) -> (eval_expr e0) /. (eval_expr e1)
;;

(* evaluate string *)
let eval_string s =
  let e = parse_string s in
  eval_expr e
;;

(* evaluate string and print it *)
let eval_print_string s =
  let y = eval_string s in
  Printf.printf "%s = %f\n" s y
;;

let eval_print_stdin () =
  let ch = stdin in
  let s = input_line ch in
  eval_print_string (String.trim s)
;;

let main argv =
  eval_print_stdin ()
;;

if not !Sys.interactive then
  main Sys.argv
;;
As indicated in the comments, it's almost never a good idea for the lexical analyser to try to recognise the - as part of a numeric literal:
Since the lexical token must be a contiguous string, - 5 will not match. Instead, you'll get two tokens. So you need to handle that in the parser anyway.
On the other hand, if you don't put a space after the -, then 3-4 will be analysed as the two tokens 3 and -4, which is also going to lead to a syntax error.
A simple solution is to add a term nonterminal which handles the unary negation operator (and which produces a Calc_ast.expr, so the Num wrapping moves into it):
mul :
| term { $1 }
| mul TIMES term { Calc_ast.Times($1, $3) }
| mul DIV term { Calc_ast.Div($1, $3) }
term :
| NUM { Calc_ast.Num $1 }
| MINUS term { Calc_ast.Minus(Calc_ast.Num 0.0, $2) }
| LPAREN expr RPAREN { $2 }
In the above, I also moved the handling of parentheses from compound_expr down into term, so that expressions like 4*(5+3) are possible. With that change, you no longer need compound_expr.

Why does my "equation" grammar break the parser?

Currently, my parser file looks like this:
%{
#include <stdio.h>
#include <math.h>
int yylex();
void yyerror (const char *s);
%}
%union {
long num;
char* str;
}
%start line
%token print
%token exit_cmd
%token <str> identifier
%token <str> string
%token <num> number
%%
line: assignment {;}
| exit_stmt {;}
| print_stmt {;}
| line assignment {;}
| line exit_stmt {;}
| line print_stmt {;}
;
assignment: identifier '=' number {printf("Assigning var %s to value %d\n", $1, $3);}
| identifier '=' string {printf("Assigning var %s to value %s\n", $1, $3);}
;
exit_stmt: exit_cmd {exit(0);}
;
print_stmt: print print_expr {;}
;
print_expr: string {printf("%s\n", $1);}
| number {printf("%d\n", $1);}
;
%%
int main(void)
{
return yyparse();
}
void yyerror (const char *s) {fprintf(stderr, "%s\n", s);}
Giving the input myvar = 3 gives the output Assigning var myvar to value 3, as expected. However, modifying the code to include an equation grammar rule breaks such assignments.
Equation grammar:
equation: number '+' number {$$ = $1 + $3;}
| number '-' number {$$ = $1 - $3;}
| number '*' number {$$ = $1 * $3;}
| number '/' number {$$ = $1 / $3;}
| number '^' number {$$ = pow($1, $3);}
| equation '+' number {$$ = $1 + $3;}
| equation '-' number {$$ = $1 - $3;}
| equation '*' number {$$ = $1 * $3;}
| equation '/' number {$$ = $1 / $3;}
| equation '^' number {$$ = pow($1, $3);}
;
Modifying the assignment grammar accordingly as well:
assignment: identifier '=' number {printf("Assigning var %s to value %d\n", $1, $3);}
| identifier '=' equation {printf("Assigning var %s to value %d\n", $1, $3);}
| identifier '=' string {printf("Assigning var %s to value %s\n", $1, $3);}
;
And giving the equation rule the type of num in the parser's first section:
%type <num> equation
Giving the same input: var = 3 freezes the program.
I know this is a long question but can anyone please explain what is going on here?
Also, here's the lexer in case you wanna take a look.
It doesn't "freeze the program". The program is just waiting for more input.
In your first grammar, var = 3 is a complete statement which cannot be extended. But in your second grammar, it could be the beginning of var = 3 + 4, for example. So the parser needs to read another token after the 3. If you want input lines to be terminated by a newline, you will need to modify your scanner to send a newline character as a token, and then modify your grammar to expect a newline token at the end of every statement. If you intend to allow statements to be spread out over several lines, you'll need to be aware of that fact while typing input.
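Here is a sketch of that newline-terminated arrangement (the stmt nonterminal is a new name introduced here, and it assumes the scanner is modified to return '\n' as a token rather than discarding it):
line: /* empty */
    | line stmt
    ;
stmt: assignment '\n'
    | print_stmt '\n'
    | exit_stmt '\n'
    | '\n'                /* tolerate blank lines */
    | error '\n'          /* optional: resynchronize at the next newline after a bad statement */
    ;
With the newline terminating each statement, myvar = 3 followed by Enter can be reduced and printed immediately, because the newline tells the parser that no operator follows the 3.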
There are several problems with your grammar, and also with your scanner. (Flex doesn't implement non-greedy repetition, for example.) Please look at the examples in the bison and flex manuals.

Simple Calculator in Bison

I am new to parsing. Following is a code snippet for a parser in Bison :
Parser.y:
%{
#include <stdio.h>
%}
/* declare tokens */
%token NUMBER
%token ADD SUB MUL DIV ABS
%token EOL
%%
calclist: /* nothing */
| calclist exp EOL { printf("= %d\n", $1); }
;
exp: factor
| exp ADD factor { $$ = $1 + $3; }
| exp SUB factor { $$ = $1 - $3; }
;
factor: term
| factor MUL term { $$ = $1 * $3; }
| factor DIV term { $$ = $1 / $3; }
;
term: NUMBER
| ABS term { $$ = $2 >= 0? $2 : - $2; }
;
%%
main(int argc, char **argv)
{ yyparse();
}
yyerror(char *s)
{ fprintf(stderr, "error: %s\n", s);
}
I am struggling to understand how the input string 10 - 3 * 2 + 6 will be parsed and processed while adhering to the operator precedence. Can anyone please describe the parsing mechanism step by step? For example:
Step 1: 10 is read and the token NUMBER is returned
Step 2: etc.
Any help is appreciated.
Thanks.
Bison parsers will happily tell you exactly what they are doing if you ask them to, by using bison's trace facility.
To get the following trace, I used your input file with minimal changes:
I fixed the prototypes with no return value (main and yyerror) and added forward declarations of yylex and yyerror.
I fixed the printf in calclist to print the value of the expression ($2) rather than the calclist itself, which has no value.
I changed the named single-character tokens (ADD, SUB, etc.) to the actual single characters ('+', '-', etc.) in order to simplify the scanner.
I added a trivial lexer (sketched below).
Finally, I enabled tracing by adding yydebug = 1; to the main function and invoking bison with the -t flag.
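The answer doesn't show those last two pieces, so here is a plausible sketch of what they might look like, placed in the third section of the grammar file (after the second %%) so that NUMBER, yylval, yydebug and yyparse are all in scope:
#include <ctype.h>
/* Sketch only: skips blanks, reads decimal integers, and returns every
   other character (including '\n') as a literal token; 0 means end of input. */
int yylex(void) {
    int c;
    do c = getchar(); while (c == ' ' || c == '\t');
    if (isdigit(c)) {
        yylval = c - '0';
        while (isdigit(c = getchar()))
            yylval = yylval * 10 + (c - '0');   /* accumulate multi-digit numbers */
        ungetc(c, stdin);
        return NUMBER;
    }
    return c == EOF ? 0 : c;
}
int main(void) {
    yydebug = 1;    /* runtime switch; the parser must be generated with bison -t */
    return yyparse();
}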
The result, using the expression you provide, is below. To understand the state transitions, you will want to print out the state transition table. Use the -v option to bison.
$ ./trace <<< '10 - 3 * 2 + 6'
Starting parse
Entering state 0
Reducing stack by rule 1 (line 13):
-> $$ = nterm calclist ()
Stack now 0
Entering state 1
Reading a token: Next token is token NUMBER ()
Shifting token NUMBER ()
Entering state 3
Reducing stack by rule 10 (line 22):
$1 = token NUMBER ()
-> $$ = nterm term ()
Stack now 0 1
Entering state 9
Reducing stack by rule 7 (line 19):
$1 = nterm term ()
-> $$ = nterm factor ()
Stack now 0 1
Entering state 8
Reading a token: Next token is token '-' ()
Reducing stack by rule 4 (line 16):
$1 = nterm factor ()
-> $$ = nterm exp ()
Stack now 0 1
Entering state 7
Next token is token '-' ()
Shifting token '-' ()
Entering state 14
Reading a token: Next token is token NUMBER ()
Shifting token NUMBER ()
Entering state 3
Reducing stack by rule 10 (line 22):
$1 = token NUMBER ()
-> $$ = nterm term ()
Stack now 0 1 7 14
Entering state 9
Reducing stack by rule 7 (line 19):
$1 = nterm term ()
-> $$ = nterm factor ()
Stack now 0 1 7 14
Entering state 18
Reading a token: Next token is token '*' ()
Shifting token '*' ()
Entering state 15
Reading a token: Next token is token NUMBER ()
Shifting token NUMBER ()
Entering state 3
Reducing stack by rule 10 (line 22):
$1 = token NUMBER ()
-> $$ = nterm term ()
Stack now 0 1 7 14 18 15
Entering state 19
Reducing stack by rule 8 (line 20):
$1 = nterm factor ()
$2 = token '*' ()
$3 = nterm term ()
-> $$ = nterm factor ()
Stack now 0 1 7 14
Entering state 18
Reading a token: Next token is token '+' ()
Reducing stack by rule 6 (line 18):
$1 = nterm exp ()
$2 = token '-' ()
$3 = nterm factor ()
-> $$ = nterm exp ()
Stack now 0 1
Entering state 7
Next token is token '+' ()
Shifting token '+' ()
Entering state 13
Reading a token: Next token is token NUMBER ()
Shifting token NUMBER ()
Entering state 3
Reducing stack by rule 10 (line 22):
$1 = token NUMBER ()
-> $$ = nterm term ()
Stack now 0 1 7 13
Entering state 9
Reducing stack by rule 7 (line 19):
$1 = nterm term ()
-> $$ = nterm factor ()
Stack now 0 1 7 13
Entering state 17
Reading a token: Next token is token '\n' ()
Reducing stack by rule 5 (line 17):
$1 = nterm exp ()
$2 = token '+' ()
$3 = nterm factor ()
-> $$ = nterm exp ()
Stack now 0 1
Entering state 7
Next token is token '\n' ()
Shifting token '\n' ()
Entering state 12
Reducing stack by rule 3 (line 15):
$1 = nterm calclist ()
$2 = nterm exp ()
$3 = token '\n' ()
= 10
-> $$ = nterm calclist ()
Stack now 0
Entering state 1
Reading a token: Now at end of input.
Shifting token $end ()
Entering state 2
Stack now 0 1 2
Cleanup: popping token $end ()
Cleanup: popping nterm calclist ()

Wrong operator precedence with Happy

Exploring parsing libraries in Haskell I came across this project: haskell-parser-examples. Running some examples I found a problem with the operator precedence. It works fine when using Parsec:
$ echo "3*2+1" | dist/build/lambda-parsec/lambda-parsec
Op Add (Op Mul (Num 3) (Num 2)) (Num 1)
Num 7
But not with Happy/Alex:
$ echo "3*2+1" | dist/build/lambda-happy-alex/lambda-happy-alex
Op Mul (Num 3) (Op Add (Num 2) (Num 1))
Num 9
Even though the operator precedence seems well-defined. Excerpt from the parser:
%left '+' '-'
%left '*' '/'
%%
Exprs : Expr { $1 }
| Exprs Expr { App $1 $2 }
Expr : Exprs { $1 }
| let var '=' Expr in Expr end { App (Abs $2 $6) $4 }
| '\\' var '->' Expr { Abs $2 $4 }
| Expr op Expr { Op (opEnc $2) $1 $3 }
| '(' Expr ')' { $2 }
| int { Num $1 }
Any hint? (I opened a bug report some time ago, but no response).
[Using ghc 7.6.3, alex 3.1.3, happy 1.19.4]
This appears to be a bug in haskell-parser-examples' usage of token precedence. Happy's operator precedence only affects the rules that use the tokens directly. In the parser we want to apply precedence to the Expr rule, but the only applicable rule,
| Expr op Expr { Op (opEnc $2) $1 $3 }
doesn't use tokens itself, instead relying on opEnc to expand them. If opEnc is inlined into Expr,
| Expr '*' Expr { Op Mul $1 $3 }
| Expr '+' Expr { Op Add $1 $3 }
| Expr '-' Expr { Op Sub $1 $3 }
it should work properly.

Shift/Reduce conflicts in a propositional logic parser in Happy

I'm writing a simple propositional logic parser in Happy, based on this BNF definition of the propositional logic grammar. This is my code:
{
module FNC where
import Data.Char
import System.IO
}
-- Parser name, token types and error function name:
--
%name parse Prop
%tokentype { Token }
%error { parseError }
-- Token list:
%token
var { TokenVar $$ } -- alphabetic identifier
or { TokenOr }
and { TokenAnd }
'¬' { TokenNot }
"=>" { TokenImp } -- Implication
"<=>" { TokenDImp } --double implication
'(' { TokenOB } --open bracket
')' { TokenCB } --closing bracket
'.' {TokenEnd}
%left "<=>"
%left "=>"
%left or
%left and
%left '¬'
%left '(' ')'
%%
--Grammar
Prop :: {Sentence}
Prop : Sentence '.' {$1}
Sentence :: {Sentence}
Sentence : AtomSent {Atom $1}
| CompSent {Comp $1}
AtomSent :: {AtomSent}
AtomSent : var { Variable $1 }
CompSent :: {CompSent}
CompSent : '(' Sentence ')' { Bracket $2 }
| Sentence Connective Sentence {Bin $2 $1 $3}
| '¬' Sentence {Not $2}
Connective :: {Connective}
Connective : and {And}
| or {Or}
| "=>" {Imp}
| "<=>" {DImp}
{
--Error function
parseError :: [Token] -> a
parseError _ = error ("parseError: Syntax analysis error.\n")
--Data types to represent the grammar
data Sentence
  = Atom AtomSent
  | Comp CompSent
  deriving Show

data AtomSent = Variable String deriving Show

data CompSent
  = Bin Connective Sentence Sentence
  | Not Sentence
  | Bracket Sentence
  deriving Show

data Connective
  = And
  | Or
  | Imp
  | DImp
  deriving Show

--Data types for the tokens
data Token
  = TokenVar String
  | TokenOr
  | TokenAnd
  | TokenNot
  | TokenImp
  | TokenDImp
  | TokenOB
  | TokenCB
  | TokenEnd
  deriving Show

--Lexer
lexer :: String -> [Token]
lexer [] = []         -- empty input
lexer (c:cs)          -- a character c followed by the rest of the input cs
  | isSpace c  = lexer cs
  | isAlpha c  = lexVar (c:cs)
  | isSymbol c = lexSym (c:cs)
  | c == '('   = TokenOB : lexer cs
  | c == ')'   = TokenCB : lexer cs
  | c == '¬'   = TokenNot : lexer cs  -- solved
  | c == '.'   = [TokenEnd]
  | otherwise  = error "lexer: invalid token"

lexVar cs =
  case span isAlpha cs of
    ("or", rest)  -> TokenOr : lexer rest
    ("and", rest) -> TokenAnd : lexer rest
    (var, rest)   -> TokenVar var : lexer rest

lexSym cs =
  case span isSymbol cs of
    ("=>", rest)  -> TokenImp : lexer rest
    ("<=>", rest) -> TokenDImp : lexer rest
}
Now, I have two problems here:
For some reason I get 4 shift/reduce conflicts. I don't really know where they might be, since I thought the precedence declarations would resolve them (and I think I followed the BNF grammar correctly)...
(This is more of a Haskell problem.) In my lexer function, for some reason I get parsing errors on the line where I say what to do with '¬'; if I remove that line it works. Why could that be? (This issue is solved.)
Any help would be great.
If you use happy with -i it will generate an info file. The file lists all the states that your parser has. It will also list all the possible transitions for each state. You can use this information to determine if the shift/reduce conflict is one you care about.
Information about invoking happy and conflicts:
http://www.haskell.org/happy/doc/html/sec-invoking.html
http://www.haskell.org/happy/doc/html/sec-conflict-tips.html
Below is some of the output of -i. I've removed all but State 17. You'll want to get a copy of this file so that you can properly debug the problem. What you see here is just to help talk about it:
-----------------------------------------------------------------------------
Info file generated by Happy Version 1.18.10 from FNC.y
-----------------------------------------------------------------------------
state 17 contains 4 shift/reduce conflicts.
-----------------------------------------------------------------------------
Grammar
-----------------------------------------------------------------------------
%start_parse -> Prop (0)
Prop -> Sentence '.' (1)
Sentence -> AtomSent (2)
Sentence -> CompSent (3)
AtomSent -> var (4)
CompSent -> '(' Sentence ')' (5)
CompSent -> Sentence Connective Sentence (6)
CompSent -> '¬' Sentence (7)
Connective -> and (8)
Connective -> or (9)
Connective -> "=>" (10)
Connective -> "<=>" (11)
-----------------------------------------------------------------------------
Terminals
-----------------------------------------------------------------------------
var { TokenVar $$ }
or { TokenOr }
and { TokenAnd }
'¬' { TokenNot }
"=>" { TokenImp }
"<=>" { TokenDImp }
'(' { TokenOB }
')' { TokenCB }
'.' { TokenEnd }
-----------------------------------------------------------------------------
Non-terminals
-----------------------------------------------------------------------------
%start_parse rule 0
Prop rule 1
Sentence rules 2, 3
AtomSent rule 4
CompSent rules 5, 6, 7
Connective rules 8, 9, 10, 11
-----------------------------------------------------------------------------
States
-----------------------------------------------------------------------------
State 17
CompSent -> Sentence . Connective Sentence (rule 6)
CompSent -> Sentence Connective Sentence . (rule 6)
or shift, and enter state 12
(reduce using rule 6)
and shift, and enter state 13
(reduce using rule 6)
"=>" shift, and enter state 14
(reduce using rule 6)
"<=>" shift, and enter state 15
(reduce using rule 6)
')' reduce using rule 6
'.' reduce using rule 6
Connective goto state 11
-----------------------------------------------------------------------------
Grammar Totals
-----------------------------------------------------------------------------
Number of rules: 12
Number of terminals: 9
Number of non-terminals: 6
Number of states: 19
That output basically says that it runs into a bit of ambiguity when it's looking at connectives. It turns out the slides you linked mention this (slide 11): "ambiguities are resolved through precedence ¬∧∨⇒⇔ or parentheses".
At this point, I would recommend looking at the shift/reduce conflicts and your desired precedences to see if the parser you have will do the right thing. If so, then you can safely ignore the warnings. If not, you have more work for yourself.
I can answer No. 2:
| c== '¬' == TokenNot : lexer cs --problem here
-- ^^
You have a == there where you should have a =.
