This is the first part of a series of articles covering the parsing technique Parsing Expression Grammars (PEG).
This part introduces a support library and a parser generator for C# 3.0.
The support library consists of the classes PegCharParser
and PegByteParser,
which parse text and binary sources respectively and which support user defined error handling,
direct evaluation during parsing, parse tree generation and abstract syntax tree generation.
Using these base classes results in fast parsers that are easy to understand and to extend,
and that integrate well into the hosting C# program.
The underlying parsing methodology, called Parsing Expression Grammar
[1][2][3], is
relatively new (first described in 2004), but already has many implementations.
Parsing Expression Grammars
(PEG) can easily be implemented in any programming language,
but fit especially well into languages with a rich expression syntax, such as functional languages
and functionally enhanced imperative languages (like C# 3.0),
because PEG concepts have a close relationship to mutually recursive function calls,
short-circuit boolean expressions and in-place defined functions (lambdas).
A recent trend in parsing is the integration of parsers into a host language, so that the
semantic gap between the grammar notation and its implementation in the host language is as small as possible
(Perl 6
and boost::spirit
are forerunners of this trend).
Parsing Expression Grammars are especially well suited to this goal.
Earlier grammar formalisms were not so easy to implement: one grammar rule could result
in dozens of lines of code, and in some parsing strategies the relationship between a grammar rule and its
implementation code was lost entirely.
This is the reason why, until recently, generators were used to build parsers.
This article shows how the C# 3.0 lambda facility can be used to implement a support
library for Parsing Expression Grammars, which makes parsing with the PEG technique easy.
When using this library, a PEG grammar is mapped to a C# grammar class which
inherits basic functionality from a PEG base class, and
each PEG grammar rule is mapped to a method in the C# grammar class.
Parsers implemented with this library should be fast
(provided the C# compiler inlines methods whenever possible)
and easy to understand and to extend.
Error diagnosis, generation of a parse tree and the addition of semantic actions
are also supported by this library.
The most striking property of PEG and especially this library is the small footprint and
the lack of any administrative overhead.
The main emphasis of this article is on explaining the PEG framework and on studying
concrete application samples. One of the sample applications is a PEG parser generator,
which generates C# source code. The PEG parser generator is the only sample parser that was
written by hand; all other sample parsers were generated by the parser generator.
Parsing Expression Grammars are a kind of executable grammar. Executing
a PEG grammar means that grammar patterns matching the input string advance
the current input position accordingly.
Mismatches are handled by going back to
a previous input position, where parsing possibly continues with an alternative.
The following subchapters explain PEGs in detail and introduce the basic PEG constructs,
which have been extended by the author in order to support error diagnosis, direct evaluation and
tree generation.
The following PEG grammar rule
EnclosedDigits: [0-9]+ / '(' EnclosedDigits ')' ;
introduces a so-called nonterminal EnclosedDigits and a right hand side consisting
of two alternatives.
The first alternative ([0-9]+) describes a sequence of digits,
the second ('(' EnclosedDigits ')') something enclosed in parentheses.
Executing EnclosedDigits with the string ((123))+5 as input results in a match
and moves the input position to just before +5.
This sample also shows the potential for recursive definitions,
since EnclosedDigits uses itself as soon as it recognizes an opening parenthesis.
The following table shows the outcome of applying the above grammar to some other input strings.
The | character is an artificial marker which visualizes the input position
before and after the match.
Input | Match Position | Match Result |
|((123))+5 | ((123))|+5 | true |
|123 | 123| | true |
|+5123 | |+5123 | false |
|((1)] | |((1)] | false |
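To make this execution model concrete, the EnclosedDigits rule can be hand-coded as a small recursive method. The following stand-alone sketch (the class and member names are invented for this illustration; this is not the library interface presented later) shows how matching and backtracking work:

```csharp
using System;

// Hand-written sketch of the EnclosedDigits rule. The class and member
// names are invented for this illustration; they are not part of the
// PEG library described later in the article.
class EnclosedDigitsParser
{
    readonly string s;  // input string
    int i;              // current input position

    public EnclosedDigitsParser(string input) { s = input; i = 0; }

    public int Position { get { return i; } }

    // EnclosedDigits: [0-9]+ / '(' EnclosedDigits ')' ;
    public bool EnclosedDigits()
    {
        int i0 = i;                        // remember position for backtracking
        if (Digits()) return true;         // first alternative: [0-9]+
        i = i0;                            // backtrack, try second alternative
        if (Match('(') && EnclosedDigits() && Match(')')) return true;
        i = i0;                            // second alternative failed as well
        return false;
    }

    bool Digits()                          // [0-9]+
    {
        if (i >= s.Length || s[i] < '0' || s[i] > '9') return false;
        while (i < s.Length && s[i] >= '0' && s[i] <= '9') ++i;
        return true;
    }

    bool Match(char c)                     // single character literal
    {
        if (i < s.Length && s[i] == c) { ++i; return true; }
        return false;
    }
}
```

For the input ((123))+5, the method returns true and leaves the position just before +5; for ((1)], both alternatives fail and the position stays at the beginning.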
For people familiar with regular expressions, it may help to think of a parsing expression
grammar as a generalized regular expression which always matches at the beginning of
the input string (like a regexp prefixed with ^). Whereas
a regular expression consists of a single expression, a PEG consists of a set of rules;
each rule can use other rules to help in parsing.
The starting rule matches the whole input and uses the other rules to match subparts of the input.
During parsing, there is always a current input position, and the input string starting
at this position must match against the rest of the PEG grammar.
Like regular expressions, PEG supports the postfix operators * + ?,
the dot . and character sets enclosed in [].
Unique to PEG are the prefix operators & (peek) and ! (not),
which are used to look ahead without consuming input.
Alternatives in a PEG are separated not by | but by /, to indicate
that alternatives are strictly tried in sequential order.
What makes PEG grammars powerful, and at the same time a potential memory hog, is
unlimited backtracking, meaning that the input position can be set back to any of
the previously visited input positions in case an alternative fails.
A good and detailed explanation of PEG can be found in Wikipedia [2].
The following table gives an overview of the PEG constructs
(and some homegrown extensions) which are
supported by the library class described in this article.
The following terminology is used:
Notion | Meaning |
Nonterminal | Name of a grammar rule. In PEG, there must be exactly one
grammar rule having a given nonterminal on the left hand
side. The right hand side of the grammar rule
provides the definition of the rule.
A nonterminal on the right hand side of a grammar rule
must reference an existing grammar rule definition. |
Input string | The string which is parsed. |
Input position | Indicates the next input character to be read. |
Match | A grammar element can match a stretch of the input.
The match starts at the current input position. |
Success / Failure | The possible outcomes of matching a PEG element against the input. |
e, e1, e2 | e, e1 and e2 each stand for an arbitrary PEG expression. |
The extended PEG constructs supported by this library are listed in the following table
(| indicates the input position; italics, as in name, indicate a placeholder):
PEG element | Notation | Meaning |
CodePoint | #32 (decimal)
#x3A0 (hex)
#b111 (binary)
| Match input against the specified unicode character.
PEG | Success | Failure | #x25 | %|1 | |1% |
|
Literal | 'literal' | Match input against quoted string.
PEG | Success | Failure | 'for' | for|tran | |afordable |
Escapes take the same form as in the "C" syntax family. |
CaseInsensitive Literal | 'literal'\i | Same as Literal but compares case insensitively. \i must follow a Literal.
PEG | Success | Failure | 'FOR'\i | FoR|TraN | |affordable |
|
CharacterSet | [chars] | Same meaning as in regular expressions.
Supported are ranges as in [A-Za-z0-9], single characters and escape sequences. |
Any | . | Increments the input position except when at the end of the input.
PEG | Success | Failure | 'this is the end' . | this is the end!| | |this is the end |
|
BITS | BITS<bitNo,e>
BITS<low-high,e> | Interprets bit bitNo or the bit sequence [low-high] of the current
input byte as an integer, which is then matched against the embedded PEG element e.
PEG | Success | Failure | &BITS<7-8,#b11> | |11010101 | |01010101 |
|
Sequence | e1 e2 | Match input against e1 and then, in case of
success, against e2.
PEG | Success | Failure | '#'[0-9] | #5| | |#A |
|
Sequentially executed alternatives | e1 / e2 | Match input against e1 and then, in case of failure, against e2.
PEG | Success | Failure | '<='/'<' | <|5 | |>5 |
|
Greedy Option | e? | Try to match input against e.
PEG | Success | Success | '-'? | -|42 | |+42 |
|
Greedy repeat zero or more occurrences | e* | Match input repeated against e until the match fails.
PEG | Success | Success | [0-9]* | 42|b | |-42 |
|
Greedy repeat one or more occurrences | e+ | Shorthand for e e*
PEG | Success | Failure | [0-9]+ | 42|b | |-42 |
|
Greedy repeat
between minimum
and maximum
occurrences | e{min}
e{min,max}
e{,max}
e{min,}
| Match input at least min times but not more than max times against e.
PEG | Success | Failure | ('.'[0-9]*){2,3} | .12.36.42|.18b | |.42b |
|
Peek | &e | Match e without changing input position.
PEG | Success | Failure | &'42' | |42 | |-42 |
|
Not | !e | Like Peek, but with Success and Failure swapped.
PEG | Success | Failure | !'42' | |-42 | |42 |
|
FATAL | FATAL <"message"> | Prints the message and error location to the error stream and
quits the parsing process afterwards (throws an exception). |
WARNING | WARNING <"message"> | Prints the message and location to the error stream.
Success: <always> |
Mandatory | @e | Same as (e/FATAL<"e expected"> ) |
Tree Node | ^^e | If e is matched, a tree node will be added to the parse tree.
This tree node will hold the starting and ending match positions for e. |
Ast Node | ^e | Like ^^e, but the node will be replaced by its child node if there is only one child. |
Rule | N: e; | N is the nonterminal; e the right hand side which is terminated by a semicolon. |
Rule with id | [id]N: e; | id must be a positive integer, e.g. [1]Int: [0-9]+;
The id will be assigned to the tree/ast node id.
|
Tree building rule | [id]^^N: e; | N will be allocated as a tree node having the id <id>. |
Ast building rule | [id]^N: e; | N will be allocated as a tree node and is replaced by its child if
the node for N has only one child which has no siblings.
|
Parametrized Rule | N<peg1, peg2,..>: e; | N takes the PEG expressions peg1, peg2 ... as parameters. These parameters
can then be used in e. |
Into variable | e:variableName | Set the host language variable (a string, byte[], int, double or PegBegEnd) to the matched input stretch.
The language variable must be declared either in the semantic block of the corresponding rule
or in the semantic block of the grammar (see below). |
Bits Into variable | BITS<bitNo, :variable>
BITS<low-high, :variable> | Interpret bit bitNo or the bit sequence [low-high] as an integer and store it in the host variable.
|
Semantic Function | f_ | Call the host language function f_ defined in a semantic block (see below).
A semantic function has the signature bool f_();.
A return value of true is treated as success, whereas a return value of false
is treated as failure. |
Semantic Block (Grammar level) | BlockName{ //host
//language
//statements
} | The BlockName can be missing, in which case a local class named _Top will be created.
Functions and data of a grammar-level semantic block
can be accessed from any other rule-level semantic block.
Functions in the grammar-level semantic block can be used
as semantic functions at any place in the grammar. |
CREATE Semantic Block (Grammar level) | CREATE{ //host
//language
//statements
} | This kind of block is used in conjunction with customized tree nodes as described at the very end of this table |
Semantic Block (Rule level) | RuleName { //host
//language
//statements
}: e; | Functions and data of a rule-level semantic block
are only available from within the associated rule.
Functions in the rule-associated semantic block can be used
as semantic functions on the right hand side of the rule. |
Using semantic block (defined elsewhere) | RuleName
using NameOfSemanticBlock: e; | The using directive supports reusing the same semantic block
when several rules need the same local semantic block. |
Custom Node Creation
| ^^CREATE<CreaFuncName> N: e;
^CREATE<CreaFuncName> N: e;
| Custom node creation allows creating a user defined node (which must be derived from the library node PegNode).
CreaFuncName must be defined in a CREATE semantic block (see above) and must have the following overall structure
PegNode CreaFuncName(ECreatorPhase phase, PegNode parentOrCreated, int id)
{
    if (phase == ECreatorPhase.eCreate || phase == ECreatorPhase.eCreateAndComplete)
    {
        // create the custom node (and return it)
    }else{
        // complete the previously created node
    }
}
|
PEGs behave in some respects like regular expressions:
the application of a PEG to an input string can be explained
as a pattern matching process which assigns matching parts of
the input string to rules of the grammar (much like groups in regexes)
and which backtracks in case of a mismatch. The most important differences
between PEGs and regexes are that PEGs support recursion
and that PEG patterns are greedy.
Compared to most other traditional language parsing techniques, PEG is surprisingly different.
The most striking differences are:
- Parsing Expression Grammars are deterministic and never ambiguous,
thereby removing a problem of most other parsing techniques.
Ambiguity means that the same input string can be parsed with
different sets of rules of a given grammar and that there is no
policy saying which of the competing rules should be used.
This is in most cases a serious problem, since if it goes
undetected it results in different parse trees for the same input.
The lack of ambiguity is a big plus for PEG, but the fact that the order
of alternatives in a PEG rule matters takes getting used to.
The following PEG rule, for example,
rel_operator: '<' / '<=' / '>' / '>=';
will never succeed in recognizing <=,
because the first alternative will always be chosen first.
The correct rule is:
rel_operator: '<=' / '<' / '>=' / '>';
- Parsing Expression Grammars are scannerless, whereas most other parsers
divide the parsing task into a low level lexical phase called scanning
and a high level phase, the proper parsing. The lexical phase
just parses items like numbers, identifiers and strings and presents
the information as so-called tokens to the proper parser.
This subdivision has its merits and its weak points.
It avoids backtracking in some cases and makes it easy, for example,
to distinguish between a keyword and an identifier. A weak point of most
scanners is the lack of context information inside the scanner, so that
a given input string always results in the same token. This is not
always desirable and causes problems, for example, in C++ for the input string
>>, which can be a right shift operator or the closing of two
template brackets.
- Parsing Expression Grammars can backtrack to an arbitrary location,
back to the very beginning of the input string.
PEG does not require that a file to be parsed is read
completely into memory, but it prohibits
freeing any part of the file which has already been parsed. This
means that a file which will foreseeably
be parsed to the end should be read into memory completely before
parsing starts. Fortunately, memory is no longer a scarce resource. In
a direct evaluation scenario (where semantic actions are executed as soon as
the corresponding syntax element is recognized), backtracking can also cause problems,
since already executed semantic actions
are in most cases not easily undone. Semantic actions should
therefore be placed at points where backtracking
can no longer occur or where backtracking would indicate a fatal
error. Fatal errors in PEG parsing are best handled by throwing an
exception.
- For many common problems, idiomatic solutions exist within the PEG framework, as shown in the following table:
Goal | Idiomatic solution | Sample |
Keep white space scanning from cluttering up the grammar | White space scanning should be done immediately after reading a terminal,
but not in any other place.
|
[3]prod: val S ([*/] S val S)*;
[4]val : [0-9]+ / '(' S expr ')' S;
[3]prod: val ([*/] S val)*;
[4]val: ([0-9]+ / '(' S expr ')') S;
|
Reuse Nonterminal when only a subset is applicable | !oneOfExceptions reusedNonterminal | Java spec
SingleCharacter: InputCharacter but not ' or \
PEG spec
SingleCharacter: !['\\] InputCharacter
|
Test for end of input | !. |
(!./FATAL<"end of input expected"> )
|
Generic rule for quoting situation | GenericQuote <BegQuote,QuoteContent,EndQuote>:
BegQuote QuoteContent EndQuote; |
GenericQuote<'"',(!'"' .)*,'"'>
|
Order alternatives having the same start | longer_choice / shorter_choice |
<= / <
|
Integrate error handling into Grammar | Use error handling alternative. |
'(' expr @')'
'(' expr (')'/FATAL<"')' expected"> );
|
Provide detailed, expressive error messages | Generate better error messages by peeking at next symbol |
[4]object: '{' S members? @'}' S;
[5]members: (str/num)(',' S @(str/num))*;
[4]object: '{' S (&'}'/members) @'}' S;
[5]members: @(str/num)(',' S @(str/num))*;
|
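The "order alternatives having the same start" idiom from the table above can be checked with a tiny hand-rolled matcher. In the following stand-alone sketch, FirstMatch is an invented helper which mimics PEG ordered choice over literals: it tries the alternatives strictly in the given order and returns the first one that matches at the start of the input.

```csharp
using System;

// Demonstration of the "longer alternative first" idiom.
class OrderedChoiceDemo
{
    // Try the literal alternatives strictly in order; return the first match.
    static string FirstMatch(string input, params string[] alternatives)
    {
        foreach (string alt in alternatives)
            if (input.StartsWith(alt, StringComparison.Ordinal)) return alt;
        return null;
    }

    static void Main()
    {
        // Wrong order: '<' shadows '<=', so only '<' is ever recognized.
        Console.WriteLine(FirstMatch("<=5", "<", "<=", ">", ">="));  // prints <
        // Correct order: the longer alternative comes first.
        Console.WriteLine(FirstMatch("<=5", "<=", "<", ">=", ">"));  // prints <=
    }
}
```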
Most modern programming languages are based on grammars which can almost be parsed by the predominant
parsing technique (LALR(1) parsing). The emphasis here is on almost, meaning that there are
often grammar rules which require special handling outside of the grammar framework.
The PEG framework can handle these exceptional cases far better, as will be shown for
the C++ and C# grammars.
The C# Language Specification V3.0,
for example, has the following wording for its cast-expression/parenthesized-expression disambiguation rule:
A sequence of one or more tokens (§2.3.3) enclosed in parentheses is considered
the start of a cast-expression only if at least one of the following are true:
1) The sequence of tokens is correct grammar for a type,
and the token immediately following the closing parentheses is
the token ~, the token !, the token (, an identifier (§2.4.1),
a literal (§2.4.4), or any keyword (§2.4.3) except as and is.
2) The sequence of tokens is correct grammar for a type, but not for an expression.
This can be expressed in PEG with
cast_expression:
('(' S type ')' S &([~!(]/identifier/literal/!('as' B/'is' B) keyword B)
/ !parenthesized_expression '(' S type ')' ) S unary_expression;
B: ![a-zA-Z_0-9];
S: (comment/whitespace/new_line/pp_directive )*;
The C++ standard has the following wording for its expression-statement/declaration disambiguation rule:
An expression-statement ... can be indistinguishable from a declaration ...
In those cases the statement is a declaration.
This can be expressed in PEG with
statement: declaration / expression_statement;
A PEG grammar on its own can only recognize an input string,
which gives you just two results: a boolean value
indicating match success or match failure, and an input position
pointing to the end of the matched part of the string.
But in most cases, the grammar is only a means to give the input string
a structure.
This structure is then used to associate the input string with a meaning
(a semantic) and to execute statements based on this meaning.
The statements executed during parsing are called semantic actions.
The executable nature of PEG grammars makes the integration of semantic actions easy.
Assuming a sequence of grammar symbols e1 e2 and a semantic action es_
which should be performed after recognition of e1,
we simply get the sequence e1 es_ e2,
where es_ is a function of the host language.
From the grammar's point of view, es_ has to conform to the same interface as
e1 and e2 or any other PEG component, which means that es_
is a function returning a bool value as result, where true
means success and false failure.
The semantic function es_ can be defined either locally to the rule
which uses (calls) es_ or in the global environment of the grammar.
A bundling of semantic functions, into-variables, helper data values and helper functions
then forms a semantic block.
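As a concrete illustration of this interface, the following stand-alone sketch embeds a semantic function into a hand-coded parsing sequence. All names (DigitSumParser, add_, and so on) are invented for this example and are not part of the library.

```csharp
using System;

// Sketch of a semantic function embedded into a hand-coded PEG sequence.
class DigitSumParser
{
    readonly string s;
    int i;
    int lastDigit;      // set by Digit(), consumed by the semantic function
    public int Sum;     // semantic result, built up during parsing

    public DigitSumParser(string input) { s = input; i = 0; }

    // Grammar: Sum: Digit add_ ('+' Digit add_)* ;
    // add_ conforms to the same bool interface as every other PEG element
    // and therefore fits seamlessly into the parsing sequence.
    public bool Parse()
    {
        if (!(Digit() && add_())) return false;
        for (;;)
        {
            int i0 = i;                                 // backtrack point
            if (!(Match('+') && Digit() && add_())) { i = i0; break; }
        }
        return true;
    }

    bool add_() { Sum += lastDigit; return true; }      // semantic action, always succeeds

    bool Digit()
    {
        if (i < s.Length && s[i] >= '0' && s[i] <= '9')
        { lastDigit = s[i] - '0'; ++i; return true; }
        return false;
    }

    bool Match(char c)
    {
        if (i < s.Length && s[i] == c) { ++i; return true; }
        return false;
    }
}
```

Parsing "1+2+4" with this sketch succeeds and leaves the accumulated value 7 in Sum.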
Semantic actions face one big problem in PEG grammars, namely backtracking.
In most cases, backtracking should no longer occur after a
semantic function (e.g. the computation of the result of an arithmetic subexpression)
has been executed. The simplest way to guard against backtracking
in such a case is to treat any attempt to backtrack as a fatal error.
The FATAL<msg> construct presented here aborts parsing (by raising an exception).
Embedding semantic actions into the grammar enables direct evaluation of the parsed
construct.
A typical application is the stepwise computation of an arithmetic expression
during the parse phase.
Direct evaluation is fast but very limiting, since it can only use information present at
the current parse point.
In many cases, embedded semantic actions are therefore used to collect
information during parsing for processing
after parsing has completed.
The collected data can have many forms, but the most important one is a tree.
Optimizing parsers and compilers delay semantic actions until the end of the parsing
phase and just create a physical parse tree during parsing
(our PEG framework supports tree generation through the prefixes ^ and ^^).
A tree walking process then checks and optimizes the tree.
Finally, the tree is interpreted at runtime, or it is
used to generate virtual or real machine code.
The most important evaluation options are shown below:
Parsing -> Direct Evaluation
        -> Collecting Information during Parsing
           -> User defined data structure
              -> User defined evaluation
           -> Tree Structure
              -> Interpretation of the generated tree
              -> Generation of VM or machine code
In a PEG implementation, tree generation must cope with backtracking by deleting
tree parts which were built after
the backtrack restore point.
Furthermore, no tree nodes should be created while a Peek
or Not production is active.
In this implementation, this is handled by tree-generation-aware
code in the implementations of the And, Peek, Not and ForRepeat productions.
The following sample grammar is also taken from the Wikipedia article on PEG [2]
(but with a slightly different notation).
<<Grammar Name="WikiSample">>
Expr: S Sum;
Sum: Product ([+-] S Product)*;
Product: Value ([*/] S Value)*;
Value: [0-9]+ ('.' [0-9]+)? S / '(' S Sum ')' S;
S: [ \n\r\t\v]*;
<</Grammar>>
During the application of a grammar to an input string, each grammar rule is called from some parent grammar rule
and matches a subpart of the input stretch matched by the parent rule. This results in a parse tree.
The grammar rule Expr would associate the arithmetic expression 2.5 * (3 + 5/7)
with the following parse tree:
Expr<
S<' '>
Sum<
Product<
Value<'2.5' S<' '>>  [*]
'*'
S<' '>
Value<
'('
S<''>
Sum<
Product<Value<'3' S<' '>>
'+'
S<' '>
Product< Value<'5' S<''>> '/' S<''> Value<'7' S<''>> >
>
>
>
>
>
The above parse tree is not a physical tree but an implicit tree which only exists during the parse process.
The natural implementation of a PEG parser associates each grammar rule with a method (function).
The right hand side of the grammar rule corresponds to the function body, and each
nonterminal on the right hand side of the rule is mapped to a function call.
When a rule function is called, it tries to match the input string at the current input
position against the right hand side of the rule. If it succeeds, it advances the input
position accordingly and returns true; otherwise the input position is unchanged and the result is false.
The above parse tree can therefore be regarded as a stack trace.
The location marked with [*] in the above parse tree corresponds to the
function stack Value<=Product<=Sum<=Expr, with the function Value
at the top of the stack and the function Expr at the bottom of the stack.
The parsing process as described above just matches an input string or it
fails to match. But it is not difficult
to add semantic actions to this parse process by inserting helper
functions at appropriate places.
The PEG parser for arithmetic expressions could, for example, compute the result of the
expression during parsing.
Such direct evaluation does not significantly slow down the parsing process.
Using into-variables and semantic blocks as listed above, one gets
the following enhanced PEG grammar for arithmetic expressions, which directly
evaluates the result of the expression
and prints it to the console.
<<Grammar Name="calc0_direct">>
Top{
double result;
bool print_(){Console.WriteLine("{0}",result);return true;}
}
Expr: S Sum (!. print_ / FATAL<"following code not recognized">);
Sum
{
double v;
bool save_() {v= result;result=0; return true;}
bool add_() {v+= result;result=0;return true;}
bool sub_() {v-= result;result=0;return true;}
bool store_() {result= v; return true;}
} :
Product save_
('+' S Product add_
/'-' S Product sub_)* store_ ;
Product
{
double v;
bool save_() {v= result;result=0; return true;}
bool mul_() {v*= result;result=0; return true;}
bool div_() {v/= result;result=0;return true;}
bool store_() {result= v;return true;}
} :
Value save_
('*' S Value mul_
/'/' S Value div_)* store_ ;
Value: Number S / '(' S Sum ')' S ;
Number
{
string sNumber;
bool store_(){double.TryParse(sNumber,out result);return true;}
}
: ([0-9]+ ('.' [0-9]+)?):sNumber store_ ;
S: [ \n\r\t\v]* ;
<</Grammar>>
In many cases, on-the-fly evaluation during parsing is not sufficient, and one needs a
physical parse tree or an abstract syntax tree (abbreviated AST).
An AST is a parse tree shrunk to the essential nodes, thereby saving space and
providing a view better suited for evaluation.
Such physical trees typically need at least 10 times the memory space of the input
string and reduce the parsing speed by a factor of 3 to 10.
The following PEG grammar uses the symbol ^ to indicate an abstract syntax node
and the symbol ^^ to indicate a parse tree node.
The grammar presented below is furthermore enhanced with the error handling item FATAL<errMsg>.
FATAL leaves the parsing process immediately with the result fail, but with the input position
set to the place where the fatal error occurred.
<<Grammar Name="WikiSampleTree">>
[1] ^^Expr: S Sum (!./FATAL<"end of input expected">) ;
[2] ^Sum: Product (^[+-] S Product)* ;
[3] ^Product: Value (^[*/] S Value)* ;
[4] Value: Number S / '(' S Sum ')' S /
FATAL<"number or ( <Sum> ) expected">;
[5] ^^Number: [0-9]+ ('.' [0-9]+)? ;
[6] S: [ \n\r\t\v]* ;
<</Grammar>>
With this grammar, the arithmetic expression 2.5 * (3 + 5/7)
results in the following physical tree:
Expr<
Product<
Number<'2.5'>
<'*'>
Sum< Number<'3'> <'+'> Product< Number<'5'> <'/'> Number<'7'> > >
>
>
With a physical parse tree, many more options for evaluation are possible; e.g. one can generate code for a virtual
machine after first optimizing the tree.
In this chapter, I first show how to implement all the PEG constructs one by one.
This will be expressed in pseudo code. Then I will try to find the best interface for
these basic PEG functions in C#1.0 and C#3.0.
The natural representation of a PEG is a top down recursive parser with
backtracking.
PEG rules are implemented as functions/methods which call each other
as needed and return true in case of a match and false in case of a mismatch.
Backtracking is implemented by saving the input position before
calling a parsing function and restoring the input position to the saved one
in case the parsing function returns false.
Backtracking can be limited to the PEG sequence construct
and the e{min,max} repetitions if, in all other cases, the input position is
only moved forward after a successful match.
In the following pseudo code we use strings and integer variables,
short-circuit conditional expressions
(using && for AND and || for OR) and exceptions.
s stands for the input string and i refers to the current input position.
bTreeBuild is an instance variable which inhibits tree build operations when set to false.
PEG construct | sample | pseudo code to implement sample |
CodePoint #<dec> #x<hex> #b<bin>
| #32 (decimal) #x3A0 (hex) #b111 (binary) |
if i<length(s) && s[i]=='\u03A0'
{i+= 1; return true;}
else {return false;}
|
Literal 'literal' | 'ab' |
if i+2<=length(s) && s[i]=='a' && s[i+1]=='b'
{ i+= 2; return true; }
else {return false; }
|
CaseInsensitive Literal | 'ab'\i |
if i+2<=length(s) && toupper(s[i])=='A' && toupper(s[i+1])=='B'
{ i+= 2; return true; }
else {return false; } |
Charset [charset] | [ab] |
if i<length(s) && (s[i]=='a' || s[i]=='b')
{i+= 1; return true;}
else {return false;}
|
CharacterSet | [a-z] |
if i<length(s) && s[i]>='a' && s[i]<='z'
{i+= 1; return true;}
else {return false;} |
Any | . |
if i<length(s)
{i+=1;return true;}
else {return false;}
|
BITS | BITS<7-8,#b11> |
if i<length(s) && ExtractBitsAsInt(s[i],7,8)==3
{i+=1;return true;}
else {return false;} |
Sequence
| e1 e2 |
int i0= i;
TreeState t=SaveTreeState();
if e1() && e2()
{return true;}
else {i=i0; RestoreTreeState(t);return false;} |
Alternative | e1 / e2 |
return e1() || e2(); |
Greedy Option | e? |
return e() || true; |
Greedy repeat 0+
| e* |
while e() {} return true; |
Greedy repeat 1+ | e+ |
if !e() { return false; }
while e() {}
return true; |
Greedy repeat >=low<=high | e{low,high} |
int c,i0=i;
TreeState t=SaveTreeState();
for(c=0;c<high;++c){if !e(){ break;} }
if c<low { i=i0; RestoreTreeState(t); return false;}
else {return true;} |
Peek | &e |
int i0=i;
bool bOld= bTreeBuild; bTreeBuild= false;
bool b= e();
i=i0; bTreeBuild= bOld;
return b; |
Not | !e |
int i0=i;
bool bOld= bTreeBuild; bTreeBuild= false;
bool b= e();
i=i0; bTreeBuild= bOld;
return !b; |
FATAL | FATAL< message > |
PrintMsg(message);
throw PegException(); |
WARNING | WARNING< message > |
PrintMsg(message); return true; |
Into | e :variableName |
int i0=i;
bool b= e();
if b {variableName= s.substring(i0,i-i0);}
return b; |
Bits Into variable | BITS<3-5,:v> |
int i0=i;
if i<length(s) {v= ExtractBitsAsInt(s[i],3,5);++i;return true;}
else {return false;} |
Build Tree Node | ^^e |
TreeState t=SaveTreeState();
AddTreeNode(...)
bool b= e();
if !b {RestoreTreeState(t);}
return b; |
Build Ast Node | ^e |
TreeState t=SaveTreeState();
AddTreeNode(..)
bool b= e();
if !b {RestoreTreeState(t);}
else {TryReplaceByChildWithoutSibling();}
return b; |
In C#1.0, we can map the PEG operators CodePoint, Literal, Charset, Any, FATAL
and WARNING to helper functions in a base class. But the other PEG constructs,
like Sequence, Repeat, Peek, Not, Into and Tree building, cannot easily be outsourced
to a library module.
The Grammar for integer sums
<<Grammar Name="IntSum">>
Sum: S [0-9]+ ([+-] S [0-9]+)* S ;
S: [ \n\r\t\v]* ;
<</Grammar>>
results in the following C#1.0 implementation
(PegCharParser is a base class, not shown here, with the field pos_ and the
methods In and OneOfChars):
class InSum_C1 : PegCharParser
{
public InSum_C1(string s) : base(s) { }
public bool Sum()
{
S();
if( !In('0', '9') ){return false;}
while (In('0', '9')) ;
for(;;){
int pos= pos_;
if( S() && OneOfChars('+','-') && S() ){
if( !In('0', '9') ){pos_=pos; break;}
while (In('0', '9')) ;
}else{
pos_= pos; break;
}
}
S();
return true;
}
bool S()
{
while (OneOfChars(' ', '\n', '\r', '\t', '\v')) ;
return true;
}
}
To execute the grammar, we just call the method Sum on an object of the above class.
But we cannot be happy and satisfied with this solution.
Compared with the original grammar rule, the method Sum in
the above class InSum_C1 is large and, in its use of loops and helper variables,
quite confusing. But it is perhaps the best of what is possible in C#1.0.
Many traditional parser generators produce even worse code.
PEG operators like Sequence, Repeat, Into, Tree Build, Peek and Not
can be regarded as operators or functions which take a function as parameter.
This maps in C# to a method with a delegate parameter.
The PEG Sequence operator can e.g. be implemented as a function
with the following interface
public bool And(Matcher pegSequence);
where Matcher is the following delegate:
public delegate bool Matcher();
In older C# versions, passing a function as a parameter required several lines of code, but this changed with C# 3.0.
C# 3.0 supports lambdas: anonymous functions with very low syntactical overhead.
Lambdas enable a functional implementation of PEG in C#.
The PEG sequence e1 e2 can now be mapped to the C# term And(()=>e1() && e2()).
The expression ()=>e1() && e2()
looks like a normal expression,
but it is in fact a full-fledged function
with zero parameters (hence the ()=>
) and the function body {return e1() && e2();}.
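As a tiny self-contained illustration of this mapping, the sketch below passes a lambda where a Matcher is expected. Note that the And shown here merely invokes the sequence; the library's And additionally restores the input position when the sequence fails.

```csharp
using System;

public delegate bool Matcher();

public static class LambdaDemo
{
    // Simplified Sequence operator: just invokes the whole sequence.
    // (The real library's And also restores the input position on failure.)
    public static bool And(Matcher pegSequence) { return pegSequence(); }

    // ()=>e1() && e2() is a full-fledged function with zero parameters
    // and the body {return e1() && e2();} -- it converts to Matcher.
    public static bool SequenceOf(Func<bool> e1, Func<bool> e2)
    {
        return And(() => e1() && e2());
    }
}
```

Because && short-circuits, e2 is never called when e1 fails, exactly matching the PEG sequence semantics.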
With this facility, the Grammar for integer sums
<<Grammar Name="IntSum">>
Sum: S [0-9]+ ([+-] S [0-9]+)* S ;
S: [ \n\r\t\v]* ;
<</Grammar>>
results in the following C# 3.0 implementation
(PegCharParser
is a base class, not shown here, with the
methods And,
PlusRepeat,
OptRepeat,
In
and OneOfChars):
class IntSum_C3 : PegCharParser
{
    public IntSum_C3(string s) : base(s) { }
    public bool Sum()
    {
        return
            And(() =>
                S()
                && PlusRepeat(() => In('0', '9'))
                && OptRepeat(() => S() && OneOfChars('+', '-') && S() && PlusRepeat(() => In('0', '9')))
                && S());
    }
    public bool S()
    {
        return OptRepeat(() => OneOfChars(' ', '\n', '\r', '\t', '\v'));
    }
}
Compared to the C#1.0 implementation this parser class is a huge improvement.
We have eliminated all loops and helper variables. The correctness (accordance with the grammar rule)
is also much easier to check. The methods And
, PlusRepeat
, OptRepeat
, In
and OneOfChars
are all implemented in both the PegCharParser
and PegByteParser
base classes.
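The article does not show how these combinators are implemented. The following minimal sketch (an illustration only, not the shipped library) suggests one way And, OptRepeat, PlusRepeat, In and OneOfChars could work; the key point is that each combinator restores pos_ when its argument fails, which is exactly the backtracking that the C# 1.0 version spelled out by hand.

```csharp
using System;

public delegate bool Matcher();

// Illustrative combinator base class; the names (pos_, And, OptRepeat, ...)
// follow the article's text, but this is a sketch, not the delivered library.
public class MiniPegParser
{
    protected string src_;
    protected int pos_;
    public MiniPegParser(string src) { src_ = src; pos_ = 0; }

    // Sequence: restore the position if the sequence fails (backtracking).
    public bool And(Matcher pegSequence)
    {
        int save = pos_;
        if (pegSequence()) return true;
        pos_ = save;
        return false;
    }

    // Greedy e*: always succeeds; a partially matched iteration is undone.
    public bool OptRepeat(Matcher m)
    {
        for (;;)
        {
            int save = pos_;
            if (!m()) { pos_ = save; return true; }
        }
    }

    // Greedy e+: like e* but must match at least once.
    public bool PlusRepeat(Matcher m)
    {
        int save = pos_;
        if (!m()) { pos_ = save; return false; }
        return OptRepeat(m);
    }

    // Character range [c0-c1]; advances one character on success.
    public bool In(char c0, char c1)
    {
        if (pos_ < src_.Length && src_[pos_] >= c0 && src_[pos_] <= c1) { ++pos_; return true; }
        return false;
    }

    public bool OneOfChars(params char[] cs)
    {
        if (pos_ < src_.Length && Array.IndexOf(cs, src_[pos_]) >= 0) { ++pos_; return true; }
        return false;
    }
}

// The IntSum grammar from above, built on the sketch.
public class MiniIntSum : MiniPegParser
{
    public MiniIntSum(string s) : base(s) { }
    public bool Sum()
    {
        return And(() =>
            S()
            && PlusRepeat(() => In('0', '9'))
            && OptRepeat(() => S() && OneOfChars('+', '-') && S() && PlusRepeat(() => In('0', '9')))
            && S());
    }
    public bool S() { return OptRepeat(() => OneOfChars(' ', '\n', '\r', '\t', '\v')); }
}
```

The per-iteration save/restore in OptRepeat mirrors the `pos_= pos; break;` lines of the C# 1.0 version.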
The following table shows most of the PEG methods available in the base library delivered with this article.
PEG element | C# methods | sample usage | |
CodePoint | Char(char) | Char('\u0023') |
Literal | Char(char c0,char c1,...) Char(string s) | Char("ab") |
CaseInsensitive Literal | IChar(char c0,char c1,...) IChar(string s) | IChar("ab") |
Char Set [<c0c1...>] | OneOf(char c0,char c1,...) OneOf(string s) | OneOf("ab") |
Char Set [<c0-c1...>] | In(char c0,char c1,...) In(string s) | In('A','Z','a','z','0','9') |
Any . | Any() | Any() |
BITS | Bits(char cLow,char cHigh,byte toMatch) | Bits(1,5,31) |
Sequence e1 e2 ... | And(MatcherDelegate m) | And(() => S() && top_element()) |
Alternative e1 / e2 / ... | e1 || e2 || ... | @object() || array() |
Greedy Option e? | Option(MatcherDelegate m) | Option(() => Char('-')) |
Greedy repeat 0+ e* | OptRepeat(MatcherDelegate m) | OptRepeat(() => OneOf(' ', '\t', '\r', '\n')) |
Greedy repeat 1+ e+ | PlusRepeat(MatcherDelegate m) | PlusRepeat(() => In('0', '9')) |
Greedy repeat n0..n1 e{low,high} | ForRepeat(int low,int high,MatcherDelegate m) | ForRepeat(4, 4, () => In('0', '9', 'A', 'F', 'a', 'f')) |
Peek &e | Peek(MatcherDelegate m) | Peek(() => Char('}')) |
Not !e | Not(MatcherDelegate m) | Not(()=>OneOf('"','\\')) |
FATAL FATAL<message> | Fatal("<message>") | Fatal("<<'}'>> expected") |
WARNING WARNING<message> | Warning("<message>") | Warning("non-json stuff before end of file") |
Into e :variableName | Into(out string varName,MatcherDelegate m)
Into(out int varName,MatcherDelegate m)
Into(out PegBegEnd varName,MatcherDelegate m) | Into(out top.n, () => Any()) |
Bits Into BITS<3-5,:v>variableName | BitsInto(int lowBitNo, int highBitNo,out int varName) | BitsInto(1, 5,out top.tag) |
Build Tree Node [id] ^^RuleName: | TreeNT(int nRuleId, PegBaseParser.Matcher toMatch); | TreeNT((int)Ejson_tree.json_text,()=>...) |
Build Ast Node [id] ^RuleName: | TreeAST(int id,PegBaseParser.MatcherDelegate m)
| TreeAST((int)EC_KernighanRitchie2.external_declaration,()=>...) |
Parametrized Rule RuleName<a,b,...> | RuleName(MatcherDelegate a, MatcherDelegate b,...)
| binary(()=> relational_expression(),
()=>TreeChars(
()=>Char('=','=') || Char('!','=')
)) |
The following examples show uses of
the PegGrammar
class for all supported use cases:
- Recognition only: The result is just match or no match; in the latter case an error message is issued.
- Build of a physical parse tree: The result is a physical tree.
- Direct evaluation: Semantic actions executed during parsing.
- Build tree, interpret tree: The generated Tree is traversed and evaluated.
JSON (JavaScript Object Notation) [5][6] is an exchange format suited for serializing/deserializing program data.
Compared to XML it is featherweight and therefore a good testing candidate for parsing techniques.
The JSON Checker presented here gives an error message and error location in case the file does not conform
to the JSON grammar. The following PEG grammar is the basis of json_check
.
<<Grammar Name="json_check"
encoding_class="unicode" encoding_detection="FirstCharIsAscii"
reference="www.ietf.org/rfc/rfc4627.txt">>
[1]json_text: S top_element expect_file_end ;
[2]expect_file_end: !./ WARNING<"non-json stuff before end of file">;
[3]top_element: object / array /
FATAL<"json file must start with '{' or '['"> ;
[4]object: '{' S (&'}'/members) @'}' S ;
[5]members: pair S (',' S pair S)* ;
[6]pair: @string S @':' S value ;
[7]array: '[' S (&']'/elements) @']' S ;
[8]elements: value S (','S value S)* ;
[9]value: @(string / number / object /
array / 'true' / 'false' / 'null') ;
[10]string: '"' char* @'"' ;
[11]char: escape / !(["\\]/control_chars)unicode_char ;
[12]escape: '\\' ( ["\\/bfnrt] /
'u' ([0-9A-Fa-f]{4}/FATAL<"4 hex digits expected">)/
FATAL<"illegal escape">);
[13]number: '-'? int frac? exp? ;
[14]int: '0'/ [1-9][0-9]* ;
[15]frac: '.' [0-9]+ ;
[16]exp: [eE] [-+] [0-9]+ ;
[17]control_chars: [#x0-#x1F] ;
[18]unicode_char: [#x0-#xFFFF] ;
[19]S: [ \t\r\n]* ;
<</Grammar>>
The translation of the above grammar to C# 3.0 is straightforward and results in the
following code (only the translations of the first 4 rules are reproduced).
public bool json_text()
{return And(()=> S() && top_element() && expect_file_end() );}
public bool expect_file_end()
{ return
Not(()=> Any() )
|| Warning("non-json stuff before end of file");
}
public bool top_element()
{ return
@object()
|| array()
|| Fatal("json file must start with '{' or '['");
}
public bool @object()
{
return And(()=>
Char('{')
&& S()
&& ( Peek(()=> Char('}') ) || members())
&& ( Char('}') || Fatal("<<'}'>> expected"))
&& S() );
}
With a few changes of the JSON checker grammar we get a grammar which
generates a physical tree for a JSON file. In order to have unique nodes for
the JSON values true
, false
, null
we add corresponding rules. Furthermore, we add a rule which matches the
content of a string (the string without the enclosing double quotes). This gives
us the following grammar:
<<Grammar Name="json_tree"
encoding_class="unicode" encoding_detection="FirstCharIsAscii"
reference="www.ietf.org/rfc/rfc4627.txt">>
[1]^^json_text: (object / array) ;
[2]^^object: S '{' S (&'}'/members) S @'}' S ;
[3]members: pair S (',' S @pair S)* ;
[4]^^pair: @string S ':' S value ;
[5]^^array: S '[' S (&']'/elements) S @']' S ;
[6]elements: value S (','S @value S)* ;
[7]value: @(string / number / object /
array / true / false / null) ;
[8]string: '"' string_content '"' ;
[9]^^string_content: ( '\\'
( 'u'([0-9A-Fa-f]{4}/FATAL<"4 hex digits expected">)
/ ["\\/bfnrt]/FATAL<"illegal escape">
)
/ [#x20-#x21#x23-#xFFFF]
)* ;
[10]^^number: '-'? ('0'/[1-9][0-9]*) ('.' [0-9]+)? ([eE] [-+] [0-9]+)?;
[11]S: [ \t\r\n]* ;
[12]^^true: 'true' ;
[13]^^false: 'false' ;
[14]^^null: 'null' ;
<</Grammar>>
The following table shows on the left hand side a JSON input file and
on the right hand side the tree generated by the TreePrint
helper class
of our parser library.
JSON Sample File | TreePrint Output |
{
"ImageDescription": {
"Width": 800,
"Height": 600,
"Title": "View from 15th Floor",
"IDs": [116, 943, 234, 38793]
}
}
|
json_text<
object<
pair<
'ImageDescription'
object<
pair<'Width' '800'>
pair<'Height' '600'>
pair<'Title' 'View from 15th Floor'>
pair<'IDs' array<'116' '943' '234' '38793'>>
>
>
>
>
|
BER (Basic Encoding Rules) is the most commonly used format
for encoding ASN.1 data. Like XML, ASN.1 serves the purpose of representing
hierarchical data, but unlike XML, ASN.1 is traditionally encoded in compact binary formats,
and BER is one of these formats (albeit the least compact one). The Internet
standards SNMP and LDAP are examples of ASN.1 protocols using BER as encoding.
The following PEG grammar for reading a BER file into a tree representation
uses semantic blocks to store information necessary for further parsing.
This kind of dynamic parsing, which uses data read during the parsing process to
decode data further downstream, is typical when parsing binary formats.
The grammar rules for BER [4] as shown below express the following facts:
- BER nodes consist of the triple Tag Length Value (abbreviated as TLV)
where Value is either a primitive value or a list of TLV nodes.
- The Tag identifies the element (like the start tag in XML).
- The Tag contains a flag whether the element is primitive or constructed.
Constructed means that there are children.
- The Length is either the length of the Value in bytes or the special pattern 0x80
(only allowed for elements with children), in which case the sequence of children
ends with two zero bytes (0x0000).
- The Value is either a primitive value or, if the constructed flag is set,
a sequence of Tag Length Value triples. The sequence of TLV triples ends
when the length given in the Length part of the TLV triple is used up or,
in the case where the length is given as 0x80, when
the end marker 0x0000 has been reached.
<<Grammar Name="BER" encoding_class="binary"
reference="http://en.wikipedia.org/wiki/Basic_encoding_rules"
comment="Tree generating BER decoder (minimal version)">>
{
int tag,length,n,@byte;
bool init_() {tag=0;length=0; return true;}
bool add_Tag_() {tag*=128;tag+=n; return true;}
bool addLength_(){length*=256;length+=@byte;return true;}
}
[1] ProtocolDataUnit: TLV;
[2] ^^TLV: init_
( &BITS<6,#1> Tag ( #x80 CompositeDelimValue #0#0 / Length CompositeValue )
/ Tag Length PrimitiveValue
);
[3] Tag: OneOctetTag / MultiOctetTag / FATAL<"illegal TAG">;
[4] ^^OneOctetTag: !BITS<1-5,#b11111> BITS<1-5,.,:tag>;
[5] ^^MultiOctetTag: . (&BITS<8,#1> BITS<1-7,.,:n> add_Tag_)* BITS<1-7,.,:n> add_Tag_;
[6] Length : OneOctetLength / MultiOctetLength
/ FATAL<"illegal LENGTH">;
[7] ^^OneOctetLength: &BITS<8,#0> BITS<1-7,.,:length>;
[8]^^MultiOctetLength: &BITS<8,#1> BITS<1-7,.,:n> ( .:byte addLength_){:n};
[9]^^PrimitiveValue: .{:length} / FATAL<"BER input ends before VALUE ends">;
[10]^^CompositeDelimValue: (!(#0#0) TLV)*;
[11]^^CompositeValue
{
int len;
PegBegEnd begEnd;
bool save_() {len= length;return true;}
bool at_end_(){return len<=0;}
bool decr_() {len-= begEnd.posEnd_-begEnd.posBeg_;return len>=0;}
}
: save_
(!at_end_ TLV:begEnd
(decr_/FATAL<"illegal length">))*;
<</Grammar>>
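The semantic functions add_Tag_ and addLength_ in the top level semantic block accumulate multi-octet values one octet at a time: tags as base-128 digits (seven payload bits per octet), lengths as base-256 digits. Extracted into plain C# for illustration (these helpers are not part of the library):

```csharp
public static class BerAccumulate
{
    // addLength_: length = length*256 + octet, applied once per length octet.
    public static int DecodeLength(byte[] lengthOctets)
    {
        int length = 0;
        foreach (byte b in lengthOctets) length = length * 256 + b;
        return length;
    }

    // add_Tag_: tag = tag*128 + n, where n holds the low 7 bits of each octet.
    public static int DecodeTag(byte[] low7BitGroups)
    {
        int tag = 0;
        foreach (byte n in low7BitGroups) tag = tag * 128 + n;
        return tag;
    }
}
```

For example, the two length octets 0x01 0x00 yield 1*256 + 0 = 256, and the 7-bit groups 0x01 0x05 yield the tag 1*128 + 5 = 133.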
This calculator supports the basic arithmetic operations + - * /,
built-in functions taking one argument ('sin', 'cos', ...) and assignments to variables.
The calculator expects line-separated expressions and assignments. It works
as a two-step interpreter which first builds a tree and then evaluates it.
The PEG grammar for this calculator can be translated to a PEG parser by the
parser generator coming with the PEG Grammar Explorer. The evaluator must be
written by hand; it works by walking the tree and evaluating the results as it visits
the nodes.
The grammar for the calculator is:
<<Grammar Name="calc0_tree">>
[1]^^Calc: ((^'print' / Assign / Sum)
([\r\n]/!./FATAL<"end of line expected">)
[ \r\n\t\v]* )+
(!./FATAL<"not recognized">);
[2]^Assign:S ident S '=' S Sum;
[3]^Sum: Prod (^[+-] S @Prod)*;
[4]^Prod: Value (^[*/] S @Value)*;
[5] Value: (Number/'('S Sum @')'S/Call/ident) S;
[6]^Call: ident S '(' @Sum @')' S;
[7]^Number:[0-9]+ ('.' [0-9]+)?([eE][+-][0-9]+)?;
[8]^ident: [A-Za-z_][A-Za-z_0-9]*;
[9] S: [ \t\v]*;
<</Grammar>>
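The hand-written evaluator walks the tree produced by the generated parser. The sketch below shows the pattern for the Sum rule; the Node type here (rule name, matched text, children) is hypothetical and only stands in for the parse tree nodes of the real library.

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

// Hypothetical tree node: rule name, matched source text (for leaves), children.
public class Node
{
    public string Name;
    public string Text;
    public List<Node> Children = new List<Node>();
}

public static class TreeEval
{
    // Evaluate a Sum node of the shape: Prod (('+'|'-') Prod)*
    public static double Eval(Node n)
    {
        switch (n.Name)
        {
            case "Number":
                return double.Parse(n.Text, CultureInfo.InvariantCulture);
            case "Sum":
                double v = Eval(n.Children[0]);
                for (int i = 1; i < n.Children.Count; i += 2)
                {
                    double rhs = Eval(n.Children[i + 1]);
                    v = n.Children[i].Text == "+" ? v + rhs : v - rhs;
                }
                return v;
            default:
                throw new ArgumentException("unknown node " + n.Name);
        }
    }

    // Convenience constructors for the demo.
    public static Node Num(string t) { return new Node { Name = "Number", Text = t }; }
    public static Node Op(string t) { return new Node { Name = "Op", Text = t }; }
    public static Node Sum(params Node[] kids)
    {
        var n = new Node { Name = "Sum" };
        n.Children.AddRange(kids);
        return n;
    }
}
```

The Prod rule is handled the same way with * and /; variables need only a Dictionary<string, double> updated by the Assign nodes.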
The library classes PegCharParser
and PegByteParser
are designed for manual
construction of PEG parsers. But in any case it
is highly recommended to first write the grammar on paper before implementing it.
I wrote a little parser generator (using PegCharParser
) which translates such a 'paper' PEG grammar
to a C# program. The current version of the PEG parser generator
generates only C# parsers. It uses optimizations for huge character sets and for big sets of literal alternatives.
Future versions will generate source code for C/C++ and other languages
and will furthermore support debugging, tracing and direct execution of the grammar without the need to translate it to
a host language. But even the current version of the PEG parser generator is quite
helpful.
All the samples presented in the chapter
Expression Grammar Examples
were generated with it. The PEG Parser Generator is an example of a PEG parser which generates
a syntax tree. It takes a PEG grammar as input, validates the generated syntax tree
and then writes a set of C# code files, which implement the parser described by the PEG grammar.
The PEG Parser Generator coming with this article expects a set of grammar rules written as described in the
chapter Parsing Expression Grammars Basics.
These rules must be preceded by a header and terminated by a trailer as described in the following PEG Grammar:
<<Grammar Name="PegGrammarParser">>
peg_module: peg_head peg_specification peg_tail;
peg_head: S '<<Grammar'\i B S attribute+ '>>';
attribute: attribute_key S '=' S attribute_value S;
attribute_key: ident;
attribute_value: "attribute value in single or double quotes";
peg_specification: toplevel_semantic_blocks peg_rules;
toplevel_semantic_blocks:semantic_block*;
semantic_block: named_semantic_block / anonymous_semantic_block;
named_semantic_block: sem_block_name S anonymous_semantic_block;
anonymous_semantic_block:'{' semantic_block_content '}' S;
peg_rules: S peg_rule+;
peg_rule: lhs_of_rule ':'S rhs_of_rule ';' S;
lhs_of_rule: rule_id? tree_or_ast? create_spec?
rule_name_and_params
(semantic_block/using_sem_block)?;
rule_id: (![A-Za-z_0-9^] .)* [0-9]+ (![A-Za-z_0-9^] .)*;
tree_or_ast: '^^'/'^';
create_spec: 'CREATE' S '<' create_method S '>' S;
create_method: ident;
ident: [A-Za-z_][A-Za-z_0-9]*;
rhs_of_rule: "right hand side of rule as described in Parsing Expression Grammars Basics";
semantic_block_content: "semantic block content as described in Parsing Expression Grammars Basics";
peg_tail: '<</GRAMMAR'\i S '>>';
<</Grammar>>
The header of the grammar contains HTML/XML-style attributes which are used to determine
the name of the generated C# file and the input file properties. The following attributes
are used by the C# code generator:
Attribute Key | Optionality | Attribute Value |
Name | Mandatory | Name for the generated C# grammar file and namespace |
encoding_class | Optional | Encoding of the input file. Must be one of
binary, unicode, utf8 or ascii.
Default is ascii |
encoding_detection | Optional | Must only be present if the encoding_class is set to unicode .
In this case one of the values FirstCharIsAscii or
BOM is expected. |
All further attributes are treated as comments.
The attribute reference
in the following sample header
<<Grammar Name="json_check"
encoding_class="unicode" encoding_detection="FirstCharIsAscii"
reference="www.ietf.org/rfc/rfc4627.txt">>
is treated as comment.
Semantic blocks are translated to local classes. The code inside
semantic blocks must be C# source text as expected in a class body, except that access keywords
can be left out. The parser generator prepends an internal
access keyword when necessary. Top level semantic blocks are handled differently than local semantic blocks.
A top level semantic block is created in the grammar's constructor, whereas a local semantic
block is created each time the associated rule method is called. There is no need to define
a constructor in a local semantic block, since the parser generator creates a constructor
with one parameter, a reference to the grammar class.
The following sample shows a grammar excerpt with a top level and a local semantic block and
its translation to C# code.
<<Grammar Name="calc0_direct">>
Top{
double result;
bool print_(){Console.WriteLine("{0}",result);return true;}
}
...
Number
{
string sNumber;
bool store_(){double.TryParse(sNumber,out result);return true;}
} : ([0-9]+ ('.' [0-9]+)?):sNumber store_ ;
These semantic blocks will be translated to the following C# source code
class calc0_direct : PegCharParser
{
class Top{
internal double result;
internal bool print_(){Console.WriteLine("{0}",result);return true;}
}
Top top;
#region Constructors
public calc0_direct(): base(){ top= new Top();}
public calc0_direct(string src,TextWriter FerrOut): base(src,FerrOut){top= new Top();}
#endregion Constructors
...
class _Number{
internal string sNumber;
internal bool store_(){double.TryParse(sNumber,out parent_.top.result);return true;}
internal _Number(calc0_direct grammarClass){parent_ = grammarClass; }
calc0_direct parent_;
}
public bool Number()
{
var _sem= new _Number(this);
...
}
Quite often, several grammar rules must use the same local semantic block. To avoid code duplication,
the parser generator supports the using SemanticBlockName
clause.
The semantic block named SemanticBlockName
should be defined before the
first grammar rule at the same place where the top level semantic blocks are defined. But because
such a block is referenced in the using clause of a rule, it is treated as a local semantic block.
Local semantic blocks also support destructors. A destructor is translated to an implementation
of the IDisposable interface, and the destructor code is placed into the corresponding Dispose()
function.
The grammar rule function generated by the parser generator is then enclosed in a using block.
This allows cleanup code to run at the end of the rule even in the presence of exceptions.
The following sample is taken from the Python 2.5.2
sample parser.
Line_join_sem_{
bool prev_;
Line_join_sem_ (){set_implicit_line_joining_(true,out prev_);}
~Line_join_sem_(){set_implicit_line_joining_(prev_);}
}
...
[8] parenth_form: '(' parenth_form_content @')' S;
[9] parenth_form_content
using Line_join_sem_: S expression_list?;
...
[17]^^generator_expression: '(' generator_expression_content @')' S;
[18]generator_expression_content
using Line_join_sem_: S expression genexpr_for;
The Line_join_sem_
semantic block turns Python's implicit line joining on and off (Python is
line oriented, except that line breaks are allowed inside parenthesized constructs such as
(...) {...} [...]
). The Line_join_sem_
semantic block and rule [9] of the
above grammar excerpt are translated to
class Line_join_sem_ : IDisposable{
bool prev_;
internal Line_join_sem_(python_2_5_2_i parent)
{
parent_= parent;
parent_._top.set_implicit_line_joining_(true,out prev_);
}
python_2_5_2_i parent_;
public void Dispose(){parent_._top.set_implicit_line_joining_(prev_);}
}
public bool parenth_form_content()
{
    using(var _sem= new Line_join_sem_(this)){
        return And(()=> S() && Option(()=> expression_list() )
        );
    }
}
Parsing Expression Grammars narrow the semantic gap between
formal grammar and implementation of the grammar in a functional or imperative programming language.
PEGs are therefore particularly well suited for manually written parsers as well as for attempts
to integrate a grammar very closely into a programming language. As stated in [1], the elements
which form the PEG framework are not new, but are well known and commonly used techniques when implementing
parsers manually. What makes the PEG framework unique is the selection and combination of the basic elements,
namely
PEG Feature | Advantage | Disadvantage |
Scannerless parsing | Only one level of abstraction. No scanner means no scanner worries. | Grammar slightly cluttered up. Recognition of overlapping tokens might be inefficient
(e.g. identifier token overlaps with keywords -> ident: !keyword [A-Z]+; ) |
Lack of ambiguity | There is only one interpretation for a grammar.
The effect is, that PEG grammars are "executable". | --- |
Error handling by using FATAL alternative | Total user control over error diagnostics | Bloats the grammar. |
Excessive Backtracking possible | Backtracking adds to the power of PEG. If the input string
is in memory, backtracking just means resetting the input position. | Potential memory hog. Interferes with semantic actions.
Solution: Issue a fatal error in case backtracking cannot succeed anymore.
|
Greedy repetition | Greedy repetition conforms to the "maximum munch rule" used in scanning and therefore
allows scannerless parsing.
| Some patterns are more difficult to recognize.
|
Ordered Choice | The author of the grammar determines the selection strategy. | Potential error source for the inexperienced. R: '<' / '<=' ;
will never find <= |
Lookahead operators & and ! | Supports arbitrary lookaheads.
Good cost/gain ratio if backtracking is anyway supported.
Lookahead e.g. allows better reuse of grammar rules and supports
more expressive error diagnostics. | Excessive use of & and ! makes the parser slow. |
A PEG grammar can incur a serious performance penalty, when backtracking occurs frequently. This is the reason
that some PEG tools (so called packrat parsers) memoize already read input and the associated rules. It can
be proven, that appropriate memoization guarantees linear parse time even when backtracking and unlimited lookahead
occurs. Memoization (saving information about already taken paths) on the other hand has its own overhead and
impairs performance in the average case. The far better approach to limiting backtracking is to rewrite the grammar
in a way which reduces it. How to do this will be shown in the next chapter.
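The memoization idea mentioned above can be sketched in a few lines: cache the outcome and end position of each (rule, start position) pair, so that no rule body runs twice at the same position. The class and method names here are illustrative; the library described in this article does not memoize.

```csharp
using System.Collections.Generic;

public delegate bool Matcher();

// Sketch of packrat-style memoization: (rule name, start position) maps to
// (success, end position). Each rule is then evaluated at most once per
// position, which bounds parse time linearly despite backtracking.
public class MemoParser
{
    protected string src_;
    public int pos_;
    readonly Dictionary<string, KeyValuePair<bool, int>> memo_ =
        new Dictionary<string, KeyValuePair<bool, int>>();

    public MemoParser(string src) { src_ = src; }

    public bool Memoize(string ruleName, Matcher rule)
    {
        string key = ruleName + ":" + pos_;
        KeyValuePair<bool, int> hit;
        if (memo_.TryGetValue(key, out hit)) { pos_ = hit.Value; return hit.Key; }
        int start = pos_;
        bool ok = rule();
        if (!ok) pos_ = start;                     // backtrack on failure
        memo_[key] = new KeyValuePair<bool, int>(ok, pos_);
        return ok;
    }
}
```

The cost of the dictionary lookups and stores is exactly the overhead that makes memoization a net loss in the average case.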
The ideas underlying PEG grammars are not entirely new and many
of them are regularly used to manually construct parsers.
Only in its support and encouragement of backtracking and unlimited
lookahead does PEG deviate from most earlier parsing techniques.
The simplest implementation for unlimited lookahead and backtracking
requires that the input file must be read into internal
memory before parsing starts. This is not a problem nowadays, but was
not acceptable earlier when memory was a scarce resource.
A set of grammar rules can recognize a given language.
But the same language can be described by many different grammars even within the same formalism (e.g. PEG grammars).
Grammar modifications can be used to meet the following goals:
Goal | Before modification | After modification |
More informative tree nodes [1] |
[1]^^string:
'"' ('\\' ["'bfnrt\\] / !'"' .)* '"'; |
[1]^^string: '"'(escs/chars)* '"';
[2] ^^escs: ('\\' ["'bfnrt\\])+;
[3] ^^chars: (!["\\] .)+; |
Better placed error indication [2] |
'/*' (!'*/' . )*
('*/' /
FATAL<"comment not closed before end">
);
|
'/*'
( (!'*/' .)* '*/'
/ FATAL<"comment not closed before end">
);
|
Faster Grammar (reduce calling depth) [3] |
[10]string: '"' char* '"' ;
[11]char: escape
/ !(["\\]/control)unicode
/!'"' FATAL<"illegal character">;
[12]escape: '\\' ["\\/bfnrt];
[17]control: [#x0-#x1F];
[18]unicode: [#x0-#xFFFF]; |
[10]string:
'"'
( '\\'["\\/bfnrt]
/ [#x20-#x21#x23-#x5B#x5D-#xFFFF]
/ !'"' FATAL<"illegal character">
)*
'"'; |
Faster Grammar, Less Backtracking (left factoring) |
[1] if_stat:
'if' S '('expr')' stat /
'if' S '('expr')' stat 'else' S stat; |
[1] if_stat:
'if' S '(' expr ')' stat
('else' S stat)?; |
Remarks:
[1] More informative tree nodes can be obtained by syntactical grouping of grammar
elements so that postprocessing is easier. In the above example, access
to the content of the string is improved by grouping consecutive non-escape characters
into one syntactical unit.
[2] The source location reported by an error message is important.
In the example of a C comment which is not closed before the end of the input,
the error message should point to where the comment opens.
[3] Reducing calling depth means inlining function calls, since
each rule corresponds to one function call in our PEG implementation.
Such a transformation should only be carried out for hot spots, otherwise
the expressiveness of the grammar gets lost. Furthermore, an aggressively
inlining compiler may do this for you.
Reducing calling depth may be questionable, but left factorization
is certainly not. It not only improves performance but also eliminates potentially
disruptive backtracking. Once semantic actions are embedded into a PEG parser,
backtracking should in most cases be avoided altogether, because undoing semantic actions
can be tedious.
Most parsing strategies currently in use are based on the notion of
a context free grammar. (The following explanations follow
-for the next 50 lines- closely the material used in the
Wikipedia on Context free grammars [3])
A context free grammar consists
of a set of rules similar to the set of rules for a PEG parser.
But context free grammars are interpreted quite differently from
PEG grammars. The main difference is that context free
grammars are nondeterministic, meaning that
- Alternatives in context free grammars can be chosen arbitrarily
- Nonterminals can be substituted in an arbitrary order
(Substitution means replacing a Nonterminal on the right hand side of a rule by the
definition of the Nonterminal).
By starting with the start rule and choosing
alternatives and substituting nonterminals in all possible orders we can generate all the
strings which are described by the grammar
(also called the language described by the grammar).
With the context free grammar
S : 'a' S 'b' | 'ab';
e.g. we can generate the following language strings
ab, aabb, aaabbb,aaaabbbb,...
With PEG we cannot generate a language, we can only recognize an input string.
The same grammar interpreted as PEG grammar
S: 'a' S 'b' / 'ab';
would recognize any of the following input strings
ab
aabb
aaabbb
aaaabbbb
It turns out, that the nondeterministic nature of context free grammars, while being
indispensable for generating a language, can be a problem when recognizing an input string.
If an input string can be parsed in two different ways we have the problem of ambiguity,
which must be avoided by parsers.
A further consequence of nondeterminism is that a context free input string recognizer
(a parser) must choose a strategy how to substitute nonterminals on the right hand side of a rule.
To recognize the input string
1+1+a
with the context free rule
S: S '+' S | '1' | 'a';
we can, for example, use the following substitutions:
S '+' S ->
(S '+' S) '+' S ->
(('1') '+' S) '+' S ->
(('1') '+' ('1')) '+' S ->
(('1') '+' ('1')) '+' ('a')
This is called a leftmost derivation. Or we can use the substitutions:
S '+' S ->
S '+' (S '+' S) ->
S '+' (S '+' ('a')) ->
S '+' (('1') '+' ('a')) ->
('1') '+' (('1') '+' ('a'));
This is called a rightmost derivation.
A leftmost derivation parsing strategy is called LL, whereas a rightmost
derivation parsing strategy is called LR
(the first L in LL and LR stands for "parse the input string from the
Left"; who would try it from the right?).
Most parsers in use are either LL or LR parsers. Furthermore, grammars
used for LL parsers and LR parsers must obey different rules.
A grammar for an LL parser must never use left recursive rules, whereas
a grammar for an LR parser prefers immediate left recursive rules over
right recursive ones.
The C# grammar e.g. is written for an LR parser. The rule for a list of
local variables is therefore:
local-variable-declarators:
local-variable-declarator
| local-variable-declarators ',' local-variable-declarator;
If we want to use this grammar with an LL parser, then we must rewrite this rule to:
local-variable-declarators:
local-variable-declarator (',' local-variable-declarator)*;
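The rewritten list rule maps directly onto the loop style used throughout this article. The sketch below (an illustration, not library code) simplifies a declarator to a run of letters, since the point is how the (',' item)* shape replaces the left recursive alternative:

```csharp
using System;

// Hand sketch of: local_variable_declarators:
//   local_variable_declarator (',' local_variable_declarator)*;
// A "declarator" is simplified here to a run of letters.
public class DeclaratorList
{
    readonly string src_;
    int pos_;
    public DeclaratorList(string s) { src_ = s; }

    bool Letter()
    {
        if (pos_ < src_.Length && char.IsLetter(src_[pos_])) { ++pos_; return true; }
        return false;
    }
    bool Declarator()                   // simplified: one or more letters
    {
        if (!Letter()) return false;
        while (Letter()) {}
        return true;
    }
    bool Comma()
    {
        if (pos_ < src_.Length && src_[pos_] == ',') { ++pos_; return true; }
        return false;
    }
    public bool Declarators()           // item (',' item)*
    {
        if (!Declarator()) return false;
        for (;;)
        {
            int save = pos_;
            if (!(Comma() && Declarator())) { pos_ = save; break; }
        }
        return true;
    }
    public bool Matches() { pos_ = 0; return Declarators() && pos_ == src_.Length; }
}
```

The trailing save/restore undoes a comma that is not followed by a further declarator.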
Coming back to the original context free rule
S: S '+' S | '1' | 'a';
and interpreting it as a PEG rule
S: S '+' S / '1' / 'a';
we do not have to do substitutions or/and choose a parsing strategy to recognize the input string
1+1+a
We only must follow the execution rules for a PEG which translates to the following steps:
- Set the input position to the start of the input string
- Choose the first alternative of the start rule (here: S '+' S)
- Match the input against the first component of the sequence S '+' S
- Since the first component is the nonterminal S, call this nonterminal.
This obviously results in infinite recursion. The rule S: S '+' S / '1' / 'a';
is therefore not a valid PEG rule. But almost any context free rule can be transformed into a PEG rule.
The context free rule S: S '+' S | '1' | 'a';
translates to the valid PEG rule
S: ('1'/'a')('+' S)*;
One of the following chapters shows how to translate a context free rule into a PEG rule.
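The transformed rule can be checked by writing it directly as a recursive C# method; this is a hand sketch in the style of this article's parsers, not generated code:

```csharp
using System;

// Hand sketch of the valid PEG rule  S: ('1'/'a')('+' S)*;
public class SumOfOnes
{
    readonly string src_;
    int pos_;
    public SumOfOnes(string s) { src_ = s; }

    bool Char(char c)
    {
        if (pos_ < src_.Length && src_[pos_] == c) { ++pos_; return true; }
        return false;
    }

    public bool S()
    {
        if (!(Char('1') || Char('a'))) return false;   // ('1'/'a')
        for (;;)                                        // ('+' S)*
        {
            int save = pos_;
            if (!(Char('+') && S())) { pos_ = save; break; }
        }
        return true;
    }

    // Accept only if the whole input is consumed.
    public bool Matches() { pos_ = 0; return S() && pos_ == src_.Length; }
}
```

Because the recursion now happens only after a '+' has been consumed, the infinite recursion of the left recursive form cannot occur.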
The following table compares the prevailing parser types.
Parser Type | Sub Type | Scanner | Lookahead | Generality | Implementation | Examples |
Context Free | LR-Parser | yes | - | - | table driven | |
Context Free | SLR-Parser | yes | 1 | medium | table driven | hand-computed table |
Context Free | LALR(1)-Parser | yes | 1 | high | table driven | YACC,Bison |
Context Free | LL-Parser | yes | - | - | code or table driven | |
Context Free | LL(1)-Parser | yes | 1 | low | code or table driven | Predictive parsing |
Context Free | LL(k)-Parser | yes | k | high | code or table driven | ANTLR,Coco-R |
Context Free | LL(*)-Parser | yes | unlimited | high+ | code or table driven | boost::spirit |
PEG-Parser | PEG-Parser | no | unlimited | very high | code preferred | Rats,Packrat,Pappy |
The reason that the above table rates the generality and power of PEG as very high
is the pair of PEG operators & (peek) and ! (not). It is not difficult to implement these operations,
but heavy use of them can impair parser performance, and earlier generations of parser writers
carefully avoided such features because of the implied costs.
When it comes to runtime performance, the differences between the above parser strategies are
not so clear. LALR(1) parsers can be very fast; the same is true for LL(1) parsers (predictive parsers).
When using LL(*) and PEG-Parsers, runtime performance depends on the amount of lookahead actually used
by the grammar. Special versions of PEG-Parsers (Packrat parsers) can guarantee linear runtime behaviour
(meaning that doubling the length of the input string just doubles the parsing time).
An important difference between LR-Parsers and LL- or PEG-Parsers is the fact that LR-Parsers are
always table driven. A manually written parser is therefore in most cases either an LL-Parser or a PEG-Parser.
Table driven parsing puts parsing into a black box which only allows limited user interaction.
This is not a problem for a one-time, clearly defined parsing task, but it is not ideal if one
frequently corrects, improves and extends the grammar, because with a table driven parser
every grammar change means a complete table and code regeneration.
Most specifications for popular programming languages come with a grammar suited for an LR parser.
LL and PEG parsers cannot directly use such grammars because of left recursive rules.
Left recursive rules are forbidden in LL and PEG parsers because they result in infinite recursion.
Another problem with LR grammars is that they often use alternatives with the same beginning.
This is legal in PEG but results in unwanted backtracking.
The following table shows the necessary grammar transformations when going from an LR grammar to
a PEG grammar.
Transformation Category | LR rule | ~PEG rule (result of transformation) |
Immediate Left Recursion =>
Factor out non recursive alternatives |
A: A t1 | A t2 | s1 | s2;
|
A: (s1 / s2) (t1 / t2)*; |
Indirect Left Recursion =>
Transform to Immediate Left Recursion =>
Factor out non recursive alternatives |
A: B t1 | s1 ;
B: A t2 | s3 ;
|
A: (A t2 | s3) t1 | s1;
A: (s3 t1 / s1) (t2 t1)* ...; |
Alternatives with same beginning =>
Merge alternatives using Left Factorization |
A: s1 t1 | s1 t2; |
A: s1 (t1 / t2); |
|
The following sample shows the transformation of part of the "C" grammar from the LR grammar as presented
in Kernighan and Ritchies book on "C" to a PEG grammar (the symbol S is used to denote scanning of white space).
LR grammar ("C" declarator snippet):
    declarator: pointer? direct_declarator;
PEG grammar:
    declarator: pointer? direct_declarator;

LR:
    direct_declarator:
        identifier
      | '(' declarator ')'
      | direct_declarator '[' constant_expression? ']'
      | direct_declarator '(' parameter_type_list ')'
      | direct_declarator '(' identifier_list? ')';
PEG:
    direct_declarator:
        (identifier / '(' S declarator ')' S)
        ( '[' S constant_expression? ']' S
        / '(' S parameter_type_list ')' S
        / '(' S identifier_list? ')' S
        )*;

LR:
    pointer:
        '*' type_qualifier_list?
      | '*' type_qualifier_list? pointer;
PEG:
    pointer:
        '*' S type_qualifier_list? pointer?;

LR:
    parameter_type_list:
        parameter_list
      | parameter_list ',' '...';
PEG:
    parameter_type_list:
        parameter_list (',' S '...')?;

LR:
    type_qualifier_list:
        type_qualifier
      | type_qualifier_list type_qualifier;
PEG:
    type_qualifier_list:
        type_qualifier+;

LR:
    parameter_declaration:
        declaration_specifiers declarator
      | declaration_specifiers abstract_declarator?;
PEG:
    parameter_declaration:
        declaration_specifiers
        (declarator / abstract_declarator?);

LR:
    identifier_list:
        identifier
      | identifier_list ',' identifier;
PEG:
    identifier_list:
        identifier S
        (',' S identifier)*;
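The last pair above (identifier_list) can be turned into running code directly. The sketch below is a Python illustration, not the article's C# library; the identifier syntax is simplified to letters, digits and underscores, and the helper names are my own.

```python
# Executable form of the transformed rule:
#   identifier_list <- identifier S (',' S identifier S)*
# The left-recursive LR rule became "first element, then zero or more
# comma-separated elements", i.e. a simple loop.

import re

IDENT = re.compile(r"[A-Za-z_][A-Za-z_0-9]*")

def skip_ws(text, pos):
    """The S symbol: advance past white space."""
    while pos < len(text) and text[pos].isspace():
        pos += 1
    return pos

def parse_identifier_list(text, pos=0):
    """Return (names, end position) on success, or None."""
    m = IDENT.match(text, pos)
    if not m:
        return None
    names = [m.group()]
    pos = skip_ws(text, m.end())
    while text[pos:pos + 1] == ',':            # (',' S identifier S)*
        nxt = skip_ws(text, pos + 1)
        m = IDENT.match(text, nxt)
        if not m:
            break       # repetition stops; pos still points at the ','
        names.append(m.group())
        pos = skip_ws(text, m.end())
    return names, pos

print(parse_identifier_list("argc, argv , envp"))
```

Each of the other transformed rules follows the same pattern: ordered choice becomes an if/elif cascade, repetition becomes a loop, and optional parts become a match attempt whose failure is ignored.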
The planned series of articles consists of the following parts:
Part | Planned release date | Description
Parser and Parser Generator | October 2008 | C# classes to support work with Parsing Expression Grammars
PEG Debugger/Interpreter | January 2009 | Direct interpretation and debugging of PEG grammars without the need to generate a C# module
Sample Applications | June 2009 | More grammar samples with postprocessing
History
- 2008 October: initial version
- 2008 October: minor update
  - improved semantic block support (using clause, IDisposable interface)
  - added new sample parser (Python 2.5.2)
References
[1] Parsing Expression Grammars, Bryan Ford, MIT, January 2004
[2] Parsing expression grammar, Wikipedia (en)
[3] Context-free grammar, Wikipedia (en)
[4] ITU-T X.690: Specification of BER, CER and DER, International Telecommunication Union
[5] RFC 4627: The application/json Media Type for JavaScript Object Notation (JSON)
[6] Introducing JSON