Creating DSL with ANTLR4 and Scala
Domain specific languages, when done right, help a lot in improving developer productivity. The first thing you need while creating a DSL is a parser which can take a piece of text and transform it into a structured format (like an Abstract Syntax Tree) so that your program can understand it and do something useful with it. A DSL tends to stay around for years, so while choosing a tool for creating the parser for your DSL you need to make sure that it is easy to maintain and evolve the language. For parsing a simple DSL you can just use regular expressions or Scala's built-in parser combinators, but for even a slightly complex DSL both of these become performance and maintenance nightmares.
In this post we will see how to use ANTLR to create a basic grammar and use it from Scala. The full code and grammar for this post are available at https://github.com/saumitras/antlr4-scala/
ANTLR4
ANTLR can generate lexers, parsers, tree parsers, and combined lexer-parsers. Parsers can automatically generate abstract syntax trees which can be further processed with tree parsers. ANTLR provides a single consistent notation for specifying lexers, parsers, and tree parsers. This is in contrast with other parser/lexer generators and adds greatly to the tool’s ease of use. It supports:
- Tree construction
- Tree walking
- Error recovery
- Error handling
- Translation
ANTLR supports a large number of target languages, so the same grammar can be used for both backend parsing and frontend validation. The following languages are supported:
Ada, ActionScript, C, C#, D, Emacs ELisp, Objective-C, Java, JavaScript, Python, Ruby, Perl6, Perl, PHP, Oberon, Scala
How ANTLR works
{% img /images/posts/antlr2.png %}
At a high level, here's what you do to parse something with ANTLR:
- Create lexer rules
- Create parser rules which use the lexer output
- Use the lexer and parser rules to generate source code for a target language
- Use generated sources to convert some raw input into structured form (AST)
- Do something with this structured data
We will understand this with an example. Let's say we want to create a DSL that allows arithmetic operations. A valid input (expression) would be something like `3 + 4 * 5`.
As humans, if we want to evaluate this expression, here's what we will do:
- Split this expression into different components.
- For example, in the expression above, each character belongs to one of these groups:
- Operands (3, 4, 5)
- Operation (+ - * /)
- Whitespaces
- This part is called lexical analysis, where you convert raw text (a stream of characters) into tokens
- Create relationships between tokens
- To evaluate it efficiently we can create a tree-like structure that defines the relationships between the different parts of the expression, like the tree sketched just after this list
- This is called an AST (Abstract Syntax Tree), and it gets built by applying the rules you define in your grammar to the input text. Once you have the AST, to evaluate the expression we need to traverse or 'walk' it in a depth-first manner. We start at the root '+' and go as deep into the tree as we can along each child, then evaluate the operations as we come back out of the tree.
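For the expression `3 + 4 * 5`, with `*` binding tighter than `+`, the tree would look like this:

    +
   / \
  3   *
     / \
    4   5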
We will now set up the tools and try creating a simple grammar.
Setup
IDE
ANTLR provides a GUI-based IDE for developing grammars. You can download it from http://www.antlr3.org/works/. It combines an excellent grammar-aware editor with an interpreter for rapid prototyping and a language-agnostic debugger for isolating grammar errors.
IntelliJ plugin
IntelliJ provides a plugin for ANTLR too. Refer to this link for adding the plugin: https://plugins.jetbrains.com/plugin/7358-antlr-v4-grammar-plugin
Command line setup
You can also create, test and debug grammars directly from the command line. Here are the steps:
- Download antlr jar http://www.antlr.org/download/antlr-4.7-complete.jar
- Create an alias for the command that generates the sources
alias antlr4='java -jar /home/sam/softwares/antlr/antlr-4.7-complete.jar'
- Create an alias to test your grammar for some input
alias grun='java -cp ".:/home/sam/softwares/antlr/antlr-4.7-complete.jar" org.antlr.v4.gui.TestRig'
Add these aliases to `~/.bashrc` to be able to call the `antlr4` and `grun` commands directly from anywhere.
Creating grammar
A grammar consists of 2 parts:
- Lexer
- Parser
Both of these can be defined in the same file, but for maintenance's sake it is better to define them in separate files. Let's create a lexer and parser for a DSL which will allow basic arithmetic operations on 2 numbers. Some valid inputs (which we will also use for testing later) will be:
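10 + 3
2674 - 4735
1272.12 * 743.12
271 / 281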
Let's first define the lexer in a file named `ArithmeticLexer.g4` to extract tokens from the input:
lexer grammar ArithmeticLexer;
WS: [ \t\n]+ -> skip ;
NUMBER: ('0' .. '9') + ('.' ('0' .. '9') +)?;
ADD: '+';
SUB: '-';
MUL: '*';
DIV: '/';
- The `WS` rule tells the lexer to skip all spaces, tabs and newline characters
- The `NUMBER` rule extracts integers and decimals as `NUMBER` tokens
- The `ADD`/`SUB`/`MUL`/`DIV` rules assign a named token to the respective mathematical operator
Now let's write the parser rules in a file named `ArithmeticParser.g4`, which will process the tokens generated by the lexer and create an AST for a valid input:
parser grammar ArithmeticParser;
options { tokenVocab=ArithmeticLexer; }
expr: NUMBER operation NUMBER;
operation: (ADD | SUB | MUL | DIV);
- `expr` is the base rule and will accept any 2 numbers with one of the valid operations between them
- The `operation` rule defines which tokens are valid operations
Generating sources for a target language
Now that we have our grammar, we can generate the lexer and parser sources in any of the supported languages. Run the following from the command line:
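# generate the lexer first so that ArithmeticLexer.tokens exists for the parser's tokenVocab
antlr4 ArithmeticLexer.g4
antlr4 ArithmeticParser.g4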
By default it will generate the sources in Java. You can change that by passing a language argument. For example, the following command generates the sources in JavaScript:
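antlr4 -Dlanguage=JavaScript ArithmeticLexer.g4 ArithmeticParser.g4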
Instead of generating the sources individually for the lexer and parser, you can do it in a single command too:
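antlr4 Arithmetic*.g4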
After you run the above, you will see the following Java sources generated in the same directory:
├── ArithmeticLexer.g4
├── ArithmeticLexer.java
├── ArithmeticLexer.tokens
├── ArithmeticParserBaseListener.java
├── ArithmeticParser.g4
├── ArithmeticParser.java
├── ArithmeticParserListener.java
└── ArithmeticParser.tokens
You can also provide a package name for the generated sources, like below:
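# com.example.arithmetic is just a placeholder package name
antlr4 -package com.example.arithmetic Arithmetic*.g4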
ANTLR4 provides 2 ways to walk the AST - `Listener` and `Visitor`. ANTLR doesn't generate the sources for the visitor by default. Since we will be using the visitor pattern from Scala to avoid mutability, let's generate the visitor sources too. It can be done by passing the `-visitor` flag, like below:
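antlr4 -visitor Arithmetic*.g4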
Now you will see the sources for the visitor too:
├── ArithmeticLexer.g4
├── ArithmeticLexer.java
├── ArithmeticLexer.tokens
├── ArithmeticParserBaseListener.java
├── ArithmeticParserBaseVisitor.java
├── ArithmeticParser.g4
├── ArithmeticParser.java
├── ArithmeticParserListener.java
├── ArithmeticParser.tokens
└── ArithmeticParserVisitor.java
Next, compile the generated sources:
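# use the same antlr jar path that the aliases point to
javac -cp ".:/home/sam/softwares/antlr/antlr-4.7-complete.jar" Arithmetic*.java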
Now you are ready to test any input against your DSL.
Testing the DSL
To do that, run the following command:
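grun Arithmetic expr -tokens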
The above command tells `org.antlr.v4.gui.TestRig` to run the `Arithmetic` grammar and test the input against the rule named `expr`. The `-tokens` flag lets us see the tokens generated by the lexer.
Enter any valid input like `10 + 3`, press Enter, then press `Ctrl+D`. You will see output like below:
$ grun Arithmetic expr -tokens
10 + 2
^D
[@0,0:1='10',<NUMBER>,1:0]
[@1,3:3='+',<'+'>,1:3]
[@2,5:5='2',<NUMBER>,1:5]
[@3,7:6='<EOF>',<EOF>,2:0]
Since it didn't show any error, it means your input is valid. Each line shows the token value, the token name, and its start and end offsets.
In case of invalid input, ANTLR will tell you what it was expecting. For example, feeding it `12 ^ 3` (the `^` operator is not part of our grammar) produces errors like the following:
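$ grun Arithmetic expr
12 ^ 3
^D
line 1:3 token recognition error at: '^'
line 1:5 missing {'+', '-', '*', '/'} at '3'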
In case of valid input, in addition to the tokens, you can also see the AST by passing the `-gui` flag, like below:
$ grun Arithmetic expr -tokens -gui
1272.12 * 743.12
^D
[@0,0:6='1272.12',<NUMBER>,1:0]
[@1,8:8='*',<'*'>,1:8]
[@2,10:15='743.12',<NUMBER>,1:10]
[@3,17:16='<EOF>',<EOF>,2:0]
Using generated sources in code
We will now see how to extend the generated interfaces and use them from code. As I mentioned above, ANTLR4 provides 2 ways to walk the AST - Visitor and Listener. We will first see how to use the listener pattern. Although the listener approach is commonly used by Java devs, Scala folks will not like it because its methods can only return unit, so you need to use intermediate variables, which leads to side effects. Refer to this post for a comparison between the two patterns.
The first thing you need to do is add the ANTLR dependencies to your `build.sbt`:
libraryDependencies ++= Seq(
"org.antlr" % "antlr4-runtime" % "4.6",
"org.antlr" % "stringtemplate" % "3.2"
)
Next, add all the generated sources to your project and create a `parse` method which accepts an input expression:
import org.antlr.v4.runtime.{ANTLRInputStream, CommonTokenStream}

def parse(input:String) = {
println("\nEvaluating expression " + input)
val charStream = new ANTLRInputStream(input)
val lexer = new ArithmeticLexer(charStream)
val tokens = new CommonTokenStream(lexer)
val parser = new ArithmeticParser(tokens)
/* Implement listener and use parser */
}
- First we convert the input text to a char stream, because the lexer operates at the char level
- Then we create an `ArithmeticLexer` object, generated from the definitions in `ArithmeticLexer.g4`, and pass the input stream to it
- Next we collect all the tokens obtained by applying the lexer rules to the input text
- Finally we create a parser which applies the rules we defined in `ArithmeticParser.g4`
The next thing we need to do is implement some of the methods of the `ArithmeticParserBaseListener` class. Let's look at the content of the generated `ArithmeticParserBaseListener`:
public class ArithmeticParserBaseListener implements ArithmeticParserListener {
//enter and exit methods for grammar rules
@Override public void enterExpr(ArithmeticParser.ExprContext ctx) { }
@Override public void exitExpr(ArithmeticParser.ExprContext ctx) { }
@Override public void enterOperation(ArithmeticParser.OperationContext ctx) { }
@Override public void exitOperation(ArithmeticParser.OperationContext ctx) { }
//default grammar independent methods
@Override public void enterEveryRule(ParserRuleContext ctx) { }
@Override public void exitEveryRule(ParserRuleContext ctx) { }
@Override public void visitTerminal(TerminalNode node) { }
@Override public void visitErrorNode(ErrorNode node) { }
}
For every rule which we defined in `ArithmeticParser.g4`, it created an `enter` and an `exit` method. Since we had 2 rules, `expr` and `operation`, it created 4 methods. As the names imply, these get triggered every time the walker enters and exits a matched rule. For now let's focus on the entry method of our starting rule, `expr`. Each method receives a `context` which has all the meta information as well as the matched input. Also note that all methods return `void`, which means you need to use mutable variables to store computed values if they need to be shared among different rules or even with the main caller. This problem can be solved by using the visitor instead of the listener, as discussed in this post.
So now we create our own class by extending `ArithmeticParserBaseListener` and implementing the `enterExpr` method:
import scala.collection.JavaConverters._
import scala.util.Try

class ArithmeticListenerApp extends ArithmeticParserBaseListener {
override def enterExpr(ctx: ArithmeticParser.ExprContext): Unit = {
val exprText = ctx.getText
println(s"Expression after tokenization = $exprText")
val operands = ctx.NUMBER().asScala.toList.map(_.getText)
val operand1 = parseDouble(operands.lift(0).getOrElse("0.0")).getOrElse(0.0)
val operand2 = parseDouble(operands.lift(1).getOrElse("0.0")).getOrElse(0.0)
val operation = ctx.operation().getText
calculate(operand1, operand2, operation) match {
case Some(result) =>
println(s"Result of $operand1 $operation $operand2 = $result")
case None =>
println(s"Failed to evaluate expression. Tokenized expr = $exprText")
}
}
def parseDouble(s: String): Option[Double] = Try(s.toDouble).toOption
def calculate(op1:Double, op2:Double, operation:String):Option[Double] = {
operation match {
case "+" => Some(op1 + op2)
case "-" => Some(op1 - op2)
case "*" => Some(op1 * op2)
case "/" => Try(op1 / op2).toOption
case _ =>
println(s"Unsupported operation")
None
}
}
}
- `exprText` will contain the tokenized text matched by this rule.
- The `expr` rule's `context` knows about `NUMBER` and `operation`. Since `NUMBER` occurs twice in the rule, `ctx.NUMBER()` will be a list containing 2 numbers.
- We get the value of `operation` from the `expr` rule's `context`.
- We calculate the result, and since there is no way to return it to the caller from the `enterExpr` method, we just print it. We could have stored it in some mutable variable in case the caller needed it.
Now that we have implemented this, we need to use it in the `parse` method which we defined earlier, like below:
def parse(input:String) = {
println("\nEvaluating expression " + input)
val charStream = new ANTLRInputStream(input)
val lexer = new ArithmeticLexer(charStream)
val tokens = new CommonTokenStream(lexer)
val parser = new ArithmeticParser(tokens)
val arithmeticListener = new ArithmeticListenerApp()
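// calling enterRule on the root expr context fires our enterExpr callback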
parser.expr.enterRule(arithmeticListener)
}
Let's test it now on some input expressions:
val expressions = List(
"127.1 + 2717",
"2674 - 4735",
"47 * 74.1",
"271 / 281",
"12 ^ 3" // unsupported expression
)
expressions.foreach(parse)
You will see the following output:
Evaluating expression 127.1 + 2717
Expression after tokenization = 127.1+2717
Result of 127.1 + 2717.0 = 2844.1
Evaluating expression 2674 - 4735
Expression after tokenization = 2674-4735
Result of 2674.0 - 4735.0 = -2061.0
Evaluating expression 47 * 74.1
Expression after tokenization = 47*74.1
Result of 47.0 * 74.1 = 3482.7
Evaluating expression 271 / 281
Expression after tokenization = 271/281
Result of 271.0 / 281.0 = 0.9644128113879004
Evaluating expression 12 ^ 3
line 1:3 token recognition error at: '^'
line 1:5 missing {'+', '-', '*', '/'} at '3'
Expression after tokenization = 123
Unsupported operation
Failed to evaluate expression. Tokenized expr = 123
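As mentioned earlier, the visitor pattern avoids the listener's mutable-state problem because its methods can return values. Here is a minimal sketch of what that could look like for our grammar, assuming the visitor sources generated with the `-visitor` flag above (the class name `ArithmeticVisitorApp` is just an example):

import scala.collection.JavaConverters._
import scala.util.Try

// Minimal sketch of the visitor alternative: visitExpr returns a value,
// so no mutable state is needed to get the result back to the caller.
class ArithmeticVisitorApp extends ArithmeticParserBaseVisitor[Option[Double]] {
  override def visitExpr(ctx: ArithmeticParser.ExprContext): Option[Double] = {
    val nums = ctx.NUMBER().asScala.toList.flatMap(n => Try(n.getText.toDouble).toOption)
    (nums, ctx.operation().getText) match {
      case (List(a, b), "+") => Some(a + b)
      case (List(a, b), "-") => Some(a - b)
      case (List(a, b), "*") => Some(a * b)
      case (List(a, b), "/") => Try(a / b).toOption
      case _                 => None
    }
  }
}

// usage: replace the listener wiring in parse() with
// val result = new ArithmeticVisitorApp().visit(parser.expr())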
I hope this post gave you an idea of how to get started with creating your own DSL.