This isn’t my first time trying to create an equation solver (I recall doing something similar for an assignment my freshman year of high school), so I knew that I had to somehow parse the infix notation into an abstract syntax tree. There are two steps to this:

• tokenizing the string
• parsing the tokenized string

The first part is trivial with regex:

(defn tokenize
  "split string into tokens"
  [x]
  (concat (map characterize (re-seq #"(?:(\d+)|(\w+)|(.))" x))
          (list [:end ""])))


The regular expression above splits the string into numbers, words (variables), and symbols (everything else). I have each of these wrapped in separate capturing groups so I can characterize them later (guess what the characterize function does):

(defn characterize
  "characterize a specific token (number,variable,paren,binop)"
  [tok]
  (let [num (nth tok 1)
        var (nth tok 2)
        bop (nth tok 3)]
    (cond
      (some? num) [:num (js/parseInt num)]
      (some? var) [:var var]
      (= bop "(") [:lparen bop]
      (= bop ")") [:rparen bop]
      :else       [:binop bop])))


characterize takes a specific match from the re-seq call above and returns a “tuple” of the token’s type and its value (e.g. [:binop "+"] indicates that the token is a binary operator with the value "+"). I take the liberty of parsing any numbers in this function, but there’s no need to.

In the tokenize call above, I “append” a token of type :end to the (you guessed it) end of the list, which I’ll get to later.
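To make that concrete, here’s the tokenizer end-to-end on the input we’ll be solving later. This is a self-contained sketch: the post targets ClojureScript, so I’ve swapped js/parseInt for Long/parseLong purely so it also runs at a plain JVM REPL.

```clojure
;; same tokenizer as above, with Long/parseLong standing in for
;; js/parseInt so this sketch runs on the JVM as well
(defn characterize
  "characterize a specific token (number,variable,paren,binop)"
  [tok]
  (let [num (nth tok 1)
        var (nth tok 2)
        bop (nth tok 3)]
    (cond
      (some? num) [:num (Long/parseLong num)]
      (some? var) [:var var]
      (= bop "(") [:lparen bop]
      (= bop ")") [:rparen bop]
      :else       [:binop bop])))

(defn tokenize
  "split string into tokens"
  [x]
  (concat (map characterize (re-seq #"(?:(\d+)|(\w+)|(.))" x))
          (list [:end ""])))

(tokenize "4+x=5")
;; => ([:num 4] [:binop "+"] [:var "x"] [:binop "="] [:num 5] [:end ""])
```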

Now onto the tricky part. I tried looking at different ways to actually parse the tokens, coming across terms such as recursive descent, precedence climbing, TDOP (which sounds really cool), and even one snippet in a Wikipedia entry somewhere that talks about a set of rewriting rules used in a FORTRAN compiler that put valid parentheses around every operation, allowing for parsing based on parenthesis depth/level. The problem was that all the implementations of these parsing algorithms required some global state, or a cursor token, which apparently is a big no-no for functional programming.

Side note: from my research, it seems like functional programming just means:

• pure functions (same input -> same output)
• no/minimal global state
• immutable data (create new data instead of modifying old data)
• avoid classes (I think?)

Anyway, back to the main program.

Most of the implementations of the algorithms above used something like a Python generator, which gave the next token in the string whenever you called a method. The reason I didn’t see this working in something like Clojure is that I’d have to hand the state of the cursor down to another function, but then I’d have no idea how that call affects the state in the current function call. In other words, if I come across a token that requires some call to function A, then after A returns, I don’t know how many tokens it consumed during its call, so I don’t know where to pick up in my main call.

Actually, writing the above makes me realize that I could just return the “changed” state of the tokenized string from the child call. It always surprises (and disappoints) me how often I can find the solution to problems like these only after I’ve started writing about them. I’ve always thought I was more of a thinking guy but maybe I should write down things more often.
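For the record, that realization is a standard trick in functional parsing: every parsing function returns both its result and the tokens it didn’t consume, so the caller knows exactly where to pick up. A tiny sketch (parse-num is a made-up helper for illustration, not code from the solver):

```clojure
;; each parser returns [result remaining-tokens] instead of
;; advancing a shared cursor
(defn parse-num
  [toks]
  (let [[type val] (first toks)]
    (when (= type :num)
      [val (rest toks)])))

(parse-num [[:num 4] [:binop "+"] [:num 5]])
;; => [4 ([:binop "+"] [:num 5])]
```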

Anyways, I found a really easy-to-understand description of the shunting-yard algorithm here, which was really easy to understand (did I say how easy it was to understand?). The Wikipedia article has too much detail (which isn’t necessarily a bad thing; it just wasn’t what I needed at the moment). My implementation starts out with a case check on the type of the token:

(defn shunting-yard
  "use shunting-yard algorithm to convert infix to ast"
  ([toks] (shunting-yard toks '() '())) ; start the initial call with empty stacks
  ([toks ops res] ; toks = token queue, ops = operator stack, res = result/operand stack
   (let [[type val] (first toks)] ; local vars type and val represent the current token
     (case type ; switch on the type of the current token
       :num    ...
       :var    ... ; variable
       :lparen ...
       :rparen ...
       :binop  ... ; binary operators
       :end    ...)))) ; end of tokenized string



Excuse any missing closing parens, I was using Parinfer (our lord and savior).

The :num and :var cases are pretty simple, à la the first bullet point in the algorithm description:

(case type
  ; "If we see [a :num or :var], push it on[to res]."
  :num (recur (next toks) ops (conj res val))
  :var (recur (next toks) ops (conj res :var))


If you didn’t notice, I’m quoting the actual article. Keep it open and follow along, because later on, the code order and the article order don’t match up and you’re gonna get confused.

It took some time for me to really understand recur because it’s used exactly like plain tail recursion. Later on I learned that the advantage of recur over good old-fashioned tail recursion is that recur reuses the current call frame, which doesn’t add more entries onto the call stack and, in turn, doesn’t cause a stack overflow when you need to recurse many times. If it still confuses you, you can just imagine that I’m calling the original function instead of recur, and you’ll get the same idea.
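A toy comparison, in case the difference isn’t clear (neither function is from the solver; both compute 1 + 2 + … + n):

```clojure
(defn sum-rec
  "plain tail-position recursion: still one stack frame per call on the JVM"
  [n acc]
  (if (zero? n)
    acc
    (sum-rec (dec n) (+ acc n))))

(defn sum-recur
  "recur rebinds the arguments and jumps back, reusing the current frame"
  [n acc]
  (if (zero? n)
    acc
    (recur (dec n) (+ acc n))))

(sum-recur 1000000 0) ;; => 500000500000
;; (sum-rec 1000000 0) would throw a StackOverflowError
```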

Both :num and :var are operands, so we can recurse with the first entry of toks dequeued, ops unchanged, and the token pushed onto res.

Going along with the algorithm description, we can handle the operators (:binop) next:

(case type
  ...
  ; "while there’s a [:binop] on top of [ops] of precedence higher than or equal to that of [val]"
  :binop (if (>= (get-in opinfo [(first ops) :prec])
                 (get-in opinfo [val :prec]))
           (recur toks ; don't dequeue from toks until the above condition is false (emulates while)
                  (pop ops) ; "pop it off [of ops]"
                  (apply-binop (peek ops) res)) ; "and apply it"
           (recur (next toks)
                  (conj ops val) ; "Then, push [val] on[to ops]"
                  res)))


Because the structure of this function is to recurse based on conditions, the while loop had to be dealt with slightly differently. Fortunately, the while condition can be handled by the recursive calls, so I can replace the while with an if and defer dequeuing (how do you even spell that word) the token from toks until the while loop exits.

There’s a call to apply-binop in the middle there, associated with applying the operator (obviously). Here’s the code for that:

(defn apply-binop
  "apply a binary operator to two operands from an operand stack, pushing the result back to the stack"
  [op stack]
  (conj (nnext stack) (list op (second stack) (first stack))))


The function takes an operator, and the res stack, and returns a new stack with the operator applied. Since the desired result of this shunting-yard algorithm is an abstract syntax tree, all we need to do is to create a node/subtree with the operator as the root and the top two entries from the operand stack as the children. Now, let’s say that instead of an abstract syntax tree, we just wanted to evaluate the expression (assuming no variables). Then it’s here where we can just compute the result of applying the operator, and then push that onto the stack. Just shows the flexible nature of the algorithm.
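Here’s a sketch of that evaluating variant, with a stripped-down stand-in for the operator table (the real opinfo map below keeps the function under :op):

```clojure
;; minimal stand-in for opinfo: operator symbol -> function
(def op-fns {"*" * "/" / "+" + "-" -})

(defn eval-binop
  "like apply-binop, but pushes the computed number instead of a subtree"
  [op stack]
  (conj (nnext stack) ((op-fns op) (second stack) (first stack))))

;; operands were pushed left-to-right, so (second stack) is the left operand
(eval-binop "-" '(3 10)) ;; => (7), i.e. 10 - 3
```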

The observant among you may have noticed that there is an object called opinfo. “But Jason”, you cry. “If you use something from outside of the function, doesn’t that make it global state, and therefore make this function impure?” you may ask (actually probably not, but waftsoc). opinfo is merely a structure holding constant information about each of the binary operators I chose to implement:

(def opinfo
  {"*" {:prec 3 :inv "/" :op *}
   "/" {:prec 3 :inv "*" :op /}
   "+" {:prec 2 :inv "-" :op +}
   "-" {:prec 2 :inv "+" :op -}
   "=" {:prec 1}})


Where :prec, :inv, and :op indicate the precedence, inverse, and Clojure function object (?), respectively. Note that the equal sign is there as well, but doesn’t have :inv or :op. This is because opinfo is used for both parsing and evaluating, and for parsing purposes, the equal sign is the :binop, or binary operator, with the lowest (or highest, depending on your perspective) precedence.
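A few lookups to show how the table gets used during parsing (note that anything not in the table, parentheses included, has no precedence at all, which is exactly what the :binop comparison relies on):

```clojure
(def opinfo
  {"*" {:prec 3 :inv "/" :op *}
   "/" {:prec 3 :inv "*" :op /}
   "+" {:prec 2 :inv "-" :op +}
   "-" {:prec 2 :inv "+" :op -}
   "=" {:prec 1}})

(get-in opinfo ["*" :prec]) ;; => 3
(get-in opinfo ["+" :inv])  ;; => "-"
(get-in opinfo ["(" :prec]) ;; => nil, so "(" never outranks a real operator
```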

The last part of the algorithm is to handle the end of the string. This is where that :end token comes in. Instead of using a cond or condp and checking whether toks is empty ((empty? toks)), adding that :end token lets me use the constant-time case function. To be honest, the time complexity of choosing between six items is going to do nothing in the long run, but the code looked distasteful otherwise. Here’s the code:

(case type
  ...
  :end (if (empty? ops)
         (first res) ; "Then the result is the only item left on [res]"
         (recur toks ; "apply any operators remaining on [ops]"
                (pop ops)
                (apply-binop (peek ops) res))))


And with that the simple shunting-yard algorithm is complete. There isn’t a lot you can do with just the four basic operations though, so to spice it up (just a little bit), we can read on and add the parentheses handling.

(case type
  ...
  :lparen (recur (next toks) (conj ops val) res) ; "When you see [an :lparen], push it on[to ops]"
  :rparen (if (= (first ops) "(") ; "until you get back to [an :lparen]"
            (recur (next toks) (pop ops) res) ; "which is popped and discarded."
            (recur toks ; "pop-and-apply any operators on [ops]"
                   (pop ops)
                   (apply-binop (peek ops) res))))


If you’ve been following along so far, the above shouldn’t be too difficult.
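For reference, here’s everything from this post assembled into one runnable sketch. One caveat: on the JVM, (>= nil 2) throws, while in ClojureScript the underlying JavaScript comparison just comes out false, so I’ve given the precedence lookups a default of 0 to make the sketch portable:

```clojure
(def opinfo
  {"*" {:prec 3 :inv "/" :op *}
   "/" {:prec 3 :inv "*" :op /}
   "+" {:prec 2 :inv "-" :op +}
   "-" {:prec 2 :inv "+" :op -}
   "=" {:prec 1}})

(defn apply-binop
  "apply a binary operator to the top two operands, pushing a subtree back"
  [op stack]
  (conj (nnext stack) (list op (second stack) (first stack))))

(defn shunting-yard
  "use shunting-yard algorithm to convert infix tokens to an ast"
  ([toks] (shunting-yard toks '() '()))
  ([toks ops res]
   (let [[type val] (first toks)]
     (case type
       :num    (recur (next toks) ops (conj res val))
       :var    (recur (next toks) ops (conj res :var))
       :lparen (recur (next toks) (conj ops val) res)
       :rparen (if (= (first ops) "(")
                 (recur (next toks) (pop ops) res)
                 (recur toks (pop ops) (apply-binop (peek ops) res)))
       :binop  (if (>= (get-in opinfo [(first ops) :prec] 0) ; default 0 where CLJS gets nil
                       (get-in opinfo [val :prec] 0))
                 (recur toks (pop ops) (apply-binop (peek ops) res))
                 (recur (next toks) (conj ops val) res))
       :end    (if (empty? ops)
                 (first res)
                 (recur toks (pop ops) (apply-binop (peek ops) res)))))))

;; hand-written tokens for "(4+x)*2"
(shunting-yard [[:lparen "("] [:num 4] [:binop "+"] [:var "x"]
                [:rparen ")"] [:binop "*"] [:num 2] [:end ""]])
;; => ("*" ("+" 4 :var) 2)
```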

All of the above was so that I could turn this:

"4+x=5"


Into this:

("=" ("+" 4 :var) 5)


Yippee.

Honestly this post is already so long, I think I’ll put the rest of the project in a different post.