Recently, to get back into programming a little, I've decided to apply what I've been learning in Calculus AB through C++ and create a derivator (a program that finds the derivative of a function). The ultimate goal is to present it with a function and have it apply a bunch of rules and known derivatives (just as I would when solving by hand) to find the derivative of the given function.

My first foray into the subject started with an incredibly simple lexerless and parserless system that only accepted input of the form `Kx^L`, for K and L between 1 and 9. It then used the power rule to find the derivative, `KLx^(L-1)`, through immensely basic means: it simply looked for the caret (`^`) and found the single-digit positive number next to it, then found the `x` and the single-digit positive number before it, and did the necessary math. While this worked beautifully (for a very limited set of inputs), the way I programmed it meant that it was not an easily scalable solution. I'm not trying to create Mathway, but I don't want to implement only the power rule, either. So, I decided I needed to write a lexer and parser, just as Mathway and Symbolab surely have, though far less sophisticated ones.

My current setup for the lexer (I have yet to get to the parser) is relatively simple. Each significant item (a variable, a number, a caret, etc.) becomes a token with a "key," drawn from an enum (e.g. `PLUS` or `NEGATIVE`), and a value, which is a string representation of the character in question. Each input string (e.g. `4x^(-2sin(|x|))`) is then broken into a series of keyed tokens, which will then be interpreted by the parser (which I have yet to build).

Currently, the lexer can key every single-character item, like the caret and parentheses. What is more difficult is lexing multi-character items, e.g. numbers. My current system is a little difficult to explain, but while writing this I have actually had a revelation as to how simple it can be, so I will detail my thoughts here. Currently, the lexer works on a two-pass system: the first pass captures single-digit characters, and the second pass is meant to be a kind of pre-parser, doing things like discerning between a minus sign and a negative sign, and also finding multi-character keys. However, I just now realized that I could simply glue tokens together in the second pass; e.g., if the lexer sees five `NUMBER` keys next to each other, it can simply tie them together into one multi-digit number.
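That gluing step can be sketched as a single pass over the first pass's output: whenever a `NUMBER` token directly follows another `NUMBER` token, concatenate it onto the previous one. Again, the `Token`/`Key` names are my own illustration under that assumption:

```cpp
#include <string>
#include <vector>

// Illustrative token types; a real lexer would have more keys.
enum class Key { NUMBER, OTHER };

struct Token {
    Key key;
    std::string value;
};

// Second-pass helper: fuse runs of adjacent single-digit NUMBER
// tokens into one multi-digit NUMBER token.
std::vector<Token> mergeNumbers(const std::vector<Token>& in) {
    std::vector<Token> out;
    for (const Token& t : in) {
        if (t.key == Key::NUMBER && !out.empty() && out.back().key == Key::NUMBER)
            out.back().value += t.value;  // glue onto the previous number
        else
            out.push_back(t);
    }
    return out;
}
```

So the digits of `123`, captured as three separate `NUMBER` tokens in the first pass, come out of `mergeNumbers` as the single token `"123"`, while a non-number token in between keeps two runs of digits apart.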