Recursion is ubiquitous in computation. Recursive functions in programming languages are typically difficult to reason about, since one must understand the control flow through arbitrarily many instances of the recursive calls; for example, Conway proved that a generalization of the Collatz problem embeds the halting problem. In this work, we study the analysis of pure \emph{multiply recursive functions}: functions with no side effects that call themselves directly more than once. Although a restricted form of recursion, pure multiply recursive functions include many important functions in mathematics and computer science, such as binomial coefficients and Stirling numbers of the second kind. Naively implemented, many of these functions exhibit exponential time complexity. Two techniques, memoization and matrix encoding, can make certain pure recursive functions run in $O(n^m)$ or $O(\log(n))$ time rather than the $O(r^n)$ of the naive recursive implementation. Unfortunately, the techniques used in mainstream optimizing compilers such as GHC, GCC, and Clang provide only constant-factor improvements, leaving the functions exponential.
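For concreteness, the following minimal Python sketch (illustrative only, not code from this paper) contrasts the naive exponential-time Fibonacci with the two standard techniques just mentioned: memoization, which runs in $O(n)$ additions, and matrix encoding, which needs only $O(\log n)$ products of $2\times 2$ matrices.

\begin{verbatim}
from functools import lru_cache

def fib_naive(n):
    # Naive double recursion: Theta(phi^n) calls.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoization: each subproblem solved once, O(n) time and O(n) space.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_matrix(n):
    # Matrix encoding: [[1,1],[1,0]]^n yields F(n) with O(log n) products.
    def mul(a, b):
        return ((a[0][0]*b[0][0] + a[0][1]*b[1][0],
                 a[0][0]*b[0][1] + a[0][1]*b[1][1]),
                (a[1][0]*b[0][0] + a[1][1]*b[1][0],
                 a[1][0]*b[0][1] + a[1][1]*b[1][1]))
    result, base = ((1, 0), (0, 1)), ((1, 1), (1, 0))
    while n:
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[0][1]
\end{verbatim}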
In this paper, we present two key contributions. For a pure recursive $f(n)$, we exploit a lattice structure on prior values to store only $O(n^{m-1})$ of the $O(n^m)$ prior values required by plain memoization. For a generalized linearly recursive $f(n)$, we use a polynomial-based matrix multiplication technique to compute $f(n)$ in $O(\log(n)\,r\log(r))$ time instead of $O(\log(n)\,r^\omega)$, where $\omega \geq 2$ is the matrix multiplication exponent and $r$ is the order of the recurrence, i.e., the number of earlier values on which each call depends. For a specific function, $r$ is fixed, so this reduces the number of multiplications needed. We evaluated these techniques by applying them to the Fibonacci function, perhaps the most widely used example of recursion in textbooks. Our preliminary results are promising: our general method was 2\% faster than plain memoization (its main advantage being space usage), and our linear method was 17\% faster than plain binary matrix exponentiation.
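As background intuition for the linear case, the sketch below (a standard approach, not this paper's implementation; the function name and interface are hypothetical) computes the $n$th term of an order-$r$ linear recurrence by raising $x$ to the $n$th power modulo the characteristic polynomial. The schoolbook multiplication shown costs $O(r^2)$ per step; substituting an FFT-based polynomial multiply brings each of the $O(\log n)$ steps down to $O(r\log(r))$, matching the bound discussed above.

\begin{verbatim}
def linear_recurrence(coeffs, init, n):
    # Illustrative helper (hypothetical name): f(k) = sum_i coeffs[i]*f(k-1-i),
    # with f(0..r-1) = init.  Computes f(n) via x^n modulo the characteristic
    # polynomial.  poly_mul below is schoolbook O(r^2); an FFT-based multiply
    # would make each of the O(log n) steps O(r log r).
    r = len(coeffs)
    if n < r:
        return init[n]

    def poly_mod(p):
        # Reduce p modulo x^r - coeffs[0]*x^(r-1) - ... - coeffs[r-1].
        p = list(p)
        for i in range(len(p) - 1, r - 1, -1):
            c, p[i] = p[i], 0
            for j in range(r):
                p[i - 1 - j] += c * coeffs[j]
        return p[:r]

    def poly_mul(a, b):
        prod = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] += ai * bj
        return poly_mod(prod)

    # Binary exponentiation of x modulo the characteristic polynomial.
    result, base, e = [1], poly_mod([0, 1]), n
    while e:
        if e & 1:
            result = poly_mul(result, base)
        base = poly_mul(base, base)
        e >>= 1
    result += [0] * (r - len(result))
    return sum(result[i] * init[i] for i in range(r))

# Fibonacci as an order-2 linear recurrence: f(k) = f(k-1) + f(k-2).
print(linear_recurrence([1, 1], [0, 1], 10))   # 55
\end{verbatim}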