
We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space — the number of beta-reduction steps and the size of the largest term in a computation — as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas from 1984. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations terminating in (encodings of) “true” or “false”. The simulation yields that standard complexity classes like P, NP, PSPACE, or EXP can be defined solely in terms of the λ-calculus, but does not cover sublinear time or space.
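To make the two measures concrete, here is a minimal sketch of the weak call-by-value λ-calculus in Haskell (our own illustration, not the paper's verified development; all names such as Term, step, and run are hypothetical). Time is the number of beta steps performed, and space is the size of the largest term encountered along the way:

```haskell
-- Terms of the λ-calculus with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term deriving Show

-- The size of a term, i.e. the space cost of one computation state.
size :: Term -> Int
size (Var _)   = 1
size (Lam t)   = 1 + size t
size (App s t) = 1 + size s + size t

-- In the weak calculus, values are exactly the abstractions.
isVal :: Term -> Bool
isVal (Lam _) = True
isVal _       = False

-- Substitution of a closed value for index n (no lifting needed).
subst :: Int -> Term -> Term -> Term
subst n v (Var m) | m == n    = v
                  | otherwise = Var m
subst n v (Lam t)   = Lam (subst (n + 1) v t)
subst n v (App s t) = App (subst n v s) (subst n v t)

-- One weak call-by-value step, left-to-right, never under a binder.
step :: Term -> Maybe Term
step (App (Lam s) t) | isVal t = Just (subst 0 t s)
step (App s t) | not (isVal s) = (`App` t) <$> step s
               | otherwise     = App s <$> step t
step _ = Nothing

-- Run to normal form: (beta steps, max term size, result)
-- i.e. (time measure, space measure, result).
run :: Term -> (Int, Int, Term)
run t = go 0 (size t) t
  where go k m u = case step u of
          Nothing -> (k, m, u)
          Just u' -> go (k + 1) (max m (size u')) u'
```

Reduction is weak because step never descends under a Lam, and call-by-value because a redex fires only once its argument is a value.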

Note that our measures still have the well-known size explosion property, where the space measure of a computation can be exponentially bigger than its time measure. However, our result implies that this exponential gap disappears once complexity classes are considered instead of concrete computations.
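For a concrete instance of size explosion, consider the duplicator c = λx.λy. y x x: applied to a value v, it reduces in a single beta step to λy. y v v, roughly doubling the size of v. Iterating it with the sketch above (again, our illustration) makes the exponential gap between the two measures visible:

```haskell
-- Size explosion: each beta step roughly doubles the value's size.
c, idT :: Term
c   = Lam (Lam (App (App (Var 0) (Var 1)) (Var 1)))  -- λx.λy. y x x
idT = Lam (Var 0)                                     -- λx. x

explode :: Int -> Term
explode n = iterate (App c) idT !! n   -- c (c (... (c (λx. x))))

-- run (explode n) performs exactly n beta steps, yet the resulting
-- value, and hence the space measure, has size Θ(2^n).
```

Here the time measure is n while the space measure is Θ(2^n); the result above says this gap is harmless once one moves from concrete computations to complexity classes.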

We consider this result a first step toward solving the long-standing open problem of whether the natural measures for time and space of the λ-calculus are reasonable. Our proof for the weak call-by-value λ-calculus is the first reasonability proof for a functional language that covers both time and space with respect to natural measures, and it enables the formal verification of complexity-theoretic proofs concerning complexity classes, both on paper and in proof assistants.

The proof idea relies on a hybrid of two simulation strategies of reductions in the weak call-by-value λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in time, but the space consumption might require an additional factor of log n, which is essentially due to the size of the pointers required for this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
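As a rough illustration of the second, heap-based strategy (our sketch, not the machines verified in the paper): substitution is never executed; instead a term is evaluated in an environment of heap addresses, every computed value is allocated once, and it is shared via its address thereafter. Since an address into a heap of n cells takes Θ(log n) bits, this is where the extra log n factor in space comes from.

```haskell
-- A sketch of the heap-based strategy, reusing Term from above.
-- Substitution is delayed: a value is a closure, i.e. code paired
-- with an environment of heap addresses, so terms are shared,
-- never copied.
type Addr = Int                  -- a pointer: Θ(log heapsize) bits
data Clos = Clos Term [Addr]     -- code plus its environment
type Heap = [Clos]               -- address = position in the list

alloc :: Heap -> Clos -> (Heap, Addr)
alloc h c = (h ++ [c], length h)

-- Big-step weak call-by-value evaluation, threading the heap.
eval :: Heap -> Term -> [Addr] -> (Heap, Clos)
eval h (Var n)   env = (h, h !! (env !! n))   -- dereference a pointer
eval h (Lam b)   env = (h, Clos (Lam b) env)  -- values are closures
eval h (App s t) env =
  let (h1, Clos (Lam b) env') = eval h  s env -- evaluate the function
      (h2, v)                 = eval h1 t env -- then the argument
      (h3, a)                 = alloc h2 v    -- share it on the heap
  in  eval h3 b (a : env')                    -- bind by address only
```

In this style a value like the exploding one above is represented in linear space, since the two occurrences created by a duplicator point to the same heap cell; the price is the pointer overhead, which the paper's space-aware interleaving avoids paying throughout.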

Wed 22 Jan (GMT-06:00)

Research Papers: Complexity / Decision Procedures
10:30 - 11:35 at Ile de France III (IDF III)
Chair(s): Roopsha Samanta (Purdue University)

10:30 - 10:51, Talk
Yannick Forster (Saarland University), Fabian Kunze (Saarland University), Marc Roth (Saarland University and MMCI; Merton College, Oxford University)

10:51 - 11:13, Talk
Yotam Feldman (Tel Aviv University), Neil Immerman (University of Massachusetts, Amherst), Mooly Sagiv (Tel Aviv University), Sharon Shoham (Tel Aviv University)

11:13 - 11:35, Talk
Parosh Aziz Abdulla (Uppsala University, Sweden), Mohamed Faouzi Atig (Uppsala University, Sweden), Rojin Rezvan (Sharif University)