Fri 24 Jan 2020, 14:43 - 15:05, at Ile de France III (IDF III) - Language Design. Chair(s): Amin Timany

Nested parallelism has proved to be a popular approach for programming the rapidly expanding range of multicore computers. It allows programmers to express parallelism at a high level and relies on a run-time system and a scheduler to deliver efficiency and scalability. As a result, many programming languages and language extensions that support nested parallelism have been developed, including for C/C++, Java, Haskell, and ML. Yet writing efficient and scalable nested-parallel programs remains challenging, primarily due to difficult concurrency bugs arising from destructive updates or effects. For decades, researchers have argued that functional programming can simplify writing parallel programs by allowing greater control over effects, but functional programs continue to underperform in comparison to parallel programs written in lower-level languages. The fundamental difficulty with functional languages is their high demand for memory, and this demand only grows with parallelism.
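
To make the programming model concrete, here is a minimal nested-parallel sketch in Standard ML (illustrative only, not taken from the paper). The par combinator below is a hypothetical stand-in, defined sequentially so the snippet runs in any SML system; in a parallel implementation such as the one the paper builds on MLton, an analogous primitive would evaluate both branches concurrently under a fork-join scheduler.

    (* Hypothetical fork-join combinator: runs both thunks and returns their
       results as a pair. Defined sequentially here so the example compiles in
       any Standard ML system; a parallel runtime would schedule the two calls
       on different processors. *)
    fun par (f, g) = (f (), g ())

    (* Nested parallelism: each recursive call may itself fork further work,
       producing a tree of parallel tasks managed by the scheduler. *)
    fun fib n =
      if n < 2 then n
      else
        let val (a, b) = par (fn () => fib (n - 1), fn () => fib (n - 2))
        in a + b
        end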

In this paper, we identify a memory property, called disentanglement, of nested-parallel programs, and propose memory management techniques for improved efficiency and scalability. Disentanglement allows for (destructive) effects as long as concurrently executing threads do not gain knowledge of the memory objects allocated by each other. We formally define disentanglement by considering an ML-like higher-order language with mutable references and presenting a dynamic semantics for it that enables reasoning about computation graphs of nested parallel programs. Based on this graph semantics, we formalize a classic correctness property—determinacy race freedom—and prove that it implies disentanglement. This establishes that disentanglement applies to a relatively broad class of parallel programs. We then propose memory management techniques for nested-parallel programs that take advantage of disentanglement for improved efficiency and scalability. We show that these techniques are practical by extending the MLton compiler for Standard ML to support this form of nested parallelism. Our empirical evaluation shows that our techniques are efficient and scale well.
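
As an illustrative sketch (again not an example from the paper), the code below contrasts a disentangled use of effects with an entangled one, reusing a hypothetical sequential par as above. In the first function each branch mutates only a reference it allocated itself; in the second, one branch publishes a freshly allocated object through a shared reference that its concurrently running sibling may read, which is exactly the kind of knowledge transfer that disentanglement rules out. Note that the second function also contains a determinacy race on the shared reference, consistent with the paper's result that race freedom implies disentanglement.

    (* Hypothetical sequential stand-in for a fork-join primitive. *)
    fun par (f, g) = (f (), g ())

    (* Disentangled: effects are present, but each branch allocates and
       mutates only its own reference, so neither branch ever observes an
       object allocated by the other. *)
    fun disentangled () =
      par (fn () => let val r = ref 0 in r := 1; !r end,
           fn () => let val s = ref 0 in s := 2; !s end)

    (* Entangled: the left branch stores a freshly allocated ref cell into a
       shared reference, and the right branch may read that cell while both
       run concurrently (this is also a determinacy race on shared). *)
    fun entangled () =
      let
        val shared : int ref option ref = ref NONE
      in
        par (fn () => shared := SOME (ref 42),
             fn () => case !shared of SOME r => !r | NONE => 0)
      end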

Slides: popl.pdf (619 KiB)

Fri 24 Jan
Times are displayed in time zone: Saskatchewan, Central America.

14:00 - 15:05: Language Design (Research Papers) at Ile de France III (IDF III)
Chair(s): Amin Timany (imec-DistriNet, KU Leuven)

14:00 - 14:21  Talk (Research Papers)
Ralf Jung (MPI-SWS), Hoang-Hai Dang (MPI-SWS), Jeehoon Kang (KAIST), Derek Dreyer (MPI-SWS)

14:21 - 14:43  Talk (Research Papers)
Michael Greenberg (Pomona College), Austin J. Blatt (Puppet Labs)

14:43 - 15:05  Talk (Research Papers)
Sam Westrick (Carnegie Mellon University), Rohan Yadav (Carnegie Mellon University), Matthew Fluet (Rochester Institute of Technology), Umut A. Acar (Carnegie Mellon University)