Date(s) - 12/06/2014
2:00 pm - 3:00 pm
Seminar: “Design Frontiers in Parallel Languages: the role of determinism” (Room 1.33a, Jack Cole Building).
Abstract: Constraints can be a source of inspiration; their role in creative art forms is well-recognized, with poetry as the quintessential example. We argue that the requirement of determinism can play the same role in the design of parallel programming languages. This talk describes a series of design explorations that begin with determinism as the constraint, introduce the concept of monotonically-changing concurrent data structures (LVars), and end in some interesting places — flirting with the boundaries to yield quasideterminism, and revealing synergies between parallel effects, such as cancelation and memoization, when used in a deterministic context.
Our goal is for guaranteed-deterministic parallel programming to be practical and efficient for a wide range of applications. One challenge is simply to integrate the known forms of deterministic-by-construction parallelism, which we overview in this talk: Kahn process networks, pure data-parallelism, single-assignment languages, functional programming, and type-effect systems that enforce limited access to state by threads. My group, together with many others around the world, is developing libraries such as LVish and Accelerate that add these capabilities to the programming language Haskell. It is early days yet, but it is already possible to build programs that mix concurrent, lock-free data structures, blocking data-flow, callbacks, and GPU-based data-parallelism, without ever compromising determinism or referential transparency.
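To give a flavour of the LVar idea described in the abstract, here is a minimal sketch in Haskell using the standard `stm` package: a variable whose value only grows (writes take a join, here `max`), read only through a blocking threshold. The names `MonotonicVar`, `joinPut`, and `thresholdRead` are illustrative inventions for this sketch, not the actual LVish API.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forM_)

-- A variable whose value can only grow over time.
newtype MonotonicVar = MonotonicVar (TVar Int)

newMonotonicVar :: Int -> IO MonotonicVar
newMonotonicVar x = MonotonicVar <$> newTVarIO x

-- A write merges with the current value via `max` (a join in the
-- lattice of Ints). Concurrent writes commute, which is what makes
-- the eventual state independent of scheduling.
joinPut :: MonotonicVar -> Int -> IO ()
joinPut (MonotonicVar v) x = atomically (modifyTVar' v (max x))

-- A threshold read blocks (via STM `retry`) until the value crosses
-- the threshold; it never observes a raw intermediate value, only
-- the fact "at least t", so races cannot leak into the result.
thresholdRead :: MonotonicVar -> Int -> IO ()
thresholdRead (MonotonicVar v) t = atomically $ do
  x <- readTVar v
  if x >= t then pure () else retry

main :: IO ()
main = do
  mv <- newMonotonicVar 0
  -- Writers race in arbitrary order, but because `max` is
  -- commutative and idempotent the outcome is deterministic.
  forM_ [3, 1, 7, 5] $ \x -> forkIO (joinPut mv x)
  thresholdRead mv 7   -- blocks until some writer has put >= 7
  putStrLn "threshold reached"
```

The same shape, generalised from `Int`-with-`max` to arbitrary lattices, is what lets LVar-style programs mix concurrent writes freely while keeping every observable answer deterministic.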
Biography: Ryan R. Newton emerged from the wetlands of South Florida to receive his Ph.D. in computer science from MIT in 2009, and from 2009 through 2011 conducted research on parallel programming tools as part of Intel’s Developer Products Division. In 2011, he joined Indiana University, where his research focuses on language-based approaches to the programming challenges posed by future architectures. To this end, he and his students have invented a new concurrent data abstraction (LVars) for deterministic parallelism, and are working on domain-specific compilers for array languages and distributed stream processing.