Correctness Meets Performance: From Agda to Futhark
In this paper we demonstrate a technique for developing high-performance applications
with strong correctness guarantees. Using a theorem prover, we derive a high-level
specification of the application that includes correctness invariants of our choice.
Then, within the same theorem prover, we implement an extraction of the
specified application into a high-performance language of our choice. Concretely,
we use Agda to specify a framework for reverse-mode automatic differentiation
that is focused on index-safe tensors. This framework comes
with an optimiser for tensor expressions and the ability to translate these
expressions into Futhark. We specify a canonical convolutional neural network
within the proposed framework, compute the derivatives needed for the training
phase, and then demonstrate that the generated code approaches the performance of
TensorFlow code when running on a GPU.
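
To give a flavour of what "index-safe tensors" means, here is a minimal, hypothetical Agda sketch (the names `Matrix`, `index`, `dot`, and `mvmul` are illustrative and not taken from the paper's framework): shapes are carried in the types via length-indexed vectors, so out-of-bounds indexing and shape mismatches are rejected at type-checking time.

```agda
open import Data.Nat using (ℕ)
open import Data.Fin using (Fin)
open import Data.Vec using (Vec; []; _∷_; lookup; map)
open import Agda.Builtin.Float using (Float; primFloatPlus; primFloatTimes)

-- Shape-indexed matrices: the dimensions are part of the type.
Matrix : ℕ → ℕ → Set
Matrix m n = Vec (Vec Float n) m

-- Index-safe access: i : Fin m and j : Fin n cannot be out of bounds.
index : ∀ {m n} → Matrix m n → Fin m → Fin n → Float
index a i j = lookup (lookup a i) j

-- Dot product of two vectors that the types force to have equal length.
dot : ∀ {n} → Vec Float n → Vec Float n → Float
dot []       []       = 0.0
dot (x ∷ xs) (y ∷ ys) = primFloatPlus (primFloatTimes x y) (dot xs ys)

-- Matrix-vector product whose type records that the shapes agree.
mvmul : ∀ {m n} → Matrix m n → Vec Float n → Vec Float m
mvmul a x = map (λ row → dot row x) a
```

In the paper's framework, the same discipline is applied to tensor expressions of arbitrary rank, and it is those expressions, together with their reverse-mode derivatives, that are optimised and extracted to Futhark.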
Tue 14 Oct (times shown in the Perth time zone)

| Time | Session / Talk | Track |
| --- | --- | --- |
| 10:50 - 12:05 | Clever Compilation, Orchid West. Chair: John Reppy (University of Chicago) | JFP First Papers / ICFP Papers |
| 10:50 (25m talk) | Compiling with Generating Functions | ICFP Papers |
| 11:15 (25m talk) | Correctness Meets Performance: From Agda to Futhark (Remote) | ICFP Papers |
| 11:40 (25m paper) | Domain-specific tensor languages, by Jean-Philippe Bernardy (University of Gothenburg, Sweden) and Patrik Jansson (Chalmers University of Technology and University of Gothenburg) | JFP First Papers |

