I created a new JIRA entry to move from
Exploiting GPUs can shorten the execution time of a Spark job and reduce the number of machines in a cluster. We are working to exploit GPUs effectively and easily on Spark at http://github.com/kiszk/spark-gpu. Our project page is http://kiszk.github.io/spark-gpu/. A design document is here.
Our ideas for exploiting GPUs are:
1. adding a new format for a partition in an RDD: a column-based structure stored in arrays, in addition to the current Iterator[T] format backed by Seq[T]
2. generating parallelized GPU native code that accesses data in the new format from a Spark application program, using an optimizer and code generator (similar to Project Tungsten) together with a pre-compiled library
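To make idea 1 concrete, here is a minimal sketch of what a column-based partition might look like. The names (`ColumnFormatPartition`, `fromIterator`, `Point`) are illustrative, not the actual spark-gpu API: the point is that each column is a contiguous primitive array that can be copied to the GPU in one transfer, with no per-element serialization.

```scala
// Hypothetical row type for illustration.
case class Point(x: Float, y: Float)

// Column-based partition: one primitive array per field. Each array is
// contiguous in memory, so a single memcpy per column suffices for a
// CPU-to-GPU copy (no per-element serialization/deserialization).
class ColumnFormatPartition(val xs: Array[Float], val ys: Array[Float])

object ColumnFormatPartition {
  // Convert the current row-based Iterator[T] format into columns.
  def fromIterator(it: Iterator[Point]): ColumnFormatPartition = {
    val rows = it.toArray
    new ColumnFormatPartition(rows.map(_.x), rows.map(_.y))
  }
}

object Demo extends App {
  val part = ColumnFormatPartition.fromIterator(
    Iterator(Point(1f, 2f), Point(3f, 4f)))
  println(part.xs.mkString(","))  // 1.0,3.0
  println(part.ys.mkString(","))  // 2.0,4.0
}
```

In this layout, a map over `x` touches only the `xs` array, which is also the access pattern that GPU threads want: adjacent threads reading adjacent elements (coalesced memory access).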
The motivation for idea 1 is to reduce the overhead of serializing/deserializing partition data when copying it between CPU and GPU. The motivation for idea 2 is to free application programmers from writing hardware-dependent code. We are working on idea 1 first (for idea 2, we currently need to write CUDA code by hand).
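Idea 2 can be sketched as a string-level code generator: given the body of a map expression, emit an equivalent CUDA kernel over the column arrays. This is only an assumption of what such a generator could look like (the `KernelGen` object and its `mapKernel` method are made up for this sketch), not the actual spark-gpu or Tungsten code generator.

```scala
// Illustrative sketch of idea 2: generate GPU native (CUDA) source for a
// per-element map over one column. The generator itself runs on the JVM;
// the emitted string would be compiled and launched by a GPU runtime.
object KernelGen {
  // `expr` is a CUDA expression over `in[i]`, e.g. "in[i] * 2.0f".
  def mapKernel(name: String, expr: String): String =
    s"""__global__ void $name(const float *in, float *out, int n) {
       |  int i = blockIdx.x * blockDim.x + threadIdx.x;
       |  if (i < n) out[i] = $expr;
       |}""".stripMargin
}

object GenDemo extends App {
  // Emits a kernel that doubles each element of a column.
  println(KernelGen.mapKernel("mapTimesTwo", "in[i] * 2.0f"))
}
```

A real implementation would derive `expr` from the Spark operator tree (as Tungsten's code generator does for CPU code) rather than take it as a string, so application programmers never see the CUDA layer.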