Spark / SPARK-12620

Proposal of GPU exploitation for Spark


Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Component: Spark Core

    Description

      I created this new JIRA entry to move the discussion from SPARK-3875.

      Exploiting GPUs can shorten the execution time of a Spark job and reduce the number of machines in a cluster. We are working to exploit GPUs on Spark effectively and easily at http://github.com/kiszk/spark-gpu. Our project page is http://kiszk.github.io/spark-gpu/. A design document is here.

      Our ideas for exploiting GPUs are:

      1. adding a new format for a partition in an RDD: a column-based structure in array form, in addition to the current row-based Iterator[T] format backed by Seq[T]
      2. generating parallelized GPU native code that accesses data in the new format from a Spark application program, using an optimizer and code generator (similar to Project Tungsten) together with a pre-compiled library

      The motivation for idea 1 is to reduce the overhead of serializing/deserializing partition data when copying it between CPU and GPU. The motivation for idea 2 is to spare application programmers from writing hardware-dependent code. We are working on idea 1 first (idea 2 requires writing CUDA code by hand for now).
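      Idea 1 can be pictured with a small sketch. This is a hypothetical illustration, not the spark-gpu API: Point, ColumnPartition, toColumns, and toRows are all made-up names.

```scala
// Hypothetical sketch of idea 1: a column-based partition layout.
// None of these names come from spark-gpu; they only illustrate the
// row-format vs. column-format distinction described above.

final case class Point(x: Double, y: Double)

// Column-based view of a partition: one contiguous primitive array per
// field, suitable for a single bulk copy to GPU device memory, with no
// per-element serialization.
final case class ColumnPartition(xs: Array[Double], ys: Array[Double])

// Convert the current row-based Iterator[T] partition format into columns.
def toColumns(rows: Iterator[Point]): ColumnPartition = {
  val buf = rows.toArray
  ColumnPartition(buf.map(_.x), buf.map(_.y))
}

// Convert back to the row-based format expected by ordinary RDD operations.
def toRows(cols: ColumnPartition): Iterator[Point] =
  cols.xs.indices.iterator.map(i => Point(cols.xs(i), cols.ys(i)))
```

      The point of the column layout is that each field lives in one contiguous array, so a CPU-to-GPU transfer is a plain memory copy rather than element-by-element serialization.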

      This prototype achieved a 3.15x performance improvement for logistic regression (SparkGPULR, from the examples) on a 16-thread IvyBridge box with an NVIDIA K40 GPU card, compared to the same box without a GPU card.
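      As a rough illustration of the kind of code idea 2 targets, here is one logistic-regression gradient accumulation written as a flat, branch-free loop over the column arrays. This is purely illustrative: lrGradientStep is a made-up name, and the actual generated GPU code is not shown here.

```scala
// Hedged sketch: a tight loop over primitive column arrays (idea 1's
// format) of the shape a code generator (idea 2) could map to a GPU
// kernel, one thread per index i.

// Accumulate the log-loss gradient for one weight of logistic regression,
// with labels y in {-1, +1}.
def lrGradientStep(xs: Array[Double], ys: Array[Double], w: Double): Double = {
  var grad = 0.0
  var i = 0
  while (i < xs.length) {
    // d/dw log(1 + exp(-y*w*x)) = (sigmoid(y*w*x) - 1) * y * x
    grad += (1.0 / (1.0 + math.exp(-ys(i) * w * xs(i))) - 1.0) * ys(i) * xs(i)
    i += 1
  }
  grad
}
```

      A loop like this has no boxed objects and no iterator calls, which is what makes it amenable to straightforward translation into parallel GPU native code.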

      You can download the pre-built binaries for x86_64 and ppc64le from here. You can also run this on Amazon EC2 by following the procedure.


            People

              Assignee: Unassigned
              Reporter: Kazuaki Ishizaki (kiszk)
              Votes: 0
              Watchers: 5
