
MAHOUT-2079: Investigate CUDA as a backend for in-core matrices.


Details

    • Type: Wish
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 0.14.1
    • Fix Version/s: classic-15.0
    • Component/s: None
    • Labels: None

    Description

      With NVIDIA card memory now exceeding what used to be the maximum for some machines (16 GB is common), it would be good to be able to dump a matrix into CUDA memory directly from a statement or assignment. Look into backing in-core matrices with CUDA device memory or CUDA shared memory.
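
      A minimal sketch (not part of the original issue) of what backing an in-core dense matrix with CUDA device memory could look like, using only standard CUDA runtime calls (cudaMalloc, cudaMemcpy, cudaFree). The DeviceBackedMatrix type is hypothetical and is not a Mahout API; in Mahout itself this would presumably sit behind the Samsara in-core matrix abstraction (e.g. via a JNI bridge) so that an assignment transparently moves the data onto the device.

      // Hypothetical sketch: an in-core dense matrix whose storage lives in CUDA device memory.
      // Compile with nvcc; only standard CUDA runtime API calls are used.
      #include <cuda_runtime.h>
      #include <vector>
      #include <cstdio>

      struct DeviceBackedMatrix {
        int rows, cols;
        double* d_data;  // row-major buffer allocated in CUDA device memory

        DeviceBackedMatrix(int r, int c) : rows(r), cols(c), d_data(nullptr) {
          cudaMalloc((void**)&d_data, sizeof(double) * rows * cols);
        }

        // "Assignment" from a host matrix: copy the host buffer straight into device memory.
        void assignFromHost(const std::vector<double>& host) {
          cudaMemcpy(d_data, host.data(), sizeof(double) * rows * cols,
                     cudaMemcpyHostToDevice);
        }

        // Bring the data back when a host-side view is needed.
        void copyToHost(std::vector<double>& host) const {
          cudaMemcpy(host.data(), d_data, sizeof(double) * rows * cols,
                     cudaMemcpyDeviceToHost);
        }

        ~DeviceBackedMatrix() { cudaFree(d_data); }
      };

      int main() {
        const int n = 1024;
        std::vector<double> host(n * n, 1.0);

        DeviceBackedMatrix m(n, n);   // matrix now backed by ~8 MB of device memory
        m.assignFromHost(host);       // a single "assignment" moves it onto the GPU
        m.copyToHost(host);

        printf("round-tripped %d x %d matrix through CUDA device memory\n", n, n);
        return 0;
      }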

Attachments

Activity

People

    Assignee: Andrew Palumbo
    Reporter: Andrew Palumbo
    Votes: 0
    Watchers: 2

Dates

    Created:
    Updated:
    Resolved: