Mahout / MAHOUT-1506

Creation of affinity matrix for spectral clustering


    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 1.0.0
    • Fix Version/s: 0.10.0
    • Component/s: Clustering
    • Labels:
      None

      Description

      I wanted to get this discussion going, since I think this is a critical blocker for any kind of documentation update on spectral clustering (I can't update the documentation until the algorithm is useful, and it won't be useful until there's a built-in method for converting raw data to an affinity matrix).

Namely, I'm wondering what kind of "raw" data this algorithm should expect (anything that k-means expects, basically?), and what data structures are associated with it. I've created a proof-of-concept for how pairwise affinity generation could work.

      https://github.com/magsol/Hadoop-Affinity

It's a two-step job, but if the input data format provides 1) the total number of data points, and 2) each data point's index in the overall set, then the first job can be scrapped entirely and affinity generation will consist of a single MR task.
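The proof-of-concept repo is a Hadoop job, but the per-pair computation it distributes can be sketched in a few lines. The sketch below assumes an RBF (Gaussian) kernel with a `sigma` bandwidth parameter; neither the kernel choice nor the parameter name comes from the issue itself, they are just the most common convention for spectral-clustering affinity matrices.

```python
import numpy as np

def rbf_affinity(points, sigma=1.0):
    """Dense pairwise affinity matrix using an RBF (Gaussian) kernel.

    A[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)), with the diagonal
    zeroed out, following the usual spectral-clustering convention.
    """
    points = np.asarray(points, dtype=float)
    # Pairwise squared Euclidean distances via broadcasting:
    # (n, 1, d) - (1, n, d) -> (n, n, d)
    diffs = points[:, None, :] - points[None, :, :]
    sq_dists = np.einsum('ijk,ijk->ij', diffs, diffs)
    affinity = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(affinity, 0.0)
    return affinity
```

In the single-MR formulation described above, each mapper would emit one row of this matrix: knowing its point's index i and the total count n, it computes the affinity of x_i against every x_j and writes row i keyed by i, which is exactly why those two pieces of metadata let the first job be dropped.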

      (discussions on Spark / H2O pending, of course)

      Mainly this is an engineering problem at this point. Let me know your thoughts and I'll get this done (I'm out of town the next 10 days for my wedding/honeymoon, will get to this on my return).

        Attachments

          Activity

            People

            • Assignee:
              magsol Shannon Quinn
              Reporter:
              magsol Shannon Quinn
            • Votes:
              0
              Watchers:
              2

              Dates

              • Created:
                Updated:
                Resolved: