Bigtop / BIGTOP-1414

Add Apache Spark implementation to BigPetStore



    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: backlog
    • Fix Version/s: 1.0.0
    • Component/s: blueprints
    • Labels: None


      Currently we only process data with Hadoop; now it's time to add Spark to the BigPetStore application. This will demonstrate the difference between a MapReduce-based Hadoop implementation of a big data app and a Spark one.

      We will need to:

      • Update the graphviz arch.dot to diagram Spark as a new path.
      • Add a Spark job to the existing code, in a new package. It should reuse the existing Scala-based generator, but run it inside a Spark job rather than in a Hadoop InputSplit.
      • The job should output to an RDD, which can then be serialized to disk or fed into the next Spark job.
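The generation step above could look roughly like this. A minimal sketch only: `BigPetStoreDataGenerator` and its `generate()` method are hypothetical stand-ins for the existing Scala generator's real class names, and the seed count and partitioning are arbitrary.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object BPSSparkDriver {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("BigPetStore-Spark"))

    // Instead of one Hadoop InputSplit per seed, distribute the seeds
    // across the cluster and run the generator inside a Spark map.
    val seeds = sc.parallelize(0L until 100L, numSlices = 10)
    val transactions = seeds.flatMap { seed =>
      new BigPetStoreDataGenerator(seed).generate() // hypothetical API
    }

    // The resulting RDD can be serialized to disk ...
    transactions.map(_.toString).saveAsTextFile(args(0))
    // ... or cached in the same driver and handed to the next job.
    sc.stop()
  }
}
```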

      So the next Spark job should:

      • group the data and write product summaries to a local file, and
      • run a product recommender against the input data set.
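The grouping step might be sketched as below, assuming each transaction record exposes a `product` field (an assumption; the real record schema is defined by the generator). The recommender choice is deliberately left open here; MLlib's ALS is one possibility, not something settled in this ticket.

```scala
import java.io.PrintWriter

// Count transactions per product with a classic pair-RDD reduce.
val counts = transactions
  .map(t => (t.product, 1L))   // assumes a `product` field on the record
  .reduceByKey(_ + _)

// Collect the (small) summary to the driver and write it to a local file.
val out = new PrintWriter("product-summaries.txt")
counts.collect().foreach { case (product, n) => out.println(s"$product\t$n") }
out.close()

// A recommender (e.g. MLlib ALS, one possible choice) could then be
// trained against the same transactions RDD in a follow-on job.
```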

      We want the jobs to be runnable individually or as a single combined job, to leverage the RDD paradigm.
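That modularity falls out naturally if each stage is a function over RDDs; the driver below is a sketch with hypothetical helper names, showing how the stages could either read their input from disk (standalone mode) or share one cached RDD (single-job mode).

```scala
// Sketch: each stage consumes and produces RDDs, so the stages can run
// standalone or be chained in one driver without re-generating the data.
val transactions = generateTransactions(sc).cache() // hypothetical helper

summarizeProducts(transactions)  // stage 1: product summaries
trainRecommender(transactions)   // stage 2: product recommender
```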

      So it will be interesting to see how the code is architected. Let's start the planning in this JIRA. I have some code I've informally hacked together; maybe I can attach an initial patch just to start a dialog.


        Attachments: chart.png (54 kB, Jörn Franke)

        Assignee: jayunit100 jay vyas
        Reporter: jayunit100 jay vyas