
SPARK-21669: Internal API for collecting metrics/stats during FileFormatWriter jobs


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 2.3.0
    • Component/s: SQL
    • Labels: None

    Description

      It would be useful to have some infrastructure in place for collecting custom metrics or statistics on data on the fly, as it is being written to disk.

      This was inspired by the work in SPARK-20703, which added simple metrics collection for data write operations, such as numFiles, numPartitions, numRows. Those metrics are first collected on the executors and then sent to the driver, which aggregates and posts them as updates to the SQLMetrics subsystem.
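
      As a rough illustration, the generalized infrastructure could split along the same executor/driver boundary. The sketch below is illustrative only; the trait and method names (WriteTaskStats, WriteTaskStatsTracker, WriteJobStatsTracker) and their signatures are assumptions, not a committed API.

{code:scala}
// Illustrative sketch only: names and signatures are assumptions, not a
// committed Spark API.

/** Opaque, serializable container for the stats collected by one write task. */
trait WriteTaskStats extends Serializable

/** Executor side: one instance per write task, notified as data is written. */
trait WriteTaskStatsTracker {
  def newPartition(partitionValues: Any): Unit
  def newFile(filePath: String): Unit
  def newRow(row: Any): Unit
  /** Called when the task finishes; the result is shipped to the driver. */
  def getFinalStats(): WriteTaskStats
}

/** Driver side: creates per-task trackers and aggregates their results,
 *  e.g. by posting them as updates to the SQLMetrics subsystem. */
trait WriteJobStatsTracker extends Serializable {
  def newTaskInstance(): WriteTaskStatsTracker
  def processStats(stats: Seq[WriteTaskStats]): Unit
}
{code}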

      This mechanism can be generalized into a pluggable interface, which could later serve other purposes: for example, automatic maintenance of cost-based optimizer (CBO) statistics during "INSERT INTO <table> SELECT ..." operations, so that users no longer need to run "ANALYZE TABLE <table> COMPUTE STATISTICS" afterwards, avoiding an extra full-table scan.
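
      To make the CBO use case concrete, here is a hypothetical tracker built on the traits sketched above. The class names and the catalog update are purely illustrative; a real implementation would hook into the session catalog.

{code:scala}
// Hypothetical tracker for the CBO use case, built on the sketch above:
// it counts rows written during an INSERT and, on the driver, would fold
// the total into the table's catalog statistics.

case class RowCountStats(numRows: Long) extends WriteTaskStats

class RowCountTaskStatsTracker extends WriteTaskStatsTracker {
  private var numRows = 0L
  override def newPartition(partitionValues: Any): Unit = ()
  override def newFile(filePath: String): Unit = ()
  override def newRow(row: Any): Unit = numRows += 1
  override def getFinalStats(): WriteTaskStats = RowCountStats(numRows)
}

class CboStatsTracker extends WriteJobStatsTracker {
  override def newTaskInstance(): WriteTaskStatsTracker =
    new RowCountTaskStatsTracker
  override def processStats(stats: Seq[WriteTaskStats]): Unit = {
    val insertedRows = stats.map(_.asInstanceOf[RowCountStats].numRows).sum
    // A real implementation would fold insertedRows into the table's row
    // count in the catalog, sparing the user a separate ANALYZE TABLE run.
    println(s"inserted $insertedRows rows; catalog stats would be updated here")
  }
}
{code}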


People

    Assignee: Adrian Ionescu
    Reporter: Adrian Ionescu
    Votes: 0
    Watchers: 2
