CARBONDATA-3557

Support write Flink streaming data to Carbon


Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.0
    • Component/s: spark-integration
    • Labels: None

    Description

      Sometimes, users need to write Flink streaming data to Carbon with high concurrency and high throughput.

      The write process is:

      1. Write the Flink streaming data to the local file system of the Flink task node, using the Flink StreamingFileSink and the Carbon SDK (a sketch follows this list);
      2. Copy the local carbon data files to the carbon data store system, such as HDFS or S3;
      3. Generate a segment file and write it to ${tablePath}/load_details.
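      Step 1 can be illustrated with a minimal sketch (not the actual flink-integration module): a Flink RichSinkFunction that uses the CarbonData SDK CarbonWriter to write incoming rows as carbon data files on the task node's local disk. The local staging path and the two-column schema are assumptions for illustration.

{code:java}
import org.apache.carbondata.core.metadata.datatype.DataTypes;
import org.apache.carbondata.sdk.file.CarbonWriter;
import org.apache.carbondata.sdk.file.Field;
import org.apache.carbondata.sdk.file.Schema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

// Assumed sink: each subtask writes its rows into a local staging directory
// with the CarbonData SDK; the path and schema are illustrative.
public class LocalCarbonSink extends RichSinkFunction<String[]> {

  private transient CarbonWriter writer;

  @Override
  public void open(Configuration parameters) throws Exception {
    // One local staging directory per subtask on the Flink task node (step 1).
    String localOutputPath =
        "/tmp/carbon-staging/" + getRuntimeContext().getIndexOfThisSubtask();
    Schema schema = new Schema(new Field[] {
        new Field("id", DataTypes.STRING),
        new Field("value", DataTypes.INT)
    });
    writer = CarbonWriter.builder()
        .outputPath(localOutputPath)
        .withCsvInput(schema)
        .writtenBy("flink-streaming-job")
        .build();
  }

  @Override
  public void invoke(String[] row, Context context) throws Exception {
    // Each incoming record becomes one row of the local carbon data file.
    writer.write(row);
  }

  @Override
  public void close() throws Exception {
    // Finishes the local carbon data files; they can then be copied to
    // HDFS/S3 (step 2) and a segment file written to
    // ${tablePath}/load_details (step 3).
    if (writer != null) {
      writer.close();
    }
  }
}
{code}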

      Run "alter table ${tableName} collect segments" command on server, to compact segment files in ${tablePath}/load_details, and then move the compacted segment file to ${tablePath}/Metadata/Segments/,update table status file finally.


          People

            Assignee: Unassigned
            Reporter: niuge01 Zhi Liu


              Time Tracking

                Original Estimate: Not Specified
                Remaining Estimate: 0h
                Time Spent: 12h 50m