Flink / FLINK-19345

In Table File Sink, introduce streaming sink compaction

Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.12.0
    • Component/s: Table SQL / Runtime
    • Labels: None

    Description

      Users often complain that many small files are written out. Small files hurt the performance of file reading and the DFS, and can even affect the stability of the DFS.

      Target: 

      • Compact all files generated by this job within a single checkpoint.
      • With compaction, users can use a smaller checkpoint interval, even down to seconds.
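
As a rough sketch of the target behavior (this is not Flink's implementation; the function name and the size threshold mirroring the `compaction.file-size` option are assumptions for illustration), compacting the small files of one checkpoint while leaving already-large files alone could look like:

```python
import os
import shutil

# Hypothetical threshold, analogous to the 'compaction.file-size' option.
COMPACTION_FILE_SIZE = 128 * 1024 * 1024  # 128 MB

def compact_checkpoint_files(files, target_path, threshold=COMPACTION_FILE_SIZE):
    """Merge the small files produced within one checkpoint into a single
    target file; files already at or above the threshold are passed through
    untouched. Returns (compacted_target_or_None, passed_through_files)."""
    small = [f for f in files if os.path.getsize(f) < threshold]
    big = [f for f in files if os.path.getsize(f) >= threshold]
    if len(small) < 2:
        # Zero or one small file: nothing worth merging.
        return None, big + small
    with open(target_path, "wb") as out:
        for f in small:
            with open(f, "rb") as src:
                shutil.copyfileobj(src, out)  # append the small file's bytes
            os.remove(f)                      # drop the merged temp file
    return target_path, big
```

The key point is that compaction is scoped to one checkpoint, so a short checkpoint interval no longer multiplies the number of final files.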

      Document: https://docs.google.com/document/d/1cdlyoqgBq9yJEiHFBziimIoKHapQiEY2-0Tn8IF6G-c/edit?usp=sharing


          Activity

            lzljs3620320 Jingsong Lee added a comment -

            Hi aljoscha, pnowojski, kkl0u, what do you think? Related: FLINK-19356, FLINK-19357.

            CC: gaoyunhaii  maguowei

            kkl0u Kostas Kloudas added a comment - edited

            Hi lzljs3620320, as you may have noticed, there is an ongoing discussion about the new sink interface on the dev mailing list, and the target is to have it ready for 1.12.

            Instead of having parallel efforts, why not contribute to the discussion there so that our efforts are aligned? Opening a new discussion when there is already one open for similar problems slows down the process of reaching consensus, since parts of the discussion that has already happened may have to be repeated.

            I will have a look at the document attached here, but I think that participating in the discussion on the FLIP about the unified sinks would be a lot more helpful for the community.

            lzljs3620320 Jingsong Lee added a comment -

            Hi kkl0u, thanks for your reply. I have discussed the unified sink with Guowei many times offline, and in the unified sink discussion Guowei also mentioned the relevant design and considerations for file compaction. [1]

            At present, the conclusion for the unified sink is that Hive partition commit and file compaction are not supported for now. I think putting too much scope into the unified sink can lead to an overly complex design.

            [1]http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-143-Unified-Sink-API-td44602.html

            lzljs3620320 Jingsong Lee added a comment - edited

            Hi kkl0u, I think your other comment refers to FLINK-17505. The reason I created a new JIRA is that FLINK-17505 aims to provide a more general and complete set of merging solutions. I don't mean to split Table and DataStream; the Table-layer solution is built on the DataStream StreamingFileSink.
            However, the Table layer has some more advanced requirements: the FileWriter is an operator rather than a sink, there is Hive's partition committer, and there is small-file compaction. Table goes a little bit further, but I believe these requirements are reasonable.

            maguowei, gaoyunhaii, aljoscha and other community members are doing a lot of great work to provide more thorough abstractions and solutions, including [1] and the unified sink. I believe that positive communication between us can make things go more smoothly.

            [1]https://docs.google.com/document/d/1or7V024ptedwFzsmHbSzoJapq9Ah5L03SYPnnHTfoEg/edit#

            ZhuShang zhuxiaoshang added a comment -

            Hi lzljs3620320, if a temp file is already big enough, will it still be compacted? Or will big files be cut into small files according to the `compaction.file-size` property?

            ZhuShang zhuxiaoshang added a comment -

            I have looked at the code; a big file will just be renamed. lzljs3620320

            lzljs3620320 Jingsong Lee added a comment - edited

            Thank you for your in-depth exploration. Yes, for big files:

            • HDFS: just rename the file.
            • S3 (object store): copy the bytes from the temp file to a new file.
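
A minimal sketch of that distinction (hypothetical helper name; a local-filesystem stand-in for both cases, not Flink's actual file-system abstraction):

```python
import os
import shutil

def finalize_big_file(temp_path, final_path, supports_rename=True):
    """Move a temp file that already exceeds the compaction target size
    to its final location. On a filesystem with cheap rename (e.g. HDFS)
    this is a metadata-only operation; on an object store (e.g. S3),
    which has no real rename, the bytes are copied to the new object."""
    if supports_rename:
        os.rename(temp_path, final_path)        # HDFS-style: just rename
    else:
        shutil.copyfile(temp_path, final_path)  # S3-style: copy the bytes
        os.remove(temp_path)
    return final_path
```

Either way the big file is never split; `compaction.file-size` is only a target for merging small files, not a cap that cuts large ones.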
            ZhuShang zhuxiaoshang added a comment -

            Hi lzljs3620320, when `auto-compaction` is enabled, the bucket should not be committed before the compaction is done; otherwise compaction may fail. Correct me if I'm wrong.

            When I dug into the code, I found that the bucket is committed in `AbstractStreamingWriter#notifyCheckpointComplete` regardless of whether `auto-compaction` is enabled.
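
The ordering being argued for here can be sketched as a toy model (the class and method names are hypothetical and only echo, not reproduce, `AbstractStreamingWriter`):

```python
class StreamingWriterSketch:
    """Toy model of the commit ordering discussed above: when
    auto-compaction is enabled, compaction for a checkpoint runs
    before the bucket commit for that checkpoint."""

    def __init__(self, auto_compaction):
        self.auto_compaction = auto_compaction
        self.log = []  # records the order of operations

    def compact(self, checkpoint_id):
        self.log.append(("compact", checkpoint_id))

    def commit_bucket(self, checkpoint_id):
        self.log.append(("commit", checkpoint_id))

    def notify_checkpoint_complete(self, checkpoint_id):
        if self.auto_compaction:
            # Compact first, so the committed bucket only exposes
            # the compacted files.
            self.compact(checkpoint_id)
        self.commit_bucket(checkpoint_id)
```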


            People

              Assignee: lzljs3620320 Jingsong Lee
              Reporter: lzljs3620320 Jingsong Lee
              Votes: 1
              Watchers: 11

              Dates

                Created:
                Updated:
                Resolved: