Flink / FLINK-9061

Add entropy to s3 path for better scalability


Details

    Description

      I think we need to modify the way we write checkpoints to S3 for high-scale jobs (those with many total tasks). The issue is that we are writing all of the checkpoint data under a common key prefix, which is the worst-case scenario for S3 performance, since S3 uses the key prefix as its partition key.
       
      In the worst case, checkpoints fail with a 500 status code coming back from S3 and an internal error type of TooBusyException.
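       
      To make the prefix problem concrete, Flink lays checkpoints out under the same leading key components, e.g.:
       
      s3://bucket/flink/checkpoints/<job-id>/chk-1/...
      s3://bucket/flink/checkpoints/<job-id>/chk-2/...
       
      Every object shares the "flink/checkpoints" prefix, so all of the write traffic lands on the same S3 partition.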

       
      One possible solution would be to add a hook in the Flink filesystem code that allows me to "rewrite" paths. For example, say I have the checkpoint directory set to:
       
      s3://bucket/flink/checkpoints
       
      I would hook that and rewrite that path to:
       
      s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original path
       
      This would distribute the checkpoint write load evenly across the S3 cluster.
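       
      A minimal sketch of what I have in mind, just to make the idea concrete (the class and method names here are hypothetical, not an existing Flink API):
       
      import java.net.URI;
      import java.nio.charset.StandardCharsets;
      import java.security.MessageDigest;
       
      // Hypothetical sketch of the proposed hook -- these names are made up
      // for illustration and are not an existing Flink API.
      public class EntropyInjectingPathRewriter {
       
          // Rewrites s3://bucket/flink/checkpoints to
          // s3://bucket/<hash>/flink/checkpoints, where <hash> is derived
          // from the original path, so the prefix is stable across restarts.
          public static URI rewrite(URI checkpointPath) throws Exception {
              MessageDigest md5 = MessageDigest.getInstance("MD5");
              byte[] digest = md5.digest(
                      checkpointPath.toString().getBytes(StandardCharsets.UTF_8));
       
              // A few leading hex characters are enough entropy for partitioning.
              StringBuilder hash = new StringBuilder();
              for (int i = 0; i < 4; i++) {
                  hash.append(String.format("%02x", digest[i] & 0xff));
              }
       
              return new URI(
                      checkpointPath.getScheme(),
                      checkpointPath.getAuthority(),
                      "/" + hash + checkpointPath.getPath(),
                      null,
                      null);
          }
       
          public static void main(String[] args) throws Exception {
              // Prints something like s3://bucket/52c823ab/flink/checkpoints
              System.out.println(rewrite(new URI("s3://bucket/flink/checkpoints")));
          }
      }
       
      Because the hash is computed from the configured path itself, the rewrite is deterministic: restores find the same rewritten location, while different checkpoint directories land on different S3 partitions.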
       
      For reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
       
      Has anyone else hit this issue? Any other ideas for solutions? This is a pretty serious problem for people trying to checkpoint to S3.
       
      -Jamie
       

      Attachments

        Issue Links

        Activity


          People

            Assignee: Indrajit Roychoudhury (ind_rc)
            Reporter: Jamie Grier (jgrier)
            Votes: 0
            Watchers: 20

            Dates

              Created:
              Updated:
              Resolved:
