HDFS-14381: Add option to hdfs dfs to ignore corrupt blocks

Details

    • Type: Improvement
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.2.0
    • Fix Version/s: None
    • Component/s: tools
    • Labels: None

Description

If I have a file in HDFS that contains 100 blocks and I happen to lose the first block (for whatever obscure/unlikely/dumb reason), I can no longer read any of it, even though 99% of the file is still intact. For some data formats (e.g. text), that remaining data may still be useful. It would be nice to have a way to extract it without manually reassembling the file contents from the block files, something like: hdfs dfs -copyToLocal -ignoreCorrupt <file>. The copy could insert a marker to show where the missing blocks were.

People

    • Assignee: Unassigned
    • Reporter: Daniel Templeton (templedf)
    • Votes: 0
    • Watchers: 7

Dates

    • Created:
    • Updated: