Hadoop HDFS / HDFS-13663

Should throw exception when incorrect block size is set


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.2.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      See

      ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java

      void syncBlock(List<BlockRecord> syncList) throws IOException {
        // ... (earlier cases elided)
              newBlock.setNumBytes(finalizedLength);
              break;
            case RBW:
            case RWR:
              long minLength = Long.MAX_VALUE;
              for (BlockRecord r : syncList) {
                ReplicaState rState = r.rInfo.getOriginalReplicaState();
                if (rState == bestState) {
                  minLength = Math.min(minLength, r.rInfo.getNumBytes());
                  participatingList.add(r);
                }
                if (LOG.isDebugEnabled()) {
                  LOG.debug("syncBlock replicaInfo: block=" + block +
                      ", from datanode " + r.id + ", receivedState=" + rState.name() +
                      ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
                      bestState.name());
                }
              }
              // recover() guarantees syncList will have at least one replica with RWR
              // or better state.
              assert minLength != Long.MAX_VALUE : "wrong minLength"; // <= should throw exception
              newBlock.setNumBytes(minLength);
              break;
            case RUR:
            case TEMPORARY:
              assert false : "bad replica state: " + bestState;
            default:
              break; // we have 'case' all enum values
            }
      

      When minLength is Long.MAX_VALUE, an exception should be thrown. The assert alone is not enough, since asserts are disabled by default in production JVMs, so the bogus value silently propagates.

      There might be other places like this.

      Otherwise, we would see the following WARN in the datanode log:

      WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block xyz because on-disk length 11852203 is shorter than NameNode recorded length 9223372036854775807
      

      where 9223372036854775807 is Long.MAX_VALUE.
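      The fix described above can be sketched as follows. This is a minimal illustration, not the committed patch; the class and method names (SyncBlockCheck, checkedMinLength) are hypothetical, and only the assert-to-exception replacement mirrors the issue:

      ```java
      import java.io.IOException;

      // Hypothetical sketch of the fix: replace the assert with an explicit
      // check that throws, so the failure surfaces even when the JVM runs
      // without -ea (assertions disabled, the default in production).
      public class SyncBlockCheck {

        /**
         * Returns the agreed-upon replica length, or throws when no replica
         * matched bestState (i.e. minLength was never updated from its
         * Long.MAX_VALUE sentinel).
         */
        static long checkedMinLength(long minLength, String block) throws IOException {
          if (minLength == Long.MAX_VALUE) {
            // Previously: assert minLength != Long.MAX_VALUE : "wrong minLength";
            // With assertions off, Long.MAX_VALUE (9223372036854775807) leaked
            // into newBlock.setNumBytes() and later produced the WARN above.
            throw new IOException("wrong minLength for block " + block);
          }
          return minLength;
        }

        public static void main(String[] args) {
          try {
            // A real on-disk length passes through unchanged.
            System.out.println(checkedMinLength(11852203L, "blk_example"));
            // The untouched sentinel now fails fast instead of propagating.
            checkedMinLength(Long.MAX_VALUE, "blk_example");
          } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
          }
        }
      }
      ```

      Throwing IOException (rather than asserting) also fits syncBlock's existing signature, which already declares throws IOException.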

      Attachments

        1. HDFS-13663.001.patch
          1 kB
          Shweta
        2. HDFS-13663.002.patch
          1.0 kB
          Shweta
        3. HDFS-13663.003.patch
          0.9 kB
          Shweta

    People

      Assignee: Shweta (shwetayakkali)
      Reporter: Yongjun Zhang (yzhangal)
      Votes: 0
      Watchers: 4

              Dates

                Created:
                Updated:
                Resolved: