Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.3.5
    • Component/s: None

    Activity

          githubbot ASF GitHub Bot added a comment -

          mukund-thakur merged PR #4921:
          URL: https://github.com/apache/hadoop/pull/4921

          githubbot ASF GitHub Bot added a comment -

          hadoop-yetus commented on PR #4921:
          URL: https://github.com/apache/hadoop/pull/4921#issuecomment-1260156398

          :confetti_ball: *+1 overall*

          | Vote | Subsystem | Runtime | Logfile | Comment |
          |:----:|----------:|--------:|:--------:|:-------:|
          | +0 :ok: | reexec | 0m 45s | | Docker mode activated. |
          |||| _ Prechecks _ |
          | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
          | +0 :ok: | codespell | 0m 0s | | codespell was not available. |
          | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
          | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
          | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
          |||| _ trunk Compile Tests _ |
          | +1 :green_heart: | mvninstall | 44m 1s | | trunk passed |
          | +1 :green_heart: | compile | 23m 30s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | compile | 20m 54s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | checkstyle | 1m 32s | | trunk passed |
          | +1 :green_heart: | mvnsite | 1m 59s | | trunk passed |
          | +1 :green_heart: | javadoc | 1m 45s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | javadoc | 1m 11s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | spotbugs | 3m 4s | | trunk passed |
          | +1 :green_heart: | shadedclient | 23m 36s | | branch has no errors when building and testing our client artifacts. |
          |||| _ Patch Compile Tests _ |
          | +1 :green_heart: | mvninstall | 1m 7s | | the patch passed |
          | +1 :green_heart: | compile | 22m 41s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | javac | 22m 41s | | the patch passed |
          | +1 :green_heart: | compile | 21m 14s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | javac | 21m 14s | | the patch passed |
          | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
          | -0 :warning: | checkstyle | 1m 27s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/4/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
          | +1 :green_heart: | mvnsite | 2m 9s | | the patch passed |
          | +1 :green_heart: | javadoc | 1m 38s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | javadoc | 1m 4s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | spotbugs | 3m 7s | | the patch passed |
          | +1 :green_heart: | shadedclient | 23m 8s | | patch has no errors when building and testing our client artifacts. |
          |||| _ Other Tests _ |
          | +1 :green_heart: | unit | 18m 46s | | hadoop-common in the patch passed. |
          | +1 :green_heart: | asflicense | 1m 17s | | The patch does not generate ASF License warnings. |
          | | | 220m 40s | | |

          | Subsystem | Report/Notes |
          |----------:|:-------------|
          | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/4/artifact/out/Dockerfile |
          | GITHUB PR | https://github.com/apache/hadoop/pull/4921 |
          | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
          | uname | Linux 400f85ce121a 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
          | Build tool | maven |
          | Personality | dev-support/bin/hadoop.sh |
          | git revision | trunk / 14e07ae25403e6ec0ba5b45f336db19b09cf9984 |
          | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/4/testReport/ |
          | Max. process+thread count | 1263 (vs. ulimit of 5500) |
          | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
          | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/4/console |
          | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
          | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

          This message was automatically generated.

          githubbot ASF GitHub Bot added a comment -

          steveloughran commented on code in PR #4921:
          URL: https://github.com/apache/hadoop/pull/4921#discussion_r981079747

          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -393,39 +392,40 @@ public void testVectoredIOEndToEnd() throws Exception {

          try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          - // user can perform other computations while waiting for IO.
          for (FileRange res : fileRanges) {
          dataProcessor.submit(() -> {
          try {
          - readBufferValidateDataAndReturnToPool(pool, res, countDown);
          + readBufferValidateDataAndReturnToPool(res, countDown);
          } catch (Exception e) {
          - LOG.error("Error while process result for {} ", res, e);
          + String error = String.format("Error while processing result for %s", res);
          + LOG.error(error, e);
          + ContractTestUtils.fail(error, e);
          }
          });
          }
          - if (!countDown.await(100, TimeUnit.SECONDS)) {
          - throw new AssertionError("Error while processing vectored io results");
          + // user can perform other computations while waiting for IO.
          + if (!countDown.await(VECTORED_READ_OPERATION_TEST_TIMEOUT_SECONDS, TimeUnit.SECONDS)) {
          + ContractTestUtils.fail("Timeout/Error while processing vectored io results");
          }
          } finally {
          - pool.release();
          HadoopExecutors.shutdown(dataProcessor, LOG, 100, TimeUnit.SECONDS);
          }
          }

          - private void readBufferValidateDataAndReturnToPool(ByteBufferPool pool,
          - FileRange res,
          + private void readBufferValidateDataAndReturnToPool(FileRange res,
          CountDownLatch countDownLatch)
          throws IOException, TimeoutException {
          CompletableFuture<ByteBuffer> data = res.getData();
          - ByteBuffer buffer = FutureIO.awaitFuture(data,
          - VECTORED_READ_OPERATION_TEST_TIMEOUT_SECONDS,
          - TimeUnit.SECONDS);
          // Read the data and perform custom operation. Here we are just
          // validating it with original data.
          - assertDatasetEquals((int) res.getOffset(), "vecRead",
          - buffer, res.getLength(), DATASET);
          - // return buffer to pool.
          - pool.putBuffer(buffer);
          + FutureIO.awaitFuture(data.thenAccept(buffer -> {
          + assertDatasetEquals((int) res.getOffset(),
          + "vecRead", buffer, res.getLength(), DATASET);
          + // return buffer to the pool once read.
          + pool.putBuffer(buffer);
          + }), VECTORED_READ_OPERATION_TEST_TIMEOUT_SECONDS, TimeUnit.SECONDS);

          Review Comment:
          nit: put the timeout + unit on a new line



          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -393,39 +392,40 @@ public void testVectoredIOEndToEnd() throws Exception {

          try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          - // user can perform other computations while waiting for IO.
          for (FileRange res : fileRanges) {
          dataProcessor.submit(() -> {
          try {
          - readBufferValidateDataAndReturnToPool(pool, res, countDown);
          + readBufferValidateDataAndReturnToPool(res, countDown);
          } catch (Exception e) {
          - LOG.error("Error while process result for {} ", res, e);
          + String error = String.format("Error while processing result for %s", res);
          + LOG.error(error, e);
          + ContractTestUtils.fail(error, e);
          }
          });
          }
          - if (!countDown.await(100, TimeUnit.SECONDS)) {
          - throw new AssertionError("Error while processing vectored io results");
          + // user can perform other computations while waiting for IO.
          + if (!countDown.await(VECTORED_READ_OPERATION_TEST_TIMEOUT_SECONDS, TimeUnit.SECONDS)) {
          + ContractTestUtils.fail("Timeout/Error while processing vectored io results");
          }
          } finally {
          - pool.release();
          HadoopExecutors.shutdown(dataProcessor, LOG, 100, TimeUnit.SECONDS);

          Review Comment:
          use VECTORED_READ_OPERATION_TEST_TIMEOUT_SECONDS,

          githubbot ASF GitHub Bot added a comment -

          hadoop-yetus commented on PR #4921:
          URL: https://github.com/apache/hadoop/pull/4921#issuecomment-1258748260

          :confetti_ball: *+1 overall*

          | Vote | Subsystem | Runtime | Logfile | Comment |
          |:----:|----------:|--------:|:--------:|:-------:|
          | +0 :ok: | reexec | 0m 47s | | Docker mode activated. |
          |||| _ Prechecks _ |
          | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
          | +0 :ok: | codespell | 0m 0s | | codespell was not available. |
          | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
          | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
          | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
          |||| _ trunk Compile Tests _ |
          | +1 :green_heart: | mvninstall | 39m 6s | | trunk passed |
          | +1 :green_heart: | compile | 23m 28s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | compile | 20m 59s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | checkstyle | 1m 33s | | trunk passed |
          | +1 :green_heart: | mvnsite | 1m 57s | | trunk passed |
          | +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | spotbugs | 3m 7s | | trunk passed |
          | +1 :green_heart: | shadedclient | 23m 19s | | branch has no errors when building and testing our client artifacts. |
          |||| _ Patch Compile Tests _ |
          | +1 :green_heart: | mvninstall | 1m 6s | | the patch passed |
          | +1 :green_heart: | compile | 22m 32s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | javac | 22m 32s | | the patch passed |
          | +1 :green_heart: | compile | 20m 56s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | javac | 20m 56s | | the patch passed |
          | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
          | +1 :green_heart: | checkstyle | 1m 28s | | the patch passed |
          | +1 :green_heart: | mvnsite | 1m 57s | | the patch passed |
          | +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | javadoc | 1m 4s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | spotbugs | 3m 4s | | the patch passed |
          | +1 :green_heart: | shadedclient | 23m 23s | | patch has no errors when building and testing our client artifacts. |
          |||| _ Other Tests _ |
          | +1 :green_heart: | unit | 18m 48s | | hadoop-common in the patch passed. |
          | +1 :green_heart: | asflicense | 1m 20s | | The patch does not generate ASF License warnings. |
          | | | 215m 27s | | |

          | Subsystem | Report/Notes |
          |----------:|:-------------|
          | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/3/artifact/out/Dockerfile |
          | GITHUB PR | https://github.com/apache/hadoop/pull/4921 |
          | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
          | uname | Linux 5ca7b5642f5b 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
          | Build tool | maven |
          | Personality | dev-support/bin/hadoop.sh |
          | git revision | trunk / fe6abc3160b41fea4e6285d067131cb1bbb25648 |
          | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/3/testReport/ |
          | Max. process+thread count | 1263 (vs. ulimit of 5500) |
          | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
          | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/3/console |
          | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
          | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

          This message was automatically generated.

          githubbot ASF GitHub Bot added a comment -

          hadoop-yetus commented on PR #4921:
          URL: https://github.com/apache/hadoop/pull/4921#issuecomment-1256780029

          :broken_heart: *-1 overall*

          | Vote | Subsystem | Runtime | Logfile | Comment |
          |:----:|----------:|--------:|:--------:|:-------:|
          | +0 :ok: | reexec | 0m 44s | | Docker mode activated. |
          |||| _ Prechecks _ |
          | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
          | +0 :ok: | codespell | 0m 1s | | codespell was not available. |
          | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
          | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
          | +1 :green_heart: | test4tests | 0m 1s | | The patch appears to include 1 new or modified test files. |
          |||| _ trunk Compile Tests _ |
          | +1 :green_heart: | mvninstall | 38m 47s | | trunk passed |
          | -1 :x: | compile | 16m 45s | [/branch-compile-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/artifact/out/branch-compile-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | root in trunk failed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04. |
          | -1 :x: | compile | 14m 37s | [/branch-compile-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | root in trunk failed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07. |
          | +1 :green_heart: | checkstyle | 1m 28s | | trunk passed |
          | +1 :green_heart: | mvnsite | 1m 55s | | trunk passed |
          | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | spotbugs | 3m 11s | | trunk passed |
          | +1 :green_heart: | shadedclient | 23m 38s | | branch has no errors when building and testing our client artifacts. |
          |||| _ Patch Compile Tests _ |
          | +1 :green_heart: | mvninstall | 1m 7s | | the patch passed |
          | -1 :x: | compile | 15m 41s | [/patch-compile-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | root in the patch failed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04. |
          | -1 :x: | javac | 15m 41s | [/patch-compile-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | root in the patch failed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04. |
          | -1 :x: | compile | 14m 31s | [/patch-compile-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07. |
          | -1 :x: | javac | 14m 31s | [/patch-compile-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | root in the patch failed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07. |
          | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
          | -0 :warning: | checkstyle | 1m 18s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
          | +1 :green_heart: | mvnsite | 1m 59s | | the patch passed |
          | +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
          | +1 :green_heart: | javadoc | 1m 3s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | +1 :green_heart: | spotbugs | 3m 4s | | the patch passed |
          | +1 :green_heart: | shadedclient | 23m 6s | | patch has no errors when building and testing our client artifacts. |
          |||| _ Other Tests _ |
          | +1 :green_heart: | unit | 18m 39s | | hadoop-common in the patch passed. |
          | +1 :green_heart: | asflicense | 1m 19s | | The patch does not generate ASF License warnings. |
          | | | 187m 30s | | |

          | Subsystem | Report/Notes |
          |----------:|:-------------|
          | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/artifact/out/Dockerfile |
          | GITHUB PR | https://github.com/apache/hadoop/pull/4921 |
          | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
          | uname | Linux 5971235c6896 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
          | Build tool | maven |
          | Personality | dev-support/bin/hadoop.sh |
          | git revision | trunk / d95a0f49d52cc799efa07e5a7066274232cf83e7 |
          | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
          | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/testReport/ |
          | Max. process+thread count | 1263 (vs. ulimit of 5500) |
          | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
          | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/2/console |
          | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
          | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

          This message was automatically generated.

          githubbot ASF GitHub Bot added a comment -

          mukund-thakur commented on code in PR #4921:
          URL: https://github.com/apache/hadoop/pull/4921#discussion_r978951742

          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -364,6 +373,63 @@ public void testMultipleVectoredReads() throws Exception {
          }
          }

          + /**
          + * This test creates list of ranges and then submit a readVectored
          + * operation and then uses a separate thread pool to process the
          + * results asynchronously.
          + */
          + @Test
          + public void testVectoredIOEndToEnd() throws Exception {
          + FileSystem fs = getFileSystem();
          + List<FileRange> fileRanges = new ArrayList<>();
          + fileRanges.add(FileRange.createFileRange(8 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(14 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(10 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(2 * 1024 - 101, 100));
          + fileRanges.add(FileRange.createFileRange(40 * 1024, 1024));
          +
          + ExecutorService dataProcessor = Executors.newFixedThreadPool(5);
          + CountDownLatch countDown = new CountDownLatch(fileRanges.size());
          +
          + try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          + in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          + // user can perform other computations while waiting for IO.
          + for (FileRange res : fileRanges) {
          + dataProcessor.submit(() -> {
          + try {
          + readBufferValidateDataAndReturnToPool(pool, res, countDown);
          + } catch (Exception e) {
          + LOG.error("Error while process result for {} ", res, e);
          + }
          + });
          + }
          + if (!countDown.await(100, TimeUnit.SECONDS)) {
          + throw new AssertionError("Error while processing vectored io results");
          + }

          + } finally {
          + pool.release();

          Review Comment:
          Actually, I shouldn't be calling pool.release() here, as the pool is shared by all the tests. I initially started with a new pool for this specific test, then moved to the common one.

          githubbot ASF GitHub Bot added a comment -

          steveloughran commented on code in PR #4921:
          URL: https://github.com/apache/hadoop/pull/4921#discussion_r977566267

          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -364,6 +373,63 @@ public void testMultipleVectoredReads() throws Exception {
          }
          }

          + /**
          + * This test creates list of ranges and then submit a readVectored
          + * operation and then uses a separate thread pool to process the
          + * results asynchronously.
          + */
          + @Test
          + public void testVectoredIOEndToEnd() throws Exception {
          + FileSystem fs = getFileSystem();
          + List<FileRange> fileRanges = new ArrayList<>();
          + fileRanges.add(FileRange.createFileRange(8 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(14 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(10 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(2 * 1024 - 101, 100));
          + fileRanges.add(FileRange.createFileRange(40 * 1024, 1024));
          +
          + ExecutorService dataProcessor = Executors.newFixedThreadPool(5);
          + CountDownLatch countDown = new CountDownLatch(fileRanges.size());
          +
          + try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          + in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          + // user can perform other computations while waiting for IO.
          + for (FileRange res : fileRanges) {
          + dataProcessor.submit(() -> {
          + try {
          + readBufferValidateDataAndReturnToPool(pool, res, countDown);
          + } catch (Exception e) {
          + LOG.error("Error while process result for {} ", res, e);
          + }
          + });
          + }
          + if (!countDown.await(100, TimeUnit.SECONDS)) {

          Review Comment:
          timeout should be a static constant and more visible
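
A minimal sketch of the shape this suggestion takes. The constant name below matches the VECTORED_READ_OPERATION_TEST_TIMEOUT_SECONDS that later revisions of the patch use; the 300-second value is an illustrative assumption, not taken from the Hadoop source.

```
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TimeoutConstantSketch {
  // One visible, named timeout instead of a bare `100` literal
  // repeated through the test. The value is illustrative only.
  private static final int VECTORED_READ_OPERATION_TEST_TIMEOUT_SECONDS = 300;

  public static void main(String[] args) throws InterruptedException {
    // Latch with count 0 so this demo returns immediately.
    CountDownLatch countDown = new CountDownLatch(0);
    if (!countDown.await(VECTORED_READ_OPERATION_TEST_TIMEOUT_SECONDS,
        TimeUnit.SECONDS)) {
      throw new AssertionError("Timeout while processing vectored io results");
    }
    System.out.println("all results processed");
  }
}
```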



          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -364,6 +373,63 @@ public void testMultipleVectoredReads() throws Exception {
          }
          }

          + /**
          + * This test creates list of ranges and then submit a readVectored
          + * operation and then uses a separate thread pool to process the
          + * results asynchronously.
          + */
          + @Test
          + public void testVectoredIOEndToEnd() throws Exception {
          + FileSystem fs = getFileSystem();
          + List<FileRange> fileRanges = new ArrayList<>();
          + fileRanges.add(FileRange.createFileRange(8 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(14 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(10 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(2 * 1024 - 101, 100));
          + fileRanges.add(FileRange.createFileRange(40 * 1024, 1024));
          +
          + ExecutorService dataProcessor = Executors.newFixedThreadPool(5);
          + CountDownLatch countDown = new CountDownLatch(fileRanges.size());
          +
          + try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          + in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          + // user can perform other computations while waiting for IO.
          + for (FileRange res : fileRanges) {
          + dataProcessor.submit(() -> {
          + try {
          + readBufferValidateDataAndReturnToPool(pool, res, countDown);
          + } catch (Exception e) {
          + LOG.error("Error while process result for {} ", res, e);

          Review Comment:
          should be saved to a field/variable, with junit thread rethrowing if the value is non null
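
A self-contained sketch of the pattern being described: each worker records the first failure in a shared reference instead of only logging it, and the test (main) thread rethrows it once the workers finish. The names here (riskyWork, the pool sizes) are illustrative assumptions, not code from the patch.

```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class WorkerFailureRethrow {
  public static void main(String[] args) throws Exception {
    // Holds the first failure observed by any worker thread.
    AtomicReference<Throwable> failure = new AtomicReference<>();
    ExecutorService workers = Executors.newFixedThreadPool(2);

    for (int i = 0; i < 4; i++) {
      workers.submit(() -> {
        try {
          riskyWork(); // stands in for readBufferValidateDataAndReturnToPool(...)
        } catch (Exception e) {
          failure.compareAndSet(null, e); // remember it, don't just log it
        }
      });
    }
    workers.shutdown();
    workers.awaitTermination(30, TimeUnit.SECONDS);

    // Rethrow on the test thread so JUnit actually fails the test.
    Throwable t = failure.get();
    if (t != null) {
      throw new AssertionError("worker thread failed", t);
    }
  }

  private static void riskyWork() {
    // no-op placeholder; a real test would validate a buffer here
  }
}
```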

          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -364,6 +373,63 @@ public void testMultipleVectoredReads() throws Exception {
          }
          }

          + /**
          + * This test creates list of ranges and then submit a readVectored
          + * operation and then uses a separate thread pool to process the
          + * results asynchronously.
          + */
          + @Test
          + public void testVectoredIOEndToEnd() throws Exception {
          + FileSystem fs = getFileSystem();
          + List<FileRange> fileRanges = new ArrayList<>();
          + fileRanges.add(FileRange.createFileRange(8 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(14 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(10 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(2 * 1024 - 101, 100));
          + fileRanges.add(FileRange.createFileRange(40 * 1024, 1024));
          +
          + ExecutorService dataProcessor = Executors.newFixedThreadPool(5);
          + CountDownLatch countDown = new CountDownLatch(fileRanges.size());
          +
          + try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          + in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          + // user can perform other computations while waiting for IO.
          + for (FileRange res : fileRanges) {
          + dataProcessor.submit(() -> {
          + try {
          + readBufferValidateDataAndReturnToPool(pool, res, countDown);
          + } catch (Exception e) {
          + LOG.error("Error while process result for {} ", res, e);
          + }
          + });
          + }
          + if (!countDown.await(100, TimeUnit.SECONDS)) {
          + throw new AssertionError("Error while processing vectored io results");
          + }
          + } finally {
          + pool.release();
          + HadoopExecutors.shutdown(dataProcessor, LOG, 100, TimeUnit.SECONDS);
          + }
          + }
          +
          + private void readBufferValidateDataAndReturnToPool(ByteBufferPool pool,
          + FileRange res,
          + CountDownLatch countDownLatch)
          + throws IOException, TimeoutException {
          + CompletableFuture<ByteBuffer> data = res.getData();
          + ByteBuffer buffer = FutureIO.awaitFuture(data,

          Review Comment:
          I think we all (you, me, everyone else) need to spend some time working with CompletableFuture and chaining them.

          In this code
          ```
          data.thenAccept(buffer -> { // all the validation });
          ```

          and await() for that.

          It's a mess because java's checked exceptions cripple their lambda-expression methods when IO operations are invoked. But if we're trying to live in their world, at least we will get more insight into how we could actually improve our own code to work better there. Though it may of course be too late by now.
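
A standalone sketch of the chaining being suggested, using only java.util.concurrent so it runs without Hadoop on the classpath. In the actual test, FutureIO.awaitFuture plays the role of the bounded get() below; the five-byte payload is demo data.

```
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ChainedBufferValidation {
  public static void main(String[] args) throws Exception {
    // Stand-in for the future returned by FileRange#getData().
    CompletableFuture<ByteBuffer> data = CompletableFuture.supplyAsync(
        () -> ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));

    // Chain the validation onto the read instead of blocking first;
    // the consumer runs as soon as the buffer is available.
    CompletableFuture<Void> validated = data.thenAccept(buffer -> {
      if (buffer.remaining() != 5) {
        throw new AssertionError("unexpected length: " + buffer.remaining());
      }
      // a real test would return the buffer to its pool here
    });

    // Await the whole chain with a timeout; an exception thrown inside
    // thenAccept surfaces here wrapped in an ExecutionException.
    validated.get(30, TimeUnit.SECONDS);
    System.out.println("validated");
  }
}
```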



          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -364,6 +373,63 @@ public void testMultipleVectoredReads() throws Exception {
          }
          }

          + /**
          + * This test creates list of ranges and then submit a readVectored
          + * operation and then uses a separate thread pool to process the
          + * results asynchronously.
          + */
          + @Test
          + public void testVectoredIOEndToEnd() throws Exception {
          + FileSystem fs = getFileSystem();
          + List<FileRange> fileRanges = new ArrayList<>();
          + fileRanges.add(FileRange.createFileRange(8 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(14 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(10 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(2 * 1024 - 101, 100));
          + fileRanges.add(FileRange.createFileRange(40 * 1024, 1024));
          +
          + ExecutorService dataProcessor = Executors.newFixedThreadPool(5);
          + CountDownLatch countDown = new CountDownLatch(fileRanges.size());
          +
          + try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          + in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          + // user can perform other computations while waiting for IO.
          + for (FileRange res : fileRanges) {
          + dataProcessor.submit(() -> {
          + try {
          + readBufferValidateDataAndReturnToPool(pool, res, countDown);
          + } catch (Exception e) {
          + LOG.error("Error while process result for {} ", res, e);
          + }
          + });
          + }
          + if (!countDown.await(100, TimeUnit.SECONDS)) {
          + throw new AssertionError("Error while processing vectored io results");
          + }
          + } finally {
          + pool.release();

          Review Comment:
          how about adding an assert on L408 that the pool has its buffers returned?
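
One way such an assertion could be wired up, sketched against the org.apache.hadoop.io.ByteBufferPool interface (getBuffer/putBuffer): wrap the shared pool in a counting decorator and assert that the outstanding count is zero once all ranges are processed. This helper is hypothetical, not part of the patch.

```
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hadoop.io.ByteBufferPool;

/**
 * Decorates a ByteBufferPool and tracks buffers handed out but not yet
 * returned, so a test can assert the count is zero when it finishes.
 */
public class CountingByteBufferPool implements ByteBufferPool {
  private final ByteBufferPool inner;
  private final AtomicInteger outstanding = new AtomicInteger();

  public CountingByteBufferPool(ByteBufferPool inner) {
    this.inner = inner;
  }

  @Override
  public ByteBuffer getBuffer(boolean direct, int length) {
    outstanding.incrementAndGet();
    return inner.getBuffer(direct, length);
  }

  @Override
  public void putBuffer(ByteBuffer buffer) {
    outstanding.decrementAndGet();
    inner.putBuffer(buffer);
  }

  /** Buffers currently checked out of the pool. */
  public int outstanding() {
    return outstanding.get();
  }
}
```

The test-side assertion would then be a one-liner after the latch is released, e.g. assertEquals(0, countingPool.outstanding()).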



          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -364,6 +373,63 @@ public void testMultipleVectoredReads() throws Exception {
          }
          }

          + /**
          + * This test creates list of ranges and then submit a readVectored
          + * operation and then uses a separate thread pool to process the
          + * results asynchronously.
          + */
          + @Test
          + public void testVectoredIOEndToEnd() throws Exception {
          + FileSystem fs = getFileSystem();
          + List<FileRange> fileRanges = new ArrayList<>();
          + fileRanges.add(FileRange.createFileRange(8 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(14 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(10 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(2 * 1024 - 101, 100));
          + fileRanges.add(FileRange.createFileRange(40 * 1024, 1024));
          +
          + ExecutorService dataProcessor = Executors.newFixedThreadPool(5);
          + CountDownLatch countDown = new CountDownLatch(fileRanges.size());
          +
          + try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          + in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          + // user can perform other computations while waiting for IO.
          + for (FileRange res : fileRanges) {
          + dataProcessor.submit(() -> {
          + try {
          + readBufferValidateDataAndReturnToPool(pool, res, countDown);
          + } catch (Exception e) {
          + LOG.error("Error while process result for {} ", res, e);
          + }
          + });
          + }
          + if (!countDown.await(100, TimeUnit.SECONDS)) {
          + throw new AssertionError("Error while processing vectored io results");

          Review Comment:
          declare timeout



          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -364,6 +373,63 @@ public void testMultipleVectoredReads() throws Exception {
          }
          }

          + /**
          + * This test creates list of ranges and then submit a readVectored
          + * operation and then uses a separate thread pool to process the
          + * results asynchronously.
          + */
          + @Test
          + public void testVectoredIOEndToEnd() throws Exception {
          + FileSystem fs = getFileSystem();
          + List<FileRange> fileRanges = new ArrayList<>();
          + fileRanges.add(FileRange.createFileRange(8 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(14 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(10 * 1024, 100));
          + fileRanges.add(FileRange.createFileRange(2 * 1024 - 101, 100));
          + fileRanges.add(FileRange.createFileRange(40 * 1024, 1024));
          +
          + ExecutorService dataProcessor = Executors.newFixedThreadPool(5);
          + CountDownLatch countDown = new CountDownLatch(fileRanges.size());
          +
          + try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          + in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          + // user can perform other computations while waiting for IO.
          + for (FileRange res : fileRanges) {
          + dataProcessor.submit(() -> {
          + try {
          + readBufferValidateDataAndReturnToPool(pool, res, countDown);
          + } catch (Exception e) {
          + LOG.error("Error while process result for {} ", res, e);
          + }
          + });
          + }
          + if (!countDown.await(100, TimeUnit.SECONDS)) {
          + throw new AssertionError("Error while processing vectored io results");
          + }

          + } finally {
          + pool.release();
          + HadoopExecutors.shutdown(dataProcessor, LOG, 100, TimeUnit.SECONDS);

          Review Comment:
          use same constant as proposed for L100

          githubbot ASF GitHub Bot added a comment -

          steveloughran commented on code in PR #4921:
          URL: https://github.com/apache/hadoop/pull/4921#discussion_r977566267

          ##########
          hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java:
          ##########
          @@ -364,6 +373,63 @@ public void testMultipleVectoredReads() throws Exception {
               }
             }
          +   /**
          +    * This test creates list of ranges and then submit a readVectored
          +    * operation and then uses a separate thread pool to process the
          +    * results asynchronously.
          +    */
          +   @Test
          +   public void testVectoredIOEndToEnd() throws Exception {
          +     FileSystem fs = getFileSystem();
          +     List<FileRange> fileRanges = new ArrayList<>();
          +     fileRanges.add(FileRange.createFileRange(8 * 1024, 100));
          +     fileRanges.add(FileRange.createFileRange(14 * 1024, 100));
          +     fileRanges.add(FileRange.createFileRange(10 * 1024, 100));
          +     fileRanges.add(FileRange.createFileRange(2 * 1024 - 101, 100));
          +     fileRanges.add(FileRange.createFileRange(40 * 1024, 1024));
          +
          +     ExecutorService dataProcessor = Executors.newFixedThreadPool(5);
          +     CountDownLatch countDown = new CountDownLatch(fileRanges.size());
          +
          +     try (FSDataInputStream in = fs.open(path(VECTORED_READ_FILE_NAME))) {
          +       in.readVectored(fileRanges, value -> pool.getBuffer(true, value));
          +       // user can perform other computations while waiting for IO.
          +       for (FileRange res : fileRanges) {
          +         dataProcessor.submit(() -> {
          +           try {
          +             readBufferValidateDataAndReturnToPool(pool, res, countDown);
          +           } catch (Exception e) {
          +             LOG.error("Error while process result for {} ", res, e);

          Review Comment: should be saved to a field/variable, with junit thread rethrowing if the value is non null

          +           }
          +         });
          +       }
          +       if (!countDown.await(100, TimeUnit.SECONDS)) {

          Review Comment: timeout should be a static constant and more visible

          +         throw new AssertionError("Error while processing vectored io results");

          Review Comment: declare timeout

          +       }
          +     } finally {
          +       pool.release();

          Review Comment: how about adding an assert on L408 that the pool has its buffers returned?

          +       HadoopExecutors.shutdown(dataProcessor, LOG, 100, TimeUnit.SECONDS);

          Review Comment: use same constant as proposed for L100

          +     }
          +   }
          +
          +   private void readBufferValidateDataAndReturnToPool(ByteBufferPool pool,
          +       FileRange res,
          +       CountDownLatch countDownLatch)
          +       throws IOException, TimeoutException {
          +     CompletableFuture<ByteBuffer> data = res.getData();
          +     ByteBuffer buffer = FutureIO.awaitFuture(data,

          Review Comment: I think we all (you, me, everyone else) need to spend some time working with CompletableFuture and chaining them. In this code

          ```
          data.thenAccept(buffer -> {
            // all the validation
          });
          ```

          and await() for that. It's a mess because java's checked exceptions cripple their lambda-expression methods when IO operations are invoked. But if we're trying to live in their world, at least we will get more insight into how we could actually improve our own code to work better there. Though it may of course be too late by now.
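
          Several of these comments describe one pattern: chain validation onto each range's future with thenAccept(), capture the first worker-side failure in a field, and have the JUnit thread rethrow it once the latch fires, so an asynchronous failure actually fails the test. A self-contained sketch of that pattern, with illustrative names throughout (nothing below is taken from the merged patch):

          ```java
          import java.nio.ByteBuffer;
          import java.util.List;
          import java.util.concurrent.CompletableFuture;
          import java.util.concurrent.CountDownLatch;
          import java.util.concurrent.TimeUnit;
          import java.util.concurrent.atomic.AtomicReference;

          public class AsyncValidationSketch {

            private static final int TIMEOUT_SECONDS = 100;

            // First failure seen on any worker thread; null means no failure yet.
            private final AtomicReference<Throwable> failure = new AtomicReference<>();

            void processResults(List<CompletableFuture<ByteBuffer>> results)
                throws Throwable {
              CountDownLatch done = new CountDownLatch(results.size());
              for (CompletableFuture<ByteBuffer> result : results) {
                // Validation runs as a continuation of the read rather than as
                // a blocking get() on a separate pool thread.
                result.thenAccept(buffer -> {
                  try {
                    validate(buffer);
                  } catch (Throwable t) {
                    failure.compareAndSet(null, t);
                  } finally {
                    done.countDown();
                  }
                }).exceptionally(t -> {
                  // The read itself failed; record it and release the latch.
                  failure.compareAndSet(null, t);
                  done.countDown();
                  return null;
                });
              }
              if (!done.await(TIMEOUT_SECONDS, TimeUnit.SECONDS)) {
                throw new AssertionError("timed out waiting for vectored IO results");
              }
              // Rethrow on the calling (JUnit) thread so the test fails visibly.
              if (failure.get() != null) {
                throw failure.get();
              }
            }

            private void validate(ByteBuffer buffer) {
              // range-specific data checks would go here
            }
          }
          ```

          The exceptionally() branch covers reads that fail before validation ever runs, so the latch is released exactly once per range in both the success and failure paths.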
          githubbot ASF GitHub Bot added a comment -

          hadoop-yetus commented on PR #4921:
          URL: https://github.com/apache/hadoop/pull/4921#issuecomment-1254569394

          :confetti_ball: *+1 overall*

          Vote Subsystem Runtime Logfile Comment
          :----: ----------: --------: :--------: :-------:
          +0 :ok: reexec 1m 6s   Docker mode activated.
          _ Prechecks _
          +1 :green_heart: dupname 0m 0s   No case conflicting files found.
          +0 :ok: codespell 0m 0s   codespell was not available.
          +0 :ok: detsecrets 0m 0s   detect-secrets was not available.
          +1 :green_heart: @author 0m 0s   The patch does not contain any @author tags.
          +1 :green_heart: test4tests 0m 0s   The patch appears to include 1 new or modified test files.
          _ trunk Compile Tests _
          +1 :green_heart: mvninstall 38m 44s   trunk passed
          +1 :green_heart: compile 23m 29s   trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
          +1 :green_heart: compile 20m 49s   trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
          +1 :green_heart: checkstyle 1m 45s   trunk passed
          +1 :green_heart: mvnsite 2m 8s   trunk passed
          +1 :green_heart: javadoc 1m 31s   trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
          +1 :green_heart: javadoc 1m 2s   trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
          +1 :green_heart: spotbugs 3m 12s   trunk passed
          +1 :green_heart: shadedclient 23m 13s   branch has no errors when building and testing our client artifacts.
          _ Patch Compile Tests _
          +1 :green_heart: mvninstall 1m 7s   the patch passed
          +1 :green_heart: compile 22m 40s   the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
          +1 :green_heart: javac 22m 40s   the patch passed
          +1 :green_heart: compile 20m 51s   the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
          +1 :green_heart: javac 20m 51s   the patch passed
          +1 :green_heart: blanks 0m 0s   The patch has no blanks issues.
          -0 :warning: checkstyle 1m 20s [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) hadoop-common-project/hadoop-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 :green_heart: mvnsite 1m 53s   the patch passed
          +1 :green_heart: javadoc 1m 20s   the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
          +1 :green_heart: javadoc 1m 3s   the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
          +1 :green_heart: spotbugs 3m 2s   the patch passed
          +1 :green_heart: shadedclient 23m 34s   patch has no errors when building and testing our client artifacts.
          _ Other Tests _
          +1 :green_heart: unit 18m 47s   hadoop-common in the patch passed.
          +1 :green_heart: asflicense 1m 16s   The patch does not generate ASF License warnings.
              214m 30s    
          Subsystem Report/Notes
          ----------: :-------------
          Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/1/artifact/out/Dockerfile
          GITHUB PR https://github.com/apache/hadoop/pull/4921
          Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
          uname Linux a7a213068382 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality dev-support/bin/hadoop.sh
          git revision trunk / 3d681343063572d1f0075485e03cb73ae4aeee4b
          Default Java Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
          Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
          Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/1/testReport/
          Max. process+thread count 1285 (vs. ulimit of 5500)
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4921/1/console
          versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
          Powered by Apache Yetus 0.14.0 https://yetus.apache.org

          This message was automatically generated.

          githubbot ASF GitHub Bot added a comment -

          mukund-thakur opened a new pull request, #4921:
          URL: https://github.com/apache/hadoop/pull/4921

          part of HADOOP-18103.


              1. Description of PR
              2. How was this patch tested?
              3. For code changes:
          • [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
          • [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
          • [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
          • [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

          People

            Assignee: mthakur Mukund Thakur
            Reporter: mthakur Mukund Thakur
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved: