Hadoop HDFS / HDFS-14290

Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: 2.7.0, 2.7.1
    • Fix Version/s: None
    • Component/s: datanode, webhdfs
    • Labels: None

    Description

      The issue is that there is no HttpRequestDecoder in the Netty inbound handler pipeline, so an unexpected message type appears when the message is read.

        

        

      DEBUG org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Proxy failed. Cause: 
      com.xiaomi.infra.thirdparty.io.netty.handler.codec.EncoderException: java.lang.IllegalStateException: unexpected message type: PooledUnsafeDirectByteBuf
      at com.xiaomi.infra.thirdparty.io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:106)
      at com.xiaomi.infra.thirdparty.io.netty.channel.CombinedChannelDuplexHandler.write(CombinedChannelDuplexHandler.java:348)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:816)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:723)
      at com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.doFlush(ChunkedWriteHandler.java:304)
      at com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:137)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:831)
      at com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1051)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:300)
      at org.apache.hadoop.hdfs.server.datanode.web.SimpleHttpProxyHandler$Forwarder.channelRead(SimpleHttpProxyHandler.java:80)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
      at com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
      at com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
      at com.xiaomi.infra.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
      at com.xiaomi.infra.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
      at java.lang.Thread.run(Thread.java:745)
      Caused by: java.lang.IllegalStateException: unexpected message type: PooledUnsafeDirectByteBuf
      at com.xiaomi.infra.thirdparty.io.netty.handler.codec.http.HttpObjectEncoder.encode(HttpObjectEncoder.java:123)
      at com.xiaomi.infra.thirdparty.io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:88)
      ... 30 more
      2018-12-04,14:23:28,690 DEBUG org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Proxy failed. Cause: 
      java.nio.channels.ClosedChannelException
      at com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.discard(ChunkedWriteHandler.java:188)
      at com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.doFlush(ChunkedWriteHandler.java:198)
      at com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:137)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:831)
      at com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1051)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:300)
      at org.apache.hadoop.hdfs.server.datanode.web.SimpleHttpProxyHandler$Forwarder.channelRead(SimpleHttpProxyHandler.java:80)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
      at com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
      at com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
      at com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
      at com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
      at com.xiaomi.infra.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
      at com.xiaomi.infra.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
      at java.lang.Thread.run(Thread.java:745)
      2018-12-04,14:23:28,690 DEBUG org.mortbay.log: EOF

      Unexpected message type: PooledUnsafeDirectByteBuf when getting datanode info via DatanodeWebHdfsMethods.
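
      As an aside on the failure mode, here is a minimal, self-contained sketch (using stock io.netty classes; the class name UnexpectedMessageTypeSketch and the payload are purely illustrative) of how an HttpObjectEncoder reacts when handed a raw ByteBuf before any HTTP message has been written. It mirrors the HttpObjectEncoder.encode frame at the bottom of the stack trace above; whether the relocated Xiaomi build behaves identically depends on that fork.

        import io.netty.buffer.Unpooled;
        import io.netty.channel.embedded.EmbeddedChannel;
        import io.netty.handler.codec.EncoderException;
        import io.netty.handler.codec.http.HttpResponseEncoder;
        import io.netty.util.CharsetUtil;

        public class UnexpectedMessageTypeSketch {
          public static void main(String[] args) {
            // HttpResponseEncoder (an HttpObjectEncoder) expects HttpResponse/HttpContent
            // messages; a plain ByteBuf arriving while the encoder is still in its
            // initial state is rejected.
            EmbeddedChannel ch = new EmbeddedChannel(new HttpResponseEncoder());
            try {
              ch.writeOutbound(
                  Unpooled.copiedBuffer("HTTP/1.1 200 OK\r\n\r\n", CharsetUtil.UTF_8));
            } catch (EncoderException e) {
              // Cause is an IllegalStateException: "unexpected message type: ..."
              System.out.println(e.getCause());
            } finally {
              ch.finishAndReleaseAll();
            }
          }
        }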

       

      Attachments

        1. HDFS-14290.000.patch
          1 kB
          Lisheng Sun
        2. webhdfs show.png
          34 kB
          Lisheng Sun

        Issue Links

          Activity

            ayushtkn Ayush Saxena added a comment -

            Seems same as HDFS-14289?

            leosun08 Lisheng Sun added a comment -

            ayushtkn Sorry, I created a duplicate issue. I have closed HDFS-14289; please follow this issue, HDFS-14290. Thanks.

            leosun08 Lisheng Sun added a comment -

            I'll attach a patch later.

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 reexec 0m 36s Docker mode activated.
                  Prechecks
            +1 @author 0m 0s The patch does not contain any @author tags.
            -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
                  trunk Compile Tests
            +1 mvninstall 20m 29s trunk passed
            +1 compile 1m 7s trunk passed
            +1 checkstyle 0m 56s trunk passed
            +1 mvnsite 1m 19s trunk passed
            +1 shadedclient 14m 33s branch has no errors when building and testing our client artifacts.
            +1 findbugs 2m 11s trunk passed
            +1 javadoc 0m 55s trunk passed
                  Patch Compile Tests
            +1 mvninstall 1m 9s the patch passed
            +1 compile 1m 7s the patch passed
            +1 javac 1m 7s the patch passed
            -0 checkstyle 0m 50s hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8)
            +1 mvnsite 1m 15s the patch passed
            +1 whitespace 0m 0s The patch has no whitespace issues.
            +1 shadedclient 14m 5s patch has no errors when building and testing our client artifacts.
            +1 findbugs 2m 16s the patch passed
            +1 javadoc 0m 57s the patch passed
                  Other Tests
            -1 unit 105m 55s hadoop-hdfs in the patch failed.
            +1 asflicense 0m 48s The patch does not generate ASF License warnings.
            170m 26s



            Reason Tests
            Failed junit tests hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame
              hadoop.hdfs.TestDistributedFileSystem
              hadoop.hdfs.qjournal.server.TestJournalNodeSync



            Subsystem Report/Notes
            Docker Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f
            JIRA Issue HDFS-14290
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12959226/HDFS-14290.000.patch
            Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
            uname Linux d9bf240b84b8 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /testptch/patchprocess/precommit/personality/provided.sh
            git revision trunk / 588b4c4
            maven version: Apache Maven 3.3.9
            Default Java 1.8.0_191
            findbugs v3.1.0-RC1
            checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/26259/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
            unit https://builds.apache.org/job/PreCommit-HDFS-Build/26259/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
            Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/26259/testReport/
            Max. process+thread count 2829 (vs. ulimit of 10000)
            modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
            Console output https://builds.apache.org/job/PreCommit-HDFS-Build/26259/console
            Powered by Apache Yetus 0.8.0 http://yetus.apache.org

            This message was automatically generated.


            weichiu Wei-Chiu Chuang added a comment -

            Hi Lisheng Sun,

            thanks a lot for reporting the issue. Would you also share how to reproduce?
            leosun08 Lisheng Sun added a comment - - edited

            Thanks weichiu. I will add a UT later to reproduce the issue, with corresponding comments.

            leosun08 Lisheng Sun added a comment - - edited

            hi weichiu ayushtkn I intended to use EmbeddedChannel#writeInbound and EmbeddedChannel#readInbound to add a UT for SimpleHttpProxyHandler to reproduce the issue, because SimpleHttpProxyHandler extends SimpleChannelInboundHandler<HttpRequest>. But SimpleHttpProxyHandler is implemented in the following way:

            @Override
            public void channelRead0(
                final ChannelHandlerContext ctx, final HttpRequest req) {
              uri = req.getUri();
              final Channel client = ctx.channel();
              Bootstrap proxiedServer = new Bootstrap()
                .group(client.eventLoop())
                .channel(NioSocketChannel.class)
                .handler(new ChannelInitializer<SocketChannel>() {
                  @Override
                  protected void initChannel(SocketChannel ch) throws Exception {
                    ChannelPipeline p = ch.pipeline();
                    p.addLast(new HttpRequestEncoder(), new Forwarder(uri, client));
                  }
                });
              ChannelFuture f = proxiedServer.connect(host);
              proxiedChannel = f.channel();
              // ... (remainder of channelRead0 elided in the original comment)
            }

            proxiedServer#connect uses the EmbeddedEventLoop, and the NioSocketChannel used for proxiedServer#channel checks whether its event loop is a NioEventLoop. However, EmbeddedEventLoop is not a NioEventLoop, so unfortunately the issue is difficult to reproduce with a UT.

            The ChannelPipeline only includes an HttpRequestEncoder and no HttpRequestDecoder, and that is the problem:

            .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) throws Exception {
                      ChannelPipeline p = ch.pipeline();
                      p.addLast(new HttpRequestEncoder(), new Forwarder(uri, client));
                    }
                  });
            

              Would you have any suggestions? Thank you.

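            For reference, a minimal sketch of the EmbeddedChannel#writeInbound / EmbeddedChannel#readInbound pattern mentioned above, applied to a made-up pass-through handler (UpperCaseHandler is hypothetical and not part of Hadoop). It works only because the handler never bootstraps a NioSocketChannel, which is exactly the step that makes the same pattern fail for SimpleHttpProxyHandler.

              import io.netty.channel.ChannelHandlerContext;
              import io.netty.channel.SimpleChannelInboundHandler;
              import io.netty.channel.embedded.EmbeddedChannel;

              public class EmbeddedChannelPatternSketch {

                /** Hypothetical handler, used only to illustrate the test pattern. */
                static class UpperCaseHandler extends SimpleChannelInboundHandler<String> {
                  @Override
                  protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                    ctx.fireChannelRead(msg.toUpperCase());
                  }
                }

                public static void main(String[] args) {
                  EmbeddedChannel ch = new EmbeddedChannel(new UpperCaseHandler());
                  ch.writeInbound("hello");       // feed a message into the pipeline
                  String out = ch.readInbound();  // read what reached the tail
                  System.out.println(out);        // prints HELLO
                  ch.finish();
                }
              }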

            weichiu Wei-Chiu Chuang added a comment -

            I don't have much experience in netty. ayushtkn anything you'd like to add?
            leosun08 Lisheng Sun added a comment - - edited

            weichiu ayushtkn Could you please take a look?

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 reexec 0m 22s Docker mode activated.
                  Prechecks
            +1 @author 0m 0s The patch does not contain any @author tags.
            -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
                  trunk Compile Tests
            +1 mvninstall 20m 20s trunk passed
            +1 compile 1m 5s trunk passed
            +1 checkstyle 0m 56s trunk passed
            +1 mvnsite 1m 24s trunk passed
            +1 shadedclient 14m 23s branch has no errors when building and testing our client artifacts.
            +1 findbugs 1m 59s trunk passed
            +1 javadoc 0m 51s trunk passed
                  Patch Compile Tests
            +1 mvninstall 0m 59s the patch passed
            +1 compile 0m 54s the patch passed
            +1 javac 0m 54s the patch passed
            -0 checkstyle 0m 35s hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8)
            +1 mvnsite 1m 0s the patch passed
            +1 whitespace 0m 0s The patch has no whitespace issues.
            +1 shadedclient 12m 33s patch has no errors when building and testing our client artifacts.
            +1 findbugs 2m 4s the patch passed
            +1 javadoc 0m 48s the patch passed
                  Other Tests
            -1 unit 112m 55s hadoop-hdfs in the patch failed.
            +1 asflicense 0m 34s The patch does not generate ASF License warnings.
            173m 3s



            Reason Tests
            Failed junit tests hadoop.hdfs.tools.TestDFSZKFailoverController
              hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame
              hadoop.hdfs.server.datanode.TestDataNodeLifeline



            Subsystem Report/Notes
            Docker Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e
            JIRA Issue HDFS-14290
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12959226/HDFS-14290.000.patch
            Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
            uname Linux 7fa1844c4867 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /testptch/patchprocess/precommit/personality/provided.sh
            git revision trunk / fcfe7a3
            maven version: Apache Maven 3.3.9
            Default Java 1.8.0_212
            findbugs v3.1.0-RC1
            checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/26928/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
            unit https://builds.apache.org/job/PreCommit-HDFS-Build/26928/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
            Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/26928/testReport/
            Max. process+thread count 2688 (vs. ulimit of 10000)
            modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
            Console output https://builds.apache.org/job/PreCommit-HDFS-Build/26928/console
            Powered by Apache Yetus 0.8.0 http://yetus.apache.org

            This message was automatically generated.


            weichiu Wei-Chiu Chuang added a comment -

            It's very difficult for me to verify this patch.
            Is there any way I can reproduce this issue? For example, a certain DataNode webhdfs URL that I can connect to and which causes this error? It doesn't need to be a UT.
            leosun08 Lisheng Sun added a comment -

            Sorry, I cannot reproduce it after updating to a newer netty. The problem should be caused by the old netty version.


            weichiu Wei-Chiu Chuang added a comment -

            Looks like this is the same as HDFS-13899. From the stack trace, this is probably something internal in Xiaomi's netty. Shall we resolve this as won't fix?
            leosun08 Lisheng Sun added a comment -

            Yeah. Sorry. It's not a problem. I have closed it. 


            People

              Assignee: leosun08 Lisheng Sun
              Reporter: leosun08 Lisheng Sun
              Votes: 0
              Watchers: 3

              Dates

                Created:
                Updated:
                Resolved: