Details
- Type: Sub-task
- Status: Resolved
- Priority: Trivial
- Resolution: Won't Fix
Attachments
- HDFS-8342-HDFS-7285.001.patch (4 kB, Walter Su)
Activity
Hadoop QA
added a comment -
-1 overall
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | pre-patch | 5m 13s | Pre-patch HDFS-7285 compilation is healthy. |
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
+1 | javac | 7m 33s | There were no new javac warning messages. |
-1 | release audit | 0m 13s | The applied patch generated 1 release audit warnings. |
+1 | checkstyle | 0m 36s | There were no new checkstyle issues. |
+1 | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
+1 | install | 1m 37s | mvn install still works. |
+1 | eclipse:eclipse | 0m 31s | The patch built with eclipse:eclipse. |
-1 | findbugs | 3m 11s | The patch appears to introduce 8 new Findbugs (version 2.0.3) warnings. |
+1 | native | 1m 18s | Pre-build of native portion |
-1 | hdfs tests | 189m 21s | Tests failed in hadoop-hdfs. |
| | 209m 37s | |
Reason | Tests |
---|---|
FindBugs | module:hadoop-hdfs |
Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time. Unsynchronized access at DFSOutputStream.java:[line 146] | |
Possible null pointer dereference of arr$ in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long). Dereferenced at BlockInfoStripedUnderConstruction.java:[line 194] | |
Unread field: should this field be static? At ErasureCodingWorker.java:[line 252] | |
Should org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$StripedReader be a static inner class? At ErasureCodingWorker.java:[lines 913-915] | |
Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String, ECSchema): String.getBytes() At ErasureCodingZoneManager.java:[line 117] | |
Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath): new String(byte[]) At ErasureCodingZoneManager.java:[line 81] | |
Dead store to dataBlkNum in org.apache.hadoop.hdfs.util.StripedBlockUtil.calcualteChunkPositionsInBuf(ECSchema, LocatedStripedBlock, byte[], int, int, int, int, long, int, StripedBlockUtil$AlignedStripe[]) At StripedBlockUtil.java:[line 467] | |
Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions(int, int, long, int, int) At StripedBlockUtil.java:[line 206] | |
Failed unit tests | hadoop.tracing.TestTraceAdmin |
hadoop.hdfs.util.TestStripedBlockUtil | |
hadoop.hdfs.server.blockmanagement.TestBlockInfo | |
hadoop.hdfs.TestWriteReadStripedFile | |
hadoop.hdfs.server.namenode.TestFileTruncate | |
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration | |
hadoop.hdfs.TestRecoverStripedFile | |
hadoop.hdfs.server.datanode.TestIncrementalBlockReports | |
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks | |
hadoop.hdfs.server.datanode.TestTriggerBlockReport | |
hadoop.hdfs.server.namenode.TestAuditLogs | |
hadoop.hdfs.server.datanode.TestBlockReplacement | |
hadoop.hdfs.tools.TestHdfsConfigFields | |
hadoop.hdfs.server.blockmanagement.TestBlockManager | |
Timed out tests | org.apache.hadoop.hdfs.TestDatanodeDeath |
Subsystem | Report/Notes |
---|---|
Patch URL | http://issues.apache.org/jira/secure/attachment/12733107/HDFS-8342-HDFS-7285.001.patch |
Optional Tests | javac unit findbugs checkstyle |
git revision | HDFS-7285 / a35936d |
Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/11001/artifact/patchprocess/patchReleaseAuditProblems.txt |
Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/11001/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html |
hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11001/artifact/patchprocess/testrun_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11001/testReport/ |
Java | 1.7.0_55 |
uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11001/console |
This message was automatically generated.
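Note on the FindBugs items above: the two "reliance on default encoding" findings and the "integer multiplication cast to long" finding are mechanical to resolve. Below is a minimal sketch of the usual fix pattern; the class and method names are illustrative only, not the actual ErasureCodingZoneManager or StripedBlockUtil code from the HDFS-7285 branch.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch only; not the actual HDFS-7285 branch code.
class FindbugsFixSketch {

  // "Reliance on default encoding": pass an explicit charset instead of
  // relying on the platform default for String <-> byte[] conversions.
  static byte[] zoneKeyToBytes(String key) {
    return key.getBytes(StandardCharsets.UTF_8);     // instead of key.getBytes()
  }

  static String bytesToZoneKey(byte[] raw) {
    return new String(raw, StandardCharsets.UTF_8);  // instead of new String(raw)
  }

  // "Result of integer multiplication cast to long": widen one operand first
  // so the multiplication happens in long arithmetic and cannot overflow int
  // before the cast.
  static long stripeSize(int cellSize, int numCells) {
    return (long) cellSize * numCells;               // instead of (long) (cellSize * numCells)
  }
}
```

The "should this be a static inner class" and "unread field" findings are similarly routine: mark the nested class static if it never references the enclosing instance, and either use or remove the unread field.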
Zhe Zhang
added a comment - Should we move it under HDFS-8031 or is it actually a won't fix?
Walter Su
added a comment -
TestWriteReadStripedFile.verifySeek() has a similar test. I just thought TestDFSStripedInputStream would be a better place for the unit test. It's trivial; the previous one works too. Let's move forward to a thorough system test.