Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 3.0.0-alpha4
- Labels: None
Description
Findbugs hasn't gotten a decent update in a few years. The community has since forked it and created https://github.com/spotbugs/spotbugs . Running the RC1 on trunk has pointed out some definite problem areas. I think it would be to our benefit to switch trunk over sooner rather than later, even though it's still in RC status.
Attachments
- HADOOP-14316.01.patch (2 kB, Allen Wittenauer)
- HADOOP-14316.00.patch (2 kB, Allen Wittenauer)
Issue Links
- is related to
  - HADOOP-14336 Cleanup findbugs warnings found by Spotbugs (Resolved)
  - OOZIE-2933 Switch from Findbugs to Spotbugs (Closed)
- relates to
  - HBASE-17954 Switch findbugs implementation to spotbugs (Resolved)
  - HADOOP-16866 Upgrade spotbugs to 4.0.6 (Resolved)
- supercedes
  - HADOOP-12937 Update maven-findbugs-plugin to use FindBugs 3.0.1 (Resolved)
Activity
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 14s | Docker mode activated. |
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
-1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
+1 | mvninstall | 14m 22s | trunk passed |
+1 | compile | 0m 9s | trunk passed |
+1 | mvnsite | 0m 11s | trunk passed |
+1 | mvneclipse | 0m 11s | trunk passed |
+1 | javadoc | 0m 10s | trunk passed |
+1 | mvninstall | 0m 8s | the patch passed |
+1 | compile | 0m 6s | the patch passed |
+1 | javac | 0m 6s | the patch passed |
+1 | mvnsite | 0m 9s | the patch passed |
+1 | mvneclipse | 0m 7s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | xml | 0m 1s | The patch has no ill-formed XML file. |
+1 | javadoc | 0m 7s | the patch passed |
+1 | unit | 0m 6s | hadoop-project in the patch passed. |
+1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
17m 1s |
Subsystem | Report/Notes |
---|---|
Docker | Image:yetus/hadoop:0ac17dc |
JIRA Issue | HADOOP-14316 |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863737/HADOOP-14316.00.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml |
uname | Linux c2ca0a242551 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 8dfcd95 |
Default Java | 1.8.0_121 |
Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12111/testReport/ |
modules | C: hadoop-project U: hadoop-project |
Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12111/console |
Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
+1 (non-binding)
Presumably there will be some followup "fix spotbug issues" JIRAs?
sgtm.
Presumably there will be some followup "fix spotbug issues" JIRAs?
Almost certainly. 62 new findbugs errors are quite a few and will likely be shocking. I didn't look through all of them, but of the handful that I did, they definitely pointed to problems. It's gonna take the community to fix them.
I'll drop a note to common-dev just to warn folks on commit.
Reason | Tests |
---|---|
FindBugs | module:hadoop-common-project/hadoop-minikdc |
 | Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:[line 368] |
FindBugs | module:hadoop-common-project/hadoop-auth |
 | org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192] |
FindBugs | module:hadoop-common-project/hadoop-common |
 | org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:[line 44] |
 | org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:[line 67] |
 | Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:[line 118] |
 | Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 387] |
 | Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) ignored, but method has no side effect At FTPFileSystem.java:[line 421] |
 | Useless condition:lazyPersist == true at this point At CommandWithDestination.java:[line 502] |
 | org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java:[line 78] |
 | org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:[line 97] |
 | org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java:[line 71] |
 | org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:[line 89] |
 | Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:[line 350] |
 | org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient use of keySet iterator instead of entrySet iterator At ECSchema.java:[line 191] |
 | Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398] |
 | org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory) unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:[line 49] |
 | org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) unconditionally sets the field miniClusterMode At DefaultMetricsSystem.java:[line 100] |
 | Useless object stored in variable seqOs of method org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier, AbstractDelegationTokenSecretManager$DelegationTokenInformation, boolean) At ZKDelegationTokenSecretManager.java:[line 886] |
 | Bad comparison of nonnegative value with 0 in org.apache.hadoop.tracing.TraceAdmin.run(String[]) At TraceAdmin.java:[line 169] |
FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
 | Possible exposure of partially initialized object in org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At DFSClient.java:[line 2856] |
 | org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) makes inefficient use of keySet iterator instead of entrySet iterator At SlowDiskReports.java:[line 105] |
FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
 | Possible null pointer dereference in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to return value of called method Dereferenced at JournalNode.java:[line 300] |
 | org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String) unconditionally sets the field clusterId At HdfsServerConstants.java:[line 193] |
 | org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int) unconditionally sets the field force At HdfsServerConstants.java:[line 217] |
 | org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean) unconditionally sets the field isForceFormat At HdfsServerConstants.java:[line 229] |
 | org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean) unconditionally sets the field isInteractiveFormat At HdfsServerConstants.java:[line 237] |
 | Possible null pointer dereference in org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, int, HardLink, boolean, File, List) due to return value of called method Dereferenced at DataStorage.java:[line 1333] |
 | Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, long) due to return value of called method Dereferenced at NNStorageRetentionManager.java:[line 258] |
 | Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, BasicFileAttributes) due to return value of called method Dereferenced at NNUpgradeUtil.java:[line 133] |
 | Useless condition:argv.length >= 1 at this point At DFSAdmin.java:[line 2010] |
 | Useless condition:numBlocks == -1 at this point At ImageLoaderCurrent.java:[line 727] |
FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
 | Possible null pointer dereference in org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogValue.getPendingLogFilesToUpload(File) due to return value of called method Method invoked at AggregatedLogFormat.java:[line 314] |
 | Possible null pointer dereference in org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.getProcessList() due to return value of called method Dereferenced at ProcfsBasedProcessTree.java:[line 499] |
FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager |
 | Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 644] |
 | org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 721] |
 | Hard coded reference to an absolute pathname in org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext) At DockerLinuxContainerRuntime.java:[line 455] |
 | org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus() makes inefficient use of keySet iterator instead of entrySet iterator At ContainerLocalizer.java:[line 334] |
 | org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics is a mutable collection which should be package protected At ContainerMetrics.java:[line 134] |
FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice |
 | Possible null pointer dereference in org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineReaderImpl.getEntities(File, String, TimelineEntityFilters, TimelineDataToRetrieve) due to return value of called method Dereferenced at FileSystemTimelineReaderImpl.java:[line 281] |
FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
 | org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.EMPTY_CONTAINER_LIST is a mutable collection which should be package protected At ApplicationMasterService.java:[line 396] |
 | org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.cleanupStaledPreemptionCandidates(long) makes inefficient use of keySet iterator instead of entrySet iterator At ProportionalCapacityPreemptionPolicy.java:[line 315] |
 | org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.transferStateFromAttempt(RMAppAttempt) makes inefficient use of keySet iterator instead of entrySet iterator At RMAppAttemptImpl.java:[line 1005] |
 | org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.EMPTY_CONTAINER_LIST is a mutable collection which should be package protected At AbstractYarnScheduler.java:[line 135] |
 | org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType.index field is public and mutable In NodeType.java |
 | org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.queueMetrics is a mutable collection At QueueMetrics.java:[line 151] |
 | org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager$1.compare(CSQueue, CSQueue) incorrectly handles float value At CapacitySchedulerQueueManager.java:[line 74] |
 | org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.cleanupPreemptionList() makes inefficient use of keySet iterator instead of entrySet iterator At FSSchedulerNode.java:[line 165] |
FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
 | Possible exposure of partially initialized object in org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getTimelineDelegationToken() At YarnClientImpl.java:[line 371] |
 | Useless condition:isAppFinished == false at this point At LogsCLI.java:[line 999] |
FindBugs | module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
 | Primitive is boxed to call Long.compareTo(Long): use Long.compare(long, long) instead At JVMId.java:[line 101] |
 | org.apache.hadoop.mapred.Operation.jobACLNeeded field is public and mutable In Operation.java |
 | org.apache.hadoop.mapred.Operation.qACLNeeded field is public and mutable In Operation.java |
FindBugs | module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app |
 | Possible null pointer dereference in new org.apache.hadoop.mapred.LocalContainerLauncher(AppContext, TaskUmbilicalProtocol, ClassLoader) due to return value of called method Dereferenced at LocalContainerLauncher.java:[line 124] |
 | Possible null pointer dereference in org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.relocalize() due to return value of called method Dereferenced at LocalContainerLauncher.java:[line 524] |
 | Possible null pointer dereference in org.apache.hadoop.mapreduce.v2.app.MRAppMaster.isJobNamePatternMatch(JobConf, String) due to return value of called method Dereferenced at MRAppMaster.java:[line 577] |
FindBugs | module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs |
 | Useless object stored in variable paths of method org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.moveToDone() At HistoryFileManager.java:[line 410] |
FindBugs | module:hadoop-mapreduce-project/hadoop-mapreduce-examples |
 | Possible null pointer dereference in org.apache.hadoop.examples.pi.Parser.parse(File, Map) due to return value of called method Dereferenced at Parser.java:[line 70] |
FindBugs | module:hadoop-tools/hadoop-rumen |
 | Return value of new org.apache.hadoop.tools.rumen.datatypes.DefaultDataType(String) ignored, but method has no side effect At MapReduceJobPropertiesParser.java:[line 211] |
FindBugs | module:hadoop-tools/hadoop-gridmix |
 | org.apache.hadoop.mapred.gridmix.InputStriper$1.compare(Map$Entry, Map$Entry) incorrectly handles double value At InputStriper.java:[line 136] |
 | org.apache.hadoop.mapred.gridmix.emulators.resourceusage.TotalHeapUsageEmulatorPlugin$DefaultHeapUsageEmulator.heapSpace is a mutable collection which should be package protected At TotalHeapUsageEmulatorPlugin.java:[line 132] |
FindBugs | module:hadoop-tools/hadoop-azure |
 | Useless object stored in variable keysToUpdateAsFolder of method org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(Path, FsPermission, boolean) At NativeAzureFileSystem.java:[line 2454] |
FindBugs | module:hadoop-tools/hadoop-sls |
 | org.apache.hadoop.yarn.sls.SLSRunner.simulateInfoMap is a mutable collection At SLSRunner.java:[line 103] |
Sorry for the possibly obvious question, but does spotbugs respect the same findbugs-exclude.xml file as findbugs? Does it have to be configured in the pom.xml like findbugs?
Also, if spotbugs is a superset, then we may as well replace findbugs rather than maintaining both.
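For what it's worth, SpotBugs kept FindBugs' filter file format, so an existing findbugs-exclude.xml should keep working unchanged through the plugin. Purely as an illustration of that shared format (the classes and bug patterns below are taken from the report above, but the entry itself is hypothetical, and suppressing a warning is of course not the same as fixing it):

```xml
<!-- Hypothetical filter entries shown only to illustrate the shared
     FindBugs/SpotBugs filter format; HADOOP-14336 tracks actually
     fixing the newly reported warnings. -->
<FindBugsFilter>
  <Match>
    <Class name="org.apache.hadoop.io.DoubleWritable" />
    <Bug pattern="CO_COMPARETO_INCORRECT_FLOATING" />
  </Match>
  <Match>
    <Class name="org.apache.hadoop.io.erasurecode.ECSchema" />
    <Bug pattern="WMI_WRONG_MAP_ITERATOR" />
  </Match>
</FindBugsFilter>
```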
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 16s | Docker mode activated. |
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
-1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
+1 | mvninstall | 14m 42s | trunk passed |
+1 | compile | 0m 11s | trunk passed |
+1 | mvnsite | 0m 12s | trunk passed |
+1 | mvneclipse | 0m 10s | trunk passed |
+1 | javadoc | 0m 10s | trunk passed |
+1 | mvninstall | 0m 8s | the patch passed |
+1 | compile | 0m 7s | the patch passed |
+1 | javac | 0m 7s | the patch passed |
+1 | mvnsite | 0m 11s | the patch passed |
+1 | mvneclipse | 0m 8s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | xml | 0m 1s | The patch has no ill-formed XML file. |
+1 | javadoc | 0m 7s | the patch passed |
+1 | unit | 0m 7s | hadoop-project in the patch passed. |
+1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
17m 29s |
Subsystem | Report/Notes |
---|---|
Docker | Image:yetus/hadoop:0ac17dc |
JIRA Issue | HADOOP-14316 |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12864053/HADOOP-14316.01.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml |
uname | Linux b4d6f8974fff 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 7e075a50 |
Default Java | 1.8.0_121 |
Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12129/testReport/ |
modules | C: hadoop-project U: hadoop-project |
Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12129/console |
Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
My understanding is:
- the maven findbugs plugin isn't actually part of findbugs
- the maven findbugs plugin has the ability to swap out which version of findbugs is in use; it's really nothing more than a maven driver (and thus all of that pom work remains)
- the spotbugs team is using that functionality to replace the findbugs engine that the plugin normally uses with their new one
- the various mvn command lines, exclusion files, etc, are actually handled by the plugin, not findbugs/spotbugs directly (a rough sketch of that wiring is below)
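In other words, the swap is a plain POM change: the build keeps using findbugs-maven-plugin and only the plugin's engine dependency is overridden. A minimal sketch, assuming placeholder plugin/engine versions and a hypothetical exclude-file path (this is not the actual HADOOP-14316 patch):

```xml
<!-- Sketch only: versions and the exclude-file path are placeholders,
     not the exact values committed in HADOOP-14316. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <version>3.0.4</version>
  <dependencies>
    <!-- Swap the bundled FindBugs engine for the SpotBugs analyzer. -->
    <dependency>
      <groupId>com.github.spotbugs</groupId>
      <artifactId>spotbugs</artifactId>
      <version>3.1.0-RC1</version>
    </dependency>
  </dependencies>
  <configuration>
    <!-- Exclusions remain plugin configuration, so they carry over as-is. -->
    <excludeFilterFile>${basedir}/dev-support/findbugs-exclude.xml</excludeFilterFile>
  </configuration>
</plugin>
```

Since the goals, exclusion file, and the rest of the pom wiring belong to the plugin, only the analysis engine underneath changes.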
OK, I'm going to commit this to trunk given -00 was +1'd and -01 is really just a minor tweak. If it breaks anything too horribly, we'll just revert.
Thanks all.
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11612 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11612/)
HADOOP-14316. Switch from Findbugs to Spotbugs (aw) (aw: rev 394589f38515655b55f9c4fbeaf03f41c0dd1355)
- (edit) hadoop-project/pom.xml
-00:
You can apply this patch then run:
to see what sticks out.