Details
Description
Currently, a mount point points to a single subcluster. We should be able to spread files in a mount point across subclusters.
Attachments
- HDFS-13224.000.patch (69 kB, Íñigo Goiri)
- HDFS-13224.001.patch (72 kB, Íñigo Goiri)
- HDFS-13224.002.patch (78 kB, Íñigo Goiri)
- HDFS-13224.003.patch (51 kB, Íñigo Goiri)
- HDFS-13224.004.patch (54 kB, Íñigo Goiri)
- HDFS-13224.005.patch (64 kB, Íñigo Goiri)
- HDFS-13224.006.patch (58 kB, Íñigo Goiri)
- HDFS-13224.007.patch (66 kB, Íñigo Goiri)
- HDFS-13224.008.patch (72 kB, Íñigo Goiri)
- HDFS-13224.009.patch (76 kB, Íñigo Goiri)
- HDFS-13224.010.patch (76 kB, Íñigo Goiri)
- HDFS-13224-branch-2.000.patch (76 kB, Íñigo Goiri)
Issue Links
- breaks
  - HDFS-13299 RBF: Fix compilation error in branch-2 (TestMultipleDestinationResolver) - Resolved
- is depended upon by
  - HDFS-13250 RBF: Router to manage requests across multiple subclusters - Resolved
- is related to
  - HDFS-10880 Federation Mount Table State Store internal API - Resolved
  - HADOOP-8298 ViewFs merge mounts - Open
  - HADOOP-12077 Provide a multi-URI replication Inode for ViewFs - Resolved
- relates to
  - HDFS-13291 RBF: Implement available space based OrderResolver - Resolved
  - HDFS-13845 RBF: The default MountTableResolver should fail resolving multi-destination paths - Resolved
  - HDFS-13237 [Documentation] RBF: Mount points across multiple subclusters - Resolved
  - HDFS-13815 RBF: Add check to order command - Resolved
  - HDFS-13817 RBF: create mount point with RANDOM policy and with 2 Nameservices doesn't work properly - Resolved
  - HDFS-13810 RBF: UpdateMountTableEntryRequest isn't validating the record. - Patch Available
Activity
I should add some documentation at some point (maybe in a separate JIRA?), but the idea is that one can specify multiple subclusters for a mount point.
However, there are different approaches to doing this; currently we have:
- HASH: use consistent hashing at the first level under the mount point and decide the subcluster based on that (a sketch of the idea follows this comment).
- LOCAL: use the local subcluster (good for locality).
- RANDOM: pick a random subcluster (good for load balancing).
- HASH_ALL: distribute all the files in the mount point subtree using consistent hashing. The problem with this approach is that it requires the whole tree structure (subfolders) to exist in all subclusters.
We have all of these working internally, but there seems to be a preference for HASH_ALL even though it has some limitations.
It may make sense to split this into a couple of JIRAs. Anyway, let's get some feedback and proposals on HDFS-13224.000.patch for now.
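To make the HASH/HASH_ALL idea more concrete, here is a minimal consistent-hashing sketch in Java. The class and method names are illustrative only (this is not the ConsistentHashRing added by the patch): the first path component under the mount point is hashed onto a ring of subclusters, so sibling folders spread out while a given folder always maps to the same subcluster.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

/** Illustrative consistent-hash ring; not the ConsistentHashRing from the patch. */
public class SubclusterHashRing {
  private final TreeMap<Long, String> ring = new TreeMap<>();

  public SubclusterHashRing(List<String> subclusters) {
    for (String ns : subclusters) {
      // A few virtual nodes per subcluster smooth out the distribution.
      for (int i = 0; i < 16; i++) {
        ring.put(hash(ns + "-" + i), ns);
      }
    }
  }

  /** Pick a subcluster from the first path component under the mount point. */
  public String resolve(String pathUnderMountPoint) {
    String p = pathUnderMountPoint.replaceAll("^/+", "");
    String firstComponent = p.contains("/") ? p.substring(0, p.indexOf('/')) : p;
    SortedMap<Long, String> tail = ring.tailMap(hash(firstComponent));
    return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
  }

  private static long hash(String key) {
    try {
      byte[] d = MessageDigest.getInstance("MD5")
          .digest(key.getBytes(StandardCharsets.UTF_8));
      long h = 0;
      for (int i = 0; i < 8; i++) {
        h = (h << 8) | (d[i] & 0xff);
      }
      return h;
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException(e);
    }
  }
}
```

With HASH only the first level under the mount point is spread this way; HASH_ALL applies the same resolution to every file in the subtree, which is why the folder structure has to exist in every subcluster.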
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 21s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 18m 42s | trunk passed |
+1 | compile | 0m 58s | trunk passed |
+1 | checkstyle | 0m 53s | trunk passed |
+1 | mvnsite | 1m 1s | trunk passed |
+1 | shadedclient | 12m 18s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 1m 57s | trunk passed |
+1 | javadoc | 0m 55s | trunk passed |
Patch Compile Tests | |||
-1 | mvninstall | 0m 30s | hadoop-hdfs in the patch failed. |
-1 | compile | 0m 31s | hadoop-hdfs in the patch failed. |
-1 | javac | 0m 31s | hadoop-hdfs in the patch failed. |
-0 | checkstyle | 0m 46s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) |
-1 | mvnsite | 0m 31s | hadoop-hdfs in the patch failed. |
-1 | whitespace | 0m 0s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
-1 | shadedclient | 3m 40s | patch has errors when building and testing our client artifacts. |
-1 | findbugs | 0m 34s | hadoop-hdfs in the patch failed. |
-1 | javadoc | 0m 52s | hadoop-hdfs-project_hadoop-hdfs generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) |
Other Tests | |||
-1 | unit | 0m 33s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
45m 6s |
This message was automatically generated.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 16m 7s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 17m 5s | trunk passed |
+1 | compile | 0m 53s | trunk passed |
+1 | checkstyle | 0m 50s | trunk passed |
+1 | mvnsite | 0m 59s | trunk passed |
+1 | shadedclient | 11m 44s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 1s | trunk passed |
+1 | javadoc | 0m 52s | trunk passed |
Patch Compile Tests | |||
-1 | mvninstall | 0m 29s | hadoop-hdfs in the patch failed. |
-1 | compile | 0m 30s | hadoop-hdfs in the patch failed. |
-1 | javac | 0m 30s | hadoop-hdfs in the patch failed. |
-0 | checkstyle | 0m 47s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
-1 | mvnsite | 0m 30s | hadoop-hdfs in the patch failed. |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
-1 | shadedclient | 3m 25s | patch has errors when building and testing our client artifacts. |
-1 | findbugs | 0m 30s | hadoop-hdfs in the patch failed. |
-1 | javadoc | 0m 50s | hadoop-hdfs-project_hadoop-hdfs generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) |
Other Tests | |||
-1 | unit | 0m 31s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
57m 50s |
This message was automatically generated.
Thanks for the patch, elgoiri. Took a quick pass, and the entire design LGTM.
Have some questions here:
- The RouterRpcServer.isPathAll() function currently returns true only if the mount point is HASH_ALL. For some others like RANDOM, it should also return true, right?
- For HashFirstResolver, do we also need to handle the temporary naming pattern there?
One minor thing: in MultipleDestinationMountTableResolver.java, the javadoc "It has three options to order the locations:" should be updated to four.
Thanks ywskycn for the comments.
- RANDOM should do the same, yes. However, we have been setting it up manually and using it as read-only, so it wasn't that relevant. I'll have to do some work to fully support RANDOM; I'm not sure how to support writing files, though.
- I think HashFirstResolver uses the superclass HashResolver to do the temporary file name extraction.
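To illustrate why the temporary naming pattern matters for hashing, here is a hypothetical helper (the actual pattern used by HashResolver may differ): if a client writes `part-0.csv._COPYING_` and later renames it to `part-0.csv`, both names must resolve to the same subcluster, so the temporary suffix is stripped before hashing.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical helper; the actual pattern used by HashResolver may differ. */
public final class TempFileNames {
  // Matches common temporary suffixes such as "._COPYING_", ".tmp" or numeric parts.
  private static final Pattern TEMP_SUFFIX =
      Pattern.compile("(.+)(\\._COPYING_|\\.tmp|\\.[0-9]+)$");

  private TempFileNames() {
  }

  /** Return the final file name that a temporary file will eventually have. */
  public static String stripTempSuffix(String fileName) {
    Matcher m = TEMP_SUFFIX.matcher(fileName);
    return m.matches() ? m.group(1) : fileName;
  }

  public static void main(String[] args) {
    // Both print "part-0.csv", so both hash to the same subcluster.
    System.out.println(stripTempSuffix("part-0.csv._COPYING_"));
    System.out.println(stripTempSuffix("part-0.csv"));
  }
}
```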
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 16s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 14m 59s | trunk passed |
+1 | compile | 0m 52s | trunk passed |
+1 | checkstyle | 0m 39s | trunk passed |
+1 | mvnsite | 0m 54s | trunk passed |
+1 | shadedclient | 13m 24s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 1m 45s | trunk passed |
+1 | javadoc | 0m 51s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 0m 54s | the patch passed |
+1 | compile | 0m 50s | the patch passed |
+1 | javac | 0m 50s | the patch passed |
+1 | checkstyle | 0m 40s | the patch passed |
+1 | mvnsite | 0m 51s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 9m 35s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 8s | the patch passed |
+1 | javadoc | 0m 52s | the patch passed |
Other Tests | |||
-1 | unit | 96m 54s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
146m 35s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | |
hadoop.hdfs.server.federation.router.TestRouterHashResolver | |
hadoop.hdfs.web.TestWebHdfsTimeouts | |
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | |
hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12913115/HDFS-13224.002.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
uname | Linux a8a69f581f69 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 245751f |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23306/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23306/testReport/ |
Max. process+thread count | 4981 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23306/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
elgoiri, the initial design and patch look good overall. Just two comments:
1. In some RPC calls like setPermission, we do the isPathAll(src) check and call invokeConcurrent, but in some other places we don't; is there any difference between those places? I mean, why don't we do the isPathAll(src) check in all Router RPC server calls? This is a little confusing (see the sketch below).
2. A new order type based on the available space in the destination subcluster may also be needed. That is to say, files would be written to the destination subcluster with the most available space.
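To make point 1 concrete, here is a self-contained sketch of the decision being discussed. The real Router combines RouterRpcServer.isPathAll() with RouterRpcClient.invokeConcurrent as mentioned above; everything below (class, method signatures, the Consumer stand-in) is simplified for illustration only.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

/** Simplified illustration of the fan-out decision; not the actual Router code. */
public class FanOutExample {

  /**
   * True when the mount point spreads its whole subtree (currently HASH_ALL;
   * the question above is whether RANDOM and others should count too).
   */
  static boolean isPathAll(String destinationOrder) {
    return "HASH_ALL".equals(destinationOrder);
  }

  static void apply(String destinationOrder, List<String> destinations,
      Consumer<String> operation) {
    if (isPathAll(destinationOrder)) {
      // The file could live in any subcluster, so metadata operations such as
      // setPermission must be applied to every destination (invokeConcurrent).
      destinations.forEach(operation);
    } else {
      // A single effective destination is enough (sequential invocation).
      operation.accept(destinations.get(0));
    }
  }

  public static void main(String[] args) {
    apply("HASH_ALL", Arrays.asList("ns0", "ns1"),
        ns -> System.out.println("setPermission on " + ns));
  }
}
```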
Thanks linyiqun for the comments.
- I actually just went through the ones that were most relevant for our workloads; I didn't do an exhaustive pass. I'll do this in the next iteration.
- That sounds good. My only concern is the size of the patch (I'm even considering removing some stuff from the current one). What about doing it in a follow-up JIRA?
I'm thinking of adding an extra JIRA to document this carefully.
For the available space policy, it could follow the same idea in HDFS-8131. In short, using a higher probability (instead of "always") to choose the cluster with higher available space.
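A minimal sketch of that HDFS-8131-style idea follows; the names and the exact probability are illustrative, and the eventual available-space resolver (HDFS-13291) may work differently. The point is to prefer, but not always pick, the subcluster with more free space, so writes lean towards the emptier subcluster without hammering it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

/** Illustrative available-space chooser; not the actual resolver. */
public class AvailableSpaceChooser {
  /** Probability of preferring the subcluster with more free space. */
  private static final float PREFERENCE = 0.6f;
  private final Random random = new Random();

  public String choose(Map<String, Long> availableSpacePerSubcluster) {
    List<String> subclusters = new ArrayList<>(availableSpacePerSubcluster.keySet());
    // Sample two random candidates and compare their free space.
    String a = subclusters.get(random.nextInt(subclusters.size()));
    String b = subclusters.get(random.nextInt(subclusters.size()));
    String bigger =
        availableSpacePerSubcluster.get(a) >= availableSpacePerSubcluster.get(b) ? a : b;
    String smaller = bigger.equals(a) ? b : a;
    // Usually take the bigger one, sometimes the other, to stay balanced.
    return random.nextFloat() < PREFERENCE ? bigger : smaller;
  }
}
```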
Hi elgoiri,
That sounds good. My only concern is the size of the patch (I'm even considering removing some stuff from the current one). What about doing it in a follow-up JIRA?
Yes, we can file another JIRA for tracking this and use the same idea from HDFS-8131 that ywskycn mentioned.
In addition, the patch looks a little big and not convenient to review. Based on the order type, can we split the patch into three parts:
- The basic implementation of OrderedResolver, including the LocalResolver and RandomResolver.
- Implement the hash resolvers, including the HashFirstResolver and HashResolver.
- The available-space-based order type could be the third part. I can help implement this if you are busy.
In addition, the patch looks a little big and not convenient to review. Based on the order type, can we split the patch into three parts:
- The basic implementation of OrderedResolver, including the LocalResolver and RandomResolver.
- Implement the hash resolvers, including the HashFirstResolver and HashResolver.
- The available-space-based order type could be the third part. I can help implement this if you are busy.
Agreed. I may even do one just for the RouterRpcServer; let me try to split this locally into pieces, and I may end up doing it in 3 or 4 JIRAs.
I'll create one for the space-based order type and assign it to you.
I'll give it a try tomorrow.
In HDFS-13224.003.patch I left only the resolvers for multiple subclusters.
I created HDFS-13250 to add the support to the Router itself.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 39s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 20m 41s | trunk passed |
+1 | compile | 1m 16s | trunk passed |
+1 | checkstyle | 0m 57s | trunk passed |
+1 | mvnsite | 1m 18s | trunk passed |
+1 | shadedclient | 13m 20s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 20s | trunk passed |
+1 | javadoc | 1m 5s | trunk passed |
Patch Compile Tests | |||
-1 | mvninstall | 0m 40s | hadoop-hdfs in the patch failed. |
-1 | compile | 0m 42s | hadoop-hdfs in the patch failed. |
-1 | javac | 0m 42s | hadoop-hdfs in the patch failed. |
+1 | checkstyle | 0m 54s | the patch passed |
-1 | mvnsite | 0m 42s | hadoop-hdfs in the patch failed. |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
-1 | shadedclient | 4m 0s | patch has errors when building and testing our client artifacts. |
-1 | findbugs | 0m 19s | hadoop-hdfs in the patch failed. |
+1 | javadoc | 1m 1s | the patch passed |
Other Tests | |||
-1 | unit | 0m 47s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
50m 45s |
This message was automatically generated.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 38s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 19m 42s | trunk passed |
+1 | compile | 0m 57s | trunk passed |
+1 | checkstyle | 0m 52s | trunk passed |
+1 | mvnsite | 1m 5s | trunk passed |
+1 | shadedclient | 12m 42s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 17s | trunk passed |
+1 | javadoc | 0m 59s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 1m 15s | the patch passed |
+1 | compile | 1m 6s | the patch passed |
+1 | javac | 1m 6s | the patch passed |
+1 | checkstyle | 0m 53s | the patch passed |
+1 | mvnsite | 1m 5s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 11m 35s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 8s | the patch passed |
+1 | javadoc | 0m 54s | the patch passed |
Other Tests | |||
-1 | unit | 129m 57s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
188m 9s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
hadoop.hdfs.TestReadStripedFileWithMissingBlocks | |
hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | |
hadoop.hdfs.server.namenode.TestNameNodeMXBean | |
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | |
hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver | |
hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | |
hadoop.hdfs.server.namenode.TestDecommissioningStatus |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12913681/HDFS-13224.004.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
uname | Linux 80a6229b5ecd 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 113f401 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23360/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23360/testReport/ |
Max. process+thread count | 3146 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23360/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
Thanks for splitting the patch, elgoiri. I still haven't fully reviewed it; some minor comments:
MultipleDestinationMountTableResolver.java
Line 49: the javadoc "It has three options to order..." should be updated to four options; HASH_ALL is not included.
LocalResolver.java
Line 111: Time.now() is not recommended for calculating elapsed time; use Time.monotonicNow() instead.
Line 111: the interval TimeUnit.SECONDS.toMillis(10) can be defined as a static final variable in LocalResolver so it is easy to find and update.
Line 128: use Time.monotonicNow() to update the lastUpdated variable (see the sketch after this comment).
Another thing I am thinking about: do we need to update the <IP -> subcluster> mapping so frequently? Once a cluster is set up, we won't make many adjustments (removing/adding nodes) every day or hour, so the IP addresses of these nodes won't change that often. We could bump the interval to one day.
I also haven't looked deeply into the unit tests; I will take a full review soon.
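As a small illustration of the Time.monotonicNow() suggestion above (a hypothetical class, not the actual LocalResolver code), the refresh guard could look roughly like this:

```java
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.util.Time;

/** Illustrative refresh guard; the real LocalResolver keeps more state. */
public class SubclusterMappingCache {
  /** Minimum time in ms between refreshes of the IP -> subcluster map. */
  private static final long MIN_UPDATE_PERIOD = TimeUnit.SECONDS.toMillis(10);

  private long lastUpdated = 0;

  public synchronized void refreshIfStale() {
    long now = Time.monotonicNow(); // monotonic clock, safe for elapsed time
    if (now - lastUpdated >= MIN_UPDATE_PERIOD) {
      reload();
      lastUpdated = now;
    }
  }

  private void reload() {
    // Rebuild the IP -> subcluster mapping from the state store (omitted).
  }
}
```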
Thanks linyiqun for the comments on HDFS-13224.004.patch.
I tackled most of them in HDFS-13224.005.patch.
I left out the one about the mapping update because I'm not sure what a good period would be.
In our deployment we can get large chunks of DNs changing subclusters after reimaging.
This is not super common (once a day is reasonable actually) but we would like to catch those when they happen.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 22s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
trunk Compile Tests | |||
0 | mvndep | 1m 24s | Maven dependency ordering for branch |
+1 | mvninstall | 15m 31s | trunk passed |
+1 | compile | 14m 11s | trunk passed |
+1 | checkstyle | 2m 24s | trunk passed |
+1 | mvnsite | 3m 51s | trunk passed |
+1 | shadedclient | 15m 24s | branch has no errors when building and testing our client artifacts. |
0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
+1 | findbugs | 5m 22s | trunk passed |
+1 | javadoc | 3m 4s | trunk passed |
Patch Compile Tests | |||
0 | mvndep | 0m 16s | Maven dependency ordering for patch |
+1 | mvninstall | 2m 54s | the patch passed |
+1 | compile | 11m 49s | the patch passed |
+1 | javac | 11m 49s | the patch passed |
-0 | checkstyle | 2m 31s | root: The patch generated 1 new + 19 unchanged - 2 fixed = 20 total (was 21) |
+1 | mvnsite | 3m 59s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | xml | 0m 1s | The patch has no ill-formed XML file. |
+1 | shadedclient | 9m 19s | patch has no errors when building and testing our client artifacts. |
0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
+1 | findbugs | 6m 12s | the patch passed |
+1 | javadoc | 3m 5s | the patch passed |
Other Tests | |||
+1 | unit | 8m 22s | hadoop-common in the patch passed. |
-1 | unit | 137m 38s | hadoop-hdfs in the patch failed. |
+1 | unit | 4m 31s | hadoop-mapreduce-client-core in the patch passed. |
+1 | unit | 4m 56s | hadoop-yarn-services-core in the patch passed. |
+1 | unit | 0m 30s | hadoop-yarn-services-api in the patch passed. |
+1 | unit | 0m 19s | hadoop-yarn-site in the patch passed. |
+1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
255m 4s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
hadoop.hdfs.web.TestWebHdfsTimeouts | |
hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12913811/HDFS-13224.005.patch |
Optional Tests | asflicense mvnsite unit compile javac javadoc mvninstall shadedclient findbugs checkstyle xml |
uname | Linux f98493c48892 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 99ab511 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23374/artifact/out/diff-checkstyle-root.txt |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23374/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23374/testReport/ |
Max. process+thread count | 4743 (vs. ulimit of 10000) |
modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23374/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
I have taken a deeper review; some comments (most of them for the unit tests):
LocalResolver.java
1.
I left out the one about the mapping update because I'm not sure what a good period would be.
In our deployment we can get large chunks of DNs changing subclusters after reimaging.
This is not super common (once a day is reasonable actually) but we would like to catch those when they happen.
I suggest adding a TODO comment indicating that we can improve this later by making the period configurable:
```
+  /** Minimum time in ms to update the local cache. 10 seconds by default. */  // TODO: ...
+  private static final long MIN_UPDATE_PERIOD = TimeUnit.SECONDS.toMillis(10);
```
2. Line 255: we can use the util class HostAndPort.fromString(addr).getHostText() to parse the host name from the given address string (illustrated after this comment).
TestMultipleDestinationResolver.java
1. Line 85: there are no tests covering LocalResolver.
2. Line 103: read-only mount points aren't tested either; what is the test case for this?
3. Line 222: for (int f=0; f<100; f++) { needs spaces around = and <.
4. We need a corner-case test for when OrderedResolver.getFirstNamespace doesn't find the first namespace, and we should verify the correctness of the PathLocation returned by MultipleDestinationMountTableResolver.getDestinationForPath(String path).
Others look good.
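A quick illustration of the Guava call suggested in the LocalResolver comment above; the address string here is made up, and in the Router it would come from the DataNode/membership information.

```java
import com.google.common.net.HostAndPort;

public class HostParsingExample {
  public static void main(String[] args) {
    // Example DataNode address; LocalResolver would receive something similar.
    String addr = "dn-1.example.com:9866";
    String host = HostAndPort.fromString(addr).getHostText();
    System.out.println(host); // prints "dn-1.example.com"
  }
}
```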
Thanks linyiqun for the comments.
I posted HDFS-13224.006.patch solving most of them; a couple coming soon.
elgoiri, thanks for updating the patch. Review comments based on v006 patch.
- #1 and #4 of my last review comments for TestMultipleDestinationResolver still seem to need to be addressed in the next patch.
- Can we document the setting DFSConfigKeys.FEDERATION_ROUTER_PREFIX + "local-resolver.update-period"; in hdfs-default.xml?
There are two parts of work to do in the Router Admin:
- Update the Router CLI usage and documentation to add the DestinationOrder option.
- Add some tests for the Router Admin CLI to check that the destination order is added/updated correctly via Router admin commands.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 20s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 18m 28s | trunk passed |
+1 | compile | 0m 55s | trunk passed |
+1 | checkstyle | 0m 50s | trunk passed |
+1 | mvnsite | 1m 2s | trunk passed |
+1 | shadedclient | 11m 57s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 7s | trunk passed |
+1 | javadoc | 0m 54s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 0m 59s | the patch passed |
+1 | compile | 0m 55s | the patch passed |
+1 | cc | 0m 55s | the patch passed |
+1 | javac | 0m 55s | the patch passed |
-0 | checkstyle | 0m 47s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
+1 | mvnsite | 0m 57s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 11m 37s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 13s | the patch passed |
+1 | javadoc | 0m 52s | the patch passed |
Other Tests | |||
-1 | unit | 115m 35s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
170m 29s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12913956/HDFS-13224.006.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
uname | Linux 92bbb2478f5e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / e1f5251 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23400/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23400/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23400/testReport/ |
Max. process+thread count | 3083 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23400/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
In HDFS-13224.008.patch, I added TestLocalResolver; I just have to say it's a mocking piece of art.
It would be even cleaner if PowerMock were available, but it's not.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 24s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 15m 21s | trunk passed |
+1 | compile | 0m 53s | trunk passed |
+1 | checkstyle | 0m 45s | trunk passed |
+1 | mvnsite | 0m 57s | trunk passed |
+1 | shadedclient | 11m 12s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 1m 45s | trunk passed |
+1 | javadoc | 0m 54s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 0m 56s | the patch passed |
+1 | compile | 0m 49s | the patch passed |
+1 | cc | 0m 49s | the patch passed |
+1 | javac | 0m 49s | the patch passed |
+1 | checkstyle | 0m 41s | the patch passed |
+1 | mvnsite | 0m 55s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 10m 20s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 1m 50s | the patch passed |
+1 | javadoc | 0m 52s | the patch passed |
Other Tests | |||
-1 | unit | 118m 48s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
167m 40s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized | |
hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | |
hadoop.hdfs.TestRollingUpgrade | |
hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits | |
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | |
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914373/HDFS-13224.007.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
uname | Linux d9d579a9750a 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 9714fc1 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23465/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23465/testReport/ |
Max. process+thread count | 4242 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23465/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
elgoiri, thanks for adding the unit tests. The patch seems ready to go.
Only one minor comment: would you add the ASF license header to TestLocalResolver? It looks strange that this wasn't detected by Jenkins.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 27s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 18m 37s | trunk passed |
+1 | compile | 1m 8s | trunk passed |
+1 | checkstyle | 0m 53s | trunk passed |
+1 | mvnsite | 1m 5s | trunk passed |
+1 | shadedclient | 12m 2s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 1s | trunk passed |
+1 | javadoc | 0m 54s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 1m 1s | the patch passed |
+1 | compile | 0m 53s | the patch passed |
+1 | cc | 0m 53s | the patch passed |
+1 | javac | 0m 53s | the patch passed |
+1 | checkstyle | 0m 46s | the patch passed |
+1 | mvnsite | 0m 58s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 10m 59s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 17s | the patch passed |
+1 | javadoc | 0m 57s | the patch passed |
Other Tests | |||
-1 | unit | 104m 49s | hadoop-hdfs in the patch failed. |
-1 | asflicense | 0m 22s | The patch generated 1 ASF License warnings. |
159m 43s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | |
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | |
hadoop.hdfs.TestMaintenanceState |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914397/HDFS-13224.008.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
uname | Linux 91e2a80eaa20 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / ad1b988 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23467/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23467/testReport/ |
asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/23467/artifact/out/patch-asflicense-problems.txt |
Max. process+thread count | 2961 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23467/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
I uploaded HDFS-13224.009.patch with the fix for the license.
I also used this JIRA to fix a couple issues with MountTableResolver.
They are pretty minor, so I went ahead and squeezed them in here.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 41s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 15m 39s | trunk passed |
+1 | compile | 0m 51s | trunk passed |
+1 | checkstyle | 0m 41s | trunk passed |
+1 | mvnsite | 0m 55s | trunk passed |
+1 | shadedclient | 10m 19s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 1m 48s | trunk passed |
+1 | javadoc | 0m 49s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 0m 55s | the patch passed |
+1 | compile | 0m 49s | the patch passed |
+1 | cc | 0m 49s | the patch passed |
+1 | javac | 0m 49s | the patch passed |
+1 | checkstyle | 0m 40s | the patch passed |
+1 | mvnsite | 0m 55s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 11m 37s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 1m 52s | the patch passed |
+1 | javadoc | 0m 48s | the patch passed |
Other Tests | |||
-1 | unit | 104m 47s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
154m 20s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
hadoop.hdfs.web.TestWebHdfsTimeouts | |
hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | |
hadoop.hdfs.server.federation.resolver.TestMountTableResolver |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914531/HDFS-13224.009.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
uname | Linux b9b8c08f051c 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 4c57fb0 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23478/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23478/testReport/ |
Max. process+thread count | 3639 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23478/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 25s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 18m 4s | trunk passed |
+1 | compile | 0m 59s | trunk passed |
+1 | checkstyle | 0m 51s | trunk passed |
+1 | mvnsite | 1m 4s | trunk passed |
+1 | shadedclient | 12m 32s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 18s | trunk passed |
+1 | javadoc | 1m 6s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 1m 10s | the patch passed |
+1 | compile | 1m 6s | the patch passed |
+1 | cc | 1m 6s | the patch passed |
+1 | javac | 1m 6s | the patch passed |
+1 | checkstyle | 0m 52s | the patch passed |
+1 | mvnsite | 1m 16s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 12m 39s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 26s | the patch passed |
+1 | javadoc | 1m 3s | the patch passed |
Other Tests | |||
-1 | unit | 115m 31s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
173m 40s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.TestPread |
hadoop.hdfs.TestReadStripedFileWithMissingBlocks | |
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | |
hadoop.hdfs.server.federation.resolver.TestMountTableResolver | |
hadoop.hdfs.TestPersistBlocks | |
hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | |
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | |
hadoop.hdfs.TestRollingUpgrade | |
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914531/HDFS-13224.009.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
uname | Linux d0e334fe59ec 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 4c57fb0 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23477/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23477/testReport/ |
Max. process+thread count | 2790 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23477/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 29s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 17m 35s | trunk passed |
+1 | compile | 0m 57s | trunk passed |
+1 | checkstyle | 0m 49s | trunk passed |
+1 | mvnsite | 1m 2s | trunk passed |
+1 | shadedclient | 11m 41s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 1m 58s | trunk passed |
+1 | javadoc | 0m 56s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 1m 3s | the patch passed |
+1 | compile | 0m 53s | the patch passed |
+1 | cc | 0m 53s | the patch passed |
+1 | javac | 0m 53s | the patch passed |
+1 | checkstyle | 0m 48s | the patch passed |
+1 | mvnsite | 1m 0s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 10m 52s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 9s | the patch passed |
+1 | javadoc | 0m 57s | the patch passed |
Other Tests | |||
-1 | unit | 128m 26s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
181m 46s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | |
hadoop.hdfs.server.namenode.TestMetaSave | |
hadoop.hdfs.server.namenode.TestDecommissioningStatus | |
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914581/HDFS-13224.010.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
uname | Linux 6dc5e54e174d 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 252c2b4 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23484/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23484/testReport/ |
Max. process+thread count | 2795 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23484/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
LGTM, +1. Thanks elgoiri. You may attach a clean patch in HDFS-13250 after this commit and I will take a review today.
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13845 (See https://builds.apache.org/job/Hadoop-trunk-Commit/13845/)
HDFS-13224. RBF: Resolvers to support mount points across multiple (inigoiri: rev e71bc00a471422ddb26dd54e706f09f0fe09925c)
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/FederationProtocol.proto
- (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/order/TestLocalResolver.java
- (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/utils/package-info.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
- (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/RandomResolver.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MountTablePBImpl.java
- (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MultipleDestinationMountTableResolver.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/DestinationOrder.java
- (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/utils/ConsistentHashRing.java
- (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/OrderedResolver.java
- (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMultipleDestinationResolver.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
- (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/HashFirstResolver.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
- (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/LocalResolver.java
- (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/HashResolver.java
elgoiri, could you please share the design doc for this feature? I have no idea what kind of scenario needs to cross subclusters.
Thanks
Daniel Ma, it's been almost five years so I'm having a hard time finding design docs.
We added some documentation explaining the idea here: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html
The `Multiple subclusters` section explains some of the use cases.
This would be similar to what HADOOP-8298 proposes. I'll post a patch for it this week.