Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 3.4.0
- Hadoop Flags: Reviewed
Description
When looking at the code related to the DataNode lock, it was found that the relevant configuration keys are no longer used and could probably be removed:
public static final String DFS_DATANODE_LOCK_READ_WRITE_ENABLED_KEY =
    "dfs.datanode.lock.read.write.enabled";
public static final Boolean DFS_DATANODE_LOCK_READ_WRITE_ENABLED_DEFAULT = true;
public static final String DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_KEY =
    "dfs.datanode.lock-reporting-threshold-ms";
public static final long DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_DEFAULT = 300L;

<property>
  <name>dfs.datanode.lock.read.write.enabled</name>
  <value>true</value>
  <description>If this is true, the FsDataset lock will be a read write lock.
    If it is false, all locks will be a write lock.
    Enabling this should give better datanode throughput, as many read only
    functions can run concurrently under the read lock, when they would
    previously have required the exclusive write lock. As the feature is
    experimental, this switch can be used to disable the shared read lock, and
    cause all lock acquisitions to use the exclusive write lock.
  </description>
</property>

<property>
  <name>dfs.datanode.lock-reporting-threshold-ms</name>
  <value>300</value>
  <description>When thread waits to obtain a lock, or a thread holds a lock for
    more than the threshold, a log message will be written. Note that
    dfs.lock.suppress.warning.interval ensures a single log message is emitted
    per interval for waiting threads and a single message for holding threads
    to avoid excessive logging.
  </description>
</property>
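For context, the sketch below illustrates the behaviour these two keys describe: choosing between a shared read lock and the exclusive write lock, and reporting when a lock is held longer than the configured threshold. It is not the actual FsDatasetImpl code and uses plain JDK locks with hypothetical names (DataNodeLockSketch, runRead), assuming the key values shown above are passed in as plain booleans/longs.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch only; not the HDFS implementation.
public class DataNodeLockSketch {
  private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock(true);
  private final boolean readWriteEnabled;   // dfs.datanode.lock.read.write.enabled
  private final long reportingThresholdMs;  // dfs.datanode.lock-reporting-threshold-ms

  public DataNodeLockSketch(boolean readWriteEnabled, long reportingThresholdMs) {
    this.readWriteEnabled = readWriteEnabled;
    this.reportingThresholdMs = reportingThresholdMs;
  }

  // Read-only callers share the read lock only when the switch is enabled;
  // otherwise every acquisition falls back to the exclusive write lock.
  private Lock readLock() {
    return readWriteEnabled ? rwLock.readLock() : rwLock.writeLock();
  }

  // Run a read-only action under the (possibly shared) lock and report long holds.
  public void runRead(Runnable action) {
    Lock lock = readLock();
    lock.lock();
    long start = System.nanoTime();
    try {
      action.run();
    } finally {
      long heldMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
      lock.unlock();
      if (heldMs > reportingThresholdMs) {
        System.err.println("Lock held for " + heldMs
            + " ms, above threshold " + reportingThresholdMs + " ms");
      }
    }
  }
}

If the keys really are not read anywhere in the current code, removing both the constants and the hdfs-default.xml entries would keep the documented configuration surface consistent with the code.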
Attachments
Issue Links
- links to