Details

Type: Bug
Status: Resolved
Priority: Critical
Resolution: Duplicate
Affects Version/s: 2.7.6
Fix Version/s: None
Component/s: None
Environment: Apache Ranger 1.2 && Hadoop 2.7.6
Description
When I integrated Ranger 1.2 with Hadoop 2.7.6, the following NPE occurred when executing hdfs dfs -ls /.
However, when I integrated Ranger 1.2 with Hadoop 2.7.1, hdfs dfs -ls / executed without any errors and the directory listing was displayed normally.
java.lang.NullPointerException
at java.lang.String.checkBounds(String.java:384)
at java.lang.String.<init>(String.java:425)
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from xxxxxx:8502 Call#0 Retry#0
When I checked and debugged the HDFS source code, I found that pathByNameArr[i] is null:
private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx,
    INode inode, int snapshotId) {
  INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
  if (getAttributesProvider() != null) {
    String[] elements = new String[pathIdx + 1];
    for (int i = 0; i < elements.length; i++) {
      elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
    }
    inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
  }
  return inodeAttrs;
}
I found that this code has already been fixed on the trunk branch, but the fix has not yet been merged into the latest 3.2.1 release.
I hope that this patch can be merged into the other branches as soon as possible. Thank you very much! The fixed code on trunk is:
private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx,
    INode inode, int snapshotId) {
  INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
  if (getAttributesProvider() != null) {
    String[] elements = new String[pathIdx + 1];
    /**
     * {@link INode#getPathComponents(String)} returns a null component
     * for the root only path "/". Assign an empty string if so.
     */
    if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
      elements[0] = "";
    } else {
      for (int i = 0; i < elements.length; i++) {
        elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
      }
    }
    inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
  }
  return inodeAttrs;
}
Issue Links

- is duplicated by HDFS-12614: FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured (Resolved)