Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version: 2.4.0
Description
HBASE-9393 found that seek+read leaves many CLOSE_WAIT sockets unless the stream is unbuffered; calling unbuffer() on the stream frees the sockets and file descriptors it holds.
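For context, this relies on the CanUnbuffer capability of HDFS streams (HDFS-7694): calling unbuffer() on an FSDataInputStream releases the cached DataNode connection while keeping the stream usable for later reads. A minimal sketch of the pattern, with a placeholder path that is not from this issue:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UnbufferSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical file path, used for illustration only.
    try (FSDataInputStream in = fs.open(new Path("/tmp/example"))) {
      byte[] buf = new byte[4096];
      in.read(0L, buf, 0, buf.length); // positional read (pread)
      // Without this call the stream keeps the DataNode connection cached,
      // which shows up as a CLOSE_WAIT socket on the client side.
      in.unbuffer();
    }
  }
}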
On our cluster, RegionServers with about one hundred thousand store files accumulate CLOSE_WAIT sockets as regions are opened; the count grows with the number of opened regions and can climb to the operating system open file limit of 1000000.
2020-11-12 20:19:02,452 WARN [1282990092@qtp-220038608-1 - Acceptor0 SelectChannelConnector@0.0.0.0:16030] mortbay.log: EXCEPTION
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    at org.mortbay.jetty.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:75)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:686)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
[hbase@gha-data-hbase-cat0053 hbase]$ ulimit -SHn 1000000
The root cause is in the store file open path. When a store file is opened:
private void open() throws IOException {
  fileInfo.initHDFSBlocksDistribution();
  long readahead = fileInfo.isNoReadahead() ? 0L : -1L;
  ReaderContext context = fileInfo.createReaderContext(false, readahead, ReaderType.PREAD);
  fileInfo.initHFileInfo(context);
  StoreFileReader reader = fileInfo.preStoreFileReaderOpen(context, cacheConf);
  if (reader == null) {
    reader = fileInfo.createReader(context, cacheConf);
    fileInfo.getHFileInfo().initMetaAndIndex(reader.getHFileReader());
  }
  ...
only createReader() unbuffers the stream. initMetaAndIndex() also uses the stream to read blocks, so that stream needs to be unbuffer()ed too.
We can simply wrap the code starting at fileInfo.initHFileInfo(context); in a try block and unbuffer() the stream in a finally block at the end of the open() function, as sketched below.
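A minimal sketch of that shape, assuming the stream wrapper is reachable through context.getInputStreamWrapper() (the accessor name is an assumption; the committed patch may release the stream differently):

private void open() throws IOException {
  fileInfo.initHDFSBlocksDistribution();
  long readahead = fileInfo.isNoReadahead() ? 0L : -1L;
  ReaderContext context = fileInfo.createReaderContext(false, readahead, ReaderType.PREAD);
  try {
    fileInfo.initHFileInfo(context);
    StoreFileReader reader = fileInfo.preStoreFileReaderOpen(context, cacheConf);
    if (reader == null) {
      reader = fileInfo.createReader(context, cacheConf);
      fileInfo.getHFileInfo().initMetaAndIndex(reader.getHFileReader());
    }
    // ... remainder of open() unchanged (elided in the original snippet)
  } finally {
    // Assumed accessor: release the sockets and buffers held by the
    // underlying stream once the meta and index blocks have been read.
    context.getInputStreamWrapper().unbuffer();
  }
}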
After applying this fix on our cluster, the number of CLOSE_WAIT sockets dropped to nearly zero.
Attachments
Issue Links
1. Backport HBASE-25287 to branch-1 | Resolved | Wei-Chiu Chuang