Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Incomplete
- Affects Version/s: 2.3.2
- Fix Version/s: None
Description
I use Spark to process data in HDFS and HBase. A single thread consumes messages from a queue and submits the work to a fixed-size thread pool (16 threads) for Spark processing.
After running for some time, the number of active jobs grows into the thousands, and the number of active tasks goes negative.
According to the driver logs, these jobs have actually already completed.
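To make the setup concrete, here is a minimal sketch of the submission pattern described above: one consumer thread draining a queue and handing work to a 16-thread fixed pool. The class name, queue contents, and the placeholder task body are illustrative assumptions; in the real application the submitted task would run a Spark action against a shared SparkContext.

```java
import java.util.concurrent.*;

public class QueueDrivenSubmitter {
    public static void main(String[] args) throws Exception {
        // Fixed-size pool of 16 worker threads, matching the reported setup.
        ExecutorService pool = Executors.newFixedThreadPool(16);
        // Stand-in for the external message queue in the report.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("message-1");
        queue.put("message-2");

        // Single consumer thread: take a message, submit processing to the pool.
        for (int i = 0; i < 2; i++) {
            final String msg = queue.take();
            pool.submit(() -> {
                // In the real application this would trigger a Spark job,
                // e.g. an action such as count() or saveAsHadoopDataset(),
                // all sharing one SparkContext across threads.
                System.out.println("processed " + msg);
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

With many jobs submitted concurrently this way, the Spark UI's active-job and active-task counters are updated from listener events, which is the accounting the report describes going wrong.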