Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 1.0.0
- Labels: None
Description
In cases where jobs frequently cause dropped blocks, the storage status listener can become a bottleneck. It is slow for a few reasons: we use Scala collection operations, and we perform operations that are O(number of blocks) on every event. I think using a few indices here could make this much faster.
at java.lang.Integer.valueOf(Integer.java:642)
at scala.runtime.BoxesRunTime.boxToInteger(BoxesRunTime.java:70)
at org.apache.spark.storage.StorageUtils$$anonfun$9.apply(StorageUtils.scala:82)
at scala.collection.TraversableLike$$anonfun$groupBy$1.apply(TraversableLike.scala:328)
at scala.collection.TraversableLike$$anonfun$groupBy$1.apply(TraversableLike.scala:327)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at scala.collection.TraversableLike$class.groupBy(TraversableLike.scala:327)
at scala.collection.AbstractTraversable.groupBy(Traversable.scala:105)
at org.apache.spark.storage.StorageUtils$.rddInfoFromStorageStatus(StorageUtils.scala:82)
at org.apache.spark.ui.storage.StorageListener.updateRDDInfo(StorageTab.scala:56)
at org.apache.spark.ui.storage.StorageListener.onTaskEnd(StorageTab.scala:67)
- locked <0x00000000a27ebe30> (a org.apache.spark.ui.storage.StorageListener)
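The stack trace shows the cost: `rddInfoFromStorageStatus` regroups every block by RDD id (via `groupBy`) on every `onTaskEnd` event. The indexing idea suggested above can be sketched as a small standalone example; all names here are hypothetical and this is not the actual Spark fix, just an illustration of replacing a per-event O(number of blocks) regrouping with an incrementally maintained index:

```java
import java.util.*;

// Sketch of the indexing idea (hypothetical names, not the Spark code):
// instead of re-grouping all blocks by RDD id on each event, keep a
// per-RDD index that is updated in O(1) when a block is added or dropped.
public class BlockIndexSketch {
    // blockId -> rddId, e.g. "rdd_3_0" belongs to RDD 3
    private final Map<String, Integer> blockToRdd = new HashMap<>();
    // rddId -> ids of its currently stored blocks (the index)
    private final Map<Integer, Set<String>> rddToBlocks = new HashMap<>();

    public void addBlock(int rddId, String blockId) {
        blockToRdd.put(blockId, rddId);
        rddToBlocks.computeIfAbsent(rddId, k -> new HashSet<>()).add(blockId);
    }

    public void dropBlock(String blockId) {
        Integer rddId = blockToRdd.remove(blockId);
        if (rddId != null) {
            Set<String> blocks = rddToBlocks.get(rddId);
            blocks.remove(blockId);
            if (blocks.isEmpty()) rddToBlocks.remove(rddId);
        }
    }

    // O(1) per-RDD lookup, instead of scanning and grouping every block
    // on each listener event.
    public int blockCount(int rddId) {
        return rddToBlocks.getOrDefault(rddId, Collections.emptySet()).size();
    }
}
```

With this structure, a frequently firing listener only touches the blocks that actually changed, so its cost no longer grows with the total number of cached blocks.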
Issue Links
- is duplicated by:
  - SPARK-2228 onStageSubmitted does not properly called so NoSuchElement will be thrown in onStageCompleted (Resolved)
  - SPARK-3882 JobProgressListener gets permanently out of sync with long running job (Closed)
- relates to:
  - SPARK-2228 onStageSubmitted does not properly called so NoSuchElement will be thrown in onStageCompleted (Resolved)
  - SPARK-2675 LiveListenerBus should set higher capacity for its event queue (Resolved)