Details
Type: Improvement
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version/s: 3.2.0
Fix Version/s: None
Component/s: None
Description
Filtering in LiveEntityHelpers.newAccumulatorInfos will impact performance when there are a lot of AccumulableInfo instances, as the heap histogram and profile below show.
 num     #instances         #bytes  class name
----------------------------------------------
   1:     331897850    50708371592  [C
   2:        795358    24591381896  [J
   3:     330836234    10586759488  java.lang.String
   4:     139726297     8942483008  org.apache.spark.sql.execution.metric.SQLMetric
   5:     249040156     5976963744  scala.Some
   6:     146278719     5851148760  org.apache.spark.util.AccumulatorMetadata
   7:      38448113     5536528272  java.net.URI
   8:      37540162     3603855552  org.apache.hadoop.fs.FileStatus
   9:      69724130     3346758240  java.util.Hashtable$Entry
  10:      61521559     2953034832  java.util.concurrent.ConcurrentHashMap$Node
  11:      50421974     2823630544  scala.collection.mutable.LinkedEntry
  12:      43349222     2774350208  org.apache.spark.scheduler.AccumulableInfo
--- 15430388364 ns (2.03%), 1543 samples
  [ 0] scala.collection.TraversableLike.noneIn$1
  [ 1] scala.collection.TraversableLike.filterImpl
  [ 2] scala.collection.TraversableLike.filterImpl$
  [ 3] scala.collection.AbstractTraversable.filterImpl
  [ 4] scala.collection.TraversableLike.filter
  [ 5] scala.collection.TraversableLike.filter$
  [ 6] scala.collection.AbstractTraversable.filter
  [ 7] org.apache.spark.status.LiveEntityHelpers$.newAccumulatorInfos
  [ 8] org.apache.spark.status.LiveTask.doUpdate
  [ 9] org.apache.spark.status.LiveEntity.write
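The profile shows the time going into scala.collection filter machinery called from LiveEntityHelpers.newAccumulatorInfos on every LiveEntity.write. A minimal sketch of that cost pattern (this is an illustration, not the actual Spark implementation; the case class and object names below are simplified stand-ins):

```scala
// Sketch only: shows why a per-update filter over the accumulator list
// scales linearly with the number of AccumulableInfo instances.
case class AccumulableInfo(id: Long, name: Option[String], value: Option[Any])

object FilterCostSketch {
  // Hypothetical stand-in for LiveEntityHelpers.newAccumulatorInfos:
  // each call walks the whole list to drop unnamed accumulators.
  def newAccumulatorInfos(accums: Seq[AccumulableInfo]): Seq[AccumulableInfo] =
    accums.filter(_.name.isDefined) // O(n) per task update

  def main(args: Array[String]): Unit = {
    // 1000 accumulators, only even ids carry a name.
    val accums = (1L to 1000L).map { i =>
      AccumulableInfo(i, if (i % 2 == 0) Some(s"metric_$i") else None, Some(i))
    }
    // With millions of tasks, this filter runs once per status update,
    // which is what dominates the flame graph above.
    val kept = newAccumulatorInfos(accums)
    println(kept.size) // → 500
  }
}
```

Because the filter allocates a new collection and re-inspects every accumulator on each update, the cost compounds with both the accumulator count per task and the task update rate.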