
[SPARK-25586] toString method of GeneralizedLinearRegressionTrainingSummary runs in an infinite loop throwing StackOverflowError


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 3.0.0
    • Component/s: MLlib, Spark Core
    • Labels: None

    Description

      After the change in SPARK-25118, which enabled spark-shell to run with the default log level, test_glr_summary started failing with a StackOverflowError.

      Cause: ClosureCleaner calls logDebug on various objects. When it is called for GeneralizedLinearRegressionTrainingSummary, the summary's toString forces the lazily computed nullDeviance metric, which starts a Spark job; submitting that job invokes ClosureCleaner again, so the calls recurse until they fail with the exception below.

      ======================================================================
      ERROR: test_glr_summary (pyspark.ml.tests.TrainingSummaryTest)
      ----------------------------------------------------------------------
      Traceback (most recent call last):
        File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/ml/tests.py", line 1809, in test_glr_summary
          self.assertTrue(isinstance(s.aic, float))
        File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/ml/regression.py", line 1781, in aic
          return self._call_java("aic")
        File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/ml/wrapper.py", line 55, in _call_java
          return _java2py(sc, m(*java_args))
        File "/home/jenkins/workspace/SparkPullRequestBuilder/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
          answer, self.gateway_client, self.target_id, self.name)
        File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/sql/utils.py", line 63, in deco
          return f(*a, **kw)
        File "/home/jenkins/workspace/SparkPullRequestBuilder/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
          format(target_id, ".", name), value)
      Py4JJavaError: An error occurred while calling o31639.aic.
      : java.lang.StackOverflowError
      	at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
      	at java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:242)
      	at java.io.File.exists(File.java:819)
      	at sun.misc.URLClassPath$FileLoader.getResource(URLClassPath.java:1245)
      	at sun.misc.URLClassPath$FileLoader.findResource(URLClassPath.java:1212)
      	at sun.misc.URLClassPath.findResource(URLClassPath.java:188)
      	at java.net.URLClassLoader$2.run(URLClassLoader.java:569)
      	at java.net.URLClassLoader$2.run(URLClassLoader.java:567)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at java.net.URLClassLoader.findResource(URLClassLoader.java:566)
      	at java.lang.ClassLoader.getResource(ClassLoader.java:1093)
      	at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:232)
      	at java.lang.Class.getResourceAsStream(Class.java:2223)
      	at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:43)
      	at org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:87)
      	at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:269)
      	at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
      	at org.apache.spark.SparkContext.clean(SparkContext.scala:2342)
      	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1.apply(RDD.scala:864)
      	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1.apply(RDD.scala:863)
      	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
      	at org.apache.spark.rdd.RDD.withScope(RDD.scala:364)
      	at org.apache.spark.rdd.RDD.mapPartitionsWithIndex(RDD.scala:863)
      	at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:613)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
      	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
      	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
      	at org.apache.spark.sql.execution.DeserializeToObjectExec.doExecute(objects.scala:89)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
      	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
      	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
      	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
      	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
      	at org.apache.spark.sql.Dataset.rdd$lzycompute(Dataset.scala:3038)
      	at org.apache.spark.sql.Dataset.rdd(Dataset.scala:3036)
      	at org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary.nullDeviance$lzycompute(GeneralizedLinearRegression.scala:1342)
      	at org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary.nullDeviance(GeneralizedLinearRegression.scala:1315)
      	at org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary.toString(GeneralizedLinearRegression.scala:1556)
      	at java.lang.String.valueOf(String.java:2994)
      	at java.lang.StringBuilder.append(StringBuilder.java:131)
      	at scala.StringContext.standardInterpolator(StringContext.scala:125)
      	at scala.StringContext.s(StringContext.scala:95)
      	at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$12$$anonfun$apply$6.apply(ClosureCleaner.scala:289)
      	at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$12$$anonfun$apply$6.apply(ClosureCleaner.scala:289)
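
      The cycle visible in the trace is: ClosureCleaner.clean logs its progress through logDebug with string interpolation, which calls GeneralizedLinearRegressionTrainingSummary.toString; toString forces the lazy nullDeviance field, which runs a Spark job; and submitting that job invokes ClosureCleaner.clean again. Below is a minimal, Spark-free Scala sketch of the same pattern (all names in it are illustrative stand-ins, not Spark's API). Under Scala 2's lazy val semantics, the same-thread reentrant initialization re-runs the initializer, so the recursion matches the trace:

      object ToStringRecursionSketch {
        class Summary {
          // Lazily computed metric; evaluating it goes back through submitJob,
          // mirroring how nullDeviance launches a Spark job on first access.
          lazy val nullDeviance: Double = submitJob(this)

          // toString forces the lazy val; this is the edge that closes the cycle.
          override def toString: String = s"Summary(nullDeviance=$nullDeviance)"
        }

        // Stand-in for the job-submission path: like ClosureCleaner's logDebug,
        // it interpolates the owning object into a string, calling its toString.
        def submitJob(owner: AnyRef): Double = {
          val msg = s"cleaning closure of $owner" // toString -> lazy val -> submitJob -> ...
          msg.length.toDouble
        }

        def main(args: Array[String]): Unit =
          println(new Summary) // throws java.lang.StackOverflowError
      }

      Any edge of the cycle can be cut. One possible mitigation, sketched here rather than taken from the merged fix, is to keep user-defined toString off the job-submission path by logging the class name instead of the instance:

        val msg = s"cleaning closure of ${owner.getClass.getName}"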
      


          People

            Assignee: ankur.gupta (Ankur Gupta)
            Reporter: ankur.gupta (Ankur Gupta)