  Spark / SPARK-47957

pyspark.pandas.read_excel can't get _metadata because it causes a "max iterations reached for batch Resolution" error


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.5.1
    • Fix Version/s: None
    • Component/s: Pandas API on Spark
    • Labels: None

    Description

      I'm trying to add `_metadata.file_path` to a Spark DataFrame that was read from Excel files using `ps.read_excel`, but doing so raises: "Max iterations (100) reached for batch Resolution, please set 'spark.sql.analyzer.maxIterations' to a larger value". Increasing `spark.sql.analyzer.maxIterations` does not resolve the error; larger values only make the analysis run longer before it fails.
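      For reference, the setting named in the error message can be raised as shown below. This is a config fragment only, assuming an existing `spark` session; the value 1000 is an arbitrary example, and as noted above, raising it does not actually resolve the failure here:
      ```python
      # Raise the analyzer iteration limit suggested by the error message.
      # The value 1000 is arbitrary; in this report, larger values only made
      # the failure take longer, so this is not a fix.
      spark.conf.set("spark.sql.analyzer.maxIterations", "1000")
      ```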
       
      The Excel files are fairly simple (1 sheet, 8 columns, 2,000 rows) and there are only a few of them (5).

       
      Sample code to reproduce:
      ```python
      import pyspark.pandas as ps
      from pyspark.sql import DataFrame
      from pyspark.sql.functions import col
       
      adls_full_path: str = "abfss://container@azurestorageaccountname.dfs.core.windows.net/path/2024-04-01/filenamewithwildcards*.xlsx"
       
      input_df: DataFrame = (
          ps
          .read_excel(adls_full_path)
          .to_spark()
          .withColumn("metadata_file_path", col("_metadata.file_path"))
      )
      ```
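      As context for what `_metadata.file_path` would provide, the per-row source path can also be attached manually by reading each file individually and tagging its rows with the path before combining them. The sketch below illustrates that pattern in plain Python with CSV stand-ins (no Spark or Excel dependency, so it runs anywhere); `read_with_file_path` is a hypothetical helper for illustration, not part of any API mentioned above:
      ```python
      import csv
      import os
      import tempfile

      # Illustration only: _metadata.file_path attaches the source path to
      # every row. When it is unavailable (as in this report), the same effect
      # can be had by reading each file separately and tagging its rows with
      # the path. CSV stand-ins keep this sketch free of Spark/Excel deps.

      def read_with_file_path(paths):
          """Read each file and append its own path to every row."""
          rows = []
          for path in paths:
              with open(path, newline="") as f:
                  for record in csv.reader(f):
                      rows.append(record + [path])  # mimic _metadata.file_path
          return rows

      # Create two small sample files, then read them back with their paths.
      with tempfile.TemporaryDirectory() as d:
          paths = []
          for name, value in [("a.csv", "1"), ("b.csv", "2")]:
              p = os.path.join(d, name)
              with open(p, "w", newline="") as f:
                  csv.writer(f).writerow([value])
              paths.append(p)
          tagged = read_with_file_path(paths)
          # tagged is [["1", ".../a.csv"], ["2", ".../b.csv"]]
      ```
      In the actual Spark job this would correspond to calling `ps.read_excel` once per file and adding the path as a literal column before unioning the results, though whether that sidesteps the analyzer loop has not been verified here.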

       

      This code raises the following error on `.withColumn("metadata_file_path", col("_metadata.file_path"))`:

      ```
      Py4JJavaError: An error occurred while calling o1835.withColumn.
      : java.lang.RuntimeException: Max iterations (100) reached for batch Resolution, please set 'spark.sql.analyzer.maxIterations' to a larger value.
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:352)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeBatch$1(RuleExecutor.scala:289)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$9(RuleExecutor.scala:382)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$9$adapted(RuleExecutor.scala:382)
        at scala.collection.immutable.List.foreach(List.scala:431)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:382)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:256)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeSameContext(Analyzer.scala:415)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:408)
        at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:322)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:408)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:341)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:248)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:248)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:393)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:407)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:392)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:244)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:394)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$4(QueryExecution.scala:573)
        at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1079)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:573)
        at com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:569)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175)
        at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:569)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:238)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:237)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:219)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:102)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175)
        at org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1182)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1182)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:100)
        at org.apache.spark.sql.Dataset.$anonfun$org$apache$spark$sql$Dataset$$withPlan$1(Dataset.scala:4739)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:4739)
        at org.apache.spark.sql.Dataset.select(Dataset.scala:1647)
        at org.apache.spark.sql.Dataset.$anonfun$withColumns$1(Dataset.scala:2922)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.Dataset.withColumns(Dataset.scala:2896)
        at org.apache.spark.sql.Dataset.withColumn(Dataset.scala:2860)
        at sun.reflect.GeneratedMethodAccessor635.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397)
        at py4j.Gateway.invoke(Gateway.java:306)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:199)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:119)
        at java.lang.Thread.run(Thread.java:750)
      ```


          People

            Assignee: Unassigned
            Reporter: Christos Karras (ckarras-ext-exo)
            Votes: 0
            Watchers: 1
