SPARK-28358: Fix Flaky Tests - test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests)

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: PySpark, Tests
    • Labels: None

      Description

      Some Python tests fail with `error: release unlocked lock`.

      • https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/6132/console
        File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/group.py", line 56, in pyspark.sql.group.GroupedData.mean
        Failed example:
            df.groupBy().mean('age').collect()
        Exception raised:
            Traceback (most recent call last):
              File "/usr/lib64/pypy-2.5.1/lib-python/2.7/doctest.py", line 1315, in __run
                compileflags, 1) in test.globs
              File "<doctest pyspark.sql.group.GroupedData.mean[0]>", line 1, in <module>
                df.groupBy().mean('age').collect()
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/group.py", line 42, in _api
                jdf = getattr(self._jgd, name)(_to_seq(self.sql_ctx._sc, cols))
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/column.py", line 66, in _to_seq
                return sc._jvm.PythonUtils.toSeq(cols)
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1277, in __call__
                args_command, temp_args = self._build_args(*args)
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1241, in _build_args
                (new_args, temp_args) = self._get_args(args)
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1228, in _get_args
                temp_arg = converter.convert(arg, self.gateway_client)
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_collections.py", line 499, in convert
                java_list = ArrayList()
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1554, in __call__
                answer, self._gateway_client, None, self._fqn)
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/utils.py", line 89, in deco
                return f(*a, **kw)
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/protocol.py", line 342, in get_return_value
                return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_collections.py", line 525, in <lambda>
                JavaList(target_id, gateway_client))
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_collections.py", line 252, in __init__
                JavaObject.__init__(self, target_id, gateway_client)
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1324, in __init__
                ThreadSafeFinalizer.add_finalizer(key, value)
              File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/finalizer.py", line 43, in add_finalizer
                cls.finalizers[id] = weak_ref
              File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 216, in __exit__
                self.release()
              File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 208, in release
                self.__block.release()
            error: release unlocked lock
        **********************************************************************
           1 of   2 in pyspark.sql.group.GroupedData.mean
        ***Test Failed*** 1 failures.
        
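The traceback bottoms out inside py4j's `ThreadSafeFinalizer.add_finalizer`, where exiting a `with lock:` block calls `release()` on a lock that is apparently no longer held, which points at a concurrency problem in the finalizer under PyPy 2.5.1 rather than in the test itself. As a minimal sketch of the error condition alone (not the py4j race), releasing an unacquired `threading.Lock` reproduces the same message; note that on Python 3 the exception type is `RuntimeError`, while the Python 2.7 stdlib used by this Jenkins job raises `thread.error`:

```python
import threading

# Releasing a lock that is not held is the failure mode shown at the
# bottom of the traceback. On Python 3 this raises
# RuntimeError("release unlocked lock"); on Python 2 / PyPy 2.5.1 the
# equivalent exception is thread.error with the same message.
lock = threading.Lock()
try:
    lock.release()  # lock was never acquired
except RuntimeError as e:
    print("caught:", e)
```

In the failing run this release happens implicitly, in the `__exit__` of the `with` statement inside `finalizer.py`, so some other path must have already released the lock between the acquire and the exit.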

        People

        • Assignee: Unassigned
        • Reporter: Dongjoon Hyun (dongjoon)
        • Votes: 0
        • Watchers: 3