SPARK-26157: Asynchronous execution of stored procedure

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 2.3.0
    • Fix Version/s: None
    • Component/s: Spark Submit
    • Labels: None

      Description

      I am executing a jar file with spark-submit.

      This jar is a Scala program that combines Spark-related and non-Spark-related operations.

      The issue appears when I execute a stored procedure from Scala via JDBC. This stored procedure lives in a Microsoft SQL Server database and, basically, performs some operations and populates a table with about 500 rows, one by one.

      The next step in the program reads that table and performs some additional calculations. This step always reads fewer rows than the stored procedure created, because it is not properly synchronized with the previous step: it starts executing without waiting for the previous step to finish.
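
      For context, a minimal sketch of the pattern described above (the connection details, procedure name and table name are placeholders I am assuming, not the real ones):

      import java.sql.DriverManager
      import org.apache.spark.sql.SparkSession

      object StoredProcThenRead {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().appName("sp-then-read").getOrCreate()

          // Placeholder connection details.
          val url = "jdbc:sqlserver://dbhost:1433;databaseName=mydb"
          val conn = DriverManager.getConnection(url, "user", "password")

          // Step 1: call the stored procedure that populates the table (~500 rows, one by one).
          val call = conn.prepareCall("{call dbo.populate_table()}")
          call.execute()
          call.close()
          conn.close()

          // Step 2: read the table back with Spark and run the additional calculations.
          // This is the read that sometimes sees fewer rows than the procedure inserted.
          val df = spark.read
            .format("jdbc")
            .option("url", url)
            .option("dbtable", "dbo.populated_table")
            .option("user", "user")
            .option("password", "password")
            .load()

          println(df.count())
        }
      }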

      I have tried:

      • Inserting a Thread.sleep(10000) between the two instructions, and it seems to work.
      • Running the program with just one executor => it does not work.

      I would like to know why this is happening and how I can solve it without the sleep, because that is not an admissible solution.
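
      One possible explanation (an assumption on my side, not confirmed): with the Microsoft SQL Server JDBC driver, CallableStatement.execute() can return while the procedure still has pending update counts or result sets, so not all inserts may be visible yet when the next step starts. Draining every result with getMoreResults()/getUpdateCount() makes the call block until the procedure has completely finished, which would remove the need for the sleep. A sketch of that idea:

      import java.sql.CallableStatement

      // Run the callable statement and consume every result set and update count,
      // so this method only returns once the stored procedure has fully completed.
      def executeAndDrain(stmt: CallableStatement): Unit = {
        var hasResultSet = stmt.execute()
        var done = false
        while (!done) {
          if (hasResultSet) {
            val rs = stmt.getResultSet
            while (rs.next()) {} // discard rows we do not need
            rs.close()
          } else if (stmt.getUpdateCount == -1) {
            done = true // no more results of any kind
          }
          if (!done) hasResultSet = stmt.getMoreResults()
        }
      }

      Adding SET NOCOUNT ON inside the procedure is another commonly suggested change, since it suppresses the per-insert update counts that the driver would otherwise report one by one.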

      Thank you very much!!

            People

             • Assignee: Unassigned
             • Reporter: jaimedrq Jaime de Roque Martínez
             • Votes: 0
             • Watchers: 2
