SPARK-48571: Reduce the number of accesses to S3 object storage


Details

    • Type: Task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.5.0
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels: None

    Description

      If we access a Spark table backed by Parquet files on an object storage file system, the object store receives many requests that appear to be unnecessary. I will explain this with an example:

      I have created a simple table with 3 files:

      business/t_filter/country=ES/data_date_part=2023-09-27/part-00000-0f52aae9-2db8-415e-93f3-8331539c0ead.c000
      business/t_filter/country=ES/data_date_part=2023-06-01/part-00000-0f52aae9-2db8-415e-93f3-8331539c0ead.c000    
      business/t_filter/country=ES/data_date_part=2023-09-27/part-00000-f10096c1-53bc-4e2f-bc56-eba65acfa44a.c000    

      and I have registered a table over business/t_filter, partitioned by country and data_date_part. With that, the following requests are produced.
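      A rough reproduction of this setup, as a sketch (the bucket name, session and id column below are placeholders of mine, not taken from the real table):

{code:scala}
import org.apache.spark.sql.SparkSession

// Placeholder session and bucket; only the layout (partitioned by country and
// data_date_part under business/t_filter) matches the paths listed above.
val spark = SparkSession.builder().appName("s3-request-example").getOrCreate()

spark.sql(
  """CREATE TABLE t_filter (id BIGINT, country STRING, data_date_part STRING)
    |USING parquet
    |PARTITIONED BY (country, data_date_part)
    |LOCATION 's3a://my-bucket/business/t_filter'""".stripMargin)

// A simple partition-filtered read is enough to trigger the listing, HEAD and
// footer requests described below.
spark.table("t_filter")
  .where("country = 'ES' AND data_date_part = '2023-09-27'")
  .show()
{code}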

      If you use versions prior to Spark 3.5 or Hadoop 3.4 (in my case exactly Spark 3.2 and Hadoop 3.1), you get the following requests -> see attached image "Spark 3.2 Hadoop-aws 3.1.PNG".

      In this image we can see all the requests, among which the following issues can be found:

      • With the S3 implementation, two HEAD and two LIST requests are made for the folders where the files are located, which could be resolved with a single LIST. This has already been fixed in HADOOP-18073 -> result: attached image "Spark 3.2 Hadoop-aws 3.4.PNG".
      • For each file, the Parquet footer is read twice. This is fixed in SPARK-42388 -> result: attached image "Spark 3.5 Hadoop-aws 3.1.PNG".
      • A HEAD Object request is issued twice each time a file is read; this could be reduced by extending the FileSystem interface so that it can receive the FileStatus that was already computed earlier (see the first sketch after this list).
      • The requests needed to read the Parquet footer could also be reduced: first the footer size has to be read and then the footer itself (the schema), which implies two HTTP/HTTPS requests to S3. It would be nice to have a minimum threshold, for example 100 KB, below which the whole file is fetched in a single request instead of two, since fetching 100 KB in one request will take less time than fetching 8 B in one request and then another x KB in a second request. Even so, I don't know if this task makes sense (see the second sketch after this list).
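      As a sketch of the double HEAD point above: recent Hadoop releases already expose an openFile() builder that can take an existing FileStatus, so the reader does not have to issue another HEAD for metadata it already holds. The helper name is mine, not an existing Spark API:

{code:scala}
import org.apache.hadoop.fs.{FileStatus, FileSystem}

// Hypothetical helper: reuse the FileStatus obtained during listing so that
// S3A can skip its own HEAD request when the file is opened (Hadoop 3.3+).
def openWithKnownStatus(fs: FileSystem, status: FileStatus) =
  fs.openFile(status.getPath)
    .withFileStatus(status)  // hand over the already-known length/metadata
    .build()                 // CompletableFuture[FSDataInputStream]
    .get()
{code}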

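      And a sketch of the small-file threshold idea from the last point; the 100 KB cut-off and the helper are hypothetical, only the trailing 8 bytes (4-byte footer length + "PAR1" magic) are standard Parquet layout:

{code:scala}
import org.apache.hadoop.fs.{FileSystem, Path}
import java.nio.{ByteBuffer, ByteOrder}

val smallFileThreshold = 100 * 1024  // hypothetical 100 KB cut-off

def readFooterBytes(fs: FileSystem, path: Path): Array[Byte] = {
  val len = fs.getFileStatus(path).getLen
  val in = fs.open(path)
  try {
    if (len <= smallFileThreshold) {
      // One GET: fetch the whole file and slice the footer out locally.
      val whole = new Array[Byte](len.toInt)
      in.readFully(0, whole)
      whole
    } else {
      // Current behaviour: one GET for the trailing 8 bytes, then a second
      // GET for the footer whose length those bytes announce.
      val tail = new Array[Byte](8)
      in.readFully(len - 8, tail)
      val footerLen = ByteBuffer.wrap(tail, 0, 4).order(ByteOrder.LITTLE_ENDIAN).getInt
      val footer = new Array[Byte](footerLen)
      in.readFully(len - 8 - footerLen, footer)
      footer
    }
  } finally in.close()
}
{code}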
       

      With all these improvements, updating to the latest versions of Spark and Hadoop would reduce the proposed example from more than 30 requests to 11.
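      One hedged way to verify such counts (at least for requests issued from the driver, e.g. in local mode) is to dump the S3A filesystem's storage statistics after the read; the exact counter names vary by Hadoop version, and the bucket/session are the placeholders from the sketch above:

{code:scala}
import java.net.URI
import org.apache.hadoop.fs.FileSystem

val fs = FileSystem.get(new URI("s3a://my-bucket/"), spark.sparkContext.hadoopConfiguration)

spark.table("t_filter")
  .where("country = 'ES' AND data_date_part = '2023-09-27'")
  .count()

// HEAD (object metadata) and LIST request counters issued by S3A show up here.
fs.getStorageStatistics.getLongStatistics.forEachRemaining { stat =>
  println(s"${stat.getName} = ${stat.getValue}")
}
{code}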


      Attachments

        1. Spark 3.5 Hadoop-aws 3.1.PNG (148 kB, Oliver Caballero Alvarez)
        2. Spark 3.2 Hadoop-aws 3.4.PNG (178 kB, Oliver Caballero Alvarez)
        3. Spark 3.2 Hadoop-aws 3.1.PNG (181 kB, Oliver Caballero Alvarez)

      People

        Assignee: Unassigned
        Reporter: Oliver Caballero Alvarez
        Votes: 0
        Watchers: 2
