Beam / BEAM-6886

Change batch handling in ElasticsearchIO to avoid necessity for GroupIntoBatches


    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.11.0
    • Fix Version/s: None
    • Component/s: io-java-elasticsearch
    • Labels:
      None

      Description

      I have a streaming job inserting records into an Elasticsearch cluster. I set the batch size appropriately large, but it turns out to have no effect at all: all elements are inserted in batches of one or two.

      The reason seems to be that this is a streaming pipeline, where bundles can be tiny. Since ElasticsearchIO uses `@FinishBundle` to flush a batch, the batches end up equally small.

      This results in a huge number of bulk requests containing just one element each, grinding the Elasticsearch cluster to a halt.

      I have now been able to work around this by applying a `GroupIntoBatches` transform before the write, but that requires three steps (mapping each element to a key, applying GroupIntoBatches, then stripping the key and emitting the collected elements), which makes the pipeline quite awkward.
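For illustration, the three-step workaround has roughly this shape. This is a plain-Java sketch using collections in place of Beam PCollections; the single dummy key, the method name, and the batch size are all illustrative, not ElasticsearchIO API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupIntoBatchesWorkaround {
    // Simulates the three workaround steps: assign a key, group each key's
    // elements into fixed-size batches, then strip the key and emit batches.
    static List<List<String>> batchLikeGroupIntoBatches(List<String> records, int batchSize) {
        // Step 1: map every record to a dummy key (here: shard 0).
        Map<Integer, List<String>> keyed = new HashMap<>();
        for (String r : records) {
            keyed.computeIfAbsent(0, k -> new ArrayList<>()).add(r);
        }
        // Step 2: split each key's elements into batches of at most batchSize.
        List<List<String>> batches = new ArrayList<>();
        for (List<String> values : keyed.values()) {
            for (int i = 0; i < values.size(); i += batchSize) {
                // Step 3: the key is dropped; only the grouped elements remain.
                batches.add(new ArrayList<>(values.subList(i, Math.min(i + batchSize, values.size()))));
            }
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> docs = List.of("a", "b", "c", "d", "e");
        System.out.println(batchLikeGroupIntoBatches(docs, 2)); // prints [[a, b], [c, d], [e]]
    }
}
```

Three steps of ceremony just to get reasonably sized bulk requests, which is why internalizing the batching would be preferable.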

      A much better approach would be to internalize this batching in the ElasticsearchIO write transform: use a timer that flushes the batch when it reaches the configured batch size or at the end of the window, rather than at the end of each bundle.
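The proposed behavior could look something like the sketch below. This is plain Java with no Beam types; the class and method names are hypothetical, and `onTimer` stands in for a Beam timer callback firing at the batch timeout or end of window:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchingBuffer<T> {
    private final int maxBatchSize;
    private final Consumer<List<T>> bulkSink; // stands in for the Elasticsearch bulk request
    private final List<T> buffer = new ArrayList<>();

    public BatchingBuffer(int maxBatchSize, Consumer<List<T>> bulkSink) {
        this.maxBatchSize = maxBatchSize;
        this.bulkSink = bulkSink;
    }

    // Called per element; flushes only when the batch is full,
    // not at bundle boundaries.
    public void add(T element) {
        buffer.add(element);
        if (buffer.size() >= maxBatchSize) {
            flush();
        }
    }

    // Called when the timer fires (batch timeout or end of window),
    // so stragglers are not buffered forever.
    public void onTimer() {
        flush();
    }

    private void flush() {
        if (!buffer.isEmpty()) {
            bulkSink.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

With a max batch size of 3, adding five elements produces one full flush of three, and the timer then flushes the remaining two, so small bundles no longer translate into tiny bulk requests.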


            People

            • Assignee: Unassigned
            • Reporter: MadEgg Egbert
            • Votes: 0
            • Watchers: 4
