Ratis > RATIS-979 Ratis streaming > RATIS-1176

Benchmark various ways to stream data


Details

    • Type: Sub-task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Components: client, Streaming

    Description

In RATIS-1175, we provided a WritableByteChannel view of DataStreamOutput in order to support FileChannel.transferTo. However, runzhiwang pointed out that sun.nio.ch.FileChannelImpl.transferTo dispatches to three internal methods:

      • transferToDirectly (fastest)
      • transferToTrustedChannel
      • transferToArbitraryChannel (slowest, requires buffer copying)

Unfortunately, our current implementation is only able to use transferToArbitraryChannel.
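      As a rough illustration of why this matters: on current OpenJDK, FileChannelImpl.transferTo can only take a fast path when the target is itself a FileChannel or a socket channel; any other WritableByteChannel (such as our channel view of DataStreamOutput) falls through to transferToArbitraryChannel, which reads through an intermediate buffer. A minimal sketch, where CountingChannel is a hypothetical stand-in for the DataStreamOutput view:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TransferToDemo {
  /** Hypothetical stand-in for the DataStreamOutput channel view. Because it
   *  is neither a FileChannel nor a socket channel, FileChannelImpl.transferTo
   *  falls back to transferToArbitraryChannel, copying through a buffer. */
  static final class CountingChannel implements WritableByteChannel {
    long written;
    @Override public int write(ByteBuffer src) {
      final int n = src.remaining();
      src.position(src.limit());  // pretend the bytes were sent
      written += n;
      return n;
    }
    @Override public boolean isOpen() { return true; }
    @Override public void close() {}
  }

  static long transfer(Path file) throws IOException {
    final CountingChannel sink = new CountingChannel();
    try (FileChannel in = FileChannel.open(file, StandardOpenOption.READ)) {
      long pos = 0;
      final long size = in.size();
      // transferTo may move fewer bytes than requested, so loop to completion.
      while (pos < size) {
        pos += in.transferTo(pos, size - pos, sink);
      }
    }
    return sink.written;
  }

  public static void main(String[] args) throws IOException {
    final Path tmp = Files.createTempFile("transferTo", ".bin");
    Files.write(tmp, new byte[8192]);
    System.out.println(transfer(tmp));  // 8192
    Files.delete(tmp);
  }
}
```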

      Below are several ideas for improving the performance. We should benchmark them.

      1. Improve the current implementation of WritableByteChannel so that it may be able to use a faster transferTo method.
      2. Use FileChannel.map(..) and pass the resulting MappedByteBuffer to our DataStreamOutput.writeAsync method.
      3. Add a new API:
        // DataStreamOutput
        CompletableFuture<DataStreamReply> writeAsync(File file);
        

        Internally, use Netty DefaultFileRegion for zero-copy file transfer:
        https://github.com/netty/netty/blob/4.1/example/src/main/java/io/netty/example/file/FileServerHandler.java#L53
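      A sketch of how the proposed writeAsync(File) could be layered on the existing writeAsync(ByteBuffer) via FileChannel.map (idea 2). DataStreamOutput and DataStreamReply below are simplified stand-ins for the real Ratis interfaces, not their actual definitions; a Netty-based implementation could instead wrap the file in a DefaultFileRegion as in idea 3:

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CompletableFuture;

public class WriteAsyncFileSketch {
  /** Simplified stand-ins for the Ratis interfaces. */
  interface DataStreamReply {}
  interface DataStreamOutput {
    CompletableFuture<DataStreamReply> writeAsync(ByteBuffer buffer);

    // Proposed new API: map the file and reuse the ByteBuffer path. Note that
    // a single map(..) call is limited to 2 GB; larger files would need to be
    // mapped in chunks. A Netty transport could bypass mapping entirely and
    // send a DefaultFileRegion for zero-copy transfer.
    default CompletableFuture<DataStreamReply> writeAsync(File file) {
      try (FileChannel ch = FileChannel.open(file.toPath(), StandardOpenOption.READ)) {
        return writeAsync(ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size()));
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    final Path tmp = Files.createTempFile("writeAsync", ".bin");
    Files.write(tmp, new byte[16]);
    // A toy DataStreamOutput that just reports how many bytes it was handed.
    final DataStreamOutput out = buffer -> {
      System.out.println(buffer.remaining());  // 16
      return CompletableFuture.completedFuture(new DataStreamReply() {});
    };
    out.writeAsync(tmp.toFile()).join();
    Files.delete(tmp);
  }
}
```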

      The data flow for client -> primary -> peer is as follows.
      1. If we stream a file and do not calculate checksums, we use transferTo. In the client, there are 1 DMA copy and 1 DMA gather copy, and no CPU copy. In the primary, there are 3 DMA copies and 3 CPU copies. In the peer, there are 2 DMA copies and 2 CPU copies.

      2. If we stream a file and calculate checksums, we use MappedByteBuffer. In the client, there are 2 DMA copies and 1 CPU copy. In the primary, there are 3 DMA copies and 3 CPU copies. In the peer, there are 2 DMA copies and 2 CPU copies.

      3. If we stream data that is not in a file and calculate checksums, we use DirectByteBuffer. In the client, there are 2 DMA copies and 2 CPU copies. In the primary, there are 3 DMA copies and 3 CPU copies. In the peer, there are 2 DMA copies and 2 CPU copies.

      4. We should avoid reading data into the heap (e.g. into a HeapByteBuffer): in the client, that costs 2 DMA copies and 4 CPU copies. In the primary, there are 3 DMA copies and 3 CPU copies. In the peer, there are 2 DMA copies and 2 CPU copies.

      5. For comparison, the following is the flow before Ratis streaming, which uses Protobuf to send data. In the client, there are 2 DMA copies and 4 CPU copies. In the leader, there are 3 DMA copies and 7 CPU copies. In the follower, there are 2 DMA copies and 5 CPU copies.
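      To make item 4 concrete, here is a sketch of reading a file into a direct buffer instead of a heap buffer. On OpenJDK, a read into a heap ByteBuffer goes through a temporary direct buffer and is then copied onto the heap, which is where the extra CPU copies come from; reading directly into a DirectByteBuffer avoids that hop:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectReadDemo {
  /** Read the whole file into a DirectByteBuffer. With ByteBuffer.allocate
   *  (a heap buffer) instead, the JDK would first read into an internal
   *  direct buffer and then copy to the heap, adding a CPU copy. */
  static ByteBuffer readDirect(Path file) throws IOException {
    try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
      final ByteBuffer buf = ByteBuffer.allocateDirect((int) ch.size());
      while (buf.hasRemaining() && ch.read(buf) >= 0) {
        // keep reading until the buffer is full or EOF
      }
      buf.flip();
      return buf;
    }
  }

  public static void main(String[] args) throws IOException {
    final Path tmp = Files.createTempFile("direct", ".bin");
    Files.write(tmp, new byte[1024]);
    final ByteBuffer buf = readDirect(tmp);
    System.out.println(buf.isDirect() + " " + buf.remaining());  // true 1024
    Files.delete(tmp);
  }
}
```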

      Attachments

        1. image-2020-11-25-07-40-50-383.png
          351 kB
          runzhiwang
        2. screenshot-5.png
          61 kB
          runzhiwang
        3. screenshot-6.png
          41 kB
          runzhiwang
        4. screenshot-7.png
          44 kB
          runzhiwang
        5. screenshot-8.png
          43 kB
          runzhiwang
        6. screenshot-9.png
          61 kB
          runzhiwang

            People

              Assignee: Unassigned
              Reporter: Tsz-wo Sze (szetszwo)
              Votes: 0
              Watchers: 3
