FLINK-34775

[Bug] Oracle CDC LogMiner can't catch up with the latest records when a huge SCN increment occurs.


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Component: Flink CDC

    Description

          1. Search before asking
      • [X] I searched in the [issues](https://github.com/ververica/flink-cdc-connectors/issues) and found nothing similar.
          2. Flink version

      1.16.0

          3. Flink CDC version

      2.3.0

          4. Database and its version
      • Oracle 11g
      • Oracle 12c
          5. Minimal reproduce step

      1. Create a simple CDC source table (connector = 'oracle-cdc'); a repro sketch follows these steps.
      2. There are no special requirements for the sink.
      3. When the Oracle database instance's SCN grows by a huge amount, `LogMinerStreamingChangeEventSource` cannot catch up with the latest records in time.
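
      For reference, a minimal repro sketch using the Flink Table API (Flink 1.16 / Flink CDC 2.3.0). The table name, connection parameters, schema/table names, and the print sink are hypothetical placeholders, not details taken from this report.
      ```java
      import org.apache.flink.table.api.EnvironmentSettings;
      import org.apache.flink.table.api.TableEnvironment;

      public class OracleCdcRepro {
          public static void main(String[] args) throws Exception {
              TableEnvironment tEnv =
                      TableEnvironment.create(EnvironmentSettings.inStreamingMode());

              // Hypothetical source table; host, credentials, and object names are placeholders.
              tEnv.executeSql(
                      "CREATE TABLE products_src ("
                              + "  ID INT,"
                              + "  NAME STRING,"
                              + "  PRIMARY KEY (ID) NOT ENFORCED"
                              + ") WITH ("
                              + "  'connector' = 'oracle-cdc',"
                              + "  'hostname' = 'oracle-host',"
                              + "  'port' = '1521',"
                              + "  'username' = 'flinkuser',"
                              + "  'password' = 'flinkpw',"
                              + "  'database-name' = 'ORCLCDB',"
                              + "  'schema-name' = 'DEBEZIUM',"
                              + "  'table-name' = 'PRODUCTS'"
                              + ")");

              // Any sink works; a print sink is enough to observe the capture lag.
              tEnv.executeSql(
                      "CREATE TABLE products_sink (ID INT, NAME STRING) WITH ('connector' = 'print')");
              tEnv.executeSql("INSERT INTO products_sink SELECT ID, NAME FROM products_src").await();
          }
      }
      ```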

      The problem is that `LogMinerQueryResultProcessor` does not feed a more reasonable `lastProcessedScn` back after processing the mining view's data. During cyclic processing, the first cycle obtains the correct last processed SCN, and `endScn` is reset to that value, to be used as `startScn` in the next cycle. Unfortunately, if the next cycle's mining query returns no rows for the source table, `startScn` cannot move forward again for a long time, even though the Oracle SCN has already grown by a huge amount, so the source table's new data cannot be captured in time. The toy simulation below illustrates the effect.
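
      To make the stuck offset concrete, here is a small, self-contained toy simulation of the cycle as described above (plain `long` values instead of `Scn`; purely illustrative, not Debezium code). Once a cycle returns no rows for the captured table, `lastProcessedScn` stops moving, so the next `startScn` never advances while the database SCN keeps growing.
      ```java
      /** Hypothetical toy model of the mining loop described above; not Debezium code. */
      public class ScnLagSimulation {
          public static void main(String[] args) {
              long databaseScn = 1_000L;      // current SCN on the Oracle instance
              long startScn = 1_000L;         // where the next mining query starts
              long lastProcessedScn = 0L;     // last SCN of a row we actually processed

              for (int cycle = 1; cycle <= 5; cycle++) {
                  databaseScn += 1_000_000L;  // the instance SCN jumps by a huge amount
                  long endScn = databaseScn;

                  // Only the first cycle happens to return rows for the source table.
                  boolean queryReturnedRows = (cycle == 1);
                  if (queryReturnedRows) {
                      lastProcessedScn = startScn + 10;
                  }

                  // Mirrors the snippet below: pull endScn back to lastProcessedScn.
                  if (lastProcessedScn != 0 && lastProcessedScn < endScn) {
                      endScn = lastProcessedScn;
                  }
                  startScn = endScn;          // used as the next cycle's start

                  System.out.printf("cycle %d: databaseScn=%d, next startScn=%d, lag=%d%n",
                          cycle, databaseScn, startScn, databaseScn - startScn);
              }
          }
      }
      ```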

      the main code in `LogMinerStreamingChangeEventSource`:
      ```java
      final Scn lastProcessedScn = processor.getLastProcessedScn();
      if (!lastProcessedScn.isNull()
              && lastProcessedScn.compareTo(endScn) < 0) {
          // If the last processed SCN is before the endScn we need to
          // use the last processed SCN as the next starting point as
          // the LGWR buffer didn't flush all entries from memory to disk yet.
          endScn = lastProcessedScn;
      }

      if (transactionalBuffer.isEmpty()) {
          LOGGER.debug("Buffer is empty, updating offset SCN to {}", endScn);
          offsetContext.setScn(endScn);
      }
      ```
      BTW: the implementation has changed since Debezium 1.6.x; `LogMinerStreamingChangeEventSource` has been optimized in debezium-connector-oracle.

          6. What did you expect to see?

      The `startScn` should be reset to a reasonable value when a mining cycle returns no records (see the sketch below).
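
      For illustration only, a sketch of one possible adjustment to the snippet above, assuming a hypothetical per-cycle counter `processor.getProcessedRowCount()` that reports whether the current mining query returned any rows (not an actual Debezium or Flink CDC API):
      ```java
      // Illustrative sketch, not a real patch. Assumes a hypothetical per-cycle
      // counter processor.getProcessedRowCount(); all other names follow the
      // snippet quoted above.
      final Scn lastProcessedScn = processor.getLastProcessedScn();
      final boolean cycleProcessedRows = processor.getProcessedRowCount() > 0;

      if (cycleProcessedRows
              && !lastProcessedScn.isNull()
              && lastProcessedScn.compareTo(endScn) < 0) {
          // Fall back to lastProcessedScn only when this cycle actually saw rows,
          // since the LGWR buffer may not have flushed everything to disk yet.
          endScn = lastProcessedScn;
      }
      // When the cycle saw no rows, endScn keeps the freshly queried database SCN,
      // so the next startScn can follow the instance even across a huge SCN jump.

      if (transactionalBuffer.isEmpty()) {
          LOGGER.debug("Buffer is empty, updating offset SCN to {}", endScn);
          offsetContext.setScn(endScn);
      }
      ```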

          7. What did you see instead?

      The first cycle's `lastProcessedScn` keeps being used as `startScn` for a long time.

          8. Anything else?

      No response

          9. Are you willing to submit a PR?
      • [X] I'm willing to submit a PR!

      ---------------- Imported from GitHub ----------------
      Url: https://github.com/apache/flink-cdc/issues/1940
      Created by: green1893
      Labels: bug,
      Created at: Fri Feb 24 14:32:22 CST 2023
      State: open
