FLINK-34819

Oracle 19c PDB mode: SplitFetcher thread 0 received unexpected exception while polling the records

Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Flink CDC

    Description

          1. Search before asking
      • [X] I searched in the [issues|https://github.com/ververica/flink-cdc-connectors/issues] and found nothing similar.
          2. Flink version

      1.14.2

          3. Flink CDC version

      current

          4. Database and its version

      oracle 19c
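
      For context, a rough sketch of how the captured table might be defined on the Oracle side, matching the schema 'flink_pdb' and table 'products' referenced in the reproduce step below. The column types and lengths are assumptions, not taken from the report; the supplemental-logging statement reflects a standard Debezium prerequisite for LogMiner-based capture.

      -- Hypothetical source table inside the ORCLPDB1 pluggable database (types assumed)
      CREATE TABLE flink_pdb.products (
        ID NUMBER(10) NOT NULL PRIMARY KEY,
        NAME VARCHAR2(255),
        DESCRIPTION VARCHAR2(512)
      );

      -- Debezium's Oracle connector requires supplemental logging on captured tables
      ALTER TABLE flink_pdb.products ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;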

          5. Minimal reproduce step

      CREATE TABLE products (
      db_name STRING METADATA FROM 'database_name' VIRTUAL,
      schema_name STRING METADATA FROM 'schema_name' VIRTUAL,
      table_name STRING METADATA FROM 'table_name' VIRTUAL,
      operation_ts TIMESTAMP_LTZ(3) METADATA FROM 'op_ts' VIRTUAL,
      ID INT NOT NULL,
      NAME STRING,
      DESCRIPTION STRING,
      PRIMARY KEY(ID) NOT ENFORCED
      ) WITH (
      'connector' = 'oracle-cdc',
      'hostname' = 'localhost',
      'port' = '1521',
      'username' = 'c##flinkuser',
      'password' = 'flinkpw',
      'database-name' = 'ORCLCDB',
      'schema-name' = 'flink_pdb',
      'table-name' = 'products',
      'debezium.database.pdb.name' = 'ORCLPDB1',
      'scan.incremental.snapshot.enabled' = 'true'
      -- 'debezium.log.mining.strategy' = 'online_catalog'
      -- 'debezium.log.mining.continuous.mine' = 'true'
      );
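
      For completeness, the sink side of the job can be reconstructed from the job graph in the failure log under item 7 below; the following is a minimal sketch. The 'print' connector is an assumption suggested by the print_table sink named in the log, the field list follows the log, and ID is declared NOT NULL to match the NotNullEnforcer step there:

      CREATE TABLE print_table (
        db_name STRING,
        schema_name STRING,
        table_name STRING,
        operation_ts TIMESTAMP_LTZ(3),
        ID INT NOT NULL,
        NAME STRING,
        DESCRIPTION STRING
      ) WITH (
        'connector' = 'print'
      );

      INSERT INTO print_table
      SELECT db_name, schema_name, table_name, operation_ts, ID, NAME, DESCRIPTION
      FROM products;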

          6. What did you expect to see?

      The job can read the Oracle change log with the 'scan.incremental.snapshot.enabled' = 'true' option.

          7. What did you see instead?

      java.lang.RuntimeException: One or more fetchers have encountered exception
      at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:225)
      at org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169)
      at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:130)
      at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:342)
      at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
      at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
      at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)
      at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
      at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)
      at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)
      at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
      at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)
      at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
      at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
      at java.lang.Thread.run(Thread.java:748)
      Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received unexpected exception while polling the records
      at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:150)
      at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      ... 1 more
      Caused by: io.debezium.DebeziumException: The db history topic or its content is fully or partially missing. Please check database history topic configuration and re-execute the snapshot.
      at io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:59)
      at com.ververica.cdc.connectors.oracle.source.reader.fetch.OracleSourceFetchTaskContext.validateAndLoadDatabaseHistory(OracleSourceFetchTaskContext.java:282)
      at com.ververica.cdc.connectors.oracle.source.reader.fetch.OracleSourceFetchTaskContext.configure(OracleSourceFetchTaskContext.java:116)
      at com.ververica.cdc.connectors.base.source.reader.external.IncrementalSourceStreamFetcher.submitTask(IncrementalSourceStreamFetcher.java:84)
      at com.ververica.cdc.connectors.base.source.reader.IncrementalSourceSplitReader.checkSplitOrStartNext(IncrementalSourceSplitReader.java:138)
      at com.ververica.cdc.connectors.base.source.reader.IncrementalSourceSplitReader.fetch(IncrementalSourceSplitReader.java:70)
      at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
      at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142)
      ... 6 more
      [flink-akka.actor.default-dispatcher-10] INFO org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy - Calculating tasks to restart to recover the failed task cbc357ccb763df2852fee8c4fc7d55f2_0.
      [SourceCoordinator-Source: TableSourceScan(table=[[default_catalog, default_database, products]], fields=[ID, NAME, DESCRIPTION, database_name, schema_name, op_ts, table_name]) -> Calc(select=[CAST(database_name) AS db_name, CAST(schema_name) AS schema_name, CAST(table_name) AS table_name, CAST(op_ts) AS operation_ts, ID, NAME, DESCRIPTION]) -> NotNullEnforcer(fields=[ID]) -> Sink: Sink(table=[default_catalog.default_database.print_table], fields=[db_name, schema_name, table_name, operation_ts, ID, NAME, DESCRIPTION])] INFO org.apache.flink.runtime.source.coordinator.SourceCoordinator - Removing registered reader after failure for subtask 0 of source Source: TableSourceScan(table=[[default_catalog, default_database, products]], fields=[ID, NAME, DESCRIPTION, database_name, schema_name, op_ts, table_name]) -> Calc(select=[CAST(database_name) AS db_name, CAST(schema_name) AS schema_name, CAST(table_name) AS table_name, CAST(op_ts) AS operation_ts, ID, NAME, DESCRIPTION]) -> NotNullEnforcer(fields=[ID]) -> Sink: Sink(table=[default_catalog.default_database.print_table], fields=[db_name, schema_name, table_name, operation_ts, ID, NAME, DESCRIPTION]).

          8. Anything else?

      No response

          9. Are you willing to submit a PR?
      • [ ] I'm willing to submit a PR!

      ---------------- Imported from GitHub ----------------
      Url: https://github.com/apache/flink-cdc/issues/2531
      Created by: yuangjiang
      Labels: bug
      Created at: Wed Sep 27 15:46:24 CST 2023
      State: open

          People

            Assignee: Unassigned
            Reporter: flink-cdc-import (Flink CDC Issue Import)
            Votes: 0
            Watchers: 1
