Apache NiFi / NIFI-11402

PutBigQuery processor case sensitive and Append Record Count issues


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.18.0, 1.19.0, 1.20.0, 1.19.1, 1.21.0
    • Fix Version/s: 2.0.0-M1, 1.22.0
    • Component/s: None
    • Labels: None

    Description

      The PutBigQuery processor seems to have some issues. I detected two issues that can be quite blocking.

      For the first one, if you set a high value in the Append Record Count property (in my case 500 000) and you have a big FlowFile (in terms of number of records and size, in my case 54 000 records for a size of 74 MB), you will get an error because the message to send is too big. That is quite normal.

      PutBigQuery[id=16da3694-c886-3b31-929e-0dc81be51bf7] Stream processing failed: java.lang.RuntimeException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: MessageSize is too large. Max allow: 10000000 Actual: 13593340
      - Caused by: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: MessageSize is too large. Max allow: 10000000 Actual: 13593340
      

      So you replace the value with a smaller one, but the error message remains the same. Even if you reduce your FlowFile to a single record, you will still get the error. The only way to fix this is to delete the processor, re-add it, and reduce the value of the property before running it. There seems to be an issue here.
      It would also be helpful for the processor documentation to mention the limit on the size of the message sent, because the limit in the previous PutBigQueryStreaming and PutBigQueryBatch processors was quite straightforward and linked to the size of the file sent. Now the limit applies to the gRPC message, which does not directly correspond to the size of the FlowFile or the number of records in it.
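      For what it's worth, below is a minimal sketch of the kind of batching I would expect (plain Java, purely illustrative; the class name, method name and the 10 000 000-byte constant are my own assumptions, not the actual processor code): each append is capped both by the configured record count and by the serialized size.

      import java.util.ArrayList;
      import java.util.List;

      // Illustrative sketch: group serialized rows into append batches so that no
      // single batch exceeds the Storage Write API message limit (~10 MB per append).
      public class AppendBatcher {

          private static final long MAX_APPEND_BYTES = 10_000_000L; // assumed limit

          public static List<List<byte[]>> batchRows(List<byte[]> serializedRows, int appendRecordCount) {
              List<List<byte[]>> batches = new ArrayList<>();
              List<byte[]> current = new ArrayList<>();
              long currentBytes = 0;

              for (byte[] row : serializedRows) {
                  boolean tooMany = current.size() >= appendRecordCount;
                  boolean tooBig = currentBytes + row.length > MAX_APPEND_BYTES;
                  // Flush the current batch before it would exceed either limit.
                  if (!current.isEmpty() && (tooMany || tooBig)) {
                      batches.add(current);
                      current = new ArrayList<>();
                      currentBytes = 0;
                  }
                  current.add(row);
                  currentBytes += row.length;
              }
              if (!current.isEmpty()) {
                  batches.add(current);
              }
              return batches;
          }
      }

      With something like this, a too-high Append Record Count would simply make batches hit the size cap earlier instead of failing the whole append.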

      The second issue occurs if you are using upper-case characters in your field names. For example, you have a table with the following schema:

      timestamp | TIMESTAMP | REQUIRED
      original_payload | STRING | NULLABLE
      error_message | STRING | REQUIRED
      error_type | STRING | REQUIRED
      error_subType | STRING | REQUIRED
      

      and try to put the following event in it:

      {
        "original_payload" : "XXXXXXXX",
        "error_message" : "XXXXXX",
        "error_type" : "XXXXXXXXXX",
        "error_subType" : "XXXXXXXXXXX",
        "timestamp" : "2023-04-07T10:31:45Z"
      }
      

      (in my case this event was in Avro)

      You will get the following error, telling you that the required field error_subtype is missing:

      Cannot convert record to message: com.google.protobuf.UninitializedMessageException: Message missing required fields: error_subtype
      

      So to fix it, you need to change your Avro Schema and put error_subtype instead of error_subType in it.
      BigQuery column names aren't case sensitive, so it should be fine to use upper-case characters in a field name, but it's not. In the previous implementations of PutBigQueryStreaming and PutBigQueryBatch, we were able to use upper case in the schema fields, so it should still be possible.
      If you get this error, the FlowFile will not be routed to the failure relationship; it just disappears.
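      To illustrate what I mean by case-insensitive matching, here is a minimal sketch (plain Java, purely illustrative; the class and method names are my own assumptions, not the actual NiFi code): record field names are normalized to lower case before being matched against the field names expected by the generated message, which appear in lower case in the error above.

      import java.util.HashMap;
      import java.util.Map;

      // Illustrative sketch: normalize incoming record field names to lower case so
      // that "error_subType" matches the lower-case field name ("error_subtype")
      // expected by the generated message, as seen in the error above.
      public class FieldNameNormalizer {

          public static Map<String, Object> normalize(Map<String, Object> recordFields) {
              Map<String, Object> normalized = new HashMap<>();
              for (Map.Entry<String, Object> entry : recordFields.entrySet()) {
                  normalized.put(entry.getKey().toLowerCase(), entry.getValue());
              }
              return normalized;
          }

          public static void main(String[] args) {
              Map<String, Object> record = new HashMap<>();
              record.put("error_subType", "XXXXXXXXXXX");
              record.put("timestamp", "2023-04-07T10:31:45Z");
              System.out.println(normalize(record)); // keys are now error_subtype and timestamp
          }
      }

      With a normalization like this in the processor, the schema mismatch would disappear without having to rewrite the Avro schema.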

      Link to the slack thread: https://apachenifi.slack.com/archives/C0L9VCD47/p1680866688318739

            People

              Assignee: pvillard Pierre Villard
              Reporter: juldrixx Julien G.


                Time Tracking

                  Original Estimate: Not Specified
                  Remaining Estimate: 0h
                  Time Spent: 50m