Apache Arrow / ARROW-17223

[C#] DecimalArray incorrectly appends values greater than MaxValue / 2 and less than MinValue / 2


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 4.0.0, 4.0.1, 5.0.0, 6.0.0, 6.0.1, 6.0.2, 7.0.0, 7.0.1, 8.0.0
    • Fix Version/s: 10.0.0
    • Component/s: C#

    Description

      When I append values to decimal arrays (Decimal128Array or Decimal256Array), values greater than Decimal.MaxValue / 2 or less than Decimal.MinValue / 2 are not stored correctly.

      For example, for Decimal128Array I created a simple unit test:

      [Fact]
      public void AppendMaxMinDecimal()
      {
          // Arrange
          var builder = new Decimal128Array.Builder(new Decimal128Type(29, 0));
          var max = Decimal.MaxValue;
          var min = Decimal.MinValue;

          // Act
          builder.Append(max);
          builder.Append(min);

          // Assert
          var array = builder.Build();
          Assert.Equal(max, array.GetValue(0));
          Assert.Equal(min, array.GetValue(1));
      }
      

      I expected this to work correctly, but instead I get:

      Assert.Equal() Failure
      Expected: 79228162514264337593543950335
      Actual:   -1

      The root cause appears to be in the GetBytes method of the DecimalUtility class:

      //...
      Span<byte> bigIntBytes = stackalloc byte[12];

      for (int i = 0; i < 3; i++)
      {
          int bit = decimalBits[i];
          Span<byte> intBytes = stackalloc byte[4];
          if (!BitConverter.TryWriteBytes(intBytes, bit))
              throw new OverflowException($"Could not extract bytes from int {bit}");

          for (int j = 0; j < 4; j++)
          {
              bigIntBytes[4 * i + j] = intBytes[j];
          }
      }
      bigInt = new BigInteger(bigIntBytes);

      //...
      

      According to MSDN: "The binary representation of a Decimal value is 128-bits consisting of a 96-bit integer number, and a 32-bit set of flags representing things such as the sign and scaling factor used to specify what portion of it is a decimal fraction".

      In a 12-byte BigInteger, only 95 bits can hold the magnitude, because the most significant bit is interpreted as the sign bit.
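      To illustrate (a standalone sketch, not the library code): Decimal.MaxValue is 2^96 - 1, so all 96 magnitude bits are set. Packing them into 12 bytes and passing them to BigInteger, which reads little-endian two's complement, gives -1, which is exactly the "Actual" value from the failing test:

      using System;
      using System.Numerics;

      class SignBitDemo
      {
          static void Main()
          {
              // The three low elements of decimal.GetBits hold the 96-bit magnitude;
              // for Decimal.MaxValue every magnitude bit is set, so they are all -1.
              int[] bits = decimal.GetBits(decimal.MaxValue);
              Console.WriteLine(string.Join(", ", bits));      // -1, -1, -1, 0

              // Copy the magnitude into 12 bytes, as the snippet above does.
              byte[] magnitude = new byte[12];
              for (int i = 0; i < 3; i++)
              {
                  Array.Copy(BitConverter.GetBytes(bits[i]), 0, magnitude, 4 * i, 4);
              }

              // BigInteger's byte[] constructor is little-endian two's complement,
              // so twelve 0xFF bytes are read as -1, not 2^96 - 1.
              Console.WriteLine(new BigInteger(magnitude));    // -1
          }
      }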

      So the following code:

      var newBigInt = new BigInteger(Decimal.MaxValue);
      var arr = newBigInt.ToByteArray();
      

      will produce an array 13 bytes long, not 12 (the extra, most significant byte is zero, which keeps the value positive).

      I tried to change

      Span<byte> bigIntBytes = stackalloc byte[12];

      to

      Span<byte> bigIntBytes = stackalloc byte[13];

      and this solved the issue.
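      A minimal standalone sketch of why the extra byte fixes the positive case (this is not the actual patch; the sign flag of negative decimals is presumably handled separately, so only the magnitude path is shown):

      using System;
      using System.Numerics;

      class ThirteenByteDemo
      {
          static void Main()
          {
              int[] bits = decimal.GetBits(decimal.MaxValue);   // -1, -1, -1, 0

              // A 13th, always-zero byte keeps BigInteger's sign bit clear,
              // so the full 96-bit magnitude is read as a positive number.
              byte[] magnitude = new byte[13];
              for (int i = 0; i < 3; i++)
              {
                  Array.Copy(BitConverter.GetBytes(bits[i]), 0, magnitude, 4 * i, 4);
              }

              Console.WriteLine(new BigInteger(magnitude));
              // 79228162514264337593543950335 == Decimal.MaxValue
          }
      }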

      PR: https://github.com/apache/arrow/pull/13732

People

    Assignee: Unassigned
    Reporter: Alexey Smirnov (asmirnov82)
    Votes: 0
    Watchers: 4

Dates

    Created:
    Updated:
    Resolved:

Time Tracking

    Original Estimate: Not Specified
    Remaining Estimate: 0h
    Time Spent: 2h 20m