At Hoss's suggestion on #solr IRC last night, I tested whether JsonLoader behavior has changed around BigInteger and BigDecimal values as a result of the changes committed under this issue.
I'm reopening to address an issue with adding JSON BIGNUMBER-s (returned by the Noggit parser when a number won't fit in either a long or a double) to trie integer or long fields: a NumberFormatException is no longer triggered, and the values are silently corrupted.
Before committing the patch on this issue, BigInteger-typed values were not created for BIGNUMBER-s in SolrInputDocument; instead, they (along with every other JSON value) were converted to String-s, and then adding such a value to an integer or long field would cause a NumberFormatException to be thrown from Integer.parseInt() or Long.parseLong(). This was proper and good.
But now, BigInteger-typed values are converted (in TrieField.createField()) to int/long using BigInteger's intValue() and longValue() methods, which return only the low-order 32 and 64 bits, respectively. These values are always corrupted: the truncated high-order bits are guaranteed to be non-zero, since BigInteger typing only happens when values won't fit into 64 bits.
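A minimal sketch of the corruption (the value below is illustrative, not taken from the issue):

```java
import java.math.BigInteger;

public class BigIntegerTruncation {
    public static void main(String[] args) {
        // A number too large for a long (> 64 bits), as Noggit would
        // report via a BIGNUMBER event
        BigInteger big = new BigInteger("12345678901234567890123456789");

        // longValue() keeps only the low-order 64 bits, intValue() only
        // the low-order 32 bits -- silent corruption, no exception
        long asLong = big.longValue();
        int asInt = big.intValue();
        System.out.println(asLong + " / " + asInt);

        // The pre-patch path: parsing the String form throws, which is
        // the desired behavior for an out-of-range value
        try {
            Long.parseLong(big.toString());
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException");
        }
    }
}
```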
Reverting to String-typed BIGNUMBER values fixes the problem.
By contrast, BigDecimal's doubleValue() and floatValue() methods truncate the low-order bits, resulting in loss of precision rather than corruption. This is the same behavior as Double.parseDouble() and Float.parseFloat(), so reverting to String-typing for decimal BIGNUMBER-s, in addition to integral BIGNUMBER-s, won't be a problem.
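A quick demonstration that the BigDecimal and String-parsing paths agree (the value is illustrative):

```java
import java.math.BigDecimal;

public class BigDecimalPrecision {
    public static void main(String[] args) {
        // More significant digits than a double can represent
        String s = "1.23456789012345678901234567890";

        // doubleValue() rounds away the low-order digits: precision
        // loss, not corruption -- the result stays close to the original
        double viaBigDecimal = new BigDecimal(s).doubleValue();

        // Double.parseDouble() on the String form rounds the same way,
        // so String-typing decimal BIGNUMBER-s changes nothing
        double viaParse = Double.parseDouble(s);

        System.out.println(viaBigDecimal == viaParse);
    }
}
```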