We've encountered a few odd edge cases that seem to break the new null-filling unionByName (which has been a great addition!). The problem appears to stem from the struct fields being sorted by name and getting corrupted along the way. A simple reproduction is:
This results in the exception:
You can see in the second schema that it has
when it should be
It seems to happen somewhere around https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveUnion.scala#L73, as everything looks correct up to that point in my testing. Either modifying one expression during the transformUp corrupts other expressions that are modified afterwards, or the ExtractValue created before the addFieldsInto captures the field's ordinal position in the struct, which then changes and causes the mismatch.
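To make the second suspicion concrete, here is a minimal toy sketch of the stale-ordinal failure mode. These are hypothetical stand-in classes, not Spark's actual ExtractValue/GetStructField: the point is only that an extractor resolved to an ordinal before a by-name sort reads the wrong field afterwards.

```scala
// Toy model of the suspected bug (hypothetical classes, not Spark's):
// an extract expression that captures a field's ordinal, combined with
// a later by-name sort of the struct's fields, reads the wrong value.
object StaleOrdinalSketch {
  // A struct value modeled as an ordered list of (fieldName, value).
  type Struct = Seq[(String, Int)]

  // Stand-in for an extract resolved by ordinal: it remembers the
  // position, not the name.
  final case class GetStructFieldByOrdinal(ordinal: Int) {
    def eval(s: Struct): Int = s(ordinal)._2
  }

  def main(args: Array[String]): Unit = {
    val struct: Struct = Seq("c" -> 3, "a" -> 1, "b" -> 2)

    // Resolve "b" to its current ordinal (2) before any reordering.
    val getB = GetStructFieldByOrdinal(struct.indexWhere(_._1 == "b"))
    assert(getB.eval(struct) == 2) // correct before sorting

    // Sorting the fields by name moves "b" to ordinal 1 ...
    val sorted = struct.sortBy(_._1)
    // ... but the cached ordinal still points at position 2, now "c".
    assert(getB.eval(sorted) == 3) // stale ordinal reads the wrong field
  }
}
```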
I found that simply using sortStructFields instead of sortStructFieldsInWithFields fixes the behavior, but it has a real performance impact: the deep expr unionByName test takes ~1-2 seconds normally but ~12-15 seconds with this change. I assume this is because the original method rewrites the existing expressions in place, whereas sortStructFields adds new expressions on top of the existing ones to project the new order.
I'm not sure whether it makes sense to take the slower method that works in these edge cases (assuming it doesn't break other cases; all existing tests pass), or whether there's a way to fix the existing method to handle cases like this.